Aug 22, 2014

When (and how) to move an InnoDB table outside the shared tablespace

In my last post, “A closer look at the MySQL ibdata1 disk space issue and big tables,” I looked at the growing ibdata1 problem from the perspective of having big tables residing inside the so-called shared tablespace. In the particular case that motivated that post, we had a customer running out of disk space on their server who was looking for a way to make the ibdata1 file shrink. As you may know, that file (or, as explained there, the set of ibdata files composing the shared tablespace) stores all InnoDB tables created while innodb_file_per_table is disabled, but also other InnoDB structures, such as undo logs and the data dictionary.

For example, when you run a transaction involving InnoDB tables, MySQL will first write all the changes it triggers to an undo log, in case you later decide to “roll them back”. Long-standing, uncommitted transactions are one of the causes of a growing ibdata file. And of course, if you have innodb_file_per_table disabled then all your InnoDB tables live inside it as well. That was what happened in this case.
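If you suspect long-running transactions are contributing to ibdata growth, a quick way to check is to look at the open transactions listed in INFORMATION_SCHEMA (available since MySQL 5.5); the query below is a minimal sketch, with the one-hour threshold being an arbitrary choice:

# transactions that have been open for more than an hour keep undo entries alive
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_rows_modified
FROM INFORMATION_SCHEMA.INNODB_TRX
WHERE trx_started < NOW() - INTERVAL 1 HOUR
ORDER BY trx_started;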

So, how do you move a table outside the shared tablespace and change the storage engine it relies on? Just as importantly, how does that affect disk space use? I’ve explored some of the options presented in the previous post and share my findings with you below.

The experiment

I created a very simple InnoDB table inside the shared tablespace of a freshly installed Percona Server 5.5.37-rel35.1 with support for TokuDB, configured with a 1GB buffer pool. I used a 50G partition to host the ‘datadir’ and another one for the ‘tmpdir’:

Filesystem                 Size Used Avail Use% Mounted on
/dev/mapper/vg0-lvFernando1 50G 110M  47G   1%  /media/lvFernando1  # datadir
/dev/mapper/vg0-lvFernando2 50G  52M  47G   1%  /media/lvFernando2  # tmpdir

Here’s the table structure:

CREATE TABLE `joinit` (
  `i` int(11) NOT NULL AUTO_INCREMENT,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int(11) NOT NULL,
  PRIMARY KEY (`i`)
) ENGINE=InnoDB  DEFAULT CHARSET=latin1;

I populated it with 134 million rows using the following routine:

INSERT INTO joinit VALUES (NULL, uuid(), time(now()),  (FLOOR( 1 + RAND( ) *60 )));
INSERT INTO joinit SELECT NULL, uuid(), time(now()),  (FLOOR( 1 + RAND( ) *60 )) FROM joinit;  # repeat until you reach your target number of rows

which resulted in the table below:

mysql> show table status from test like 'joinit'\G
*************************** 1. row ***************************
Name: joinit
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 134217909
Avg_row_length: 72
Data_length: 9783214080
Max_data_length: 0
Index_length: 0
Data_free: 1013972992
Auto_increment: 134872552
Create_time: 2014-07-30 20:42:42
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.00 sec)

The resulting ibdata1 file was 11G in size, which at that point accounted for practically all of the space used in the datadir partition. What follows is a series of experiments I did: converting that table to a different storage engine, moving it outside the shared tablespace, compressing it, and dumping and restoring the database to see the effects on disk space use. I haven’t timed how long each command took to run and focused mostly on the size of the generated files. As a bonus, I’ve also looked at how to extend the shared tablespace by adding an extra ibdata file.

#1) Converting to MyISAM

Technical characteristics and features apart, MyISAM tables are known to occupy less disk space than InnoDB ones. How much less depends on the actual table structure. Here I made the conversion in the simplest way:

mysql> ALTER TABLE test.joinit ENGINE=MYISAM;

which created the following files (the .frm file already existed):

$ ls -lh /media/lvFernando1/data/test/
total 8.3G
-rw-rw---- 1 fernando.laudares admin 8.5K Jul 31 16:21 joinit.frm
-rw-rw---- 1 fernando.laudares admin 7.0G Jul 31 16:27 joinit.MYD
-rw-rw---- 1 fernando.laudares admin 1.3G Jul 31 16:27 joinit.MYI

The resulting MyISAM files accounted for an additional 8.3G of disk space use:

/dev/mapper/vg0-lvFernando1 50G 19G 29G 40% /media/lvFernando1

I was expecting smaller files but, of course, the result depends largely on the data types of the columns composing the table. The problem (or the consequence) is that we end up with close to double the initial disk space being used.

As with the other solutions presented in this post that migrate the target table outside the shared tablespace, the common (and safest) way to reclaim the freed, unused space inside the ibdata1 file back to the operating system is to do a dump & restore of the full database.

There’s an alternative approach with MyISAM, though, which doesn’t involve a dump & restore and only requires a MySQL restart: you need to convert all InnoDB tables to MyISAM, stop MySQL, delete all ib* files (there should be no remaining .ibd files after you’ve converted all InnoDB tables to MyISAM), and then start MySQL again. Upon restart, ibdata1 will be re-created with its default initial size (more on this below). You can then convert the MyISAM tables back to InnoDB, and if you have innodb_file_per_table enabled this time the tables will be created with their own private tablespace files. The overall procedure is sketched below.
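For reference, the procedure looks roughly like this (a sketch; the socket, paths and table come from my test environment, so adapt them to yours, and make sure you have a good backup before deleting any ib* files):

# 1) convert every remaining InnoDB table to MyISAM
mysql> ALTER TABLE test.joinit ENGINE=MYISAM;

# 2) stop MySQL and remove the InnoDB system files (no .ibd files should remain)
$ mysqladmin -S /tmp/mysql_sandbox5537.sock --user=msandbox --password=msandbox shutdown
$ rm /media/lvFernando1/data/ibdata1 /media/lvFernando1/data/ib_logfile*

# 3) start MySQL again; ibdata1 is re-created with its default initial size

# 4) convert the tables back to InnoDB, now with innodb_file_per_table enabled
mysql> SET GLOBAL innodb_file_per_table=1;
mysql> ALTER TABLE test.joinit ENGINE=INNODB;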

#2) Exporting the table to a private tablespace

Once you have innodb_file_per_table enabled you can move a table residing inside ibdata1 to its private tablespace (its own .ibd file) by running either ALTER TABLE or OPTIMIZE TABLE. Both commands create a “temporary” table (an InnoDB one, not MyISAM) with its own tablespace file inside the database directory (and not in the tmpdir, as I believed would happen), and the rows from the target table are copied over to it.
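In practice, either of the following rebuilds joinit into its own .ibd file once innodb_file_per_table is enabled (for InnoDB, OPTIMIZE TABLE maps to a table rebuild as well):

mysql> SET GLOBAL innodb_file_per_table=1;
mysql> ALTER TABLE test.joinit ENGINE=INNODB;   # or, equivalently:
mysql> OPTIMIZE TABLE test.joinit;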

Here’s the temporary table (#sql-4f10_1) that was created while the process was still ongoing:

$ ls -lh /media/lvFernando1/data/test
total 2.2G
-rw-rw---- 1 fernando.laudares admin 8.5K Jul 30 20:42 joinit.frm
-rw-rw---- 1 fernando.laudares admin 8.5K Jul 30 23:05 #sql-4f10_1.frm
-rw-rw---- 1 fernando.laudares admin 2.2G Jul 30 23:12 #sql-4f10_1.ibd

and the resulting .ibd file from when the process completed:

$ ls -lh /media/lvFernando1/data/test
total 9.3G
-rw-rw---- 1 fernando.laudares admin 8.5K Jul 30 23:05 joinit.frm
-rw-rw---- 1 fernando.laudares admin 9.3G Jul 30 23:35 joinit.ibd

Note that the new joinit.ibd file is about 9.3G. Again, the process resulted in the use of extra disk space:

/dev/mapper/vg0-lvFernando1 50G 20G 28G 42% /media/lvFernando1

#3) Dump and restore: looking at the disk space use

As pointed out, the way to reclaim unused disk space inside ibdata1 back to the file system (and consequently make it shrink) is to dump the full database to a text file and then restore it back. I started by doing a simple full mysqldump encompassing all databases:

$ mysqldump -S /tmp/mysql_sandbox5537.sock --user=msandbox --password=msandbox --all-databases > dump.sql

which created the following file:

-rw-r--r-- 1 fernando.laudares admin 8.1G Jul 31 00:02 dump.sql

I then stopped MySQL, wiped out the full datadir, and used the mysql_install_db script to (re-)create the system tables (that’s not strictly needed, I did it to level comparisons; once you have all InnoDB tables out of the system tablespace you can simply delete all ib* files along with any .ibd and .frm files for the related InnoDB tables), started MySQL again, and finally restored the backup.
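The stop/wipe/re-initialize steps were roughly as follows (a sketch from my sandbox setup; your paths will differ and, depending on the installation, mysql_install_db may also need a --basedir option):

$ mysqladmin -S /tmp/mysql_sandbox5537.sock --user=msandbox --password=msandbox shutdown
$ rm -rf /media/lvFernando1/data/*
$ mysql_install_db --datadir=/media/lvFernando1/data --user=fernando.laudares
# start MySQL again, then restore the dump: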

$ mysql -S /tmp/mysql_sandbox5537.sock --user=msandbox --password=msandbox < dump.sql

This resulted in:

/dev/mapper/vg0-lvFernando1 50G 9.4G 38G 21% /media/lvFernando1

and:

-rw-rw---- 1 fernando.laudares admin 18M Jul 31 05:05 /media/lvFernando1/data/ibdata1

ibdata1 did shrink in the process (returning to the default* initial size) and we recovered a couple of gigabytes that were not being used by the single InnoDB table that used to live inside it.

* The default value of innodb_data_file_path for the Percona Server release I used in the tests is “ibdata1:10M:autoextend”. However, the manual states that “the default behavior is to create a single auto-extending data file, slightly larger than 10MB”. Given that innodb_autoextend_increment is set to 8M by default, what we get in practice is an initialized ibdata1 file of 18M, as seen above.

#4) Compressing the table

Back to the original scenario, with the test table still living in the shared tablespace, I enabled innodb_file_per_table and set Barracuda as the innodb_file_format before rebuilding the table with compression.
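Both prerequisites can be changed dynamically, with no restart needed (a minimal sketch):

mysql> SET GLOBAL innodb_file_per_table=1;
mysql> SET GLOBAL innodb_file_format=Barracuda;

With those in place, I then ran: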

mysql> ALTER TABLE test.joinit ENGINE=INNODB ROW_FORMAT=Compressed;

This resulted in an .ibd file with half the size of an uncompressed one:

-rw-rw---- 1 fernando.laudares admin 4.7G Jul 31 13:59 joinit.ibd

The process was a lengthy one but resulted in considerably less disk space use than converting the table to MyISAM:

/dev/mapper/vg0-lvFernando1 50G 20G 28G 42% /media/lvFernando1

#5) Converting the table to TokuDB

I used TokuDB version 7.1.6 (not the latest one), which was bundled with Percona Server 5.5.37-rel35.1.

mysql> ALTER TABLE test.joinit ENGINE=TOKUDB;

As was the case when converting the InnoDB table to its own tablespace, a temporary table was created to intermediate the process, with the table definition residing in the test database directory:

$ ls -lh /media/lvFernando1/data/test/
total 24K
-rw-r----- 1 fernando.laudares admin 8.5K Jul 31 14:22 joinit.frm
-rw-rw---- 1 fernando.laudares admin 8.5K Jul 31 14:28 #sql-7f51_1.frm

However, TokuDB created a temporary file in the main datadir to copy the data into, shown here at some point during the process:

-rwxrwx--x 1 fernando.laudares admin 32K Jul 31 14:28 _test_sql_7f51_1_main_a_2_19.tokudb
-rwxrwx--x 1 fernando.laudares admin 16K Jul 31 14:28 _test_sql_7f51_1_status_a_1_19.tokudb
-rw------- 1 fernando.laudares admin 528M Jul 31 14:29 tokuldNk5W4v

Once the process completed, the tokuldNk5W4v file disappeared and the data ended up in the temporary table’s main .tokudb file:

-rwxrwx--x 1 fernando.laudares admin 1.1G Jul 31 14:32 _test_sql_7f51_1_main_d_1_19_B_0.tokudb

The new .tokudb file is so small compared to the other files obtained in the previous approaches that it almost goes unnoticed when looking at the disk space use:

/dev/mapper/vg0-lvFernando1 50G 12G 36G 25% /media/lvFernando1

Curiously, that file retained the name/reference of the temporary table used in the conversion, even though the temporary table’s definition file (#sql-7f51_1.frm) disappeared as well; only joinit.frm remained under the test database directory.

I decided to do a dump & restore of the whole database to see what the result would be in this case:

$ mysqldump -S /tmp/mysql_sandbox5537.sock --user=msandbox --password=msandbox --all-databases > dump2.sql
$ ls -lh dump2.sql
-rw-r--r-- 1 fernando.laudares admin 8.1G Jul 31 15:07 /home/fernando.laudares/dump2.sql

Once it was restored, we found the ibdata1 file back to its initial default size, and the disk space used by the datadir was down to a scant 1.1G. Curiously (again), the dump & restore procedure did fix the name of the TokuDB table file:

-rwxrwx--x 1 fernando.laudares admin 980M Jul 31 15:36 _test_joinit_main_21_1_19_B_0.tokudb

The process of converting the resident InnoDB table to TokuDB was faster than compressing it with ROW_FORMAT=Compressed and resulted in a much smaller file. This is not to say that using TokuDB is the best solution, but to point out that the conversion can be accomplished in less time and with less extra disk space than InnoDB requires, which can come in handy if you don’t have much space left. Also, remember the test was done with a relatively small table with a simple structure; you may need to check whether yours can be converted to TokuDB and what changes to indexes you might need to make (if any).

#6) Expanding the shared tablespace

As previously mentioned, I’ve been using the default value of “ibdata1:10M:autoextend” for innodb_data_file_path in my tests. If you have another partition with unused space available, you’re only looking for a solution to the “running out of disk space” problem, and you don’t mind keeping your big table inside the shared tablespace, then you can expand it. That can be done by adding a second ibdata file to the tablespace definition. Of course, that only helps if you create it on a partition other than the one already hosting ibdata1.

To do so, you need to stop MySQL and verify the exact size of the ibdata1 file you have (list the size in bytes, since “ls -lh” would round it):

$ ls -l /media/lvFernando1/data/ibdata1
-rw-r----- 1 fernando.laudares admin 10957619200 Aug 1 16:20 /media/lvFernando1/data/ibdata1

I had space available on the /media/lvFernando2 partition so I decided to have ibdata2 created there. To do so, I added the following two lines to my.cnf:

innodb_data_home_dir =
innodb_data_file_path=/media/lvFernando1/data/ibdata1:10957619200;/media/lvFernando2/tmp/ibdata2:10M:autoextend

Then I restarted MySQL and confirmed that ibdata2 was created there upon initialization:

$ ls -lh /media/lvFernando2/tmp/
total 10M
-rw-rw---- 1 fernando.laudares admin 10M Aug 1 16:30 ibdata2
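You can also confirm from within MySQL that the new tablespace definition was picked up (a minimal check):

mysql> SHOW VARIABLES LIKE 'innodb_data_file_path'\G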

Three important things to note on this approach:

  1. Only one ibdata file listed in innodb_data_file_path can be configured with “autoextend” – the last one in the list.
  2. You need to redefine ibdata1 in innodb_data_file_path using its current size. Any other size won’t work.
  3. From the moment you redefine ibdata1 with its current size and add ibdata2 configured with “autoextend”, ibdata1 won’t grow any bigger; the shared tablespace will be extended through ibdata2 only.

What the third point really means in practice is that you need to plan this maneuver accordingly: if you still have space available in the partition that hosts ibdata1 and you want to use it first, then you need to delay these changes until you get to that point. From the moment you add ibdata2 to the shared tablespace definition, new data will start being added to it.

Conclusion

The little experiment I did helped explain how some of the solutions proposed in my previous post for moving an InnoDB table outside the shared tablespace work in practice and, most importantly, how much extra disk space they need along the way. It was interesting to see that none of them made use of the tmpdir for the temporary tables created to intermediate the table conversion process; those were always created in the main datadir. The test table I used was only a bit more than 10G, far from a 1TB-sized table, so the results I got may differ from what you would find in a larger environment.

As complementary information, MySQL 5.6 allows InnoDB tables to be created with their private tablespaces living outside the datadir (but that must be defined at table creation time and can’t be changed later by means of an ALTER TABLE), as well as transportable tablespaces (which allow a private tablespace to be copied from one database server to another).
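For illustration, the two 5.6 features would be used roughly like this (a sketch; the target directory and the joinit_archive table are hypothetical, and the .cfg file is produced by FLUSH TABLES … FOR EXPORT):

# a table created with its .ibd file outside the datadir (5.6+, defined at creation time only)
CREATE TABLE test.joinit_archive (
  `i` int(11) NOT NULL AUTO_INCREMENT,
  `s` varchar(64) DEFAULT NULL,
  PRIMARY KEY (`i`)
) ENGINE=InnoDB DATA DIRECTORY='/media/lvFernando2/data';

# transportable tablespaces (5.6+): copying a table's .ibd file to another server
USE test;
FLUSH TABLES joinit FOR EXPORT;          # on the source; then copy joinit.ibd and joinit.cfg away
UNLOCK TABLES;
ALTER TABLE joinit DISCARD TABLESPACE;   # on the destination, which has the same table definition
# copy joinit.ibd and joinit.cfg into the destination's test database directory, then:
ALTER TABLE joinit IMPORT TABLESPACE;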


Aug 21, 2014

A closer look at the MySQL ibdata1 disk space issue and big tables

A recurring and very common customer issue seen here by the Percona Support team involves how to make the ibdata1 file “shrink” within MySQL. I can only imagine there’s a degree of regret by some of the InnoDB architects over their design decisions regarding disk-space management in the shared tablespace*, because this has been a big frustration for many MySQL users over the years.

There’s a very old bug (“InnoDB ibdata1 never shrinks after data is removed,” Sept. 8, 2003) documenting user dissatisfaction. Shortly before that issue celebrated its 10th anniversary, James Day, MySQL senior principal support engineer at Oracle, posted a comment explaining why things haven’t changed, and he also offered possible alternative solutions. Maybe we’ll see it fixed in a future release of MySQL. We can only be sure that any new storage engine aiming to win the sympathy of MySQL users can’t make that same mistake again.

One general misunderstanding that exacerbates the problem is the belief that if we enable innodb_file_per_table then all InnoDB tables will live in their own tablespace (a “private” .ibd file), even though the manual is clear about the role this variable plays. The truth is that enabling innodb_file_per_table only affects how new InnoDB tables are created – it won’t magically export tables currently living in the shared tablespace into their own separate .ibd files. That can be accomplished manually at any time afterwards by running ALTER TABLE or OPTIMIZE TABLE on the target table. The “gotcha” here is that by doing so you’ll actually be using additional disk space, as much as the table size, for the newly created .ibd file. ibdata1 won’t automatically shrink: the space previously used by that table will be marked as “free” (for internal InnoDB use) but won’t be returned to the file system.
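A quick way to check which InnoDB tables are still living inside the shared tablespace (space id 0) is to query the data dictionary tables exposed through INFORMATION_SCHEMA; the sketch below assumes a server that provides INNODB_SYS_TABLES there (MySQL 5.6, or Percona Server 5.5 with XtraDB), and note that InnoDB’s own internal SYS_* entries will also show up in the list:

# tables with space = 0 reside in the shared/system tablespace
SELECT name
FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
WHERE space = 0
ORDER BY name;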

Note: Throughout this post I make a common reference between ibdata1 and the shared (also called system) tablespace. In fact, the latter can be composed of a list of files (ibdata1;ibdata2;…). Each file can have a fixed size or be specified with the autoextend parameter and have a cap limiting how big it can grow (well, actually only the last file defined in the list can). The default setting for the variable ruling how the shared tablespace is defined has it living in a single file located in the datadir, named ibdata1, and configured with autoextend, hence the popular reference to a “growing ibdata1”.

A big table scenario

The shared tablespace contains more than the data and indexes of the InnoDB tables created inside it; it also stores other structures, such as the rollback segments for running transactions (a.k.a. “undo logs”). My colleague Miguel Angel Nieto wrote a blog post last year that explains in detail what is stored inside ibdata files, why they can grow bigger than the sum of the data from the tables they host and won’t “shrink,” and the only real way to reclaim unused disk space back to the file system.

But there’s one scenario where the ibdata1 file grows in a “legitimate” way: when it’s storing a big table. If there’s a big table living inside the shared tablespace accounting for most of its size, then there’s no “shrinking” it. We can argue whether this runs counter to best practices (or not), but let’s remember that innodb_file_per_table used to be disabled by default. The first releases of MySQL 5.5 had it enabled by default but the later ones had it disabled. We find the exact opposite with MySQL 5.6 – it is enabled in the later versions.

If a developer who is not a database specialist, much less a MySQL expert, creates a successful product around a single table and was “unlucky” enough to be using a MySQL release that had innodb_file_per_table disabled by default, he or she might soon find a terabyte-sized table living inside ibdata1. (BTW, I recommend Bill Karwin’s excellent “How to Avoid Common (but Deadly) MySQL Development Mistakes” recorded webinar and slides to all developers using MySQL.)

What to do then?

We recently helped a customer with such a problem. They found themselves running out of disk space with a steadily growing ibdata1 file containing data from a 1.5-terabyte table. The replicas weren’t able to keep up either, and binary logs were piling up, thus contributing to more disk space use. They had access to a network-mounted partition on their main master server, but that storage was quite slow. We started looking at possible solutions to their case and the team came up with the following list of “solutions”:

1) Add one or more disks to the system. Luckily, they had the datadir inside an LVM partition using the XFS filesystem, so the prospect of simply increasing its size by adding more disk space was a good one to contemplate.

2) Re-arrange files across partitions. The binary logs were stored inside the datadir and used a considerable amount of disk space, so moving them to another partition would already free a few hundred gigabytes. The complicated part here was that the only available alternative storage was the slow network one, and that would make the replicas lag even more. Later we contemplated adding a second file (ibdata2) to the shared tablespace, to be located on a different partition but, again, the only one with space available was the slow network mount.

3) Archive data. Delete a large amount of old data from the table (possibly using pt-archiver, as sketched after this list) and then dump and restore the table with less data; or simply leave it as is – even though the freed space won’t be reclaimed by the filesystem, it will be made available internally for InnoDB to re-use. Be aware that deleting data is a slow process; if you have a lot of old unused data then it might be more efficient to dump the data you want to preserve, truncate the table, and then import the data back into it.

4) Convert the table to MyISAM. This could be seen as a controversial solution: it certainly shouldn’t be taken lightly, as a change of storage engine is a big deal. A given application that works with InnoDB might simply not work with MyISAM. While we do not normally recommend MyISAM, it does take less space to store data than InnoDB. Be aware that MyISAM is not a transactional storage engine, and if the server crashes you will have to perform a lengthy repair process on the table. Also keep in mind that after converting the table to MyISAM you would have to either do a dump & restore of the full database to have ibdata1 shrink OR make sure you have all InnoDB tables converted to MyISAM, stop MySQL, remove all ib* files, and start MySQL again to have it recreate ibdata1 with its default size.

5) Try compressing the table. Rebuild the InnoDB table with innodb_file_per_table enabled and ROW_FORMAT=Compressed. If the replicas are already lagging behind, replication may not be able to keep up when compressed tables are used. Also, if you have a very high write volume, it probably won’t perform well enough; if that is the case, this solution is not a viable one. However, if you would consider switching to the new TokuDB storage engine, you will get much better compression than with InnoDB overall, and also better performance for write-intensive workloads.
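As a rough sketch of option 3, purging old rows with pt-archiver could look like the following (the database, table, column and cut-off date are hypothetical; with --purge the matching rows are deleted rather than copied elsewhere):

$ pt-archiver --source h=localhost,D=mydb,t=big_table \
    --where "created_at < '2013-01-01'" \
    --limit 1000 --commit-each --purge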

Of course, of all the outlined solutions the first two look by far the least invasive. The problem is that the server doesn’t always have free slots waiting for an extra disk, plus RAID setups bring extra constraints. Moving the binary logs to a different partition can be handy but it’s not always practical or even possible, as is the case with making more space available by deleting/archiving data or extending the shared tablespace by adding a new ibdata2 file residing on a different partition. Finally, the other solutions, converting the table to a different storage engine and compressing it, were not an option in the case of this particular customer because they required the use of additional disk space in the process, which they didn’t have.

What the customer ended up doing was dumping the whole data set and importing it into another server, compressing the table in the process, and then re-importing it back into the main server. It took time to complete and it wasn’t an option they were keen to try at first, but sometimes there’s just no easy way out of a problem like this. They made changes on the replicas to allow them to keep up with replication even when using compressed tables, and the problem was solved for now, even though they’re aware the disk space they recovered won’t be enough in the long term.

Conclusion

The most important take-away here is to never let yourself get too close to running out of disk space. The best solutions available for minimizing the disk space used by InnoDB tables inside the shared tablespace usually require the (temporary, if you intend to do a dump & restore afterwards) use of yet more disk space. And the bigger the table, the longer dump & restore and table conversions will take, so if you’re running out of time it only makes things more complicated.

This case made me curious to explore some of the other options mentioned above a little further. In a follow-up post I’ll share some of my findings.

