Sep 09, 2015

Testing MySQL partitioning with pt-online-schema-change

There are cases where we need to implement MySQL partitioning, or at least test whether it is an option to solve an issue. But how can we test it when the table in our production environment has hundreds of millions of rows, several composite indexes, and/or a size on disk of hundreds of gigabytes?

Testing environments usually don’t have all the data that production has, and if they do, you are probably not testing all the use cases at the database level. So, is it possible to test MySQL partitioning on production with as little impact as possible?

When we execute pt-online-schema-change, it creates a new table and triggers, and then copies the data. Since we are going to test partitioning, we need both tables – with and without partitioning – and we will use triggers to keep them consistent. A good thing about partitioning a table is that, usually, you don’t need to change the row structure, which means you can use practically the same statement to insert, update or delete on both tables.

Let’s suppose that we have this sysbench table:

CREATE TABLE `sbtest` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`k` int(10) unsigned NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`)
) ENGINE=InnoDB;

If we want to partition it, we execute:

ALTER TABLE percona.sbtest PARTITION BY HASH(id) partitions 4;

Normally, we would execute pt-online-schema-change like this:

pt-online-schema-change h=localhost,D=percona,t=sbtest \
  --recursion-method none --execute \
  --alter "PARTITION BY HASH(id) partitions 4"

But as we are going to test partitioning, we want to:

  • keep the original table
  • not swap the tables
  • not drop the triggers

That is why we are going to execute pt-online-schema-change like this:

pt-online-schema-change h=localhost,D=percona,t=sbtest \
  --recursion-method none --execute --no-swap-tables --no-drop-old-table \
  --no-drop-new-table --no-drop-triggers \
  --alter "PARTITION BY HASH(id) partitions 4"

At the end we will have two tables: sbtest, which is not partitioned, and _sbtest_new, which is partitioned:

[Diagram: sbtest (original) with the pt-osc triggers copying changes into the partitioned _sbtest_new]
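
One way to confirm that the new table is actually partitioned is to query information_schema (an optional check, not part of the original procedure):

SELECT PARTITION_NAME, PARTITION_METHOD, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'percona' AND TABLE_NAME = '_sbtest_new';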

The next step pt-osc would have taken is to swap the tables, but we used --no-swap-tables, so we are going to do it manually. First, though, we are going to add triggers to _sbtest_new so that it can propagate changes back to the original table, which will be renamed to _sbtest_old. The new triggers are very similar to the ones that already exist, except that they write to _sbtest_old. Since we have to create them now, before _sbtest_old exists, they would fail with a “table doesn’t exist” error, so we create the triggers with a handler for that error:

delimiter //
CREATE DEFINER=`root`@`%` TRIGGER `pt_osc_percona__sbtest_new_ins` AFTER INSERT ON `percona`.`_sbtest_new` FOR EACH ROW
begin
DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02' BEGIN END;
REPLACE INTO `percona`.`_sbtest_old` (`id`, `k`, `c`, `pad`) VALUES (NEW.`id`, NEW.`k`, NEW.`c`, NEW.`pad`);
end;
//
CREATE DEFINER=`root`@`%` TRIGGER `pt_osc_percona__sbtest_new_upd` AFTER UPDATE ON `percona`.`_sbtest_new` FOR EACH ROW
begin
DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02' BEGIN END;
REPLACE INTO `percona`.`_sbtest_old` (`id`, `k`, `c`, `pad`) VALUES (NEW.`id`, NEW.`k`, NEW.`c`, NEW.`pad`);
end;
//
CREATE DEFINER=`root`@`%` TRIGGER `pt_osc_percona__sbtest_new_del` AFTER DELETE ON `percona`.`_sbtest_new` FOR EACH ROW
begin
DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02' BEGIN END;
DELETE IGNORE FROM `percona`.`_sbtest_old` WHERE `percona`.`_sbtest_old`.`id` <=> OLD.`id`;
end;
//
delimiter ;
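
The new triggers mirror the ones pt-online-schema-change already created on sbtest; if you want to use those as a starting point, you can inspect them first (an optional step, not in the original post):

SHOW TRIGGERS FROM percona LIKE 'sbtest';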

The schema is now:

[Diagram: _sbtest_new now has its own set of triggers that write back to the (not yet existing) _sbtest_old]

We are going to create a table, _sbtest_diff, which is the table that will be renamed to _sbtest_new during the swap. It doesn’t need secondary indexes or partitioning, so it can be kept simple:

CREATE TABLE `_sbtest_diff` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`k` int(10) unsigned NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`)
) ENGINE=InnoDB;

At this point we are able to swap the tables. The command to execute is:

RENAME TABLE sbtest TO _sbtest_old, _sbtest_new TO sbtest, _sbtest_diff TO _sbtest_new;

The rename table will do this:

[Diagram: after the swap, sbtest is the partitioned table, _sbtest_old is the original, and _sbtest_new is the empty placeholder]
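
While the swapped schema is in place, a quick way to confirm that the triggers are keeping both copies in sync is to compare row counts (a minimal sanity check; a full data comparison would be more thorough):

SELECT
  (SELECT COUNT(*) FROM percona.sbtest)      AS partitioned_rows,
  (SELECT COUNT(*) FROM percona._sbtest_old) AS original_rows;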

Now you can test performance on the partitioned table. If we want to return to the previous state, we just execute:

RENAME TABLE _sbtest_new TO _sbtest_diff, sbtest TO _sbtest_new, _sbtest_old TO sbtest;

With these two “RENAME TABLE” commands, we are able to switch back and forth between the partitioned and non-partitioned table. Once you are satisfied with your testing, the remaining task is to clean up the triggers and the tables. In the end, there are two possible outcomes:

  • The partitioned table is working as expected and we keep it (we stay in the swapped state). The cleanup commands are:
DROP TRIGGER `pt_osc_percona__sbtest_new_ins`;
DROP TRIGGER `pt_osc_percona__sbtest_new_upd`;
DROP TRIGGER `pt_osc_percona__sbtest_new_del`;
DROP TABLE _sbtest_new;
DROP TABLE _sbtest_old;

  • We decided to keep the original table, which implies swapping back first with the second RENAME TABLE and then executing:
DROP TRIGGER `pt_osc_percona_sbtest_ins`;
DROP TRIGGER `pt_osc_percona_sbtest_upd`;
DROP TRIGGER `pt_osc_percona_sbtest_del`;
DROP TABLE _sbtest_new;
DROP TABLE _sbtest_diff;
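
In either case, an optional final check (not in the original post) is to make sure no pt-osc triggers are left behind:

SELECT TRIGGER_NAME, EVENT_OBJECT_TABLE
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA = 'percona' AND TRIGGER_NAME LIKE 'pt_osc%';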

Conclusion:

With this procedure, you will have both tables – with and without partitioning – synchronized and you will be able to swap between them until you decide to keep one of them.


Jun 15, 2015

Speed up GROUP BY queries with subselects in MySQL

We usually try to avoid subselects because they sometimes force the use of a temporary table and limit the use of indexes. But when is it good to use a subselect?

This example was tested on table a (1,310,723 rows) and tables b, c and d (5 rows each), with MySQL versions 5.5 and 5.6.
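
The post doesn’t include the table definitions; a minimal schema consistent with the queries below (hypothetical, just to make the example reproducible) could look like this:

-- hypothetical DDL: column names are taken from the queries in the post
CREATE TABLE a (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(50) NOT NULL,
  `count` int NOT NULL,
  `position` int NOT NULL,
  `bid` int unsigned NOT NULL,
  `cid` int unsigned NOT NULL,
  `did` int unsigned NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

CREATE TABLE b (`id` int unsigned NOT NULL PRIMARY KEY, `col1` varchar(20)) ENGINE=InnoDB;
CREATE TABLE c (`id` int unsigned NOT NULL PRIMARY KEY, `col2` int) ENGINE=InnoDB;
CREATE TABLE d (`id` int unsigned NOT NULL PRIMARY KEY, `col3` varchar(20)) ENGINE=InnoDB;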

Let’s suppose we have a query like this:

select a.name,sum(a.count) aSum,avg(a.position) aAVG,b.col1,c.col2,d.col3
from
a join
b on (a.bid = b.id) join
c on (a.cid = c.id) join
d on (a.did = d.id)
group by a.name,b.id,c.id,d.id

What will MySQL do? First it will take the entire data set – this means it will go through each row, reading the values of bid, cid and did, and apply the joins against each table. At that point it has the complete joined data set, and only then will it start to group it, executing the SUM and AVG functions.

Let’s analyze it step by step:

  1. Scan each row of table a, which has 1,310,720 rows.
  2. Join each row of table a with b, c and d – this means that each of the 1,310,720 rows will be joined, making the temporary table bigger.
  3. Execute the GROUP BY, which will scan the 1,310,720 rows again and create the result data set.

What can we do to optimize this query? We can’t avoid the GROUP BY over the 1.3M rows, but we can avoid joining all 1.3M rows. How? We need all the information from table a for the GROUP BY, but we don’t need to execute all the joins before grouping. Let’s rewrite the query:

select a.name,aSum,aAVG,b.col1,c.col2,d.col3
from
( select name,sum(count) aSum ,avg(position) aAVG,bid,cid,did
  from a
  group by name,bid,cid,did) a join
b on (a.bid = b.id) join
c on (a.cid = c.id) join
d on (a.did = d.id)

In the rewritten query, the GROUP BY runs only over table a, and the result data set of that subquery is just 20 rows. But what about the query response time? The first query took 2.3 seconds on average and the optimized query took 1.8 seconds on average – half a second faster.

What about adding a covering index? The index that we can add will be:

alter table a add index (name,bid,cid,did,count,position);

The EXPLAIN plan of both queries shows that they resolve the query using only the index.
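
For example, running EXPLAIN on the rewritten query once the covering index above is in place (a quick way to verify it) should show “Using index” in the Extra column for table a:

EXPLAIN
select a.name,aSum,aAVG,b.col1,c.col2,d.col3
from
( select name,sum(count) aSum ,avg(position) aAVG,bid,cid,did
  from a
  group by name,bid,cid,did) a join
b on (a.bid = b.id) join
c on (a.cid = c.id) join
d on (a.did = d.id);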

Now the response time of the original query is 1.9 seconds, which is close to the time of the optimized query. However, the response time of the optimized query is now 0.7 seconds, nearly 3x faster than before. The downside of adding this index is that we are indexing the whole table: the index length is close to 80% of the data length.
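
The index-to-data ratio can be checked in information_schema (an optional check, not shown in the original post):

SELECT TABLE_NAME, DATA_LENGTH, INDEX_LENGTH,
       ROUND(INDEX_LENGTH / DATA_LENGTH * 100, 1) AS index_pct_of_data
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'a';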

If the original query had WHERE conditions, the benefit will depend on which tables those conditions reference. Let’s suppose we add c.col2=3:

select a.name,sum(a.count) aSum,avg(a.position) aAVG,b.col1,c.col2,d.col3
from
a join
b on (a.bid = b.id) join
c on (a.cid = c.id) join
d on (a.did = d.id)
where c.col2=3
group by a.name,b.id,c.id,d.id

Now, in the rewritten query, the subquery has to change: table c and the WHERE clause must be moved into it:

select a.name,aSum,aAVG,b.col1,a.col2,d.col3
from
( select a.name,sum(count) aSum ,avg(position) aAVG,bid,cid,did,c.col2
 from a join
 c on (a.cid = c.id)
 where c.col2=3
 group by name,bid,cid,did) a join
b on (a.bid = b.id) join
d on (a.did = d.id)

But the difference in response time is not as big (1.1 seconds for the original query and 0.9 for the new one). Why? Because the original query now has less data to group: adding c.col2=3 reduces the amount of data to group from 1.3M rows to 262K. Indeed, the more WHERE conditions you add on other tables, the smaller the data set to aggregate and the smaller the speed-up.

Conclusion: We usually add the GROUP BY at the end of queries, and that is OK because the syntax leads us to do it. However, we can use a subquery to group only the data that we need and then perform the joins against the other tables. This can speed up some of our GROUP BY queries.


Jan 21, 2015

Importing big tables with large indexes with Myloader MySQL tool

Mydumper is known as the faster (much faster) mysqldump alternative. So, if you take a logical backup you will choose Mydumper instead of mysqldump. But what about the restore? Well, who needs to restore a logical backup? It takes ages! Even with Myloader. But this could change just a bit if we are able to take advantage of Fast Index Creation.

As you probably know, Mydumper and mysqldump export the structure of a table, with all its indexes and constraints, and of course the data. Then, Myloader and MySQL import the table structure and then the data. The most important difference is that you can configure Myloader to import the data using a certain number of threads. The import steps are:

  1. Create the complete structure of the table
  2. Import the data

When you execute Myloader, internally it first creates the tables by executing the “-schema.sql” files, and then takes all the filenames without “schema.sql” and puts them in a task queue. Every thread takes a filename from the queue – which is actually a chunk of the table – and executes it. When it finishes, it takes another chunk from the queue; if the queue is empty, it simply ends.

This import procedure works fast for small tables, but with big tables with large indexes the inserts get slower and slower because of the overhead of inserting the new values into the secondary indexes. Another way to import the data is:

  1. Split the table structure into table creation with primary key, indexes creation and constraint creation
  2. Create tables with primary key
  3. Per table do:
    1. Load the data
    2. Create index
  4. Create constraints

This import procedure is implemented in a branch of Myloader that can be downloaded from Launchpad, or fetched directly with bzr from the repository:

bzr branch lp:~david-ducos/mydumper/mydumper

The tool reads the schema files and splits them into three separate sets of statements: the table creation with the primary key, the index creation, and the constraint creation. The primary key is kept in the CREATE TABLE in order to avoid rebuilding the table when a primary key is added later; the “KEY” and “CONSTRAINT” lines are removed from it and added to the index and constraint statements, respectively.
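
As an illustration, a schema file like the following (a hypothetical table, not from the post) would be split roughly like this:

-- original -schema.sql content (hypothetical example)
CREATE TABLE `orders` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `customer_id` int unsigned NOT NULL,
  `total` decimal(10,2) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_customer` (`customer_id`),
  CONSTRAINT `fk_customer` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`)
) ENGINE=InnoDB;

-- 1) table creation: only the primary key is kept
CREATE TABLE `orders` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `customer_id` int unsigned NOT NULL,
  `total` decimal(10,2) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

-- 2) index creation, run after the table's data is loaded
ALTER TABLE `orders` ADD KEY `idx_customer` (`customer_id`);

-- 3) constraint creation, run at the end
ALTER TABLE `orders` ADD CONSTRAINT `fk_customer`
  FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`);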

It processes the tables according to their size, starting with the largest, because creating the indexes of a big table can take hours and is single-threaded. While that index is being built we cannot use more than one thread for it, but the remaining threads can keep loading and indexing other tables.

It has a new thread (monitor_process) that decides which chunk of data goes into the task queue next, and a communication queue that the task processes use to tell monitor_process which chunk has been completed.

I ran multiple imports of a single table on an AWS m1.xlarge machine, comparing Myloader and this branch, and I found that with large indexes the times were:

[Chart: import time for Myloader vs. the branch as the number of rows grows]

As you can see, with fewer than 150M rows, importing the data and then creating the indexes takes longer than importing the table with the indexes already in place. But everything changes above 150M rows: importing 200M rows takes 64 minutes more with Myloader, but just 24 minutes more with the new branch.

On a table of 200M rows with an integer primary key and 9 integer columns, you can see how the time increases as the indexes get larger:

[Chart: import time on the 200M-row table as more and larger indexes are added]

Where:

2-2-0: two 1-column and two 2-column indexes
2-2-1: two 1-column, two 2-column and one 3-column index
2-3-1: two 1-column, three 2-column and one 3-column index
2-3-2: two 1-column, three 2-column and two 3-column indexes

Conclusion

This branch can only apply the same strategy to every table, but with this new logic in Myloader, a future version could choose the best strategy for each table, reducing the restore time considerably.

