I have been working for a customer benchmarking insert performance on Amazon EC2, and I have some interesting results that I wanted to share. I used a nice and effective tool, iiBench, which has been developed by Tokutek. Though the “1 billion row insert challenge” for which this tool was originally built is long over, the tool still serves well for benchmark purposes.
OK, let’s start off with the configuration details.
Configuration
First of all let me describe the EC2 instance type that I used.
EC2 Configuration
I chose the m2.4xlarge instance, as that is the instance type with the highest amount of memory available, and memory is what really, really matters.
High-Memory Quadruple Extra Large Instance
68.4 GB of memory
26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.4xlarge
As for the I/O configuration, I chose 8 x 200G EBS volumes in software RAID 10.
Now let’s come to the MySQL configuration.
MySQL Configuration
I used Percona Server 5.5.22-55 for the tests. Following is the configuration that I used:
## InnoDB options
innodb_buffer_pool_size = 55G
innodb_log_file_size = 1G
innodb_log_files_in_group = 4
innodb_buffer_pool_instances = 4
innodb_adaptive_flushing = 1
innodb_adaptive_flushing_method = estimate
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_max_dirty_pages_pct = 50
innodb_io_capacity = 800
innodb_read_io_threads = 8
innodb_write_io_threads = 4
innodb_file_per_table = 1

## Disabling query cache
query_cache_size = 0
query_cache_type = 0
You can see that the buffer pool is sized at 55G and I am using 4 buffer pool instances to reduce the contention caused by buffer pool mutexes. Another important setting I am using is the “estimate” flushing method, available only in Percona Server. The “estimate” method reduces the impact of traditional InnoDB log flushing, which can cause downward spikes in performance. Other than that, I have also disabled the query cache to avoid the contention it causes on a write-heavy workload.
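For what it is worth, here is a quick sanity check (just a sketch, run from any mysql client session) to confirm that these settings are in effect; all of these are standard variables, with innodb_adaptive_flushing_method being Percona Server specific:

-- Verify the buffer pool instances, the Percona-specific flushing method,
-- and that the query cache is fully disabled.
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_instances';
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_flushing_method';
SHOW GLOBAL VARIABLES LIKE 'query_cache_type';
SHOW GLOBAL VARIABLES LIKE 'query_cache_size';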
OK, so that was all about the configuration of the EC2 instance and MySQL.
Now as far as the benchmark itself is concerned, I made no code changes to iiBench, and used the version available here. But I changed the table to use range partitioning. I defined a partitioning scheme such that every partition would hold 100 million rows.
Table Structure
The structure of the table with no secondary indexes is as follows:
CREATE TABLE `purchases_noindex` (
  `transactionid` int(11) NOT NULL AUTO_INCREMENT,
  `dateandtime` datetime DEFAULT NULL,
  `cashregisterid` int(11) NOT NULL,
  `customerid` int(11) NOT NULL,
  `productid` int(11) NOT NULL,
  `price` float NOT NULL,
  PRIMARY KEY (`transactionid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (transactionid)
(PARTITION p0 VALUES LESS THAN (100000000) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (200000000) ENGINE = InnoDB,
 PARTITION p2 VALUES LESS THAN (300000000) ENGINE = InnoDB,
 PARTITION p3 VALUES LESS THAN (400000000) ENGINE = InnoDB,
 PARTITION p4 VALUES LESS THAN (500000000) ENGINE = InnoDB,
 PARTITION p5 VALUES LESS THAN (600000000) ENGINE = InnoDB,
 PARTITION p6 VALUES LESS THAN (700000000) ENGINE = InnoDB,
 PARTITION p7 VALUES LESS THAN (800000000) ENGINE = InnoDB,
 PARTITION p8 VALUES LESS THAN (900000000) ENGINE = InnoDB,
 PARTITION p9 VALUES LESS THAN (1000000000) ENGINE = InnoDB,
 PARTITION p10 VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
While the structure of the table with secondary indexes is as follows:
CREATE TABLE `purchases_index` (
  `transactionid` int(11) NOT NULL AUTO_INCREMENT,
  `dateandtime` datetime DEFAULT NULL,
  `cashregisterid` int(11) NOT NULL,
  `customerid` int(11) NOT NULL,
  `productid` int(11) NOT NULL,
  `price` float NOT NULL,
  PRIMARY KEY (`transactionid`),
  KEY `marketsegment` (`price`,`customerid`),
  KEY `registersegment` (`cashregisterid`,`price`,`customerid`),
  KEY `pdc` (`price`,`dateandtime`,`customerid`)
) ENGINE=InnoDB AUTO_INCREMENT=11073789 DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (transactionid)
(PARTITION p0 VALUES LESS THAN (100000000) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (200000000) ENGINE = InnoDB,
 PARTITION p2 VALUES LESS THAN (300000000) ENGINE = InnoDB,
 PARTITION p3 VALUES LESS THAN (400000000) ENGINE = InnoDB,
 PARTITION p4 VALUES LESS THAN (500000000) ENGINE = InnoDB,
 PARTITION p5 VALUES LESS THAN (600000000) ENGINE = InnoDB,
 PARTITION p6 VALUES LESS THAN (700000000) ENGINE = InnoDB,
 PARTITION p7 VALUES LESS THAN (800000000) ENGINE = InnoDB,
 PARTITION p8 VALUES LESS THAN (900000000) ENGINE = InnoDB,
 PARTITION p9 VALUES LESS THAN (1000000000) ENGINE = InnoDB,
 PARTITION p10 VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
Also, I ran 5 instances of iiBench simultaneously to simulate 5 concurrent connections writing to the table, with each instance of iiBench writing 200 million single-row inserts, for a total of 1 billion rows. I ran the test both against the table purchases_noindex, which has no secondary indexes and only a primary key, and against the table purchases_index, which has 3 secondary indexes. Another thing I would like to share is that the size of the table without secondary indexes is 56G, while the size of the table with secondary indexes is 181G.
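To give an idea of the write pattern and how such sizes can be checked, here is a sketch: the INSERT below only illustrates the shape of a single-row insert against this schema (the column names come from the table definitions above, the values are made up, and it is not taken from the iiBench code), and the information_schema query sums data and index sizes across all partitions:

-- Illustrative single-row insert in the shape the benchmark generates (values are made up).
INSERT INTO purchases_index (dateandtime, cashregisterid, customerid, productid, price)
VALUES (NOW(), 42, 1000001, 5007, 19.99);

-- Approximate on-disk size of data vs. indexes, summed over all partitions.
SELECT table_name,
       ROUND(SUM(data_length)  / 1024 / 1024 / 1024) AS data_gb,
       ROUND(SUM(index_length) / 1024 / 1024 / 1024) AS index_gb
FROM information_schema.partitions
WHERE table_schema = DATABASE()
  AND table_name IN ('purchases_noindex', 'purchases_index')
GROUP BY table_name;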
Now let’s come down to the interesting part.
Results
With the table purchases_noindex, which has no secondary indexes, I was able to achieve an avg. insert rate of ~25k INSERTs Per Second, while with the table purchases_index the avg. insert rate dropped to ~9k INSERTs Per Second. Let’s take a look at the graphs to get a better view of the whole picture.
Note, in the above graph, we have “millions of rows” on the x-axis and “INSERTs Per Second” on the y-axis.
The reason I have chosen to show “millions of rows” on the x-axis is so that we can see the impact of the growing data set on the insert rate.
We can see that adding the secondary indexes to the table decreased the insert rate by roughly 3x, and it is not even consistent. With the table having no secondary indexes, the insert rate is pretty much constant, remaining between ~25k and ~26k INSERTs Per Second. On the other hand, with the table having secondary indexes, there are regular downward spikes in the insert rate, and the variation is large: the rate swings between ~6.5k and ~12.5k INSERTs Per Second, with noticeable drops after every 100 million rows inserted.
I noticed that the drops in the insert rate were mainly caused by I/O pressure from increased flushing and checkpointing activity, which produced spikes in write activity large enough to pull the insert rate down.
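If you want to correlate such dips with flushing and checkpoint activity yourself, a few standard counters are enough; the following is a sketch of what could be sampled periodically (say, once a minute) while the benchmark runs:

-- Dirty pages waiting to be flushed, pages flushed so far, redo log bytes written,
-- and data file writes; sampling these over time exposes flushing/checkpoint pressure.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';
-- SHOW ENGINE INNODB STATUS also reports the current checkpoint age in its LOG section.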
Conclusion
As we all know, there are pros and cons to using secondary indexes. Secondary indexes improve read performance, but they have an impact on write performance. Most applications rely on read performance, and hence having secondary indexes is an obvious choice. But for applications that are write-mostly or that rely a lot on write performance, reducing the number of secondary indexes, or even doing away with them entirely, can increase write throughput by 2x to 3x. In this particular case, since I was mostly concerned with write performance, I went ahead and chose a table structure with no secondary indexes. Other important things to consider when you are concerned with write performance are using partitioning to reduce the size of the B+tree, having multiple buffer pool instances to reduce contention problems caused by buffer pool mutexes, using the “estimate” flushing method to reduce the chance of log flush storms, and disabling the query cache.