
TiDB is an open-source distributed database developed by PingCAP. It is a very interesting project as it can be used as a MySQL drop-in replacement: it implements the MySQL protocol and basically emulates MySQL. PingCAP defines TiDB as a “one-stop data warehouse for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads”. In this blog post I have decided to see how TiDB performs on a single server compared to MySQL for both OLTP and OLAP workloads. Please note that this benchmark is very limited in scope: we are only testing TiDB and MySQL on a single server, while TiDB is a distributed database out of the box.
Short version: TiDB supports parallel query execution for selects and can utilize many more CPU cores – MySQL is limited to a single CPU core for a single select query. On higher-end hardware – ec2 instances in my case – TiDB can be 3-4 times faster for complex select queries (OLAP workload) which do not use, or benefit from, indexes. At the same time, point selects and writes, especially inserts, can be 5x-10x slower. Again, please note that this test was on a single server, with a single TiKV process.
Installation
Please note: the following setup is only intended for testing and not for production.
I installed the latest version of TiDB available at the time of writing to take advantage of the latest performance improvements:
cat make-full-tidb-server
#!/bin/bash
set -x
cd /tidb
# download and unpack the TiDB v2.1.2 binaries
wget http://download.pingcap.org/tidb-v2.1.2-linux-amd64.tar.gz
tar -xzf tidb-*.tar.gz
cd tidb-*-linux-amd64/
# start the placement driver (PD)
./bin/pd-server --data-dir=pd --log-file=pd.log &
sleep 5
# start a single TiKV storage node, registering with the local PD
./bin/tikv-server --pd="127.0.0.1:2379" --data-dir=tikv -A 127.0.0.1:20165 --log-file=tikv.log &
sleep 5
# build tidb-server (the SQL layer) from source and point it at the local TiKV
cd ~/go/src/github.com/pingcap/tidb
make server
./bin/tidb-server --store=tikv --path="127.0.0.1:2379"
The normal installation process is described here (different methods are available).
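Once tidb-server is up, you can connect to it with the standard mysql command-line client; by default TiDB listens on port 4000 and the root user has an empty password on a fresh install:
# TiDB speaks the MySQL protocol, so the regular client works
mysql -h 127.0.0.1 -P 4000 -u root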
Benchmarks
The main purpose of this test is to compare MySQL to TiDB. As with any distributed database, it is hard to design an “apples to apples” comparison: we may end up comparing a distributed workload spanning many servers/nodes (in this case TiDB) to a single-server workload (in this case MySQL). To overcome this challenge, I decided to focus on “efficiency”. If the distributed database is not efficient – i.e. it requires 10s or 100s of nodes to do the same job as the non-distributed database – it may be cost prohibitive to use such a database for a small or medium size DB.
The preliminary results are: TiDB is much more efficient for SELECTs (OLAP workload) but much less efficient for WRITES and a typical OLTP workload. To overcome these limitations, it is possible to use more servers.
For this test I was using two types of benchmarks:
- OLAP: a set of complex queries on top of an “ontime” database (a database of historical airline flight information). For this benchmark I used different AWS ec2 instances with CPU cores ranging from 2 to 96. This is a response-time test (not a throughput test).
- OLTP: sysbench (as always): the standard point-select and write-only workloads. This is a throughput test, increasing the number of threads.
OLAP / analytical queries test
Database size is 70GB in MySQL and 30GB in TiDB (compressed). The table has no secondary indexes (except the primary key).
I used the following four queries (a simple timing harness is sketched after the list):
- Simple count(*):
select count(*) from ontime;
- Simple group by:
select count(*), year from ontime group by year order by year;
- Complex filter for a full table scan:
select * from ontime where UniqueCarrier = 'DL' and TailNum = 'N317NB' and FlightNum = '2' and Origin = 'JFK' and Dest = 'FLL' limit 10;
- Complex group by and order by query:
select SQL_CALC_FOUND_ROWS
FlightDate, UniqueCarrier as carrier,
FlightNum,
Origin,
Dest
FROM ontime
WHERE
DestState not in ('AK', 'HI', 'PR', 'VI')
and OriginState not in ('AK', 'HI', 'PR', 'VI')
and flightdate > '2015-01-01'
and ArrDelay < 15
and cancelled = 0 and Diverted = 0
and DivAirportLandings = '0'
ORDER by DepDelay DESC
LIMIT 10;
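To compare response times, each query can simply be timed with the mysql command-line client against both servers. Here is a minimal sketch, assuming MySQL is listening on port 3306, TiDB on port 4000, and the data is loaded into a database named ontime on both:
#!/bin/bash
# Minimal timing sketch (ports, user and database name are assumptions):
# run the same query against MySQL (3306) and TiDB (4000) and time the client.
QUERY="select count(*), year from ontime group by year order by year;"

for port in 3306 4000; do
    echo "=== port $port ==="
    time mysql -h 127.0.0.1 -P "$port" -u root ontime -e "$QUERY" > /dev/null
done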
I used five ec2 instances:
- t2.medium: 2 CPU cores
- x1e.xlarge: 4 CPU cores
- r4.4xlarge: 16 CPU cores
- m4.16xlarge: 64 CPU cores
- m5.24xlarge: 96 CPU cores
The following graph represents the results (bars represent the query response time, the smaller the better):

As we can see, TiDB scales very well as the number of CPU cores increases, going from the lower-end to the higher-end instances. t2.medium and x1e.xlarge are interesting here, though:
- t2.medium has 2 CPU cores and not enough RAM (2GB) to store the database in memory. Both MySQL/InnoDB and TiDB/TiKV perform a lot of disk reads – this is a disk-bound workload.
- x1e.xlarge is an example of the opposite instance type: 4 CPU cores and 122GB of RAM. I’m using a memory-bound workload here (where both MySQL and TiDB data is cached).
All other instances have enough RAM to cache the database in memory, and with more CPU cores TiDB can take advantage of query parallelism and provide better response times.
Sysbench test
Select test
I used point select (meaning selecting one row by primary key, with threads ranging from 1 to 128) with sysbench on an m4.16xlarge instance (memory bound: no disk reads). The results are below. The bars represent the number of transactions per second, the higher the better:
This workload actually gives a great advantage to MySQL/InnoDB as it retrieves a single row based on the primary key. MySQL is significantly faster here: 5x to 10x faster. Unlike the previous workload – one single slow query – for “point select” queries MySQL scales much better than TiDB with more CPU cores.
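A sysbench 1.0 invocation along the following lines can be used to drive this workload; the table count, table size and run time shown here are illustrative rather than the exact values used, and the port is 4000 for TiDB (3306 for MySQL):
# prepare the test tables (sizes here are illustrative)
sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-port=4000 \
    --mysql-user=root --mysql-db=sbtest \
    --tables=10 --table-size=1000000 prepare

# run point selects, sweeping the number of threads
for t in 1 2 4 8 16 32 64 128; do
    sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-port=4000 \
        --mysql-user=root --mysql-db=sbtest \
        --tables=10 --table-size=1000000 \
        --threads=$t --time=60 run
done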
Write only test
I used a write-only sysbench workload as well, with threads ranging from 1 to 128. The instance has enough memory to cache the full dataset. Here are the results:
Here we can see that TiDB is also significantly slower than MySQL (for an in-memory workload).
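The write-only runs use the same sysbench options as above; only the workload script changes (again a sketch with illustrative values):
# same connection options as above; oltp_write_only mixes updates, deletes and inserts
for t in 1 2 4 8 16 32 64 128; do
    sysbench oltp_write_only --mysql-host=127.0.0.1 --mysql-port=4000 \
        --mysql-user=root --mysql-db=sbtest \
        --tables=10 --table-size=1000000 \
        --threads=$t --time=60 run
done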
Limitations of the write only test
Running TiDB as a single server is not a recommended (or documented) configuration, so some optimizations for this case may be missing. To create a production-level test, we would need to compare TiDB to MySQL with the binlog enabled plus some sort of synchronous, virtually synchronous or semi-sync replication (e.g. Percona XtraDB Cluster, group replication or semi-sync replication). Both of these changes are known to decrease MySQL’s write throughput considerably, although some tuning can reduce their effects.
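For reference, enabling the binary log with durable settings and loading the semi-sync plugin looks roughly like this on the MySQL side (a sketch only; semi-sync also requires at least one replica to be attached before it affects commit latency):
# Sketch: binlog + durable flush settings go into my.cnf, followed by a restart
cat >> /etc/my.cnf <<'EOF'
[mysqld]
log_bin = mysql-bin
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
EOF
systemctl restart mysqld

# load and enable the semi-sync plugin on the source (MySQL 5.7 names)
mysql -u root -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'"
mysql -u root -e "SET GLOBAL rpl_semi_sync_master_enabled = 1"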
Some of the performance characteristics here are also derived from TiDB using RocksDB. The performance of InnoDB should be higher for an in-memory insert, with an LSM tree performing better for data sets that no longer fit in memory.
Conclusion
TiDB scales very well for OLAP / analytical queries (typically complex queries that cannot take advantage of indexes) – this is the area where MySQL performs much worse, as it does not take advantage of multiple CPU cores. At the same time, there is always a price to pay: TiDB has worse “efficiency” for fast queries (i.e. selects by primary key) and for writes. TiDB can scale across multiple servers (nodes); however, to achieve the same level of write efficiency as MySQL, we would have to set up tens of nodes. In my opinion, TiDB can be a great fit for an analytical workload when you need almost full compatibility with MySQL: syntax compatibility, inserts/updates, etc.