ClickHouse: One Year!

In this blog, we’ll look at ClickHouse on its one-year anniversary.

It’s been a year already since the Yandex team released ClickHouse as open source software. I’ve had an interest in this project from the very start, as I didn’t think there was an open source analytical database that could compete with industry leaders such as Vertica.

This was an exciting year for ClickHouse early adopters. Let’s look at what it accomplished so far.

ClickHouse initially generated interest due to the Yandex name – the most popular search engine in Russia. It wasn’t long before jaw-dropping responses popped up: guys, this thing is crazy fast! Many early adopters who tried ClickHouse were really impressed.

Fast doesn’t mean convenient, though. That was the main community concern about ClickHouse over the past months. Developed as an internal project for an internal customer (Yandex.Metrica), ClickHouse had a lot of limitations for general community use. It took several months before Yandex could restructure the team and mindset, establish proper communication with the community and start addressing external needs. There are still a lot of things that need to be done. The public roadmap is not easily available, for example, but the wheels are pointed in the right direction. The ClickHouse team has added a lot of the features people were screaming for, and more are in ClickHouse’s future plans.

The Yandex team is actively presenting at international conferences, and they speak even more frequently in Russia (no big surprise).

We were very excited by Yandex’s ClickHouse performance claims at Percona, and could not resist making our own benchmarks.

ClickHouse did very well in these benchmarks. There are many other benchmarks by other groups as well, including a benchmark against Amazon Redshift by Altinity.

The first ClickHouse production deployments outside of Yandex started in October-November 2016. Today, Yandex reports that dozens of companies around the world are using ClickHouse in production, with the biggest installations operating with up to several petabytes of data. Hundreds of other enterprises are deploying pilot installations or actively evaluating the software.

There are also interesting reports from Cloudflare (How Cloudflare analyzes 1M DNS queries per second) and from Carto (Geospatial processing with ClickHouse).

There are also various community projects around ClickHouse worth mentioning.

Percona is also working to adapt ClickHouse to our projects. We are using ClickHouse to handle Query Analytics and as long-term metrics storage inside a new version (under development) of Percona Monitoring and Management.

I will also be speaking about ClickHouse at BIG DATA DAY LA 2017 on August 5th, 2017. You are welcome to attend if you are in Los Angeles that day!

ClickHouse has the potential to become one of the leading open source analytical DBMSs – much like MySQL and PostgreSQL are leaders for OLTP workloads. We will see in the next couple of years whether it happens or not. Congratulations to the Yandex team on their one-year milestone!


MySQL webinar: ‘Introduction to open source column stores’

Join me Wednesday, September 18, at 10 a.m. PDT for an hour-long webinar where I will introduce the basic concepts behind column store technology. The webinar’s title is: “Introduction to open source column stores.”

What will be discussed?

This webinar will cover Infobright, LucidDB, MonetDB, Hadoop (Impala) and other column stores:

  • I will compare features between major column stores (both open and closed source).
  • Some benchmarks will be used to demonstrate the basic performance characteristics of the open source column stores.
  • There will be a question and answer session to ask me anything you like about column stores (you can also ask in the comments section below if you prefer to submit them in advance).

Who should attend?

  • Anyone interested in analytics or OLAP
  • Those new to column store technology who want to find out how they will benefit from using a column store
  • Database administrators or users that have to query large amounts of data quickly, especially in aggregate


A randomly selected participant in the webinar will be chosen to receive a free Percona t-shirt.  You must be present at the end of the webinar during the question and answer section to be eligible for the shirt.

You can also win a Percona t-shirt by answering one or more of the following questions (the first to answer correctly wins; one winner only, sorry):

  • What is one advantage of columnar storage?
  • What is one big drawback of columnar storage?
  • Describe (or link to) an application or project that uses columnar storage

Register here.

The post MySQL webinar: ‘Introduction to open source column stores’ appeared first on MySQL Performance Blog.


Using Flexviews – part one, introduction to materialized views

If you know me, then you have probably heard of Flexviews; if not, it may be new to you. I’m giving a talk on it at the MySQL Conference & Expo 2011, and I figured I should blog about it before then. For those unfamiliar, Flexviews enables you to create and maintain incrementally refreshable materialized views.

You might be asking yourself “what is an incrementally refreshable materialized view?”. If so, then keep reading. This is the first in a multi-part series describing Flexviews.

You can find part 2 of the series here: http://www.mysqlperformanceblog.com/2011/03/25/using-flexviews-part-two-change-data-capture/

The output of any SQL query is logically an in-memory table
The output of a SQL query, and what we normally think of as a result set, is really a virtual table. It has columns and rows, just like a database table, but it is temporary in nature, usually existing only in memory for a short time. This concept extends from relational algebra, upon which SQL is built. All SQL can be broken down into relational algebra, which is convenient because that means there are lots of transformations that can be done to it, all without changing the meaning or the output. The output of every relational algebraic expression is a table, so conceptually the output of a SQL statement is a table too.

MySQL even includes a SQL statement that makes this perfectly clear: CREATE TABLE .. AS SELECT (CTAS). The results of the SELECT portion of the statement are stored in a table. Storing the results of a SQL statement into a table (even a temporary table) is called materializing the results.

Views are similar to regular SQL statements.
A view is a SQL statement which acts like a table. When you execute a query on a view, the result set is ephemeral, generated at run-time for consumption and then immediately discarded.
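As a sketch of the difference (using a hypothetical `sales` table, not one from this post), compare materializing results with CTAS against defining a view:

```sql
-- Materialize the result set into a real table (CTAS):
-- the rows are computed once and stored.
CREATE TABLE sales_by_product AS
SELECT product_id, SUM(sale_amt) AS total_amt
  FROM sales
 GROUP BY product_id;

-- Define a view: nothing is stored; the SELECT below is
-- re-executed every time the view is queried.
CREATE VIEW v_sales_by_product AS
SELECT product_id, SUM(sale_amt) AS total_amt
  FROM sales
 GROUP BY product_id;
```

The table holds a snapshot as of creation time; the view is always current but pays the computation cost on every access.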

Views are not generally considered to be a performance optimization because:

  • The contents of the result set are computed each time the view is accessed.
  • If multiple statements access the same view repeatedly or concurrently, then this computation is likely to be expensive.
  • If the view accesses large quantities of data, then the computation is likely expensive.
  • Views containing grouping, aggregation, sorting, distinct or other conditions must be fully computed and stored in a temporary table before they can be accessed, which is very expensive.

What is a materialized view (MV)?
A materialized view is similar to a regular view, in that it represents the result set of a query, but the contents are stored (materialized!) as a real table. So an MV is similar to a table created with the CTAS command described above. This similarity is fairly superficial, though. While both store results in a table, the MV represents the results of a SQL statement at a specific point in time, essentially a consistent snapshot of the query result. It is not possible to create multiple different tables via CTAS and have them all be transactionally consistent with one another, unless you stop all database write activity.

Materialized views can be used to enhance performance by acting as a cache. Further, the cost of a cache miss is lower because incremental refresh is faster than recomputing the contents from scratch.

  • The contents of the result set are updated periodically, not each time the view is accessed.
  • If multiple statements access the same view repeatedly or concurrently it is not likely to be very expensive.
  • If the view is large, accessing the table will be considerably cheaper
  • You can add indexes to the MV.
  • Since the data is already joined together and pre-aggregated, CPU and memory usage may be reduced compared to computing the results.
  • You can often generate multiple different blocks of content from the summarized data in one or more views

Materialized views must be refreshed.
Refreshing an MV brings it up to date to reflect the changes in the database since the view was either first created or last refreshed, whichever is more recent. More importantly, an MV can be refreshed to a specific point in time, not just “now”. This means that you can maintain multiple MVs and keep them synced to the same point in time. There are two different methods by which an MV can be refreshed.

The first is the complete refresh method, which rebuilds the entire contents of the view from scratch. This is the less desirable method:

  • During a complete refresh, the view contents are completely recalculated, which could be very expensive
  • On some databases, the contents of the view are not available during complete refresh (not true of Flexviews)
  • During refresh the view may occupy twice as much space (similar to ALTER TABLE)
  • Supports all SQL syntax (like outer join) but can’t be refreshed to a specific point in time.
  • Performs no better than CTAS, but gives a convenient method of creating and refreshing the materialized results

The second refresh method is incremental refresh. This method updates the view in place, usually examining only the rows which have changed since the view was last refreshed.

Incremental refresh has obvious benefits which include:

  • Refreshing large views is orders of magnitude faster than complete refresh.
  • When updating the view, only a subset of the database rows must be examined
  • The rows examined are related to the rows which have changed since the last refresh
  • The view can be refreshed forward to a specific transactional point in time
  • Multiple views can be rolled forward to the exact same consistent point in time, with respect to committed transactions
  • The processing can be done on a slave dedicated to such work.

And some drawbacks:

  • Not all SQL syntax supported (no outer join), no non-deterministic functions, etc.
  • There is overhead for change-data capture.
  • Some extra storage is used for the changes and deltas.
  • Creating an MV is not as simple as I’d like.
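
To make the idea concrete, here is a rough sketch of how an incremental refresh can apply captured changes to a summary table. This is not Flexviews’ actual implementation; the `mv_sales_by_product` and `sales_deltas` tables are hypothetical, with the delta table holding one row per changed group:

```sql
-- Apply net changes (deltas) captured since the last refresh.
-- Only the changed groups are touched, not the whole summary table.
INSERT INTO mv_sales_by_product (product_id, cnt, total_amt)
SELECT product_id, cnt_delta, amt_delta
  FROM sales_deltas
ON DUPLICATE KEY UPDATE
  cnt       = cnt       + VALUES(cnt),
  total_amt = total_amt + VALUES(total_amt);
```

The work is proportional to the number of changed groups rather than the size of the underlying data, which is where the orders-of-magnitude speedup over a complete refresh comes from.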

Incremental refresh capabilities imply that Flexviews has some way to capture the changes that happen in the database and then apply those changes to the views.

These capabilities break down into three main categories which will be blogged about in subsequent posts:

  • Change Data Capture – How Flexviews figures out what changed in the database
  • Delta computation – How it uses those changes to compute the differences between the old results and the new results
  • Delta application – How Flexviews uses the deltas to actually make the changes

Finally, the last post will describe how to use Flexviews for online schema change or as an ELT* tool, and it will cover how to create the materialized views which Flexviews manages.

*ELT – extract, load and transform, a variant of the data integration Extract-Transform-Load process (ETL).



Shard-Query adds parallelism to queries

Preamble: On performance, workload and scalability:
MySQL has always been focused on OLTP workloads. In fact, both Percona Server and MySQL 5.5.7rc have numerous performance improvements which benefit workloads that have high concurrency. Typical OLTP workloads feature numerous clients (perhaps hundreds or thousands) each reading and writing small chunks of data. The recent improvements to MySQL make it scale better for this workload when more resources (such as additional CPUs) are added. By scaling better, I mean that it is able to take advantage of all available resources in order to handle the workload. One reason that it works best for this workload is that a single query executes in a single thread. With respect to a single query, MySQL never takes advantage of more than a single CPU when aggregating data and fetching rows from the buffer pool, but it can run many queries at once.

There are workloads other than OLTP and the recent optimizations to MySQL still leave a lot of low hanging fruit where these are concerned. This is particularly true for OLAP workloads. While I’m not going to diverge into a discussion of how OLAP varies from OLTP, it suffices to say that a typical OLAP workload features a low number of concurrent queries which each examine large amounts of data. Since a single query is single threaded in MySQL, the new optimizations don’t really help with this workload.

The following tests assume a workload consisting of a small number of concurrent queries (or only one) to demonstrate how much improvement could be made to MySQL so that it could better utilize all available resources (that is, scale better) when running small numbers of queries which examine a lot of data.

What is Shard-Query?
Shard-Query was initially conceived as a utility to add parallelism to horizontally partitioned data sets by running queries against each host in parallel, with the added feature of supporting aggregation. Then I hit upon the idea of taking SQL constructs like IN and BETWEEN and making these queries execute in parallel on each host. If you have a sharded data set, then this gives you the opportunity for additional parallelism for each query. If you have only a single server, but it has enough resources to answer queries in parallel, then it can be used to add parallelism to queries which use IN or BETWEEN clauses. This added parallelism can have significant performance advantages, as demonstrated in this blog post.
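For illustration, here is a sketch of the kind of rewrite Shard-Query performs for a BETWEEN predicate. The exact split is up to the tool; these subrange queries are hypothetical:

```sql
-- Original query:
SELECT count(*) FROM ontime
 WHERE FlightDate BETWEEN '2006-01-01' AND '2008-12-31';

-- Rewritten as independent subrange queries that workers run in parallel:
SELECT count(*) FROM ontime
 WHERE FlightDate BETWEEN '2006-01-01' AND '2006-12-31';
SELECT count(*) FROM ontime
 WHERE FlightDate BETWEEN '2007-01-01' AND '2007-12-31';
SELECT count(*) FROM ontime
 WHERE FlightDate BETWEEN '2008-01-01' AND '2008-12-31';

-- The per-range counts are then summed on the client side
-- to produce the final result.
```

Because count(*) (like SUM) is distributive, combining the partial results is a cheap final aggregation step.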

Many database servers can add this parallelism natively, but most of those are not open source. In the future, Shard-Query can be extended to other database servers such as PostgreSQL or Firebird fairly easily.

What machine did I use?

I used MySQL 5.5.7rc on a powerful Cisco UCS server with 12 real cores and 384GB of ram. The amount of ram is significantly larger than my already hefty 55GB testing data set, so this means that if MySQL could fully utilize all cores for my workload, then this test would be CPU bound.

What data did I use?
I loaded 55GB of the same data used in this earlier post into a partitioned InnoDB table.

Partitioning example:


/*!50500 PARTITION BY RANGE COLUMNS(FlightDate)
(PARTITION p2 VALUES LESS THAN ('1988-01-01') ENGINE = InnoDB,
PARTITION p3 VALUES LESS THAN ('1988-02-01') ENGINE = InnoDB,
...
PARTITION p253 VALUES LESS THAN ('2008-12-01') ENGINE = InnoDB,

What queries did I use?
I used a version of the queries in that same blog post. The original queries tend to filter on the Year column. I partitioned the table into months using the FlightDate column using the improved MySQL 5.5 partitioning options which work directly on columns without the need to use TO_DAYS. To accommodate my partitioning schema I modified the queries to use the FlightDate column instead of the Year column. See the “full disclosure” section at the end for the complete SQL.

These tests were done using ‘run_query.php’, which is the example application which comes with Shard-Query. As the name implies, it takes a list of queries in a file (or stdin) and a config file. It runs the SQL via ShardQuery and it prints the results.

Test #1
This set of queries tests the most basic aggregation (count(*)) on a range of records. This table is partitioned by month which means that MySQL can use partition pruning to reduce the amount of data which must be examined. With this in mind, I modified Vadim’s queries to use the FlightDate column in the WHERE clause instead of Year.

Each iteration reads an additional year of data. That is, the first query reads one year, and the last query reads 21 years of collected flight data.

For example (the final query):


SELECT count(*)
  FROM ontime.ontime
 WHERE FlightDate BETWEEN '1988-01-01' AND '2008-12-31';

Graph shows Shard-Query is more scalable than regular MySQL

(The original post includes a response-time table comparing No Shard-Query against 16, 24 and 32 workers for ranges of 1 through 21 years of data; the individual timings did not survive extraction.)
The reason that Shard-Query performs better is that it turns the OLAP query into something more like OLTP. Instead of getting one big chunk of data in one query, it runs many smaller queries each requesting significantly less data. On the other hand, MySQL 5.5.7 does not do this on its own. This is the low hanging fruit I was talking about. Even though the data is partitioned, MySQL will examine each partition serially. In the end, this means that things get slower as the query has to examine larger volumes of data.

Regarding the performance of Shard-Query, this machine has 12 real cores and 12 virtual cores, so we don’t see any advantage after increasing the number of workers past 24. The query becomes CPU bound at that point. If I needed more performance I could divide the data between two or more shards, or if possible, I could add more CPU cores. Regardless, even with a single server Shard-Query will perform much better than regular MySQL as the volume of data grows. Remember that this workload fits entirely in the buffer pool so adding CPUs will help only until we run out of memory bandwidth.

Test #2
My second test involved the next four queries on Vadim’s list. The purpose of this test is to demonstrate that Shard-Query works with GROUP BY and other constructs.


Q1 – SELECT count(*) FROM ontime.ontime WHERE FlightDate BETWEEN '1988-01-01' AND '1988-12-31';
Q2 – SELECT DayOfWeek, count(*) AS c FROM ontime WHERE FlightDate BETWEEN '2000-01-01' AND '2008-12-31' GROUP BY DayOfWeek ORDER BY c DESC;
Q3 – SELECT Origin, count(*) AS c FROM ontime WHERE DepDelay>10 AND FlightDate BETWEEN '2000-01-01' AND '2008-12-31' GROUP BY Origin ORDER BY c DESC LIMIT 10;
Q4 – SELECT carrier, count(*) FROM ontime WHERE DepDelay>10 AND FlightDate BETWEEN '2007-01-01' AND '2007-12-31' GROUP BY carrier ORDER BY 2 DESC;

Comparing the performance of four queries at 16 workers

(The original post includes a table of response times for Q1–Q4 with and without Shard-Query at 16 workers; the individual timings did not survive extraction.)
As you can see, each of the queries runs significantly faster than just running the SQL via MySQL.

The remainder of Vadim’s queries use subqueries in the FROM clause, which Shard-Query does not yet add parallelism to. I plan to add support for those queries though, and I’ll post a follow-up when I do.
-- Full Disclosure --
The complete SQL and configuration used in these tests is available as a text file.

Entry posted by Justin Swanhart



Intro to OLAP

This is the first of a series of posts about business intelligence tools, particularly OLAP (or online analytical processing) tools using MySQL and other free open source software. OLAP tools are a part of the larger topic of business intelligence, a topic that has not had a lot of coverage on MPB. Because of this, I am going to start out talking about these topics in general, rather than getting right to gritty details of their performance.

I plan on covering the following topics:

  1. Introduction to OLAP and business intelligence. (this post)
  2. Identifying the differences between a data warehouse, and a data mart.
  3. Introduction to MDX queries and the kind of SQL which a ROLAP tool must generate to answer those queries.
  4. Performance challenges with larger databases, and some ways to help performance using aggregation.
  5. Using materialized views to automate that aggregation process.
  6. Comparing the performance of OLAP with and without aggregation over multiple MySQL storage engines at various data scales.

What is BI?
Chances are that you have heard the term business intelligence. Business intelligence (or BI) is a term which encompasses many different tools and methods for analyzing data, usually presenting it in a way that is easily consumed by upper management. This analysis is often used to determine how effective the business has been at meeting certain performance goals, and to forecast how it will do in the future. To put it another way, the tools are designed to provide insight into the business process, hence the name. Probably the most popular BI activity for web sites is click analysis.

As far as BI is concerned, this series of posts focuses on OLAP analysis and in a lesser sense, on data warehousing. Data warehouses often provide the information upon which OLAP analysis is performed, but more on this in post #2.

OLAP? What is that?
OLAP is an acronym which stands for online analytical processing. OLAP analysis, which is really just another name for multidimensional analysis, consists of displaying summary aggregations of the data broken down into different groups. A typical OLAP analysis might show “sale total, by year, by sales rep, by product category”. OLAP analysis is usually used for reporting on current data, looking at historical trends and trying to make predictions about future trends.

Multidimensional Analysis
Multidimensional analysis is a form of statistical analysis. In multidimensional analysis samples representing a particular measure are compared or broken down into different dimensions. For example, in a sales analysis, the “sale amount” is a measure. Measures are always aggregated values. That is, total sales might be expressed as SUM(sale_amt). This is because the SUM of the individual sales will be grouped along different dimensions, such as by year or by product. I’m getting a little ahead of myself. Before we talk about measures and dimensions, we should talk about the two ways in which this information can be stored.
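As a quick illustration of a measure aggregated along dimensions (table and column names are hypothetical), “total sales by year, by product category” is just:

```sql
SELECT sale_year, product_category, SUM(sale_amt) AS total_sales
  FROM sales
 GROUP BY sale_year, product_category;
```

Here SUM(sale_amt) is the measure, and sale_year and product_category are the dimensions it is broken down by.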

There are two main ways to store multidimensional data for OLAP analysis
OLAP servers typically come in two basic flavors. Some servers have specialized data stores which hold data in a form that is highly effective for multidimensional analysis. These servers are termed MOLAP, and they tend to have exceptional performance due to their specialized data store. Almost all MOLAP solutions pre-compute many (or even all) of the possible answers to multidimensional queries. Palo is an example of an open source version of this technology; Essbase is an example of a closed source product. MOLAP servers often feature extensive compression of data, which can improve performance. Loading data into a MOLAP server usually takes a very long time because many of the answers in the cube must be calculated. The extra time spent during the load is usually called “processing” time.

A relational OLAP (or ROLAP) server uses data stored in an RDBMS. These systems trade the performance of a multidimensional store for the convenience of an RDBMS. These servers almost always query a database which is structured as a star or snowflake type schema. To go back to the sales analysis example above, in a star schema the facts about the sales would be stored in the fact table, and the list of customers and products would be stored in separate dimension tables. Some ROLAP servers support the aggregation of data into additional tables, and can use those tables automatically. These servers can approach the performance of MOLAP with the convenience of ROLAP, but there are still challenges with this approach. The biggest challenges are the amount of time it takes to keep the tables updated and the complexity of the many scripts or jobs which might be necessary to keep the tables in sync. Part five of my series will introduce materialized views, which attempt to address these challenges in a manageable way.

What makes ROLAP so great?
An OLAP server usually returns information to the user as a “pivot table” or “pivot report”. While you could create such a report in a spreadsheet, the ROLAP tool is designed to deal with millions or even billions of rows of data, much more than a spreadsheet can usually handle. MOLAP servers usually require that all, or almost all, of the data fit in memory. Another difference is the ease with which this analysis is constructed. You don’t necessarily have to write queries or drag and drop a report together in order to analyze multidimensional data using an OLAP tool.

Data before pivoting:
Example image from Wikimedia commons showing detail data for sales

Data summarized in pivot form:
Wikimedia commons image showing data summarized in pivot format
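A pivot like the one above can be approximated in plain SQL with conditional aggregation; the table and column names here are hypothetical, matching the sales example:

```sql
-- One output row per region, one column per gender value.
SELECT region,
       SUM(CASE WHEN gender = 'M' THEN units ELSE 0 END) AS male_units,
       SUM(CASE WHEN gender = 'F' THEN units ELSE 0 END) AS female_units
  FROM sales
 GROUP BY region;
```

An OLAP tool generates queries of this general shape for you, so the user can pivot and drill without writing SQL.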

ROLAP tools use star schema
As I said before, a sale amount would be considered a measure, and it would usually be aggregated with SUM. The other information about the sale, such as the product, when it was sold and to whom it was sold would be defined in dimension tables. The fact table contains columns which are joined to the dimension tables, such as product_id and customer_id. These are often defined as foreign keys from the fact table to the dimension tables.
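A typical query against such a star schema joins the fact table to its dimension tables and aggregates the measure. The schema names below are hypothetical:

```sql
SELECT p.category, c.region, SUM(f.sale_amt) AS total_sales
  FROM fact_sales f
  JOIN dim_product  p ON p.product_id  = f.product_id
  JOIN dim_customer c ON c.customer_id = f.customer_id
 GROUP BY p.category, c.region;
```

This is the kind of SQL a ROLAP server generates under the covers to answer a multidimensional query.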

A note about degenerate dimensions:
Any values in the fact table that don’t join to dimensions are considered either degenerate dimensions or measures. In the example below, the status of the order is a degenerate dimension. A degenerate dimension is stored as an ENUM in many cases; in the example below there is no actual dimension table holding the two different order statuses. Such a dimension table would add an extra join, which is expensive. Any yes/no field and/or field with very low cardinality (such as gender or order status) will probably be stored in the fact table instead of in a dedicated dimension. In the “pivot data” example above, all the dimensions are degenerate: gender, region, style, date.

Star schema with degenerate dimension

Example star schema about sales.

Often a dimension will include redundant information to make reporting easier, a process called “denormalization”. Hierarchical information may be stored in a single dimension. For example, a dimension for products may include both the category AND a sub-category. A time dimension includes year, month and quarter. You can create multiple different hierarchies from a single dimension. This allows ‘drill down’ into the dimension. By default the data would be summarized by year, but you can drill down to quarter or month level aggregation.
Sample date hierarchy, showing quarter, month, year and day hierarchies.
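Drilling down a hierarchy simply means grouping at a finer grain; with a denormalized date dimension it is just a change in the GROUP BY (names hypothetical):

```sql
-- Summarized by year:
SELECT d.cal_year, SUM(f.sale_amt) AS total
  FROM fact_sales f
  JOIN dim_date d ON d.date_id = f.date_id
 GROUP BY d.cal_year;

-- Drilled down to quarter within year:
SELECT d.cal_year, d.cal_quarter, SUM(f.sale_amt) AS total
  FROM fact_sales f
  JOIN dim_date d ON d.date_id = f.date_id
 GROUP BY d.cal_year, d.cal_quarter;
```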

The screenshots in the jPivot (an OLAP cube browser) documentation can give you a better idea of how the data is displayed. The examples break down sales by product, by category and by region.

The information is presented in such a fashion that it can be “drilled into” and “filtered on” to provide an easy to use interface to the underlying data. Graphical display of the data as pie, line or bar charts is possible.

Focusing on ROLAP.
This is the MySQL Performance Blog, and as such an in-depth discussion of MOLAP technology is not particularly warranted here. Our discussion will focus on Mondrian, an open source ROLAP server featuring an in-memory OLAP cache. Mondrian is part of the Pentaho open source business intelligence suite, and is also used by other projects such as Wabit and Jaspersoft. If you are using open source BI then you are probably already using Mondrian. Closed source ROLAP servers include MicroStrategy, Microsoft Analysis Services and Oracle BI.

Mondrian speaks MDX, olap4j and XML for Analysis. This means that there is a very good chance that your existing BI tools (if you have them) will work with it. MDX is a query language that looks similar to SQL but is actually very different. Olap4j is an OLAP interface for Java applications. XML for Analysis (XMLA) is an industry standard analytical interface originally created by Microsoft, SAS and Hyperion.

What’s next?
Next we’ll talk about the difference between data marts and data warehouses. The former are usually used for OLAP analysis, but they are closely related to warehouses.

Entry posted by Justin Swanhart



New OLAP Wikistat benchmark: Introduction and call for feedback

My posts on the Ontime air traffic and Star Schema benchmarks generated a lot of interest.

However, those benchmarks by themselves did not cover all the cases I wanted, so I was thinking about a better scenario. The biggest problem is getting a big enough real dataset. Thanks to Bradley C. Kuszmaul, who pointed me to the statistics on access to Wikipedia pages, and to Domas, who made the stats accessible. Link to the archives: http://dammit.lt/wikistats/archive/ or the original announcement from Domas.

Although the table does not contain much varied information, I think it is good enough to represent cases you can face in a web application (log processing, page visits, clickstream, etc.).

I made some effort to normalize the data into a classic star schema and prepared queries that can be run against the proposed dataset (John Sichi, lead of LucidDB, helped me draft some queries).
You can see the details on our Percona Wikistat benchmark Wiki.

I have the following goals for the proposed benchmark:

  • Compare engines on OLAP queries used for planning, predicting growth, analyzing access patterns to wiki pages and drawing trends.
  • Compare engines on statistical queries for end users, which can be executed in real time, e.g. how many times a given page was accessed yesterday vs. today.
  • Understand the specific features and characteristics of each engine.
  • Compare throughput on simple queries (queries and scenario still to be drafted).
  • Check the ability to load data and serve queries at the same time, i.e. availability during data load (queries and scenario still to be drafted).

So the proposed schema has four tables:
pagestat (the fact table), and pages, datesinfo, projects (dimension tables).

The dimension tables are static and do not change; we can vary the data size
by changing the number of months loaded into the fact table (this is the scale factor).

EER diagram

( made with MySQL Workbench )

In current dataset, which you can download from Amazon snapshot (name: “percona-wikistat”, ID:snap-a5f9bacc) we have:

  • Table pages: 724,550,811 rows, data size: 40476M
  • Table datesinfo: 9624 rows, one entry represents 1 hour
  • Table projects: 2025 rows
  • Table pagestat:
    Data for 2009-06: 3,453,013,109 rows / size 68352M
    Data for 2009-07: 3,442,375,618 rows / size 68152M

So with two months of stats we have about 172GB of data with about 7 billion rows in fact table.

Example query (again, the full list is on the Benchmark Wiki):


SELECT project, sum(page_count) sm
  FROM pagestat
    JOIN datesinfo di ON (di.id=date_id)
    JOIN projects p ON (p.id=project_id)
 WHERE di.calmonth=7 AND di.calyear=2009
 GROUP BY project
 ORDER BY sm DESC
 LIMIT 20;

I am going to load data and run queries against available engines:

  • MySQL MyISAM / InnoDB (to have reference results)
  • InfoBright
  • InfiniDB
  • MonetDB
  • LucidDB
  • Greenplum

and I will report my results (so stay tuned to MySQLPerformanceBlog ;))

I’d also like to test the ParAccel, Vertica and Kickfire systems, but I do not have access to them.

I welcome your feedback on the benchmark, and what else you would like to see here.

Entry posted by Vadim


Star Schema Benchmark: InfoBright, InfiniDB and LucidDB

In my previous rounds with data-warehouse-oriented engines I used a single table without joins, and with a small (for DW) data size (see http://www.mysqlperformanceblog.com/2009/10/02/analyzing-air-traffic-performance-with-infobright-and-monetdb/, http://www.mysqlperformanceblog.com/2009/10/26/air-traffic-queries-in-luciddb/, http://www.mysqlperformanceblog.com/2009/11/02/air-traffic-queries-in-infinidb-early-alpha/). To address these issues, I took the Star Schema Benchmark, which is a TPC-H modification, and tried to run its queries against InfoBright, InfiniDB, LucidDB and MonetDB. I did not get results for MonetDB; I will explain why later. Again, the primary goal of the test was not just to get numbers, but to understand the specifics of each engine and its ability to handle this amount of data and execute the queries.

All the details I have are available on our Wiki at http://www.percona.com/docs/wiki/benchmark:ssb:start, and you can get the benchmark specification at http://www.percona.com/docs/wiki/_media/benchmark:ssb:starschemab.pdf.

I generated data with scale factor = 1000, which gave me 610GB of data in raw format, and loaded it into each engine.

Here the differences between the engines come into play. While InfoBright and InfiniDB do not need indexes at all (you actually can’t create indexes there), indexes are needed for LucidDB and MonetDB, and they change load time and post-load data size significantly. The numbers I put in the results do not include indexing time, but it should also be taken into account.

And indexes are exactly the reason why I could not get results for MonetDB: there I faced an issue I was not prepared for. MonetDB requires that an index fit into memory during the indexing procedure, and for 610GB of data the index may reach 120GB, and I did not have that amount of memory (the box has only 16GB of RAM). MonetDB experts recommended I extend the swap partition to 128GB, but my partition layout was not prepared for that; I just did not expect to need a big swap partition.

So, load time.
InfiniDB can really utilize all available cores/CPUs in the system (I ran the benchmark on an 8-core box), which allowed it to load data faster than the other engines. LucidDB and MonetDB
also have multi-threaded loaders; only InfoBright ICE used a single core.

InfiniDB: 24,010 sec
MonetDB: 42,608 sec (without indexes)
InfoBright: 51,779 sec
LucidDB: 140,736 sec (without indexes)

I should note that the time to create indexes in LucidDB was also significant and exceeded the loading time. The full report on indexes is available at http://www.percona.com/docs/wiki/benchmark:ssb:luciddb:start

Data size

Size after load is also an interesting factor. InfoBright is traditionally good with compression,
though the compression rate is lower than in the case of the AirTraffic table. I was told this is because
the lineorder table does not come in sorted order, which one would expect in real life. I actually
heard the same complaint from InfiniDB experts – if the lineorder data is put in sorted order, loading
time can decrease significantly.

Datasize after load:
InfoBright: 112G
LucidDB: 120GB (without indexes)
InfiniDB: 626GB
MonetDB: 650GB (without indexes)

Queries time

Now, on to query times.
You can find the full results on the page http://www.percona.com/docs/wiki/benchmark:ssb:start,
and the graph is below. A couple of comments from me.

InfoBright was fully bound to one CPU during all queries. I think the engine's
ability to use only a single CPU/core is becoming a significant limitation
for them. For query 3.1 I got a surprising result: after 36h of work I got an
error that the query could not be resolved by the InfoBright optimizer and
that I needed to enable the MySQL optimizer.

InfiniDB, in contrast, was I/O-bound, and processed data fully utilizing
sequential reads, reading data at 120MB/s. I think this is what allowed
InfiniDB to get the best time on most queries.

LucidDB at this stage can also utilize only a single thread, with results sometimes better,
sometimes worse than InfoBright’s.


Query InfoBright InfiniDB LucidDB
Q1.1 48 min 21.67 sec (2901.67 sec) 24 min 26.05 sec (1466.05 sec) 3503.792 sec
Q1.2 44 min 55.37 sec (2695.37 sec) 24 min 25.83 sec (1465.83 sec) 2889.903 sec
Q1.3 45 min 53.49 sec (2753.49 sec) 24 min 27.25 sec (1467.25 sec) 2763.464 sec
Q2.1 1 hour 54 min 27.74 sec (6867.74) 19 min 44.35 sec (1184.35 sec) 9694.534 sec
Q2.2 1 hour 13 min 33.15 sec (4413.15) 19 min 49.56 sec (1189.56 sec) 9399.965 sec
Q2.3 1 hour 8 min 23.41 sec (4103.41) 19 min 52.27 sec (1192.27 sec) 8875.349 sec
Q3.1 NA 19 min 11.23 sec (1151.23 sec) 16376.93 sec
Q3.2 3 hours 30 min 17.64 sec (12617.64 sec) 19 min 28.55 sec (1168.55 sec) 5560.977 sec
Q3.3 2 hours 58 min 18.87 sec (10698.87 sec) 19 min 58.29 sec (1198.29 sec) 2517.621 sec
Q3.4 1 hour 41 min 41.29 sec (6101.29 sec) 12 min 57.96 sec (777.96 sec) 686.202 sec
Q4.1 8 hours 53 min 52.55 sec (32032.55 sec) 32 min 57.49 sec (1977.49 sec ) 19843.213 sec
Q4.2 5 hours 38 min 7.60 sec / 5 hours 36 min 35.69 sec (20195.69 sec) 33 min 35.45 sec (2015.45 sec) 15292.648 sec
Q4.3 12 hours 58 min 4.27 sec (46684.27 sec) 33 min 47.32 sec (2027.32 sec) 7241.791 sec

Graph with results (time in sec, less time is better)


  • InfiniDB is doing just great, using the available CPU cores and the full I/O bandwidth when reading from disk. You can see more details on InfiniDB scalability on InfiniDB’s blog: http://infinidb.org/infinidb-blog/mysql-parallel-query-processing-of-ssb-queries-via-infinidb-.html
  • The SSB benchmark may not be a good fit for InfoBright; the synthetic nature of the benchmark
    does not allow InfoBright to show its best results. But I hope InfoBright will be able to use multiple cores / disks soon.
  • I’d like MonetDB to be able to use disk to build indexes, rather than relying only on available memory.
  • Taking into account the complaints about SSB, I am looking to get another, more realistic dataset and
    compare a bigger set of available DW solutions.

Air traffic queries in MyISAM and Tokutek (TokuDB)

This is the next post in the series:
Analyzing air traffic performance with InfoBright and MonetDB
Air traffic queries in LucidDB
Air traffic queries in InfiniDB: early alpha

Let me explain the reason for choosing these engines. After the initial three posts I was often asked: “What is the baseline? Can we compare the results with standard MySQL engines?” So here comes MyISAM, to serve as a base point showing how much better the column-oriented analytic engines do here.

However, take into account that for MyISAM we need to choose proper indexes to execute the queries effectively, and there is pain that comes with indexes: loading data gets slower, and designing proper indexes requires additional research, especially since the MySQL optimizer is not smart in picking the best one.

The really nice thing about MonetDB, InfoBright and InfiniDB is that they do not need indexes, so you need not worry about maintaining them and picking the best one. I am not sure about LucidDB; I was told indexes are needed, but creating a new index was really fast even on the full database, so I guess they are not B-Tree indexes. This reflection on indexes turned me in the direction of TokuDB.

What is so special about TokuDB? Two things: its indexes have a special structure and are “cheap” – by “cheap” I mean that the maintenance cost stays low and largely independent of data size, while with regular B-Tree indexes the cost grows sharply with data size once the index no longer fits in memory (Bradley Kuszmaul from Tokutek will correct me if I am wrong in this statement). Another point about TokuDB is that it uses compression, so I expect a smaller size of loaded data and fewer I/O operations during query execution.
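As a back-of-the-envelope illustration of why such indexes are “cheap” (my own simplified cost model, not Tokutek's actual numbers): a B-Tree insert pays roughly one random I/O per uncached level, about log_B(N), while a buffered (fractal-tree-style) index amortizes each insert over large block writes, roughly log2(N)/B:

```python
import math

# Illustrative per-insert I/O cost model only (assumed parameters).
def btree_ios_per_insert(n_rows, fanout=1024):
    # Roughly one random I/O per uncached B-Tree level.
    return math.log(n_rows, fanout)

def buffered_tree_ios_per_insert(n_rows, block=1024):
    # Inserts ride along in node buffers flushed a block at a time.
    return math.log2(n_rows) / block

n = 122_000_000  # illustrative table size
print(btree_ios_per_insert(n))          # a few random I/Os per insert
print(buffered_tree_ios_per_insert(n))  # orders of magnitude fewer
```

The gap between the two curves widens as N grows, which is why the B-Tree penalty shows up only once the data outgrows RAM.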

So, what indexes do we need for the queries? To recall the details: the schema is available in this post,
http://www.mysqlperformanceblog.com/2009/10/02/analyzing-air-traffic-performance-with-infobright-and-monetdb/, and
the queries are posted on the “Queries” sheet in my summary Spreadsheet.

With Bradley’s help we chose the following indexes:


KEY `Year` (`Year`,`Month`),
KEY `Year_2` (`Year`,`DayOfWeek`),
KEY `DayOfWeek` (`DayOfWeek`,`Year`,`DepDelay`),
KEY `DestCityName` (`DestCityName`,`OriginCityName`,`Year`),
KEY `Year_3` (`Year`,`DestCityName`,`OriginCityName`),
KEY `Year_4` (`Year`,`Carrier`,`DepDelay`),
KEY `Origin` (`Origin`,`Year`,`DepDelay`)
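To see how a composite index like (`Year`,`DayOfWeek`) serves these queries, here is a sketch using SQLite's EXPLAIN QUERY PLAN as a stand-in for MyISAM/TokuDB (the table definition is trimmed to the columns the queries touch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ontime (
    Year INT, Month INT, DayOfWeek INT, DepDelay INT,
    Carrier TEXT, Origin TEXT, OriginCityName TEXT, DestCityName TEXT)""")
# (Year, DayOfWeek) covers both the WHERE range on Year and the
# grouping column, so the engine never has to touch the base table.
con.execute("CREATE INDEX Year_2 ON ontime (Year, DayOfWeek)")

plan = con.execute("""EXPLAIN QUERY PLAN
    SELECT DayOfWeek, count(*) AS c FROM ontime
     WHERE Year BETWEEN 2000 AND 2008
     GROUP BY DayOfWeek ORDER BY c DESC""").fetchall()
for row in plan:
    print(row[3])  # e.g. "SEARCH ... USING COVERING INDEX Year_2 ..."
```

This is exactly the "additional research" mentioned above: each query gets an index whose leading columns match its WHERE predicate and whose remaining columns cover the GROUP BY.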

I measured load time for both MyISAM and TokuDB, loading into an empty table with the indexes already created.

Load time for MyISAM: 16608 sec
For TokuDB: 19131 sec

Datasize (including indexes)

MyISAM: 36.7GB
TokuDB: 6.7GB

I am a bit surprised that TokuDB is slower at loading data, but my guess is that this is related to compression, and I expect that with a bigger amount of data TokuDB will load faster than MyISAM.

Now to the queries. Bradley pointed out to me that query Q5:

SELECT t.carrier, c, c2, c*1000/c2 as c3
FROM (SELECT carrier, count(*) AS c FROM ontime
      WHERE DepDelay>10 AND Year=2007 GROUP BY carrier) t
JOIN (SELECT carrier, count(*) AS c2 FROM ontime
      WHERE Year=2007 GROUP BY carrier) t2
  ON (t.Carrier=t2.Carrier)
ORDER BY c3

can be rewritten as:

SELECT carrier, totalflights, ndelayed, ndelayed*1000/totalflights as c3
FROM (SELECT carrier, count(*) as totalflights,
             sum(if(depdelay>10,1,0)) as ndelayed
      FROM ontime WHERE year=2007 GROUP BY carrier) t
ORDER BY c3 DESC;

(I name it Query Q5i.)
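To sanity-check that the rewrite is equivalent, here is a sketch using Python's sqlite3 as a stand-in engine (SQLite has no IF(), so the conditional count is written as a boolean sum; the data is made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ontime (Carrier TEXT, Year INT, DepDelay INT)")
con.executemany("INSERT INTO ontime VALUES (?,?,?)",
                [("AA", 2007, 15), ("AA", 2007, 5), ("AA", 2007, 30),
                 ("UA", 2007, 2),  ("UA", 2007, 40), ("DL", 2006, 99)])

# Original Q5: two derived tables joined on Carrier (two table passes).
q5 = con.execute("""
SELECT t.Carrier, c, c2, c*1000/c2 AS c3
  FROM (SELECT Carrier, count(*) AS c  FROM ontime
         WHERE DepDelay > 10 AND Year = 2007 GROUP BY Carrier) t
  JOIN (SELECT Carrier, count(*) AS c2 FROM ontime
         WHERE Year = 2007 GROUP BY Carrier) t2
    ON (t.Carrier = t2.Carrier)
 ORDER BY c3 DESC""").fetchall()

# Q5i: one pass with conditional aggregation (MySQL's IF(cond,1,0)
# expressed as a boolean sum, which SQLite supports).
q5i = con.execute("""
SELECT Carrier, totalflights, ndelayed, ndelayed*1000/totalflights AS c3
  FROM (SELECT Carrier, count(*) AS totalflights,
               sum(DepDelay > 10) AS ndelayed
          FROM ontime WHERE Year = 2007 GROUP BY Carrier) t
 ORDER BY c3 DESC""").fetchall()

print([(r[0], r[3]) for r in q5])   # [('AA', 666), ('UA', 500)]
print([(r[0], r[3]) for r in q5i])  # same carriers, same ratios
```

One subtle difference worth noting: the join in Q5 drops carriers with zero delayed flights, while Q5i returns them with ndelayed = 0.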

The summary table with query execution times (in sec, less is better):

Query MyISAM TokuDB
Q0 72.84 50.25
Q1 61.03 55.01
Q2 98.12 58.36
Q3 123.04 66.87
Q4 6.92 6.91
Q5 13.61 11.86
Q5i 7.68 6.96
Q6 123.84 69.03
Q7 187.22 159.62
Q8 (1y) 8.75 7.59
Q8 (2y) 102.17 64.95
Q8 (3y) 104.7 69.76
Q8 (4y) 107.05 70.46
Q8 (10y) 119.54 84.64
Q9 69.05 47.67

For reference I used 5.1.36-Tokutek-2.1.0 for both MyISAM and TokuDB tests.

And if you are interested in comparing MyISAM with the previously tested engines:

Query MyISAM MonetDB InfoBright LucidDB InfiniDB
Q0 72.84 29.9 4.19 103.21 NA
Q1 61.03 7.9 12.13 49.17 6.79
Q2 98.12 0.9 6.73 27.13 4.59
Q3 123.04 1.7 7.29 27.66 4.96
Q4 6.92 0.27 0.99 2.34 0.75
Q5 13.61 0.5 2.92 7.35 NA
Q6 123.84 12.5 21.83 78.42 NA
Q7 187.22 27.9 8.59 106.37 NA
Q8 (1y) 8.75 0.55 1.74 6.76 8.13
Q8 (2y) 102.17 1.1 3.68 28.82 16.54
Q8 (3y) 104.7 1.69 5.44 35.37 24.46
Q8 (4y) 107.05 2.12 7.22 41.66 32.49
Q8 (10y) 119.54 29.14 17.42 72.67 70.35
Q9 69.05 6.3 0.31 76.12 9.54

All the results are available in the summary Spreadsheet.

I deliberately do not put TokuDB in the same table with the analytics-oriented databases, to highlight that TokuDB is a general-purpose OLTP engine.
As you can see, it does better than MyISAM on all queries.

Entry posted by Vadim


Air traffic queries in InfiniDB: early alpha

As Calpont announced the availability of InfiniDB, I surely couldn’t miss the chance to compare it with the previously tested databases in the same environment.
See my previous posts on this topic:
Analyzing air traffic performance with InfoBright and MonetDB
Air traffic queries in LucidDB

I could not run all the queries against InfiniDB, and I hit some hiccups during my experiment, so it was a less smooth experience than with the other databases.

So let’s go by the same steps:

Load data

InfiniDB supports MySQL’s LOAD DATA statement and its own colxml / cpimport utilities. As LOAD DATA is more familiar to me, I started with that; however, after issuing LOAD DATA on a 180MB file (for the year 1989, 1st month) it very soon caused extensive swapping (my box has 4GB of RAM) and the statement failed with:
ERROR 1 (HY000) at line 1: CAL0001: Insert Failed: St9bad_alloc

Alright, colxml / cpimport was more successful, however it has less flexibility in syntax than LOAD DATA, so I had to transform the input files into a format that cpimport could understand.

Total load time was 9747 sec, or 2.7h (not counting the time spent on file transformation).

I put the summary data on load time, data size and query time into a Google Spreadsheet so you can easily compare with the previous results. There are separate sheets for queries, data size and load time.


The size of the database after loading is another confusing point. The InfiniDB data directory has a complex structure like:


./000.dir/000.dir/003.dir/233.dir
./000.dir/000.dir/003.dir/233.dir/000.dir
./000.dir/000.dir/003.dir/233.dir/000.dir/FILE000.cdf
./000.dir/000.dir/003.dir/241.dir
./000.dir/000.dir/003.dir/241.dir/000.dir
./000.dir/000.dir/003.dir/241.dir/000.dir/FILE000.cdf
./000.dir/000.dir/003.dir/238.dir
./000.dir/000.dir/003.dir/238.dir/000.dir
./000.dir/000.dir/003.dir/238.dir/000.dir/FILE000.cdf
./000.dir/000.dir/003.dir/235.dir
./000.dir/000.dir/003.dir/235.dir/000.dir
./000.dir/000.dir/003.dir/235.dir/000.dir/FILE000.cdf

so it’s hard to say which files relate to which table. But after the load, the size of 000.dir is 114GB, which is twice as big as the original data files. SHOW TABLE STATUS does not really help there; it shows:


           Name: ontime
         Engine: InfiniDB
        Version: 10
     Row_format: Dynamic
           Rows: 2000
 Avg_row_length: 0
    Data_length: 0
Max_data_length: 0
   Index_length: 0
      Data_free: 0
 Auto_increment: NULL
    Create_time: NULL
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:

with totally misleading information.

So I am recording 114GB as the size of the data after load, until someone points me to how to get the real size and explains what takes up so much space.


The first “count star” query, SELECT count(*) FROM ontime, took 2.67 sec, which shows that InfiniDB does not store a record counter but calculates the count pretty fast.

select avg(c1) from (select year,month,count(*) as c1 from ontime group by YEAR,month) t;

Another bump: on this query InfiniDB complains:

ERROR 138 (HY000):
The query includes syntax that is not supported by InfiniDB. Use 'show warnings;' to get more information. Review the Calpont InfiniDB Syntax guide for additional information on supported distributed syntax or consider changing the InfiniDB Operating Mode (infinidb_vtable_mode).
mysql> show warnings;
| Level | Code | Message |
| Error | 9999 | Subselect in From clause is not supported in this release. |

OK, so InfiniDB does not support derived tables, which is a big limitation from my point of view.
As a workaround I tried to create a temporary table, but got another error:


mysql> create temporary table tq2 as (select Year,Month,count(*) as c1 from ontime group by Year, Month);
ERROR 122 (HY000): Cannot open table handle for ontime.

As the error message suggests, I set infinidb_vtable_mode = 2, which is:


2) auto-switch mode: InfiniDB will attempt to process the query internally; if it
cannot, it will automatically switch the query to run in row-by-row mode.

but the query took 667 sec,

so I drop queries Q5, Q6 and Q7, which are also based on derived tables, from consideration as not supported by InfiniDB.

Other queries (again, see the comparison with other engines in the Google Spreadsheet or in the summary table at the bottom):

Query Q1:
mysql> SELECT DayOfWeek, count(*) AS c FROM ontime WHERE Year BETWEEN 2000 AND 2008 GROUP BY DayOfWeek ORDER BY c DESC;
7 rows in set (6.79 sec)

Query Q2:
mysql> SELECT DayOfWeek, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year BETWEEN 2000 AND 2008 GROUP BY DayOfWeek ORDER BY c DESC;

7 rows in set (4.59 sec)

Query Q3:
SELECT Origin, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year BETWEEN 2000 AND 2008 GROUP BY Origin ORDER BY c DESC LIMIT 10;

4.96 sec

Query Q4:
mysql> SELECT Carrier, count(*) FROM ontime WHERE DepDelay > 10 AND YearD=2007 GROUP BY Carrier ORDER BY 2 DESC;

I had another surprise with this query: after 15 min it had not returned results. I checked the system and it was totally idle, but the query was stuck. I killed the query and restarted mysqld, but could not connect to mysqld anymore. In the process list I saw that InfiniDB had started a couple of external processes – ExeMgr, DDLProc, PrimProc, controllernode fg, workernode DBRM_Worker1 fg – which cooperate with each other using IPC shared memory and semaphores. To clean the system I rebooted the server, and only after that was mysqld able to start.

After that, query Q4 took 0.75 sec.

Queries Q5-Q7 skipped.

Query Q8:

SELECT DestCityName, COUNT( DISTINCT OriginCityName) FROM ontime WHERE YearD BETWEEN 2008 and 2008 GROUP BY DestCityName ORDER BY 2 DESC LIMIT 10;

And times for InfiniDB:

1y: 8.13 sec
2y: 16.54 sec
3y: 24.46 sec
4y: 32.49 sec
10y: 1 min 10.35 sec

Query Q9:

select Year ,count(*) as c1 from ontime group by Year;

Time: 9.54 sec

OK, so here is the summary table with query times (in sec, less is better):

Query MonetDB InfoBright LucidDB InfiniDB
Q0 29.9 4.19 103.21 NA
Q1 7.9 12.13 49.17 6.79
Q2 0.9 6.73 27.13 4.59
Q3 1.7 7.29 27.66 4.96
Q4 0.27 0.99 2.34 0.75
Q5 0.5 2.92 7.35 NA
Q6 12.5 21.83 78.42 NA
Q7 27.9 8.59 106.37 NA
Q8 (1y) 0.55 1.74 6.76 8.13
Q8 (2y) 1.1 3.68 28.82 16.54
Q8 (3y) 1.69 5.44 35.37 24.46
Q8 (4y) 2.12 7.22 41.66 32.49
Q8 (10y) 29.14 17.42 72.67 70.35
Q9 6.3 0.31 76.12 9.54


  • The InfiniDB server version shows Server version: 5.1.39-community InfiniDB Community Edition (GPL), so I consider it an alpha release, and it is doing OK for an alpha. I will wait for a more stable release for further tests, as it took a good amount of time to deal with the various glitches.
  • InfiniDB shows really good times on the queries it can handle, quite often better than InfoBright.
  • The inability to handle derived tables is a significant drawback for me; I hope it will be fixed.

Entry posted by Vadim


Air traffic queries in LucidDB

After my first post, Analyzing air traffic performance with InfoBright and MonetDB, where I was not able to finish the task with LucidDB, John Sichi contacted me to help with the setup. You can see instructions on how to load the data on the LucidDB Wiki page.

You can find the description of the benchmark in the original post; here I will show the numbers I have for LucidDB vs. the previous systems.

Load time
Loading data into LucidDB in a single thread took me 15273 sec, or 4.24h. Unlike the other systems tested here, LucidDB supports multi-threaded load; with concurrency 2 (as I have only 2 cores on that box), the load time is 9955 sec, or 2.76h. For comparison,
the InfoBright load time is 2.45h and MonetDB’s is 2.6h.

Another interesting metric is the data size after load. The LucidDB db file takes 9.3GB after load.
UPDATE 27-Oct-2009: according to the metadata tables, the actual size of the data is 4.5GB; the 9.3GB is the size of the physical file db.dat, which probably was not truncated after several loads of data.

For InfoBright it is 1.6GB, and for MonetDB – 65GB. Obviously LucidDB uses some compression, but it is not as aggressive as in InfoBright’s case. As the original dataset is 55GB, the compression ratio for LucidDB is roughly 12:1.
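These ratios follow directly from the sizes above (a quick check, using the 4.5GB actual LucidDB data size from the update):

```python
# Compression ratios implied by the numbers above: raw dataset is 55GB.
raw_gb = 55.0
loaded = {"InfoBright": 1.6, "LucidDB": 4.5, "MonetDB": 65.0}
ratios = {engine: raw_gb / size for engine, size in loaded.items()}
for engine, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{engine}: {r:.1f}:1")
# InfoBright: 34.4:1, LucidDB: 12.2:1, MonetDB: 0.8:1 (it expands the data)
```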

Queries time

Let me put list of queries and times for all systems.

– Lame “count star” query
SELECT count(*) FROM otp."ontime";
1 row selected (55.165 seconds)

Both InfoBright and MonetDB returned the result immediately.
It seems LucidDB has to scan the whole table to get the result.

– Q0:
select avg(c1) from (select "Year","Month",count(*) as c1 from otp."ontime" group by "Year","Month") t;
LucidDB: 103.205 seconds
InfoBright: 4.19 sec
MonetDB: 29.9 sec

– Q1:
SELECT "DayOfWeek", count(*) AS c FROM OTP."ontime" WHERE "Year" BETWEEN 2000 AND 2008 GROUP BY "DayOfWeek" ORDER BY c DESC;
LucidDB: 49.17 seconds
InfoBright: 12.13 sec
MonetDB: 7.9 sec

– Q2:
SELECT "DayOfWeek", count(*) AS c FROM otp."ontime" WHERE "DepDelay">10 AND "Year" BETWEEN 2000 AND 2008 GROUP BY "DayOfWeek" ORDER BY c DESC;
LucidDB: 27.131 seconds
InfoBright: 6.73 sec
MonetDB: 0.9 sec

– Q3:
!set rowlimit 10
SELECT "Origin", count(*) AS c FROM otp."ontime" WHERE "DepDelay">10 AND "Year" BETWEEN 2000 AND 2008 GROUP BY "Origin" ORDER BY c DESC;
LucidDB: 27.664 seconds
InfoBright: 7.29 sec
MonetDB: 1.7 sec

– Q4:
SELECT "Carrier", count(*) FROM otp."ontime" WHERE "DepDelay">10 AND "Year"=2007 GROUP BY "Carrier" ORDER BY 2 DESC;
LucidDB: 2.338 seconds
InfoBright: 0.99 sec
MonetDB: 0.27 sec

– Q5:
SELECT t."Carrier", c, c2, c*1000/c2 as c3 FROM (SELECT "Carrier", count(*) AS c FROM OTP."ontime" WHERE "DepDelay">10 AND "Year"=2007 GROUP BY "Carrier") t JOIN (SELECT "Carrier", count(*) AS c2 FROM OTP."ontime" WHERE "Year"=2007 GROUP BY "Carrier") t2 ON (t."Carrier"=t2."Carrier") ORDER BY c3 DESC;
LucidDB: 7.351 seconds
InfoBright: 2.92 sec
MonetDB: 0.5 sec

– Q6:
SELECT t."Carrier", c, c2, c*1000/c2 as c3 FROM (SELECT "Carrier", count(*) AS c FROM OTP."ontime" WHERE "DepDelay">10 AND "Year" BETWEEN 2000 AND 2008 GROUP BY "Carrier") t JOIN (SELECT "Carrier", count(*) AS c2 FROM OTP."ontime" WHERE "Year" BETWEEN 2000 AND 2008 GROUP BY "Carrier") t2 ON (t."Carrier"=t2."Carrier") ORDER BY c3 DESC;
LucidDB: 78.423 seconds
InfoBright: 21.83 sec
MonetDB: 12.5 sec

– Q7:
SELECT t."Year", c1/c2 FROM (select "Year", count(*)*1000 as c1 from OTP."ontime" WHERE "DepDelay">10 GROUP BY "Year") t JOIN (select "Year", count(*) as c2 from OTP."ontime" GROUP BY "Year") t2 ON (t."Year"=t2."Year");
LucidDB: 106.374 seconds
InfoBright: 8.59 sec
MonetDB: 27.9 sec

– Q8:
SELECT "DestCityName", COUNT(DISTINCT "OriginCityName") FROM "ontime" WHERE "Year" BETWEEN 2008 and 2008 GROUP BY "DestCityName" ORDER BY 2 DESC;

Years, LucidDB, InfoBright, MonetDB
1y, 6.76s, 1.74s, 0.55s
2y, 28.82s, 3.68s, 1.10s
3y, 35.37s, 5.44s, 1.69s
4y, 41.66s, 7.22s, 2.12s
10y, 72.67s, 17.42s, 29.14s

– Q9:
select "Year", count(*) as c1 from "ontime" group by "Year";
LucidDB: 76.121 seconds
InfoBright: 0.31 sec
MonetDB: 6.3 sec

As you see, LucidDB is not showing the best results. On the good side, however, I can mention that LucidDB is very feature-rich, with full support for DML statements. The ETL features are also very impressive: you can extract, filter and transform external data (there is even access to MySQL via a JDBC driver) right in SQL queries (compare this with the single LOAD DATA statement in the InfoBright ICE edition). Also, I am not much into Java, but as I understand it LucidDB can be easily integrated with Java applications, which is important if your development is Java-based.

It is worth mentioning that in LucidDB a single query execution takes 100% of user time on a single CPU, which may signal that there is some low-hanging fruit for optimization. OProfile can show clear places to fix.

Entry posted by Vadim
