Aug 30, 2012

Visualization tools for pt-query-digest tables

When you process MySQL slow query logs using pt-query-digest, you can store samples of each query into a query_review table and historical values for review trend analysis into a query_review_history table. But it can be difficult to browse those tables without a good GUI tool.

For visual browsing of the tables created by pt-query-digest you may want to use some kind of web tool besides phpMyAdmin ;-)

Query Digest UI

This is an advanced, but easy to install, UI for pt-query-digest or mk-query-digest.

Main features:
* Dynamic filtering and searching of queries
* Colorized and normalized SQL syntax
* Explain the query dynamically
* Integrated pt-query-advisor support
* Detailed historical query stats, with support for log_slow_verbosity.

Actually, this is a very simple and straightforward tool for browsing slow queries. The web interface is AJAX-based. Please refer to the screenshots below to see which columns you can filter the report on. Personally, I find this tool useful when you want to easily find a certain query by id, have it syntax-highlighted, or find queries that have first been seen since a specified date or within a date range.
Also you can explain queries, see their stats and post reviews.

Box Anemometer

Anemometer is a tool for visualizing collected data from the MySQL Slow Query Log. The name comes from the instrument in a weather station that measures wind speed. SQL queries are like the wind — ephemeral and hard to get a hold of. Anemometer helps you get a handle on your queries; it makes it easier to figure out what to optimize and how to track performance over time.

Anemometer relies on the Percona Toolkit to do the slow query log collection. Specifically, you run pt-query-digest to parse your slow logs and insert them into a database for reporting.

Personally, I find this tool to be a more advanced instrument that can be useful for:

* browsing queries with optional filtering on first seen date;
* custom reports, choosing which columns to show;
* filtering queries by hosts;
* filtering reports by different query conditions;
* graphing reports;
* explaining queries;
* searching and displaying samples;
* displaying table status and CREATE statements;
* reviewing and commenting on queries.

Despite its lack of ability to save predefined reports in the web interface for easy access, it does have a Permalink feature and an API for relative date ranges. The web interface is pretty flexible, as it is built on jQuery UI.

Take a look at the screenshots:

Aug 30, 2012

Wow. My 6 year old MySQL Bug is finally fixed in MySQL 5.6

I got a message this morning about the bug being fixed in MySQL 5.6.6, a bug which I reported in early 2006 (while still being with MySQL) and while running MySQL 4.1. I honestly thought this issue had been fixed long ago, as it was indeed pretty annoying. I must say I'm very impressed with the Oracle team going through and cleaning up such very old bugs. Here is a description from the bug:

If you perform a match of a constant which is too large for the column, instead of simply responding with an empty set, MySQL truncates the constant, performs the lookup, and only then discards the results:

CREATE TABLE `trunc` (
  `i` int(11) NOT NULL default '0',
  KEY `i` (`i`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8



mysql> select i,count(*) cnt from trunc  group by i order by cnt desc
limit 10;
+------------+-------+
| i          | cnt   |
+------------+-------+
| 2147483647 | 76047 |
| 1421638051 |     3 |
|  985505567 |     3 |
| 1046160975 |     2 |
|  141017389 |     2 |
|  848130626 |     2 |
|  888665819 |     2 |
| 1001437915 |     2 |
|  118824892 |     2 |
| 2104712727 |     2 |
+------------+-------+
10 rows in set (0.34 sec)

(Just some random data. The only row we really need is the one with 2147483647.)

mysql> explain select count(*) from trunc where i=4147483647;
+----+-------------+-------+------+---------------+------+---------+-------+-------+--------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref   | rows  | Extra                    |
+----+-------------+-------+------+---------------+------+---------+-------+-------+--------------------------+
|  1 | SIMPLE      | trunc | ref  | i             | i    |       4 | const | 81602 | Using where; Using index |
+----+-------------+-------+------+---------------+------+---------+-------+-------+--------------------------+
1 row in set, 1 warning (0.01 sec)

4 billion is out of range for the signed INT column, and I would expect an "Impossible WHERE clause" here.

Let's look at the query execution:

| Handler_read_next     | 1305742982 |


mysql> select  count(*) from trunc where i=4147483647;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set, 1 warning (0.04 sec)

| Handler_read_next     | 1305819030 |


As you can see, there were over 70,000 row reads performed.

I came across the bug in a real application which used multiple tables where the column type was inconsistent between them… so inserts into one table would happen with the correct value, such as 3000000000, while inserting the same value into the other table would be truncated to 2147483647. This caused a lot of rows to have the value 2147483647 very quickly, and SELECT queries for values over 2 billion became increasingly expensive. Because there would be more and more queries for values over 2 billion as data continued to be inserted, the system essentially collapsed in a matter of hours.

Thank you, Oracle team, for finally having this bug fixed (even though you’re about 6 years over the expectations I had when reporting it). There are also a couple of pieces of advice to prevent problems like this from happening:

Use consistent data types. Make sure you’re using consistent data types for the same values. Often it is just better to standardize on a few and not make the decision in every single case. I, for example, use “int unsigned not null” for positive integers that are not overly large.

Consider using strict mode. Storing data different from what your application requested, without throwing an error, is not a good idea in most cases, yet MySQL continues to be very loose by default. Consider running with sql_mode=strict_all_tables, which will report an error when data is truncated, together with using transactional tables for any data you value.
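For illustration, here is a minimal sketch of the difference, assuming a fresh InnoDB table similar to the one from the bug report (table and column names are just for illustration):

CREATE TABLE trunc_demo (
  i INT NOT NULL,
  KEY (i)
) ENGINE=InnoDB;

-- default (loose) mode: the out-of-range value is silently clipped
SET SESSION sql_mode = '';
INSERT INTO trunc_demo VALUES (3000000000);   -- stored as 2147483647, warning only

-- strict mode: the same INSERT is rejected
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
INSERT INTO trunc_demo VALUES (3000000000);   -- ERROR 1264 (22003): Out of range value for column 'i' at row 1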

Aug 29, 2012

Here’s a quick way to Foresee if Replication Slave is ever going to catch up and When!

If you have ever had a replication slave that is severely behind, you probably noticed that it’s not catching up with a busy master at a steady pace. Instead, the “Seconds behind master” value goes up and down, so you can’t really tell whether the replica is catching up or not by looking at just a few samples, unless those samples are spread far apart. And even then you can’t tell at a glance when it is going to catch up.

Normally, the “severely behind” thing should not happen, but it does often happen in our consulting practice:

  • sometimes replication would break and then it needs to catch up after it is fixed,
  • other times a new replication slave is built from a backup, which is normally hours behind,
  • or, it could be that the replication slave became too slow to catch up due to a missing index.

Whatever the case, the single question I am asked by the customer every time this happens is: “When is the replica going to catch up?”

I used to tell them “I don’t know, it depends..”, and indeed it is not an easy question to answer. There are a few reasons catching up is so unstable:

  1. If you have restarted the server, or started a new one, caches are cold and there’s a lot of IO happening,
  2. Not all queries are created equal – some would run for seconds, while others can be instant,
  3. Batch jobs: some sites run nightly tasks like building statistics tables or table checksums – these are usually very intense and cause the slave to fall behind slightly.

I didn’t like my own answer to the question, so I decided to do something about it. And because I love awk, I did that something in awk:

delay=60
# poll Seconds_Behind_Master every $delay seconds and feed the values to awk
cmd="mysql -e 'show slave status\G' | grep Seconds_Behind_Master | awk '{print \$2}'"
while sleep $delay; do
  eval $cmd
done | awk -v delay=$delay '
{
   passed += delay;
   # reprint the header every 10 samples
   if (count%10==0)
      printf("s_behind  d_behind   c_sec_s   eta_d | O_c_sec_s O_eta_d O_eta_h\n");
   # first sample: remember it as both the previous value and the starting point
   if (prev==NULL){
      prev = $1;
      start = $1;
   }
   speed = (prev-$1)/delay;               # catch-up speed over the last interval
   o_speed = (start-($1-passed))/passed;  # overall catch-up speed since the start
   if (speed == 0)    speed_d = 1;        # avoid division by zero below
     else             speed_d = speed;
   eta = $1/speed_d;                      # ETA based on the last interval only
   if (eta<0)         eta = -86400;       # not catching up: report -1 day
   o_eta = $1/o_speed;                    # ETA based on the overall speed
   printf("%8d %8.6f %9.3f %7.3f | %9.3f %7.3f %7.2f\n",
      $1, $1/86400, speed, eta/86400, o_speed, o_eta/86400, o_eta/3600);
   prev=$1;
   count++;
}'

I don't know if this is ever going to become part of Percona Toolkit; however, since it's pretty much a one-liner, I just keep it in my snippets pool for easy copy'n'paste.

Here's a piece of the output from a server that was almost 27 days behind just yesterday:

// at the beginning:
s_behind  d_behind   c_sec_s   eta_d | O_c_sec_s O_eta_d O_eta_h
 2309941 26.735428     0.000  26.735 |     1.000  26.735  641.65
 2309764 26.733380     2.950   9.062 |     2.475  10.801  259.23
 2308946 26.723912    13.633   1.960 |     6.528   4.094   98.25
 2308962 26.724097    -0.267  -1.000 |     5.079   5.262  126.28
 2309022 26.724792    -1.000  -1.000 |     4.063   6.577  157.85
...
// after one hour:
s_behind  d_behind   c_sec_s   eta_d | O_c_sec_s O_eta_d O_eta_h
 2264490 26.209375    39.033   0.671 |    13.418   1.953   46.88
 2262422 26.185440    34.467   0.760 |    13.774   1.901   45.63
 2261702 26.177106    12.000   2.181 |    13.762   1.902   45.65
...
// after three hours:
s_behind  d_behind   c_sec_s   eta_d | O_c_sec_s O_eta_d O_eta_h
 2179124 25.221343    13.383   1.885 |    13.046   1.933   46.40
 2178937 25.219178     3.117   8.092 |    12.997   1.940   46.57
 2178472 25.213796     7.750   3.253 |    12.973   1.943   46.64
...
// after 12 hours:
s_behind  d_behind   c_sec_s   eta_d | O_c_sec_s O_eta_d O_eta_h
 1824590 21.117940    20.233   1.044 |    12.219   1.728   41.48
 1823867 21.109572    12.050   1.752 |    12.221   1.727   41.46
 1823089 21.100567    12.967   1.627 |    12.223   1.726   41.43
...
// after 21 hours:
s_behind  d_behind   c_sec_s   eta_d | O_c_sec_s O_eta_d O_eta_h
 1501659 17.380312    -0.533  -1.000 |    11.768   1.477   35.44
 1501664 17.380370    -0.083  -1.000 |    11.760   1.478   35.47
 1501689 17.380660    -0.417  -1.000 |    11.751   1.479   35.50
...

Of course, it is still not perfectly accurate, and it does not account for any potential changes in queries, workload, warm-up, or the time it takes to run the mysql CLI, but it does give you an idea of the direction the replication slave is going. Note that negative values mean replication isn't catching up, but the values themselves are mostly meaningless.

Here's what the weird acronyms stand for:

  • s_behind - current Seconds_Behind_Master value
  • d_behind - number of days behind based on current s_behind
  • c_sec_s - how many seconds per second were caught up during last interval
  • eta_d - this is ETA based on last interval
  • O_c_sec_s - overall catch-up speed in seconds per second
  • O_eta_d - ETA based on overall catch-up speed (in days)
  • O_eta_h - same as the previous, but in hours

Let me know if you ever find this useful.

Aug 28, 2012

Differences between READ-COMMITTED and REPEATABLE-READ transaction isolation levels

As an instructor with Percona I’m sometimes asked about the differences between the READ COMMITTED and REPEATABLE READ transaction isolation levels.  There are a few differences between READ-COMMITTED and REPEATABLE-READ, and they are all related to locking.

Extra locking (not gap locking)
It is important to remember that InnoDB actually locks index entries, not rows. During the execution of a statement InnoDB must lock every entry in the index that it traverses to find the rows it is modifying. It must do this to prevent deadlocks and maintain the isolation level.

If you run an UPDATE that is not well indexed you will lock many rows:

update employees set store_id = 0 where store_id = 1;
---TRANSACTION 1EAB04, ACTIVE 7 sec
633 lock struct(s), heap size 96696, 218786 row lock(s), undo log entries 1
MySQL thread id 4, OS thread handle 0x7f8dfc35d700, query id 47 localhost root
show engine innodb status

In the employees table, the column `store_id` is not indexed. Notice that the UPDATE has completed running (we are now running SHOW ENGINE …) but we are holding 218786 row locks and only one undo entry. This means that only one row was changed, but we are still holding extra locks.  The heap size represents the amount of memory that has been allocated for locks.

In REPEATABLE READ every lock acquired during a transaction is held for the duration of the transaction.

In READ COMMITTED, the locks on entries that did not match the scan are released after the STATEMENT completes.

Here is the UPDATE statement repeated under READ COMMITTED:

---TRANSACTION 1EAB06, ACTIVE 11 sec
631 lock struct(s), heap size 96696, 1 row lock(s), undo log entries 1
MySQL thread id 4, OS thread handle 0x7f8dfc35d700, query id 62 localhost root
show engine innodb status

You’ll notice that the heap size is the same, but we are now holding only one lock. In all transaction isolation levels InnoDB creates locks over every index entry scanned. The difference between the levels is that once the statement completes in READ COMMITTED mode, the locks for the entries that did not match the scan are released. Note that InnoDB does not immediately release the heap memory after releasing the locks, so the heap size is the same as that in REPEATABLE READ, but the number of locks held is lower (only one).

This means that in READ COMMITTED other transactions are free to update rows that they would not have been able to update (in REPEATABLE READ) once the UPDATE statement completes.
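For illustration, here is a minimal two-session sketch of this difference, using the employees/store_id example above (store_id is not indexed; the last_name column is just assumed):

-- session 1
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- run once with READ COMMITTED, once with REPEATABLE READ
START TRANSACTION;
UPDATE employees SET store_id = 0 WHERE store_id = 1;
-- leave the transaction open

-- session 2
UPDATE employees SET last_name = 'Smith' WHERE store_id = 3;
-- if session 1 used READ COMMITTED:  succeeds, the locks on non-matching rows were released
-- if session 1 used REPEATABLE READ: blocks (and may time out), those locks are held until session 1 ends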

 

Consistent read views
In REPEATABLE READ, a ‘read view’ ( trx_no does not see trx_id >= ABC, sees < ABB ) is created at the start of the transaction, and this read view (consistent snapshot in Oracle terms) is held open for the duration of the transaction. If you execute a SELECT statement at 5AM, and come back to the open transaction at 5PM and run the same SELECT, then you will see the exact same resultset that you saw at 5AM.  This is called MVCC (multi-version concurrency control) and it is accomplished using row versioning and UNDO information.
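A minimal sketch of that behavior, using a hypothetical accounts table:

-- session 1
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;   -- say this returns 100

-- session 2
UPDATE accounts SET balance = 200 WHERE id = 1;
COMMIT;

-- session 1, later, in the same open transaction
SELECT balance FROM accounts WHERE id = 1;   -- still returns 100
COMMIT;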

In REPEATABLE READ InnoDB also creates gap locks for range scans.

select * from some_table where id > 100 FOR UPDATE;

The above statement will create gap locks that prevent any rows with id > 100 from being inserted into the table until the transaction rolls back or commits.

In the same transaction, if the SELECT … FOR UPDATE is run at 5AM, and an UPDATE is run at 5PM (“UPDATE some_table … WHERE id > 100”), then the UPDATE will change the same rows that the SELECT FOR UPDATE locked at 5AM. There is no possibility of changing additional rows, because the gap after 100 was previously locked.
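A minimal sketch of the gap lock in action, assuming some_table has an indexed id column:

-- session 1, under REPEATABLE READ
START TRANSACTION;
SELECT * FROM some_table WHERE id > 100 FOR UPDATE;

-- session 2
INSERT INTO some_table (id) VALUES (150);
-- blocks until session 1 commits or rolls back, because the gap above 100 is locked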

 

Non-repeatable reads (read committed)
In READ COMMITTED, a read view is created at the start of each statement.   This means that a SELECT made at 5AM may show different results from the same SELECT run at 5PM, even in the same transaction.   This is because in READ COMMITTED the read view for the transaction lasts only as long as each statement execution.   As a result, consecutive executions of the same statement may show different results.

This is called the ‘phantom row’ problem.

In addition, in READ COMMITTED gap locks are never created.   Since there is no gap lock, the example SELECT … FOR UPDATE above will not prevent insertions of new rows into the table by other transactions.   Thus, locking rows with SELECT … FOR UPDATE (i.e. WHERE id > 100) and subsequently updating rows with WHERE id > 100 (even in the same transaction) may result in more rows being updated than were earlier locked. This is because new rows may have been inserted into the table between the statements, since there was no gap lock created for the SELECT … FOR UPDATE.
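For contrast, a sketch of the same sequence under READ COMMITTED (the flag column is just assumed for illustration):

-- session 1, under READ COMMITTED
START TRANSACTION;
SELECT * FROM some_table WHERE id > 100 FOR UPDATE;   -- locks only the existing matching rows

-- session 2
INSERT INTO some_table (id) VALUES (150);             -- succeeds immediately: no gap lock
COMMIT;

-- session 1, same transaction
UPDATE some_table SET flag = 1 WHERE id > 100;        -- now also updates the newly inserted row 150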


Aug 28, 2012

Sell-an-Elephant-to-your-Boss-HOWTO

Spoiler alert: If your boss does not need an elephant, he is definitely NOT going to buy one from you. If he does, he will regret it, and eventually you will too.

I must apologize to the reader who was expecting to find advice on selling useless goods to his boss. While I do use a similar technique to get a quarterly raise (no, I don’t), this article is actually about convincing your team, your manager, or anyone else who has influence over a project’s priorities that pending system performance optimizations are a priority (assuming they indeed are). However, that headline was not very catchy and way too long, so I decided to go with the elephant instead.

System performance optimization is what I do day to day here at Percona. Looking back at the duration of optimization projects, I find that with bigger companies (bigger here means it’s not a one-man show) it’s not the identification of performance problems that takes most of the time. Nor is it looking for the right solution. The biggest bottleneck in an optimization project is getting the solution approved and prioritized appropriately inside the company that came for performance optimization in the first place. Sometimes I would follow up with the customer after a few weeks or a month just to find that nothing was done to implement the suggested changes. When I would ask why, most of the time the answer is something along these lines: my manager didn’t schedule/approve it yet.

I don’t want to say that all performance improvements are a priority and should be done right away, not at all. I want to suggest that you can check whether the optimizations at hand should be prioritized and, if so, how to make it happen if you’re not the one who sets priorities.

1. Estimate harm being done

This is the hardest and the most important part of the process. The question that you have to answer is this: How much does it cost your company NOT to have the optimization in place?

This should be expressed in dollars per amount of time (e.g. per month). Normally, a Percona consultant will tell you how much faster something is going to be when the changes are applied. When we can measure it, this will be an accurate number; other times we estimate it (e.g. when buying new hardware or making significant changes to the database architecture is needed). What Percona can’t do, though, is map that to dollars for your company, so this is something you will have to work out on your own. These questions might help you:

  • How many users are we losing [per month] because they.. (“get very unstable response times”, “get a timeout for every 10th request”, “need to wait 10s for a page to load”)? How much does losing one customer cost us? Then, multiply.
  • How much more efficient could dept. X be if they got this report in 5 minutes rather than 4 hours? Would that give any $$ value to the company? How much per month?
  • How many extra servers do we need to run because our systems are not optimized? How much does that cost our company every month?
  • How many conversions are we losing because users turn away due to slow service during registration?

It may be hard to get some of these answered if the company is not very transparent, and you may have to go asking around, or sometimes guess. That’s fine though; just keep a record of the guesses you made so you can recalculate the whole thing easily once you have the real number secured. In other cases there’s no way to get a good enough guess, as the data is just way too far from you. In that case you either have to make a very rough guess or accept that there’s nothing you can do about it and that you don’t know how important the work you do is for the company.

By the way, if someone says users don’t care if the website loads in 2s or in 5s, let him read this.

In the end you have to come up with one number. That number will vary greatly depending on many things, like the size of the company, the number of servers the system runs on, the number of users, the importance of the system we are working on in the global picture, and so on.

2. Estimate the cost of the solution

Now comes the question: how much does it cost to implement the given optimization?

In some cases it’s just a matter of opportunity cost: how much more valuable is it for the company that you work on implementing the performance improvement versus that other thing? In other cases, you have to buy extra hardware and then see when (and if) the optimization is going to pay off.

When you have the two numbers, you can clearly see what is more beneficial to the company: get the performance improvements implemented now, or leave things as they are and just work on the next thing. A few examples I just came up with:

(a) Company “Find that thing” runs a specialized search service with 25k new users monthly. 20% of users cancel their subscription after the first month because sometimes the search is so painfully slow that the requests time out. Because of that, the company is losing $60k/month. Search stability could be improved to all six sigmas with just 2 extra servers for search, but that is an extra $20k for the servers and $1k/month for colocation. Well, that means the company is going to lose ~$39k this month if nothing gets done, and $59k or more month after month until it gets fixed.

(b) Company “Ana Lysis” sells data analysis services. Each customer report takes approx. 2h to generate, so one 24-disk server can only run ~300 reports a day and needs to queue them so that no more than 24 such requests run in parallel. Since one customer can run 5 reports in parallel, with no overbooking they can only host 60 users on a single such server, and still, due to queueing, response times would be unstable during peak hours. For every 60 new customers, “Ana Lysis” needs to buy a new 24-disk machine, which costs $25k. At the moment there are 6 such machines and a plan to buy 4 new ones next month. By moving to a different storage engine, the average report generation time would drop to 10 minutes, hence one server would be able to serve 12 times more customers, all other things being equal, so not only will the new hardware not be needed, but the current hardware will be underloaded as well. You will save your company at least $100k. And if you know how many new customers come in every month, you can also get the $/month figure.

3. Make it a short and clear statement

Now that you know the most important numbers, go to your manager or whoever sets project priorities and make it clear:

“Our company is wasting $59k every month and I know exactly how to fix this.”

While you may sound funny at first if it’s the first time you talk about money other than your salary, it should still get you their attention. Business is all about making money, and no business likes to be wasting money. Most managers know this, and they normally welcome ways to save money. However, beware that different orders of magnitude are relevant for different businesses, so don’t be surprised if it appears that you bring in more than $59k a month to the company by working on that other thing.

Assuming the manager happens to be worried about the $59k/month loss, now it is your turn to show exactly how the company is wasting that much money and what to do about it.

4. Show the method, 5. the main problem and 6. the solution

With their attention secured, I would try to be very thorough here and explain everything from top to bottom, starting with how it occurred that there is a problem at all, how the performance review was done, what the findings were, what the impact of the other performance problems found is, and then, most importantly, how you came up with the $59k/month figure and what your assumptions and guesses were. Of course, don’t make it a 3h explanation; if you can run through this within 15-20 minutes, that’s perfect.

Once you’re clear on that, it’s time to present the solution, why you think it makes sense, and how much you think it will cost, either in your hours or in the company’s dollars.

7. Overcome any obstacles

At this point it is quite likely that your manager will come up with a few, or lots of, reasonable arguments against the problem or the solution. Don’t get discouraged: most of the time you have already thought about it and found a solution; you will just have to do it again if you didn’t take notes. On the other hand, managers often see the broader picture and they know more than you think they do, therefore it is very important that you (either on your own or together with the manager) actually acknowledge and remove every possible obstacle, or it will be a show-stopper.

8. Kick it off

If you were able to overcome all the obstacles, and unless the problem is fixed the net result is still a significant waste of money, it’s time to go ahead and get that thing fixed.

Summary

I know this all sounds like a lot of work, and indeed it is; however, it is very rewarding, as you focus on what is the most important thing for virtually any business: making (or saving) money.

Also note that Percona can do a lot of the heavy lifting for you. The method that we use, Goal-Driven Performance Optimization, is based on the same value mindset: you tell us what would give you the most value, and we will help you get there while you’re busy rolling out that new feature or launching the new project.

Of course, there are certain things we can’t do, like re-architecting the application, but often the question is moving to better hardware, optimizing indexes or queries, tuning MySQL, etc., and we can offload 80-90% of that from the team. On the other hand, if the application is so bad it needs to be redesigned, we are going to let you know.

Aug 28, 2012

Hidden columns of query_review_history table

You can use pt-query-digest to process a MySQL slow query log and store historical values for review trend analysis into a query_review_history table. According to its official documentation you can populate many columns in that table, but there are other important ones, such as ‘user’, ‘host’, and ‘db’, which are not included by default. I will explain how to implement this.

Also the documentation says:

Any columns not mentioned above are inspected to see if they follow a certain naming convention. The column is special if the name ends with an underscore followed by any of these MAGIC_history_cols values:

pct|avt|cnt|sum|min|max|pct_95|stddev|median|rank

If the column ends with one of those values, then the prefix is interpreted as the event attribute to store in that column, and the suffix is interpreted as the metric to be stored.
For a full list of attributes, see http://code.google.com/p/maatkit/wiki/EventAttributes

pt-query-digest works on events, which are a collection of key/value pairs called attributes. You’ll recognize most of the attributes right away: Query_time, Lock_time, and so on. You can just look at a slow log and see them. However, there are some that don’t exist in the slow log, and slow logs may actually include different kinds of attributes (for example, you may have a server with the Percona patches).
With creative use of --filter, you can create new attributes derived from existing attributes.

Note: the event attributes list could be outdated as Percona Toolkit was moved from Google Code to Launchpad.

So, according to the above, you can manually add the following useful columns to the query_review_history table and have them populated (see the ALTER TABLE sketch after this list):
* `user_max` varchar(64) DEFAULT NULL – User who executed the query.
* `host_max` varchar(64) DEFAULT NULL – Client host which executed the query.
* `db_max` varchar(64) DEFAULT NULL – Current database that comes from USE database statements.
* `hostname_max` varchar(64) NOT NULL – Hostname of the server where slow log is located or was copied from.
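For reference, a sketch of adding these columns to an existing history table; the column definitions come from the list above, and the database/table names match the --review-history example below:

ALTER TABLE percona.query_review_history
  ADD COLUMN user_max     VARCHAR(64) DEFAULT NULL,
  ADD COLUMN host_max     VARCHAR(64) DEFAULT NULL,
  ADD COLUMN db_max       VARCHAR(64) DEFAULT NULL,
  ADD COLUMN hostname_max VARCHAR(64) NOT NULL;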

The first three columns can be populated automatically when you run pt-query-digest, but in order to set the hostname you need to specify it in the --filter option. For example:

pt-query-digest --review h=localhost,D=percona,t=query_review,p=pass,u=percona \
                --review-history h=localhost,D=percona,t=query_review_history,p=pass,u=percona \
                --filter=" \$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\"" mysqld-slow-queries.log

$HOSTNAME will reflect the current machine's hostname. Replace $HOSTNAME with the necessary hostname if you copied the slow query log from another server. Having the hostname set might be useful for filtering queries in a database that aggregates slow queries from multiple servers.

Aug 24, 2012

Full Text Search Webinar Questions Followup

I presented a webinar this week to give an overview of several Full Text Search solutions and compare their performance.  Even if you missed the webinar, you can register for it, and you’ll be emailed a link to the recording.

During my webinar, a number of attendees asked some good questions.  Here are their questions and my answers.

Adrian B. commented:

Q: Would’ve been a good idea to retrieve the same number of rows on each benchmark (I noticed 100 rows on SQL and 20 on Sphinx). Also Sphinx does the relevance sorting by default, adding relevance sorting to the MySQL queries would make them even slower, I’m sure.

Indeed, the result set of 20 rows from SphinxQL queries is merely the default.  And sorting by relevance must have some effect on the speed of the query.  I didn’t control for these variations, but that would be worthwhile to do.

Geoffrey L. asked:

Q: Do you have any data on memory usage for these tests?

No, I didn’t measure the memory usage for each test.  I did allocate buffers in what I hope was a realistic way.  For instance, when testing the InnoDB FT index, I made sure to increase innodb_buffer_pool_size to 50% of my RAM, and then when I tested the MyISAM FT index, I reallocated that memory into key_buffer_size.

The other thing that happened with regard to memory was that when I tried to create an InnoDB FT index before declaring the `FTS_DOC_ID` primary key column properly, MySQL crashed with an out-of-memory fatal error, and the Windows 7 server I was testing on prompted me to shut down the operating system.  So be careful to declare the correct primary key!
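For reference, here is a hedged sketch of declaring the FTS_DOC_ID column properly before adding an InnoDB full-text index, assuming the MySQL 5.6 requirement of an uppercase FTS_DOC_ID column of type BIGINT UNSIGNED NOT NULL (the articles table and its other columns are hypothetical):

CREATE TABLE articles (
  FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  title      VARCHAR(200),
  body       TEXT,
  PRIMARY KEY (FTS_DOC_ID),
  FULLTEXT KEY ft_title_body (title, body)
) ENGINE=InnoDB;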

Mario asked:

Q: Did you compare relevancy of search results between each engine?  If yes, which one seems the best?

No, I didn’t compare relevancy.  I was focused solely on query response time for this test. You’re right that the specific results probably vary based on the search implementation, and that’s an important factor in choosing a solution.

David S. asked:

Q: If searching on multiple terms, can you get Sphinx to report which matched?

The SHOW META command in Sphinx shows how many rows each keyword matched, and then a total for the search expression.  But I don’t know a way to report exactly which rows matched, without doing additional searches for each keyword.

Jessy B. asked:

Q: With respect to Solr and Sphinx, do indexes stay up to date with changes (inserts, updates and deletes)?

That’s a good question, because both of these external indexing technologies depend on being able to reindex as data in the source database changes.  This is an important consideration for choosing a full-text solution, because updating the index can become quite complex.

Solr is easier to add documents to, either individually as you add new data to the MySQL database, or else by periodically batch-inserting data that has changed since the last time you updated the index.  You can use the DataImportHandler to import the result of any SQL query, and if you can form a SELECT query that returns the “new” data (for example, WHERE updated_at > '2012-08-22 00:00:00', where that is the time of the prior update), you can do this anytime.
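A minimal sketch of such a delta query for the DataImportHandler (the articles table and its columns are hypothetical; the timestamp is whenever the previous import ran):

SELECT id, title, body, updated_at
FROM articles
WHERE updated_at > '2012-08-22 00:00:00';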

Sphinx Search is a bit harder, because it’s quite costly to update an index incrementally — it’s basically as expensive as creating the whole index.  For that reason, there are a couple of strategies used by Sphinx Search users to support changing data.  One is to store two indexes, one for historical data, and the other for the “delta” of recently-changed data.  Your application would have to be coded to search both indexes to find all matches among the most current data.  You would merge the delta index with the main index periodically.

Sphinx Search also supports a supplementary in-memory RT index type that supports direct row-by-row updates.  But you would still have to update the RT index as data changes, using application code.  Since RT indexes are in volatile memory, not stored on disk, you are responsible for integrating new data with the on-disk Sphinx index periodically by reindexing the whole collection.  There doesn’t currently seem to be a function to merge an RT index with an on-disk index, so integrating recent changes with the full index may require reindexing the whole dataset from time to time.

Mike W. asked:

Q: If the Sphinx index isn’t in sync, will the out-of-sync rows not be found?

No, only the documents included in the Sphinx Search indexes will be returned by Sphinx Search queries.

Mike W. also asked:

Q: What about MemSQL and indexes.  Have you benchmarked it?

According to their documentation, MemSQL supports hash indexes and skip list indexes, but not full-text indexes, so comparisons would not be meaningful.

Since MemSQL is an in-memory database, you can get a lot of speed improvement because you’re searching data without touching the disk, but I assume the search would necessarily do table-scans.

Jessy B. also asked:

Q: Were these tests performed on a single machine and a common/share set of disk?

The test machine I used is a Windows 7 tower with an Intel i3 processor, 8GB of RAM, and two SSD drives: one for the Windows partition, and one for the MySQL data partition.  I performed all the tests on this machine.  I realize this isn’t representative of modern production systems, but hopefully, by performing all the tests on the same hardware, I got results that are at least comparable to each other.

Hernan S. commented on the blog post where I announced the webinar:

We evaluated MySQL vs Solr.  I was able to index all the data from the database into Solr and make it queryable from a browser within four days plus some customization on the search algorithm. It would have taken me two to three weeks to do something equivalent with MySQL and it wouldn’t be as flexible and customizable as Solr. With Solr, I was able to fine tune search and I still feel there are tons of additional features that will help me address future needs.

Great points Hernan.  Each application project is different, and has different requirements for the FT functionality.  So one solution may include advanced features that are must-have for your application, but are not so important for another application.  I tried to test only the functionality all these solutions had in common, so I tested only simple queries without customizing the indexing.

Jeffrey S. asked:

Q: Do you know if there’s a good reference that discusses what technologies might be most adequate for various application types? (I can only think of my company’s video library search).  Talking about the relevance of search results, sorry if it wasn’t clear.

No, I don’t know of a reference that compares these technologies for different application types.  There are books that describe how to use one technology or the other, and some may compare one technology to the other, but typically these comparisons are made with respect to individual features, not in the context of application types.  You’d have to evaluate how relevant the search results are for your application’s needs; this isn’t something a benchmark can tell you.

Thanks for all the questions!

I’d like to see some of the folks who viewed my Full-Text Search Throwdown webinar when I present the popular Percona Training in Salt Lake City, September 24-27.  See http://www.percona.com/training/ for details on our training offerings and the schedule for upcoming events.

Aug 23, 2012

Announcing Percona Server 5.5.27-28.0

Percona is glad to announce the release of Percona Server 5.5.27-28.0 on August 23rd, 2012 (Downloads are available here and from the Percona Software Repositories).

Based on MySQL 5.5.27, and including all the bug fixes in it, Percona Server 5.5.27-28.0 is now the current stable release in the 5.5 series. All of Percona‘s software is open-source and free; all the details of the release can be found in the 5.5.27-28.0 milestone at Launchpad.

New Features:

  • Percona Server now supports a tunable buffer size for fast index creation in InnoDB. Previously this value was calculated based on the merge block size (which was hardcoded to 1 MB) and the minimum index record size. With the new session variable innodb_merge_sort_block_size, the block size that is used in the merge sort can now be adjusted for better performance (see the sketch after this list).
  • Percona Server has implemented the ability to have a MySQL utility user who has system access to do administrative tasks but limited access to user schemas. This feature is especially useful to those operating MySQL as a Service.
  • New Expanded Program Option Modifiers have been added to allow access control to system variables.
  • New table INNODB_UNDO_LOGS has been added to allow access to undo segment information. Each row represents an individual undo segment and contains information about which rollback segment the undo segment is currently owned by, which transaction is currently using the undo segment, and other size and type information for the undo segment. This information is ‘live’ and calculated for each query of the table.
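For illustration, a hedged sketch of using the new session variable during index creation; the table, column, and the 4 MB value are assumptions for the example, not defaults:

SET SESSION innodb_merge_sort_block_size = 4194304;   -- 4 MB instead of the previously hardcoded 1 MB
ALTER TABLE my_table ADD INDEX idx_col (col);         -- fast index creation uses the larger merge buffer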

Bugs Fixed:

  • Fixed an incorrect merge of the MySQL bug #61188 fix, which caused the server to freeze with “has waited at buf0buf.c line 2529 for XXX seconds the semaphore” errors. This regression was introduced in Percona Server 5.5.23-25.3. Bug fixed #1026926 (Stewart Smith).
  • Fixed regression introduced in Percona Server 5.5.23-25.3 when merging the upstream fix for MySQL bug #64284. Bug fixed #1015109 (Stewart Smith).
  • Fixed the upstream MySQL bug #66301. Concurrent INSERT … ON DUPLICATE KEY UPDATE statements on a table with an AUTO_INCREMENT column could result in spurious duplicate key errors (and, as a result, lost data due to some rows being updated rather than inserted) with the default value of innodb_autoinc_lock_mode=1. Bug fixed #1035225 (Alexey Kopytov).
  • Removed error log warnings that occurred after enabling innodb_use_sys_stats_table and before ANALYZE TABLE is run for each table. Bug fixed #890623 (Alexey Kopytov).
  • Removed the unneeded lrusort.py script. The server now does this sorting automatically and has done for some time. Bug fixed #882653 (Stewart Smith).
  • Fixed the malformed CHANGE MASTER query in the output of mysqldump with the --include-master-host-port option. Bug fixed #1013432 (Stewart Smith).

The fix for bug #1007268 has been targeted for the next Percona Server release. A workaround for this bug exists: set innodb_lazy_drop_table to 1.

Release notes for Percona Server 5.5.27-28.0 are available in our online documentation.

Aug 22, 2012

Submit your proposal for Percona Live London 2012 Now!

The call for papers is still open for Percona Live London 2012, but only for a few more days. We’re looking for great talks relevant to the MySQL ecosystem; they could be about MySQL and its variants, as well as technologies which are used together with MySQL. Please consider submitting if you’re actively using MySQL, Percona Server, MariaDB or Drizzle in production; the practical war stories are always very interesting. If you’re a practicing MySQL consultant or other service vendor, we would like to hear about your experience with multiple environments, which gives a unique perspective. If you’re working with MySQL or its variants at the code level and can share some perspective about internals or explain how features work in depth, that would be a gem.

Do not feel intimidated. Many of the most attended and appreciated talks are talks on simple topics done well. Many people come to the conferences to learn, and we need content for people who are only starting out in the MySQL community, not just advanced topics for veterans.

The call for papers is extended until August 26th to give you a few more days to gather your thoughts. We recognize that a lot of people have been on vacation in mid-August and could not submit in time.

Looking to attend instead? Registration is open now with super saver prices.
