Feb
28
2013
--

Percona Server on the Nexus 7: Your own MySQL Database Server on an Android Tablet

Following Roel’s post, Percona Server on the Raspberry Pi: Your own MySQL Database Server, I wondered what other crazy gadget I could run Percona Server on. Having an Asus Nexus 7 Android tablet, I decided to give it a try.

Everything below carries the risk of breaking your tablet if you do not know what you are doing, so be advised.

First, we need a rooted tablet, most likely with a custom ROM. I personally use SmoothROM, which already comes with root access and a suitable kernel.

Next, we install a Linux environment. I used the Linux Deploy app to deploy Ubuntu 12.04 LTS. It does not remove the current ROM and works pretty much like a regular Android app. The process is easy but time consuming: it took about 30 minutes to download and install all the needed packages. When the installation is done and Linux is started, we are able to SSH to the tablet and get a familiar Linux shell.

From here it looks just like a regular Linux installation, with the difference that this is the ARMv7 architecture, so we cannot download the regular 32- or 64-bit binaries built for Intel. And since we do not have ARM binaries of Percona Server, we will need to build from source.
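You can verify the architecture from the shell; the same value shows up later in version_compile_machine:

uname -m
armv7l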

So we install the build environment:

apt-get install build-essential cmake libncurses5-dev bison libaio-dev

download the Percona Server for MySQL source code:

wget http://www.percona.com/redir/downloads/Percona-Server-5.5/Percona-Server-5.5.29-29.4/source/Percona-Server-5.5.29-rel29.4.tar.gz

and unpack and build. No, it is not quite so simple: unfortunately there is a bug, Non-portable code in client plugin (fails on ARM), which prevents building from the unmodified sources.
We need to apply a small patch to the source, which is available here.

After that, we build the binaries:

cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community -DWITH_EMBEDDED_SERVER=OFF
make -j2
make install

If everything goes as expected, we will have binaries in /usr/local/mysql, which we can start.
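The exact commands depend on your layout, but a minimal sketch of the usual source-install startup sequence (assuming a ‘mysql’ OS user exists) is:

cd /usr/local/mysql
scripts/mysql_install_db --user=mysql  # create the system tables and data directory
bin/mysqld_safe --user=mysql &         # start the server in the background
bin/mysql -uroot                       # connect with the client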

mysql> show variables like 'version%';
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| version                 | 5.5.29                       |
| version_compile_machine | armv7l                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+

And a quick, admittedly useless CPU benchmark, just to get some sense of the tablet CPU’s performance:

Processor       : ARMv7 Processor rev 9 (v7l)
processor       : 0
BogoMIPS        : 1993.93
mysql> SELECT BENCHMARK(10000000,ENCODE('hello','goodbye'));
+-----------------------------------------------+
| BENCHMARK(10000000,ENCODE('hello','goodbye')) |
+-----------------------------------------------+
|                                             0 |
+-----------------------------------------------+
1 row in set (5.12 sec)

To compare, the same benchmark on a Cisco UCS server:

model name      : Intel(R) Xeon(R) CPU           X5670  @ 2.93GHz
bogomips        : 5851.73
mysql> SELECT BENCHMARK(10000000,ENCODE('hello','goodbye'));
+-----------------------------------------------+
| BENCHMARK(10000000,ENCODE('hello','goodbye')) |
+-----------------------------------------------+
|                                             0 |
+-----------------------------------------------+
1 row in set (1.46 sec)

The next step is to combine two tablets and run Percona XtraDB Cluster for MySQL, but there is another bug, Can’t build galera on ARM architecture, which needs to be fixed first.

Now, I have a hard time thinking of a practical application for running a MySQL database on a tablet, but this was more about checking the capabilities of mobile gadgets, which are quite impressive, I must say.

The post Percona Server on the Nexus 7: Your own MySQL Database Server on an Android Tablet appeared first on MySQL Performance Blog.

Feb
28
2013
--

MySQL 5.5 lock_wait_timeout: patience is a virtue, and a locked server

Like Ovais said in Implications of Metadata Locking Changes in MySQL 5.5, the hot topic these days is MySQL 5.6, but there was an important metadata locking change in MySQL 5.5.  As I began to dig into the Percona Toolkit bug he reported concerning this change apropos pt-online-schema-change, I discovered something about lock_wait_timeout that shocked me.  From the MySQL 5.5 docs for lock_wait_timeout:

This variable specifies the timeout in seconds for attempts to acquire metadata locks. The permissible values range from 1 to 31536000 (1 year). The default is 31536000.

Translation: by default, MySQL 5.5 will “meta-block” for 1 year!  In my humble opinion, this is a bug, especially given the various subtle and sometimes quiet ways that metadata locking can lock the server as Ovais describes in his blog post.  The default for innodb_lock_wait_timeout, by comparison, is 50 seconds.  That’s reasonable, but 31536000 is not.  I would only set a timeout, or any kind of wait or interval value, to such a high value to play a practical joke on someone.
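If you agree, a saner value can be set at runtime; a minimal sketch (50 simply mirrors the innodb_lock_wait_timeout default mentioned above, so pick whatever suits your workload):

mysql> SET GLOBAL lock_wait_timeout = 50;   -- picked up by new sessions
mysql> SET SESSION lock_wait_timeout = 50;  -- affects only the current session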

Second, and finally, if: “This variable specifies the timeout in seconds for attempts to acquire metadata locks,” then why isn’t it called metadata_lock_wait_timeout?

In conclusion: be careful when upgrading to MySQL 5.5, because the solstices may pass before your DDL statement completes.

The post MySQL 5.5 lock_wait_timeout: patience is a virtue, and a locked server appeared first on MySQL Performance Blog.

Feb
27
2013
--

MySQL optimizer: ANALYZE TABLE and Waiting for table flush

The MySQL optimizer decides which execution plan to use based on information provided by the storage engines. In some engines, such as InnoDB, that information is not exact because it is based on statistical estimates, so some tuning is sometimes needed. In InnoDB these statistics are calculated automatically; check the following blog post for more information:

http://www.mysqlperformanceblog.com/2011/10/06/when-does-innodb-update-table-statistics-and-when-it-can-bite/

There are some variables that tune how those statistics are calculated, but we would need to wait until the gathering process triggers again to see whether there is any improvement. Usually the first step in trying to get back to the previous execution plan is to force that process with ANALYZE TABLE, which is usually fast enough not to cause trouble.

Let’s see an example of how a simple and fast ANALYZE can cause downtime.

Waiting for table flush:

In order to trigger this problem we need:

– Lots of concurrency
– A long running query
– Run an ANALYZE TABLE on a table accessed by the long running query

So first we need a long running query against table t:

SELECT * FROM t WHERE c > '%c%';

Then, in our effort to get a better execution plan for another query, we run ANALYZE TABLE:

mysql> analyze table t;
+--------+---------+----------+----------+
| Table  | Op      | Msg_type | Msg_text |
+--------+---------+----------+----------+
| test.t | analyze | status   | OK       |
+--------+---------+----------+----------+
1 row in set (0.00 sec)

Perfect, very fast! But some seconds later we realize that our application is down. Let’s look at the process list. I’ve removed most of the columns to make it clearer:

+------+-------------------------+---------------------------------+
| Time | State                   | Info                            |
|   32 | Writing to net          | select * from t where c > '%0%' |
|   12 | Waiting for table flush | select * from test.t where i=1  |
|   12 | Waiting for table flush | select * from test.t where i=2  |
|   12 | Waiting for table flush | select * from test.t where i=3  |
|   11 | Waiting for table flush | select * from test.t where i=7  |
|   10 | Waiting for table flush | select * from test.t where i=11 |
|   11 | Waiting for table flush | select * from test.t where i=5  |
|   11 | Waiting for table flush | select * from test.t where i=4  |
|   11 | Waiting for table flush | select * from test.t where i=9  |
|   11 | Waiting for table flush | select * from test.t where i=8  |
|   11 | Waiting for table flush | select * from test.t where i=12 |
|   11 | Waiting for table flush | select * from test.t where i=14 |
|   10 | Waiting for table flush | select * from test.t where i=6  |
|   10 | Waiting for table flush | select * from test.t where i=15 |
|   10 | Waiting for table flush | select * from test.t where i=10 |
[...]

The ANALYZE TABLE itself runs perfectly, but afterwards the rest of the threads running queries against that table need to wait. This is because MySQL has detected that the underlying table has changed, so it needs to close and reopen it using a flush. The table will therefore be locked until all queries that are using it finish. There are only two ways out of this situation: wait until the long query finishes, or kill the query. Also, we have to take into account that killing a query can cause even more trouble. If we are dealing with a write query on InnoDB, the rollback process can take even longer to finish than the original query. On the other hand, if the table is MyISAM there is no rollback process, so the already-updated rows can’t be recovered.

This particular problem is not limited to ANALYZE. Other commands like FLUSH TABLES, ALTER, RENAME, OPTIMIZE or REPAIR can cause threads to wait on “Waiting for tables”, “Waiting for table” and “Waiting for table flush”.

Conclusion

Before running ANALYZE TABLE or any other command listed above, check the running queries. If the table you are going to work on is heavily used, the recommendation is to run the command during a low-load period or a maintenance window.
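One quick way to check is to look for long-running statements touching the table in the processlist; a sketch, where the 60-second threshold and table name are placeholders:

mysql> SELECT id, user, time, state, info
       FROM information_schema.processlist
       WHERE time > 60 AND info LIKE '%your_table%';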

The post MySQL optimizer: ANALYZE TABLE and Waiting for table flush appeared first on MySQL Performance Blog.

Feb
26
2013
--

Fight! Fight! Fight! Excerpt from Ocean of Dust

Anyone want to read another excerpt from Ocean of Dust? Sure you do :)

Pop over to The Book Hookup and take a look. A little while ago, Celeste over at The Book Hookup did a review too.

After reading the excerpt, stay there and scroll down for a giveaway, but only if you like winning free stuff!

That scene required help from my writing group to get right. Not being a girl (duh!), I didn’t know how they fought. In my original draft I had them punching each other and all sorts of things. The women in my writing group (bless them!) explained to me that “that isn’t how it works”. Apparently girls go in for hair pulling and hair dragging. Who knew? So hopefully you female readers are now nodding your heads in agreement at that fight, perhaps remembering your own school days?

I recommend you add The Book Hookup to your RSS reader or follow it by email. It’s a great review site. Thanks, Celeste.

How dirty did you play in your fights as a kid? Please leave a comment below. Maybe I can feature your signature move in a future book. That would be fun. :)

 

Feb
26
2013
--

InnoDB Full-text Search in MySQL 5.6 (part 1)

I’ve never been a very big fan of MyISAM; I would argue that in most situations, any possible advantages to using MyISAM are far outweighed by the potential disadvantages and the strengths of InnoDB. However, up until MySQL 5.6, MyISAM was the only storage engine with support for full-text search (FTS). And I’ve encountered many customers for whom the prudent move would be a migration to InnoDB, but due to their use of MyISAM FTS, the idea of a complete or partial migration was, for one reason or another, an impractical solution. So, when FTS for InnoDB was first announced, I thought this might end up being the magic bullet that would help these sorts of customers realize all of the benefits that have been engineered into InnoDB over the past few years and still keep their FTS capability without having to make any significant code changes.

Unfortunately, I think that hope may be premature. While it is true that InnoDB FTS in MySQL 5.6 is syntactically identical to MyISAM FTS, in the sense that the SQL required to run a MATCH .. AGAINST is the same (modulo any new features introduced with InnoDB FTS), that’s largely where the similarities end.

NOTE 1: I was originally planning to cover everything I wanted to discuss with respect to InnoDB FTS in this one post, but I think there’s a lot of interesting stuff here, so I will break it into three pieces instead. The first part (this one) will be a very quick overview of FTS in InnoDB and some observations that I’ve made while getting it configured. The second part will compare query results between MyISAM FTS and InnoDB FTS over the same data sets, and then finally in the third installment, we’ll look at query performance. In the event that a new release of 5.6 appears between now and part 3, I’ll also revisit some of the “quirks” from parts 1 and 2 to see if the behavior has changed.

NOTE 2: For purposes of this discussion, I used two separate data sets. The first one is a set of about 8K very SEO-stuffed web pages, wherein the document title is the page title, and the document body is the HTML-tag-stripped body of the page. We’ll call this data set “SEO” – it’s about 20MB of actual data. The other one is a set of about 790K directory records, each one containing the name, address, and some other public-records-type information about each person. We’ll call this data set “DIR”, and it’s about 155MB of actual data.

NOTE 3: Also, keep in mind that I used the community editions of MySQL 5.5.30 and MySQL 5.6.10 with no tuning whatsoever (with one exception that I’ll explain below) – the idea behind this investigation wasn’t to find out how to make InnoDB FTS blazingly fast, but simply to get a sense of how it works compared to traditional MyISAM FTS. We’ll get to performance in the third installment. For now, the important number to note is that the InnoDB buffer pool for my 5.6 instance is 128MB – smaller than the size of my DIR data.

So, with all of that out of the way, let’s get to it.

Here is our basic table definition for the DIR dataset. The table for the SEO dataset looks identical, except that we replace “full_name” with a VARCHAR(255) “title” and “details” with a TEXT “body”.

CREATE TABLE `dir_test_innodb` (
  `id` int(10) unsigned NOT NULL,
  `full_name` varchar(100) DEFAULT NULL,
  `details` text,
  PRIMARY KEY (`id`),
  FULLTEXT KEY `full_name` (`full_name`,`details`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

We also have identical tables created in 5.5.30 where, of course, the only difference is that the engine is MyISAM rather than InnoDB. Loading the data was done via a simple Perl script, inserting one row at a time with AutoCommit enabled – remember, the focus here isn’t on performance just yet.

Having loaded the data, the first thing we notice is that there are a lot of “new” InnoDB tablespace files in our database directory:

-rw-rw----. 1 mysql mysql      8632 Feb 20 15:54 dir_test_innodb.frm
-rw-rw----. 1 mysql mysql 213909504 Feb 20 15:55 dir_test_innodb.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 16:09 FTS_00000000000000ad_0000000000000153_DOC_ID.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 16:09 FTS_00000000000000ad_ADDED.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 16:09 FTS_00000000000000ad_BEING_DELETED_CACHE.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 16:09 FTS_00000000000000ad_BEING_DELETED.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 16:09 FTS_00000000000000ad_CONFIG.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 16:09 FTS_00000000000000ad_DELETED_CACHE.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 16:09 FTS_00000000000000ad_DELETED.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 16:09 FTS_00000000000000ad_STOPWORDS.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 15:54 FTS_0000000000000114_0000000000000144_DOC_ID.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 15:54 FTS_0000000000000114_ADDED.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 15:54 FTS_0000000000000114_BEING_DELETED_CACHE.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 15:55 FTS_0000000000000114_BEING_DELETED.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 15:55 FTS_0000000000000114_CONFIG.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 15:54 FTS_0000000000000114_DELETED_CACHE.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 15:55 FTS_0000000000000114_DELETED.ibd
-rw-rw----. 1 mysql mysql     98304 Feb 20 15:54 FTS_0000000000000114_STOPWORDS.ibd
-rw-rw----. 1 mysql mysql      8618 Feb 20 16:09 seo_test_innodb.frm
-rw-rw----. 1 mysql mysql  37748736 Feb 20 16:29 seo_test_innodb.ibd

By comparison, this is what we see on the MyISAM side:

-rw-rw----. 1 mysql mysql      8632 Feb 19 17:17 dir_test_myisam.frm
-rw-rw----. 1 mysql mysql 155011048 Feb 19 17:17 dir_test_myisam.MYD
-rw-rw----. 1 mysql mysql 153956352 Feb 19 17:18 dir_test_myisam.MYI
-rw-rw----. 1 mysql mysql      8618 Feb 20 16:11 seo_test_myisam.frm
-rw-rw----. 1 mysql mysql  21561096 Feb 20 16:11 seo_test_myisam.MYD
-rw-rw----. 1 mysql mysql  14766080 Feb 20 16:11 seo_test_myisam.MYI

I also observed that if I simply load the data into an InnoDB table that has never had a full-text index on it, and then I create one, the following warning is generated:

mysql> alter table dir_test_innodb ADD FULLTEXT KEY (full_name, details);
Query OK, 0 rows affected, 1 warning (39.73 sec)
Records: 0  Duplicates: 0  Warnings: 1

mysql> show warnings;
+---------+------+--------------------------------------------------+
| Level   | Code | Message                                          |
+---------+------+--------------------------------------------------+
| Warning |  124 | InnoDB rebuilding table to add column FTS_DOC_ID |
+---------+------+--------------------------------------------------+

This doesn’t make a lot of sense to me. Why does InnoDB need to add a hidden column (similar to GEN_CLUST_INDEX when you don’t define a PRIMARY KEY, I assume) when I already have an INT UNSIGNED PK that should suffice as any sort of document ID? As it turns out, if you create a column called FTS_DOC_ID as a BIGINT UNSIGNED NOT NULL with a unique index on it, your table doesn’t need to be rebuilt. The most important item to note here: FTS_DOC_ID must be spelled and specified exactly that way, IN ALL CAPS. If you try “fts_doc_id” or any other mixture of lettercase, you’ll get an error:

mysql> CREATE TABLE dir_test_innodb4 (fts_doc_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY);
ERROR 1166 (42000): Incorrect column name 'fts_doc_id'

Various points in the documentation mention the notion of a “document ID” that “might reflect the value of an ID column that you defined for the underlying table, or it can be a sequence value generated by InnoDB when the table does not contain a suitable column,” but there are only a handful of references to FTS_DOC_ID found when searching the MySQL 5.6 manual, and the only page which appears to suggest how to use an explicitly-defined column is this one, which discusses improving bulk insert performance. At the very bottom, the page claims that you can speed up bulk loading into an InnoDB FT index by declaring a column called FTS_DOC_ID at table creation time of type BIGINT UNSIGNED NOT NULL with a unique index on it, loading your data, and then creating the FT index after the data is loaded.
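Put together, a definition along those lines might look like the following sketch, which uses AUTO_INCREMENT to fill in the sequence values that the instructions otherwise leave you to generate yourself:

CREATE TABLE `dir_test_innodb3` (
  `FTS_DOC_ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `full_name` varchar(100) DEFAULT NULL,
  `details` text,
  PRIMARY KEY (`FTS_DOC_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8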

One obvious problem with those instructions is that if you define a column and a unique key as they suggest, your data won’t load due to unique key constraint violations unless you also do something to provide some sort of sequence value for that column, whether as an auto_increment value or via some other means. But the bit that troubles me further is the introduction of a column-level case-sensitivity requirement that only seems to matter at table creation time. Once I’ve got a table with an explicit FTS_DOC_ID column, however, MySQL apparently has no problem with either of the following statements:

mysql> insert into dir_test_innodb3 (fts_doc_id, full_name) values (1, 'john smith');
mysql> select * from dir_test_innodb3 where fts_doc_id=1;

Philosophically, I find that kind of behavior unsettling. I don’t like case-sensitivity in my table or column names to begin with (I may be one of the few people that likes lower_case_table_names = 1 in /etc/my.cnf), but I think it’s even worse that the case-sensitivity only matters some of the time. That strikes me as a good recipe for DBA frustration.

Now, let’s return to all of those FTS_*.ibd files. What, exactly, are they? In short, the _CONFIG.ibd file contains configuration info about the FT index (the same sort of configuration data that can be queried out of the I_S.INNODB_FT_CONFIG table, as I’ll discuss momentarily), and the others contain document IDs of new rows that are added to or removed from the table and which need to be merged back into or removed from the main index. I’m not entirely sure about the _STOPWORDS.ibd file just yet; I thought it might be somehow related to a custom stopwords table, but that doesn’t seem to be the case (or at least not in the way that I had thought), so I will need to look through the source to figure out what’s going on there.

In any case, for each new FULLTEXT KEY that you create, you’ll get a corresponding FTS_*_DOC_ID.ibd file (but none of the others), and if you drop an FT index, its corresponding FTS_*_DOC_ID.ibd file will also be removed. HOWEVER, even if you drop all of the FT indexes for a given table, you’re still left with all of the other FTS_*.ibd files, and it appears that the only way to get rid of them is to actually rebuild the table.

-rw-rw----. 1 mysql mysql 98304 Feb 20 18:57 FTS_0000000000000025_ADDED.ibd
-rw-rw----. 1 mysql mysql 98304 Feb 20 18:57 FTS_0000000000000025_BEING_DELETED_CACHE.ibd
-rw-rw----. 1 mysql mysql 98304 Feb 20 18:57 FTS_0000000000000025_BEING_DELETED.ibd
-rw-rw----. 1 mysql mysql 98304 Feb 20 19:07 FTS_0000000000000025_CONFIG.ibd
-rw-rw----. 1 mysql mysql 98304 Feb 20 18:57 FTS_0000000000000025_DELETED_CACHE.ibd
-rw-rw----. 1 mysql mysql 98304 Feb 20 18:57 FTS_0000000000000025_DELETED.ibd
-rw-rw----. 1 mysql mysql 98304 Feb 20 18:57 FTS_0000000000000025_STOPWORDS.ibd
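The rebuild itself can be forced with a null ALTER, e.g. for the table from this example:

mysql> ALTER TABLE dir_test_innodb ENGINE=InnoDB;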

Also, while we’re on the subject of adding and dropping FT indexes, it’s entirely possible to DROP multiple FT indexes with InnoDB in the same ALTER TABLE statement, but it’s not possible to CREATE more than one at a time. If you try it, this is what happens:

mysql> alter table dir_test_innodb ADD FULLTEXT KEY (full_name, details), 
       ADD FULLTEXT KEY (details);
ERROR 1795 (HY000): InnoDB presently supports one FULLTEXT index creation at a time

That’s an odd limitation. Do it as two separate ALTER statements, and it appears to work fine.
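In other words, the two-statement equivalent that does work:

mysql> ALTER TABLE dir_test_innodb ADD FULLTEXT KEY (full_name, details);
mysql> ALTER TABLE dir_test_innodb ADD FULLTEXT KEY (details);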

But here’s where things start to get even weirder. According to the documentation, if we specify the name of a table that has a FT index for the global variable innodb_ft_aux_table, we should be able to get some statistics about the FT indexes on that table by querying the various I_S.INNODB_FT_* tables. In particular, the INNODB_FT_CONFIG table is supposed to “display metadata about the FULLTEXT index and associated processing for an InnoDB table.” The documentation also tells us that we can keep our FT indexes up to date by setting innodb_optimize_fulltext_only = 1 and then running OPTIMIZE TABLE, and that we might have to run OPTIMIZE TABLE multiple times if we’ve had a lot of changes to the table.

This all sounds pretty good, in theory, but at least some part of it doesn’t seem to work. First, let’s check the stats immediately after setting these variables, and then let’s push some additional data into the table, run an OPTIMIZE or two, delete some data, and see what happens:

mysql> set global innodb_ft_aux_table='test/dir_test_innodb';
mysql> set global innodb_optimize_fulltext_only=1;
mysql> select * from information_schema.innodb_ft_config;
+---------------------------+--------+                                                  
| KEY                       | VALUE  |                                                  
+---------------------------+--------+                                                  
| optimize_checkpoint_limit | 180    |                                                  
| synced_doc_id             | 787625 |                                                  
| last_optimized_word       |        |                                                  
| deleted_doc_count         | 0      |                                                  
| total_word_count          |        |                                                  
| optimize_start_time       |        |                                                  
| optimize_end_time         |        |                                                  
| stopword_table_name       |        |                                                  
| use_stopword              | 1      |
| table_state               | 0      |
+---------------------------+--------+
10 rows in set (0.00 sec)

mysql> insert into dir_test_innodb (full_name, details) 
       SELECT reverse(full_name), details FROM dir_test_innodb WHERE id < 500000; 
Query OK, 245051 rows affected (8 min 13.13 sec) 
Records: 245051  Duplicates: 0  Warnings: 0 

mysql> select * from information_schema.innodb_ft_config;
+---------------------------+---------+
| KEY                       | VALUE   |
+---------------------------+---------+
| optimize_checkpoint_limit | 180     |
| synced_doc_id             | 1028261 |
| last_optimized_word       |         |
| deleted_doc_count         | 0       |
| total_word_count          |         |
| optimize_start_time       |         |
| optimize_end_time         |         |
| stopword_table_name       |         |
| use_stopword              | 1       |
| table_state               | 0       |
+---------------------------+---------+
10 rows in set (0.02 sec)

mysql> optimize table dir_test_innodb;
+----------------------+----------+----------+----------+
| Table                | Op       | Msg_type | Msg_text |
+----------------------+----------+----------+----------+
| test.dir_test_innodb | optimize | status   | OK       |
+----------------------+----------+----------+----------+
1 row in set (4.60 sec)

mysql> select * from information_schema.innodb_ft_config;
+---------------------------+---------+
| KEY                       | VALUE   |
+---------------------------+---------+
| optimize_checkpoint_limit | 180     |
| synced_doc_id             | 1032677 |
| last_optimized_word       |         |
| deleted_doc_count         | 0       |
| total_word_count          |         |
| optimize_start_time       |         |
| optimize_end_time         |         |
| stopword_table_name       |         |
| use_stopword              | 1       |
| table_state               | 0       |
+---------------------------+---------+
10 rows in set (0.00 sec)

mysql> delete from dir_test_innodb LIMIT 200000;
Query OK, 200000 rows affected (8.65 sec)

mysql> optimize table dir_test_innodb;           
+----------------------+----------+----------+----------+
| Table                | Op       | Msg_type | Msg_text |
+----------------------+----------+----------+----------+
| test.dir_test_innodb | optimize | status   | OK       |
+----------------------+----------+----------+----------+
1 row in set (8.44 sec)

mysql> select * from information_schema.innodb_ft_config;
+---------------------------+---------+
| KEY                       | VALUE   |
+---------------------------+---------+
| optimize_checkpoint_limit | 180     |
| synced_doc_id             | 1032677 |
| last_optimized_word       |         |
| deleted_doc_count         | 0       |
| total_word_count          |         |
| optimize_start_time       |         |
| optimize_end_time         |         |
| stopword_table_name       |         |
| use_stopword              | 1       |
| table_state               | 0       |
+---------------------------+---------+

I ran OPTIMIZE TABLE several more times, and each execution took between 6 and 8 seconds, but the output of the query against I_S.innodb_ft_config never changed, so it seems like at least some of the diagnostic output isn’t working quite right. Intuitively, I would expect some changes in total_word_count, or optimize_(start|end)_time, and the like. However, if I check some of the other I_S tables, I do find that the number of rows in I_S.innodb_ft_index_table is changing, so it’s pretty clear that I do actually have a FT index available.

At the start of this post, I mentioned that I did make one configuration change to the default InnoDB settings for MySQL 5.6, and that was to change innodb_ft_min_token_size from the default of 3 to a value of 4 so that it would be identical to the MyISAM default. After all, the (naive?) hope here is that when I run an FTS query against both MyISAM and InnoDB I will get back the same results; if this equivalence doesn’t hold, then as a consultant, I might have a hard time recommending this feature to my customers, and as an end user, I might have a hard time using the feature at all, because it could completely alter the behavior of my application unless I also make a nontrivial number of code changes.
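For reference, that change is a single line in my.cnf. Note that innodb_ft_min_token_size is not a dynamic variable, so it requires a server restart, and any FT indexes built with the old value need to be rebuilt before the change takes effect:

[mysqld]
innodb_ft_min_token_size = 4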

In part 2 of this series, we’ll reload and reset our SEO and DIR data sets back to their initial states, run some queries, and compare the output. Stay tuned, it gets rather curious.

The post InnoDB Full-text Search in MySQL 5.6 (part 1) appeared first on MySQL Performance Blog.

Feb
26
2013
--

Announcing Percona Server for MySQL version 5.5.29-30.0

Percona is glad to announce the release of Percona Server for MySQL version 5.5.29-30.0 on February 26th, 2013 (downloads are available here and from the Percona Software Repositories). Based on MySQL 5.5.29, including all the bug fixes in it, Percona Server 5.5.29-30.0 is now the current stable release in the 5.5 series. All of Percona’s software is open-source and free; all the details of the release can be found in the 5.5.29-30.0 milestone at Launchpad.

New Features:

  • Ported the Thread Pool patch from MariaDB. This feature enables the server to maintain top performance even with a large number of client connections.
  • New user statements have been introduced for handling the XtraDB changed page tracking log files.
  • In addition to the --debug build option for the build-binary.sh script, a new --valgrind option has been introduced, which produces debug builds with Valgrind instrumentation enabled.

Bugs Fixed:

  • Ported a fix from MariaDB for the upstream bug #67974, which caused a server crash on concurrent ALTER TABLE and SHOW ENGINE INNODB STATUS. Bug fixed #1017192 (Sergei Glushchenko).
  • The server could crash when executing an INSERT or UPDATE statement containing BLOB values for a compressed table. This regression was introduced in Percona Server 5.5.28-29.2. Bug fixed #1100159 (Laurynas Biveinis).
  • Upstream bug #67983 was causing a memory leak on a filtered slave. Bug fixed #1042946 (Sergei Glushchenko).
  • Percona Server would fail to install on a vanilla Ubuntu 12.04 server. Bug fixed #1103655 (Ignacio Nin).
  • The master thread was doing dirty buffer pool flush list reads to make its adaptive flushing decisions. Fixed by acquiring the flush list mutex around the list scans. Bug fixed #1083058 (Laurynas Biveinis).
  • Upstream changes made to improve InnoDB DROP TABLE performance were not adjusted for XtraDB. This could cause server assertion errors. Bugs fixed #934377, #1111211, #1116447 and #1110102 (Laurynas Biveinis).
  • XtraDB used to print the open read view list without taking the kernel mutex, so any list element could become invalid during iteration. Fixed by taking the kernel mutex. Bug fixed #1101030 (Laurynas Biveinis).
  • When innodb_flush_method=O_DIRECT was set, log bitmap files were created and treated as InnoDB data files for flushing purposes, which wasn’t the original intention. Bug fixed #1105709 (Laurynas Biveinis).
  • The INFORMATION_SCHEMA plugin name innodb_changed_pages also serves as a command line option, but it is a prefix of another command line option, innodb_changed_pages_limit. MySQL option handling would shadow the former with the latter, resulting in startup errors. Fixed by renaming the innodb_changed_pages_limit option to innodb_max_changed_pages. Bug fixed #1105726 (Laurynas Biveinis).
  • Time in the slow query log was displayed incorrectly when the slow_query_log_timestamp_precision variable was set to microseconds. Bug fixed #887928 (Laurynas Biveinis).
  • Writing a bitmap larger than 4GB would cause the write to fail. Also, a write error for every bitmap page except the first one would result in heap corruption. Bug fixed #1111226 (Laurynas Biveinis).
  • Fixed the upstream bug #67504 that caused spurious duplicate key errors. The errors would happen if a trigger was fired while a slave was processing replication events for a table that is present only on the slave, while there were updates on the master to a replicated table used in that trigger. For this to happen, the master needs to have more than one auto-increment table and the slave needs to have at least one of those tables specified in replicate-ignore-table. Bug fixed #1068210 (George Ormond Lorch III).
  • Fixed failing rpm builds, which were caused by missing files. Bug fixed #1099809 (Alexey Bychko).
  • Fixed the upstream bug #68116 that caused a server crash with an assertion error when the InnoDB monitor with verbose lock info was used under heavy load. This bug affects only debug builds. Bug fixed #1100178 (Laurynas Biveinis).
  • XtraDB changed page tracking wasn’t compatible with innodb_force_recovery=6: log tracking initialization would fail when starting the server, and the server would abort on startup. Bug fixed #1083596 (Laurynas Biveinis).
  • A newly created bitmap file would silently overwrite the old one if they had the same file name. Bug fixed #1111144 (Laurynas Biveinis).
  • The server would stop with an assertion error in I/O and AIO routines if a large innodb_log_block_size value was used in combination with changed page tracking. Bug fixed #1114612 (Laurynas Biveinis).
  • The InnoDB monitor was prefetching data pages for printing lock information even if no lock information was going to be printed. Bug fixed #1100643 (Laurynas Biveinis).
  • InnoDB and query plan information were being logged even when they weren’t enabled for the slow query log. Bug fixed #730173 (Laurynas Biveinis).
  • Fixed the incorrect help text for slow_query_log_timestamp_precision. Bug fixed #1090965 (Laurynas Biveinis).

Other bugs fixed: #909376, #1082437, #1083669, #1096904, #1091712, #1096899, #1088954, #1096895, #1092142, #1090874, #1089961, #1088867, #1089031, and #1108874 (all Laurynas Biveinis).

Release notes for Percona Server 5.5.29-30.0 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Announcing Percona Server for MySQL version 5.5.29-30.0 appeared first on MySQL Performance Blog.

Feb
26
2013
--

Why do we care about MySQL Performance at High Concurrency?

In many MySQL benchmarks we see performance compared at rather high levels of concurrency, in some cases reaching 4,000 or more concurrent threads which hammer the database as quickly as possible, resulting in hundreds or even thousands of concurrently active queries.

The question is: how common is this in production? The typical metric for concurrency, defined as the number of queries actually being processed, is “Threads_running”, which you can easily check on your production system:

root@smt2:/mnt/data/ mysqladmin extended -i1 | grep Threads_running
| Threads_running                               | 60          |
| Threads_running                               | 61          |
| Threads_running                               | 63          |
| Threads_running                               | 62          |
| Threads_running                               | 62          |
| Threads_running                               | 46          |

In my experience, most systems out in the field run with concurrency of no more than the low tens under normal load. Many have monitoring set up to alert if the number of threads running jumps over 50-100 and stays there for any significant amount of time.

So if people do not really run MySQL at high concurrency, does it actually matter how MySQL performs at high concurrency, or is it just a marketing gimmick to promote new software versions?

Performance at high concurrency is important, but for other reasons. Real-world systems differ from benchmarks in that they typically do not have fixed concurrency; instead they have to serve requests as they come in, which can be close to “random arrivals” but in practice can be a lot more complicated. There is a chance both for a burst of queries to hit the database server at almost the same time (often when some stall happens on an external system, such as a memcached server) and for the database server itself to experience a “micro stall” that causes a buildup of queries. Such buildups can happen very quickly.

Imagine, for example, some high-volume Web system. It might well have 100 Web servers, each with Apache configured to run up to 100 Apache children concurrently, each of which might open a connection to the MySQL server. This ends up with a very high number of up to 10K connections and potentially running queries. Now imagine we typically have 30K queries/sec coming in from the Web level with about 1ms average latency, which requires only around 30 queries to be running at the same time. Imagine now the database stalls for just 100ms, which you most likely would not even notice with the naked eye. Given our inflow rate, this results in an expected 3,000 queries backed up, quite likely coming from 1,000 or more connections.
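This is just Little’s law (requests in flight = arrival rate x time) applied to the numbers above:

30,000 queries/sec x 0.001 sec average latency = ~30 queries running concurrently
30,000 queries/sec x 0.100 sec stall           = ~3,000 queries backed up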

This is when performance at high concurrency is the difference between life and death. Take a look at these graphs: the blue line of MySQL Community Server delivers just 1/4 of its peak performance at a concurrency of 1,000, while the red line of MySQL Enterprise Edition with the Thread Pool plugin remains very close to the peak.

In practice this means one server will be able to process the backlog very quickly and recover after such a stall, while the other server will be depressed and may be unable to serve the inflow of requests, sinking further and further under water. Some systems may never recover in this case until the Web server is restarted or the load is removed some other way; others will recover, but taking a lot more time and with a lot more user impact. After all, slower responses cause users to submit fewer requests to the system, reducing the load.

So yes, performance at high concurrency matters, as it helps systems in distress. But performance at low and medium concurrency matters too, as this is what defines system performance during normal operation.

P.S. I think it would be interesting to see what levels of concurrency people are running in the field. If you can run the command above on your loaded production servers (ones which are not in distress), please share the results.

The post Why do we care about MySQL Performance at High Concurrency? appeared first on MySQL Performance Blog.

Feb
25
2013
--

Percona Server on the Raspberry Pi: Your own MySQL Database Server for Under $80

There are many reasons for wanting a small MySQL database server:

  • You’re a uni student who wants to learn the SQL language better and needs a mini-testbox
  • You’re a Windows user who wants to play around with Percona Server on Linux
  • You’re a corporate application developer who wants a small SQL development & test box
  • You’re an Internet startup that just needs a tiny database server without all the added costs

So, how about setting up a small Arch Linux ARMv6-based hardware device which runs Percona Server for MySQL, in a space not much bigger than your mouse, with the power consumption of only a smartphone charger, fully networked, all for under $80?

Introducing the Raspberry Pi with Percona Server:

Raspberry Pi Fully Connected (Only the power + network connections are needed in this tutorial)

A small disclaimer first: Arch Linux is not currently on our list of supported OS’es (i.e. we do not promise to provide binaries, even though they are available from the Arch Linux AUR repository); however, if you do happen to run into issues, our support service engineers are happy to help.

To get started, here is what you need (prices in AU$):

  • The Raspberry Pi itself (about $36)
    • Make sure to get the v2 (512MiB SDRAM) version
  • A plastic case (~$12 on eBay which includes 3 heatsinks)
  • An SD card reader (likely built into your laptop or PC already)
  • Network setup
    • A network lead and spare network port on your router
    • A DHCP server (your router likely has this enabled already)
  • A quality 2GB (or more) SDHC Card
    • I use a SanDisk 8GB Class10 30MB/s card (~$8.50 on eBay)
  • A 1Amp+ micro USB power supply + cable
    • I use a 2Amp dual portapow supply (~$20 on eBay)

Total cost: ~76.5 AU$ (= ~79 US$).

Optionally, if your screen has a DVI port, you can get a HDMI Cable + DVI (male)-to-HDMI (female) connector (~$12). This enables you to connect the Raspberry Pi to your screen, and so follow what is happening on the console of the Raspberry Pi. Make sure to get the proper DVI connector for your screen as many connectors that are sold have a wrong pin layout (most standard computer screens use a DVI-D connector, not a DVI-I one).

Alternatively, if you really do end up needing to view the console of your Raspberry Pi (for example to configure an IP address in Arch Linux if you do not have a DHCP server), you can simply hook the Raspberry Pi to your TV set using the Composite RCA (PAL & NTSC) connector.

Once you have your hardware, here is how to get Percona Server for MySQL up and running:

  1. Insert the SD card into a slot on your laptop or desktop PC.
  2. Download Fedora ARM Installer (available for Linux and Windows).
  3. Download the Arch Linux ARM image from the Raspberry Pi website and unzip.
  4. Start Fedora ARM Installer (in Windows make sure to right-click the exe and “Run as administrator”).
    • Select the image file (an approximately 1.8GB .img file) and the destination drive to write the image to. Make sure you have selected the right drive, then click “Install”.
    • The image gets written to a 2GB partition on the card. If you have a larger card, you can add another partition later.
  5. Software-eject the SD card using your OS’s eject mechanism (to prevent partial writes), take out the card and insert it into your Pi. Careful, the card goes in upside down :)
  6. Plug in the network lead to the Raspberry Pi, and connect the other end of the lead to your router.
  7. Plug in the Micro USB power lead to the Raspberry Pi, the other side to the power supply. Plug in the power supply.
  8. The Raspberry Pi will automatically get an IP address from the DHCP server (Arch Linux has this preset)
    • Also, Arch Linux has sshd already active (sshd=ssh daemon, a software package allowing you to connect to a [Linux] server console/command prompt).
  9. Connect via ssh to the Raspberry Pi. For Windows, I use the Tunnelier SSH Client from Bitvise
    • If you do not know what IP address the Raspberry Pi got (likely the first available address in your DHCP range), you can always check your router’s DHCP table which should list all IP addresses it assigned, usually together with a host name.
    • While there, you could also assign a permanent IP address (still assigned via DHCP) by binding a MAC address to a reserved IP in your router’s DHCP server setup. This makes it easier later on.
    • You could also assign a permanent DNS name (like pi.yourdomain.com) by binding the reserved IP to a DNS name in your router’s DNS server setup.
    • The password for user ‘root’ is ‘root’, and the OS boots in about 10 seconds.
    • Connecting to the Raspberry Pi using Linux (ssh) and Windows (Tunnelier):
      $ ssh root@192.168.0.180
      root@192.168.0.180's password:
      Last login: Fri Feb 22 23:27:49 2013 from percona.<...>
      [root@alarmpi ~]#

      Tunnelier connecting with the Raspberry Pi

  10. Once connected to the Raspberry Pi, either via ssh on Linux or Tunnelier on Windows, update the root password first:
    [root@alarmpi ~]# passwd
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully

    (If you use Windows+Tunnelier and have saved your password in the configuration file, update it in Tunnelier now also)

  11. Re-connect to see if your password change worked as expected, and install Percona Server from the Arch Linux AUR Repo at the same time:
    (pacman is the ‘Package Manager’ for Arch Linux) 

    [root@alarmpi ~]# pacman -S percona-server
    resolving dependencies...
    looking for inter-conflicts...
    
    Targets (4): libaio-0.3.109-6  libmysqlclient-5.5.30-1  mysql-clients-5.5.30-1  percona-server-5.5.29_rel29.4-1
    
    Total Download Size:    8.07 MiB
    Total Installed Size:   96.00 MiB
    
    Proceed with installation? [Y/n] y
    :: Retrieving packages from extra...
    libmysqlclient-5.5.30-1-armv6h                                           3.2 MiB   435K/s 00:08 [########################################################] 100%
    mysql-clients-5.5.30-1-armv6h                                          736.6 KiB   395K/s 00:02 [########################################################] 100%
    :: Retrieving packages from community...
    libaio-0.3.109-6-armv6h                                                  4.6 KiB  1089K/s 00:00 [########################################################] 100%
    percona-server-5.5.29_rel29.4-1-armv6h                                   4.1 MiB   323K/s 00:13 [########################################################] 100%
    (4/4) checking package integrity                                                                 [########################################################] 100%
    (4/4) loading package files                                                                      [########################################################] 100%
    (4/4) checking for file conflicts                                                                [########################################################] 100%
    (4/4) checking available disk space                                                              [########################################################] 100%
    (1/4) installing libmysqlclient                                                                  [########################################################] 100%
    (2/4) installing mysql-clients                                                                   [########################################################] 100%
    (3/4) installing libaio                                                                          [########################################################] 100%
    (4/4) installing percona-server                                                                  [########################################################] 100%
    Installing MySQL system tables...
    OK
    Filling help tables...
    OK
    [...more setup information, please read...]
  12. [Optional] Set Percona Server to start up at boot time
    [root@alarmpi ~]# systemctl enable mysqld
    ln -s '/usr/lib/systemd/system/mysqld.service' '/etc/systemd/system/multi-user.target.wants/mysqld.service'
  13. Start Percona Server
    [root@alarmpi ~]# systemctl start mysqld
  14. Connect using the mysql client:
    [root@alarmpi ~]# /usr/bin/mysql
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    [...]
    mysql> SHOW ENGINES;
    +--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
    | Engine             | Support | Comment                                                                    | Transactions | XA   | Savepoints |
    +--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
    | InnoDB             | DEFAULT | Percona-XtraDB, Supports transactions, row-level locking, and foreign keys | YES          | YES  | YES        |
    [...]
    mysql> SELECT "Hello World!" AS "Success!" \G
    *************************** 1. row ***************************
    Success!: Hello World!
    1 row in set (0.00 sec)

Now, that was easy, wasn’t it?

A few other points:

  • This setup uses Percona Server for MySQL with the latest mysql client, which is untested, but should be “ok/fine” in most situations
  • If you want to edit the configuration file, you can find it in /etc/mysql/my.cnf
  • If you set Percona Server to start at boot time (step 12), give your little server a reboot (type “reboot”) now and check that Percona Server works fine afterwards.
  • Make sure to secure your server properly, especially if it will be exposed on the Internet or if this will be a production server.
    • Secure the initial accounts for your installation.
    • If this will be a production server, you should use a separate user ID to run Percona Server.
    • There are more steps necessary (OS-level and otherwise) than what is listed here to make your server secure, but those are outside the scope of this article.
  • To check if mysqld is running, use “ps -ef | grep -v grep | grep mysqld” at the Linux command line
  • If you did get an SD card larger than 2GB, you can add another partition using fdisk & mkfs.ext4 (a rough sketch follows below; comment if you cannot work it out, and I can post some additional information)
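A rough sketch of that process, assuming the card shows up as /dev/mmcblk0 and the new partition ends up as /dev/mmcblk0p3 (check with “fdisk -l” first, as device names vary per setup):

fdisk /dev/mmcblk0              # 'n' to create a partition in the free space, then 'w' to write; a reboot may be needed
mkfs.ext4 /dev/mmcblk0p3        # format the new partition
mkdir /mnt/data
mount /dev/mmcblk0p3 /mnt/data  # mount it (add an /etc/fstab entry to make this permanent)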

Enjoy!

The post Percona Server on the Raspberry Pi: Your own MySQL Database Server for Under $80 appeared first on MySQL Performance Blog.

Feb
25
2013
--

Percona Welcomes MySQL 5.6!

MySQL 5.6 was made generally available as a production-ready release earlier this month. This release comes about two years after MySQL 5.5, but MySQL 5.6 contains improvements started long before that: for example, work on the InnoDB Full-Text Search project started over six years ago, alongside many optimizer and replication features. We’re happy to congratulate the MySQL development team at Oracle on making this release happen.

In this blog post, I will not go into an overview of MySQL 5.6 features. You can check out the Release Notes for a good overview, as well as the many blog posts and articles written on the matter. Instead, I will tell you about the current state of things with MySQL 5.6 at Percona.

Percona’s Services for MySQL 5.6

Like Percona Server, Percona XtraDB Cluster, MariaDB, Amazon RDS for MySQL, Drizzle, and Oracle MySQL, MySQL 5.6 is fully supported by our MySQL Consulting, MySQL Support, MySQL Server Development, and MySQL Remote DBA teams:

  • Our MySQL Consulting Team and MySQL Support Team can work together to audit your current system; assess how to re-architect, develop, and tune your application to take advantage of MySQL 5.6's features; create an in-depth upgrade plan; help design and deploy new applications to bring new benefits to your customers; and then help support your environment while providing bug fixes and enhancements.
  • Our MySQL Server Development Team is ready to help you with code level issues such as compatibility, porting plugins and other code you might have from previous MySQL versions to MySQL 5.6, and other issues.
  • Our MySQL Remote DBA Team is ready to monitor, maintain, and fine tune our customers’ MySQL 5.6 test and production environments, and address any performance issues that might arise.
  • Our MySQL Training team is working on updating materials to include MySQL 5.6, and is already able to provide custom private training covering MySQL 5.6.

 

Percona Open Source Software MySQL 5.6 Integration

From a product perspective, we’re working hard towards MySQL 5.6 integration with Percona Software.

  • Percona Toolkit 2.1 already supports MySQL 5.6. Improved support is coming in Percona Toolkit 2.2, which will be released soon.
  • Percona Monitoring Plugins are also MySQL 5.6 compatible.
  • Percona XtraBackup support for MySQL 5.6 is coming next month with Percona XtraBackup 2.0.6. Though it is used more for final integration and testing at this point, we have customers successfully backing up and restoring MySQL 5.6 databases in standard configurations with the current version.

The largest focus of our development effort is on getting the production version of Percona Server 5.6 released, followed by the Percona XtraDB Cluster 5.6 release. We’re very excited about the Percona Server 5.6 release, which will contain all MySQL 5.6 features as well as Percona’s additions and optimizations. MySQL 5.6 rolls many improvements which previously existed only in Percona Server into the mainline, which means we will have less code to maintain and can focus on more valuable features and optimizations.

Learn More About MySQL 5.6

If you’re interested in learning more about MySQL 5.6, the Percona Live MySQL Conference and Expo is a great chance to do so. MySQL 5.6 will be well covered in talks, Birds of a Feather sessions, and hallway conversations.

As always, please check our blog on a regular basis for Percona product release announcements and for our experiences working with 5.6. In addition, you can subscribe to our new monthly technical newsletter, which conveniently summarizes all new software releases and popular blog posts. We are also scheduling many webinars focused on MySQL 5.6, including one I will be giving on March 6 called “MySQL 5.6 Advantages in a Nutshell.”

At Percona, we’re ready for MySQL 5.6! Are you? We can help!

The post Percona Welcomes MySQL 5.6! appeared first on MySQL Performance Blog.

Feb
24
2013
--

How to use a smartphone for two-factor authentication with Percona Server for MySQL

In this post I will describe a non-trivial way to authenticate users in Percona Server for MySQL. Percona Server comes with a PAM authentication plugin, which allows you to do a lot of cool things, such as OS authentication, LDAP authentication, even RSA Secure Server authentication (useful if PCI compliance is required of you), and Google Authenticator, which is the topic of this post.

With two-factor authentication, a user is required not only to enter a password but also to have an additional security token, which in the case of Google Authenticator can be your cell phone (clients are available for Android, iPhone, Windows Mobile and BlackBerry; the full list is here). This way an attacker needs not only to steal or guess the password, but also to gain access to the cell phone, which is not impossible, but makes things more complicated.

The setup is actually quite easy if you follow these steps:

1. Enable the PAM plugin (more in our documentation):

mysql> INSTALL PLUGIN auth_pam SONAME 'auth_pam.so';

2. Configure PAM for the mysqld process by putting the following into the /etc/pam.d/mysqld file:

auth       required     pam_unix.so
account    required     pam_unix.so

3. Create a user in the server:

mysql> CREATE USER 'vadim'@'%' IDENTIFIED WITH auth_pam;

After this we are already able to authenticate using an OS account (assuming there is an account for the user ‘vadim’).

4. Now install pam-google-authenticator. There are several possible ways to do this, for example from your distribution’s packages or by building from source.

5. From the user account we are setting up authentication for, run google-authenticator to do the initial setup:

> google-authenticator
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/vadim@secrethost.percona.com%3Fsecret%3DEDCHNEEFQ5TYQFYL
Your new secret key is: EDCHNEEFQ5TYQFYL

It outputs a URL which, when opened, produces a QR code that you scan with the smartphone Authenticator application to connect the accounts; alternatively, you can just enter the secret key manually.

6. Instruct PAM to use the Google Authenticator module by adding to /etc/pam.d/mysqld:

auth       required     pam_google_authenticator.so

7. All set. Now when I try to log in as user ‘vadim’, the server asks me for both a password and a verification code (generated on my smartphone by the Authenticator application):

mysql -uvadim
Password:  
Verification code:

Authenticating against an LDAP server is not much more complicated: once you have LDAP running, just replace pam_unix with pam_ldap in /etc/pam.d/mysqld.
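For example, /etc/pam.d/mysqld might then look like this (a sketch, assuming pam_ldap is installed and configured; keep the Google Authenticator line if you still want the second factor):

auth       required     pam_ldap.so
account    required     pam_ldap.so
auth       required     pam_google_authenticator.so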

This was just an example, but it shows a Percona Server feature: complex authentication that satisfies enterprise-grade security requirements.

The post How to use a smartphone for two-factor authentication with Percona Server for MySQL appeared first on MySQL Performance Blog.
