Readers: What authors really want from you

Photo courtesy Angela Brown, friend and author


Readers, what do authors want from you? Ha, that’s an open-ended question, isn’t it?

I bet you answered “to buy all your books.” Yes, that’s true. Most of us desire to make a living from our writing, not so much to be stinking rich but enough that we can make writing our full-time career. That’s a win for readers too, since we can write more books.

You may also have said “to write awesome 5-star reviews.” Yes, please. I’ve spoken about reviews before, and we all know that lots of good reviews sell more books.

By now, you’re thinking more like an author and might offer “to promote your books and tell all your friends.” Ding. Ten points! Word of mouth is even more powerful than reviews. Reviews are the opinions of strangers. When you recommend a book in person, that endorsement carries significant weight because you (presumably) know what your friends like to read. How often do you skip a book after your BFF tells you, “OMG, you just have to read this. I stayed up all night to finish it!”? Authors dream of readers becoming such passionate advocates of our books.

Taking these three points as a given, what authors really want from readers is feedback and comments. Note that this doesn’t mean flattery and singing our praises because our latest book is “a complete masterpiece”. It means honest, down-to-earth, personal contact. You see, most writers don’t write to be rich or famous; we write because we want people to enjoy our stories. Hearing from readers is the high point of our days. A good review is nice, as is a post on Facebook or a tweet about liking one of our books. Even better is a personal message, a comment on our web sites, or an email. Obviously I can’t speak for all authors here, since some don’t like to be approached in as personal a form as email, but many of us love it. Even just a few words mean a lot to us. That you went out of your way to comment or email is not something we take for granted.

Maybe you just want to say you enjoyed our book. Maybe it resonated with you in some way, reminded you of someone or that you found the theme or symbology meaningful. Perhaps there was something you didn’t like. Personally, I like to hear that too. Perhaps you wanted more mystery, more romance, or you just want to hear more about a particular character. All feedback is great. As authors we spend months or years creating something and then we throw it into the world for others to enjoy. The worst thing for us is to hear crickets. Did we move you, make you laugh, make you cry? Did you fall in love with a character, or hate a villain so much that you cheered when he got defeated?

Historically, I think authors cultivated a certain aloofness; they were people we readers fawned over at conventions or book signings. Or maybe it was just too difficult to engage with an author in the days before the internet and social media. Most authors I know today love chatting with readers. Remember that all authors are readers too, and we love talking about books — yes, even (shock) other people’s books!

You have a chance to shape our future books too. Writing is fast becoming a collaborative effort in that if we know what types of books, settings, characters, or situations you prefer, we can tailor our future work. Tell us which of our books you liked best, and why. I’d like to know if more readers are interested in a sequel to Ocean of Dust than Necromancer, or vice versa. I like to think that’s a win for readers too. Many authors are collaborative in this way. It’s fun.

So, dear reader, there is a solid reason that we display our email address (or a contact form) on our web sites, and have social media profiles – to make it easy for you to contact us. Please do! Don’t be shy.



(More) Secure local passwords in MySQL 5.6 and up

I log into a lot of different servers running MySQL and one of the first things I do is create a file in my home directory called ‘.my.cnf’ with my credentials to that local mysql instance:
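A minimal sketch of such a file (the username and password here are placeholders; substitute your own credentials):

```ini
; ~/.my.cnf — read automatically by the mysql client
[client]
user=root
password=secret
```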


This means I don’t have to type my password in every time, nor am I tempted to include it on the command line with -p and get the dreaded (but completely accurate):

Warning: Using a password on the command line interface can be insecure.

rel="nofollow" href="" rel="nofollow">MySQL 5.6 introduces a utility to make this easier and more secure. First, let’s start with a new mysqld instance with a blank root password and make it more secure:

[vagrant@localhost ~]$ mysqladmin -u root password
New password:secret
Confirm new password:secret
[vagrant@localhost ~]$ mysql -u root
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

Ok, so I’ve added a password, now I want to create my .my.cnf file:

[vagrant@localhost ~]$ mysql_config_editor set --user=root --password
Enter password:secret
[vagrant@localhost ~]$ mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.6.21-70.0 Percona Server (GPL), Release 70.0, Revision 688
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

What did ‘mysql_config_editor set’ actually do? It creates a .mylogin.cnf file (which stands in for a .my.cnf) in my home directory that contains my credentials, just in encrypted form:

[vagrant@localhost ~]$ ls -alh .my*
-rw-------. 1 vagrant vagrant 100 Nov 19 16:20 .mylogin.cnf
-rw-------. 1 vagrant vagrant  29 Nov 19 16:20 .mysql_history
[vagrant@localhost ~]$ cat .mylogin.cnf
??>NTv?&?S???/?,	>?$%KZ 9i?V?jK?H[???
[vagrant@localhost ~]$ mysql_config_editor print
user = root
password = *****

The mysql client picks this up right away and will use it by default. This file has good default filesystem permissions, is local to my homedir, and is a whole lot better than specifying it on the command line or typing it in every time.

This utility also supports a feature called ‘login-path’, wherein you can add multiple mysql logins (perhaps to different servers) and refer to them with the --login-path option in the mysql client:

[vagrant@localhost ~]$ mysql_config_editor set --login-path=remote --host=remote --user=remote --password
Enter password:secure
[vagrant@localhost ~]$ mysql_config_editor print --all
user = root
password = *****
user = remote
password = *****
host = remote
[vagrant@localhost ~]$ mysql --login-path=remote
ERROR 2005 (HY000): Unknown MySQL server host 'remote' (0)

The ‘remote’ host doesn’t exist here, but you get the idea. You can create as many login-paths as you want with varied hostnames, credentials and other login parameters and quickly access them with any client supporting login-path.

Now, how secure is this really? This isn’t secure from anyone who roots your DB server. I would say the benefits are more about reducing careless password storage and tidier management of local credentials.

The post (More) Secure local passwords in MySQL 5.6 and up appeared first on MySQL Performance Blog.


Percona Server 5.6.21-70.1 is now available

href=""> class="alignright size-thumbnail wp-image-22759" src="" alt="Percona Server" width="150" height="150" />Percona is glad to announce the release of href="">Percona Server 5.6.21-70.1 on November 24, 2014. Download the latest version from the title="Percona Server 5.6" href="" >Percona web site or from the Percona href="">Software Repositories.

Based on MySQL 5.6.21, including all the bug fixes in it, Percona Server 5.6.21-70.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free. Complete details of this release can be found in the 5.6.21-70.1 milestone on Launchpad.

Bugs Fixed:

  • A slave replicating in RBR mode would crash if a table definition differed between master and slave with an allowed conversion, and the binary log contained a table map event followed by two row log events. This bug is an upstream regression introduced by the fix for bug #72610. Bug fixed #1380010.
  • An incorrect source code function attribute would cause MySQL to crash on an InnoDB row write if compiled with a recent GCC with certain compilation options. Bug fixed #1390695 (upstream #74842).
  • MTR tests for Response Time Distribution were not packaged in binary packages. Bug fixed #1387170.
  • The RPM packages provided for CentOS 5 were built using a debugging information format which is not supported by the gdb version included with CentOS 5.10. Bug fixed #1388972.
  • A session on a server in mixed-mode binlogging would switch to row-based binlogging whenever a temporary table was created and then queried, and the switch would last until the session ended or until all temporary tables in the session were dropped. This was unnecessarily restrictive and has been fixed so that only statements involving temporary tables are logged in the row-based format, while the rest continue to use statement-based logging. Bug fixed #1313901 (upstream #72475).

Other bugs fixed: #1387227 and #1388001.

Release notes for Percona Server 5.6.21-70.1 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

The post Percona Server 5.6.21-70.1 is now available appeared first on MySQL Performance Blog.


Host Analytics Raises $25M And Says Next Step Is IPO

Host Analytics, a company that provides cloud-based financial analytics tools, announced $25M in Series E funding today and suggested the next logical step for the company is going public. Today’s funding brings its total to date to $77M.
The round was led by Centerview Capital Technology. Also participating were previous investors Advanced Technology Ventures, Next World Capital…


Salesforce Wants App Hub To Be One-Stop Shop For SMBs

Salesforce announced an update today to, its customer service product aimed at small businesses. It has created an app store called the App Hub that includes a bunch of partner apps chosen specifically for SMBs, to give them a one-stop shop for the cloud applications they need, and it has created a way to integrate those apps directly into…


Schema changes in MySQL for OpenStack Trove users

People using OpenStack Trove instances can hit a common issue in the MySQL world: how to perform schema change operations while minimizing the impact on the database server? Let’s explore the options that can allow online schema changes.


class="alignright" src="" alt="Schema changes in MySQL for OpenStack Trove users" width="167" height="256" />With MySQL 5.5, href="" >pt-online-schema-change from Percona Toolkit is your best option for large tables while regular ALTER TABLE statements are only acceptable for small tables. Also beware of metadata locks.

With MySQL 5.6, almost all types of schema changes can be done online. Metadata locks can also be an issue. pt-online-schema-change can still be worth using as it is also online on read replicas.

Regular ALTER TABLE with MySQL 5.5

If you are still using MySQL 5.5, almost all schema changes will require a table rebuild, and MySQL will set a write lock: all writes to the table being modified will be blocked. As soon as the table gets large, or if you cannot afford a maintenance window, using ALTER TABLE becomes tricky.

The only exception is that secondary indexes can be added or removed without rebuilding the whole table. The table is still write locked during the operation but it is much faster.

You can spot this ‘fast index creation’ process by looking at SHOW PROCESSLIST (see manage keys in the State field):

| Id | User | Host      | db     | Command | Time | State       | Info                                    | Rows_sent | Rows_examined | Rows_read |
|  1 | root | localhost | sbtest | Query   |    4 | manage keys | alter table sbtest2 add index idx_k (k) |         0 |             0 |         0 |

It is helpful for small or even medium tables, but it is not fast enough for large tables.

A common workaround when you have a master-slave configuration is to perform the schema change on the slave first, promote the slave, and then perform the schema change on the old master. However, replication is not available if you are using Icehouse, and even with Juno, replicas are read-only. So this workaround is not usable with Trove.


A good alternative is to use pt-online-schema-change. In a nutshell, the tool creates a new table with the desired structure, creates triggers on the original table, and copies rows in chunks from the original table to the new table.

Nice features of the tool include being able to write to the original table while the schema change is being performed (hence the name of the tool), monitoring replication lag on slaves and automatic throttling to keep lag under a reasonable threshold.

Let’s assume you want to add a new field called ‘city’ in the ‘user’ table. Instead of running:

mysql> ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '';

you will run:

# pt-online-schema-change --progress=percentage,10 --alter="ADD city VARCHAR(50) NOT NULL DEFAULT ''" h=localhost,u=root,D=mydb,t=user --execute

All ALTER TABLE statements can be converted to an equivalent pt-online-schema-change command.

Is there any tradeoff? The first one is that pt-online-schema-change will be slower than a plain ALTER TABLE. This is expected as the tool has extra code to throttle itself if it adds too much load.

How much slower is it? As always, it depends. On a lightly loaded server the difference will be minimal; for instance, I measured 3m24s for ALTER TABLE versus 3m30s for pt-online-schema-change on a test table. On a heavily loaded server, however, it can be around 5x slower.

The second tradeoff is that pt-online-schema-change adds triggers on the original table to capture data change. So if you already have triggers, you can’t use the tool (this will be lifted in MySQL 5.7).

Metadata Locks

rel="nofollow" href="" rel="nofollow">Metadata locks were introduced in MySQL 5.5 for better transaction isolation.

But one side effect is: if you start a schema change operation on a table while another transaction is reading or writing on the same table, ALTER TABLE will be stuck in the Waiting for metadata lock state.

Another negative side effect of metadata locks is that in some situations ALTER TABLE will also block all queries to the table that is being altered, even reads. In the worst cases, this may create a query pileup that freezes the whole server. For more on this topic, have a look at this post.

What about pt-online-schema-change and metadata locks? It is of course hit by this issue/feature too. However, the good thing is that the tool sets the timeout for metadata locks to 60s instead of the default 1 year. So after 60s, pt-online-schema-change will simply retry the operation that failed because of metadata locks.
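You can approximate the same fail-fast behavior for a manual ALTER TABLE by lowering the metadata lock timeout for your session first. A sketch, reusing the hypothetical ‘city’ column from the earlier example (the 60-second value mirrors what the tool uses):

```sql
-- lock_wait_timeout defaults to 31536000 seconds (1 year); lower it so the
-- ALTER errors out quickly instead of queuing behind long transactions
-- and blocking every subsequent query on the table.
SET SESSION lock_wait_timeout = 60;
ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '';
```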

MySQL 5.6: Online Schema Changes?

Metadata locks also exist with MySQL 5.6, the same issues as described above can then happen.

However, the good news with MySQL 5.6 is that most schema changes can be done online. Now the question is: should you use pt-online-schema-change or a regular online ALTER TABLE statement?

Both have pros and cons:

  • ALTER TABLE is easy to use while being confident with pt-online-schema-change requires some time.
  • There is no way for ALTER TABLE to know if it’s overloading a replica while pt-online-schema-change monitors replication lag and throttles itself if needed.
  • ALTER TABLE is only online for the master, not for replicas while pt-online-schema-change is online for all servers.

The last point can be a bit confusing: why an online ALTER TABLE on the master wouldn’t be online on the slave as well? The truth is that unless you are using multi-threaded replication, writes on slaves are serialized. So while the ALTER TABLE is running, the slave will not process any other write, which is not much different from an offline ALTER TABLE.
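With MySQL 5.6 you can also state your expectations explicitly in the ALTER TABLE itself, so the server errors out instead of silently taking a blocking code path. A sketch, again using the hypothetical ‘city’ column from the pt-online-schema-change example above:

```sql
-- ALGORITHM=INPLACE asks for an in-place change (no table copy);
-- LOCK=NONE demands that concurrent reads and writes stay allowed.
-- If the requested change cannot satisfy either clause, the statement
-- fails immediately rather than locking the table.
ALTER TABLE mydb.user
  ADD COLUMN city VARCHAR(50) NOT NULL DEFAULT '',
  ALGORITHM=INPLACE, LOCK=NONE;
```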

So if you are running a single server (all Icehouse users for instance), ALTER TABLE is probably the right choice. But if you have read replicas and specifically if the application cannot tolerate replication lag, pt-online-schema-change is definitely a better choice.


Performing schema changes is becoming easier with newer versions of MySQL. However, potential issues exist in every version, so always be careful when you need to run ALTER TABLE.

The post Schema changes in MySQL for OpenStack Trove users appeared first on MySQL Performance Blog.


Debunking The 7 Myths of Marketing In The Enterprise

Over the last few months enterprise giants Oracle and Salesforce have put on shows featuring the likes of Hillary Clinton and Bruno Mars, reminding all of us in the enterprise world that marketing matters. While most enterprise startups will never reach these levels of budget and glamour, this time of the year does serve as a reminder that enterprise marketing matters.


Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process

I’m happy to announce that Google has added Click-to-deploy functionality for Percona XtraDB Cluster (PXC) on Google Cloud Compute nodes. This gives you the ability to rapidly spin up a cluster for experimentation, performance testing, or even production use on Google Compute Engine virtual machines.

What is Percona XtraDB Cluster?

href=""> class="alignleft size-full wp-image-27110" src="" alt="Google Cloud Platform adds Percona XtraDB Cluster to click-to-deploy process" width="100" height="59" />Percona XtraDB Cluster is a virtually synchronous cluster of MySQL Innodb nodes. Unlike conventional MySQL asynchronous replication which has a specific network topology of master and slaves, PXC’s nodes have no specific topology.  Functionally, this means that there are no masters and slaves, so you can read and write on any node.

Further, any failure in the cluster does not require any re-arranging of the replication topology. Instead, clients just reconnect to another node and continue reading and writing.

We have a ton of material about Percona XtraDB Cluster in previous posts, in the PXC manual, and in various webinars. If you want a concentrated one-hour overview of Percona XtraDB Cluster, I’d recommend watching this webinar.

How do I use Click-to-deploy?

Simply visit Google Cloud’s solutions page to get started. You are given a simple setup wizard that allows you to choose the size and quantity of nodes you want, disk storage type and volume, etc. Once you ‘Deploy Cluster’, your instances will launch and form a cluster automatically with sane default tunings. After that, it’s all up to you what you want to do.

Seeing it in action

Once your instances launch, you can add an SSH key to access them (you can also install the Google Cloud SDK). This, handily, creates a new account based on the username in the SSH public key text. Once I add the key, I can simply ssh to the public IP of the instance and I have my own account:

jayj@~ [500]$ ssh [public ip]
Linux percona-mysql-niyr 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

Once there I installed sysbench 0.5 from the Percona apt repo:

jayj@percona-mysql-niyr:~$ sudo -i
root@percona-mysql-niyr:~# apt-key adv --keyserver --recv-keys 1C4CBDCDCD2EFD2A
root@percona-mysql-niyr:~# echo "deb wheezy main" > /etc/apt/sources.list.d/percona.list
root@percona-mysql-niyr:~# apt-get update; apt-get install sysbench -y
Setting up sysbench (0.5-3.wheezy) ...
root@percona-mysql-niyr:~# logout

Now we can load some data and run a quick test:

jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --oltp-tables-count=16 --oltp-table-size=1000000 --oltp-auto-inc=off --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test run
sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Random number generator seed is 0 and will be ignored
Threads started!
thread prepare4
Creating table 'sbtest5'...
thread prepare2
Creating table 'sbtest3'...
thread prepare6
Creating table 'sbtest7'...
thread prepare0
Creating table 'sbtest1'...
thread prepare7
Creating table 'sbtest8'...
thread prepare1
Creating table 'sbtest2'...
thread prepare5
Creating table 'sbtest6'...
thread prepare3
Creating table 'sbtest4'...
Inserting 100000 records into 'sbtest5'
Inserting 100000 records into 'sbtest3'
Inserting 100000 records into 'sbtest7'
Inserting 100000 records into 'sbtest1'
Inserting 100000 records into 'sbtest8'
Inserting 100000 records into 'sbtest2'
Inserting 100000 records into 'sbtest6'
Inserting 100000 records into 'sbtest4'
Creating table 'sbtest13'...
Creating table 'sbtest15'...
Inserting 100000 records into 'sbtest13'
Inserting 100000 records into 'sbtest15'
Creating table 'sbtest11'...
Creating table 'sbtest12'...
Inserting 100000 records into 'sbtest11'
Creating table 'sbtest16'...
Inserting 100000 records into 'sbtest12'
Creating table 'sbtest10'...
Creating table 'sbtest14'...
Creating table 'sbtest9'...
Inserting 100000 records into 'sbtest16'
Inserting 100000 records into 'sbtest9'
Inserting 100000 records into 'sbtest10'
Inserting 100000 records into 'sbtest14'
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           608
        other:                           32
        total:                           640
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 608    (11.33 per sec.)
    other operations:                    32     (0.60 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
General statistics:
    total time:                          53.6613s
    total number of events:              10000
    total time taken by event execution: 0.0042s
    response time:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.02ms
         approx.  95 percentile:               0.00ms
Threads fairness:
    events (avg/stddev):           1250.0000/3307.19
    execution time (avg/stddev):   0.0005/0.00
jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/update_index.lua --oltp-tables-count=16 --oltp-table-size=1000000 --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test --max-requests=0 --max-time=10 --report-interval=1 --oltp-auto-inc=off --rand-init=on --rand-type=uniform run
sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Report intermediate results every 1 second(s)
Initializing random number generator from timer.
Random number generator seed is 0 and will be ignored
Threads started!
[   1s] threads: 8, tps: 0.00, reads: 0.00, writes: 1396.43, response time: 10.85ms (95%), errors: 0.00, reconnects:  0.00
[   2s] threads: 8, tps: 0.00, reads: 0.00, writes: 1314.13, response time: 14.91ms (95%), errors: 0.00, reconnects:  0.00
[   3s] threads: 8, tps: 0.00, reads: 0.00, writes: 1382.87, response time: 12.34ms (95%), errors: 0.00, reconnects:  0.00
[   4s] threads: 8, tps: 0.00, reads: 0.00, writes: 949.09, response time: 12.88ms (95%), errors: 0.00, reconnects:  0.00
[   5s] threads: 8, tps: 0.00, reads: 0.00, writes: 1312.01, response time: 11.27ms (95%), errors: 0.00, reconnects:  0.00
[   6s] threads: 8, tps: 0.00, reads: 0.00, writes: 898.92, response time: 11.64ms (95%), errors: 0.00, reconnects:  0.00
[   7s] threads: 8, tps: 0.00, reads: 0.00, writes: 1541.71, response time: 10.59ms (95%), errors: 0.00, reconnects:  0.00
[   8s] threads: 8, tps: 0.00, reads: 0.00, writes: 1551.35, response time: 11.48ms (95%), errors: 0.00, reconnects:  0.00
[   9s] threads: 8, tps: 0.00, reads: 0.00, writes: 923.07, response time: 10.40ms (95%), errors: 0.00, reconnects:  0.00
[  10s] threads: 8, tps: 0.00, reads: 0.00, writes: 1273.99, response time: 11.01ms (95%), errors: 0.00, reconnects:  0.00
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           12551
        other:                           0
        total:                           12551
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 12551  (1254.65 per sec.)
    other operations:                    0      (0.00 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
General statistics:
    total time:                          10.0036s
    total number of events:              12551
    total time taken by event execution: 79.9602s
    response time:
         min:                                  1.13ms
         avg:                                  6.37ms
         max:                                389.52ms
         approx.  95 percentile:              11.68ms
Threads fairness:
    events (avg/stddev):           1568.8750/4.81
    execution time (avg/stddev):   9.9950/0.00

So, we see ~1200 tps on an update test in our little cluster, not too bad!

What it is not

This wizard is a handy way to get a cluster set up for some experimentation or testing. However, it is not a managed service: after you launch it, you’re responsible for maintaining, tuning, etc. You could use it for production, but you may want some further fine tuning, operational procedures, etc. All of this is absolutely something Percona can help you with.

The post Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process appeared first on MySQL Performance Blog.


Microsoft Promises Managed, Web-Based App Stores For The Enterprise With Windows 10

Microsoft previously announced that Windows 10, its next operating system, will contain a single app store that will work across device classes. Don’t make the Lord of the Rings joke. The company also previously promised that organizations will be able to curate their own app experience for their users. Today, it announced that a “Web-based” store will be an option for…


Sys Schema for MySQL 5.6 and MySQL 5.7

Performance Schema (P_S) has been available since MySQL 5.5, more than 4 years ago. It is still rare to see production servers with P_S enabled, even with MySQL 5.6, where the feature is enabled by default. There have been several complaints, such as the overhead (a fix for which is a work in progress) and the ease of use: 52 tables to query and 31 configuration variables are enough to scare people.

There is a solution for the second problem, usability. Its name is “sys schema”. It is a collection of views, functions and procedures to make P_S more user-friendly.


If you are a MySQL Workbench user, the installation is pretty easy because sys schema is already bundled; you just need to install it. Click on “Performance – Performance Reports” and there you will find the “Install Helper” button that will install sys schema.

href=""> class="aligncenter size-full wp-image-26915" src="" alt="sys schema mysql workbench" width="600" height="451" />

If you don’t use MySQL Workbench, you need to download sys_56.sql or sys_57.sql (depending on whether you use 5.6 or 5.7) from the GitHub repository. Then, just import the sql file as usual:

mysql -u root -p < ./sys_56.sql


After the import, you will have a new “sys” schema with some very descriptive table names. Let’s see an example. Do you want to know what tables are using most of our InnoDB buffer memory? Easy:

mysql> select * from sys.innodb_buffer_stats_by_table;
| object_schema | object_name        | allocated | data      | pages | pages_hashed | pages_old | rows_cached |
| test          | t                  | 63.61 MiB | 58.06 MiB |  4071 |         4071 |      4071 |     2101222 |
| InnoDB System | SYS_FOREIGN        | 32.00 KiB | 0 bytes   |     2 |            2 |         2 |           0 |
| InnoDB System | SYS_TABLES         | 32.00 KiB | 1.11 KiB  |     2 |            2 |         2 |          10 |
| InnoDB System | SYS_COLUMNS        | 16.00 KiB | 4.68 KiB  |     1 |            1 |         1 |          71 |
| InnoDB System | SYS_DATAFILES      | 16.00 KiB | 324 bytes |     1 |            1 |         1 |           6 |
| InnoDB System | SYS_FIELDS         | 16.00 KiB | 722 bytes |     1 |            1 |         1 |          17 |
| InnoDB System | SYS_INDEXES        | 16.00 KiB | 836 bytes |     1 |            1 |         1 |          12 |
| InnoDB System | SYS_TABLESPACES    | 16.00 KiB | 318 bytes |     1 |            1 |         1 |           6 |
| mysql         | innodb_index_stats | 16.00 KiB | 274 bytes |     1 |            1 |         1 |           3 |
| mysql         | innodb_table_stats | 16.00 KiB | 53 bytes  |     1 |            1 |         1 |           1 |

Pretty easy and useful, right? You can also find out which database is using the most memory in the buffer pool by querying innodb_buffer_stats_by_schema.
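For instance, a per-schema breakdown is just as simple (the exact output depends on your workload):

```sql
-- Aggregated buffer pool usage per schema instead of per table
SELECT * FROM sys.innodb_buffer_stats_by_schema;
```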

For each view there is a similar one whose name starts with x$. For example, you have user_summary_by_file_io and x$user_summary_by_file_io. The difference is that the first has human-readable output values while the second has the raw values. Let’s see an example:

mysql> select * from sys.user_summary_by_file_io;
| user       | ios   | io_latency |
| root       | 19514 | 2.87 s     |
| background |  5916 | 1.91 s     |
2 rows in set (0.00 sec)
mysql> select * from sys.x$user_summary_by_file_io;
| user       | ios   | io_latency    |
| root       | 19514 | 2871847094292 |
| background |  5916 | 1905079715132 |

For humans, at least for me, it is easier to read seconds rather than picoseconds. :)

There are multiple tables with very descriptive names.

– io_by_thread_by_latency
– schema_unused_indexes
– statements_with_temp_tables
– statements_with_errors_or_warnings
– user_summary_by_statement_type
– waits_by_host_by_latency
…

There are a lot more, and they are explained with examples in the project’s README file.


On the MySQL side nothing special is needed. Just enable performance_schema:
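A minimal my.cnf fragment is enough. It is already ON by default in 5.6 and 5.7; this sketch just makes it explicit:

```ini
[mysqld]
performance_schema = ON
```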


sys schema also provides some procedures to enable/disable some features. For example:

– ps_setup_enable_background_threads
– ps_setup_enable_consumers
– ps_setup_enable_instrument
and so on…

We also have the same procedures with “disable”. After you have made changes, you can save them by calling ps_setup_save() and reload them later by calling ps_setup_reload_saved(). If you want to reset the configuration to default values, just call ps_setup_reset_to_default().

For example, we can check that some consumers are disabled. It is possible to enable them and save the configuration:

mysql> CALL sys.ps_setup_show_disabled_consumers();
| disabled_consumers             |
| events_stages_current          |
| events_stages_history          |
| events_stages_history_long     |
| events_statements_history      |
| events_statements_history_long |
| events_waits_current           |
| events_waits_history           |
| events_waits_history_long      |
mysql> CALL sys.ps_setup_enable_consumers('events');
| summary             |
| Enabled 8 consumers |
mysql> CALL sys.ps_setup_show_disabled_consumers();
Empty set (0.00 sec)
mysql> CALL sys.ps_setup_save(TRUE);


Performance Schema is very useful when we want to know what is happening inside the database. It is getting more features with each new GA release and will probably be the single point of information in the near future. Now, thanks to sys schema, it is also easy to use.

The post Sys Schema for MySQL 5.6 and MySQL 5.7 appeared first on MySQL Performance Blog.
