Dec
31
2012
--

MySQL Wish for 2013 – Better Memory Accounting

With the Performance Schema improvements in MySQL 5.6 I think we’re in good shape when it comes to insight into what is causing performance bottlenecks as well as where CPU resources are spent. (Performance Schema does not account for CPU usage directly, but it is something that can be derived relatively easily from the wait and stage information.) Where we’re still flying blind with MySQL is resource usage – specifically memory usage.

I can’t count how many times I’ve had to scratch my head over a system configured to use only a few GBs in global buffers that grows to consume much more for some unknown reason, leaving me puzzled as to whether it is user variables, complex stored procedures, temporary tables, or something else. Not only are such connection-related allocations invisible, but many global allocations are poorly visible too. Sure, we know how much memory was allocated for the InnoDB buffer pool or the query cache, but the amount of memory used, for example, by the table cache can only be guessed.

The lack of proper memory accounting is not only an operational problem; it is a problem for QA too, as it makes it very hard to ensure that the memory consumption of various database operations is reasonable. For example, if some INFORMATION_SCHEMA query were to take a couple of GB of memory, we might not find that reasonable, yet it is not a “memory leak” as the memory is freed after the operation is completed.

What I would like to see is something similar to the Performance Schema “wait names”, but for memory allocation targets – something like “Prepared Statement Cache”, “Table Cache”, etc. For global pools we need just global accounting; for connections, basic accounting should be done per connection but aggregated globally. This way it would be possible to see, for example, which connection is using the most memory. We could also aggregate per purpose to understand what memory is currently being used for.

One thing to note about memory allocation is that it is rather volatile – a lot of memory might be allocated for a very short period of time and hence not be very visible. To help catch these situations it is a good idea to track not only the current amount allocated but also the maximum, which can be reset as needed.

If such functionality were implemented, not only could it help DBAs configure memory allocation much more successfully, but I also believe that, just as Performance Schema helps find many performance bottlenecks, such profiling would help find a lot of wasteful memory allocation and put MySQL on a memory diet. It would be easy to extend the QA suite to check peak memory usage after each test and hence find memory usage regressions – cases where new code unexpectedly requires a lot more memory than the previous variant.

If memory accounting is implemented, it can also be used to help restrict memory usage in addition to monitoring it, which I’m sure hosting providers and MySQL-as-a-Service vendors will especially appreciate, as they are currently at the complete mercy of their users, unable to limit memory usage by individual MySQL users or even track how much memory those users are using.

Now that MySQL 5.6 is getting close to done, I imagine the planning and feature design for MySQL 5.7 (or whatever the next release is going to be called) started long ago, and I really hope something along these lines will be picked up for inclusion. If not, perhaps this is an opportunity for MariaDB to pave the road? Either way, I’m sure the MySQL community will appreciate it.

Happy New Year 2013, everyone!


Dec
28
2012
--

Auditing login attempts in MySQL

This is a recurring question from our MySQL Support customers:

How can I audit the login attempts in MySQL?

Logging all login attempts, or just the failed ones, is a very important task in some scenarios. Unfortunately there are not many audit capabilities in MySQL Community Edition, so the first option for auditing MySQL’s authentication process is to get all the information we need from the logs.

General Query Log

The first option is the General Query Log. Let’s see an example:

Enable the log:

general_log_file        = /var/log/mysql/mysql.log
general_log             = 1

User correctly authenticated:

121227  8:31:49	   38 Connect	root@localhost on 
		   38 Query	select @@version_comment limit 1

User not correctly authenticated:

121227  8:32:18	   39 Connect	root@localhost on 
		   39 Connect	Access denied for user 'root'@'localhost' (using password: YES)

The problem with the General Query Log is that it logs everything, so it can cause performance degradation and you will have to deal with very large files on heavily loaded servers. The general_log variable is dynamic, so a solution could be to enable and disable the log only when it is needed.
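
For example (the log file path below is just an illustration), the log can be switched on only for the period you want to audit and then switched off again, without restarting the server:

mysql> SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
mysql> SET GLOBAL general_log = 'ON';
mysql> SET GLOBAL general_log = 'OFF';

Run the first two statements when you want to start capturing connections, and the last one when you are done.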

Error log

If you only care about failed login attempts, then there is another, less problematic approach. From 5.5 onwards it is possible to log access denied messages to the error log.

We just need to set log_warnings to a value greater than 1:

log_warnings = 2

Then check the error log:

121227  8:44:21 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
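
log_warnings is also a dynamic variable, so on a running 5.5 server it can be enabled without a restart:

mysql> SET GLOBAL log_warnings = 2;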

User Statistics

If you are using Percona Server, there is a third option for getting information about our users: User Statistics. As with the previous options, we can get the number of connections and failed connections made by a particular user, but in this case not the date and time of those attempts. Besides that information, we can get other statistics that can be very useful if MySQL is running in a multi-tenant environment or if we need to control how resources are used.

Let’s see an example. First we enable User Statistics in my.cnf:

5.5

userstat = 1

5.1

userstat_running = 1

Then we get the information about a particular user:

mysql> select * from user_statistics where user='root'\G
*************************** 1. row ***************************
                  USER: root
     TOTAL_CONNECTIONS: 25
CONCURRENT_CONNECTIONS: 0
        CONNECTED_TIME: 464
             BUSY_TIME: 96
              CPU_TIME: 19
        BYTES_RECEIVED: 62869617
            BYTES_SENT: 14520
  BINLOG_BYTES_WRITTEN: 0
          ROWS_FETCHED: 783051
          ROWS_UPDATED: 1017714
       TABLE_ROWS_READ: 1484751
       SELECT_COMMANDS: 14
       UPDATE_COMMANDS: 103
        OTHER_COMMANDS: 3556
   COMMIT_TRANSACTIONS: 0
 ROLLBACK_TRANSACTIONS: 0
    DENIED_CONNECTIONS: 2
      LOST_CONNECTIONS: 16
         ACCESS_DENIED: 0
         EMPTY_QUERIES: 0
 TOTAL_SSL_CONNECTIONS: 0

Here we can see that root has made 25 connections in total, with two denied connections (bad password) and 16 lost connections (not closed properly). Apart from that, we get the connection time, bytes received and sent, rows accessed, commands executed and so on. Very valuable information.

It is important to mention that these tables are stored in INFORMATION_SCHEMA, which means that after a mysqld restart all the information will be lost. So if you really need that information, you should copy it to another table or export it to a CSV file for further analysis.
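
As a rough sketch (the output path is just an example, and the MySQL user needs the FILE privilege), the table can be dumped to a CSV file with SELECT ... INTO OUTFILE before a restart:

mysql> SELECT * FROM INFORMATION_SCHEMA.USER_STATISTICS
    ->   INTO OUTFILE '/tmp/user_statistics.csv'
    ->   FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    ->   LINES TERMINATED BY '\n';

Keep in mind that the file is written on the server host, not on the client machine.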

Conclusion

We don’t have many audit capabilities in MySQL Community Edition, so logging all events and then filtering them with custom-made scripts is the best solution we have nowadays. If you are using Percona Server, you can get more detailed information about what a particular user is doing. All of these options can be combined to meet your needs.


Dec
28
2012
--

Top Tweets December 2012

This is a short collection of the best tweets on @SQLPerfTips and/or @MarkusWinand, for those who missed them and those who don’t use Twitter at all.

And I want to take this opportunity to wish you a happy new year :)

Original title and author: “Top Tweets December 2012” by Markus Winand.

Dec
27
2012
--

Interview with me about Indie Publishing

Hello all, and I hope everyone had a fabulous Christmas (or other holiday).

A little while ago, Candace, at the fabulous Candace’s Book Blog, invited me for an interview. I had a lot of fun answering her in-depth questions about Indie Publishing.

Take a look at the interview!

If you’re an avid reader, or interested in the latest publishing trends, I recommend you follow Candace.

Have a great New Year’s, and I hope you all have a super 2013.

 

Dec
24
2012
--

Winners of the Holiday Indie Giveaway

Indie Holiday Giveaway

Congratulations to the winners of our giveaway. Drum roll please…

  • Sherry Fundin
  • Traci Ellen Smith
  • Shannon Aleene Romein

Enjoy!

Please look out for emails from myself and the other authors with details of how to obtain your free books, and an email containing your $10 Amazon Gift Card. Be sure to check your spam folders, so you don’t miss the emails.

On behalf of myself, Angela Brown, Gwen Gardner and Lisa Orchard, thank you to the winners and to everyone for taking part in our fun giveaway. We greatly appreciate you taking an interest in us and our books. If you didn’t win, why not buy a copy of our books anyway? ;)

Happy Holidays everyone!

Dec
21
2012
--

Announcing Percona XtraBackup 1.6.7

Percona is glad to announce Percona XtraBackup 1.6.7, the latest stable release in the 1.6 series, released on December 20th, 2012 (downloads are available here).

This release is purely composed of bug fixes to the previous stable release of Percona XtraBackup. New users should head for Percona XtraBackup 2.0 rather than 1.6.

Bugs Fixed:

  • xtrabackup_binary was not included in the tar archive when streaming; instead it was written to the current directory. This could lead to the wrong xtrabackup binary being used when preparing backups created with the --stream or --remote-host options. Bugs fixed #723318 and #787988 (Stewart Smith).
  • FLUSH TABLES WITH READ LOCK was not used when creating incremental backups, which could lead to inconsistent backups when updates to non-InnoDB tables or DDL statements on any tables occurred during the backup process. Bug fixed #771981 (Alexey Kopytov).
  • Option --safe-slave-backup was resulting in incorrect binlog info, because in some cases innobackupex confused the response from SHOW SLAVE STATUS with the one from SHOW MASTER STATUS. Bug fixed #977101 (Alexey Kopytov).
  • innodb_data_file_path was not written to backup-my.cnf; this was a regression introduced in XtraBackup 1.6.5. Bug fixed #983685 (Sergei Glushchenko).
  • Fixed spurious test suite failures with grep 2.10. Bug fixed #996483 (Alexey Kopytov).
  • When innobackupex was running with --apply-log, it was reading the configuration from the server configuration file instead of backup-my.cnf in the backup directory. Bug fixed #996493 (Sergei Glushchenko).
  • innobackupex could copy files to a wrong directory when merging an incremental backup to a full one. Bug fixed #1002688 (Alexey Kopytov).
  • The xtrabackup binary was leaking file descriptors on --backup. This was fixed by reusing the existing file descriptor so no leak occurs. Bug fixed #713267 (Alexey Kopytov).

Full details about the fixed bugs are available in the release notes.


Dec
21
2012
--

Why I write YA

Recently, Sharon over at Obsession with Books invited me to do a guest post. Please pop over there and read about Why I write YA. Stay a while and nose around her wonderful site too.

 

Dec
21
2012
--

Be productive with the MySQL command line

Even if you are using a GUI tool to connect to your MySQL servers, one day or another, you will have to deal with the command line. So it is nice to know a few tips that can really make your work easier.

Note: The commands below are only available for Unix/Linux.

Using pager

Most of the graphical tools paginate results, which is very handy. But this is not the way the command line client works: it just outputs all results. It can be annoying but it is easily solved by using the pager command:

mysql> pager more
PAGER set to 'more'
mysql> select title from sakila.film;
+-----------------------------+
| title                       |
+-----------------------------+
| ACADEMY DINOSAUR            |
| ACE GOLDFINGER              |
| ADAPTATION HOLES            |
| AFFAIR PREJUDICE            |
| AFRICAN EGG                 |
| AGENT TRUMAN                |
| AIRPLANE SIERRA             |
| AIRPORT POLLOCK             |
| ALABAMA DEVIL               |
| ALADDIN CALENDAR            |
| ALAMO VIDEOTAPE             |
| ALASKA PHANTOM              |
| ALI FOREVER                 |
| ALICE FANTASIA              |
| ALIEN CENTER                |
| ALLEY EVOLUTION             |
| ALONE TRIP                  |
| ALTER VICTORY               |
| AMADEUS HOLY                |
--Plus--

Another example of the pager command is when you want to estimate a good size for your InnoDB redo logs: the estimate is based on how much the Log Sequence Number advances during a given period of time. Instead of manually looking for the right line in the output of SHOW ENGINE INNODB STATUS (which can be huge), you can call pager to the rescue:

mysql> pager grep sequence
PAGER set to 'grep sequence'
mysql> show engine innodb status\Gselect sleep(60);show engine innodb status\G
Log sequence number 380166807992
1 row in set (0.41 sec)

1 row in set (1 min 0.00 sec)

Log sequence number 380170274979
1 row in set (0.00 sec)
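
As a rough illustration with the numbers above: the two samples are one minute apart, so the client itself can do the arithmetic (the hourly figure relies on the common rule of thumb, not an official formula, of sizing the redo logs to hold roughly an hour of writes):

mysql> SELECT (380170274979 - 380166807992) / 1024 / 1024 AS mb_per_minute,
    ->        (380170274979 - 380166807992) * 60 / 1024 / 1024 AS mb_per_hour;

That works out to roughly 3.3MB of redo per minute, or about 200MB per hour, which gives a starting point for the combined size of the redo log files.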

When you are done and you want to disable paging, you can simply run:

mysql> pager
Default pager wasn't set, using stdout.

Using edit

When you try to optimize a query, it often involves manipulating the text of the query, and sometimes it would be great to have a text editor inside the client. Well, this can be achieved by using the edit command.

Let’s say you have the following query:

mysql> select count(*) from film left join film_category using(film_id) left join category using(category_id) where name='Music';

and let’s say you want to change the left joins to inner joins and use capital letters for reserved SQL words. Instead of manually editing the statement, which would be tedious, you simply call edit:

mysql> edit

and it will open your default text editor with the text of the last query. The default text editor is vi, so you now have the power of vi inside the mysql client!
Once you have made your changes, save and exit the editor: you are back in the mysql client where you can type ; or \G to execute the query.

Using tee

In some situations, like when you are testing a set of commands to write documentation or when you are in the middle of an emergency, you want to be able to record the queries that you have executed. The command line client offers the tee command, which logs the statements you type and their output to a file, pretty much like the Unix tee command:

mysql> tee queries.log
Logging to file 'queries.log'
mysql> use sakila
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select count(*) from sakila;
ERROR 1146 (42S02): Table 'sakila.sakila' doesn't exist
mysql> select count(*) from film;
+----------+
| count(*) |
+----------+
|     1000 |
+----------+
1 row in set (0.00 sec)

mysql> exit

And now if you look at the content of the queries.log file, you will see a copy of your session.
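
If you want to stop recording without leaving the client, there is a matching notee command:

mysql> notee
Outfile disabled.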

Conclusion

The mysql command line client is not as glossy as most of the graphical tools, but if you know some of its hidden features, it can be very powerful. If you enjoyed these tips, I will write another post with other useful but overlooked features.


Dec
20
2012
--

Percona XtraDB Cluster: SELinux is not always the culprit!

If you are using SELinux, you should know that it is advised to disable it to avoid issues with PXC. Generally the communication between your nodes doesn’t work properly, and a node with SELinux enabled won’t be able to join the cluster.

So when a node doesn’t join the cluster when it should, my first reflex is to have a look at audit.log. But recently I faced another problem: the node joined the cluster but SST failed, whatever method was used (excluding skip).

I checked SELinux and it was of course disabled. Then I added some debug output to the SST script, but it seemed that the script was never launched. This time the culprit was called AppArmor!

Percona doesn’t provide any AppArmor profile for PXC, but it seems that on this server (Ubuntu LTS) a previous version of MySQL had been installed and then removed, while its AppArmor profile was still present.

So if you use AppArmor (or if you don’t know whether you do) and you want to check if there is a profile for mysqld, you can run the following command:

root@testmachine:~# apparmor_status
apparmor module is loaded.
7 profiles are loaded.
7 profiles are in enforce mode.
/sbin/dhclient
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/mysqld
/usr/sbin/named
/usr/sbin/ntpd
/usr/sbin/tcpdump
0 profiles are in complain mode.
2 processes have profiles defined.
2 processes are in enforce mode.
/usr/sbin/named (1205)
/usr/sbin/ntpd (1347)
0 processes are in complain mode.

You can disable a profile easily by running:

sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld

For more information related to AppArmor, you can refer to Ubuntu’s wiki.

So if you run Ubuntu, you now have two things to check first: SELinux and AppArmor!

Note: We often advise disabling SELinux and AppArmor on dedicated MySQL servers to avoid the performance overhead.


Dec
20
2012
--

How to STOP SLAVE on Amazon RDS read replica

We are doing a migration from Amazon RDS to EC2 with a customer. This, unfortunately, involves some downtime – if you are an RDS user, you probably know you can’t replicate an RDS instance to an external server (or even EC2). While it is annoying, this post isn’t going to be a rant on how RDS can make you feel locked in. Instead, I wanted to give you a quick tip.

So here’s the thing – you can’t stop replication on an RDS read replica, because you don’t have (and won’t get) the privileges to do that:

replica> STOP SLAVE;
ERROR 1045 (28000): Access denied for user 'usr'@'%' (using password: YES)

Normally you wouldn’t want to do that; however, we wanted to run some pt-upgrade checks before the migration, and for that we needed the read replica to stop replicating. Here’s one way to do it:

WARNING! Resuming replication gracefully only works if you run RDS with MySQL version 5.1.62 or 5.5.23 and up.

master> CREATE DATABASE percona;
master> CREATE TABLE percona.test (id int unsigned auto_increment not null primary key);

replica> INSERT INTO percona.test VALUES (1);

master> INSERT INTO percona.test VALUES (1);

replica> show slave status\G
...
     Slave_SQL_Running: No
            Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: 'percona'. Query: 'INSERT INTO test VALUES (1)'

Of course, for that to work, you will also have to disable read_only mode, which you can do by going to the AWS Console and changing the value of the "read_only" variable from its default "{TrueIfReplica}" to "0".

When we’re done with pt-upgrade checks, I will just run the following to resume replication:

replica> CALL mysql.rds_skip_repl_error;
master> DROP DATABASE percona;

Note: If you are running an earlier MySQL version that does not have the rds_skip_repl_error procedure, you can try removing the conflicting record from the replication slave, and replication should resume shortly. That worked for me.
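
With the example table used above, that would look roughly like this on the replica (the duplicate row with id 1 is the one we inserted to break replication); as noted above, replication should then resume on its own after a short while, since START SLAVE is not allowed on RDS either:

replica> DELETE FROM percona.test WHERE id = 1;
replica> SHOW SLAVE STATUS\G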

