Apr
27
2018
--

MySQL 8.0 GA: Quality or Not?

What does Anton Ego – a fictional restaurant critic from the Pixar movie Ratatouille – have to do with MySQL 8.0 GA?

When it comes to being a software critic, a lot.

In many ways, the work of a software critic is easy. We risk very little and thrive on negative criticism, which is fun to read and write.

But what about those who give many hours of their time to develop the code, and those who test that code before release? How about the many people behind the scenes who bring together packaging, documentation, design, marketing, online resources and more?

And all of that, I might add, is open source! Free for the world to take, copy, adapt and even incorporate in full or in part into their own open development.

It is in exactly that area that the team at MySQL shines once again – from humble beginnings they have built up colossally powerful database software that handles much of the world’s data, fast.

Used in every area of life – aerospace, defense, education, finance, government, healthcare, pharma, manufacturing, media, retail, telecoms, hospitality, and of course the web – it truly is a community effort.

My little contribution to this effort is first and foremost to say: well done! Well done on such an all-in-all huge endeavor. When I tested MySQL 8.0, I experienced something new: an extraordinarily clean bug report screen when I unleashed our bug-hunting rats, ahem, I mean tools. This was somewhat unexpected. Usually, new releases are a fun playground even for seasoned QA engineers looking for the latest toy to break.

I have a suspicion that the team at Oracle either uses newly-improved bug-finding tools or perhaps they included some of our methods and tools in their setup. In either case, it is, was and will be welcome.

When the unexpected occurs, a fight-or-flight response kicks in. I tend to be a fighter, so I upped the battle and managed to find about 30 bugs, with 21 bugs logged already. Quite a few of them are Sig 11’s in release builds. Signal 11 exceptions are unexpected crashes, and release builds are the exact same builds you would download from dev.mysql.com.

The debug build also had a number of issues, but fewer than expected, leading me to the conclusions drawn above. Since Oracle engineers marked many of the logged issues as security bugs, I haven’t listed them here. I’ll give Oracle some time to fix them, but I might add them later.

In summary, my personal recommendation is this: unless you are a funky new web company thriving on the latest technology, give Oracle the opportunity to make a few small point bugfix releases before adopting MySQL 8.0 GA. After that, provided the upgrade prerequisites are met and your software application is compatible, go for it and upgrade.

In the meantime, it is a great time to start checking out the latest and greatest that MySQL 8.0 GA has to offer!

All in all, I like what I saw, and I expect MySQL 8.0 GA to have a bright future.

Signed, a seasoned software critic.


Nov
27
2017
--

autoxtrabackup v1.5.0: A Tool for Automatic Backups

There is a new version of the autoxtrabackup tool. In this post, I’ll provide some of the highlights available this time around.

autoxtrabackup is a tool created by PerconaLabs. We’ve now put out version 1.5.0, and you can test it further.

Note: PerconaLabs is a repository for various open source scripts and tools created by Percona staff. Items added to the PerconaLabs repository are not official Percona products, and aren’t covered by support or services agreements. 

autoxtrabackup is written in Python 3 and hosted in PerconaLabs (forked from Shako’s repo). Basically, this tool automates the backup/prepare/copy-back actions. I want to talk about recent changes and additions.

First of all, autoxtrabackup now has a --test_mode option, intended to test the XtraBackup automation process.

Here is the brief flow for this:

  • Clone the percona-qa repo
  • Clone Percona Server for MySQL 5.6 and 5.7 from GitHub
  • Build the PS servers in debug mode
  • Get the 2.3 and 2.4 versions of XtraBackup
  • Generate autoxtrabackup .conf files for each version of PS and XtraBackup
  • Pass different combinations of options to the PS start command, and initialize the PS servers each time with different options
  • Run sysbench against each started PS server
  • Take backups in cycles for each started PS + prepare them
  • If make_slaves is defined, create a slave1 server from this backup (i.e., copy-back to another directory and start the slave from it)
  • Then take a backup, prepare and copy-back from this new slave1 to create slave2
  • Run pt-table-checksum on the master to check backup consistency

I have prepared my environment, and now want to start --test_mode. Basically, it creates option combinations and passes them to the start script:

2017-11-15 22:28:21 DEBUG    Starting cycle1
2017-11-15 22:28:21 DEBUG    Will start MySQL with --innodb_buffer_pool_size=1G --innodb_log_file_size=1G
--innodb_page_size=64K --early-plugin-load=keyring_file.so
--keyring_file_data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/mysql-keyring/keyring
--log-bin=mysql-bin --log-slave-updates --server-id=1 --gtid-mode=ON --enforce-gtid-consistency --binlog-format=row

So as you can see, it is starting MySQL with --innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=64K. In cycle2, it will likely pick --innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=32K, and so on. It depends on what you have passed in the config:

# Do not touch; this is for --test_mode, which is for testing XtraBackup itself.
[TestConf]
ps_branches=5.6 5.7
gitcmd=--recursive --depth=1 https://github.com/percona/percona-server.git
testpath=/home/shahriyar.rzaev/XB_TEST/server_dir
incremental_count=3
#make_slaves=1
xb_configs=xb_2_4_ps_5_6.conf xb_2_4_ps_5_7.conf xb_2_3_ps_5_6.conf
default_mysql_options=--log-bin=mysql-bin,--log-slave-updates,--server-id={},--gtid-mode=ON,--enforce-gtid-consistency,--binlog-format=row
mysql_options=--innodb_buffer_pool_size=1G 2G 3G,--innodb_log_file_size=1G 2G 3G,--innodb_page_size=4K 8K 16K 32K 64K

You can pass more options by changing mysql_options in the config file. You can also specify how many incremental backups you want by setting the incremental_count option. You can enable creating slaves from the backup, to test that as well, by enabling the make_slaves option; this is not recommended for daily usage. You can read more about it here: --test_mode.
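
Conceptually, the expansion of mysql_options into per-cycle server options works like a cross product. Here is a rough sketch of the idea – not the tool’s actual code – using the config values shown above:

from itertools import product

mysql_options = ("--innodb_buffer_pool_size=1G 2G 3G,"
                 "--innodb_log_file_size=1G 2G 3G,"
                 "--innodb_page_size=4K 8K 16K 32K 64K")

# Each comma-separated entry is "--option=v1 v2 v3"; split it into one
# list of concrete "--option=value" strings per option.
groups = []
for entry in mysql_options.split(","):
    name, values = entry.split("=", 1)
    groups.append(["{}={}".format(name, v) for v in values.split()])

# The cross product yields one option set per test cycle.
for cycle, combo in enumerate(product(*groups), start=1):
    print("cycle{}:".format(cycle), " ".join(combo))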

For daily backup actions, I have added the --tag and --show_tags options, which can be quite useful. They help you to tag your backups. Take a full backup:

$ sudo autoxtrabackup --tag="My Full backup" -v
-lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log
-l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Take an incremental one:

$ autoxtrabackup --tag="First incremental backup" -v
-lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log
-l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Take a second incremental one:

$ autoxtrabackup --tag="Second incremental backup" -v
-lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log
-l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Now you can use the --show_tags to list tags:

$ sudo autoxtrabackup --show_tags
--defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf
Backup              Type    Status  TAG
-------------------------------------------
2017-11-16_20-10-53 Full        OK  'My Full backup'
2017-11-16_20-12-23 Inc         OK  'First incremental backup'
2017-11-16_20-13-39 Inc         OK  'Second incremental backup'

It would be quite nice if we could also prepare those backups by tag name. In other words, if I have a full backup and five incremental backups, what if I want to prepare only up to the second or third incremental, or just the full backup?

Pass the tag name with the --prepare option, and it will do the trick:

$ autoxtrabackup --tag="First incremental backup" -v
-lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log
-l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --prepare

It will prepare the full backup and the “First incremental backup” – the remaining incremental backups will be ignored.
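
Conceptually, preparing “up to a tag” amounts to walking the backups oldest-to-newest and stopping once the tagged backup has been included. A small sketch of the idea – not the tool’s actual code – using the backups listed above:

# Backups as listed by --show_tags above: (directory, type, tag)
backups = [
    ("2017-11-16_20-10-53", "Full", "My Full backup"),
    ("2017-11-16_20-12-23", "Inc", "First incremental backup"),
    ("2017-11-16_20-13-39", "Inc", "Second incremental backup"),
]

def backups_to_prepare(backups, tag):
    selected = []
    for directory, backup_type, backup_tag in backups:
        selected.append((directory, backup_type))
        if backup_tag == tag:
            # Stop here: the full backup plus incrementals up to the tag
            return selected
    raise ValueError("tag not found: " + tag)

print(backups_to_prepare(backups, "First incremental backup"))
# -> [('2017-11-16_20-10-53', 'Full'), ('2017-11-16_20-12-23', 'Inc')]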

autoxtrabackup 1.5.0 also has a --dry_run option, which shows the exact commands without running them. It is described here: --dry_run.

How about installing autoxtrabackup 1.5.0? You can install it from source or use pip3:

pip3 install mysql-autoxtrabackup

For more, please read: Installation.

Do you want to enable encryption and compression for backups? Yes? You can enable this from the autoxtrabackup config as described here: Config file structure.

You can also enable taking partial backups by editing the config: partial backups.

autoxtrabackup 1.5.0 also allows you to perform a partial recovery – i.e., restoring only a single specified table from a full backup. If the table was dropped, autoxtrabackup will try to extract the CREATE TABLE statement from the .frm file using the mysqlfrm tool, and then discard/import the tablespace from the full backup. This is based on the transportable tablespace concept. You can read more here: restoring-single-table-after-drop.
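
Under the hood this maps to MySQL’s transportable tablespace workflow. Here is a rough sketch of the steps – paths, credentials and the CREATE TABLE statement are placeholders, and this is not the tool’s actual code:

import shutil
import mysql.connector

cnx = mysql.connector.connect(user='root', password='secret', database='dbtest')
cursor = cnx.cursor()

# 1. Recreate the dropped table; autoxtrabackup recovers the DDL from the
#    backup's .frm file with mysqlfrm (placeholder DDL shown here).
cursor.execute("CREATE TABLE t1 (id INT, a BLOB) ENGINE=InnoDB")

# 2. Discard the new table's empty tablespace.
cursor.execute("ALTER TABLE t1 DISCARD TABLESPACE")

# 3. Copy the table's .ibd file from the prepared full backup (placeholder paths).
shutil.copy('/backups/full/dbtest/t1.ibd', '/var/lib/mysql/dbtest/t1.ibd')

# 4. Import the tablespace copied from the backup.
cursor.execute("ALTER TABLE t1 IMPORT TABLESPACE")

cursor.close()
cnx.close()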

For a full list of available options, read the DOC: autoxtrabackup DOC.

Thanks for reading! If you are going to try autoxtrabackup 1.5.0, don’t hesitate to provide some feedback!

Mar
06
2017
--

Improving TokuDB Hot Backup Usage with the autotokubackup Command Line Tool

In this blog post, we’ll look at how the command line tool autotokubackup can make TokuDB hot backups easier.

I would like to share an experimental tool named autotokubackup, built for TokuBackup. This tool aims to simplify the life of TokuDB system administrators. autotokubackup is written in Python 3.

General information:

So why would you need this tool? Let’s clarify a bit what you might face while using TokuBackup. You have a backup solution that you can use from the MySQL shell:

mysql > set tokudb_backup_dir='/var/lib/tokubackupdir';

Now you want to automate this process. The first problem is that a second backup will fail, because the backup directory is required to be empty before starting a backup process. One solution is to create time-stamped directories for the backups.

Further, your backup policy may require that some other necessary files are copied as part of the backup process. You need a script to put those files into a separate folder under the backup directory.
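
Here is a minimal sketch of both workarounds – a fresh timestamped directory per run, plus supplementary-file copying – assuming the mysql-connector module and placeholder credentials; this is not autotokubackup’s actual code:

import os
import shutil
from datetime import datetime

import mysql.connector

backup_root = '/var/lib/tokubackupdir'
# A fresh, empty, timestamped directory for every run
backup_dir = os.path.join(backup_root, datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
os.makedirs(backup_dir)

# Trigger the hot backup; the statement blocks until the backup completes
cnx = mysql.connector.connect(user='root', password='secret')
cursor = cnx.cursor()
cursor.execute("SET tokudb_backup_dir='{}'".format(backup_dir))
cursor.close()
cnx.close()

# Afterwards, copy the supplementary files required by the backup policy
copied_files = os.path.join(backup_dir, 'copied_files')
os.makedirs(copied_files)
for extra in ['/etc/my.cnf']:  # your policy's files
    shutil.copy(extra, copied_files)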

Another issue you will face is the lack of any clear output on backup progress. The shell just pauses until the backup completes. One possible way to obtain information about the backup process is to display the MySQL processlist in a separate MySQL shell. But it isn’t the best way, and there are some issues, as reported here: Unclear status information of backup state while taking backups using TokuBackup.

Generally, we need to know which files are backed up during the backup process. There should also be a clear message indicating the end of the backup process.

To make your life easier, the autotokubackup tool:

  • Automates the TokuDB database backup procedures
  • Creates timestamped backups inside the backup directory, overcoming the need to copy/remove old backups to empty the backup directory
  • Copies all necessary files for your backup policy (you can specify up to ten supplementary files to be included in the backup directory as part of the backup process)
  • Clearly describes what is going to be in the backup, by showing newly created files inside the backup directory (see the sketch after this list)
  • Clearly shows the end of the backup process
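
The file-reporting part can be built with the watchdog library, which shows up in the pip dependency output later in this post. A small sketch of the idea – not the tool’s exact implementation:

import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class ReportCreated(FileSystemEventHandler):
    def on_created(self, event):
        # Print every new file appearing under the backup directory
        print("Created file in backup directory ->", event.src_path)

observer = Observer()
observer.schedule(ReportCreated(), '/var/lib/tokubackupdir', recursive=True)
observer.start()
try:
    time.sleep(60)  # watch while the backup is running
finally:
    observer.stop()
    observer.join()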

To start, we only need two things:

  • Installed Percona Server with TokuDB engine + TokuBackup plugin
  • Installed Python3

To install the tool you can use one of the following methods:

* From source:

cd /home
git clone https://github.com/Percona-Lab/autotokubackup.git
cd autotokubackup
python3 setup.py install

* or via pip3:

pip3 install autotokubackup

The result will be something like:

Collecting autotokubackup
  Downloading autotokubackup-1.1-py3-none-any.whl
Collecting watchdog>=0.8.3 (from autotokubackup)
  Downloading watchdog-0.8.3.tar.gz (83kB)
    100% |████████████████████████████████| 92kB 8.2MB/s
Collecting click>=3.3 (from autotokubackup)
  Downloading click-6.7-py2.py3-none-any.whl (71kB)
    100% |████████████████████████████████| 71kB 10.6MB/s
Collecting mysql-connector>=2.0.2 (from autotokubackup)
  Downloading mysql-connector-2.1.4.zip (355kB)
    100% |████████████████████████████████| 358kB 4.7MB/s
Collecting PyYAML>=3.10 (from watchdog>=0.8.3->autotokubackup)
  Downloading PyYAML-3.12.tar.gz (253kB)
    100% |████████████████████████████████| 256kB 6.5MB/s
Collecting argh>=0.24.1 (from watchdog>=0.8.3->autotokubackup)
  Downloading argh-0.26.2-py2.py3-none-any.whl
Collecting pathtools>=0.1.1 (from watchdog>=0.8.3->autotokubackup)
  Downloading pathtools-0.1.2.tar.gz

After that, there should be a configuration file for this tool located at /etc/tokubackup.conf.

The structure of the config file is:

[MySQL]
mysql=/usr/bin/mysql
user=root
password=12345
port=3306
socket=/var/run/mysqld/mysqld.sock
host=localhost
datadir=/var/lib/mysql
[Backup]
backupdir=/var/lib/tokubackupdir
[Copy]
# The following copy_file_x options allow you to copy various files together with your backup
# Highly recommended; a copy of your my.cnf file (usually /etc/my.cnf) and any cnf files referenced from it (i.e. includedir etc.)
# You can also include other files you would like to take a copy of, like for example a text report or the mysqld error log
# copy_file_1=
# copy_file_2=
# copy_file_...=
# copy_file_10=
#copy_file_1=/etc/my.cnf
#copy_file_2=/var/log/messages
#copy_file_3=
#copy_file_4=
#copy_file_5=
#copy_file_6=
#copy_file_7=
#copy_file_8=
#copy_file_9=
#copy_file_10=
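
Since the file uses plain INI syntax, it can be read with Python’s standard configparser module. A minimal sketch (not the tool’s actual code):

import configparser

config = configparser.ConfigParser()
config.read('/etc/tokubackup.conf')

mysql_user = config['MySQL']['user']
backup_dir = config['Backup']['backupdir']

# Collect whichever copy_file_N options have been uncommented
copy_files = [value for key, value in config['Copy'].items()
              if key.startswith('copy_file') and value]

print(mysql_user, backup_dir, copy_files)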

You can change the options to reflect your environment and start using the tool. The available command line options can be displayed using --help:

tokubackup --help
Usage: tokubackup [OPTIONS]
Options:
--backup              Take full backup using TokuBackup.
--version             Version information.
--defaults_file TEXT  Read options from the given file
--help                Show this message and exit.

You can prepare different config files – for example, one for the slave. Specify the file using the --defaults_file option, and the overall result of the run should be something like the below:

tokubackup --backup --defaults_file=/etc/tokubackup_node2.conf
Backup will be stored in  /var/lib/tokubackupdir/2017-02-09_20-25-40
Running backup command => /home/sh/percona-server/5.7.17/bin/mysql -uroot --password=msandbox --host=localhost --socket=/tmp/mysql_sandbox20194.sock -e set tokudb_backup_dir='/var/lib/tokubackupdir/2017-02-09_20-25-40'
mysql: [Warning] Using a password on the command line interface can be insecure.
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/__tokudb_lock_dont_delete_me_data
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/__tokudb_lock_dont_delete_me_logs
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/__tokudb_lock_dont_delete_me_temp
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/log000000000006.tokulog29
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/tokudb.rollback
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/tokudb.environment
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/tokudb.directory
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/tc.log
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/client-key.pem
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/server-cert.pem
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/server-key.pem
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/ca.pem
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/ca-key.pem
Created file in backup directory -> /var/lib/tokubackupdir/2017-01-31_14-15-46/mysql_data_dir/auto.cnf
Completed - OK

The backup directory will store the following:

ls -l 2017-02-09_20-25-40/
  copied_files            - Directory for copied files.
  global_variables        - File for MySQL global variables.
  mysql_data_dir          - Directory for copied MySQL datadir.
  session_variables       - File for MySQL session variables.
  tokubackup_binlog_info  - File for storing the binary log position. (A new TokuBackup feature) [Not released yet]
  tokubackup_slave_info   - File for storing slave info. (A new TokuBackup feature) [Not released yet]

That’s it. If you test it and find bugs, report them – or send a feature request to further improve our “helper.” Thanks!

Jan
09
2017
--

MySQL 8.0.1: The Next Development Milestone

This post discusses the next MySQL development milestone: MySQL 8.0.1.

From the outset, MySQL 8.0 has received plenty of attention. Both this blog (see the MySQL 8.0 search) and other sites around the Internet have covered it. Early reviews seem positive (including my own MySQL 8.0 early bugs review). There is plenty of excitement about the new features.

As for early feedback on MySQL 8.0, Peter Zaitsev (Percona CEO) listed a set of recommendations for benchmarking MySQL 8.0. I hope these get reviewed and implemented.

MySQL achieved the current development milestone (available for download on dev.mysql.com) on September 12, 2016. Its release immediately came with a detailed review by Geir Hoydalsvik from MySQL. If you haven’t had the opportunity to do so yet, you can also review the MySQL 8.0 release notes.

It now looks like we’re nearing 8.0.1, the next development milestone. I don’t have insider information, but navigating mysql.com makes it quite clear that a new release is in the works.

Regarding timing, it’s interesting to note that the “What Is New in MySQL 8.0” page was updated on the 6th of January.

It looks like the release might come soon. So, restrain your excitement for a few days (or weeks?) more. Maybe then you’ll be able to check out the all-new MySQL 8.0.1!

PS: If MySQL quality interests you, have a look at this recent – and very interesting – change made to the MTR (MySQL Test Run, the MySQL test suite) program. I believe it improves quality for everyone who runs MySQL (including its forks). The tests (which are run worldwide, often for each code change made) will now test the product with its own defaults.

Nov
10
2016
--

Thoughts About Column Compression, with Optional Predefined Dictionary

This blog discusses column compression with an optional predefined dictionary.

Compression, more compression with different algorithms, compress again, compress multiple times! Compression is a hot topic in our lives.

In general, testing new things is great if the processes are well-described and easy to follow. Let’s try to think like a QA engineer: the first golden rule of QA is “everything is buggy, life is full of bugs: good night bugs, good morning bugs, hello my old bug friends.”

The second golden rule of QA is “OK, now let’s find a way to catch a bug — but remember that your methods can be buggy, too.”

Remember: always test! No bugs, no happiness!

When you start to test, the first goal is getting an idea of what is going on. This blog will demonstrate a test scenario for column compression with an optional predefined dictionary. For reference on column compression, read “Compressed columns with dictionaries.”

To begin, let’s set up a basic environment:
First, install Percona Server; the process is already documented here -> PS 5.6 installation

Second, find an already existing test: xtradb_compressed_columns_ibd_sizes.test.

Third, write a simple script to get started:

import mysql.connector

cnx = mysql.connector.connect(user='msandbox', password='msandbox',
                              host='127.0.0.1',
                              database='dbtest',
                              port=22896,
                              autocommit=True)
cursor = cnx.cursor()
# Create the predefined compression dictionary used by table t3 below
crt_comp_dic = "CREATE COMPRESSION_DICTIONARY names2 ('Bartholomew')"
cursor.execute(crt_comp_dic)
table_t1 = "CREATE TABLE t1(id INT,a BLOB) ENGINE=InnoDB"
table_t2 = "CREATE TABLE t2(id INT,a BLOB COLUMN_FORMAT COMPRESSED) ENGINE=InnoDB"
table_t3 = "CREATE TABLE t3(id INT,a BLOB COLUMN_FORMAT COMPRESSED WITH COMPRESSION_DICTIONARY names2) ENGINE=InnoDB"
cursor.execute(table_t1)
cursor.execute(table_t2)
cursor.execute(table_t3)
insert_stmt = "insert into {} values({},repeat('Bartholomew', 128))"
for i in range(0, 100000):
    for table in ('t1', 't2', 't3'):
        cursor.execute(insert_stmt.format(table, i))
        print(insert_stmt.format(table, i))
cursor.close()
cnx.close()

As you may have noticed, column compression can be used with or without a compression dictionary. The visible difference, of course, is in the size of the tables. If you want to compress columns based on a predefined dictionary, you should create it with frequently used data. It is possible to create an empty dictionary, but it will have no effect. (See here: #1628231.)

The result of running this script is:

Each table contains 100,000 rows:

t1 -> uncompressed
t2 -> compressed column
t3 -> compressed column with the ‘names2’ compression dictionary

Table size difference:

sh@sh-ubuntu:~/sandboxes/rsandbox_percona-server-5_6_31/master/data/dbtest$ ls -lth | grep .ibd
-rw-rw---- 1 sh sh 168M Sep 29 23:43 t1.ibd
-rw-rw---- 1 sh sh  15M Sep 29 23:43 t2.ibd
-rw-rw---- 1 sh sh  14M Sep 29 23:43 t3.ibd

After running an optimize table:

master [localhost] {msandbox} (dbtest) > optimize table t1;
+-----------+----------+----------+-------------------------------------------------------------------+
| Table     | Op       | Msg_type | Msg_text                                                          |
+-----------+----------+----------+-------------------------------------------------------------------+
| dbtest.t1 | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| dbtest.t1 | optimize | status   | OK                                                                |
+-----------+----------+----------+-------------------------------------------------------------------+
2 rows in set (1 min 35.88 sec)
master [localhost] {msandbox} (dbtest) > optimize table t2;
+-----------+----------+----------+-------------------------------------------------------------------+
| Table     | Op       | Msg_type | Msg_text                                                          |
+-----------+----------+----------+-------------------------------------------------------------------+
| dbtest.t2 | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| dbtest.t2 | optimize | status   | OK                                                                |
+-----------+----------+----------+-------------------------------------------------------------------+
2 rows in set (11.82 sec)
master [localhost] {msandbox} (dbtest) > optimize table t3;
+-----------+----------+----------+-------------------------------------------------------------------+
| Table     | Op       | Msg_type | Msg_text                                                          |
+-----------+----------+----------+-------------------------------------------------------------------+
| dbtest.t3 | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| dbtest.t3 | optimize | status   | OK                                                                |
+-----------+----------+----------+-------------------------------------------------------------------+
2 rows in set (7.89 sec)

The resulting sizes:

sh@sh-ubuntu:~/sandboxes/rsandbox_percona-server-5_6_31/master/data/dbtest$ ls -lh | grep .ibd
-rw-rw---- 1 sh sh 160M Sep 29 23:52 t1.ibd
-rw-rw---- 1 sh sh 8.0M Sep 29 23:52 t2.ibd
-rw-rw---- 1 sh sh 7.0M Sep 29 23:52 t3.ibd

I want more:

master [localhost] {msandbox} (dbtest) > alter table t1 row_format=compressed;
Query OK, 0 rows affected (2 min 38.85 sec)
Records: 0  Duplicates: 0  Warnings: 0
master [localhost] {msandbox} (dbtest) > alter table t2 row_format=compressed;
Query OK, 0 rows affected (14.41 sec)
Records: 0  Duplicates: 0  Warnings: 0
master [localhost] {msandbox} (dbtest) > alter table t3 row_format=compressed;
Query OK, 0 rows affected (10.74 sec)
Records: 0  Duplicates: 0  Warnings: 0

Using ROW_FORMAT=COMPRESSED requires innodb_file_format to be newer than Antelope (i.e., Barracuda). But this is not true for COLUMN_FORMAT.

Again, check the sizes:

sh@sh-ubuntu:~/sandboxes/rsandbox_percona-server-5_6_31/master/data/dbtest$ ls -lh | grep .ibd
-rw-rw---- 1 sh sh  76M Sep 29 23:57 t1.ibd
-rw-rw---- 1 sh sh 4.0M Sep 29 23:58 t2.ibd
-rw-rw---- 1 sh sh 4.0M Sep 29 23:58 t3.ibd

Question: How do I get information about compression dictionaries and the tables that use them? Answer: query these information_schema tables:

master [localhost] {msandbox} ((none)) > SELECT * FROM information_schema.xtradb_zip_dict;
+----+--------+-------------+
| id | name   | zip_dict    |
+----+--------+-------------+
|  1 | names  | Bartholomew |
|  2 | names2 | Bartholomew |
+----+--------+-------------+
2 rows in set (0.00 sec)
master [localhost] {msandbox} ((none)) > SELECT * FROM information_schema.xtradb_zip_dict_cols;
+----------+------------+---------+
| table_id | column_pos | dict_id |
+----------+------------+---------+
|       67 |          1 |       1 |
+----------+------------+---------+
1 row in set (0.00 sec)

Question: How do I drop a compression dictionary? Answer: if it is still in use, you will get:

master [localhost] {msandbox} (dbtest) > drop COMPRESSION_DICTIONARY `names`;
ERROR 1894 (HY000): Compression dictionary 'names' is in use

Before dropping it, make sure there are no tables using the dictionary. There is also an extreme condition where you are unable to drop the dictionary (see #1628824).
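
One way to check usage first is to join the information_schema tables shown above. A sketch, assuming that xtradb_zip_dict_cols.table_id matches INNODB_SYS_TABLES in Percona Server 5.6, and reusing the sandbox credentials from the earlier script:

import mysql.connector

cnx = mysql.connector.connect(user='msandbox', password='msandbox',
                              host='127.0.0.1', port=22896)
cursor = cnx.cursor()
cursor.execute("""
    SELECT t.name
      FROM information_schema.xtradb_zip_dict d
      JOIN information_schema.xtradb_zip_dict_cols c ON c.dict_id = d.id
      JOIN information_schema.innodb_sys_tables t ON t.table_id = c.table_id
     WHERE d.name = 'names'
""")
tables_in_use = [row[0] for row in cursor.fetchall()]
if tables_in_use:
    print("Still in use by:", tables_in_use)
else:
    # Safe to drop once no tables reference the dictionary
    cursor.execute("DROP COMPRESSION_DICTIONARY `names`")
cursor.close()
cnx.close()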

Question: Great! How about mysqldump? Answer: read here: mysqldump.

I might make this the topic of a dedicated post. Thanks for reading!

Sep
28
2016
--

7 Fresh Bugs in MySQL 8.0

This blog post will look at seven bugs in MySQL 8.0.

Friday afternoon is always ideal for a quick look at the early quality of MySQL 8.0! Last Friday, I did just that.

If you haven’t heard the news yet, MySQL 8.0 DMR is available for download on mysql.com!

Tools at the ready: pquery2, updated 8.0-compatible scripts in percona-qa, and some advanced regex to wade through the many cores generated by the test run. For those of you who know and use pquery-run.sh, this should mean a lot!

[09:41:50] [21492] ====== TRIAL #39308 ======

In other words, almost 40K trials and 21.5K core dumps (or other significant issues) detected! This run had been churning away on a server for a number of days. On to the bug logging fun!
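
For the curious, parsing such trial lines is straightforward. A tiny sketch (a hypothetical helper, not part of pquery-run.sh), reading the fields as interpreted above:

import re

line = "[09:41:50] [21492] ====== TRIAL #39308 ======"
match = re.match(r"\[(\d{2}:\d{2}:\d{2})\] \[(\d+)\] ====== TRIAL #(\d+) ======", line)
if match:
    timestamp, issues_so_far, trial = match.groups()
    print("trial {}: {} cores/issues so far at {}".format(trial, issues_so_far, timestamp))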

After reducing test cases and filtering duplicates, we have the following seven bugs logged upstream:

  • Bug #83120 virtual void Field::store_timestamp(const timeval*): Assertion ‘false’ failed.
  • Bug #83118 handle_fatal_signal (sig=11) in replace_user_table
  • Bug #83117 Assertion ‘MDL_checker::is_read_locked(m_thd, *object)’ failed.
  • Bug #83115 Assertion ‘maybe_null’ failed. handle_fatal_signal in Item_func_concat::val_str
  • Bug #83114 Assertion `strlen(db_name) <= (64*3) && strlen(table_name) <= (64*3)’ failed.
  • Bug #83113 SIGKILL myself on DROP TABLE
  • Bug #83112 handle_fatal_signal (sig=11) in sp_get_flags_for_command

My first impressions?

MySQL 8.0 DMR is a reasonably solid release for a first iteration.

It seems our friends at upstream are on an excellent path to making MySQL 8.0 another rock-solid release. Chapeau!

Aug
13
2015
--

MySQL Quality Assurance: A Vision for the Future by Roel Van de Paar (Final Episode 13)

Welcome to the final – but most important – episode in the MySQL QA Series.

In it, I present my vision for all MySQL Quality Assurance – for all distributions – worldwide.

Episode 13: A Better Approach to all MySQL Regression, Stress & Feature Testing: Random Coverage Testing & SQL Interleaving

1. pquery Review
2. Random Coverage Testing
3. SQL Interleaving
4. The past & the future

Presented by Roel Van de Paar. Full-screen viewing @ 720p resolution recommended.

Interested in the full MySQL QA Series?


Aug
11
2015
--

MySQL QA Episode 12: My server is crashing… Now what? For customers or users experiencing a crash

My server is crashing… Now what?

This special episode in the MySQL QA Series is for customers or users experiencing a crash.

  1. A crash?
    1. Cheat sheet: https://goo.gl/rrmB9i
    2. Server install & crash. Note this is a demonstration: do not run this on a production server!
      sudo yum install -y http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
      sudo yum install -y Percona-Server-client-56 Percona-Server-server-56
      sudo service mysql start
  2. Gimme Stacks!
    1. Debug info packages (can be executed on a production system, but do match your 5.5, 5.6 or 5.7 version correctly)
      sudo yum install -y Percona-Server-56-debuginfo
  3. Testcase?

Full-screen viewing @ 720p resolution recommended.

Also check out the full MySQL QA Series!


Aug
04
2015
--

MySQL QA Episode 11: Valgrind Testing: Pros, Cons, Why and How

Today’s episode is all about Valgrind – from the pros to the cons, from the why to the how! This episode will be of interest to anyone who works, or wants to work, with Valgrind on a regular or semi-regular basis.

  1. Pros/Why
  2. Cons
  3. How
    1. Using the latest version
      sudo [yum/apt-get] install valgrind
      #OR#
      sudo [yum/apt-get] remove valgrind
      sudo [yum/apt-get] install bzip2 glibc*
      wget http://valgrind.org/downloads/valgrind-3.10.1.tar.bz2
      tar -xf valgrind-3.10.1.tar.bz2; cd valgrind-3.10.1
      ./configure; make; sudo make install
      valgrind --version # This should now read 3.10.1
    2. VGDB (cd ./mysql-test)
      ./lib/v1/mysql-test-run.pl --start-and-exit --valgrind --valgrind-option="--leak-check=yes"
      --valgrind-option="--vgdb=yes" --valgrind-option="--vgdb-error=0" 1st
      #OR# valgrind --leak-check=yes --vgdb=yes --vgdb-error=0 ./mysqld --options...
      #ALSO CONSIDER# --num-callers=40 --show-reachable=yes --track-origins=yes
      gdb /Percona-server/bin/mysqld
      (gdb) target remote | vgdb
      (gdb) bt
    3. pquery Valgrind
    4. Valgrind stacks overview & analysis

Full-screen viewing @ 720p resolution recommended.

Jul
31
2015
--

MySQL QA Episode 10: Reproducing and Simplifying: How to get it Right

Welcome to the 10th episode in the MySQL QA series! Today we’ll talk about reproducing and simplifying: How to get it Right.

Note that unless you are a QA engineer stuck on a remote, difficult-to-reproduce, or difficult-to-reduce bug, this episode will largely be uninteresting for you.

However, what you may like to see – especially if you watched episodes 7 (and possibly 8 and 9) – is how reducer automatically generates handy start/stop/client (cl) etc. scripts, all packed into a bug tarball, together with the reduced SQL testcase.

This somewhat separate part is covered directly after the introduction (ends at 11:17), as well as with an example towards the end of the video (starts at time index 30:35).

The “in between” part (11:17 to 30:35) is all about reproducing and simplifying, which – unless you are working on a remote case – most viewers can likely skip. Remember that 85-95% of bugs reproduce and reduce very easily; for those, episode 7, episode 8 (especially the FORCE_SKIPV/FORCE_SPORADIC parts), and the script-related parts of this episode (start to 11:17 and 30:35 to end) will suffice.

As per the above, the topics covered in this video are:
1. percona-qa/reproducing_and_simplification.txt
2. Automatically generated scripts (produced by Reducer)

========= Example bug excerpt for copy/paste – as per the video
Though the testcase above should suffice for reproducing the bug, the attached tarball gives the testcase as an exact match of our system, including some handy utilities:
$ vi {epoch}_mybase # Update base path in this file (the only change required!)
$ ./{epoch}_init # Initializes the data dir
$ ./{epoch}_start # Starts mysqld (MYEXTRA --option)
$ ./{epoch}_stop # Stops mysqld
$ ./{epoch}_cl # To check mysqld is up
$ ./{epoch}_run # Run the testcase (produces output) using mysql CLI
$ ./{epoch}_run_pquery # Run the testcase (produces output) using pquery
$ vi /dev/shm/{epoch}/error.log.out # Verify the error log
$ ./{epoch}_gdb # Brings you to a gdb prompt
$ ./{epoch}_parse_core # Create {epoch}_STD.gdb and {epoch}_FULL.gdb; standard and full var gdb stack traces

Full-screen viewing @ 720p resolution recommended

