Aug
20
2019
--

Percona Server for MySQL 5.6.45-86.1 Now Available

Percona announces the release of Percona Server for MySQL 5.6.45-86.1 on August 20, 2019. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.6.45, and including all the bug fixes in it, Percona Server for MySQL 5.6.45-86.1 is the current GA release in the Percona Server for MySQL 5.6 series. Percona Server for MySQL is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL.

Bugs Fixed:

  • The TokuDB hot backup library continually dumps TRACE information to the server error log. The user cannot enable or disable the dump of this information. Bug fixed #4850.
  • The TokuDBBackupPlugin is optional at cmake time. Bug fixed #5748.

Other bugs fixed: #5531, #5146, #5638, #5645, #5669, #5749, #5752, #5780, #5833, #5725, #5742, #5743, and #5746.

Release notes are available in the online documentation. Please report any bugs on the JIRA bug tracker.

Aug
15
2019
--

Percona Server for MySQL 8.0.16-7 Is Now Available

Percona announces the release of Percona Server for MySQL 8.0.16-7 on August 15, 2019 (downloads are available here and from the Percona Software Repositories).

This release is based on MySQL 8.0.16 and includes all the bug fixes in it. Percona Server for MySQL 8.0.16-7 is now the current GA release in the 8.0 series. All of Percona’s software is open-source and free.

Percona Server for MySQL 8.0.16 includes all the features available in MySQL 8.0.16 Community Edition in addition to enterprise-grade features developed by Percona. For a list of highlighted features from both MySQL 8.0 and Percona Server for MySQL 8.0, please see the GA release announcement.

Encryption Features General Availability (GA)

  • Temporary File Encryption
  • InnoDB Undo Tablespace Encryption
  • InnoDB System Tablespace Encryption
  • default_table_encryption =OFF/ON (General Tablespace Encryption)
  • table_encryption_privilege_check =OFF/ON (Verifying the Encryption Settings)
  • InnoDB redo log encryption (for master key encryption only)
  • InnoDB merge file encryption
  • Percona Parallel doublewrite buffer encryption
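
As a quick illustration of how the two new variables work together, here is a minimal sketch; it assumes a keyring plugin (such as keyring_file) is already loaded, and the schema name is purely hypothetical:

SET GLOBAL default_table_encryption = ON;          -- new schemas and general tablespaces default to encrypted
SET GLOBAL table_encryption_privilege_check = ON;  -- overriding the default now requires TABLE_ENCRYPTION_ADMIN

CREATE DATABASE payroll;                           -- created with DEFAULT ENCRYPTION='Y'
CREATE TABLE payroll.t1 (id INT PRIMARY KEY);      -- inherits the schema default and is stored encrypted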

Known Issues

  • 5865: Percona Server 8.0.16 does not support encryption for the MyRocks storage engine. An attempt to move any table from InnoDB to MyRocks fails because MyRocks currently sees all InnoDB tables as encrypted.

Bugs Fixed

  • Parallel doublewrite buffer writes now crash the server when an I/O error occurs. Bug fixed #5678.
  • After resetting innodb_temp_tablespace_encrypt to OFF during runtime, subsequent file-per-table temporary tables continue to be encrypted. Bug fixed #5734.
  • Setting the encryption to ON for the system tablespace generates an encryption key and encrypts system temporary tablespace pages. When the encryption is reset to OFF, all subsequent pages are written to the temporary tablespace without encryption. To allow any encrypted tables to be decrypted, the generated keys are not erased. Modifying the innodb_temp_tablespace_encrypt variable does not affect file-per-table temporary tables; this type of table is encrypted if ENCRYPTION='Y' is set during table creation. Bug fixed #5736.
  • An instance started with the default values but with redo log encryption enabled and without the keyring plugin parameters specified does not fail or throw an error. Bug fixed #5476.
  • The rocksdb_large_prefix variable allows index key prefixes of up to 3072 bytes. The default value has been changed to TRUE to match the behavior of innodb_large_prefix. Bug fixed #5655.
  • On a server with two million or more tables, a shutdown may take a measurable length of time. Bug fixed #5639.
  • The changed page tracking uses the LOG flag during read operations. The redo log encryption may attempt to decrypt pages with a specific bit set and fail. This failure generates error messages. A NO_ENCRYPTION flag lets the read process safely disable decryption errors in this case. Bug fixed #5541.
  • If large pages are enabled on the MySQL side, the maximum size for innodb_buffer_pool_chunk_size is effectively limited to 4GB. Bug fixed #5517 (upstream #94747).
  • The TokuDB hot backup library continually dumps TRACE information to the server error log. The user cannot enable or disable the dump of this information. Bug fixed #4850.

Other bugs fixed: #5688, #5723, #5695, #5749, #5752, #5610, #5689, #5645, #5734, #5772, #5753, #5129, #5102, #5681, #5686, #5310, #5713, #5007, #5130, #5149, #5696, #3845, #5581, #5652, #5662, #5697, #5775, #5668, #5782, #5767, #5669, #5733, #5803, #5804, #5820, #5827, #5835, #5724, #5794, #5796, #5746, and #5748.

Note:

If you are upgrading from 5.7 to 8.0, please ensure that you read the upgrade guide and the document Changed in Percona Server for MySQL 8.0.

Find the release notes for Percona Server for MySQL 8.0.16-7 in our online documentation. Report bugs in the Jira bug tracker.

Aug
14
2019
--

MySQL 8 and MySQL 5.7 Memory Consumption on Small Devices

While we often run MySQL on larger-scale systems in production, for test and dev environments we sometimes want to run MySQL on the tiniest cloud instances possible or just run it on our laptops. In these cases, MySQL 8 and MySQL 5.7 memory consumption is quite important.

When comparing MySQL 8 vs MySQL 5.7, you should know that MySQL 8 uses more memory. Running basic tests on a 1GB VM with MySQL 8 and MySQL 5.7 (actually Percona Server versions) under the same light workload, I see the following vmstat output:

MySQL 5.7 vmstat output

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  0  65280  71608  58352 245108    0    0  2582  3611 1798 8918 18  9 11 33 30
 4  0  65280  68288  58500 247512    0    0  2094  2662 1769 8508 19  9 13 30 29
 3  1  65280  67780  58636 249656    0    0  2562  3924 1883 9323 20  9  7 37 27
 4  1  65280  66196  58720 251072    0    0  1936  3949 1587 7731 15  7 11 36 31

MySQL 8.0 vmstat output

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
 9  1 275356  62280  60832 204736    0    0  2197  5245 2638 13453 24 13  2 34 27
 3  0 275356  60548  60996 206340    0    0  1031  3449 2446 12895 25 11 10 28 26
 7  1 275356  78188  59564 190632    0    1  2448  5082 2677 13661 26 13  6 30 25
 4  1 275356  76516  59708 192096    0    0  2247  3750 2401 12210 22 12  4 38 24

As you can see, MySQL 8 uses some 200MB more swap and also uses less OS cache, signaling more memory being allocated and at least “committed.” If we look at the “top” output we see:

MySQL 5.7 top output:

MySQL 8.0 top output:

This also shows more resident memory and virtual memory used by MySQL 8. Virtual memory, in particular, is “scary” as it is well in excess of the 1GB of physical memory available on these VMs. Of course, virtual memory usage (VSZ) is a poor indicator of actual memory needs for modern applications, but it does corroborate the higher memory needs story.

In reality, though, as we know from the “vmstat” output, neither MySQL 8 nor MySQL 5.7 is swapping with this light load, even though there isn’t much “room” left. If you have more than a handful of connections or wish to run some applications on the same VM, you would get swapping (or OOM killer if you have not enabled swap).
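
If you want to see where the memory actually goes from inside the server, the performance_schema memory instrumentation (exposed through the sys schema) gives a quick view. A minimal sketch, assuming the memory instruments are enabled (they all are by default in MySQL 8.0; in 5.7 you may need to enable them in setup_instruments); note these figures cover what MySQL accounts for internally, not the full process RSS:

-- total memory the server knows it has allocated
SELECT * FROM sys.memory_global_total;

-- the biggest consumers by current allocation
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;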

It would be an interesting project to see how low I can drive MySQL 5.7 and MySQL 8 memory consumption, but I will leave that for another time. Here are the settings I used for this test:

[mysqld]
innodb_buffer_pool_size=256M
innodb_buffer_pool_instances=1
innodb_log_file_size=1G
innodb_flush_method=O_DIRECT
innodb_numa_interleave=1
innodb_flush_neighbors=0
log_bin
server_id=1
expire_logs_days=1
log_output=file
slow_query_log=ON
long_query_time=0
log_slow_rate_limit=1
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
userstat=1

Summary:  When moving to MySQL 8 in a development environment, keep in mind it will require more memory than MySQL 5.7 with the same settings.

Aug
12
2019
--

ProxySQL 2.0.5 and proxysql-admin tool Now Available

ProxySQL 2.0.5, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.5 which fixes many bugs and introduces a number of features and enhancements. The proxysql-admin tool has been enhanced to support the following new options:

  • The --add-query-rule option creates query rules for synced MySQL users. This option is only applicable for the singlewrite mode and works together with the --syncusers and --sync-multi-cluster-users options.
  • The --force option skips existing configuration checks in the mysql_servers, mysql_users and mysql_galera_hostgroups tables. This option will only work together with the --enable option: proxysql-admin --enable --force.
  • The --update-mysql-version option updates the mysql-server_version variable in ProxySQL with the version from a node in Percona XtraDB Cluster.

The ProxySQL 2.0.5 source and binary packages available from the Percona download page for ProxySQL include proxysql-admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.5 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL 2.0.5 Improvements

  • PSQLADM-49: Create rules for --syncusers. When running with --syncusers or --sync-multi-cluster-users, the --add-query-rule option can now be specified to add the singlewriter query rules for the new users.
  • PSQLADM-51: Update mysql-server_version variable. The --update-mysql-version command has been added to set the mysql-server_version global variable in ProxySQL. This will take the version from a node in the cluster and set it in ProxySQL.

Bugs Fixed

  • PSQLADM-190: The --remove-all-servers option did not work on enable. When running with proxysql-cluster, the galera hostgroups information was not replicated which could result in failing to run --enable on a different ProxySQL node.  The --force option was added for --enable to be able to ignore any errors and always configure the cluster.
  • PSQLADM-199: query-rules removed during proxysql-cluster creation with PXC operator. When using the PXC operator for Kubernetes and creating a proxysql-cluster, the query rules could be removed.  The code was modified to merge the query rules (rather than deleting and recreating).  If the --force option was specified, then a warning was issued in case any existing rules were found; otherwise an error was issued. The --disable-updates option was added to ensure that ProxySQL cluster updates did not interfere with the current command.
  • PSQLADM-200: users were not created for --syncusers with PXC operator. When using the PXC operator for Kubernetes, the --syncusers command was run but the mysql_users table was not updated. The fix for PSQLADM-199 that suggested to use --disable-updates also applies here.

ProxySQL is available under Open Source license GPLv3.

Jul
17
2019
--

MySQL: Disk Space Exhaustion for Implicit Temporary Tables

I was recently faced with a real issue of completely exhausting the disk space on MySQL. This was a serious issue because of the continuous outages of the service: the customer had to constantly restart the server and wait for the next outage.

What was happening? In this article, I’m going to explain it and propose solutions.

Implicit temporary tables

MySQL needs to create implicit temporary tables to resolve some kinds of queries. Queries that require a sorting stage most of the time need to rely on a temporary table, for example when you use GROUP BY, ORDER BY or DISTINCT. Such queries are executed in two stages: the first gathers the data and puts it into a temporary table, the second executes the sorting on the temporary table.

A temporary table is also needed for the evaluation of some UNION statements, for VIEWs that cannot use the merge algorithm, for derived tables when using subqueries, for multiple-table UPDATE, and in some other cases.

If the temporary table is small, it can be created in memory; otherwise, it’s created on disk. Needless to say, an in-memory temporary table is faster. MySQL creates an in-memory table, and if it becomes too large it is converted to an on-disk table. The maximum size for in-memory temporary tables is defined by the tmp_table_size or max_heap_table_size value, whichever is smaller. The default size in MySQL 5.7 is 16MB. If you run queries on a large amount of data, or if you have not optimized queries, you can increase the variables. When setting the threshold, take into consideration the available RAM and the number of concurrent connections you have during your peaks. You cannot increase the variables indefinitely, as at some point you’ll need to let MySQL use on-disk temporary tables.

Note: the temporary table is created on-disk if the tables involved have TEXT or BLOB columns, even if the size is less than the configured threshold.
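
A quick way to check where the thresholds currently sit and whether your workload is already spilling to disk; a minimal sketch, where the 64M value is only an illustration to adapt to your RAM and connection count:

-- current in-memory temporary table limits (the lower of the two applies)
SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';

-- how many implicit temporary tables were created, and how many went to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

-- raise both limits for the current session only
SET SESSION tmp_table_size = 64 * 1024 * 1024;
SET SESSION max_heap_table_size = 64 * 1024 * 1024;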

Temporary tables storage engine

Up to MySQL 5.6, all on-disk temporary tables are created as MyISAM. The temporary table is created in-memory or on-disk, depending on the configuration, and it’s dropped immediately at the end of the query. From MySQL 5.7, on-disk temporary tables are created as InnoDB by default, so they can rely on that engine’s more advanced features.

The new default is the best option for the overall performance and should be used in the majority of the cases.

A new configuration variable is available to set the storage engine for the temporary tables:  internal_tmp_disk_storage_engine. The variable can be set to  innodb (default if not set) or myisam.

The potential problem with InnoDB temporary tables

Although using InnoDB is the best for performance, a new potential issue could arise. In some particular cases, you can have disk exhaustion and server outage.

Like any other InnoDB table in the database, temporary tables have their own tablespace file. The new file is in the data directory together with the general tablespace, with the name ibtmp1, and it stores all the tmp tables. A tablespace file cannot be shrunk, and it grows constantly as long as you don’t run a manual OPTIMIZE TABLE. ibtmp1 is no different, except that you cannot run OPTIMIZE on it at all. The only way to shrink ibtmp1 back to zero is to restart the server.

Fortunately, even if the file cannot shrink, after the execution of a query the temporary table is automatically dropped and the space in the tablespace can be reused for another incoming query.
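
One way to keep an eye on how large ibtmp1 has grown, and how much of it is currently free for reuse, is to query INFORMATION_SCHEMA; a minimal sketch for MySQL 5.7:

SELECT FILE_NAME, TABLESPACE_NAME,
       ROUND((TOTAL_EXTENTS * EXTENT_SIZE) / 1024 / 1024) AS total_mb,
       ROUND((FREE_EXTENTS  * EXTENT_SIZE) / 1024 / 1024) AS free_mb
FROM INFORMATION_SCHEMA.FILES
WHERE TABLESPACE_NAME = 'innodb_temporary';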

Let’s think now about the following case:

  • you have non-optimized queries that require the creation of very large on-disk tmp tables
  • you have optimized queries, but they are creating very large on-disk tmp tables because you are deliberately doing computation on a very large dataset (statistics, analytics)
  • you have a lot of concurrent connections running the same queries with tmp table creation
  • you don’t have a lot of free space in your volume

In such a situation, it’s easy to understand that the ibtmp1 file can grow considerably and easily exhaust the free space. In the case described above, this was happening several times a day, and the server had to be restarted in order to completely shrink the ibtmp1 tablespace.

The concurrent queries don’t even have to be launched at exactly the same time. Since a query with a large temporary table will take several seconds or minutes to execute, it is sufficient to have queries launched at different times while the preceding ones are still running. Also, consider that every connection creates its own temporary tables, so the exact same query run from another connection will create another copy of the same temporary table in the tablespace. Exhausting the disk space is very easy with non-shrinkable files!

So, what to do to avoid disk exhaustion and outages?

The trivial solution: use a larger disk

This is really trivial and can solve the problem, but it is not the optimal solution. In fact, it’s not so easy to figure out what your new disk size should be. You can guess by increasing the disk size step by step, which is quite easy to do if your environment is in the cloud or you have virtual appliances on a very large platform. But it’s not easy to do in on-premise environments.

But with this solution, you can risk having unneeded expenses, so keep that in mind.

You can also move the ibtmp1 file on a dedicated large disk, by setting the following configuration variable:

[mysqld]
innodb_temp_data_file_path = ../../tmp/ibtmp1:12M:autoextend

A MySQL restart is required.

Please note that the path has to be specified as relative to the data directory.

Set an upper limit for ibtmp1 size

For example:

[mysqld]
innodb_temp_data_file_path = ibtmp1:12M:autoextend:max:10G

In this case, the file cannot grow more than 10GB. You can easily eliminate the outages, but this is a dangerous solution. When the data file reaches the maximum size, queries fail with an error indicating that the table is full. This is probably bad for your applications.

Step back to MyISAM for on-disk temporary tables

This solution may seem counterintuitive, but it could be the best way to avoid the outages in a matter of seconds, while still letting MySQL create all the temporary tables it needs.

You can set the following variable into my.cnf:

internal_tmp_disk_storage_engine = MYISAM

Since the variable is dynamic you can set it also at runtime:

SET GLOBAL internal_tmp_disk_storage_engine = MYISAM;

Stepping back to MyISAM, you will considerably decrease the possibility of completely filling your disk space. In fact, the temporary tables will be created as separate files and immediately dropped at the end of each query. No more issues with a forever-increasing file.

There is still the possibility of hitting the same issue if the queries run at exactly the same time, or very close to it, but in my real-world case this was the solution that avoided all the outages.

Optimize your queries

This is the most important thing to do. After switching the storage engine back to MyISAM to mitigate the outages, you absolutely have to take the time to analyze the queries.

The goal is to decrease the size of the on-disk temporary tables. It’s not the aim of this article to explain how to investigate the queries, but you can rely on the slow log, on a tool like pt-query-digest and on EXPLAIN.
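
For example, EXPLAIN flags the queries that need an implicit temporary table in the Extra column; a minimal sketch with a purely hypothetical orders table:

-- "Using temporary" (and often "Using filesort") in the Extra column marks
-- a query that materializes an implicit temporary table
EXPLAIN
SELECT customer_id, COUNT(*) AS orders_count
FROM orders
GROUP BY customer_id
ORDER BY orders_count DESC;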

Some tips:

  • create missing indexes on the tables
  • add more filters in the queries to gather less data, if you don’t really need it
  • rewrite queries to optimize the execution plan
  • if you have very large queries on purpose, you can use a queue manager in your applications to serialize their executions or to decrease the concurrency

This will be the longest activity, but hopefully, after all the optimizations, you can set the temporary storage engine back to InnoDB for better performance.

Conclusion

Sometimes improvements can have unexpected side effects. The InnoDB storage engine for on-disk temporary tables is a good improvement, but in some particular cases, for example if you have non-optimized queries and little free space, you can have outages because of a “disk full” error. Switching the tmp storage engine back to MyISAM is the fastest way to avoid outages, but optimizing the queries is the most important thing to do as soon as possible in order to return to InnoDB. And yes, even a larger or dedicated disk may help. It’s a trivial suggestion, I know, but it can definitely help a lot.

By the way, there’s a feature request about the issue: https://bugs.mysql.com/bug.php?id=82556

Further readings:
https://www.percona.com/blog/2007/01/19/tmp_table_size-and-max_heap_table_size/
https://www.percona.com/blog/2017/12/04/internal-temporary-tables-mysql-5-7/
http://mysqlserverteam.com/mysql-5-7-innodb-intrinsic-tables/
https://dev.mysql.com/doc/refman/5.7/en/internal-temporary-tables.html

 

Jul
03
2019
--

Percona Server for MySQL Highlights – binlog_space_limit

I think it is often confusing to compare upstream MySQL and Percona Server for MySQL, and some helpful information can be found in the introductory notes. But what does that really mean for an ordinary DBA, especially if none of the known big extra features are important in a particular use case?

In this article, I would like to start a series of short blog posts highlighting small, often less known, but potentially useful features, available in Percona Server for MySQL. Let’s start with a relatively new feature.

The problem: limiting the disk space used by binary logs

Historically, dealing with binary logs and the disk space used by them was quite challenging. The only thing that lets you control this is the expire_logs_days variable, which can be useless in some scenarios (binlogs growing faster than they can be purged) and does not solve all problems. Therefore, in order to avoid a full disk and server crashes, it is often necessary to create smart scripts that monitor the situation and call PURGE BINARY LOGS TO when necessary.
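
Such scripts typically end up issuing statements like the following; the file name and retention period are only illustrations:

PURGE BINARY LOGS TO 'mysql-bin.000123';          -- purge everything up to a given file
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;  -- or everything older than a point in time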

This isn’t very convenient, and so a feature request appeared in September 2012 (https://bugs.mysql.com/bug.php?id=66733) to take the already existing logic for relay logs and apply it to binary logs as well. It has not been addressed upstream yet.

We attempted to deal with it in Percona Server by introducing the max_binlog_files variable (Percona Server 5.5.27, October 2012). This was a good step forward, but still not as good as expected. Even though limiting the number of binary log files gives much better control over the disk space used by them, it has one serious downside: it does not take into account situations where binary logs may be a lot smaller or a lot bigger than max_binlog_size specifies.

The most common such situations are when the server rotates to a new binary log early, due to FLUSH LOGS being issued or due to restarts. A couple of such FLUSH commands in a row may completely ruin the ability to do point-in-time recovery by losing a large part of the binary log history. The opposite case is when a binary log gets bigger than the maximum size due to large transactions (which are never split).

Recently, a new variable in MySQL 8.0, binlog_expire_logs_seconds, has replaced expire_logs_days, and a period of 30 days became the new default (previously it was unlimited). It is certainly better to have more precise control over the binary log history than whole days; for example, we can set the rotation period to 2 days, 6 hours and 30 minutes, or whatever best matches the backup schedule. Again, a good time limit on binary logs may not be sufficient in all cases and won’t help much when it is hard to predict how much data will be written or changed over time. Only space-based limits can actually save us from hitting the disk space problem.
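
For instance, the “2 days, 6 hours and 30 minutes” above translates to seconds like this; a minimal MySQL 8.0 sketch:

-- 2 days, 6 hours and 30 minutes expressed in seconds (196200)
SET PERSIST binlog_expire_logs_seconds = 2*86400 + 6*3600 + 30*60;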

Expected solution is there

Finally, something better has arrived: the binlog_space_limit variable, introduced in Percona Server 5.7.23 in September 2018 (and ported to 8.0 as well). The idea is very simple: it does the same thing for binary logs as relay_log_space_limit does for the relay logs. Finally, both sides of binary logging have comparable functionality when it comes to controlling the space on the disk.

How to use it

When the binlog_space_limit variable is set to 0 (default), the feature is turned off and no limit is set apart from the one imposed by expire_logs_days. I think it is a good idea to dedicate a separate disk partition for the binary logs (or binary and redo logs), big enough to keep as many of them as needed for a good backup and replication strategy, and to set the variable to around 95% of this partition size. Remember that even though max_binlog_size defines the maximum size of a single binary log, this is not a hard limit, because a single transaction is never split between multiple binlog files and has to be written to the same binary log as a whole.
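
Before picking a value, it helps to see how much space the binary logs occupy today, and to confirm the setting afterwards; a minimal sketch:

SHOW BINARY LOGS;                                 -- lists each binlog file and its size
SHOW GLOBAL VARIABLES LIKE 'binlog_space_limit';  -- 0 means the feature is disabled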

Let’s say max_binlog_size is set to 1GB (the default and maximum), and binlog_space_limit is set to 10GB, but a really big transaction happens to change 3GB of data. With binlog_space_limit set, in order to store that new 3GB in the binary log (when the total size of the logs is going to hit the limit), the server will first remove as many old binary logs as needed, in this case up to the three oldest ones. This way there is no risk of hitting a disk-full situation even if the free disk space is smaller than the new binary log addition would need. But still, I would recommend reserving at least a little more than 1GB of free disk space on such a partition, just in case.

Example settings for a 16GB disk partition, dedicated for binary logs only, may look like this:

[mysqld]
max_binlog_size = 512M
binlog_space_limit = 15G

One thing that could still be improved about this functionality is to make the variable dynamic. But to stay consistent with the one for relay logs, it would be best if upstream made that one dynamic first.

Best binlog rotation strategy?

There may still be cases where you would rather prefer to stay with a time-based limit. Let’s imagine a situation where full backups are made every week, and binlog_space_limit is set to hold an average total of two weeks of writes, so all should be safe. However, some heavy maintenance tasks alter a large portion of the data, creating more binary logs in one day than usually accumulate in two weeks. This may also break the point-in-time recovery capability. So if possible, it may be better to give an ample supply of space to the disk partition and specify an 8-day rotation via expire_logs_days or binlog_expire_logs_seconds instead.

Do you like it?

I am very interested to find out if this feature was already noticed by the community and how useful you think it is. Your comments are more than welcome!

Jul
02
2019
--

ALTERS, Foreign Keys, and Metadata Locks, oh my!

I’m sure you’ve seen it – you kick off an ALTER and you get the dreaded “waiting on metadata lock”. In many cases, this is expected if you are actively working on the table. However, I recently had a case with a client where the table being altered was rarely updated and very small (<100 rows). The ALTER just sat for hours during a load test (more on this shortly) and never completed until the load test was stopped. Upon stopping the load test, the ALTER completed in less than a second as expected. So what was going on here?

Check Foreign Keys

My first instinct, whenever there is odd locking, is to check foreign keys. Naturally, this table had some FKs referencing a much busier table. However, this behavior still seemed rather strange. When running an ALTER against a table, there is a request for a SHARED_UPGRADABLE metadata lock against the child table. There is also a SHARED_READ_ONLY metadata lock against the parent, and this is where things can get messy.

Let’s take a look at how MDLs are acquired per the documentation (https://dev.mysql.com/doc/refman/en/metadata-locking.html):

If there are multiple waiters for a given lock, the highest-priority lock request is satisfied first, with an exception related to the max_write_lock_count system variable. Write lock requests have higher priority than read lock requests.

It is important to note that the lock order is serialized:

Statements acquire metadata locks one by one, not simultaneously, and perform deadlock detection in the process.

Normally, when thinking of a queue, we think of a FIFO process.  If I issue the following three statements (in this order), they would complete in this order:

  1. INSERT INTO parent…
  2. ALTER TABLE child…
  3. INSERT INTO parent…

However, as the child ALTER statement requests a read lock against parent, the two inserts will complete PRIOR to the ALTER despite the ordering.  Here is a sample scenario in which this can be demonstrated:

Table setup and initial population:

CREATE TABLE `parent` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `val` varchar(10) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;
CREATE TABLE `child` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `parent_id` int(11) DEFAULT NULL,
  `val` varchar(10) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_parent` (`parent_id`),
  CONSTRAINT `fk_parent` FOREIGN KEY (`parent_id`) REFERENCES `parent` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION
) ENGINE=InnoDB;
INSERT INTO `parent` VALUES (1, "one"), (2, "two"), (3, "three"), (4, "four");

Session 1:

start transaction;
update parent set val = "four-new" where id = 4;

Session 2:

alter table child add index `idx_new` (val);

Session 3:

start transaction;
update parent set val = "three-new" where id = 3;

At this point, session 1 has an open transaction and is in sleep with a write metadata lock granted on parent.  Session 2 has an upgradeable (write) lock granted on child and is waiting on a read lock on the parent. Finally, session 3 has a granted write lock against parent:

mysql> select * from performance_schema.metadata_locks;
+-------------+-------------+-------------------+---------------+-------------+
| OBJECT_TYPE | OBJECT_NAME | LOCK_TYPE         | LOCK_DURATION | LOCK_STATUS |
+-------------+-------------+-------------------+---------------+-------------+
| TABLE       | child       | SHARED_UPGRADABLE | TRANSACTION   | GRANTED     | <- ALTER (S2)
| TABLE       | parent      | SHARED_WRITE      | TRANSACTION   | GRANTED     | <- UPDATE (S1)
| TABLE       | parent      | SHARED_WRITE      | TRANSACTION   | GRANTED     | <- UPDATE (S3)
| TABLE       | parent      | SHARED_READ_ONLY  | STATEMENT     | PENDING     | <- ALTER (S2)
+-------------+-------------+-------------------+---------------+-------------+

Notice, the only session with a pending lock status is session 2 (the ALTER).  Session 1 and session 3 (issued before and after the ALTER respectively) both have been granted the write locks.  Where the ordering breaks down is when the commit happens on session 1. When thinking about an ordered queue, one would expect session 2 to acquire the lock and things would just move on.  However, due to the priority nature of the metadata lock system, session 2 still waits and now session 3 has the lock.

If another write session comes in and starts a new transaction and acquires a write lock against the parent table, then even when session 3 completes, the ALTER will still be blocked.  You can see where this is going…

As long as I keep an active transaction that has an MDL against the parent table open, the ALTER on the child table will never complete.  Making this worse, since the write lock on the child table was successful (but the full statement is waiting on acquiring the parent read lock), all incoming read requests against the child table will be blocked!!

Also, think about how you normally try to troubleshoot a statement that won’t complete.  You look at transactions (both in the processlist and InnoDB status) that have been open for a longer time.  But since the blocking thread is now younger than the ALTER thread, the oldest transaction/thread you will see is the ALTER.  Queue hair pulling!!

This is exactly what was happening in this scenario.  In preparation for a release, our client was running their ALTER statements in conjunction with a load test (a very good practice!) to ensure a smooth release.  The problem was that the load test kept an active write transaction open against the parent table. That isn’t to say it just kept writing, but rather there were multiple threads and one was ALWAYS active.  This prevented the ALTER from ever completing and blocked ensuing read requests to the relatively static child table.

Fortunately, there is a solution to this problem (other than banishing FKs from the schema).  The variable max_write_lock_count (https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_write_lock_count) can be used to allow read locks to be granted ahead of write locks after a threshold of consecutive write locks.  By default, this variable is set to 18,446,744,073,709,551,615. So you just have to wait for 18 quintillion write requests to complete before the read would be granted.  To put that in perspective, if you were issuing 10,000 writes/sec against that table, your read would be blocked for 58 million years.  

To prevent this from happening, you can simply reduce max_write_lock_count to a small number (say 10?) and after every 10 write locks get acquired, the MDL subsystem will look for pending read locks, grant one, then move back to writes.  Problem solved!

Being a dynamic variable, this can be adjusted at runtime to allow the waiting ALTER to complete.  In general, this is more of an edge case as there normally is some time in between writes to a table for read locks to get acquired.  However, if your use case keeps concurrent sessions running that ALWAYS have a transaction against a table that is referenced as an FK, you could see this situation crop up.  Fortunately, the fix is straightforward and can be done on the fly!
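
A minimal sketch of doing that at runtime; the threshold of 10 is only an illustration:

SHOW GLOBAL VARIABLES LIKE 'max_write_lock_count';  -- defaults to 18446744073709551615
SET GLOBAL max_write_lock_count = 10;               -- after 10 consecutive write locks, grant a pending read lock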

NOTE:  This troubleshooting was made possible through the performance schema and enabling the metadata_locks table as described here: https://dev.mysql.com/doc/refman/5.7/en/metadata-locks-table.html
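
If the performance_schema.metadata_locks table comes back empty on your instance, the MDL instrument may simply not be enabled; a minimal sketch of turning it on at runtime (it is disabled by default in MySQL 5.7):

UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME = 'wait/lock/metadata/sql/mdl';

SELECT OBJECT_TYPE, OBJECT_NAME, LOCK_TYPE, LOCK_DURATION, LOCK_STATUS
FROM performance_schema.metadata_locks;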

May
30
2019
--

Percona Monitoring and Management (PMM) 2 Beta Is Now Available

We are pleased to announce the release of PMM 2 Beta!  PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL, MongoDB, and PostgreSQL performance.

  • Query Analytics:
    • MySQL and MongoDB – Slow log, PERFORMANCE_SCHEMA, and Profiler data sources
    • Support for large environments – default view all queries from all instances
    • Filtering – display only the results matching filters such as the schema name or the server instance
    • Sorting and more columns – now sort by any column.
    • Modify Columns – Add one or more columns for any field exposed by the data source
    • Sparklines –  restyled sparkline targeted at making data representation more accurate
  • Labels – Prometheus now supports auto-discovered and custom labels
  • Inventory Overview Dashboard – Displays the agents, services, and nodes which are registered with PMM Server
  • Environment Overview Dashboard – See issues at a glance across multiple servers
  • API – View versions and list hosts using the API
  • MySQL, MongoDB, and PostgreSQL Metrics – Visualize database metrics over time
  • pmm-agent – Provides secure remote management of the exporter processes and data collectors on the client

PMM 2 Beta is still a work in progress – you may encounter some bugs and missing features. We are aware of a number of issues, but please report any and all that you find to Percona’s JIRA.

This release is not recommended for Production environments.

PMM 2 is designed to be used as a new installation – please don’t try to upgrade your existing PMM 1 environment.

Query Analytics Dashboard

Query Analytics Dashboard now defaults to display all queries on each of the systems that are configured for MySQL PERFORMANCE_SCHEMA, Slow Log, and MongoDB Profiler, and includes comprehensive filtering capabilities.

Query Analytics Overview

You’ll recognize some of the common elements in PMM 2 Query Analytics such as the Load, Count, and Latency columns. However, there are new elements such as the filter box and more arrows on the columns:

Query Detail

Query Analytics continues to deliver detailed information regarding individual query performance

Filter and Search By

There is a filtering panel on the left, or you can use the search bar to set filters using key:value syntax. For example, if I’m interested in just the queries related to the mysql-sl2 server, I could type d_server:mysql-sl2:

Sort by any column

This is a much-requested feature from PMM Query Analytics and we’re glad to announce that you can now sort by any column! Just click the small arrow to the right of the column name and:

Sparklines

As you may have already noticed, we have changed the sparkline representation. New sparklines are not point-based lines, but are interval-based, and look like a staircase line with flat values for each of the displayed periods:

We also position a single sparkline for only the left-most column and render numeric values for all remaining columns.

Add extra columns

Now you can add a column for each additional field which is exposed by the data source. For example, you can add Rows Examined by clicking the + sign and typing or selecting from the available list of fields:

MySQL Query Analytics Slow Log source

We’ve increased our MySQL support to include both PERFORMANCE_SCHEMA and Slow log – and if you’re using Percona Server with the Extended Slow Log format, you’ll be able to gain deep insight into the performance of individual queries, for example, InnoDB behavior.  Note the difference between the detail available from PERFORMANCE_SCHEMA vs Slow Log:

PERFORMANCE_SCHEMA:

Slow Log:

MongoDB Metrics

Support for MongoDB Metrics included in this release means you can add a local or remote MongoDB instance to PMM 2 and take advantage of the following view of MongoDB performance:

PostgreSQL Metrics

In this release, we’re also including support for PostgreSQL Metrics. We’re launching PMM 2 Beta with just the PostgreSQL Overview dashboard, but we have others under development, so watch for new Dashboards to appear in subsequent releases!

Environment Overview Dashboard

This new dashboard provides a bird’s-eye view, showing a large number of hosts at once. It allows you to easily figure out the hosts which have issues, and move onto other dashboards for a deeper investigation.

The charts presented show the top five hosts by different parameters:

The eye-catching colored hexagons with statistical data show the current values of parameters and allow you to drill-down to a dashboard which has further details on a specific host.

Labels

An important concept we’re introducing in PMM 2 is that when a label is assigned it is persisted in both the Metrics (Prometheus) and Query Analytics (Clickhouse) databases. So, when you browse a target in Prometheus you’ll notice many more labels appear – particularly the auto-discovered (replication_set, environment, node_name, etc.) and (soon to be released) custom labels via custom_label.

Inventory Dashboard

We’ve introduced a new dashboard with several tabs so that users are better able to understand which nodes, agents, and services are registered against PMM Server. We have an established hierarchy with Node at the top, then Service and Agents assigned to a Node.

  • Nodes – Where the service and agents will run. Assigned a node_id, associated with a machine_id (from /etc/machine-id)

    • Examples: bare metal, virtualized, container
  • Services – Individual service names and where they run, against which agents will be assigned. Each instance of a service gets a service_id value that is related to a node_id
    • Examples: MySQL, Amazon Aurora MySQL
    • You can also use this feature to support multiple mysqld instances on a single node, for example: mysql1-3306, mysql1-3307
  • Agents – Each binary (exporter, agent) running on a client will get an agent_id value
    • pmm-agent is the top of the tree, assigned to a node_id
    • node_exporter is assigned to pmm-agent agent_id
    • mysqld_exporter and QAN MySQL Perfschema are assigned to a service_id
    • Examples: pmm-agent, node_exporter, mysqld_exporter, QAN MySQL Perfschema

You can now see which services, agents, and nodes are registered with PMM Server.

Nodes

In this example I have PMM Server (docker) running on the same virtualized compute instance as my Percona Server 5.7 instance, so PMM treats this as two different nodes.

Services

Agents

For a monitored Percona Server instance, you’ll see an agent for each of these:

  1. pmm-agent
  2. node_exporter
  3. mysqld_exporter
  4. QAN Perfschema

API

We are exposing an API for PMM Server! You can view versions, list hosts, and more…

The API is not guaranteed to work until GA release – so be prepared for some errors during Beta release.

Browse the API using Swagger at /swagger

Installation and configuration

The default PMM Server credentials are:

username: admin
password: admin

Install PMM Server with docker

The easiest way to install PMM Server is to deploy it with Docker. Running the PMM 2 Docker container with PMM Server can be done by the following commands (note the version tag of 2.0.0-beta1):

docker create -v /srv --name pmm-data-2-0-0-beta1 perconalab/pmm-server:2.0.0-beta1 /bin/true
docker run -d -p 80:80 -p 443:443 --volumes-from pmm-data-2-0-0-beta1 --name pmm-server-2.0.0-beta1 --restart always perconalab/pmm-server:2.0.0-beta1

Install PMM Client

Since PMM 2 is still not GA, you’ll need to leverage our experimental release of the Percona repository. You’ll need to download and install the official percona-release package from Percona, and use it to enable the Percona experimental component of the original repository. See percona-release official documentation for further details on this new tool.

Specific instructions for a Debian system are as follows:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

Now enable the experimental repo:

sudo percona-release disable all
sudo percona-release enable original experimental

Install pmm2-client package:

apt-get update
apt-get install pmm2-client

Users who have previously installed pmm2-client alpha version should remove the package and install a new one in order to update to beta1.

Please note that leaving experimental repository enabled may affect further package installation operations with bleeding edge software that may not be suitable for Production. You can revert by disabling experimental via the following commands:

sudo percona-release disable original experimental
sudo apt-get update

Configure PMM

Once PMM Client is installed, run the pmm-admin config command with your PMM Server IP address to register your Node:

# pmm-admin config --server-insecure-tls --server-address=<IP Address>:443

You should see the following:

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.

Adding MySQL Metrics and Query Analytics

A MySQL server can be added for monitoring in the usual way. Here is a command which adds it using the PERFORMANCE_SCHEMA source:

sudo pmm-admin add mysql --use-perfschema --username=pmm --password=pmm

where username and password are credentials for accessing MySQL.

The syntax to add MySQL services (Metrics and Query Analytics) using the Slow Log source is the following:

sudo pmm-admin add mysql --use-slowlog --username=pmm --password=pmm

When the server is added, you can check your MySQL dashboards and Query Analytics in order to view its performance information!

Adding MongoDB Metrics and Query Analytics

You can add MongoDB services (Metrics and Query Analytics) with a similar command:

pmm-admin add mongodb --use-profiler --use-exporter --username=pmm --password=pmm

Adding PostgreSQL monitoring service

You can add PostgreSQL service as follows:

pmm-admin add postgresql --username=pmm --password=pmm

You can then check your PostgreSQL Overview dashboard.

About PMM

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MongoDB®, and PostgreSQL® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

May
28
2019
--

ProxySQL 2.0.4 and proxysql-admin tool Now Available

ProxySQL 2.0.4, released by ProxySQL, is now available for download in the Percona Repository along with Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

This release includes ProxySQL 2.0.4 which fixes many bugs and introduces a number of features and enhancements. The proxysql-admin tool has not changed since the previous release.

The ProxySQL 2.0.4 source and binary packages available from the Percona download page for ProxySQL include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.4 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

ProxySQL is available under Open Source license GPLv3.

May
27
2019
--

Percona Server for MySQL 5.7.26-29 Is Now Available

Percona announces the release of Percona Server for MySQL 5.7.26-29 on May 27, 2019 (downloads are available here and from the Percona Software Repositories). This release merges changes of MySQL 5.7.26, including all the bug fixes in it. Percona Server for MySQL 5.7.26-29 is now the current GA release in the 5.7 series. All of Percona’s software is open-source and free.

New Features:

  • A new Audit_log_buffer_size_overflow status variable has been implemented to track when an Audit Log Plugin entry was either dropped or written directly to the file because its size was bigger than the audit_log_buffer_size value (see the example below).
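
A quick way to check the new counter; a minimal sketch, assuming the Audit Log Plugin is installed:

SHOW GLOBAL STATUS LIKE 'Audit_log_buffer_size_overflow';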

Bug Fixes:

  • TokuDB storage engine would assert on load when used with jemalloc 5.x. Bug fixed #5406.
  • a read-write workload on compressed InnoDB tables could cause an assertion error. Bug fixed #3581.
  • using TokuDB or MyRocks native partitioning and index_merge access method could lead to a server crash. Bugs fixed #5206, #5562.
  • a stack buffer overrun could happen if the redo log encryption with key rotation was enabled. Bug fixed #5305.
  • TokuDB and MyRocks native partitioning handler objects were allocated from a wrong memory allocator. Memory was released only on shutdown and concurrent access to global memory allocator caused memory corruptions and therefore crashes. Bugs fixed #5508, #5525.
  • enabling redo log encryption resulted in redo log being written unencrypted. Bug fixed #5547.
  • if there are multiple row versions in InnoDB, reading one row from PK may have O(N) complexity and reading from secondary keys may have O(N^2) complexity. Bugs fixed #4712, #5450 (upstream #84958).
  • setting the log_slow_verbosity to include innodb value and enabling the slow_query_log could lead to a server crash. Bug fixed #4933.
  • the page cleaner could sleep for a long time when the system clock was adjusted to an earlier point in time. Bug fixed #5221 (upstream #93708).
  • executing SHOW BINLOG EVENTS from an invalid position could result in a segmentation fault on 32-bit machines. Bug fixed #5243.
  • BLOB entries in the binary log could become corrupted in a case when a database with Blackhole tables served as an intermediate binary log server in a replication chain. Bug fixed #5353 (upstream #93917).
  • when Audit Log Plugin was enabled, the server could use a lot of memory when handling large queries. Bug fixed #5395.
  • XtraDB changed page tracking was missing pages changed by the in-place DDL. Bug fixed #5447.
  • innodb_encrypt_tables variable accepted FORCE option only inside quotes as a string. Bug fixed #5538.
  • enabling redo log encryption and XtraDB changed page tracking together would result in the error log flooded with decryption errors. Bug fixed #5541.
  • system keyring keys initialization wasn’t thread safe. Bug fixed #5554.
  • when using the Docker image, if the root password set in the mounted .cnf configuration file and the one specified with the MYSQL_ROOT_PASSWORD option are different, the password from the MYSQL_ROOT_PASSWORD option will be used. Bug fixed #5573.
  • long running ALTER TABLE ADD INDEX could cause a semaphore wait > 600 assertion. Bug fixed #3410 (upstream #82940).

Other bugs fixed: #5537, #5007 (upstream #93164), #5018, #5561, #5570, #5578, #5610, #5441, and #5442.

This release also contains the fixes for the following security issues: CVE-2019-2632, CVE-2019-1559, CVE-2019-2628, CVE-2019-2581, CVE-2019-2683, CVE-2019-2592, CVE-2019-262, and CVE-2019-2614.

Find the release notes for Percona Server for MySQL 5.7.26-29 in our online documentation. Report bugs in the Jira bug tracker.
