Sep 20, 2018

ProxySQL 1.4.10 and Updated proxysql-admin Tool Now in the Percona Repository

ProxySQL 1.4.10, released by ProxySQL, is now available for download in the Percona Repository, along with an updated version of Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and its forks (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

The ProxySQL 1.4.10 source and binary packages available at https://percona.com/downloads/proxysql include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 1.4.10 are available as well: https://hub.docker.com/r/percona/proxysql/. You can download the original ProxySQL from https://github.com/sysown/proxysql/releases.

Improvements

  • PSQLADM-12: Implemented the writer-is-reader option in proxysql-admin. This is now a text option: ‘always’, ‘never’, and ‘ondemand’ (see the example after this list)
  • PSQLADM-64: Added the --sync-multi-cluster-users option, which uses the same function as --sync-users but does not delete users on ProxySQL that don’t exist on MySQL
  • PSQLADM-90: Added testsuites for host priority/slave/loadbal/writer-is-reader features
  • Additional debugging support
    An additional --debug flag on scripts prints more output. All SQL calls are now logged if debugging is enabled.
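
For illustration, these new options might be invoked along these lines (a sketch based on the release notes above – check proxysql-admin --help on your version for the exact syntax):

$ proxysql-admin --enable --writer-is-reader=ondemand
$ proxysql-admin --debug --sync-multi-cluster-users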

Tool Enhancements

  • proxysql-status
    proxysql-status now reads the credentials from the proxysql-admin.cnf file. It is possible to look only at certain tables (--files, --main, --monitor, --runtime, --stats). We also added the ability to filter based on the table name (--table); see the usage example after this list
  • tests directory
    The proxysql-admin-testsuite.sh script can now be used to create test clusters:

    proxysql-admin-testsuite.sh <workdir> --no-test --cluster-one-only

    This option creates a 3-node PXC cluster with one async slave and also starts proxysql. Regression test suites were added as well.

  • tools directory
    Added extra tools that can be used for debugging (mysql_exec, proxysql_exec, enable_scheduler, and run_galera_checker).
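
For example, a hypothetical proxysql-status invocation using the new filters (the option names come from the notes above; the table argument shown is an assumption):

$ proxysql-status --runtime --table=mysql_servers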

Bug Fixes

  • PSQLADM-73: proxysql-admin did not check that the monitor user had been configured on the PXC nodes.
  • PSQLADM-82: the --without-check-monitor-user option still checked the monitor user, even when the option was enabled. This option has been replaced with --use-existing-monitor-password.
  • PSQLADM-83: proxysql_galera-checker could hang if there was no scheduler entry.
  • PSQLADM-87: in some cases, proxysql_galera_checker was not moving a node to OFFLINE_SOFT if pxc_maint_mode was set to “maintenance”.
  • PSQLADM-88: proxysql_node_monitor was searching among all nodes, not just the read hostgroup.
  • PSQLADM-91: Nodes in the priority list were not being picked.
  • PSQLADM-93: If mode=’loadbal’, then the read_hostgroup setting was used from the config file, rather than being set to -1.
  • PSQLADM-96: CentOS used /usr/share/proxysql rather than /var/lib/proxysql.
  • PSQLADM-98: In some cases, checking the PXC node status could stall (this call now uses a TIMEOUT).

ProxySQL is available under the GPLv3 open source license.

Sep 20, 2018

MyRocks Disk Full Edge Case

The RocksDB engine—and its MySQL implementation, MyRocks—is a very good alternative engine for MySQL. It has proven to be very efficient and stable for many workloads, including large-scale ones. However, it is still a relative newcomer in the MySQL ecosystem, and has only a small fraction of the adoption rate of InnoDB. That means it is not as well tested in all possible edge cases, and may have many unreported bugs. One known bug is discussed here: if you are a MyRocks user, it’s important that you are aware of the possibility of lost data in the specific circumstances described below.

In writing this article, I want to broadcast a wider warning, as the problem I found is pretty serious and could lead to a very unpleasant situation.

The problem is not such an edge case after all: running out of disk space. The result, though, could be extremely bad. After printing some errors, the RocksDB engine keeps telling clients that subsequent writes are accepted, while they are, in fact, lost! In addition, depending on the workload and on the moment the disk ran out of space, in the worst case scenario we may completely lose access to the tables that use the RocksDB engine.

Let me show how bad the situation could be, using a basic sandbox and sysbench test example. I have a sandbox with MyRocks installed:

mysql [localhost] {msandbox} ((none)) > show engines\G
*************************** 1. row ***************************
Engine: ROCKSDB
Support: YES
Comment: RocksDB storage engine
Transactions: YES
XA: YES
Savepoints: YES
mysql [localhost] {msandbox} ((none)) > select @@version,@@version_comment;
+-----------+----------------------------------------------------+
| @@version | @@version_comment |
+-----------+----------------------------------------------------+
| 5.7.22-22 | Percona Server (GPL), Release 22, Revision f62d93c |
+-----------+----------------------------------------------------+
1 row in set (0.00 sec)

Once my test system was running low on free disk space, I ran this simple sysbench prepare command:

$ sysbench /usr/share/sysbench/oltp_insert.lua --mysql_storage_engine=rocksdb --table-size=1000000 --tables=4 --mysql-db=db2 --mysql-user=root --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5723.sock --threads=4 --time=200 --report-interval=1 --events=0 --db-driver=mysql prepare
sysbench 1.0.15 (using bundled LuaJIT 2.1.0-beta2)
Initializing worker threads...
Creating table 'sbtest2'...
Creating table 'sbtest3'...
Creating table 'sbtest1'...
Creating table 'sbtest4'...
Inserting 1000000 records into 'sbtest2'
Inserting 1000000 records into 'sbtest1'
Inserting 1000000 records into 'sbtest3'
Inserting 1000000 records into 'sbtest4'
Creating a secondary index on 'sbtest4'...
FATAL: mysql_drv_query() returned error 1105 ([./.rocksdb/db2.sbtest4_k_4_4_0.bulk_load.tmp] bulk load error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device) for query 'CREATE INDEX k_4 ON sbtest4(k)'
FATAL: `sysbench.cmdline.call_command' function failed: /usr/share/sysbench/oltp_common.lua:238: SQL error, errno = 1105, state = 'HY000': [./.rocksdb/db2.sbtest4_k_4_4_0.bulk_load.tmp] bulk load error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device
Creating a secondary index on 'sbtest2'...
Creating a secondary index on 'sbtest1'...
Creating a secondary index on 'sbtest3'...
FATAL: mysql_drv_query() returned error 1105 ([./.rocksdb/db2.sbtest2_k_2_5_0.bulk_load.tmp] bulk load error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device) for query 'CREATE INDEX k_2 ON sbtest2(k)'
FATAL: `sysbench.cmdline.call_command' function failed: /usr/share/sysbench/oltp_common.lua:238: SQL error, errno = 1105, state = 'HY000': [./.rocksdb/db2.sbtest2_k_2_5_0.bulk_load.tmp] bulk load error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device
FATAL: mysql_drv_query() returned error 1105 ([./.rocksdb/db2.sbtest3_k_3_6_0.bulk_load.tmp] bulk load error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device) for query 'CREATE INDEX k_3 ON sbtest3(k)'
FATAL: `sysbench.cmdline.call_command' function failed: /usr/share/sysbench/oltp_common.lua:238: SQL error, errno = 1105, state = 'HY000': [./.rocksdb/db2.sbtest3_k_3_6_0.bulk_load.tmp] bulk load error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device
FATAL: mysql_drv_query() returned error 1105 ([./.rocksdb/db2.sbtest1_k_1_7_0.bulk_load.tmp] bulk load error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device) for query 'CREATE INDEX k_1 ON sbtest1(k)'
FATAL: `sysbench.cmdline.call_command' function failed: /usr/share/sysbench/oltp_common.lua:238: SQL error, errno = 1105, state = 'HY000': [./.rocksdb/db2.sbtest1_k_1_7_0.bulk_load.tmp] bulk load error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device

This resulted in the following errors being printed to the error log:

2018-08-10T11:01:35.379611Z 5 [Note] RocksDB: Begin index creation (0,260)
2018-08-10T11:01:40.105596Z 3 [Note] RocksDB: Begin index creation (0,261)
2018-08-10T11:01:40.838902Z 4 [Note] RocksDB: Begin index creation (0,262)
2018-08-10T11:01:40.924093Z 6 [Note] RocksDB: Begin index creation (0,263)
2018-08-10T11:03:14.868958Z 0 [ERROR] RocksDB: Error detected in background, Status Code: 5, Status: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on device
2018-08-10T11:03:14.868986Z 0 [ERROR] RocksDB: BackgroundErrorReason: 0
2018-08-10T11:03:14.869003Z 0 [ERROR] LibRocksDB:[/mnt/workspace/percona-server-5.7-binaries-release-rocks/label_exp/min-stretch-x64/percona-server-5.7.22-22/storage/rocksdb/rocksdb/db/db_impl_compaction_flush.cc:1293] Waiting after background flush error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000101.sst: No space left on deviceAccumulated background error counts: 1
2018-08-10T11:03:15.430064Z 0 [ERROR] LibRocksDB:[/mnt/workspace/percona-server-5.7-binaries-release-rocks/label_exp/min-stretch-x64/percona-server-5.7.22-22/storage/rocksdb/rocksdb/db/db_impl_compaction_flush.cc:1373] Waiting after background compaction error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000102.sst: No space left on device, Accumulated background error counts: 2
2018-08-10T11:03:18.538866Z 9 [Note] RocksDB: Begin index creation (0,268)
2018-08-10T11:03:19.897797Z 9 [ERROR] Error finishing bulk load.
2018-08-10T11:03:23.306957Z 10 [Note] RocksDB: Begin index creation (0,269)
2018-08-10T11:03:23.509448Z 8 [Note] RocksDB: Begin index creation (0,270)
2018-08-10T11:03:23.539705Z 7 [Note] RocksDB: Begin index creation (0,271)
2018-08-10T11:03:24.536092Z 10 [ERROR] Error finishing bulk load.
2018-08-10T11:03:24.659233Z 7 [ERROR] Error finishing bulk load.
2018-08-10T11:03:24.736380Z 8 [ERROR] Error finishing bulk load.

OK, so I resized my data partition online (no MySQL service restart) to give it more free disk space. Here is what happened next, when I checked whether MySQL/MyRocks was working:

mysql [localhost] {msandbox} (db1) > select count(*) from sbtest1;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.33 sec)
mysql [localhost] {msandbox} (db1) > insert into sbtest1 set k=1;
Query OK, 1 row affected (0.01 sec)
mysql [localhost] {msandbox} (db1) > select count(*) from sbtest1;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.32 sec)
mysql [localhost] {msandbox} (db1) > delete from db1.sbtest2 where id<100;
Query OK, 99 rows affected (0.01 sec)
mysql [localhost] {msandbox} (db1) > select count(*) from db1.sbtest2;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
1 row in set (0.32 sec)

That’s the worst part – there’s no error returned to the client, and yet my new writes are basically gone!

The good thing is we can still read from the tables, so mysqldump or mydumper is the way to go now, BEFORE you attempt to restart the service:

mysql [localhost] {msandbox} (db1) > select id from db1.sbtest2 where id<10;
+----+
| id |
+----+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
+----+
9 rows in set (0.00 sec)
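
Since reads still work, a plain dump can rescue the data before any restart attempt; a minimal sketch using the sandbox credentials from above:

$ mysqldump --socket=/tmp/mysql_sandbox5723.sock -u root -pmsandbox db1 > db1_rescue.sql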

When I later try to restart the service, it crashes on shutdown. In some cases, the database is able to recover and offer access to the old RocksDB data (though the new data, written above, are still lost).

In other cases, though, your RocksDB won’t load after restart:

mysql [localhost] {msandbox} (db1) > show create table sbtest1\G
ERROR 1286 (42000): Unknown storage engine 'ROCKSDB'

and using this option:

rocksdb_wal_recovery_mode = 3

…helps to recover the engine to a usable state.
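
To apply it, place the option in the [mysqld] section of my.cnf before attempting the restart (a sketch):

[mysqld]
rocksdb_wal_recovery_mode = 3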

Ultimately, the worst case scenario is that you end up with the MyRocks engine dead and unable to recover, no matter what options you try. We can see this in the error log when I try to start the service:

2018-08-10T09:33:18.756868Z 0 [Note] RocksDB: 2 column families found
2018-08-10T09:33:18.756993Z 0 [Note] RocksDB: Column Families at start:
2018-08-10T09:33:18.757016Z 0 [Note] cf=default
2018-08-10T09:33:18.757022Z 0 [Note] write_buffer_size=67108864
2018-08-10T09:33:18.757027Z 0 [Note] target_file_size_base=67108864
2018-08-10T09:33:18.757043Z 0 [Note] cf=__system__
2018-08-10T09:33:18.757048Z 0 [Note] write_buffer_size=67108864
2018-08-10T09:33:18.757052Z 0 [Note] target_file_size_base=67108864
2018-08-10T09:33:18.811671Z 0 [ERROR] RocksDB: Could not get index information for Index Number (0,282), table db3.sbtest4
2018-08-10T09:33:18.811741Z 0 [ERROR] RocksDB: Failed to initialize DDL manager.
2018-08-10T09:33:18.811795Z 0 [ERROR] Plugin 'ROCKSDB' init function returned error.
2018-08-10T09:33:18.811812Z 0 [ERROR] Plugin 'ROCKSDB' registration as a STORAGE ENGINE failed.

Unfortunately, we were not able to find any workaround for this ultimate worst case situation, and so recovering data—including all other RocksDB tables not touched during the incident—may be very hard.

The problem, which is obviously a bug, is reported here: https://jira.percona.com/browse/PS-4706.

Fortunately, the engineers at Facebook—where RocksDB originated—are aware of the issue and have already released a simple patch. It makes the engine crash instead of silently losing writes. While this is not a perfect solution, it at least protects you from losing data. Related commit details can be found here: https://github.com/facebook/mysql-5.6/commit/4a7d3fb8f96c96c2be10d61a7796ccac7610a5d6.

At Percona, we accepted that fix, and the latest Percona Server for MySQL 5.7.23 already has it incorporated: https://github.com/percona/percona-server/commit/d0d5635573ebc4f157e36a989d14f7ee6f87c410

So, with the upgraded version, we will instead see this kind of error log event:

2018-09-19T08:12:14.496412Z 0 [ERROR] RocksDB: Error detected in background, Status Code: 5, Status: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000104.sst: No space left on device
2018-09-19T08:12:14.496435Z 0 [ERROR] RocksDB: BackgroundErrorReason: 1
2018-09-19T08:12:14.496451Z 0 [ERROR] LibRocksDB:[/mnt/workspace/percona-server-5.7-binaries-release-rocks-new/label_exp/min-stretch-x64/test/percona-server-5.7.23-23/storage/rocksdb/rocksdb/db/db_impl_compaction_flush.cc:1472] Waiting after background compaction error: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000104.sst: No space left on device, Accumulated background error counts: 1
2018-09-19T08:12:14.511910Z 12 [ERROR] RocksDB: failed to write to WAL, Status Code: 5, Status: IO error: No space left on deviceWhile appending to file: ./.rocksdb/000104.sst: No space left on device
2018-09-19T08:12:14.511935Z 12 [ERROR] MyRocks: aborting on WAL write error.
08:12:14 UTC - mysqld got signal 6 ;

Therefore, all MyRocks users are advised to upgrade ASAP. If that’s not possible, you should at least double-check your disk space monitoring and alerting.

Sep 17, 2018

Using the keyring_vault Plugin with Percona Server for MySQL 5.7

This is the first of a two-part series on using the keyring_vault plugin with Percona Server for MySQL 5.7. The second part will walk you through how to use Percona XtraBackup to back up this instance, restore it to another server, and set it up as a slave with the keyring_vault plugin.

What is the keyring_vault plugin?

keyring_vault is a plugin that allows the database to interface with a HashiCorp Vault server to store and secure encryption keys. The Vault server then acts as a centralized encryption key management solution, which is critical for security and for compliance with various security standards.

Configuring Vault

Create SSL certificates to be used by Vault. You can use the sample ssl.conf template below to generate the necessary files.

[root@vault1 ~]# cat /etc/sslkeys/ssl.conf
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = NC
L =  R
O = Percona
CN = *
[v3_req]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:TRUE
subjectAltName = @alt_names
[alt_names]
IP = 192.168.0.114

Then run the two commands below to generate the cert and key files and the certificate chain:

$ openssl req -config ssl.conf -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout vault.key -out vault.crt
$ cat vault.key vault.crt > vault.pem

Once the SSL certificates are created, start Vault with the sample configuration below. Note that you should follow the suggested best practices when deploying Vault in production; this example is just enough to get a simple working setup.

[root@vault1 ~]# cat /etc/vault.hcl
listener "tcp" {
address = "192.168.0.114:8200"
tls_cert_file="/etc/sslkeys/vault.crt"
tls_key_file="/etc/sslkeys/vault.key"
}
storage "file" {
path = "/var/lib/vault"
}

Assuming Vault started up fine and you are able to unseal Vault, the next step is to create the policy file. For more details on initializing and unsealing Vault please read the manual here.

[root@vault1 ~]# cat /etc/vault/policy/dc1.hcl
path "secret/*" {
capabilities = ["list"]
}
path "secret/dc1/*" {
capabilities = ["create", "read", "delete", "update", "list"]
}

Create a Vault policy named dc1-secrets using the dc1.hcl file like this:

[root@vault1 ~]# vault policy write dc1-secrets /etc/vault/policy/dc1.hcl -ca-cert=/etc/sslkeys/vault.pem
Success! Uploaded policy: dc1-secrets

Next, create a token associated with the newly created policy:

[root@vault1 ~]# vault token create -policy=dc1-secrets -ca-cert=/etc/sslkeys/vault.pem > dc1-token
[root@vault1 ~]# cat dc1-token
Key                  Value
---                  -----
token                be515093-b1a8-c799-b237-8e04ea90ad7a
token_accessor       4c1ba5c5-3fed-e9bb-d230-5bf1392e2d7e
token_duration       8760h
token_renewable      true
token_policies       ["dc1-secrets" "default"]
identity_policies    []
policies             ["dc1-secrets" "default"]

Setting up MySQL

The following instructions should work for Percona Server for MySQL 5.7.20-18 and later versions.

Configure my.cnf with the following variables:

early-plugin-load="keyring_vault=keyring_vault.so"
loose-keyring_vault_config="/var/lib/mysql-keyring/keyring_vault.conf"
encrypt_binlog=ON
innodb_encrypt_online_alter_logs=ON
innodb_encrypt_tables=ON
innodb_temp_tablespace_encrypt=ON
master_verify_checksum=ON
binlog_checksum=CRC32
log_bin=mysqld-bin
binlog_format=ROW
server-id=1
log-slave-updates

Create the keyring_vault.conf file in the path above with the following contents:

[root@mysql1 ~]# cat /var/lib/mysql-keyring/keyring_vault.conf
vault_url = https://192.168.0.114:8200
secret_mount_point = secret/dc1/master
token = be515093-b1a8-c799-b237-8e04ea90ad7a
vault_ca = /etc/vault_ca/vault.pem

Here we are using the vault.pem file generated by combining the vault.crt and vault.key files. Observe that our secret_mount_point is secret/dc1/master. We want to make sure that this mount point is unique across all servers; this is in fact advised in the manual here.

Ensure that the CA certificate is owned by the mysql user:

[root@mysql1 ~]# ls -la /etc/vault_ca/
total 24
drwxr-xr-x  2 mysql mysql   41 Jul 14 11:39 .
drwxr-xr-x 63 root  root  4096 Jul 14 13:17 ..
-rw-------  1 mysql mysql 1139 Jul 14 11:39 vault.pem

Initialize the MySQL data directory on the Master:

[root@mysql1 ~]# mysqld --initialize-insecure --datadir=/var/lib/mysql --user=mysql

For production systems, we do not recommend using the --initialize-insecure option; it is used here just to skip additional steps in this tutorial.

Finally, start the mysqld instance and then test the setup by creating an encrypted table.

[root@mysql1 ~]# systemctl status mysqld
● mysqld.service - MySQL Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2018-07-14 23:53:16 UTC; 2s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Process: 1401 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
Process: 1383 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
Main PID: 1403 (mysqld)
CGroup: /system.slice/mysqld.service
└─1403 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
Jul 14 23:53:16 mysql1 systemd[1]: Starting MySQL Server...
Jul 14 23:53:16 mysql1 systemd[1]: Started MySQL Server.

At this point, you should have a Percona Server for MySQL instance with tablespace encryption using Vault.
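
As a quick smoke test (a sketch – the schema and table names here are arbitrary), create an explicitly encrypted table and confirm that the option was recorded:

mysql> CREATE DATABASE enctest;
mysql> CREATE TABLE enctest.t1 (id INT PRIMARY KEY) ENCRYPTION='Y';
mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';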

Researching database security?

You might also enjoy this pre-recorded webinar on securing your database servers from external attacks, presented by my colleague Colin Charles.

Sep 14, 2018

Encryption of the InnoDB System Tablespace and Parallel Doublewrite Buffer

In my last post I compared the data at-rest encryption features available for MySQL and MariaDB. As noted at the time, some of the features available for Percona Server for MySQL were in development, and the latest version (5.7.23) sees two of them released as ALPHA quality.

Encrypting the InnoDB system tablespace

The first of the new features is InnoDB system tablespace encryption via innodb_sys_tablespace_encrypt, which provides encryption of the following data:

  • the change buffer, which caches changes to secondary index pages as a result of DML operations for pages that are not in the InnoDB buffer pool
  • the undo logs, if they have not been configured to be stored in separate undo tablespaces
  • data from any tables that exist in the main tablespace, which occurs when innodb_file_per_table is disabled

There are some related changes on the horizon that would allow this to be applied to an existing instance. However, for now it is only available for new instances, as it can only be applied during bootstrap. This means that using it with an existing cluster would require a logical restore of your data–I should restate that this is an ALPHA feature and not production-ready.
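
In other words, the variable has to be passed when the data directory is initialized. A sketch (the paths are placeholders, and a keyring plugin must be loaded early for any encryption feature to work – keyring_file is shown purely as an example):

$ mysqld --initialize --user=mysql --datadir=/var/lib/mysql \
    --early-plugin-load=keyring_file.so \
    --innodb_sys_tablespace_encrypt=ON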

There are some extra points to note about this new variable:

  • an instance with an encrypted tablespace cannot be downgraded to use a version prior to 5.7.23, due to the inability to read the tablespace
  • as noted, it is not currently possible to convert the tablespace between encrypted and unencrypted states, or vice versa
  • the key for the system tablespace can be manually rotated using ALTER INSTANCE ROTATE INNODB MASTER KEY as per any other tablespace

Encrypting the parallel doublewrite buffer

To complement the encryption of the system tablespace, it is also possible to encrypt the parallel doublewrite buffer using innodb_parallel_dblwr_encrypt, a feature unique to Percona Server for MySQL. This means that data for an encrypted tablespace is also written in encrypted form in the parallel doublewrite buffer; unencrypted tablespace data remains in plaintext. Unlike innodb_sys_tablespace_encrypt, you are able to set innodb_parallel_dblwr_encrypt dynamically on an existing instance.
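
For example, to switch it on at runtime and confirm the setting:

mysql> SET GLOBAL innodb_parallel_dblwr_encrypt = ON;
mysql> SELECT @@innodb_parallel_dblwr_encrypt;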

There are more encryption features planned–or already in development–for Percona Server for MySQL so watch this space!

Sep 13, 2018

Percona Toolkit 3.0.12 Is Now Available

Percona announces the release of Percona Toolkit 3.0.12 on September 13, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL®, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

Fixed bugs:

  • PT-1611: pt-archiver failed to output UTF-8 characters.
  • PT-1603: pt-table-sync incorrectly calculated chunk boundaries in case of unsorted ENUM fields in indexes.
  • PT-1574: pt-online-schema-change failed on tables with a nullable unique key and a row with NULL values.
  • PT-1572: ENUM fields usage in keys was improved, resulting in higher speed for expressions with sorted ENUM items.
  • PT-1422: pt-mysql-summary could hang when NULL values appear in the processlist Time column.

Documentation change:

  • PT-1321: The required MySQL privileges are now detailed in the pt-online-schema-change documentation.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Sep 12, 2018

Percona Server for MySQL 5.7.23-23 Is Now Available

Percona announces the release of Percona Server for MySQL 5.7.23-23 on September 12, 2018 (downloads are available here and from the Percona Software Repositories). This release merges changes of MySQL 5.7.23, including all the bug fixes in it. Percona Server for MySQL 5.7.23-23 is now the current GA release in the 5.7 series. All of Percona’s software is open-source and free.

New Features
  • The max_binlog_files variable is deprecated; the binlog_space_limit variable should be used instead. The behavior of binlog_space_limit is consistent with the relay-log-space-limit variable used for relay logs; both variables have the same semantics (see the configuration sketch after this list). For more information, see #275.
  • Starting with 5.7.23-23, it is possible to encrypt all data in the InnoDB system tablespace and in the parallel double write buffer. This feature is considered ALPHA quality. A new variable innodb_sys_tablespace_encrypt is introduced to encrypt the system tablespace. The encryption of the parallel double write buffer file is controlled by the variable innodb_parallel_dblwr_encrypt. Both variables are OFF by default. For more information, see #3822.
  • Changing rocksdb_update_cf_options returns any warnings and errors to the client instead of printing them to the server error log. For more information, see #4258.
  • rocksdb_number_stat_computers and rocksdb_rate_limit_delay_millis variables have been removed. For more information, see #4780.
  • A number of new variables were introduced for MyRocks:
    • rocksdb_rows_filtered shows the number of rows filtered out for TTL in MyRocks tables
    • rocksdb_bulk_load_allow_sk allows adding secondary keys using the bulk loading feature
    • rocksdb_error_on_suboptimal_collation toggles between a warning and an error when an index is created on a char field and the table has a sub-optimal collation
    • rocksdb_stats_recalc_rate specifies the number of indexes to recalculate per second
    • rocksdb_commit_time_batch_for_recovery toggles whether the commit time write batch is written into the database
    • rocksdb_write_policy specifies when two-phase commit data are actually written into the database
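
A minimal my.cnf sketch for the new binlog_space_limit variable (the 50G figure is an arbitrary example):

[mysqld]
# cap total binary log disk usage; semantics mirror relay-log-space-limit
binlog_space_limit = 50G
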
Bugs Fixed
  • The statement SELECT...ORDER BY produced inconsistent results with the euckr charset or euckr_bin collation. Bug fixed #4513 (upstream #91091).
  • InnoDB statistics could incorrectly report zeros in the slow query log. Bug fixed #3828.
  • With the FIPS mode enabled and performance_schema=off, the instance crashed when running the CREATE VIEW command. Bug fixed #3840.
  • The soft limit of the core file size was set incorrectly starting with PS 5.7.21-20. Bug fixed #4479.
  • The option innodb-optimize-keys could fail when a dumped table has two columns such that the name of one of them contains the other as a prefix and is defined with the AUTO_INCREMENT attribute. Bug fixed #4524.
  • When innodb_temp_tablespace_encrypt was set to ON the CREATE TABLE command could ignore the value of the ENCRYPTION option. Bug fixed #4565.
  • If FLUSH STATUS was run from a different session, a statement could be counted twice in GLOBAL STATUS. Bug fixed #4570 (upstream #91541).
  • In some cases, it was not possible to set the flush_caches variable on systems that use systemd. Bug fixed #3796.
  • A message in the MyRocks log file did not clearly inform whether fast CRC32 was supported. Bug fixed #3988.
  • mysqld could not be started on Ubuntu if the database recovery had taken longer than ten minutes. Bug fixed #4546 (upstream #91423).
  • The ALTER TABLE command was slow when the number of dirty pages was high. Bug fixed #3702.
  • Setting the global variable version_suffix to NULL could lead to a server crash. Bug fixed #4785.
Other Bugs Fixed
  • #4620 “Enable encryption of temporary tablespace from foreground thread”
  • #4727 “intrinsic temp table behaviour shouldn’t depend on innodb_encrypt_tables”
  • #4046 “Ship assert failure: ‘res == 0’ (bulk loader)”
  • #3851 “Percona Ver 5.6.39-83.1 Failing assertion: sym_node->table != NULL”
  • #4533 “audit_log MTR tests should refer to include files without parent directories”
  • #4619 “main.flush_read_lock fails with timeout in wait_condition.inc.”
  • #4561 “Read after free at Binlog_crypt_data::load_latest_binlog_key()”
  • #4587 “ROCKSDB_INCLUDE_RFR macro in wrong file”

Find the release notes for Percona Server for MySQL 5.7.23-23 in our online documentation. Report bugs in the Jira bug tracker.

Sep 11, 2018

Announcement: Experimental Build of Percona XtraBackup 8.0

Experimental Build of Percona XtraBackup 8.0 released

An experimental alpha version of Percona XtraBackup 8.0.1 is now available in the Percona experimental software repositories.

A few things to note about this release:

  • We removed the deprecated innobackupex in this release
  • Due to the new MySQL redo log and data dictionary formats, the Percona XtraBackup 8.0.x versions will only be compatible with MySQL 8.0.x and the upcoming Percona Server for MySQL 8.0.x
  • For experimental migrations from earlier database server versions, you will need to back up and restore using XtraBackup 2.4 and then run mysql_upgrade from MySQL 8.0.x (a rough outline follows)
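
A rough outline of that migration path (a sketch with placeholder paths, assuming Percona XtraBackup 2.4 on the source server):

$ xtrabackup --backup --target-dir=/backups/full
$ xtrabackup --prepare --target-dir=/backups/full
$ xtrabackup --copy-back --target-dir=/backups/full --datadir=/var/lib/mysql
# start mysqld 8.0.x on the restored datadir, then:
$ mysql_upgrade -u root -p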

PXB 8.0.1 alpha is available for the following platforms:

  • RHEL/CentOS 6.x
  • RHEL/CentOS 7.x
  • Ubuntu 14.04 Trusty*
  • Ubuntu 16.04 Xenial
  • Ubuntu 18.04 Bionic
  • Debian 8 Jessie*
  • Debian 9 Stretch

Information on how to configure the Percona repositories for apt and yum systems and access the Percona experimental software is here.

* We might drop these platforms before GA release.

Sep 10, 2018

Using ProxySQL to connect to IPv6-only databases over IPv4

It’s 2018. Maybe now is the time to start migrating your network to IPv6, and your database infrastructure is a great place to start. Unfortunately, many legacy applications don’t offer the option to connect to MySQL directly over IPv6 (sometimes even if passing a hostname). We can work around this by using ProxySQL’s IPv6 support, which was added in version 1.3. This allows us to proxy incoming IPv4 connections to IPv6-only database servers.

Note that by default ProxySQL only listens on IPv4. We don’t recommend changing that until this bug is resolved, as the bug causes ProxySQL to segfault frequently when listening on IPv6.

In this example, I’ll use centos7-pxc57-1 as my database server. It’s running Percona XtraDB Cluster (PXC) 5.7 on CentOS 7, and is only accessible over IPv6. This is one node of a three-node cluster, but I’ll treat this one node as a standalone server for this example. One node of a synchronous cluster can be thought of as equivalent to the entire cluster, and vice-versa. Using the PXC plugin for ProxySQL to split reads from writes is the subject of a future blog post.

The application server, centos7-app01, would be running the hypothetical legacy application.

Note: We use default passwords throughout this example. You should always change the default passwords.

We have changed the IPv6 address in these examples. Any resemblance to real IPv6 addresses, living or dead, is purely coincidental.

  • 2a01:5f8:261:748c::74 is the IPv6 address of the ProxySQL server
  • 2a01:5f8:261:748c::71 is the Percona XtraDB node

Step 1: Install ProxySQL for your distribution

Packages are available here but in this case I’m going to use the version provided by the Percona yum repository:

[...]
Installed:
proxysql.x86_64 0:1.4.9-1.1.el7
Complete!

Step 2: Configure ProxySQL to listen on IPv4 TCP port 3306 by editing /etc/proxysql.cnf and starting it

[root@centos7-app1 ~]# vim /etc/proxysql.cnf
[root@centos7-app1 ~]# grep interfaces /etc/proxysql.cnf
interfaces="127.0.0.1:3306"
[root@centos7-app1 ~]# systemctl start proxysql

Step 3: Configure ACLs on the destination database server to allow ProxySQL to connect over IPv6

mysql> GRANT SELECT on sys.* to 'monitor'@'2a01:5f8:261:748c::74' IDENTIFIED BY 'monitor';
Query OK, 0 rows affected, 1 warning (0.25 sec)
mysql> GRANT ALL ON legacyapp.* TO 'legacyappuser'@'2a01:5f8:261:748c::74' IDENTIFIED BY 'super_secure_password';
Query OK, 0 rows affected, 1 warning (0.25 sec)

Step 4: Add the IPv6 address of the destination server to ProxySQL and add users

We need to configure the IPv6 server as a mysql_server inside ProxySQL. We also need to add a user to ProxySQL as it will reuse these credentials when connecting to the backend server. We’ll do this by connecting to the admin interface of ProxySQL on port 6032:

[root@centos7-app1 ~]# mysql -h127.0.0.1 -P6032 -uadmin -padmin
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.
Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.30 (ProxySQL Admin Module)
Copyright (c) 2009-2018 Percona LLC and/or its affiliates
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (1,'2a01:5f8:261:748c::71',3306);
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO mysql_users(username, password, default_hostgroup) VALUES ('legacyappuser', 'super_secure_password', 1);
Query OK, 1 row affected (0.00 sec)
mysql> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
mysql> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.27 sec)
mysql> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.01 sec)
mysql> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.30 sec)
mysql> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
mysql> SAVE MYSQL VARIABLES TO DISK;
Query OK, 95 rows affected (0.12 sec)

Step 5: Configure your application to connect to ProxySQL over IPv4 on localhost4 (IPv4 localhost)

This is application specific and so not shown here, but I’d configure my application to use localhost4, as this is in /etc/hosts by default and points to 127.0.0.1 and not ::1.
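
For reference, the default CentOS 7 /etc/hosts contains an entry along these lines:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4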

Step 6: Verify

As I don’t have the application here, I’ll verify with mysql-client. Remember that ProxySQL is listening on 127.0.0.1 port 3306, so we connect via ProxySQL on IPv4 (the usage of 127.0.0.1 rather than a hostname is just to show this explicitly):

[root@centos7-app1 ~]# mysql -h127.0.0.1 -ulegacyappuser -psuper_secure_password
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql> SELECT host FROM information_schema.processlist WHERE ID=connection_id();
+-----------------------------+
| host                        |
+-----------------------------+
| 2a01:5f8:261:748c::74:57546 |
+-----------------------------+
1 row in set (0.00 sec)
mysql> CREATE TABLE legacyapp.legacy_test_table(id int);
Query OK, 0 rows affected (0.83 sec)

The query above shows the remote host (from MySQL’s point of view) for the current connection. As you can see, MySQL sees this connection established over IPv6. So to recap, we connected to MySQL on an IPv4 IP address (127.0.0.1) and were successfully proxied to a backend IPv6 server.

Sep 08, 2018

Percona Monitoring and Management (PMM) 1.14.1 Is Now Available

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

We’re releasing hotfix 1.14.1 to address three issues found post-release of 1.14.0:

  • PMM-2963: Upgrading to PMM 1.14.0 fails due to attempting to create already existing Dashboard
    • Our upgrade script incorrectly tried to create dashboards that already existed, generating the failure message:
      A folder or dashboard in the general folder with the same name already exists
  • PMM-2958: Grafana did not update to 5.1 when upgrading from versions older than 1.11
    • We identified a niche case where PMM installations that were upgraded from < 1.11 would fail to upgrade Grafana to the correct release, 5.1 (users were left on Grafana 5.0)

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Sep 05, 2018

Percona Monitoring and Management (PMM) 1.14.0 Is Now Available

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

We’ve included a plethora of visual improvements in this release, including:

  • PostgreSQL Metrics Collection – Visualize PostgreSQL performance!
  • Identify New Queries in Query Analytics
  • New Dashboard: Compare System Parameters
  • New Dashboard: PERFORMANCE_SCHEMA Wait Events Analysis
  • Dashboard Updates – Advanced Data Exploration, MyRocks, TokuDB, InnoDB Metrics
  • Disable SSL between Prometheus and Exporters
  • Dashboards grouped by Folder – We’ve organized the Dashboard drop-down to present a cleaner interface

We addressed 16 new features and improvements, and fixed 20 bugs.

PostgreSQL Metrics Collection

The PMM team is very proud to bring you native support for PostgreSQL! We’ve shipped a new dashboard called PostgreSQL Overview, and we now provide the ability to add PostgreSQL instances as native, first-class citizens in PMM. This means you can add PostgreSQL + Linux monitoring capabilities through the standard pmm-admin add postgresql syntax; see our documentation links for more details!
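
For a locally running PostgreSQL instance this can be as simple as the following (a sketch; authentication and connection flags depend on your environment and PMM version):

$ pmm-admin add postgresql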

Identify New Queries in Query Analytics

A long-awaited feature is the ability to visually identify new queries that have appeared in Query Analytics – those queries whose first seen time is within the selected time range. New queries are highlighted in a soft blue band for quick identification, and we’ve provided a button called First Seen which you can toggle to display only those newly seen queries. A common use case for this feature is during code releases / deployments, where you want to review which new queries have been deployed and review their performance characteristics.

New Dashboard: Compare System Parameters

We’ve introduced a new dashboard to let you compare System Parameters across multiple servers so at a glance you can understand provisioning or configuration differences. This might be of help when comparing a pool of identical slaves or other logical groups of instances.

New Dashboard: PERFORMANCE_SCHEMA Wait Events Analysis

We’ve added a new dashboard that lets you drill down into great detail on one or several PERFORMANCE_SCHEMA wait event categories in order to visualize them over time.

Dashboards grouped by Folder

At long last we’ve addressed the sprawl of the long list of 30+ Dashboards, and grouped them into categories which match the pre-existing right-side navigation system. This should leave you with a more organized, less cluttered list of Dashboards.

Dashboard Updates – Advanced Data Exploration, MyRocks, TokuDB, InnoDB Metrics

We’ve improved four dashboards with minor but helpful improvements:

  • Advanced Data Exploration dashboard with the addition of a graph element plotting the Metric Rates, which will help you understand the scraping efficiency of this metric series, or whether scrapes have failed / are failing.
  • InnoDB Metrics to present the graph elements in two columns – previously we’d inconsistently use three columns or two columns, making it hard to visualize trends across graphs.
  • MyRocks formulas were improved to be more precise
  • TokuDB has many new graphs to expand our coverage of this storage engine

Disable SSL between PMM Server and Exporters

Lastly, we’ve delivered on a feature request from a Percona Customer to optionally disable SSL between PMM Server and Exporters, with the advantage that if you do not need encrypted traffic for your metric series, you can reduce the CPU overhead on PMM Server. We’d love to hear your feedback on this feature!

pmm-admin add mysql --disable-ssl ...

New Features & Improvements

  • PMM-1362: Update descriptions on MySQL InnoDB Metrics (Advanced) Dashboard – thanks to Yves Trudeau
  • PMM-2304: New Dashboard: Compare System Parameters
  • PMM-2331: Advanced Data Exploration: add graph for showing exporter scrapers over time intervals
  • PMM-2356: Grouping dashboards in folders with Grafana5
  • PMM-2472: Identify new queries in QAN
  • PMM-2486: Allow the disabling of SSL by means of an option – thanks to Dongchan Sung
  • PMM-2597: Improve MyRocks dashboard – thanks to Przemek Malkowski for the valuable ideas
  • PMM-2704: PostgreSQL Metrics Collection
  • PMM-2772: Display InnoDB Metrics dashboard using consistent two column view
  • PMM-2775: Display PERFORMANCE_SCHEMA Wait Events Analysis
  • PMM-2769: Display TokuDB Dashboard Improvements
  • PMM-2797: MySQL Performance Schema – Filter HOSTS
  • PMM-2798: Filter hosts on NUMA dashboard
  • PMM-2833: Added granularity interval for scraping AWS API – thanks to Aleksandr Stepanov
  • PMM-2846: Increase MySQL Max Connections in PMM Server

Fixed Bugs

  • PMM-946: QAN sparklines drop to zero when data is not available
  • PMM-1987: pt-archiver rule for agent_log is not correct – thanks to Yves Trudeau for providing a fix
  • PMM-2013: Styling of QAN allows overlapping content
  • PMM-2028: nginx shows “414 Request-URI Too Large” for 150 hosts – thanks to Nickolay Ihalainen for the bug report and fix
  • PMM-2166: Add RDS instance page refresh will head to “Page Not Found” error
  • PMM-2457: Improve External Exporter help documentation for duration interval
  • PMM-2459: Cross-Graph Crosshair not enabled on the PXC/Galera Cluster
  • PMM-2477: Frequent Access Denied prompts while using AWS Marketplace image
  • PMM-2566: CPU busy graph shows incorrect values
  • PMM-2763: Unknown version is available on Update widget
  • PMM-2784: What’s new link on Update widget has wrong URL
  • PMM-2793: Network Overview needs to be in OS menu, not insights
  • PMM-2796: Overview NUMA Metrics dashboard should be renamed to NUMA Overview
  • PMM-2801: Prometheus Exporters Overview – CPU metrics are strange
  • PMM-2804: Prometheus Graph is empty with PMM 1.13
  • PMM-2811: SQL to get Hosts in QAN – thanks to Forums member Fan
  • PMM-2821: Clean local storage if status is “You are up to date” and use animation for refresh button
  • PMM-2828: Weird Latency Graphs
  • PMM-2841: Change memory defaults for Prometheus 1.8 and use additional environment variable
  • PMM-2856: RDS/Aurora disk related graphs are empty
  • PMM-2885: System Overview dashboard has incorrect values

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.
