Oct
15
2021
--

Comparing Graviton (ARM) Performance to Intel and AMD for MySQL

Recently, AWS presented its own CPU on ARM architecture for server solutions.

It was Graviton. As a result, AWS updated several of its EC2 instance families with a new “g” suffix (e.g. m6g, r6g, c6g). In its reviews and presentations, AWS showed impressive results, claiming Graviton is up to 20 percent faster in some benchmarks. On the other hand, some reviewers said that Graviton does not show any significant gains and, in some cases, performs worse than Intel.

We decided to investigate this and do our own research into Graviton performance, comparing it directly with Intel and AMD CPUs for MySQL.

Disclaimer

  1. The test is designed to be CPU-bound only, so we use a read-only workload and make sure there is no I/O activity during the test.
  2. Tests were run on m5.* (Intel), m5a.* (AMD), and m6g.* (Graviton) EC2 instances in the us-east-1 region. (See the list of EC2 instances in the appendix.)
  3. Monitoring was done with Percona Monitoring and Management (PMM).
  4. OS: Ubuntu 20.04 LTS.
  5. The load tool (sysbench) and the target DB (MySQL) were installed on the same EC2 instance.
  6. MySQL 8.0.26-0, installed from official packages.
  7. Load tool: sysbench 1.0.18.
  8. innodb_buffer_pool_size = 80% of available RAM.
  9. Test duration is five minutes for each thread count, followed by a 90-second warm-down before the next iteration.
  10. Tests were run three times (to smooth outliers and get more reproducible results), and the results were averaged for the graphs.
  11. We call a scenario “high concurrency” when the number of threads is bigger than the number of vCPUs, and “low concurrency” when the number of threads is less than or equal to the number of vCPUs on the EC2 instance.
  12. Scripts to reproduce the results are on our GitHub.

Test Case

Prerequisite:

1. Create a database with 10 tables of 10,000,000 rows each

sysbench oltp_read_only --threads=10 --mysql-user=sbtest --mysql-password=sbtest --table-size=10000000 --tables=10 --db-driver=mysql --mysql-db=sbtest prepare

2. Load all the data into the buffer pool

sysbench oltp_read_only --time=300 --threads=10 --table-size=1000000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run

Test:

Run the same scenario in a loop with different concurrency levels THREAD (1, 2, 4, 8, 16, 32, 64, 128) on each EC2 instance (a sketch of such a loop is shown after the command).

sysbench oltp_read_only --time=300 --threads=${THREAD} --table-size=100000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run
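For reference, a minimal bash sketch of such a loop (not the authors’ exact script; the log file names here are made up for illustration):

#!/bin/bash
# Run the same read-only scenario at each concurrency level and save the output.
for THREAD in 1 2 4 8 16 32 64 128; do
    sysbench oltp_read_only --time=300 --threads=${THREAD} --table-size=100000 \
        --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql \
        --mysql-db=sbtest run > "oltp_ro_${THREAD}_threads.log"
    sleep 90   # warm-down pause between iterations (see disclaimer item 9)
done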

Results:

The results review was split into three parts:

  1. for “small” EC2 with 2, 4, and 8 vCPU
  2. for “medium” EC2 with 16 and 32 vCPU
  3. for  “large” EC2 with 48 and 64 vCPU

The “small”, “medium”, and “large” labels are just synthetic names to make the review easier; they depend on the number of vCPUs per EC2 instance.
There are four graphs for each test:

  1. Throughput (queries per second) that the EC2 instance could deliver for each scenario (number of threads)
  2. Latency (95th percentile) that the EC2 instance could deliver for each scenario (number of threads)
  3. Relative (percentage) comparison of Graviton and Intel
  4. Absolute (numbers) comparison of Graviton and Intel

Validation that the entire load goes to the CPU, not to disk I/O, was also done using PMM (Percona Monitoring and Management).

pic 0.1. OS monitoring during all test stages


From pic 0.1, we can see that there was no disk I/O activity during the tests, only CPU activity. The main disk activity happened during the DB creation stage.

Result for EC2 with 2, 4, and 8 vCPU

plot 1.1. Throughput (queries per second) for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.2.  Latencies (95 percentile) during the test for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.3.  Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2, 4, and 8  vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.4.  Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. AMD has the highest latencies in all scenarios and on all EC2 instances. We won’t repeat this in the following overviews, and it is also why AMD is excluded from the percentage and absolute comparisons with the other CPUs (plots 1.3, 1.4, etc.).
  2. On instances with two and four vCPUs, Intel shows a small advantage of less than 10 percent in all scenarios.
  3. However, on the instance with 8 vCPUs, Intel shows an advantage only in scenarios where the number of threads is less than or equal to the number of vCPUs.
  4. On the EC2 instance with eight vCPUs, Graviton starts to show an advantage. It shows good results in scenarios where the number of threads exceeds the number of vCPUs, growing up to 15 percent in the high-concurrency scenarios with 64 and 128 threads, which are 8 and 16 times more threads than there are vCPUs available.
  5. So Graviton starts showing an advantage on the EC2 instance with eight vCPUs in scenarios where the number of threads exceeds the number of vCPUs. This pattern repeats in all further scenarios: the higher the load relative to the number of vCPUs, the better Graviton does.

 

Result for EC2 with 16 and 32 vCPU

plot 2.1.  Throughput (queries per second)  for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.2.  Latencies (95 percentile) during the test for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.3.  Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.4.  Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. With the same load on EC2 instances with 16 and 32 vCPUs, Graviton continues to have an advantage when the number of threads is greater than the number of available vCPUs on the instance.
  2. Graviton shows an advantage of up to 10 percent in high-concurrency scenarios, while Intel leads by up to 20 percent in low-concurrency scenarios.
  3. In high-concurrency scenarios, Graviton can show an impressive difference of up to 30,000 more (read) transactions per second.

Result for EC2 with 48 and 64 vCPU

plot 3.1.  Throughput (queries per second)  for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.2.  Latencies (95 percentile) during the test for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.3.  Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.4.  Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. Intel shows a significant advantage in most scenarios where the number of threads is less than or equal to the number of vCPUs. Intel seems really good at this kind of task: when it has some spare vCPUs, it does better, and this advantage can reach up to 35 percent.
  2. However, Graviton shows outstanding results when the number of threads is larger than the number of vCPUs, with an advantage of 5 to 14 percent over Intel.
  3. In absolute numbers, the Graviton advantage can reach up to 70,000 more transactions per second than Intel in high-concurrency scenarios.

Total Result Overview

 

plot 4.2.  Latencies (95 percentile) during the test for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 4.3.  Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 4.4.  Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

Conclusions

  1. ARM CPUs show better results on EC2 instances with more vCPUs and under higher load, especially in high-concurrency scenarios.
  2. On small EC2 instances under a small load, ARM CPUs show less impressive performance, so we cannot see their benefit compared with Intel-based EC2 instances.
  3. Intel is still the leader in low-concurrency scenarios, and it clearly wins on EC2 instances with a small number of vCPUs.
  4. AMD did not show competitive results in any of the cases.

Final Thoughts

  1. AMD — we have a lot of questions about the EC2 instances on AMD, so it would be a good idea to check what was going on on those instances during the test and verify the general CPU performance there.
  2. We found that under some specific conditions, Intel and Graviton can compete with each other. The other side of the coin is economics: what is cheaper to use in each situation? The next article will be about that.
  3. It would be a good idea to try EC2 instances with Graviton for a real high-concurrency database.
  4. It would be worth running additional scenarios with 256 and 512 threads to check the hypothesis that Graviton works better when the number of threads exceeds the number of vCPUs.

APPENDIX:

List of EC2 instances used in the research:

CPU type | EC2 instance | Price per hour (USD) | vCPU | RAM
Graviton | m6g.large    | 0.077                | 2    | 8 GB
Graviton | m6g.xlarge   | 0.154                | 4    | 16 GB
Graviton | m6g.2xlarge  | 0.308                | 8    | 32 GB
Graviton | m6g.4xlarge  | 0.616                | 16   | 64 GB
Graviton | m6g.8xlarge  | 1.232                | 32   | 128 GB
Graviton | m6g.12xlarge | 1.848                | 48   | 192 GB
Graviton | m6g.16xlarge | 2.464                | 64   | 256 GB
Intel    | m5.large     | 0.096                | 2    | 8 GB
Intel    | m5.xlarge    | 0.192                | 4    | 16 GB
Intel    | m5.2xlarge   | 0.384                | 8    | 32 GB
Intel    | m5.4xlarge   | 0.768                | 16   | 64 GB
Intel    | m5.8xlarge   | 1.536                | 32   | 128 GB
Intel    | m5.12xlarge  | 2.304                | 48   | 192 GB
Intel    | m5.16xlarge  | 3.072                | 64   | 256 GB
AMD      | m5a.large    | 0.086                | 2    | 8 GB
AMD      | m5a.xlarge   | 0.172                | 4    | 16 GB
AMD      | m5a.2xlarge  | 0.344                | 8    | 32 GB
AMD      | m5a.4xlarge  | 0.688                | 16   | 64 GB
AMD      | m5a.8xlarge  | 1.376                | 32   | 128 GB
AMD      | m5a.12xlarge | 2.064                | 48   | 192 GB
AMD      | m5a.16xlarge | 2.752                | 64   | 256 GB

 

my.cnf

[mysqld]
ssl=0
performance_schema=OFF
skip_log_bin
server_id = 7

# general
table_open_cache = 200000
table_open_cache_instances=64
back_log=3500
max_connections=4000
join_buffer_size=256K
sort_buffer_size=256K

# files
innodb_file_per_table
innodb_log_file_size=2G
innodb_log_files_in_group=2
innodb_open_files=4000

# buffers
innodb_buffer_pool_size=${80%_OF_RAM}
innodb_buffer_pool_instances=8
innodb_page_cleaners=8
innodb_log_buffer_size=64M

default_storage_engine=InnoDB
innodb_flush_log_at_trx_commit  = 1
innodb_doublewrite= 1
innodb_flush_method= O_DIRECT
innodb_file_per_table= 1
innodb_io_capacity=2000
innodb_io_capacity_max=4000
innodb_flush_neighbors=0
max_prepared_stmt_count=1000000 
bind_address = 0.0.0.0
[client]
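The innodb_buffer_pool_size line above uses a placeholder; a rough shell sketch (not from the original post) for computing roughly 80% of the instance’s RAM in bytes would be:

awk '/MemTotal/ {printf "%d\n", $2 * 1024 * 0.8}' /proc/meminfo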

 

Oct
14
2021
--

Custom Percona Monitoring and Management Metrics in MySQL and PostgreSQL

A few weeks ago we did a live stream talking about Percona Monitoring and Management (PMM) and showcased some of the fun things we were doing at the OSS Summit.  During the live stream, we tried to enable some custom queries to track the number of comments being added to our movie database example.  We ran into a bit of a problem live and did not get it to work. As a result, I wanted to follow up and show you all how to add your own custom metrics to PMM and show you some gotchas to avoid when building them.

Custom metrics are defined in a file deployed on each server you are monitoring (not on the PMM server itself).  You can add custom metrics by navigating over to one of the following:

  • For MySQL:  /usr/local/percona/pmm2/collectors/custom-queries/mysql
  • For PostgreSQL:  /usr/local/percona/pmm2/collectors/custom-queries/postgresql
  • For MongoDB:  This feature is not yet available – stay tuned!

You will notice the following subdirectories under each of these directories:

  • high-resolution/  – every 5 seconds
  • medium-resolution/ – every 10 seconds
  • low-resolution/ – every 60 seconds

Note you can change the frequency of the default metric collections up or down by going to the settings and changing them there.  It would be ideal if in the future we added a resolution config in the YML file directly.  But for now, it is a universal setting:

Percona Monitoring and Management metric collections

In each directory you will find an example .yml file with a format like the following:

mysql_oss_demo: 
  query: "select count(1) as comment_cnt from movie_json_test.movies_normalized_user_comments;"
  metrics: 
    - comment_cnt: 
        usage: "GAUGE" 
        description: "count of the number of comments coming in"

Our error during the live stream was that we forgot to include the database in our query (i.e. database_name.table_name), but there was a bug that prevented us from seeing the error in the log files.  There is no setting for the database in the YML, so take note.

This will create a metric named mysql_oss_demo_comment_cnt in whatever resolution you specify.  Each YML file executes separately with its own connection.  This is important to understand, because if you deploy lots of custom queries you will see a steady number of extra connections (something to consider if you are doing custom collections).  Alternatively, you can add several queries and metrics to the same file, but they are executed sequentially.  If, however, the entire YML file cannot complete in less time than the defined resolution (i.e. finish within five seconds for high resolution), then the data will not be stored, but the query will continue to run.  This can lead to a query pile-up if you are not careful.  For instance, the above query generally takes 1-2 seconds to return the count.  I placed this in the medium bucket.  As I added load to the system, the query time backed up.

You can see the slowdown.  You need to be careful here and choose the appropriate resolution.  Moving this over to the low-resolution bucket solved the issue for me.
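The move itself is just relocating the YML file between resolution directories; a sketch (the file name is only an example, use whatever you named yours):

mv /usr/local/percona/pmm2/collectors/custom-queries/mysql/medium-resolution/oss-demo.yml \
   /usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution/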

That said, query response time is dynamic based on the conditions of your server.  Because these queries will run to completion (and in parallel if the run time is longer than the resolution time), you should consider limiting the query time in MySQL and PostgreSQL to prevent too many queries from piling up.

In MySQL you can use:

mysql>  select /*+ MAX_EXECUTION_TIME(4) */  count(1) as comment_cnt from movie_json_test.movies_normalized_user_comments ;
ERROR 1317 (70100): Query execution was interrupted

And on PostgreSQL you can use:

SET statement_timeout = '4s'; 
select count(1) as comment_cnt from movies_normalized_user_comments ;
ERROR:  canceling statement due to statement timeout

By forcing a timeout you can protect yourself.  That said, these are “errors” so you may see errors in the error log.

You can check the system logs (syslog or messages) for errors with your custom queries (note at this time as of PMM 2.0.21, errors were not making it into these logs because of a potential bug).  If the data is being collected and everything is set up correctly, head over to the default Grafana explorer or the “Advanced Data Exploration” dashboard in PMM.  Look for your metric and you should be able to see the data graphed out:

Advanced Data Exploration PMM

In the above screenshot, you will notice some pretty big gaps in the data (in green).  These gaps were caused by our query taking longer than the resolution bucket.  You can see when we moved to 60-second resolution (in orange), the graphs filled in.

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Oct
12
2021
--

ProxySQL 2.3.0: Enhanced Support for MySQL Group Replication

ProxySQL 2.3.0 was recently released, and when I was reading the release notes, I was really impressed with the Group Replication enhancements and features. I thought of experimenting with those things and was interested in writing a blog about them. Here, I have focused on the following two topics:

  • When the replication lag threshold is reached, ProxySQL moves the server to the SHUNNED state instead of moving it to the OFFLINE hostgroup. Shunning a server is performed gracefully and does not immediately drop all backend connections.
  • Servers can be put into maintenance through ProxySQL using “OFFLINE_SOFT”.

Test Environment

To test this, I have configured a three-node GR cluster (gr1, gr2, gr3) in my local environment, running as a single-primary cluster (1 writer, 2 readers).

mysql> select member_host,member_state,member_role,member_version from performance_schema.replication_group_members;
+-------------+--------------+-------------+----------------+
| member_host | member_state | member_role | member_version |
+-------------+--------------+-------------+----------------+
| gr1         | ONLINE       | PRIMARY     | 8.0.26         |
| gr2         | ONLINE       | SECONDARY   | 8.0.26         |
| gr3         | ONLINE       | SECONDARY   | 8.0.26         |
+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Currently, there is no transaction delay in the GR cluster.

mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.00 sec)

To compare the results with older ProxySQL versions, I have configured two ProxySQL instances: one on the latest version (2.3.0) and one on an older version (2.2.2).

ProxySQL 1: (2.3.0)

mysql>  show variables like '%admin-version%';
+---------------+------------------+
| Variable_name | Value            |
+---------------+------------------+
| admin-version | 2.3.1-8-g794b621 |
+---------------+------------------+
1 row in set (0.00 sec)

ProxySQL 2: ( < 2.3.0 )

mysql> show variables like '%admin-version%';
+---------------+-------------------+
| Variable_name | Value             |
+---------------+-------------------+
| admin-version | 2.2.2-11-g0e7630d |
+---------------+-------------------+
1 row in set (0.01 sec)

GR nodes are configured on both the ProxySQLs:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 3            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

Host group settings are:

mysql> select writer_hostgroup,reader_hostgroup,offline_hostgroup from runtime_mysql_group_replication_hostgroups\G
*************************** 1. row ***************************
 writer_hostgroup: 2
 reader_hostgroup: 3
offline_hostgroup: 4
1 row in set (0.00 sec)

Scenario 1: When the replication lag threshold is reached, ProxySQL will move the server to the SHUNNED state instead of moving it to the OFFLINE hostgroup.

Here, the replication lag threshold in ProxySQL is configured to “20”.

mysql> select @@mysql-monitor_groupreplication_max_transactions_behind_count;
+----------------------------------------------------------------+
| @@mysql-monitor_groupreplication_max_transactions_behind_count |
+----------------------------------------------------------------+
| 20                                                             |
+----------------------------------------------------------------+
1 row in set (0.00 sec)
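If you want to adjust this threshold yourself, a minimal sketch from the ProxySQL admin interface (assuming the standard admin port 6032; this step is not shown in the original setup) would be:

UPDATE global_variables SET variable_value='20' WHERE variable_name='mysql-monitor_groupreplication_max_transactions_behind_count';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;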

As per my current setting,

  • On ProxySQL 2.3.0, if transactions_behind reaches 20, the node is put into the “SHUNNED” state.
  • On “< ProxySQL 2.3.0”, if transactions_behind reaches 20, the node is moved to the offline hostgroup.

To manually create replication lag, I am going to start a read/write load on the GR cluster using sysbench.

sysbench oltp_read_write --tables=10 --table_size=1000000  --mysql-host=172.28.0.96 --mysql-port=6033 --mysql-user=monitor --mysql-password="Monitor@321" --mysql-db=jc --time=30000 --threads=50 --report-interval=1 run

As expected, now I can see the transaction delay in the cluster.

mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                 457 |                    0 |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.00 sec)

Let’s see how the different ProxySQL versions are behaving now.

At ProxySQL 2.3.0:

mysql> select hostgroup_id,hostname, status from runtime_mysql_servers;
+--------------+----------+---------+
| hostgroup_id | hostname | status  |
+--------------+----------+---------+
| 2            | gr1      | ONLINE  |
| 3            | gr2      | SHUNNED |
| 3            | gr3      | SHUNNED |
+--------------+----------+---------+
3 rows in set (0.00 sec)

As expected, both reader nodes (gr2, gr3) are moved to the “SHUNNED” state, and the servers are still present in the reader hostgroup.

At “< ProxySQL 2.3.0”:

mysql> select hostgroup_id,hostname, status from runtime_mysql_servers;
+--------------+----------+---------+
| hostgroup_id | hostname | status  |
+--------------+----------+---------+
| 2            | gr1      | ONLINE  |
| 4            | gr2      | ONLINE  |
| 4            | gr3      | ONLINE  |
+--------------+----------+---------+
3 rows in set (0.00 sec)

The server status is still ONLINE, but the hostgroup_id has changed from 3 to 4, which is the offline hostgroup.

So, comparing both results, it seems the latest release (2.3.0) has the correct implementation. Shunning the node just temporarily takes the server out of use until the replication lag issue is fixed. Shunning is performed gracefully and does not immediately drop all backend connections; you can see the servers are still present in the reader hostgroup. With the previous implementation, the servers were moved to the offline hostgroup immediately.

Also, from ProxySQL 2.3.0 onward, which nodes get shunned during lag depends on the parameter “mysql-monitor_groupreplication_max_transactions_behind_for_read_only”. The parameter has three possible values (0, 1, 2):

  • “0” means only servers with “read_only=0” are placed as SHUNNED.
  • “1” means only servers with “read_only=1” are placed as SHUNNED. This is the default.
  • “2” means both servers with “read_only=1” and “read_only=0” are placed as SHUNNED.

Scenario 2: The servers can be taken to maintenance through ProxySQL using “OFFLINE_SOFT”.

Personally, I would say this is one of the nicer implementations. From ProxySQL 2.3.0, ProxySQL itself can keep servers in maintenance mode using “OFFLINE_SOFT”. In older versions you could also set it, but it was not stable. Let me explain the behavior of the latest and the older versions.

Both ProxySQLs have the following configuration:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 3            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

— Now, I am going to put the server “gr3” into maintenance mode on both ProxySQL instances.

After putting it into maintenance mode, both ProxySQL instances show the following output.

mysql> update mysql_servers set status='offline_soft' where hostname='gr3'; load mysql servers to runtime; save mysql servers to disk;
Query OK, 1 row affected (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.04 sec)

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------------+
| hostgroup_id | hostname | status       |
+--------------+----------+--------------+
| 2            | gr1      | ONLINE       |
| 3            | gr3      | OFFLINE_SOFT |
| 3            | gr2      | ONLINE       |
+--------------+----------+--------------+
3 rows in set (0.00 sec)

— Now, I am going to stop the group replication service on the “gr3”.

mysql> stop group_replication;
Query OK, 0 rows affected (4.59 sec)

Let’s check the ProxySQL status now.

At ProxySQL 2.3.0:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------------+
| hostgroup_id | hostname | status       |
+--------------+----------+--------------+
| 2            | gr1      | ONLINE       |
| 3            | gr3      | OFFLINE_SOFT |
| 3            | gr2      | ONLINE       |
+--------------+----------+--------------+
3 rows in set (0.00 sec)

The latest release still maintains the same status. “gr3” is still in maintenance mode.

At “< ProxySQL 2.3.0”:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 4            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

The older ProxySQL release removed the “OFFLINE_SOFT” flag from “gr3” and moved it to the offline hostgroup (hg 4).

— Now, I am again going to start the group_replication service on gr3.

mysql> start group_replication;
Query OK, 0 rows affected (2.58 sec)

At ProxySQL 2.3.0:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------------+
| hostgroup_id | hostname | status       |
+--------------+----------+--------------+
| 2            | gr1      | ONLINE       |
| 3            | gr3      | OFFLINE_SOFT |
| 3            | gr2      | ONLINE       |
+--------------+----------+--------------+
3 rows in set (0.00 sec)

The latest release still maintains the same state.

At “< ProxySQL 2.3.0”:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 3            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

On the older release, the server “gr3” came back ONLINE automatically. This is not the expected behavior because we manually put that node into maintenance mode.

As you can see from the comparison of the latest and older releases, the latest release has the right implementation. To take the node out of maintenance, we have to manually update the status to “ONLINE”, as shown below.

mysql> update mysql_servers set status='online' where hostname='gr3'; load mysql servers to runtime; save mysql servers to disk;
Query OK, 1 row affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.04 sec)

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 3            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

I believe these two new implementations are very helpful to those running a GR + ProxySQL setup. Apart from GR, the recent major releases have other important features as well. I will try to write a blog post about them in the future, so be on the lookout for that.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Oct
08
2021
--

MySQL 8: Random Password Generator

As part of my ongoing focus on MySQL 8 user and password management, I’ve covered how using the new dual passwords feature can reduce the overall DBA workload and streamline the management process. I’ve also covered how the new password failure tracking features can enable the locking of an account with too many failed password attempts (see MySQL 8: Account Locking).

There are other new and useful features that have been added to the user management capabilities in MySQL 8, however, and an often overlooked change is the random password generator. First introduced in MySQL 8.0.18, this feature lets the CREATE USER, ALTER USER, and SET PASSWORD statements generate random passwords for user accounts as an alternative to explicit, administrator-specified passwords.

Usage of MySQL 8 Random Password Generator

By default, all MySQL-generated random user/account passwords have a length of 20 characters. This can be changed, however, using the ‘generated_random_password_length’ system variable. With a valid range of 5 to 255, this dynamic variable can be set at the global or session level and determines the length of the randomly generated password.

mysql> SHOW variables LIKE 'generated_random_password_length';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| generated_random_password_length | 20    |
+----------------------------------+-------+
1 row in set (0.01 sec)
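For example, adjusting the length is a simple SET (the values below are arbitrary):

mysql> SET GLOBAL generated_random_password_length = 32;
Query OK, 0 rows affected (0.00 sec)

mysql> SET SESSION generated_random_password_length = 12;
Query OK, 0 rows affected (0.00 sec)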

Once a random password has been generated for a given user account, the statement stores the password in the ‘mysql.user’ system table, hashed appropriately for the authentication plugin. The cleartext ‘generated password’ is returned in the result set along with the ‘user’ and ‘host’ so that the information is available to the user or application. See the examples below:

mysql> CREATE USER 'percona'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
+---------+-----------+----------------------+
| user    | host      | generated password   |
+---------+-----------+----------------------+
| percona | localhost | k%RJ51/kA>,B(74;DBq2 |
+---------+-----------+----------------------+
1 row in set (0.02 sec)

mysql> ALTER USER 'percona'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
+---------+-----------+----------------------+
| user    | host      | generated password   |
+---------+-----------+----------------------+
| percona | localhost | eX!EOssQ,(Hn4dOdw6Om |
+---------+-----------+----------------------+
1 row in set (0.01 sec)

mysql> SET PASSWORD FOR 'percona'@'localhost' TO RANDOM;
+---------+-----------+----------------------+
| user    | host      | generated password   |
+---------+-----------+----------------------+
| percona | localhost | 5ohXP2LBTTPzJ+7oEDL4 |
+---------+-----------+----------------------+
1 row in set (0.00 sec)

Logging

The generated password is logged only in hashed form, so it is never available in clear text anywhere other than the initial result set from the user statement (as above). The authentication plugin is also named in the binlog alongside the hashed password value.  Below are a couple of examples extracted from the MySQL binlog for the ‘percona’@’localhost’ user that we created and altered earlier:

CREATE USER 'percona'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*5978ACEA46C1B81C7BEE2D1470ED1B002FE6840B'
ALTER USER 'percona'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*2994ECB14E21A8333C8C2DEDF38311EB714D500C'

In Closing

Human imagination is often a limiting factor in choosing secure passwords. The random password capability introduced in MySQL 8.0.18 ensures that there is a standardized method for truly random and secure passwords in your database environment.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Oct
07
2021
--

Getting Started with ProxySQL in Kubernetes

There are plenty of ways to run ProxySQL in Kubernetes (K8S). For example, we can deploy sidecar containers on the application pods, or run a dedicated ProxySQL service with its own pods.

We are going to discuss the latter approach, which is more likely to be used when dealing with a large number of application pods. Remember that each ProxySQL instance runs a number of checks against the database backends. These checks monitor things like server status and replication lag. Having too many proxies can cause significant overhead.

Creating a Cluster

For the purpose of this example, I am going to deploy a test cluster in GKE. We need to follow these steps:

1. Create a cluster

gcloud container clusters create ivan-cluster --preemptible --project my-project --zone us-central1-c --machine-type n2-standard-4 --num-nodes=3

2. Configure command-line access

gcloud container clusters get-credentials ivan-cluster --zone us-central1-c --project my-project

3. Create a Namespace

kubectl create namespace ivantest-ns

4. Set the context to use our new Namespace

kubectl config set-context $(kubectl config current-context) --namespace=ivantest-ns

Dedicated Service Using a StatefulSet

One way to implement this approach is to have ProxySQL pods use persistent volumes to store the configuration. We can rely on ProxySQL Cluster mode to make sure the configuration is kept in sync.

For simplicity, we are going to use a ConfigMap with the initial config for bootstrapping the ProxySQL service for the first time.

Exposing the passwords in the ConfigMap is far from ideal, and so far the K8S community hasn’t made up its mind about how to implement Reference Secrets from ConfigMap.

1. Prepare a file for the ConfigMap

tee proxysql.cnf <<EOF
datadir="/var/lib/proxysql"
 
admin_variables=
{
    admin_credentials="admin:admin;cluster:secret"
    mysql_ifaces="0.0.0.0:6032"
    refresh_interval=2000
    cluster_username="cluster"
    cluster_password="secret"  
}
 
mysql_variables=
{
    threads=4
    max_connections=2048
    default_query_delay=0
    default_query_timeout=36000000
    have_compress=true
    poll_timeout=2000
    interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
    default_schema="information_schema"
    stacksize=1048576
    server_version="8.0.23"
    connect_timeout_server=3000
    monitor_username="monitor"
    monitor_password="monitor"
    monitor_history=600000
    monitor_connect_interval=60000
    monitor_ping_interval=10000
    monitor_read_only_interval=1500
    monitor_read_only_timeout=500
    ping_interval_server_msec=120000
    ping_timeout_server=500
    commands_stats=true
    sessions_sort=true
    connect_retries_on_failure=10
}
 
mysql_servers =
(
    { address="mysql1" , port=3306 , hostgroup=10, max_connections=100 },
    { address="mysql2" , port=3306 , hostgroup=20, max_connections=100 }
)
 
mysql_users =
(
    { username = "myuser", password = "password", default_hostgroup = 10, active = 1 }
)
 
proxysql_servers =
(
    { hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-2.proxysqlcluster", port = 6032, weight = 1 }
)
EOF

2. Create the ConfigMap

kubectl create configmap proxysql-configmap --from-file=proxysql.cnf

3. Prepare a file with the StatefulSet

tee proxysql-ss-svc.yml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: proxysql
  labels:
    app: proxysql
spec:
  replicas: 3
  serviceName: proxysqlcluster
  selector:
    matchLabels:
      app: proxysql
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      restartPolicy: Always
      containers:
      - image: proxysql/proxysql:2.3.1
        name: proxysql
        volumeMounts:
        - name: proxysql-config
          mountPath: /etc/proxysql.cnf
          subPath: proxysql.cnf
        - name: proxysql-data
          mountPath: /var/lib/proxysql
          subPath: data
        ports:
        - containerPort: 6033
          name: proxysql-mysql
        - containerPort: 6032
          name: proxysql-admin
      volumes:
      - name: proxysql-config
        configMap:
          name: proxysql-configmap
  volumeClaimTemplates:
  - metadata:
      name: proxysql-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app: proxysql
  name: proxysql
spec:
  ports:
  - name: proxysql-mysql
    nodePort: 30033
    port: 6033
    protocol: TCP
    targetPort: 6033
  - name: proxysql-admin
    nodePort: 30032
    port: 6032
    protocol: TCP
    targetPort: 6032
  selector:
    app: proxysql
  type: NodePort
EOF

4. Create the StatefulSet

kubectl create -f proxysql-ss-svc.yml

5. Prepare the definition of the headless Service (more on this later)

tee proxysql-headless-svc.yml <<EOF 
apiVersion: v1
kind: Service
metadata:
  name: proxysqlcluster
  labels:
    app: proxysql
spec:
  clusterIP: None
  ports:
  - port: 6032
    name: proxysql-admin
  selector:
    app: proxysql
EOF

6. Create the headless Service

kubectl create -f proxysql-headless-svc.yml

7. Verify the Services

kubectl get svc

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
proxysql          NodePort    10.3.249.158   <none>        6033:30033/TCP,6032:30032/TCP   12m
proxysqlcluster   ClusterIP   None           <none>        6032/TCP                        8m53s

Pod Name Resolution

By default, each pod has a DNS name associated in the form pod-ip-address.my-namespace.pod.cluster-domain.example.

The headless Service causes K8S to auto-create a DNS record with each pod’s FQDN as well. The result is we will have the following entries available:

proxysql-0.proxysqlcluster
proxysql-1.proxysqlcluster
proxysql-2.proxysqlcluster

We can then use these to set up the ProxySQL cluster (the proxysql_servers part of the configuration file).
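Once the pods are up, a rough way to confirm the three ProxySQL instances see each other is to query the cluster stats tables from the admin interface (port 6032) of any pod; a sketch, assuming the standard ProxySQL cluster stats tables:

SELECT hostname, port, Uptime_s, Queries FROM stats_proxysql_servers_metrics;
SELECT hostname, name, version, epoch FROM stats_proxysql_servers_checksums;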

Connecting to the Service

To test the service, we can run a container that includes a MySQL client and connect its console output to our terminal. For example, use the following command (which also removes the container/pod after we exit the shell):

kubectl run -i --rm --tty percona-client --image=percona/percona-server:latest --restart=Never -- bash -il

The connections from other pods should be sent to the Cluster-IP and port 6033 and will be load balanced. We can also use the DNS name proxysql.ivantest-ns.svc.cluster.local that got auto-created.

mysql -umyuser -ppassword -h10.3.249.158 -P6033

If the client is connecting from outside the Kubernetes cluster, use NodePort 30033 against the external IP of one of the nodes instead:

mysql -umyuser -ppassword -h<node-external-ip> -P30033

Cleanup Steps

In order to remove all the resources we created, run the following steps:

kubectl delete statefulsets proxysql
kubectl delete service proxysql
kubectl delete service proxysqlcluster
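Note that deleting the StatefulSet does not remove the ConfigMap or the PersistentVolumeClaims created through volumeClaimTemplates. Assuming the default StatefulSet PVC naming (<claim-template>-<pod-name>), the remaining cleanup would look roughly like this:

kubectl delete configmap proxysql-configmap
kubectl delete pvc proxysql-data-proxysql-0 proxysql-data-proxysql-1 proxysql-data-proxysql-2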

Final Words

We have seen one of the possible ways to deploy ProxySQL in Kubernetes. The approach presented here has a few shortcomings but is good enough for illustrative purposes. For a production setup, consider looking at the Percona Kubernetes Operators instead.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Oct
06
2021
--

How to Hide Credentials from Percona Monitoring and Management Client Commands

In this short blog post, we are going to review how to avoid using credentials in the Percona Monitoring and Management (PMM) client command line when adding new exporters. We will use an example with the MySQL exporter, but it is extensible to others (PostgreSQL, MongoDB, etc.).

In the online documentation we can see the basic steps for adding a new MySQL exporter:

  1.  Configure the PMM client 
    1. pmm-admin config ...
  2. Add the MySQL exporter
    1. pmm-admin add mysql --username=pmm --password=pass

The issue with this approach is that the user and password are there in plain sight for anyone to see, be it through the shell history or via commands like ps aux.

The PMM client uses kingpin to parse the arguments given, so we can use its ability to read arguments from a file to do this in a more secure way. We just need to create files with the arguments we want to hide from the command line, like:

shell> cat <<EOF >/home/agustin/pmm-admin-config.conf
--server-insecure-tls
--server-url=https://admin:admin@X.X.X.X:443
EOF

shell> cat <<EOF >/home/agustin/pmm-admin-mysql.conf
--username=pmm
--password=pmmpassword
EOF

Note that the above commands were used for simplicity, to show how the files can be created. If you are worried about leaving traces in the shell command history, use vim (or your editor of choice) to actually create the files and their contents.
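Since these files contain credentials, it is also worth tightening their permissions (an extra hardening step, not part of the original procedure):

shell> chmod 600 /home/agustin/pmm-admin-config.conf /home/agustin/pmm-admin-mysql.conf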

We can use these files in the following way, instead:

shell> pmm-admin config @/home/agustin/pmm-admin-config.conf

shell> pmm-admin add mysql @/home/agustin/pmm-admin-mysql.conf

We can still use other arguments in the command directly. For example, for the MySQL command:

shell> pmm-admin add mysql --port=6033 @/home/agustin/pmm-admin-mysql.conf

PMM clients will not store database credentials within themselves, but will instead request this data from the PMM server. After the exporters are added and running, remove the pmm-admin conf files.

Using Shell Variables

Another way of achieving this is to use “hidden” variables, like:

shell> read -s pmm_mysql_pass
[type_the_password_here]
shell> pmm-admin add mysql --username=pmm --password=${pmm_mysql_pass}

You can then even wipe the variable out if you want:

shell> pmm_mysql_pass=""

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Oct
06
2021
--

Reminder: TokuDB Storage Engine Will Be Disabled by Default in Percona Server for MySQL 8.0.26

As we’ve communicated in our blog post in May, the TokuDB Storage Engine has been marked as “deprecated” in Percona Server for MySQL 8.0. It will be removed in a future version (Percona Server for MySQL 8.0.28, expected to ship in Q1 2022).

With the release of Percona Server for MySQL 8.0.26, the storage engine will still be included in the binary builds and packages but will be disabled by default. If you are upgrading from a previous version, the TokuDB Storage Engine plugin will fail with an error message at server startup if it is installed.

You will still be able to re-enable it manually so that you can perform the necessary migration steps.

Re-enabling the TokuDB Engine in Percona Server for MySQL 8.0.26 is fairly straightforward. You need to add the options

tokudb_enabled=TRUE

and

tokudb_backup_enabled=TRUE

to your my.cnf file and restart your server instance.
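As a minimal sketch, the relevant part of my.cnf would then look like this:

[mysqld]
tokudb_enabled=TRUE
tokudb_backup_enabled=TRUE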

Once the server is up and running again, you can migrate your data to an alternative storage engine like RocksDB or InnoDB before disabling and removing TokuDB. See the chapter Removing the TokuDB storage engine in our documentation for details.

Let us know if you have any questions or comments about this process!

If you need any assistance with migrating your data or have any questions or concerns about this, don’t hesitate to reach out to us – our experts are here to help!

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Oct
05
2021
--

MySQL 8: Account Locking

As part of my ongoing focus on MySQL 8 user and password management, I’ve covered how the new dual passwords feature can reduce the overall DBA workload and streamline the management process (see MySQL 8: Dual Passwords). This wasn’t the only change to user/password management in MySQL 8; one of the more security-focused changes was the implementation of temporary account locking, first introduced in MySQL 8.0.19. With this feature, database administrators can now configure user accounts so that too many consecutive login failures can temporarily lock the account.

The account locking feature only applies to the failure of a client to provide a correct password during the connection attempt. It does not apply to failure to connect for other reasons (network issues, unknown user account, etc.). In the case of dual passwords, either of the account passwords that have been set would count as correct during successful authentication.

Usage of MySQL 8 Account Locking

Configurable options are FAILED_LOGIN_ATTEMPTS and PASSWORD_LOCK_TIME, both used with the CREATE USER and ALTER USER statements. A couple of usage examples are:

CREATE USER 'percona'@'localhost' IDENTIFIED BY 'password' FAILED_LOGIN_ATTEMPTS 4 PASSWORD_LOCK_TIME 3;
ALTER USER 'percona'@'localhost' FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME UNBOUNDED;

Once a user has been set up with these options, too many consecutive login failures will result in an error:

ERROR 3957 (HY000) : Access denied for user percona.
Account is blocked for D day(s) (R day(s) remaining) due to N consecutive failed logins. 

FAILED_LOGIN_ATTEMPTS N

This option determines whether or not to track account login attempts with an incorrect password. The number N specifies how many consecutive wrong password attempts will lock the account.

PASSWORD_LOCK_TIME (N | UNBOUNDED)

This option indicates how long an account will remain locked after too many consecutive incorrect password attempts. The number N specifies the number of days the account will remain locked. For a more permanent account lockout, setting this to UNBOUNDED stipulates that the duration of the locked state is now unbounded and does not end until the account is manually unlocked.

  • For both options, permitted values for N are between 0 and 32,767. Note that if setting the value to 0, the option is disabled. 
  • Note that for failed login tracking and locking to occur for an account, both of the above options must be set to a non-zero value. 
  • When creating a new user without specifying either of the above options, the implicit default value is 0 for any accounts named in the statement. 
    • In other words, failed login tracking and temporary account locking are disabled when these options are not specified. 
    • This also applies to any accounts created before the introduction of this feature. 
  • When altering a user, not specifying either of the above options means the existing values will remain unchanged for all accounts named by the statement. 
  • For temporary account locking to occur, the password failures must be consecutive. 
    • Any successful login before reaching the FAILED_LOGIN_ATTEMPTS value for that user will reset the failure counting. 
  • Once an account has been temporarily locked, it is impossible to log in even with the correct password until either the lock duration has passed or the account is unlocked by one of the account reset methods below. 

Resetting Account Locks

State information about failed-login tracking and the lock status of each account is re-established every time the server reads the grant tables. An account’s state information can be reset, resetting the failed-login count and unlocking the account if it is already locked. Account resets can be global for all accounts or limited to a single account.

Global Reset

  • A global reset of all accounts occurs for any of the following conditions:
    • MySQL server restart.
    • Execution of FLUSH PRIVILEGES.

Single Account Reset

  • A per-account reset occurs for any of the following conditions:
    • Successful login for the account.
    • Expiration of the lock duration.
      • Failed login counting resets and resumes at the time of the next login attempt.
    • Execution of an ALTER USER statement for the account that sets either of the above options (or both) to any value, or the execution of an ALTER USER … UNLOCK statement for the account.
      • No other ALTER USER statements have any effect on the failed-login counter or the account lock state.
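For example, a manual unlock of the ‘percona’@’localhost’ account used earlier would look like this:

mysql> ALTER USER 'percona'@'localhost' ACCOUNT UNLOCK;
Query OK, 0 rows affected (0.01 sec)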

Account Locking Behavior

The account locking state is recorded in the ‘account_locked’ column of the mysql.user system table. The output from SHOW CREATE USER indicates whether an account is locked or unlocked.

If a client attempts to connect to an account that is locked, the attempt will fail. In this case, the server also increments the ‘Locked_connects’ status variable that indicates the number of attempts to connect to a locked account, an error message is displayed, and the attempt is logged to the MySQL error log:

Access denied for user 'percona'@'localhost'.
Account is locked.

Locking an account does not affect connecting using a proxy user that assumes the identity of the locked account. It also does not affect the ability to execute stored procedures or views with the DEFINER attribute set to the locked account.

In Closing

While this is another relatively simple feature, it can significantly impact how your company manages the security aspects of failed login attempts. This leads to a more secure database environment and better client management overall. Having this ability has also proven helpful during maintenance operations to disable access from specific accounts or as a security measure to lock privileged accounts and unlock them only during application maintenance.

Not running MySQL 8 yet? A colleague pointed out that a similar functionality was available in earlier versions of MySQL by utilizing the Connection-Control Plugin. With this plugin, too many consecutive failed attempts would increase the delay in the server response to help avoid or minimize brute force attacks. If there is interest, I may cover this plugin in more detail in a future blog post. Let me know!

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Oct
04
2021
--

Do You Believe in the Future of Open Source Databases? Tell Us Your Views!

Complete the 2021 Open Source Data Management Software Survey to share your knowledge and experience, and help inform the open source database community.

In 2020 we ran our second Open Source Data Management Software Survey. This resulted in some interesting data on the state of the open source database market. 

Some key statistics:

  • 41% of buying decisions are now made by architects, giving them significant power over software adoption within a company.
  • 81% of respondents gave cost savings as the most important reason for adoption. In this challenging economic climate, many companies are actively avoiding vendor license costs and lock-in.
  • 82% of respondents reported at least a 5% database footprint growth over the last year, with 62% reporting more significant growth and 12% growing over 50%.
  • Although promoted as a cheap and convenient alternative, cloud costs can spiral, with 22% of companies spending more on cloud hosting than planned.

To see how the landscape has changed over the last (turbulent) year, we are pleased to announce the launch of our third annual Open Source Data Management Software Survey.

The final results will be 100% anonymous and made freely available via Creative Commons Attribution ShareAlike (CC BY-SA).

Access to Accurate Market Data is Important

There are millions of open source projects currently running, and most are dependent on databases. 

Accurate data helps people track the popularity of different databases, and see how and where these databases are running successfully. 

The information helps people build better software and take advantage of shifting trends. It also helps businesses understand industry direction and make informed decisions on the software and services they choose.

We want to help developers, architects, DBAs, and any other users choose the best database or tool for their business, and understand how and where to deploy it. 

How Can You Help This Survey Succeed?

Firstly, we would love you to complete the survey and share your insight into current trends and new developments in open source database software. 

Secondly, we hope you will share this survey with other people who use open source database software and encourage them to contribute.

The more responses we receive, the more useful this data will be to the community. If there is anything we missed or you would like to ask in the future, we welcome your feedback.

The survey should take around 10 minutes to complete. 

Click here to complete the survey!

Sep
29
2021
--

Percona Monthly Bug Report: September 2021

Here at Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open source form, report back on any issues or bugs you might encounter along the way.

We constantly update our bug reports and monitor other boards to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. These posts are a central place to get information on the most noteworthy open and recently resolved bugs. 

In this September 2021 edition of our monthly bug report, we have the following list of bugs:

Percona Server for MySQL/MySQL Bugs

PS-7866 (MySQL#104961): MySQL crashes when running a complex query with JOINs. The MySQL error log only contains a single line, “Segmentation fault (core dumped)”, with no further details such as a backtrace. Further analysis of the core dump shows that the crash happened in CreateIteratorFromAccessPath.

Affects Version/s: 8.0 [Tested/Reported version 8.0.23, 8.0.25]

 

PS-7778 (MySQL#104168): A prepared statement can fail when triggers with a DEFINER are used. This is a regression bug introduced with the WL#9384 implementation. The issue is not reproducible in lower versions (tested with 8.0.20).

Affects Version/s: 8.0 [Tested/Reported version 8.0.22, 8.0.25]

Fixed version: PS-8.0.26

 

MySQL#102586: When doing a multiple-table DELETE that is anticipating a foreign key ON DELETE CASCADE, the statement works on the primary but breaks row-based replication.

Affects Version/s: 8.0, 5.7  [Tested/Reported version 8.0.23, 5.7.33]

 

Percona XtraDB Cluster

PXC-3724: A PXC node crashes with a long semaphore wait; this issue occurs due to locking when writing on multiple nodes in a PXC cluster. This is critical, as it blocks all nodes from performing any transactions and finally crashes the PXC node.

Affects Version/s: 8.0  [Tested/Reported version 8.0.22]

 

PXC-3729: A PXC node fails with two applier threads due to a conflict. With a certain combination of a table having both a primary key and a (multi-column) unique key, and a highly concurrent write workload updating similar UK values, parallel appliers on secondary nodes may conflict with each other, causing node aborts with an inconsistent state. This can be seen even when writes go to a single writer node only. As the transactions were already certified and replicated, they should never fail on the appliers.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

 

PXC-3449: When an ALTER TABLE (TOI) is executed in a user session, it sometimes conflicts (MDL) with a high-priority transaction, which causes a BF-BF abort and server termination.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

Fixed Version/s: 8.0.25

 

Percona XtraBackup

PXB-2375: In some cases, xtrabackup writes the wrong binlog filename, position, and GTID combination in xtrabackup_binlog_info. Because of this, xtrabackup might not work as expected with GTID.

If we use such a backup, with the GTID position details from xtrabackup_binlog_info, to create a new replica, then replication will most likely break due to the incorrect GTID position.

It looks like the GTID position is not consistent with the binlog file position; they are captured differently and later printed together in the xtrabackup_binlog_info file.

Workarounds to avoid this bug:

  • Use binary log coordinates.
  • Take the backup during off-peak hours, since this issue mostly occurs when MySQL is under heavy write load.

Affects Version/s:  8.0 [Tested/Reported version 8.0.14]

 

Percona Toolkit

PT-1889: Incorrect output when using pt-show-grants for users based on MySQL roles; as a result, the grants cannot be applied back properly on the MySQL server. Because of this, we cannot use pt-show-grants for MySQL roles until this issue is fixed.

Affects Version/s:  3.2.1

 

PT-1747: pt-online-schema-change was bringing the database into a broken state when applying the “rebuild_constraints” foreign key modification method if any of the child tables were blocked by a metadata lock.

Affects Version/s:  3.0.13

Fixed Version: 3.3.2

 

PMM  [Percona Monitoring and Management]

PMM-8421: Starting with pmm-server 2.19.0, the upgrade breaks existing external service monitoring; internally supported services like MySQL are not impacted.

Affects Version/s: 2.19.0  [Tested/Reported version 2.19]

Fixed Version:  2.22.0

 

PMM-7116: MongoDB ReplSet Summary Dashboard shows incorrect replset member’s state: STARTUP instead of PRIMARY.

Affects Version/s: 2.x  [Tested/Reported version 2.12,2.20]

 

PMM-8824: If there are custom dashboards tagged with tags different from the ones used for the default PMM dashboards, import-dashboard.py fails with an error. As a result, the upgrade to the next PMM version fails.

Affects Version/s: 2.21.0  [Tested/Reported version 2.21.0]

Fixed Version:  2.22.0

 

PMM-4665: Frequent error messages appear in pmm-agent.log for components like the TokuDB storage engine, which are not supported by upstream MySQL. As a result, the overall log file size grows due to these additional messages.

Affects Version/s:  2.x  [Tested/Reported version 2.13.0]

Fixed version: 2.19.0

 

PMM-7846: Adding a MongoDB instance via pmm-admin with the tls option does not work and fails with the error “Connection check failed: timeout (context deadline exceeded)”.

Affects Version/s: 2.x  [Tested/Reported version 2.13, 2.16]

 

Summary

We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, How to Report Bugs, Improvements, New Feature Requests for Percona Products.

For the most up-to-date information, be sure to follow us on Twitter, LinkedIn, and Facebook.

Quick References:

Percona JIRA  

MySQL Bug Report

Report a Bug in a Percona Product

MySQL 8.0.24 Release notes

___

About Percona:

As the only provider of distributions for all three of the most popular open source databases—PostgreSQL, MySQL, and MongoDB—Percona provides expertise, software, support, and services no matter the technology.

Whether it’s enabling developers or DBAs to realize value faster with tools, advice, and guidance, or making sure applications can scale and handle peak loads, Percona is here to help.

Percona is committed to being open source and preventing vendor lock-in. Percona contributes all changes to the upstream community for possible inclusion in future product releases.
