Oct
16
2020
--

Danger of Changing Default of log_error_verbosity on MySQL/Percona Server for MySQL 5.7


Changing the default value (3) of log_error_verbosity in MySQL/Percona Server for MySQL 5.7 can have a hidden, unintended effect! What does log_error_verbosity do exactly? As per the documentation:

The log_error_verbosity system variable specifies the verbosity for handling events intended for the error log.

Basically, a value of 1 logs only [Error] messages; 2 adds [Warning] messages; and 3 (the default) also adds [Note] messages.

For example, one might be tempted to change the default of log_error_verbosity since the error.log might be bloated with thousands (or millions) of lines with [Notes], such as:

2020-10-08T17:02:56.736179Z 3 [Note] Access denied for user 'root'@'localhost' (using password: NO)
2020-10-08T17:04:48.149038Z 4 [Note] Aborted connection 4 to db: 'unconnected' user: 'root' host: 'localhost' (Got timeout reading communication packets)

(P.S. You can read more about those Notes in these other Percona blog posts):

Fixing MySQL 1045 Error: Access Denied

MySQL “Got an error reading communication packet”

The problem is that after lowering log_error_verbosity to 1 or 2, no messages about server startup or shutdown are printed to the log! That can make troubleshooting very hard in the event of issues or a system failure. For completeness, in the error.log of 5.7 at startup with the default log_error_verbosity, the following should be seen:

2020-10-08T16:36:07.302168Z 0 [Note] InnoDB: PUNCH HOLE support available
2020-10-08T16:36:07.302188Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
...
2020-10-08T16:36:07.303998Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-10-08T16:36:07.307823Z 0 [Note] InnoDB: Completed initialization of buffer pool
...
2020-10-08T16:36:07.497571Z 0 [Note] /usr/sbin/mysqld: ready for connections.

And on shutdown:

2020-10-08T16:36:10.447002Z 0 [Note] Giving 0 client threads a chance to die gracefully
2020-10-08T16:36:10.447022Z 0 [Note] Shutting down slave threads
2020-10-08T16:36:10.447027Z 0 [Note] Forcefully disconnecting 0 remaining clients
…
2020-10-08T16:36:12.104099Z 0 [Note] /usr/sbin/mysqld: Shutdown complete

With log_error_verbosity = 2, there won't be messages about the MySQL startup, but some warnings that are only printed at startup might give a hint of the time of a server restart, such as:

2020-10-08T16:30:21.966844Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-10-08T16:30:22.181367Z 0 [Warning] CA certificate ca.pem is self signed.
2020-10-08T16:30:22.221732Z 0 [Warning] Changed limits: table_open_cache: 431 (requested 6000)

If there is no information about the server restart in the error log, the system logs can be checked instead:

# cat /var/log/messages 
...
Oct  8 16:31:25 carlos-tutte-latest57-standalone-1 systemd: Starting MySQL Server...
Oct  8 16:31:26 carlos-tutte-latest57-standalone-1 systemd: Started MySQL Server.

If there is still no clue about when MySQL was last started, checking the "Uptime" status variable can help in calculating the time of the last startup.
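For example, a minimal sketch of that calculation (assuming 5.7 with performance_schema enabled, which is the default):

SELECT NOW() - INTERVAL variable_value SECOND AS estimated_last_startup
  FROM performance_schema.global_status
 WHERE variable_name = 'Uptime';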

The problem does NOT occur on MySQL/Percona Server for MySQL 8.0 since even with log_error_verbosity = 1, the following startup/shutdown lines are printed on the error.log:

2020-10-08T16:31:54.532504Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.20-11) starting as process 1052
2020-10-08T16:31:54.540342Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-10-08T16:31:55.026815Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-10-08T16:31:55.136125Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/lib/mysql/mysqlx.sock' bind-address: '::' port: 33060
2020-10-08T16:31:55.270669Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.20-11'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona Server (GPL), Release 11, Revision 5b5a5d2.
2020-10-08T16:32:01.708932Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.20-11)  Percona Server (GPL), Release 11, Revision 5b5a5d2.

In conclusion, if possible, avoid changing the default of log_error_verbosity on MySQL/Percona Server for MySQL 5.7. And if you do need to change it, do it online with SET GLOBAL instead of through the config file, since a value lowered in the config file takes effect at startup, and the startup messages would then not be logged after a restart.
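As a minimal sketch of that approach (the value 2 here is just an example), the change applies at runtime only, so the default of 3 is still in effect at the next startup and the startup/shutdown [Note] lines are preserved:

SET GLOBAL log_error_verbosity = 2;
-- verify the runtime value; the value used at startup still comes from the config file (or the built-in default)
SELECT @@GLOBAL.log_error_verbosity;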

Oct
12
2020
--

Ask An Expert On Percona’s Community Forum!


Have you recently visited Percona's Community Forum? It's your hub for direct Q&A with top database experts, including Percona CEO Peter Zaitsev and CTO Vadim Tkachenko. Last quarter over 450 users participated, including 45 engineers from Percona's staff. Since it was first launched in 2006 and revamped earlier this year, our Forum has built up a mountain of easily-searched database expertise.

This free online community is open to everyone from newbies to maestros. Register as a user and then ask your question, or visit the unanswered question list and help someone else. You'll feel great doing so, plus you'll earn points and advance in rank, building your online reputation as users like vaibhav_upadhyay40, Fan, Federico Razzoli, Ghan, djp, Björn, rdab100, Stateros, gordan, Venkat, and others have done.

Our Forum Q&A covers all flavors of MySQL, MongoDB, and PostgreSQL, as well as key utilities like ProxySQL and PMM. Plus it’s the only site announcing all of Percona’s new software releases and Percona’s occasional urgent software alerts. You can even personalize email notifications to track only certain categories and skip the rest.  And we promise to never spam you!

A few of our most popular posts illustrate how it all works:

So what's the fine print? Most importantly, remember that it's volunteers who answer, so answers may not be timely, might not be in-depth, or occasionally might never arrive at all. Response time now averages five days. Remember never to share confidential details, as everything on the Forum is public.

The Forum operates on a spirit of self-help, so do a reasonable amount of your own research before popping off a question. And if you get help, try to give back help too. Everything depends on a spirit of reciprocity and goodwill. The Forum is for those who love databases and want to share their enthusiasm with others, and for those who want to master the database craft.

Finally, as our lawyers make us say, answers are “as-is” meaning Percona does not guarantee accuracy and disclaims all liability. Our Help Articles, Code of Conduct, and Terms of Service explain it all.

So register as a user and give the Percona Forum a try! As always we welcome your suggestions and feedback to Community-Team@Percona.com.

Oct
09
2020
--

Amazon Aurora Multi-Primary First Impression


For what reason should I use a real multi-primary setup?

To be clear, I am not talking about a multi-writer solution where any node can become the active writer when needed, as with Percona XtraDB Cluster (PXC) or Percona Server for MySQL using Group Replication. No, we are talking about a multi-primary setup where I can write at the same time on multiple nodes. I want to insist on this "why?".

After having excluded the possible solutions mentioned above, both of which cover the famous 99.995% availability (which is 26.30 minutes of downtime in a year), what is left?

Disaster Recovery? Well, that is something I would love to have, but to be a real DR solution we need to put several kilometers (miles for imperial) in the middle.

And we know (see here and here) that aside from some misleading advertising, we cannot have a tightly coupled cluster solution across geographical regions.

So, what is left? I may need more HA, ok, that is a valid reason. Or I may need to scale the number of writes, which is a valid reason as well. This means, in the end, that I am looking at a multi-primary setup because:

  • Scale writes (more nodes more writes)
    • Consistent reads (what I write on A must be visible on B)
  • Gives me 0 (zero) downtime, or close to that (5 nines is a maximum downtime of 864 milliseconds per day!!)
  • Allow me to shift the writer pointer at any time from A to B and vice versa, consistently.

Now, keeping myself bound to the MySQL ecosystem, my natural choice would be MySQL NDB Cluster. But my (virtual) boss was at AWS re:Invent and someone mentioned to him that Aurora Multi-Primary does what I was looking for. This (long) article is my voyage of discovering whether that is true or … not.

Given I am focused on the behavior first, and NOT interested in absolute numbers to shock the audience with millions of QPS, I will use low-level Aurora instances, and I will perform the tests from two EC2 instances in the same VPC/region as the nodes.

You can find the details about the tests on GitHub here.

Finally, I will test:

  • Connection speed
  • Stale read
  • Write single node for baseline
  • Write on both nodes:
    • Scaling by splitting the load by schema
    • Scaling on the same schema

Test Results

Let us start to have some real fun. The first test is …

Connection Speed

The purpose of this test is to evaluate the time taken in opening a new connection and time taken to close it. The action of the open/close connection can be a very expensive operation, especially if applications do not use a connection pool mechanism.

Amazon Aurora Multi-Primary


As we can see, ProxySQL proves to be the most efficient way to deal with opening connections, which was expected given the way it is designed to reuse open connections towards the backend.


The close connection operation is different, and there ProxySQL seems to take a little bit longer.

As a global observation, we can say that by using ProxySQL we get more consistent behavior. Of course, this test is a simplistic one, and we are not checking the scalability (from 1 to N connections), but it is good enough to give us an initial feeling. Specific connection tests will be the focus of the next blog on Aurora MM.

Stale Reads

Aurora multi-primary uses the same mechanism as default Aurora to update the buffer pool:


It uses the page cache update, just in both directions. This means that the buffer pool of Node2 is updated with the modifications performed on Node1 and vice versa.

To verify whether an application would really be able to have consistent reads, I ran this test, which measures if, and how many, stale reads we get when writing on one node and reading from the other.
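The full test harness is in the GitHub repository linked above; conceptually it boils down to something like this two-session sketch (table and column names here are illustrative, not the ones used in the actual tests):

-- Session 1, connected to Node1 (the writer):
INSERT INTO test.stale_check (id, ts_written) VALUES (1, NOW(6));

-- Session 2, connected to Node2 (the reader), polled repeatedly right after the insert:
SELECT id, ts_written, NOW(6) AS ts_seen FROM test.stale_check WHERE id = 1;
-- an empty result (or an old value) here is a stale read; once the row appears,
-- ts_seen - ts_written approximates the visibility lag between the two nodes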

Amazon Aurora multi-primary has two consistency models:

Aurora consistency model
As an interesting fact, the result was that with the default consistency model (INSTANCE_RAW), we got 100% stale reads.
Given that, I focused on identifying the cost of using the other consistency model (REGIONAL_RAW), which allows an application to have consistent reads.

The results indicate an increase of 44% in total execution time, and of 95% (22 times slower) in write execution.

It is interesting to note that the time taken is in some way predictable and consistent between the two consistency models.

The graph below shows in yellow how long the application must wait to see the correct data on the reader node. In blue is the amount of time the application waits to get back the same consistent read because it must wait for the commit on the writer.

lag time in nanoseconds

As you can see, the two are more or less aligned.

Given the performance cost imposed by using REGIONAL_RAW,  all the other tests are done with the default INSTANCE_RAW, unless explicitly stated.

Writing Tests

All tests run in this section were done using sysbench-tpcc with the following settings:

sysbench ./tpcc.lua --mysql-host=<> --mysql-port=3306 --mysql-user=<> --mysql-password=<> --mysql-db=tpcc --time=300 --threads=32 --report-interval=1 --tables=10 --scale=15  --mysql_table_options=" CHARSET=utf8 COLLATE=utf8_bin"  --db-driver=mysql prepare

 sysbench /opt/tools/sysbench-tpcc/tpcc.lua --mysql-host=$mysqlhost --mysql-port=$port --mysql-user=<> --mysql-password=<> --mysql-db=tpcc --db-driver=mysql --tables=10 --scale=15 --time=$time  --rand-type=zipfian --rand-zipfian-exp=0 --report-interval=1 --mysql-ignore-errors=all --histogram  --report_csv=yes --stats_format=csv --db-ps-mode=disable --threads=$threads run

Write Single Node (Baseline)

Before starting the comparative analysis, I wanted to define the "limit" of traffic/load for this platform.

baseline reads/writes

From the graph above, we can see that this setup scales up to 128 threads and after that, the performance remains more or less steady.

Amazon claims that we can roughly double the performance when using both nodes in write mode and using a different schema on each to avoid conflicts.

aurora scalability

Once more, remember I am not interested in the absolute numbers here, but I am expecting the same behavior. Given that, our expectation is to see:

expected scalability

Write on Both Nodes, Different Schemas

So AWS recommends this as the scaling solution:


And I diligently followed the advice. I used two EC2 nodes in the same subnet as the Aurora nodes, each writing to a different schema (tpcc & tpcc2).

Overview

Let us make it short and go straight to the point. Did we get the expected scalability?

Well, no:


We just had a 26% increase, quite far from the expected 100%. Let us see what happened in detail (if you are not interested, just skip ahead to the next test).

Node 1

Schema read writes Aurora

Node 2


As you can see, Node1 was (more or less) keeping up with the expectations and being close to the expected performance. But Node2 was just not keeping up, and performance there was simply terrible.

The graphs below show what happened.

While Node1 was (again, more or less) scaling up to the baseline expectations (128 threads), Node2 collapsed to its knees at 16 threads. Node2 was never able to scale up.

Reads

Node 1


Node1 is scaling the reads as expected, but here and there we can also see performance deterioration.

Node 2


Node2 is not scaling Reads at all.

Writes

Node 1


Same as Read.

Node 2


Same as read.

Now someone may think I was making a mistake and writing on the same schema. I assure you I was not. Check the next test to see what happened when using the same schema.

Write on Both Nodes,  Same Schema

Overview

Now, now, Marco, this is unfair. You know this will cause contention. Yes, I do! But nonetheless, I was curious to see what was going to happen and how the platform would deal with that level of contention.

My expectations were to have a lot of performance degradation and an increased number of locks. About conflicts, I was not wrong; after the test, node2 reported:

+-------------+---------+-------------------------+
| table       | index   | PHYSICAL_CONFLICTS_HIST |
+-------------+---------+-------------------------+
| district9   | PRIMARY |                    3450 |
| district6   | PRIMARY |                    3361 |
| district2   | PRIMARY |                    3356 |
| district8   | PRIMARY |                    3271 |
| district4   | PRIMARY |                    3237 |
| district10  | PRIMARY |                    3237 |
| district7   | PRIMARY |                    3237 |
| district3   | PRIMARY |                    3217 |
| district5   | PRIMARY |                    3156 |
| district1   | PRIMARY |                    3072 |
| warehouse2  | PRIMARY |                    1867 |
| warehouse10 | PRIMARY |                    1850 |
| warehouse6  | PRIMARY |                    1808 |
| warehouse5  | PRIMARY |                    1781 |
| warehouse3  | PRIMARY |                    1773 |
| warehouse9  | PRIMARY |                    1769 |
| warehouse4  | PRIMARY |                    1745 |
| warehouse7  | PRIMARY |                    1736 |
| warehouse1  | PRIMARY |                    1735 |
| warehouse8  | PRIMARY |                    1635 |
+-------------+---------+-------------------------+

This is obviously a strong indication that something was not working right. In terms of performance gain, if we compare ONLY the results at 128 threads:


Even with the high level of conflict, we still have a 12% performance gain.

The problem is that, in general, the two nodes behave quite badly. If you check the graph below, you can see that the level of conflict is such that it prevents the nodes not only from scaling but even from acting consistently.

Node 1

Write on Both Nodes,  Same Schema

Node 2


Reads

In the following graphs, we can see how node1 had issues and actually crashed three times, during the tests with 32/64/512 threads. Node2 was always up, but the performance was very low.

Node 1


Node 2


Writes

Node 1


Node 2


Recovery From Crashed Node

About recovery time, reading the AWS documentation and listening to presentations, I often heard that Aurora Multi-Primary is a 0 downtime solution. Or other statements like: “in applications where you can’t afford even brief downtime for database write operations, a multi-master cluster can help to avoid an outage when a writer instance becomes unavailable. The multi-master cluster doesn’t use the failover mechanism, because it doesn’t need to promote another DB instance to have read/write capability”

To achieve this, the suggested solution I found was to have applications point directly to the node endpoints instead of using the cluster endpoint.

In this context, the solution pointing to the Nodes should be able to failover within a second or so, while the cluster endpoint:

Recovery From Crashed Node

Personally, I think that designing an architecture where the application is responsible for the connection to the database and for failover is some kind of leftover from 2001. But if you feel this is the way, well, go for it.

What I did for testing was to use ProxySQL, as plain as possible, with nothing more than the basic native monitor. I then compared the results with the tests using the cluster endpoint. In this way, I adopted the advice of pointing directly at the nodes, but I was doing it in a more current way.
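For reference, the ProxySQL side of this was as basic as it gets; a minimal sketch of the admin-interface setup (hostnames and credentials are placeholders, not the endpoints used in the tests) looks like this:

-- register both Aurora node endpoints in one hostgroup
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'aurora-node-1.example.com', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'aurora-node-2.example.com', 3306);
-- credentials used by the native monitor to check the backends
UPDATE global_variables SET variable_value = 'monitor' WHERE variable_name = 'mysql-monitor_username';
UPDATE global_variables SET variable_value = 'monitor_password' WHERE variable_name = 'mysql-monitor_password';
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;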

The results are below and they confirm (more or less) the data coming from Amazon.


A downtime of seven seconds is quite a long time nowadays, especially if I am targeting the 5 nines solution which, remember, allows a maximum of 864 ms of downtime per day. Using ProxySQL gets closer to that, but it is still too long to be called zero downtime.
I also had fail-back issues when using the AWS cluster endpoint, given that it was not able to move the connections to the joining node seamlessly.

Last but not least, when using the consistency level INSTANCE_RAW, I had some data issues as well as PK conflicts:
FATAL: mysql_drv_query() returned error 1062 (Duplicate entry ‘18828082’ for key ‘PRIMARY’) 

Conclusions

As stated at the beginning of this long blog, the reasonable expectations for going with a multi-primary solution were:

  • Scale writes (more nodes more writes)
  • Gives me zero downtime, or close to that (5 nines is a maximum downtime of 864 milliseconds per day!!)
  • Allow me to shift the writer pointer at any time from A to B and vice versa, consistently.

Honestly, I feel we have completely failed on the scaling point. Probably, if I use the largest Aurora instances, I will get much better absolute numbers, and it will take me longer to encounter the same issues, but I will encounter them. In any case, if the multi-primary solution is designed to provide that scalability, it should do so at any size.

I did not have zero downtime, but I was able to failover pretty quickly with ProxySQL.

Finally, unless the consistency model is REGIONAL_RAW, shifting from one node to the other is prone to possible negative effects like stale reads. Given that, I consider this requirement not satisfied in full.

Given all the above, I think this solution could eventually be valid only for High Availability (close to being 5 nines), but given it comes with some limitations, I do not feel comfortable preferring it over others just for that; in the end, default Aurora is already good enough as a highly available solution.

References

AWS re:Invent 2019: Amazon Aurora Multi-Master: Scaling out database write performance

Working with Aurora multi-master clusters

Improving enterprises ha and disaster recovery solutions reviewed

Robust ha solutions with proxysql

Limitations of multi-master clusters

Oct
08
2020
--

The Criticality of a Kubernetes Operator for Databases


As a Solutions Engineer at Percona, one of my responsibilities is to support our customers as they investigate new and emerging technologies. This affords me the opportunity to speak to many current and new customers who partner with Percona. The topic of Kubernetes is becoming more popular as companies are investigating and adopting this technology. The issue most companies are encountering is architecting a stateful database that doesn't fall victim to an environment tuned for ephemeral workloads. This obviously introduces a level of complexity as to how to run a stateful database in an inherently stateless world, as databases are not natively designed for that.

To make your life easier, as a part of the Percona Cloud-Native Autonomous Database Initiative, our engineering teams have built two Kubernetes Operators: Percona Kubernetes Operator for Percona XtraDB Cluster and Percona Kubernetes Operator for Percona Server for MongoDB, which allow Kubernetes Pods to be destroyed, moved, or created with no impact to the application. For an overview of Kubernetes, you can read my previous blog, Introduction to Percona Kubernetes Operator for Percona XtraDB Cluster, which covers this topic. It's common for companies new to Kubernetes to attempt to run their databases in Kubernetes the same way they would in a traditional environment. But this is not advised, as it introduces the possibility of data loss, and it is not recommended for production workloads. Why is this dangerous, and how has Percona solved it?

Appropriate Workloads for Kubernetes

Kubernetes is not the answer for everyone. It's not even the answer for most people. Do not be misled into thinking that moving a database into Kubernetes is going to solve any of your problems. Before you consider moving your database into Kubernetes, ensure the rest of your application is cloud-native and can be used with Kubernetes. Moving your database to Kubernetes should happen after you have started using both elastic vertical and horizontal scaling and need to orchestrate it to control costs.

As more companies move to Kubernetes, something has to happen to the legacy workloads. Oftentimes we see a lift-and-shift mentality toward Kubernetes, which can be dangerous or cause more work than expected. We have seen two primary ideal use cases for moving database workloads to Kubernetes: microservices and a unified abstraction layer.

Monolithic, large datasets can prohibit some of Kubernetes’ strong points: self-healing and availability. This can be an issue due to the time it takes to physically transmit data to a new Pod instance as it joins the database cluster. If your dataset is too large, this process is slow due to physical limitations and prohibits performance and the availability of your database. Microservices are a great fit due to the relatively smaller datasets, which allows Kubernetes automation to work well with the dataset size.

Companies looking to take full advantage of cloud-native applications and databases can be a really good fit for Kubernetes as well. If you truly want the ability to deploy and run your databases anywhere utilizing the concept of a Unified Abstraction Layer, Kubernetes is a great option. You can move your databases to anywhere that is running Kubernetes and know it will work.

We talked about large unsharded datasets and the limitations Kubernetes presents when handling them, but we should mention a few more workloads better suited for traditional platforms. Applications with throughput sensitivity may not do well on Kubernetes, or they may not be cost-effective there. Kubernetes is fundamentally designed for container orchestration and is not designed to handle highly performant databases that require low latency. This may be possible to achieve, but at what cost? The same applies to highly performant distributed applications. Lowest latency across all nodes is not a core tenet of Kubernetes, so ensure you have planned and tested against this before you move everything over to Kubernetes.

Pods Are Cattle, Not Pets

If you’re not familiar with Pets vs Cattle, it’s a DevOps concept that differentiates deployment methodologies of unique servers that require attention when issues arise (pets) versus the ability to replace a server with a copy if issues arise (cattle). Due to the nature of how Kubernetes operates, Pods can be destroyed, spun up, and moved at any time due to factors outside of the application’s control, much like how cattle are treated. Kubernetes uses a scheduler, which by design, can destroy and recreate Pods to meet the configuration needs of your Kubernetes Cluster. This is great for stateless applications as any failure in the application will result in a Pod containing the application being destroyed and recreated, eliminating the need for human interaction, and greatly speeding up the process to a resolution. This isn’t ideal for databases as you don’t want your database to suddenly stop working, halt the application, and introduce the potential for lost or corrupted data. One of the tools Kubernetes can utilize to help combat this is called Stateful Sets. These help by keeping a Pod’s identity assigned to it as it is destroyed and re-created. This helps facilitate stateful workloads, but how does this come into play with high availability and utilizing the automation aspects of Kubernetes?

Databases Are Pets, Not Cattle

Databases by design need to keep their identity, information, and most importantly their data safe and accessible at all times. They are the backbones of the application as they are the source of truth an application relies on for normal processing. Any errors in their operations will quickly stop an application from functioning. They are important, to say the least. How can we safely run databases in Kubernetes and still ensure we have highly available database deployments? By using Stateful Sets and Persistent Volumes we can maintain data integrity, but we need an additional set of hands to take on database administrator tasks such as ensuring failover happens, database members are recovered, and re-join the highly available architecture, along with other technology-specific functions. Fortunately, Kubernetes is extensible and has Operators, which aims to automate the key task of a human operator who is managing a service or set of services.

Automation, Automation, Automation

We know the complexities of running a database (safely) in Kubernetes and some of the concepts used to help bridge the gaps between automation and traditional human functions. With the help of Percona’s Kubernetes Operators, we can safely run databases the way they were intended to run. Percona’s Kubernetes Operators are able to automate tasks that are usually done by a database administrator such as:

  • Fully automated deployments with strict consistency and no single point of failure
  • Automated scaling with the ability to change the size parameter to add or remove members of a Cluster or Replica-Set
  • Fully automated backups and restores
  • Fully automated self-healing by automatically recovering from the failure of a single Cluster or Replica-Set member.
  • Automatically manage system users on password rotation
  • Simplified updates

Always Use a Kubernetes Operator

With the complexities of running a highly available database environment and the inherent dangers introduced by using the dynamic Kubernetes environment, an Operator should always be used when deploying databases in Kubernetes. Fortunately, Percona has already solved this by providing Percona Kubernetes Operator for Percona XtraDB Cluster and Percona Kubernetes Operator for Percona Server for MongoDB. Percona provides full support for databases running in Kubernetes with the Percona Operators. If you are interested in learning more or obtaining support or professional services to maximize your database deployments, please reach out to us.
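To give a sense of how little ceremony is involved, a minimal deployment sketch for the Percona Kubernetes Operator for Percona XtraDB Cluster looks like the following (the namespace name is just an example; always follow the current installation documentation for your version):

# clone the operator repository and apply its manifests
git clone https://github.com/percona/percona-xtradb-cluster-operator
cd percona-xtradb-cluster-operator
kubectl create namespace pxc
kubectl apply -n pxc -f deploy/bundle.yaml   # CRDs, RBAC, and the operator deployment
kubectl apply -n pxc -f deploy/cr.yaml       # the Percona XtraDB Cluster custom resource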

Oct
08
2020
--

MySQL 101: Troubleshooting MySQL with Percona Monitoring and Management


Percona Monitoring and Management (PMM) is a free and open source platform for managing and monitoring MySQL, MongoDB, and PostgreSQL databases. In this blog, we will look at troubleshooting MySQL issues and performance bottlenecks with the help of PMM.

We can start troubleshooting MySQL after selecting the appropriate server or cluster from the PMM dashboard. There are many aspects to check, and which gets priority depends on the issue you are facing.

Things to look at:

OS:

CPU Usage:

Check to see if there is a spike or gradual increase in the CPU usage on the database server outside of the normal pattern. If so, note the timeframe of the spike or the starting point of the increasing load and review the database connections/thread details from the MySQL dashboard for that time interval.

MySQL CPU usage

CPU Saturation Metrics and Max Core Usage:

This is an important metric as it shows the saturation level of the CPU with normalized CPU load. Normalized CPU Load is very helpful for understanding when the CPU is overloaded. An overloaded CPU causes response times to increase and performance to degrade.

CPU Saturation Metrics and Max Core Usage

Disk latency/ Disk IO utilization:

Check to see if there is any latency observed for the disk. If you see the Disk IO utilization reach 100%, this will cause latency in queries, as the disk would not be able to keep up with the reads/writes, causing a gradual pile-up of queries and hence the spike. The issue might be with the underlying disk or hardware.

Disk latency/ Disk IO utilization

Memory Utilization:

Any sudden change in memory consumption could indicate some process hogging the memory, for example, the MySQL process if many concurrent queries or long-running queries are in progress. We can also see an increase when a backup job or a scheduled batch job is in progress on the server.

Network Details:

Check the inbound and outbound network traffic for the duration of the issue for any sudden dip, which would point to network problems.

MySQL:

MySQL Client Thread Activity / MySQL Connections:

If the issue at hand is a large number of running threads or a connection spike, you can check the graphs of MySQL Connections and MySQL Thread Activity and get the timeframe when these connections start increasing. More details about the threads (or queries running) can be seen from the InnoDB dashboards. As mentioned previously, a spike in Disk IO utilization reaching 100% can also cause connections to pile up. Hence, it is important to check all aspects before coming to any conclusion.

MySQL Client Thread Activity / MySQL Connections

MySQL Slow Queries:

If there were queries performing slowly, this would be reported in the MySQL Slow Queries graph. This could be due to old queries slowing down because of multiple concurrent queries or underlying disk load, or newly introduced queries that need analysis and optimization. Look at the timeframe involved and further check the slow logs and QAN to get the exact queries.

MySQL Slow Queries
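To dig further outside of PMM, it is worth confirming where the slow log lives and what threshold it uses (the values shown will be whatever is configured on your server):

SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';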

MySQL Aborted Connections:

If a large number of users or threads were unable to establish a connection to the server, this would be reported as a spike in aborted connections.
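The matching server-side counters can also be checked directly; Aborted_connects counts failed connection attempts, while Aborted_clients counts connections dropped without a proper close:

SHOW GLOBAL STATUS LIKE 'Aborted%';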

InnoDB Metrics:

InnoDB Transactions:

This metric will show the graph of the History List Length (HLL) on the server, which is basically the 'undo logs' created to keep a consistent state of the data for any particular connection. An increase in HLL over time is caused by long-running transactions on the server. If you see a gradual increase in HLL, look at your server and check SHOW ENGINE INNODB STATUS\G. Look for the culprit query/transaction and try to kill it if it's not truly needed. While not an immediate issue, an increase in HLL can hamper the performance of a server if the value is in the millions and still increasing.

InnoDB Transactions
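Besides SHOW ENGINE INNODB STATUS, a quick way to spot the long-running transactions behind a growing History List Length is the information_schema.innodb_trx table, for example:

SELECT trx_id, trx_started,
       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds,
       trx_mysql_thread_id,
       LEFT(trx_query, 80) AS query_snippet
  FROM information_schema.innodb_trx
 ORDER BY trx_started
 LIMIT 10;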

InnoDB Row Operations:

When you see a spike in thread activity, you should check here next to get more details about the threads running.  Spike in reads/inserts/deletes? You will get details about each of the row operations and their count, which will help you understand what kind of queries were running on the server and find the particular job that is responsible for this.

InnoDB Row Operations

InnoDb Locking > InnoDB Row Lock Time:

Row Lock Waits indicates how many times a transaction waited on a row lock per second. Row Lock Wait Load is a rolling 5 minute average of Row Lock Waits. Average Row Lock Wait Time is the row lock wait time divided by the number of row locks.

InnoDb Locking > InnoDB Row Lock Time

InnoDB Logging > InnoDB Log file usage Hourly:

If there is an increase in writes on the server, we can see an increase in hourly log file usage; it shows how much data (in GB) was written to the ib_logfiles before being flushed to the data files.

Performance Schema Details:

PS File IO (Events):

This dashboard will provide details on the wait IO for various events as shown in the image:

PS File IO (Events)

PS File IO (Load):

Similar to events, this will display the load corresponding to the event.

QAN:

PMM Query Analytics:

The Query Analytics dashboard shows how queries are executed and where they spend their time. To get more information out of QAN you should have QAN prerequisites enabled. Select the timeframe and check for the slow queries that caused most of the load. Most probably these are the queries that you need to optimize to avoid any further issues.

PMM Query Analytics

Daniel’s blog on How to find query slowdowns using PMM will help you troubleshoot better with QAN.

For more details on MySQL Troubleshooting and Performance Optimizations, you can check our CEO Peter Zaitsev’s webinar on MySQL Troubleshooting and Performance Optimization with PMM.

Oct
08
2020
--

Dangerous Edge Case Warning for Percona Toolkit and pt-online-schema-change


Recently I was dealing with an unexpected issue raised by one of our Support customers, in which data became inconsistent after a schema change was applied.

After some investigation, it turned out that the affected tables had a special word in the comments of some columns, which triggered an already known (and fixed) issue with the TableParser.pm library of Percona Toolkit. The problem was that the customer was using an outdated Toolkit version, in which pt-online-schema-change was still using that buggy parser.

This bug applies only to Percona Toolkit versions up to 3.0.10, so if you already have 3.0.11 or newer installed, you can skip the rest of this post, as those versions are no longer affected.
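Checking which version is installed takes a second:

$ pt-online-schema-change --version
# anything below 3.0.11 is affected by this bug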

I am writing this post to warn every user of pt-online-schema-change who has not upgraded the toolkit, as the problem is potentially very dangerous and can lead to the silent loss of data. 

The problem can manifest in two ways. The first one, although confusing, is not really dangerous as the operation is canceled. It happens when the columns with culprit comments do not allow NULL values. For example:

CREATE TABLE `test_not_null` (
`id` int NOT NULL,
`add_id` int NOT NULL COMMENT 'my generated test case',
PRIMARY KEY (`id`)
) ENGINE=InnoDB;

A schema change operation on this table will look like this:

$ ./pt-online-schema-change-3.0.10 u=msandbox,p=msandbox,h=localhost,S=/tmp/mysql_sandbox5735.sock,D=test,t=test_not_null --print --alter "engine=InnoDB" --execute
(...)
Altering `test`.`test_not_null`...
Creating new table...
CREATE TABLE `test`.`_test_not_null_new` (
`id` int(11) NOT NULL,
`add_id` int(11) NOT NULL COMMENT 'my generated test case',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Created new table test._test_not_null_new OK.
Altering new table...
ALTER TABLE `test`.`_test_not_null_new` engine=InnoDB
Altered `test`.`_test_not_null_new` OK.
2020-09-30T21:25:22 Creating triggers...
2020-09-30T21:25:22 Created triggers OK.
2020-09-30T21:25:22 Copying approximately 3 rows...
INSERT LOW_PRIORITY IGNORE INTO `test`.`_test_not_null_new` (`id`) SELECT `id` FROM `test`.`test_not_null` LOCK IN SHARE MODE /*pt-online-schema-change 1438 copy table*/
2020-09-30T21:25:22 Dropping triggers...
DROP TRIGGER IF EXISTS `test`.`pt_osc_test_test_not_null_del`
DROP TRIGGER IF EXISTS `test`.`pt_osc_test_test_not_null_upd`
DROP TRIGGER IF EXISTS `test`.`pt_osc_test_test_not_null_ins`
2020-09-30T21:25:22 Dropped triggers OK.
2020-09-30T21:25:22 Dropping new table...
DROP TABLE IF EXISTS `test`.`_test_not_null_new`;
2020-09-30T21:25:22 Dropped new table OK.
`test`.`test_not_null` was not altered.
2020-09-30T21:25:22 Error copying rows from `test`.`test_not_null` to `test`.`_test_not_null_new`: 2020-09-30T21:25:22 Copying rows caused a MySQL error 1364:
Level: Warning
Code: 1364
Message: Field 'add_id' doesn't have a default value
Query: INSERT LOW_PRIORITY IGNORE INTO `test`.`_test_not_null_new` (`id`) SELECT `id` FROM `test`.`test_not_null` LOCK IN SHARE MODE /*pt-online-schema-change 1438 copy table*/

So the reason for the failed operation may be unclear, but at least no data gets damaged. A much worse result happens when the column with the comment allows NULLs:

CREATE TABLE `test_null` (
`id` int NOT NULL,
`add_id` int DEFAULT NULL COMMENT 'my generated test case',
PRIMARY KEY (`id`)
) ENGINE=InnoDB;

mysql [localhost:5735] {msandbox} (test) > select * from test_null;
+----+--------+
| id | add_id |
+----+--------+
| 1  |      1 |
| 2  |      2 |
| 3  |      3 |
+----+--------+
3 rows in set (0.01 sec)

For this one, the schema change command runs without any errors:

$ ./pt-online-schema-change-3.0.10 u=msandbox,p=msandbox,h=localhost,S=/tmp/mysql_sandbox5735.sock,D=test,t=test_null --print --alter "engine=InnoDB" --execute
(...)
Altering `test`.`test_null`...
Creating new table...
CREATE TABLE `test`.`_test_null_new` (
`id` int(11) NOT NULL,
`add_id` int(11) DEFAULT NULL COMMENT 'my generated test case',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Created new table test._test_null_new OK.
Altering new table...
ALTER TABLE `test`.`_test_null_new` engine=InnoDB
Altered `test`.`_test_null_new` OK.
2020-09-30T21:28:11 Creating triggers...
2020-09-30T21:28:11 Created triggers OK.
2020-09-30T21:28:11 Copying approximately 3 rows...
INSERT LOW_PRIORITY IGNORE INTO `test`.`_test_null_new` (`id`) SELECT `id` FROM `test`.`test_null` LOCK IN SHARE MODE /*pt-online-schema-change 3568 copy table*/
2020-09-30T21:28:11 Copied rows OK.
2020-09-30T21:28:11 Analyzing new table...
2020-09-30T21:28:11 Swapping tables...
RENAME TABLE `test`.`test_null` TO `test`.`_test_null_old`, `test`.`_test_null_new` TO `test`.`test_null`
2020-09-30T21:28:11 Swapped original and new tables OK.
2020-09-30T21:28:11 Dropping old table...
DROP TABLE IF EXISTS `test`.`_test_null_old`
2020-09-30T21:28:11 Dropped old table `test`.`_test_null_old` OK.
2020-09-30T21:28:11 Dropping triggers...
DROP TRIGGER IF EXISTS `test`.`pt_osc_test_test_null_del`
DROP TRIGGER IF EXISTS `test`.`pt_osc_test_test_null_upd`
DROP TRIGGER IF EXISTS `test`.`pt_osc_test_test_null_ins`
2020-09-30T21:28:11 Dropped triggers OK.
Successfully altered `test`.`test_null`.

But… the table data is not the same afterward:

mysql [localhost:5735] {msandbox} (test) > select * from test_null;
+----+--------+
| id | add_id |
+----+--------+
|  1 |   NULL |
|  2 |   NULL |
|  3 |   NULL |
+----+--------+
3 rows in set (0.00 sec)

Summarizing, it is essential to make sure you are using an up-to-date Percona Toolkit, especially the pt-online-schema-change tool, to avoid potential disaster. The current latest stable release, as of when I am writing this post, is version 3.2.1, and the fixed version for this particular bug, 3.0.11, was released in July 2018.

References:

https://www.percona.com/doc/percona-toolkit/3.0/release_notes.html
https://jira.percona.com/browse/PT-1570

 

Oct
07
2020
--

How to Find Query Slowdowns Using Percona Monitoring and Management


Visibility is a blessing, and with databases, visibility is a must. That’s true not only for metrics but for the queries themselves. Having info on all the stats around query execution is priceless, and Percona Monitoring and Management (PMM) offers that in the form of the Query Analytics dashboard (QAN).

But where to start? QAN helps you with that by calculating the query profile. What is the profile? It's a ranking of queries, ordered by Load, so it is easy to spot the heaviest queries hitting your database. The Load is defined as the "Average Active Queries", but it can also be thought of as a mix of query execution time and query count. In other words, all the time the query was alive and kicking.

The Profile in PMM 2.10.0 looks like this:

percona monitoring and management

The purpose of this profile is to facilitate the task of finding the queries that are worth improving, or at least the ones that will have a bigger impact on the performance when optimized.

However, how do you know whether a slow query has always been slow or has gone from good performance to painfully slow over time? That's where the graph in the "Load" column comes in handy.

There’s a method for doing this. The first step is to have a wide view. That means: check a time range long enough so you can see patterns. Personally, I like to check the last 7 days.

The second step is to find irregularities like spikes or increasing patterns. For example, in the above profile, we can see that the "SHOW BINARY LOGS" command is #4 among the queries adding the most load to the database. In this case, it's because the binlogs are not being purged, so every day there are more and more binlog files to read, and that adds to the execution time. But the number of times that the "SHOW BINARY LOGS" query is executed remains the same.
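If unpurged binlogs turn out to be the actual culprit (and your replication and backup retention requirements allow it), the fix is on the server side rather than in the query; the values below are only an example:

-- one-off cleanup of binary logs older than a week
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;
-- automatic retention policy: binlog_expire_logs_seconds on 8.0, expire_logs_days on 5.7
SET GLOBAL binlog_expire_logs_seconds = 604800;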

Another query with an “anomaly” in the load graph is the top #3 one. Let’s isolate it and see what happened:

Query Analytics dashboard percona

The third step will be to reduce the time range to one involving the event so we can isolate it even more:

Query Analytics dashboard percona monitoring and management

The event happened between 8 AM and 9 AM. To discard or confirm that this is an isolated event related only to this query, let's look again at all the queries running at that same moment.

So this is a generic situation, common to several queries. Most likely it was an event on the server that made queries stall.

By looking at the threads graph, we can confirm that hypothesis:

MySQL Active Client Threads

After some digging, the root cause was found to be a disk problem:

Query Analytics dashboard disk latency

It’s confirmed that it is not an issue with the query itself, so no need to “optimize” due to this spike.

In conclusion, with the new QAN dashboard available since PMM 2.10.0, finding query slowdowns is easier thanks to the Load graph that can give us context pretty fast.

Try Percona Monitoring and Management today, for free!

Oct
06
2020
--

Various Ways to Perform Schema Upgrades with Percona XtraDB Cluster


Schema changes are a big challenge in Galera replication, so it is important for everyone who uses Percona XtraDB Cluster (PXC)/Galera clusters to understand how schema change operations behave. In this blog, I am going to explain the operation and impact of the various schema change methods used in PXC/Galera clusters.

  • Schema changes with “wsrep_OSU_method = TOI”
  • Schema changes with “wsrep_OSU_method = RSU”
  • Schema changes with “ONLINE ALGORITHMS”
  • Schema changes with “pt-osc”
  • Schema changes with “gh-ost”

For testing:

  • I have configured a 3-node Percona XtraDB Cluster (8.0.19).
  • I am executing a read/write load using sysbench.
mysql> select @@wsrep_cluster_address\G
*************************** 1. row ***************************
@@wsrep_cluster_address: gcomm://pxc81,pxc82,pxc83
1 row in set (0.00 sec)

mysql> select @@version, @@version_comment\G
*************************** 1. row ***************************
        @@version: 8.0.19-10
@@version_comment: Percona XtraDB Cluster (GPL), Release rel10, Revision 727f180, WSREP version 26.4.3
1 row in set (0.00 sec)

What is the Impact of Schema Changes in Clusters?

  • By default (TOI), all the nodes in the cluster will be paused during the ALTER process, because the ALTER needs to be replicated on all the nodes. If the ALTER is big, it will affect performance and could be the cause of downtime.
  • Rollback is not possible on a schema upgrade.
  • You can't kill the ALTER query immediately during the operation, so your application may need to wait until the ALTER completes.
mysql> pager grep alter
PAGER set to 'grep alter'

mysql> show processlist;
| 19 | root            | localhost | schema_changes | Query   |   18 | altering table           | alter table sbtest1 add index idx_c(c) |         0 |             0 |
7 rows in set (0.00 sec)

mysql> kill 19;
ERROR 1095 (HY000): You are not owner of thread 19

  • MDLs (metadata locks) are set only on one node, not across all the nodes in the cluster. So, you need additional control over this.

Schema Changes with “wsrep_OSU_method = TOI”

TOI: Total Order Isolation

  • TOI is the default method ( wsrep_OSU_method = TOI ) for schema changes.
  • DDL statements are processed in the same order with regard to other transactions in each node. 
  • The full cluster will be blocked/locked during the DDL operation. 
  • This guarantees data consistency.
mysql> select @@wsrep_OSU_method\G
*************************** 1. row ***************************
@@wsrep_OSU_method: TOI
1 row in set (0.00 sec)

Example:

I am going to run the below ALTER on “pxc81”.

alter table sbtest1 add index idx_c(c)

After initiating the ALTER on pxc81, my processlist shows that the COMMIT and UPDATE (from sysbench) statements are paused; only the ALTER is in progress. The COMMIT and UPDATE will resume once the ALTER is completed.

| 17 | root            | localhost | schema_changes | Execute |      15 | closing tables                           | COMMIT                                 |         0 |             0 |
| 17 | root            | localhost | schema_changes | Execute |      15 | updating                                 | UPDATE sbtest1 SET c='91668836759-30934071579-18064439108-53267873872-79461377960-32104006456-143369 |         0 |             1 |
| 24 | root            | localhost | schema_changes | Query   |      15 | altering table                           | alter table sbtest1 add index idx_c(c) |         0 |             0 |

But a SELECT statement can still be run with "wsrep_sync_wait != 1", because "wsrep_sync_wait = 1" requires causality checks from the other nodes, so it will fail.

SELECT with “wsrep_sync_wait=1”

|  1 | system user     |           | schema_changes | Query   |     0 | altering table         | alter table sbtest1 add index idx_c(c) |         0 |             0 |
| 15 | root            | localhost | schema_changes | Query   |    40 | starting               | select * from sbtest1 where id=1       |         0 |             0 |

mysql> select * from sbtest1 where id=1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

TOI can be the right choice for quick operations, such as:

  • CREATE STATEMENTS
  • RENAME INDEX
  • RENAME TABLE
  • DROP INDEX
  • ALGORITHM=INSTANT

Schema Changes with “wsrep_OSU_method = RSU”

RSU – Rolling Schema Upgrade

  • In this method, DDL statements will not replicate across the cluster nodes; you need to execute the DDL individually on every node.
  • The node which is executing the DDL will desync from the cluster group. The other nodes in the cluster are still operational and receive the application connections.
  • Once the node executes the DDL, it will start to apply the missing writesets.
  • In this method, the important thing is that WRITEs should not be performed on that particular table until the schema upgrade completes on all the nodes. Users should be very clear on this, because a failure here will break the cluster and the data may be unrecoverable.
  • Gcache should be big enough to store the writesets (see the check below).
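Since the writesets generated during the ALTER have to be retained and applied afterwards, it is worth checking the configured Gcache size before starting (the default is typically only 128M); for example:

mysql> SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G
-- look for gcache.size in the output; it can be raised in my.cnf if needed, e.g.:
-- wsrep_provider_options="gcache.size=2G"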

Example:

At pxc82, I am going to execute the ALTER.

Session 1: (setting up the value to RSU – session-level) 

mysql> set wsrep_OSU_method=RSU;
Query OK, 0 rows affected (0.09 sec)

Session 2: (checking the node status)

mysql> show global status where Variable_name like 'wsrep_local_recv_queue' or Variable_name like 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_recv_queue    | 0      |
| wsrep_local_state_comment | Synced |
+---------------------------+--------+
2 rows in set (0.00 sec)

Session 1: (executing the ALTER )

mysql> alter table sbtest1 add index idx_c(c);

Session 2: (checking again the node status )

Here, the node went to the Donor/Desynced state once the ALTER started. You can see that the receive queue also keeps increasing.

mysql> nopager;  show global status where Variable_name like 'wsrep_local_recv_queue' or Variable_name like 'wsrep_local_state_comment';
PAGER set to stdout
+---------------------------+----------------+
| Variable_name             | Value          |
+---------------------------+----------------+
| wsrep_local_recv_queue    | 2053           |
| wsrep_local_state_comment | Donor/Desynced |
+---------------------------+----------------+
2 rows in set (0.21 sec)

Session 1: (ALTER completed)

mysql> alter table sbtest1 add index idx_c(c);
Query OK, 0 rows affected (2 min 6.52 sec)
Records: 0  Duplicates: 0  Warnings: 0

Session 2: (Node synced to cluster)

mysql> show global status where Variable_name like 'wsrep_local_recv_queue' or Variable_name like 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_recv_queue    | 0      |
| wsrep_local_state_comment | Synced |
+---------------------------+--------+
2 rows in set (0.00 sec)

This step needs to be executed on pxc81 and pxc83 as well. After completing it on all nodes, we are good to allow WRITEs to that table again.

The RSU method is not truly disruption-free, as there are a few bugs reported regarding RSU. Users should be very clear and careful about executing RSU for schema updates:

https://jira.percona.com/browse/PXC-2620

https://jira.percona.com/browse/PXC-2293

https://jira.percona.com/browse/PXC-1980

Schema Changes with “ONLINE ALGORITHMS”

So far, we have three algorithms:

  • INPLACE
  • COPY
  • INSTANT

With TOI:

“ALGORITHM = INPLACE / COPY” still pauses the cluster during the operation. Galera doesn’t allow transactions when an ALTER TABLE statement is run. So if you are using TOI, any ALTER TABLE will block all transactions on all nodes.

| 17 | root            | localhost | schema_changes | Execute |      12 | closing tables                           | COMMIT                                                               |         0 |             0 |
| 18 | root            | localhost | schema_changes | Execute |      12 | closing tables                           | COMMIT                                                               |         0 |             0 |
| 32 | root            | localhost | schema_changes | Query   |      13 | altering table                           | alter table sbtest1 add index idx_c(c), algorithm=inplace,

“ALGORITHM=INSTANT” is supported and faster in TOI.

mysql> alter table sbtest1 add column test_Ins int , algorithm=instant;
Query OK, 0 rows affected (0.24 sec)
Records: 0  Duplicates: 0  Warnings: 0

With RSU:

“ALGORITHM = INPLACE/COPY” is still not beneficial on RSU. It pauses the Galera replication and takes the node to Desync. 

mysql> show processlist;
| 62 | root            | localhost | schema_changes | Query   |    51 | altering table                       | alter table sbtest1 add index idx_c(c), algorithm=inplace, lock=none |         0 |             0 |
5 rows in set (0.06 sec)

mysql> nopager;  show global status where Variable_name like 'wsrep_local_recv_queue' or Variable_name like 'wsrep_local_state_comment';
PAGER set to stdout
+---------------------------+----------------+
| Variable_name             | Value          |
+---------------------------+----------------+
| wsrep_local_recv_queue    | 7335           |
| wsrep_local_state_comment | Donor/Desynced |
+---------------------------+----------------+
2 rows in set (0.03 sec)

“ALGORITHM=INSTANT” is supported and faster in RSU. But, still, you can use TOI to avoid the additional work.

mysql> alter table sbtest1 add column test_Inss int , algorithm=instant;
Query OK, 0 rows affected (0.19 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select @@wsrep_OSU_method;
+--------------------+
| @@wsrep_OSU_method |
+--------------------+
| RSU                |
+--------------------+
1 row in set (0.02 sec)

I would suggest using "ALGORITHM=INSTANT" with TOI wherever you can, but make sure you have MySQL 8.x+. Unfortunately, "ALGORITHM=INSTANT" currently only supports adding new columns.

Schema Changes with “pt-osc”

pt-osc: pt-online-schema-change (Percona Toolkit's online schema change tool)

Personally, I like this approach very much and use this mostly in production environments. Pt-osc provides non-blocking schema upgrades on all nodes in one shot. This should be used with the TOI method. The action flow will be like this:

  • Create a new table "_tablename_new" with the required modification
  • Create triggers to keep the new table updated with ongoing changes (insert / update / delete)
  • Copy the records from the original table to the new table using chunked operations.
  • Once the copy is completed, it swaps the tables ( original → _old, _new → original ) and drops the triggers and the old table. Direct DDLs ( RENAME TABLE, DROP TABLE ) are used for this operation ( wsrep_OSU_method=TOI ).

For the below ALTER, 

alter table schema_changes.sbtest1 add index idx_test_Ins(test_Ins);

Pt-osc flow in SQL syntax:

Creating new table...
CREATE TABLE `schema_changes`.`_sbtest1_new` (
  `id` int NOT NULL AUTO_INCREMENT,
  `k` int NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  `test_Ins` int DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=20400335 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
Created new table schema_changes._sbtest1_new OK.
Altering new table...
ALTER TABLE `schema_changes`.`_sbtest1_new` add index idx_test_Ins(test_Ins)
Altered `schema_changes`.`_sbtest1_new` OK.
Not creating triggers because this is a dry run.
Not copying rows because this is a dry run.
INSERT LOW_PRIORITY IGNORE INTO `schema_changes`.`_sbtest1_new` (`id`, `k`, `c`, `pad`, `test_ins`) SELECT `id`, `k`, `c`, `pad`, `test_ins` FROM `schema_changes`.`sbtest1` FORCE INDEX(`PRIMARY`) WHERE ((`id` >= ?)) AND ((`id` <= ?)) LOCK IN SHARE MODE /*pt-online-schema-change 9253 copy nibble*/
SELECT /*!40001 SQL_NO_CACHE */ `id` FROM `schema_changes`.`sbtest1` FORCE INDEX(`PRIMARY`) WHERE ((`id` >= ?)) ORDER BY `id` LIMIT ?, 2 /*next chunk boundary*/
Not swapping tables because this is a dry run.
Not dropping old table because this is a dry run.
Not dropping triggers because this is a dry run.
DROP TRIGGER IF EXISTS `schema_changes`.`pt_osc_schema_changes_sbtest1_del`
DROP TRIGGER IF EXISTS `schema_changes`.`pt_osc_schema_changes_sbtest1_upd`
DROP TRIGGER IF EXISTS `schema_changes`.`pt_osc_schema_changes_sbtest1_ins`
2020-09-30T08:31:17 Dropping new table...
DROP TABLE IF EXISTS `schema_changes`.`_sbtest1_new`;
2020-09-30T08:31:17 Dropped new table OK.

pt-osc provides several options to perform the operation effectively. You can control the connections, active threads, load, chunk size, etc.

For Galera, we have the option “--max-flow-ctl”. It checks the average time the cluster has spent paused for Flow Control and makes the tool pause if that goes over the percentage indicated in the option. By default, the tool does not check Flow Control.

[root@pxc81 log]# less /bin/pt-online-schema-change  | grep -i pausing
         print STDERR "Pausing because PXC Flow Control is active\n";
         print STDERR "Pausing because "

To make schema changes on tables with FOREIGN KEYs, I would suggest using “--alter-foreign-keys-method=rebuild_constraints”. This helps maintain the consistency of the schema and its relations. With this approach, before dropping the old table, pt-osc runs an ALTER on all the child tables to drop the existing FK and re-add a new FK constraint that points to the columns of the new table. Again, adding and dropping the FOREIGN KEY is a direct ALTER using TOI.
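A hedged sketch of such a run (the parent_table name and the column being added are hypothetical; pt-osc discovers the child tables on its own):

# rebuild the FK constraints on the child tables so they point at the new table
pt-online-schema-change --alter "add column archived tinyint not null default 0" \
  D=schema_changes,t=parent_table,h=localhost,u=root --ask-pass \
  --alter-foreign-keys-method rebuild_constraints --execute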

Schema Changes with “gh-ost”

gh-ost takes a similar approach to pt-osc. It also performs non-blocking ALTERs on all cluster nodes in one shot. The main difference is that gh-ost is triggerless: it uses the binary log to track the changes. So you need the following settings in place to perform a gh-ost operation.

log-bin=sakthi-bin
binlog-format=ROW
log-slave-updates=ON

The flow is as follows:

  • Creates a ghost table with the required modifications.
  • Copies the records from the original table to the ghost table in chunks.
  • Applies the live changes by reading the DML events from the binary logs.
  • Once the binary log events are applied, it swaps the tables (original → _old, ghost → original) and drops the old table.

Example:

[root@pxc81 schema_changes]# gh-ost --alter="add index idx_test_Inss(test_Ins)" --database=schema_changes --table=sbtest1 --user=root --password=Jesus@7sakthI --allow-on-master --execute
[2020/09/30 09:40:56] [info] binlogsyncer.go:133 create BinlogSyncer with config {99999 mysql 127.0.0.1 3306 root    false false <nil> false UTC true 0 0s 0s 0 false}
[2020/09/30 09:40:56] [info] binlogsyncer.go:354 begin to sync binlog from position (binlog.000027, 196850993)
[2020/09/30 09:40:56] [info] binlogsyncer.go:203 register slave for master server 127.0.0.1:3306
[2020/09/30 09:40:56] [info] binlogsyncer.go:723 rotate to (binlog.000027, 196850993)
# Migrating `schema_changes`.`sbtest1`; Ghost table is `schema_changes`.`_sbtest1_gho`
# Migrating pxc81:3306; inspecting pxc81:3306; executing on pxc81
# Migration started at Wed Sep 30 09:40:56 +0000 2020
# chunk-size: 1000; max-lag-millis: 1500ms; dml-batch-size: 10; max-load: ; critical-load: ; nice-ratio: 0.000000
# throttle-additional-flag-file: /tmp/gh-ost.throttle 
# Serving on unix socket: /tmp/gh-ost.schema_changes.sbtest1.sock
Copy: 0/6563240 0.0%; Applied: 0; Backlog: 0/1000; Time: 0s(total), 0s(copy); streamer: binlog.000027:196853401; Lag: 0.02s, State: migrating; ETA: N/A
Copy: 0/6563240 0.0%; Applied: 0; Backlog: 0/1000; Time: 1s(total), 1s(copy); streamer: binlog.000027:196858195; Lag: 0.01s, State: migrating; ETA: N/A
Copy: 22000/6563240 0.3%; Applied: 0; Backlog: 0/1000; Time: 2s(total), 2s(copy); streamer: binlog.000027:201067135; Lag: 0.01s, State: migrating; ETA: 9m58s

.......

Copy: 5682000/6563240 86.6%; Applied: 0; Backlog: 0/1000; Time: 16m10s(total), 16m10s(copy); streamer: binlog.000028:213168607; Lag: 0.01s, State: migrating; ETA: 2m30s
Copy: 6563000/6563240 100.0%; Applied: 0; Backlog: 0/1000; Time: 20m20s(total), 20m20s(copy); streamer: binlog.000028:382677405; Lag: 0.01s, State: migrating; ETA: 0s

gh-ost also provides several options to perform the operation effectively. You can control the connections, active threads, load, chunk size, etc.

Unfortunately, the “--max-flow-ctl” option is not available in gh-ost.
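That said, gh-ost has its own throttling knobs that can serve a similar purpose; a hedged sketch using flags already visible in the status output above (the values are arbitrary examples):

# throttle on replication lag, server load, or the presence of a flag file
gh-ost --alter="add index idx_test_Inss(test_Ins)" \
  --database=schema_changes --table=sbtest1 --user=root --ask-pass \
  --max-lag-millis=1500 \
  --max-load=Threads_running=50 --critical-load=Threads_running=100 \
  --throttle-additional-flag-file=/tmp/gh-ost.throttle \
  --allow-on-master --execute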

Conclusion

So, finally, I would say:

  • Always use the direct ALTER with TOI for the metadata changes and INSTANT ALTERs.
  • Use pt-online-schema-change with TOI and use the optimal flow control thresholds for InnoDB tables.
  • Schedule pt-online-schema-change operation in off-peak hours for FOREIGN KEY tables.
  • If you use RSU, never forget that you need to execute the ALTER on each node individually, and you should block WRITEs to that particular table. Make sure your Gcache size is large enough to hold the writesets.
  • If you are concerned about triggers, you can use gh-ost to make the ALTERs.
Oct
05
2020
--

Grab Your Percona Swag – For Free!


Would you like to get the latest in Percona gear 100% free, shipped to you anywhere in the world? Maybe that sounds too good to be true, but it’s true!  It’s easy and takes as little as 20 minutes to earn your swag. Here are some examples of the swag items you can claim:

[Image: Percona swag items]

So what’s the catch? Percona software products are now listed on four online software directories, but our listings are too new to have accumulated many user reviews. We need reviews!

So our offer is simple. You write one review, you pick one Percona swag item. You write two reviews, you pick two. Seven reviews, pick seven pieces of swag, our limit. But you must post your reviews by November 15, 2020!

Any meaningful review earns swag, be it positive, negative, or mixed. Write whatever you believe; just write something! There’s no swag for a review that gives a rating but says nothing at all or nothing meaningful, so make those reviews count!

Here’s where to post reviews:

  • Percona Monitoring and Management: Capterra, G2, TrustRadius, SourceForge
  • Percona Server for MySQL: Capterra, G2, TrustRadius, SourceForge
  • Percona XtraDB Cluster: Capterra, G2, TrustRadius, SourceForge
  • Percona XtraBackup: Capterra, G2, TrustRadius, SourceForge
  • Percona Distribution for PostgreSQL: G2, TrustRadius, SourceForge
  • Percona Backup for MongoDB: G2, TrustRadius, SourceForge
  • Percona Server for MongoDB: TrustRadius, SourceForge
  • Percona Kubernetes Operator for Percona XtraDB Cluster: G2, TrustRadius, SourceForge

You can review several different products and post them on one site, or you can write one product review and post it on multiple sites. Or post any combination of reviews, up to a max of seven. The more reviews you post, the more swag is delivered to your home address for free, courtesy of Percona.

To claim your swag, write to <community-team@percona.com>.  Include:

  • Links to each review you posted.
  • Your postal mailing address.
  • Your phone number (for delivery use only, never for marketing)

For t-shirt orders, also state:

  • Color (White, Black, or Blue)
  • Size (Small, Medium, Large, or Extra Large)

It’s that simple! Start writing now!

Oct
02
2020
--

How to Use CHECK Constraint in MySQL 8


Hello everyone, in this little post we will review a new feature in MySQL 8.

What is “CHECK Constraint”?

This is a new feature that lets you specify a condition that is checked before a value is INSERTed or UPDATEd into a row. The constraint returns an error if the result of the search condition is FALSE for any row of the table (but not if the result is UNKNOWN or TRUE).

This feature works starting with MySQL 8.0.16. In previous versions we could create it, but it did not work: the syntax was accepted but the constraint was not enforced.

There are some rules to keep in mind…

– AUTO_INCREMENT columns are not permitted
– References to columns in other tables are not permitted
– Stored functions and user-defined functions are not permitted (you cannot call a function or any user-defined function)
– Stored procedure and function parameters are not permitted
– Subqueries are not permitted
– Columns used in a foreign key referential action (ON UPDATE, ON DELETE) are not permitted
– The CHECK is evaluated for the following statements: INSERT, UPDATE, REPLACE, LOAD DATA, and LOAD XML. It is also evaluated for INSERT IGNORE, UPDATE IGNORE, LOAD DATA … IGNORE, and LOAD XML … IGNORE; for those statements, a warning occurs if a constraint evaluates to FALSE, and the insert or update is skipped.

Let’s See Some Examples

I created the following table to test this functionality. It is super easy, as you can see in the examples:

CREATE TABLE users (
id int not null auto_increment,
firstname varchar(50) not null,
lastname varchar(50) not null,
age TINYINT unsigned not null CONSTRAINT `check_1` CHECK (age > 15),
gender ENUM('M', 'F') not null,
primary key (id)
) engine = innodb;

In this simple test, we can write or update rows only if the “age” column value is more than 15.

Let’s see an example trying to INSERT rows with the “age” column less than 15:

mysql> INSERT INTO users SET firstname = 'Name1', lastname = 'LastName1', age = 10, gender = 'M';
ERROR 3819 (HY000): Check constraint 'check_1' is violated.
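Conversely, a value that satisfies the condition is accepted, and with the IGNORE variants the violation is downgraded to a warning and the row is skipped, as noted in the rules above. A minimal sketch:

-- a row that satisfies the condition is accepted
INSERT INTO users SET firstname = 'Name1', lastname = 'LastName1', age = 20, gender = 'M';

-- with INSERT IGNORE, the violation becomes a warning and the row is skipped
INSERT IGNORE INTO users SET firstname = 'Name1', lastname = 'LastName1', age = 10, gender = 'M';
SHOW WARNINGS;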

To DROP the constraint, use the following statement:

ALTER TABLE users DROP CHECK check_1;
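Alternatively, instead of dropping it, since MySQL 8.0.16 you can keep the constraint definition in place but disable its enforcement, and re-enable it later; a minimal sketch:

ALTER TABLE users ALTER CHECK check_1 NOT ENFORCED;
ALTER TABLE users ALTER CHECK check_1 ENFORCED;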

Let’s see another example adding more logic to it. I altered the table with the following CHECKs:

ALTER TABLE users
ADD CONSTRAINT gender_male
CHECK (
    CASE
        WHEN gender = 'M'
        THEN
            CASE
                WHEN age >= 21
                THEN 1
                ELSE 0
            END
        ELSE 1
    END = 1
);

ALTER TABLE users
ADD CONSTRAINT gender_female
CHECK (
    CASE
        WHEN gender = 'F'
            THEN
                CASE
                    WHEN age >= 18
                    THEN 1
                    ELSE 0
                END
        ELSE 1
    END = 1
);

We added more logic, and now it depends on the “gender” and “age” columns. A CHECK constraint is satisfied if, and only if, the specified condition evaluates to TRUE or UNKNOWN (for a NULL column value) for the row of the table. The constraint is violated otherwise.

Let’s see an example using the previous logic.

mysql> INSERT INTO users SET firstname = 'Name2', lastname = 'LastName2', age = 10, gender = 'F';
ERROR 3819 (HY000): Check constraint 'gender_female' is violated.

mysql> INSERT INTO users SET firstname = 'Name3', lastname = 'LastName3', age = 10, gender = 'M';
ERROR 3819 (HY000): Check constraint 'gender_male' is violated.

As you can see in the ERROR message, MySQL is showing the CHECK constraint name. This is good to use from the application source code to debug the error and to know from which CHECK is failing.
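If you need to inspect the constraint definitions programmatically, they are also exposed in the INFORMATION_SCHEMA; for example:

SELECT CONSTRAINT_NAME, CHECK_CLAUSE
FROM INFORMATION_SCHEMA.CHECK_CONSTRAINTS
WHERE CONSTRAINT_SCHEMA = DATABASE();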

Finally, this is the table structure:

CREATE TABLE `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`firstname` varchar(50) NOT NULL,
`lastname` varchar(50) NOT NULL,
`age` tinyint(3) unsigned NOT NULL,
`gender` enum('M','F') NOT NULL,
PRIMARY KEY (`id`),
CONSTRAINT `gender_female` CHECK (((case when (`gender` = 'F') then (case when (`age` >= 18) then 1 else 0 end) else 1 end) = 1)),
CONSTRAINT `gender_male` CHECK (((case when (`gender` = 'M') then (case when (`age` >= 21) then 1 else 0 end) else 1 end) = 1))
) ENGINE=InnoDB AUTO_INCREMENT=4;

We can add more logic to tables using this feature, but from my previous experience as a programmer, I don’t recommend putting application logic in the tables because it can be difficult to find and debug errors there, unless you do not have access to the application code.
