Basic Internal Troubleshooting Tools for MySQL Server Webinar: Q & A

In this blog, I will provide answers to the Q & A for the Basic Internal Troubleshooting Tools for MySQL Server webinar.

First, I want to thank everybody for attending my February 15, 2018, webinar on troubleshooting tools for MySQL. The recording and slides for the webinar are available here. Below is the list of your questions that I was unable to answer fully during the webinar.

Q: How do we prevent the schema prefix from appearing in SHOW CREATE VIEW? This is causing an issue with restores on another server with a different DB. See the issue here and a reproducible test case:

A: I shortened the example in order to fit it in this blog:

mysql> create table t1(f1 int);
Query OK, 0 rows affected (3.47 sec)
mysql> create view v1 as select * from t1;
Query OK, 0 rows affected (0.21 sec)
mysql> show create view v1\G
*************************** 1. row ***************************
                View: v1
         Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `v1` AS select `t1`.`f1` AS `f1` from `t1`
character_set_client: utf8
collation_connection: utf8_general_ci
1 row in set (0.00 sec)
mysql> select * from information_schema.views where table_schema='test'\G
*************************** 1. row ***************************
       TABLE_CATALOG: def
        TABLE_SCHEMA: test
          TABLE_NAME: v1
     VIEW_DEFINITION: select `test`.`t1`.`f1` AS `f1` from `test`.`t1`
             DEFINER: root@localhost
1 row in set (0.00 sec)

The issue you experienced happened because even if you create a view as

SELECT foo FROM table1;

it is stored as

SELECT foo FROM your_schema.table1;

You can see this if you examine the .frm file for the view:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ cat var/mysqld.1/data/test/v1.frm
query=select `test`.`t1`.`f1` AS `f1` from `test`.`t1`
timestamp=2018-02-24 10:27:45
source=select * from t1
view_body_utf8=select `test`.`t1`.`f1` AS `f1` from `test`.`t1`

You cannot prevent the schema prefix from being stored. If you restore the view on a different server with a different database name, you should edit the view definition manually. If you already restored a view that points to a non-existent schema, just recreate it. A view is metadata only and does not hold any data, so this operation is non-blocking and will run momentarily.
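
For example, a minimal sketch of recreating the view on the target server (the database name new_db is hypothetical):

mysql> use new_db;
mysql> drop view if exists v1;
mysql> create view v1 as select * from t1;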

Q: What is thread/sql/compress_gtid_table in performance_schema.threads?

A: thread/sql/compress_gtid_table is the name of the instrument. You can read this and other instrument names as follows:

  • thread/ is a group of instruments. In this case, it is the instruments that are visible in the performance_schema.threads table.

  • thread/sql/ is the group of instruments that are part of the server kernel code. If you are not familiar with the MySQL source tree, download the source code tarball and check its content. The main components are:

    • sql – server kernel
    • storage – where the storage engine code is located (storage/innobase is InnoDB code, storage/myisam is MyISAM code and so on)
    • vio – input-output functions
    • mysys – code shared between all parts of the server
    • client – client library and utilities
    • strings – functions to work with strings

This is not a full list. For more information, consult the MySQL Internals Manual.

  • thread/sql/compress_gtid_table is the name of the particular instrument.

Unfortunately, there is no link to the source code for instrumented threads in the performance_schema.threads table, but we can easily find them in the sql directory of the source tree. The function compress_gtid_table is defined in sql/rpl_gtid_persist.cc, and we can check the comments to find out what it is doing:

/**
  The main function of the compression thread.
  - compress the gtid_executed table when get a compression signal.

  @param p_thd    Thread requesting to compress the table

  @retval 0    OK. always, the compression thread will swallow any error
               for going to wait for next compression signal until
               it is terminated.
*/
extern "C" {
static void *compress_gtid_table(void *p_thd)

You can also find the description of mysql.gtid_executed compression in the User Reference Manual.

You can follow the same actions to find out what other MySQL threads are doing.
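
For reference, a minimal sketch that lists background threads together with their instrument names in 5.7:

mysql> select thread_id, name, type, processlist_state
    -> from performance_schema.threads
    -> where type = 'BACKGROUND';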

Q: How does a novice on MySQL learn the core basics about MySQL? The documentation can be very vast, which surpasses my understanding right now. Are there any good intro books you can recommend for a System Admin?

A: I learned MySQL a long time ago, and the book I can recommend was written for version 5.0: “MySQL 5.0 Certification Study Guide” by Paul DuBois, Stefan Hinz and Carsten Pedersen. The book is in two parts: one is devoted to SQL developers and explains how to run and tune queries. The second part is for DBAs and describes how to tune the MySQL server. I asked my colleagues to suggest more modern books for you, and this one is still on the list for many. It is an awesome book for beginners; just note that MySQL has changed a lot since 5.0, so you will need to deepen your knowledge after you finish reading it.

Another book that was recommended is “MySQL” by Paul DuBois. It is written for beginners and has plenty of content. Paul DuBois has been working on (and continues to work on) the official MySQL documentation for many years, and knows MySQL in great detail.

Another book is “Murach’s MySQL” by Joel Murach, which is used as a course book in many colleges for “Introduction to Databases” type classes.

For System Administrators, you can read “Systems Performance: Enterprise and the Cloud” by Brendan Gregg. This book talks about how to tune operating systems for performance, a task we consistently have to do when administering MySQL. I also recommend that you study Brendan Gregg’s website, which is a great source of information for everyone who is interested in operating system performance tuning.

After you finish the books for novices, you can check out “High Performance MySQL, 3rd Edition” by Peter Zaitsev, Vadim Tkachenko, Baron Schwartz and “MySQL Troubleshooting” by Sveta Smirnova (yours truly =) ). These two books require at least basic MySQL knowledge, however.

Q: Does database migration go the same way? Do these tools work for migration as well?

A: The tools I discussed in this webinar are available for any version of MySQL/Percona/MariaDB server. You may use them for migration. For example, it is always useful to compare the configuration (such as the output of SHOW GLOBAL VARIABLES) on both the “old” and “new” servers. It helps if you observe performance drops on the “new” server. Or you can check table definitions before and after migration. There are many more uses for these tools during the migration process.
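
A minimal sketch of such a comparison from a client machine (the host names old-server and new-server are placeholders):

$ mysql -h old-server -e "show global variables" > old.txt
$ mysql -h new-server -e "show global variables" > new.txt
$ diff old.txt new.txt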

Q: How can we take a backup of a single schema from a MySQL AWS instance without affecting the performance of applications? An AWS RDS instance, to be more clear. We cannot use mysqldump on the RDS instance in the current scenario.

A: You can connect to your RDS instance with mysqldump from your local machine, exactly like your MySQL clients connect to it. Then you can collect a dump of a single database or table, or even specify the option --where to limit the resulting set to only a portion of the table. Note that by default mysqldump is blocking, but if you back up solely transactional tables (InnoDB, TokuDB, MyRocks) you can run mysqldump with the option --single-transaction, which starts a transaction at the beginning of the backup job.
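
A minimal sketch of such a dump (the endpoint, user and schema names are placeholders):

$ mysqldump --single-transaction \
      --host=myinstance.abcdefgh.us-east-1.rds.amazonaws.com \
      --user=admin -p \
      mydb > mydb.sql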

Alternatively, you can use AWS Database Migration Service, which allows you to replicate your databases. Then you can take a backup of a single schema using whatever method you like.

Q: Why do some sites suggest turning off Information Schema and Performance Schema? Is it important to keep them on, or turn them off?

A: You cannot turn off Information Schema. It is always available.

Performance Schema in earlier versions (before 5.6.14) was resource-consuming, even if it was idle while enabled. These limitations were fixed a long time ago, and you don’t need to keep it off unless you hit some new bug.

Q: How do we handle a storage-level threshold if a data file grows and reaches the max threshold unnoticed? Can you please help with this question?

A: Do you mean what will happen if the data file grows until the filesystem has no space? In this case, clients receive the error

"OS error code 28: No space left on device"

until space is freed and mysqld can start functioning normally again. If it can write into the error log file (for example, if the log is located on a different disk), you will see messages about error 28 in the error log file too.
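
To catch the growth before it fills the filesystem, you can track how much space each schema uses; a minimal sketch using Information Schema:

mysql> select table_schema,
    ->        round(sum(data_length + index_length)/1024/1024/1024, 2) as size_gb
    -> from information_schema.tables
    -> group by table_schema;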

Q: What are the performance bottlenecks when enabling performance_schema? Is there any benchmark we can look at?

A: Just enabling Performance Schema in version 5.6 and up does not cause any performance issues. With version 5.7, it can also start with almost zero allocated memory, so it won’t affect your other buffers. The Performance Schema causes impact when you enable particular instruments, most notably the synchronization wait instruments (those whose names start with wait/synch). I performed benchmarks on the effects of particular Performance Schema instruments and published them in this post.
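
If you want to enable or disable specific instruments at runtime, a minimal sketch:

mysql> update performance_schema.setup_instruments
    ->    set enabled = 'NO', timed = 'NO'
    ->  where name like 'wait/synch/%';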

Q: Can you suggest some tips on creating real-time dashboards for a replication environment? It would be great if you could help us with building business-level dashboards.

A: This is a topic for yet another webinar or, better still, a tutorial. For starters, I recommend checking out the “MySQL Replication” dashboard in PMM and extending it with the metrics you need.
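
Most replication dashboards are built on data you can also query directly; for example, the replication lag and thread state come from:

mysql> show slave status\G

Look at Seconds_Behind_Master, Slave_IO_Running and Slave_SQL_Running; these are the kinds of values a replication dashboard typically plots.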

Thanks for attending the webinar on internal troubleshooting tools for MySQL.


Webinar Thursday, March 15, 2018: Basic External MySQL Troubleshooting Tools

Please join Percona’s Principal Support Engineer, Sveta Smirnova, as she presents Basic External MySQL Troubleshooting Tools on March 15, 2018 at 10:00 am PDT (UTC-7) / 1:00 pm EDT (UTC-4).

In my troubleshooting webinar series, I normally like to discuss built-in instruments available via the SQL interface. While they are effective and help to understand what is going on, external tools are also designed to make life of a database administrator easier.

In this webinar, I will discuss the external tools, toolkits and graphical instruments most valued by Support teams and customers. I will show the main advantages of these tools, and provide examples on how to effectively use them.

I will cover Percona Toolkit, MySQL Utilities, MySQL Sandbox, Percona Monitoring and Management (PMM) and a few other instruments.

Register for the webinar now.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues, bugs, finding patterns that can quickly solve typical issues, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona Sveta worked as Support Engineer in the MySQL Bugs Analysis Support Group in MySQL AB-Sun-Oracle. She is the author of the book “MySQL Troubleshooting” and JSON UDF functions for MySQL.


Troubleshooting MySQL Crashes Webinar: Q&A

In this blog, I will provide answers to the Q & A for the Troubleshooting MySQL Crashes webinar.

First, I want to thank everybody for attending our January 25, 2018, webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I was unable to answer fully during the webinar.

Q: I have the 600-second “Long semaphore wait” assertion failure / crashing issue following DDL queries, sometimes on the master, sometimes just on the slaves. Any hints for troubleshooting these? How can I understand what the semaphore-holding threads are doing?

A: These are the hardest errors to troubleshoot, especially because in some cases (such as long-running DDL commands) long semaphore waits can be expected and appropriate behavior. If you see long semaphore waits when performing DDL operations, it makes sense to consider using the pt-online-schema-change or gh-ost utilities. Also, check the list of supported online DDL operations in the MySQL User Reference Manual.
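
While the problem is happening, a quick way to see which semaphores threads are waiting on and which thread holds them is the SEMAPHORES section of the InnoDB status output:

mysql> show engine innodb status\G

The SEMAPHORES section lists each waiting thread, the file and line where the wait was requested, and the thread that has reserved the latch; the same information later ends up in the long semaphore wait warnings in the error log.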

But if you want to know how to analyze such messages, let’s check the output from page #17 in the slide deck used in the webinar:

2018-01-19T20:38:43.381127Z 0 [Warning] InnoDB: A long semaphore wait:
--Thread 139970010412800 has waited at line 3454 for 321.00 seconds the semaphore:
S-lock on RW-latch at 0x7f4dde2ea310 created in file line 1453
a writer (thread id 139965530261248) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: fffffffff0000000
Last time read locked in file line 3454
Last time write locked in file /mnt/workspace/percona-server-5.7-binaries-release/label_exp/
debian-wheezy-x64/percona-server-5.7.14-8/storage/innobase/btr/ line 177
2018-01-19T20:38:43.381143Z 0 [Warning] InnoDB: A long semaphore wait:
--Thread 139965135804160 has waited at line 4196 for 321.00 seconds the semaphore:
S-lock on RW-latch at 0x7f4f257d33c0 created in file line 353
a writer (thread id 139965345621760) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file line 4196
Last time write locked in file ...

The line

--Thread 139970010412800 has waited at line 3454 for 321.00 seconds the semaphore:

shows that some transaction was waiting for a semaphore. The code responsible for this wait is located on line 3454 of the file named in the warning. I received this crash when I ran Percona Server for MySQL version 5.7.14-8. Therefore, to check what this code is doing, I need to use the Percona Server 5.7.14-8 source code:

sveta@Thinkie:~/mysql_packages/percona-server-5.7.14-8$ vim storage/innobase/ibuf/
3454 btr_pcur_open(ibuf->index, ibuf_entry, PAGE_CUR_LE, mode, &pcur, &mtr);

A few lines above in the same file, we find the function definition and its comment:

3334 /** Buffer an operation in the insert/delete buffer, instead of doing it
3335 directly to the disk page, if this is possible.
3336 @param[in] mode BTR_MODIFY_PREV or BTR_MODIFY_TREE
3337 @param[in] op operation type
3338 @param[in] no_counter TRUE=use 5.0.3 format; FALSE=allow delete
3339 buffering
3340 @param[in] entry index entry to insert
3341 @param[in] entry_size rec_get_converted_size(index, entry)
3342 @param[in,out] index index where to insert; must not be unique
3343 or clustered
3344 @param[in] page_id page id where to insert
3345 @param[in] page_size page size
3346 @param[in,out] thr query thread
3347 @return DB_SUCCESS, DB_STRONG_FAIL or other error */
3348 static MY_ATTRIBUTE((warn_unused_result))
3349 dberr_t
3350 ibuf_insert_low(
3351 ulint mode,
3352 ibuf_op_t op,
3353 ibool no_counter,
3354 const dtuple_t* entry,
3355 ulint entry_size,
3356 dict_index_t* index,
3357 const page_id_t& page_id,
3358 const page_size_t& page_size,
3359 que_thr_t* thr)
3360 {

The first line of the comment gives us an idea that InnoDB tries to insert data into the change buffer.

Now, let’s check the next line from the error log file:

S-lock on RW-latch at 0x7f4dde2ea310 created in file line 1453
sveta@Thinkie:~/mysql_packages/percona-server-5.7.14-8$ vim storage/innobase/buf/
1446 /* If PFS_SKIP_BUFFER_MUTEX_RWLOCK is defined, skip registration
1447 of buffer block rwlock with performance schema.
1449 If PFS_GROUP_BUFFER_SYNC is defined, skip the registration
1450 since buffer block rwlock will be registered later in
1451 pfs_register_buffer_block(). */
1453 rw_lock_create(PFS_NOT_INSTRUMENTED, &block->lock, SYNC_LEVEL_VARYING);

And again let’s check what this function is doing:

1402 /********************************************************************//**
1403 Initializes a buffer control block when the buf_pool is created. */
1404 static
1405 void
1406 buf_block_init(

Even without knowledge of how InnoDB works internally, by reading only these comments I can guess that a thread waits for some global InnoDB lock when it tries to insert data into the change buffer. The solution for this issue could be disabling the change buffer, limiting write concurrency, upgrading, or using a software solution that allows you to scale writes.
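
A minimal sketch of the first two options (both variables are dynamic in 5.7; the concurrency value 16 is only an example, so test the effect on your workload first):

mysql> set global innodb_change_buffering = 'none';
mysql> set global innodb_thread_concurrency = 16;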

Q: We didn’t get the page cleaner messages when running the app with regular replication. After switching to PXC we started getting them. Is there something particular to PXC we should look at to help resolve this?

A: Page cleaner messages can be a symptom of starved IO. You need to compare the Percona XtraDB Cluster (PXC) and standalone server installations and check how exactly the write load changed.

Q: Hi, I have one question. We have a query joining on BLOB or TEXT fields that is causing system locks and high CPU alerts. Can you please suggest how we can make it work? Can you please send the answer in text? I missed some information.

A: If you are joining on BLOB or TEXT fields, you most likely don’t use indexes. This means that InnoDB has to perform a full table scan. It increases IO and CPU activity by itself, but also increases the number of locks that InnoDB has to set to resolve the query. Even if you have partial indexes on these columns, mysqld has to compare full values for the equality check, so it cannot use an index alone to resolve the JOIN clause. It is a best practice to avoid such kinds of JOINs. You can use surrogate integer keys, for example.
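
A minimal sketch of the surrogate-key approach (all table and column names here are hypothetical, and it assumes parent has no primary key yet):

mysql> alter table parent add column id int unsigned not null auto_increment primary key;
mysql> alter table child add column parent_id int unsigned not null;
mysql> update child c join parent p on c.big_text_col = p.big_text_col set c.parent_id = p.id;
mysql> alter table child add index (parent_id);

After this one-time backfill, the query can join child to parent on parent_id, which is a short, indexed integer comparison.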

Q: Hi, please notice that “MySQL server has gone away” is the worst one, in my opinion, and there was no mention of it. Can you share some tips on this? Thank you. By the way, neither the Oracle MySQL nor the Percona error log helps with that.


A: The “MySQL server has gone away” error may be the result of a crash. In this case, you need to handle it like any other crash symptom. But in most cases, this is a symptom of network failure. Unfortunately, MySQL doesn’t have much information about why connection failures happen, probably because, from mysqld’s point of view, a problematic network only means that the client unexpectedly disconnected after a timeout, and the client still waiting for a response receives “MySQL server has gone away”. I discussed these kinds of errors in my “Troubleshooting hardware resource usage” webinar. A good practice when you see this kind of error often is to not leave idle connections open for a long time.
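
Two settings worth checking when you see this error are the server-side idle timeouts and the maximum packet size; a quick way to review them:

mysql> show global variables like 'wait_timeout';
mysql> show global variables like 'interactive_timeout';
mysql> show global variables like 'max_allowed_packet';

If a client sends a packet larger than max_allowed_packet, or stays idle longer than wait_timeout, the server drops the connection and the client sees “MySQL server has gone away” on its next request.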

Q: I see that a lot of the work is hard investigation into what might be going wrong. Is there a plan on the development roadmap to improve the error log output messages? Can you comment on that?

A: Percona Engineering does a lot for better diagnostics. For example, Percona Server for MySQL has an extended slow log file format, and Percona Server for MySQL 5.7.20 introduced the new innodb_print_lock_wait_timeout_info variable, which logs information about all InnoDB lock wait timeout errors (manual). More importantly, it logs not only the blocked transaction, but also the locking transaction. This feature was requested at lp:1657737 for one of our Percona Support customers and is now implemented.

The Oracle MySQL Engineering team also does a lot for better error logging. These improvements started in version 5.7.2, when the log_error_verbosity variable was introduced. Version 8.0.4 added much better tuning control. You can read about it in the Release Notes.
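
For example, on 5.7 you can adjust the error log detail at runtime; a minimal sketch:

mysql> set global log_error_verbosity = 3;

Level 3 logs errors, warnings and notes; in 5.7 the default is 3, while 8.0 lowered the default to 2.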

Q: Hello, do you use strace to find exactly which table has problems when there is no clear information in the MySQL error log?

A: I am not a big fan of strace when debugging mysqld crashes, but Percona Support certainly uses this tool. I myself prefer to work with strace when debugging client issues, such as trying to identify why Percona XtraBackup behaves incorrectly.
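
A minimal sketch of tracing a client-side tool this way (paths and options are illustrative):

$ strace -f -o /tmp/xtrabackup.trace xtrabackup --backup --target-dir=/tmp/backup
$ grep -E 'ENOENT|EACCES' /tmp/xtrabackup.trace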

Thanks everybody for attending the webinar. You can find the slides and recording of the webinar at the Troubleshooting MySQL Crashes web page.


Webinar Thursday, February 15, 2018: Basic Internal Troubleshooting Tools for MySQL Server

Please join Percona’s Principal Support Engineer, Sveta Smirnova, as she presents “Basic Internal Troubleshooting Tools for MySQL Server” on Thursday, February 15, 2018, at 10:00 am PST (UTC-8) / 1:00 pm EST (UTC-5).


MySQL Server has many built-in troubleshooting tools. They are always available and can provide many insights on what is happening internally. Many graphical tools, such as Percona Monitoring and Management (PMM), use built-ins to get data for their nice graphs.

Even if you are only going to use graphical tools, it is always good to know what data they can collect. This way, you can see their limitations and won’t have incorrect expectations in the heat of battle. Built-in troubleshooting tools are accessible via SQL commands. Most of them are standard across the server, but details are component-specific.

In this webinar, I will discuss how to use them, how to troubleshoot component-specific issues and how to find additional information. I will cover SHOW commands, Information Schema, status variables and few component-specific syntaxes. I will NOT cover Performance Schema (there will be a separate webinar on that), and I will use PMM graphs to illustrate the topics whenever possible.

Register for the Basic Internal Troubleshooting Tools for MySQL Server webinar now.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues, bugs, finding patterns that can solve typical issues quicker. She likes teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona Sveta worked as Support Engineer in MySQL Bugs Analysis Support Group in MySQL AB-Sun-Oracle. She is the author of book “MySQL Troubleshooting” and JSON UDF functions for MySQL.


Webinar Wednesday, January 24, 2018: Differences between MariaDB and MySQL

Join Percona’s Chief Evangelist, Colin Charles as he presents Differences Between MariaDB and MySQL on Wednesday, January 24, 2018, at 7:00 am PDT / 10:00 am EDT (UTC-7).

Tags: MariaDB, MySQL, Percona Server for MySQL, DBA, SysAdmin, DevOps
Experience Level: Novice

MariaDB and MySQL. Are they syntactically similar? Where do these two query languages differ? Why would I use one over the other?

MariaDB is on the path of gradually diverging from MySQL. One obvious example is the internal data dictionary currently under development for MySQL 8.0. This is a major change to the way metadata is stored and used within the server. MariaDB doesn’t have an equivalent feature. Implementing this feature could mark the end of datafile-level compatibility between MySQL and MariaDB.

There are also non-technical differences between MySQL and MariaDB, including:

  • Licensing: MySQL offers their code as open-source under the GPL, and provides the option of non-GPL commercial distribution in the form of MySQL Enterprise. MariaDB can only use the GPL because they derive their work from the MySQL source code under the terms of that license.
  • Support services: Oracle provides technical support, training, certification and consulting for MySQL, while MariaDB has their own support services. Some people will prefer working with smaller companies, as traditionally it affords them more leverage as a customer.
  • Community contributions: MariaDB touts the fact that they accept more community contributions than Oracle. Part of the reason for this disparity is that developers like to contribute features, bug fixes and other code without a lot of paperwork overhead (and they complain about the Oracle Contributor Agreement). However, MariaDB has its own MariaDB Contributor Agreement — which more or less serves the same purpose.

Colin will take a look at some of the differences between MariaDB and MySQL and help answer some of the common questions our Database Performance Experts get about the two databases.

Register for the webinar now.

Colin Charles, Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, has worked on MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, Colin worked actively on the Fedora and OpenOffice.org projects. He’s well-known within many open source communities and speaks on the conference circuit.


Webinar January 18, 2018: MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM) Part 2

Join Percona’s Product Manager Michael Coburn as he presents MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM) Part 2 on Thursday, January 18, 2018, at 11:00 am PST / 2:00 pm EST (UTC-8).

Tags: Percona Monitoring and Management, PMM, Monitoring, MySQL, Performance, Optimization, DBA, SysAdmin, DevOps
Experience Level: Expert

Optimizing MySQL performance and troubleshooting MySQL problems are two of the most critical and challenging tasks for MySQL DBAs. The databases powering your applications need to handle heavy traffic loads while remaining responsive and stable, so that you can deliver an excellent user experience. Furthermore, DBAs are also expected to find cost-efficient means of solving these issues.

In this webinar — the second part of a two-part series — Michael discusses how you can optimize and troubleshoot MySQL performance and demonstrates how Percona Monitoring and Management (PMM) enables you to solve these challenges using free and open source software. We will look at specific, common MySQL problems and review the essential components in PMM that allow you to diagnose and resolve them.

By the end of this webinar, you will have a better understanding of how you can troubleshoot MySQL problems in your database.

Register for the webinar now.

Michael Coburn, Product Manager

Michael joined Percona as a Consultant in 2012 and progressed through various roles including Managing Consultant, Principal Architect, Technical Account Manager, and Technical Support Engineer. He now leads Product Management for Percona Monitoring and Management.


Webinar Q&A: Percona XtraDB Cluster 101

In this blog, we will answer questions from our webinar on Percona XtraDB Cluster 101.

Recently (7 Dec 2017) I presented a webinar about Percona XtraDB Cluster 101. Firstly, thanks to all the attendees: we had a great webinar with quite some interesting questions and feedback.

Through this blog, I’ll answer most of the questions that were raised during the webinar.

Q. How does the need for the acknowledgment from other nodes affect the speed of writes?

A. There are two parts to replication: delivering a transaction (including acknowledgment) and applying the transaction. Generally, the first part is pretty quick and dictated by the network latency. The second part is time-consuming, but happens asynchronously. So acknowledging a transaction from other nodes is not that time-consuming.

Q. How can geo-distributed nodes affect the speed of writes?

A. The most distant node dictates cluster performance (in terms of latency). You can’t write faster than the time it takes for a packet to make a round trip to the most distant node. So geo-distribution does affect write performance.

Q. Would you consider Master -> Slave replication in RDS traditional MySQL replication? And how easy is it to replicate from PXC to RDS, if it’s possible?

A. If an application doesn’t need high availability, then a user can explore the MASTER-SLAVE replication. But I would argue that if I am going to spend time booting two servers (MASTER and SLAVE), then why not boot both as MASTER (through Percona XtraDB Cluster)? This ensures HA and write scalability. Percona XtraDB Cluster is flexible for all topologies, and can act as async-master or async-slave too.

Q. Moving forward, is there a plan to deal with version control tools like Flyway that still use GET_LOCK?

A. Statements like GET_LOCK that establish local locks on the node where they are issued are not cluster-safe, so they are blocked when pxc_strict_mode is set to ENFORCING and are not recommended for use. With that said, if the application/user tries to use these statements in a non-conflicting way (with the load directed to a single master) it could still work.
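
You can check which mode a node is running in:

mysql> show global variables like 'pxc_strict_mode';

In ENFORCING mode such statements are rejected; in PERMISSIVE mode they are only logged as warnings.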

Q. With an ASYNC slave, can you use GTID? I know there is a bug(s) that prevent this currently from working 100% in MariaDB (though the bug is close to being fixed – MDEV-10715)?

A. Yes, you can use GTID with async-slave. MariaDB has different implementation of GTID so I am not in a position to comment on the latter part.

Q. Do tables need to have a primary key for the cluster to work?

A. Yes, all tables that you plan to use in a cluster should have a primary key (pxc_strict_mode enforces this criterion). This is mainly needed for conflict resolution, when conflicting workloads are executed on multiple nodes.
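
To find tables that would violate this requirement before moving them into the cluster, you can query Information Schema; a minimal sketch:

mysql> select t.table_schema, t.table_name
    -> from information_schema.tables t
    -> left join information_schema.table_constraints c
    ->   on c.table_schema = t.table_schema
    ->  and c.table_name = t.table_name
    ->  and c.constraint_type = 'PRIMARY KEY'
    -> where t.table_type = 'BASE TABLE'
    ->   and t.table_schema not in ('mysql', 'information_schema', 'performance_schema', 'sys')
    ->   and c.constraint_name is null;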

Q. What is the best wsrep_sst_method? (for huge database)

A. Percona XtraDB Cluster recommends using XtraBackup. It doesn’t lock the tables for the complete SST lifetime, so you can continue to use the node while it is acting as DONOR.
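
A minimal my.cnf sketch for XtraBackup-based SST (the user name and password are placeholders; check the exact method name against your PXC version’s documentation):

[mysqld]
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth   = sstuser:passw0rd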

Q. Is the “show processlist” node-specific? Is there an equivalent command to show the whole cluster process list?

A. Yes, show processlist is node-specific. There is currently no way to get a cluster-wide processlist.

Q. does PXC support partitioned tables?

A. Yes, using InnoDB native partitioning.

Q. Are these nodes (PXC nodes) API nodes or data nodes?

A. Data nodes.

Q. If the cluster went down, does it follow SST/IST every time?

A. It depends. If there is a DONOR that has the missing write-sets, then the node can rejoin through IST; otherwise it uses SST.

Q. Around how much time will it take to join the cluster?

A. The time a node takes to join back depends on the size of the data. Generally, the time for SST is longer than IST. The good part is with 5.7.17+ we have considerably reduced the time for IST, so that a node can join faster than before.

Q. How does IST (incremental state transfer) process affect cluster performance?

A. IST is asynchronous and doesn’t emit FLOW_CONTROL, so the cluster can continue to perform as normal. A small slice of DONOR bandwidth is used to send data to the JOINER, but it is not significant enough to affect the overall cluster performance.

Q. How do you handle a situation when three simultaneous transactions try to insert auto_increment value?

A. Percona XtraDB Cluster has a concept of wsrep_auto_increment_control that adjusts the increment size on each node based on the number of nodes in the cluster. Please check this link for more details.
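
You can see the effect on any node:

mysql> show global variables like 'wsrep_auto_increment_control';
mysql> show global variables like 'auto_increment%';

With wsrep_auto_increment_control enabled in a three-node cluster, auto_increment_increment becomes 3 and each node gets a different auto_increment_offset, so simultaneous inserts never generate the same value.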

Q. Imagine that a table A has a trigger on insert that inserts data into another table B. And there are two concurrent transactions: TA inserts into table A (and the trigger makes an insert into B) and TB that inserts the same data directly into B. Will such a conflict – insert from TB and from trigger – be detected?

A. Yes. A transaction can touch multiple data objects, and when conflict resolution is done, it checks all the objects the transaction is planning to modify before certifying that the transaction is safe to apply.

Q. How will PXC make sure of data integrity with parallel processing?

A. Percona XtraDB Cluster has a conflict resolution protocol. This protocol is based on the FIRST COMMITTER WINS principle, which ensures that only the first transaction (from a group of conflicting transactions) commits to the cluster.

Q. I’ve created a three node cluster and replication is working. I’d like to copy our production data to the cluster since exporting and importing from MySQL takes a long time. Should I have waited to bootstrap the cluster until the data directory is transferred?

A. If you already have a cluster in place then you are simply adding new tables to the cluster. You can start adding (LOADING) the tables and these tables are immediately replicated to the other nodes of the cluster. An alternative would be to start the first node of the cluster with the pre-loaded data that then becomes cluster state. Other joining nodes copy it over through SST.

Q. Do we have an option to auto-spin-up the compute nodes in a cloud? Will PXC have that option, or do we need to manually spin up the instance and set up the replication?

A. You will have to manually configure it.

Q. Why does XtraBackup not work during bootstrapping, but works perfectly after bootstrapping? rsync works in both cases.

A. Not sure I get the question completely, but XtraBackup works in all scenarios. If you are facing any issue, please log it on launchpad.

Q. As per flow control, one node waits for the other node to be in sync. Won’t there be latency in writing the data?

A. A transaction originating on one node needs to be replicated to the other nodes of the cluster. This is what we can call latency, and it is dictated by network latency. Flow control mainly regulates a scenario wherein one node of the cluster falls way behind the other nodes.

Q. Can we set up PXC using AWS EC2?

A. Yes.

Once again, thanks for taking time to attend the webinar. If you have more questions, then please post them to the Percona XtraDB Cluster forum here. Also, we have a lot of blogs about Percona XtraDB Cluster. Make sure you check them out here.


Webinar Wednesday, December 20, 2017: InnoDB Performance Optimization

Join Percona’s CEO and Co-Founder, Peter Zaitsev, as he presents InnoDB Performance Optimization on Wednesday, December 20, 2017, at 11:00 am PST / 2:00 pm EST (UTC-8).

InnoDB is one of the most commonly used storage engines for MySQL and Percona Server for MySQL. It balances high reliability with high performance and is the focus of the majority of storage engine development by the MySQL and Percona Server for MySQL teams.

This webinar looks at InnoDB, including new developments in MySQL 5.7 as well as Percona Server for MySQL. In it, Peter explains how to use it, and many of the configuration options that help you to get the best performance from your application.

Register for the webinar.

Peter Zaitsev, CEO

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group.

A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. He has also been tapped as a contributor to Fortune and DZone, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of our most popular downloads.


Webinar Thursday, December 7, 2017: Percona XtraDB Cluster (PXC) 101

Join Percona’s Software Engineer (PXC Lead), Krunal Bauskar as he presents Percona XtraDB Cluster 101 on Thursday, December 7, 2017, at 7:00 am PST / 10:00 am EST (UTC-8).

Tags: Percona XtraDB Cluster, MySQL, High Availability, Clustering

Experience Level: Beginner

Percona XtraDB Cluster (PXC) is a multi-master solution that offers virtually synchronous replication among cluster nodes. It is based on the Codership Galera replication library. In this session, we will explore some key features of Percona XtraDB Cluster that make it enterprise-ready, including some recently added 5.7-exclusive features.

This webinar is introductory and will cover the following topics:

  • ProxySQL load balancer
  • Multi-master replication
  • Synchronous replication
  • Data at rest encryption
  • Improved SST Security through simplified configuration
  • Easy to setup encrypted between-nodes communication
  • ProxySQL-assisted Percona XtraDB Cluster maintenance mode
  • Automatic node provisioning
  • Percona XtraDB Cluster “strict-mode”

Register for the webinar now.

Krunal Bauskar, C/C++ Engineer

Krunal joined Percona in September 2015. Before joining Percona he worked as part of the InnoDB team at MySQL/Oracle. He authored most of the temporary table revamp work, besides a lot of other features. In the past, he was associated with Yahoo! Labs researching big data problems, and with a database startup which is now part of Teradata. His interests mainly include data management at any scale, and he has been practicing it for more than a decade now. He loves to spend time with his family or get involved in social work, unless he is out for some nearby exploration drive. He is located out of Pune, India.


Webinar Wednesday, December 6, 2017: Gain a MongoDB Advantage with the Percona Memory Engine

Join Percona’s CTO, Vadim Tkachenko, as he presents Gain a MongoDB Advantage with the Percona Memory Engine on Wednesday, December 6, 2017, at 11:00 am PST / 2:00 pm EST (UTC-8).

Experience: Entry Level to Intermediate

Tags: Developer, DBAs, Operations

Looking for the performance of Redis or Memcache, the expressiveness of the MongoDB query language and simple high availability and sharding? Percona Memory Engine, available as part of Percona Server for MongoDB, has it all!

In this webinar, Vadim explains the architecture of the MongoDB In-Memory storage engine. He’ll also show some benchmarks compared to disk-based storage engines and other in-memory technologies.

Vadim will share specific use cases where Percona Memory Engine for MongoDB excels, such as:

  • Caching documents
  • Highly volatile data
  • Workloads with predictable response time requirements

Register for the webinar now.

Vadim Tkachenko, CTO

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming helps optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High-Performance MySQL: Optimization, Backups, and Replication 3rd Edition. Previously, he founded a web development company in his native Ukraine and spent two years in the High-Performance Group within the official MySQL support team. Vadim received a BS in Economics and an MS in computer science from the National Technical University of Ukraine.

