November 17, 2021

Understanding MySQL Triggers: Exploring How Triggers Impact MySQL Memory Allocation

This blog was originally published in November 2021 and was updated in September 2023.

MySQL server performance can sometimes be perplexing, and if you’ve ever wondered about the role of triggers in influencing your MySQL server’s memory allocation, this post is for you. MySQL triggers are a powerful tool for database administrators and developers, enabling them to automate tasks, enforce data consistency, and respond to events within the database seamlessly.

Here, we first cover the basics of MySQL Triggers, and then we take a deeper dive, exploring their impact on memory usage and providing strategies to optimize MySQL server performance.

What is a Trigger in MySQL?

A Trigger in MySQL is a database object that plays a pivotal role in database management. It is essentially a set of instructions or a program that is executed automatically in response to specific events or actions occurring within the database. These events, known as trigger events, can include actions like inserting, updating, or deleting records in database tables.

The significance of triggers lies in their ability to enforce data integrity and automate various database-related tasks. When a trigger event occurs, the associated trigger is executed, allowing you to perform actions such as validating data, logging changes, or maintaining referential integrity. Triggers act as a safeguard to ensure that certain conditions or actions are met or carried out consistently, enhancing the reliability and consistency of the database.

Trigger execution is a crucial aspect of this mechanism. When a trigger event takes place, the trigger’s logic is automatically invoked, making it respond promptly to the event. This real-time response capability ensures data remains accurate and complies with predefined rules or actions.
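
For instance, here is a minimal sketch of such a trigger (the orders table and its columns are hypothetical, used only for illustration): it fills in a creation timestamp automatically whenever a new row is inserted.

CREATE TABLE orders (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    amount     DECIMAL(10,2) NOT NULL,
    created_at DATETIME
);

DELIMITER |
CREATE TRIGGER orders_bi BEFORE INSERT ON orders FOR EACH ROW
BEGIN
    -- Hypothetical example: set the creation timestamp without
    -- involving the application
    SET NEW.created_at = NOW();
END|
DELIMITER ;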

The Different Types of MySQL Triggers

MySQL supports various types of triggers, primarily categorized by when they execute relative to the triggering event. Understanding the differences between these trigger types is crucial for using them effectively in database management, as they serve distinct purposes and execute at different stages of a database operation.

Row-level Trigger

Row-level triggers in MySQL are a type of trigger that responds to individual row changes within a table. These triggers are executed for each affected row, making them a powerful tool for customizing actions based on specific data modifications. Here’s a step-by-step explanation of how row-level triggers work for each affected row:

Trigger Activation: When a triggering event, such as an UPDATE statement, occurs on a table with an associated row-level trigger, the trigger is activated.

Affected Rows: If the triggering event affects multiple rows (e.g., updating multiple rows in a table), the row-level trigger is executed separately for each affected row.

Per-Row Execution: For each affected row, the trigger’s code is executed. This code can include SQL statements, procedures, or other actions defined within the trigger.

Access to OLD and NEW Values: Within the trigger code, you can access the OLD and NEW values for each column of the current row. The OLD values represent the data before the modification, while the NEW values represent the data after the modification. This allows you to compare and manipulate data based on the specific changes made to each row (see the audit sketch after this list).

Conditional Logic: Row-level triggers often include conditional logic that determines the actions to take based on the values in the OLD and NEW datasets. This logic can be used to enforce data integrity rules, generate audit logs, or implement custom business logic.

Customized Actions: The trigger’s actions are customized based on the specific requirements of each row. This fine-grained control over individual rows allows for tailored responses to changes in the database.

Completion of Trigger: Once the trigger’s code is executed for one row, it proceeds to the next affected row (if any) and repeats the process. This continues until the trigger has been executed for every affected row.

Commit or Rollback: Depending on the success or failure of the trigger and the associated SQL statement, the changes made to the rows may be committed to the database or rolled back to maintain data consistency.
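
To make the per-row mechanics concrete, here is a minimal audit sketch. It assumes a hypothetical employees table with id and salary columns and a matching employee_audit table; none of these names come from a real schema.

CREATE TABLE employee_audit (
    employee_id INT,
    old_salary  DECIMAL(10,2),
    new_salary  DECIMAL(10,2),
    changed_at  DATETIME
);

DELIMITER |
CREATE TRIGGER employees_au AFTER UPDATE ON employees FOR EACH ROW
BEGIN
    -- Conditional logic on OLD and NEW: log only rows whose salary changed
    IF NOT (OLD.salary <=> NEW.salary) THEN
        INSERT INTO employee_audit VALUES (NEW.id, OLD.salary, NEW.salary, NOW());
    END IF;
END|
DELIMITER ;

An UPDATE that touches 100 rows runs this body 100 times, once per row, with OLD and NEW bound to that row's values.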

Statement-level Trigger

Statement-level triggers are database objects that execute once in response to the SQL statement that triggers them, regardless of how many rows that statement touches. They are not concerned with individual rows but with the statement as a whole. Note that MySQL itself does not support statement-level triggers: every MySQL trigger must be declared FOR EACH ROW. Statement-level triggers are part of the SQL standard and are offered by other databases such as PostgreSQL and Oracle, so the distinction is still worth understanding. Here’s an overview of statement-level triggers and the key differences between them and row-level triggers:

Statement-Level Triggers:

Execution Occurrence: Statement-level triggers execute once for each triggering SQL statement, regardless of the number of rows affected. For example, if an UPDATE statement modifies 100 rows, a statement-level trigger related to that statement will execute only once for the entire statement.

Data Access: Unlike row-level triggers, which have access to OLD and NEW values for individual rows, statement-level triggers typically don’t access specific row data. They operate at a higher level and are more concerned with the statement’s overall impact.

Use Cases: Statement-level triggers are often used for tasks that involve the entire statement, such as maintaining a summary or aggregate data table, logging high-level changes, or applying statement-level constraints.

Key Differences from Row-Level Triggers:

Execution Frequency: Row-level triggers execute once for each affected row, whereas statement-level triggers execute once per triggering statement.

Scope of Data: Row-level triggers work with individual row data, making them suitable for tasks that require row-level processing. Statement-level triggers operate at a higher level and do not access row-level data.

Use Cases: Row-level triggers are typically used for tasks like data validation, auditing changes to specific rows, and enforcing row-level constraints. Statement-level triggers are used for tasks that involve the entire statement’s result or behavior.
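
Because MySQL triggers are always row-level, a statement-level effect such as maintaining a summary table is expressed per row. A sketch, assuming hypothetical orders and customer_totals tables, with customer_totals having a unique key on customer_id:

DELIMITER |
CREATE TRIGGER orders_ai AFTER INSERT ON orders FOR EACH ROW
BEGIN
    -- Keep a running per-customer total up to date, one row at a time
    INSERT INTO customer_totals (customer_id, total)
    VALUES (NEW.customer_id, NEW.amount)
    ON DUPLICATE KEY UPDATE total = total + NEW.amount;
END|
DELIMITER ;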

What are the advantages of MySQL Triggers?

MySQL triggers offer significant benefits in database management efforts. They enable automation by automatically executing predefined actions in response to specific events or data changes, reducing the need for manual intervention. Triggers are also instrumental in maintaining data integrity by enforcing rules and constraints, ensuring that only valid and consistent data modifications are allowed. Additionally, they enable the implementation of sophisticated business logic within the database, streamlining repetitive tasks and enhancing overall efficiency.

What are the limitations of MySQL Triggers?

While MySQL triggers can offer significant advantages in terms of automation and data integrity enforcement, it’s important to be aware of their limitations and considerations:

Performance Impact: Triggers can introduce performance overhead, especially if they involve complex operations or are triggered frequently. Each time an event that activates a trigger occurs (e.g., an INSERT, UPDATE, DELETE), the trigger logic must execute. This can affect the overall performance of your database system, potentially leading to slower response times. Careful optimization of trigger logic is essential to mitigate this impact.

Complexity: As the number of triggers in a database increases, it can become challenging to manage and troubleshoot them. Complex triggers might also be harder to maintain and modify over time. 

Maintenance Overhead: When schema changes are made to tables that have associated triggers, these triggers might need to be updated as well. This can introduce maintenance overhead, particularly in large and complex database systems. Regularly reviewing and updating triggers when necessary is crucial.

Nesting Triggers: MySQL allows triggers to invoke other triggers, creating nested trigger cascades. While this can be a powerful tool, it can lead to unexpected behavior if not managed carefully. 

Limited Error Handling: MySQL triggers have limited error-handling capabilities. When an error occurs within a trigger, it can be challenging to handle, and the error messages might not provide sufficient detail for debugging (see the handler sketch after this list).

Isolation Levels: Triggers can affect the isolation level of transactions. It’s important to understand how triggers interact with transactions and the potential impact on data consistency.

Testing and Validation: Thorough testing and validation are essential when working with triggers, especially in a production environment. Changes to triggers should be carefully reviewed and tested to ensure they do not introduce unintended consequences.

By understanding these challenges and following best practices, you can make the most of triggers while minimizing potential drawbacks.
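
On the error-handling point above, the main tool available inside a trigger is a condition handler. A minimal sketch (order_log is a hypothetical side table): the handler swallows any failure of the logging INSERT so a broken side effect does not abort the original statement, at the cost of losing the error details.

DELIMITER |
CREATE TRIGGER orders_ai_log AFTER INSERT ON orders FOR EACH ROW
BEGIN
    -- If the logging INSERT fails, continue silently; note how little
    -- information about the error is available here for debugging
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN END;
    INSERT INTO order_log (order_id, logged_at) VALUES (NEW.id, NOW());
END|
DELIMITER ;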

MySQL Triggers Examples and Use Cases

MySQL triggers have a significant impact on data management in various real-world scenarios, bringing automation and ensuring data consistency and integrity. Here are a few examples:

Audit Logging: In compliance-driven industries like finance or healthcare, it’s crucial to maintain an audit trail of database changes. Triggers can automatically log all modifications, including who made the change, what was changed, and when. 

Data Validation: MySQL triggers are vital for enforcing data validation rules, ensuring that incoming data meets predefined criteria. For example, an e-commerce platform can use triggers to validate product orders, checking whether the total value meets specified conditions (a sketch follows this list).

Business Rule Enforcement: Complex business rules can be seamlessly enforced at the database level using triggers. For instance, in a subscription-based service, triggers can automate actions like sending renewal reminders or suspending accounts when payments fail.

Security: Triggers can enhance security by automatically locking or restricting access to certain records or data based on predefined conditions. For instance, in a healthcare database, triggers can restrict access to sensitive patient data to authorized personnel only.

In these example scenarios, MySQL triggers streamline processes, reduce manual intervention, and ensure data consistency, maintaining a reliable and secure database environment.
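
As an illustration of the data validation use case, here is a sketch of an order check; the orders table and the rule itself are hypothetical stand-ins for whatever conditions your platform enforces.

DELIMITER |
CREATE TRIGGER orders_bi_validate BEFORE INSERT ON orders FOR EACH ROW
BEGIN
    -- Reject invalid rows before they reach the table
    IF NEW.amount <= 0 THEN
        SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Order amount must be positive';
    END IF;
END|
DELIMITER ;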


Impacts of MySQL Triggers on Database Performance

MySQL triggers can significantly impact database performance, both positively and negatively. When used judiciously, triggers enhance automation and data integrity. However, they can also introduce performance bottlenecks, especially if poorly designed. Trigger execution consumes resources, including CPU and memory, which can slow down database operations. 

Additionally, if triggers are set to respond to frequently occurring events, they can increase contention for database resources. It’s important for database administrators to strike a balance between the benefits of triggers and their potential performance implications, employing them thoughtfully to maintain optimal database performance.

Mitigating Performance Issues

Optimizing trigger performance and mitigating potential bottlenecks in MySQL involves several strategies:

Selective Trigger Use: Limit the use of triggers to essential tasks. Avoid overusing triggers for routine operations that can be handled more efficiently within the application code.

Indexing: Ensure that tables involved in triggers are appropriately indexed. Well-designed indexes can significantly improve query performance within triggers.

Write Efficient Code: Efficient trigger code minimizes execution time. This includes using optimized SQL statements and avoiding resource-intensive operations within triggers.

Batch Processing: If possible, batch process trigger actions to reduce the number of trigger executions, which can help minimize the overhead of invoking triggers for each row affected.

Avoid Recursive Triggers: Be cautious with recursive triggers, which can lead to unexpected performance issues. Implement mechanisms to control and limit recursion if needed.

Monitoring and Profiling: Continuously monitor trigger performance to identify and address slow-performing triggers promptly (see the monitoring sketch after this list).

Transaction Management: Be mindful of transaction management within triggers. Carefully design transactions to avoid unnecessary locking and contention.

Testing and Benchmarking: Thoroughly test triggers in a staging environment to evaluate their impact on performance. Benchmark different trigger implementations to identify the most efficient option.

Regular Maintenance: As data and usage patterns change, adjust triggers accordingly to maintain optimal performance.

Database Design: Consider the overall database design and schema. A well-structured database can minimize the need for complex triggers.

By following these strategies and regularly reviewing trigger performance, you can ensure that triggers enhance database functionality without introducing significant performance bottlenecks.
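
For the monitoring advice above, two queries that help in practice, assuming the Performance Schema memory instrumentation is enabled (the same sys schema view is used in the experiments below): the first shows how much memory parsed trigger bodies currently occupy, and the second flags the longest trigger definitions.

-- Memory currently held by parsed stored-program (including trigger) bodies
SELECT current_alloc
FROM sys.memory_global_by_current_bytes
WHERE event_name = 'memory/sql/sp_head::main_mem_root';

-- Triggers with the longest definitions
SELECT trigger_schema, trigger_name,
       CHAR_LENGTH(action_statement) AS body_length
FROM information_schema.triggers
ORDER BY body_length DESC
LIMIT 10;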

Exploring How Triggers Impact MySQL Memory Allocation

MySQL stores active table descriptors in a special memory buffer called the table open cache. This buffer is controlled by two configuration variables: table_open_cache, which sets the maximum number of table descriptors MySQL stores in the cache, and table_open_cache_instances, which sets the number of table cache instances. With the default values of table_open_cache=4000 and table_open_cache_instances=16, MySQL creates 16 independent memory buffers that store 250 table descriptors each. These table cache instances can be accessed concurrently, allowing DML statements to use cached table descriptors without blocking each other.

If your tables have no triggers, the table cache does not require much memory, because descriptors are lightweight; even a significantly increased table_open_cache does not demand much. For example, 4,000 table descriptors take up to 4,000 x 4 KiB = 16 MiB in the cache, and 100,000 take up to about 390 MiB, which is still not a huge amount for that many open tables.

However, if your tables have triggers, it changes the game.

For the test, I created a table with a single column and inserted a row into it:

mysql> CREATE TABLE tc_test( f1 INT);
Query OK, 0 rows affected (0,03 sec)

mysql> INSERT INTO tc_test VALUES(1);
Query OK, 1 row affected (0,01 sec)

Then I flushed the table cache and measured how much memory it uses:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,02 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

Then, I accessed the table to put it into the cache.

$ for i in `seq 1 1 16`; do mysql test -e "SELECT * FROM tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    20 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     75.17 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,01 sec)

The 16 table descriptors took less than 16 KiB in the cache (75.17 KiB - 60.50 KiB ≈ 14.7 KiB).

Now, let’s try to create some triggers on this table and see if it changes anything.

mysql> CREATE TRIGGER tc_test_ai AFTER INSERT ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|

Then, let’s flush the table cache and test memory usage again.

Initial state:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

After I put the tables into the cache:

$ for i in `seq 1 1 16`; do mysql test -e "SELECT * FROM tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    20 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     75.17 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|    611.12 KiB |
+---------------+
1 row in set (0,00 sec)

As a result, in addition to 75.17 KiB in the table cache, 611.12 KiB is occupied by the memory/sql/sp_head::main_mem_root. That is the "Mem root for parsing and representation of stored programs."

This means that each time the table is put into the table cache, all associated triggers are put into the memory buffer, storing their definitions.

The FLUSH TABLES command clears the stored programs cache as well as the table cache:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,01 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

The more triggers a table has, the more memory they take when the table is put into the cache.

For example, if we create five more triggers and repeat our test, we will see the following numbers:

mysql> \d |
mysql> CREATE TRIGGER tc_test_bi BEFORE INSERT ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bu BEFORE UPDATE ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TRIGGER tc_test_bd BEFORE DELETE ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_au AFTER UPDATE ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_ad AFTER DELETE ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    35 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|    446.23 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|      3.58 MiB |
+---------------+
1 row in set (0,00 sec)

The numbers for the event memory/sql/sp_head::main_mem_root differ by a factor of six:

mysql> SELECT 3.58*1024/611.12;
+------------------+
| 3.58*1024/611.12 |
+------------------+
|         5.998691 |
+------------------+
1 row in set (0,00 sec)

Note that the length of the trigger definition affects the amount of memory allocated under memory/sql/sp_head::main_mem_root.

For example, if we define the triggers as follows:

mysql> DROP TABLE tc_test;
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TABLE tc_test( f1 INT);
Query OK, 0 rows affected (0,03 sec)

mysql> INSERT INTO tc_test VALUES(1);
Query OK, 1 row affected (0,01 sec)

mysql> \d |
mysql> CREATE TRIGGER tc_test_ai AFTER INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_au AFTER UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,04 sec)

mysql> CREATE TRIGGER tc_test_ad AFTER DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bi BEFORE INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bu BEFORE UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TRIGGER tc_test_bd BEFORE DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> \d ;
mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,01 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

$ for i in `seq 1 1 16`; do mysql test -e "select * from tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    35 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|    446.23 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|      1.89 MiB |
+---------------+
1 row in set (0,00 sec)

The resulting amount of memory is 1.89 MiB instead of 3.58 MiB for the longer trigger definition.

Note that having a single table cache instance requires less memory to store trigger definitions. For example, for our six small triggers, it will be 121.12 KiB instead of 1.89 MiB:

mysql> SHOW GLOBAL VARIABLES LIKE 'table_open_cache_instances';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| table_open_cache_instances |     1 |
+----------------------------+-------+
1 row in set (0,00 sec)

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

$ for i in `seq 1 1 16`; do mysql test -e "select * from tc_test"; done
...

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|    121.12 KiB |
+---------------+
1 row in set (0,00 sec)
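
Note that table_open_cache_instances is not a dynamic variable, so changing it requires a server restart; a minimal my.cnf sketch:

[mysqld]
# A single table cache instance lowers the memory spent on trigger
# definitions, at the cost of more contention on a busy server
table_open_cache_instances = 1
table_open_cache = 4000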

Conclusion: Moving Forward With MySQL Triggers

When you access tables with associated triggers, the trigger definitions are put into the stored programs cache even if the triggers never fire. This was reported as MySQL Bug #86821 and closed as “Not a Bug” by Oracle. It is indeed not a bug but a consequence of the table cache and stored routines cache design. Still, it is good to be prepared, so you are not surprised when you run out of memory faster than expected, especially if you have many triggers with long definitions.

Although MySQL triggers are valuable tools for automating tasks and ensuring data integrity within a database, it’s important to note that they can have implications for memory usage. When triggers are invoked, they consume memory resources to execute their defined actions, including queries, updates, or inserts. The memory overhead becomes more significant when multiple triggers are activated simultaneously during a database operation. Therefore, it’s crucial for database administrators and developers to carefully design triggers, taking into account their specific logic and intended functionality, to strike a balance between achieving the desired automation and maintaining memory efficiency.

Looking for Comprehensive MySQL Support? Percona Has You Covered

Percona Support is ideal for organizations looking to supplement their in-house talent with Percona open source software, expertise, and guaranteed support SLAs.

Subscribers benefit from comprehensive and responsive support for MySQL, MongoDB, and PostgreSQL on any infrastructure — on-premises, in the cloud, or in a DBaaS, bare metal, virtualized, or containerized environment.

Support is available in Advanced and Premium tiers, and our flexible pricing model makes it easier to predict annual costs as you scale and grow your business.


Frequently Asked Questions

What is a MySQL trigger, and how does it work?

A MySQL trigger is a predefined action or set of actions that automatically execute in response to specific events in a database table, namely INSERT, UPDATE, or DELETE operations. For example, you can create a trigger to automatically set a timestamp whenever a new record is inserted into a table, or to prevent the deletion of critical data.

How does memory allocation in MySQL affect database performance?

Memory allocation in MySQL plays a crucial role in database performance. MySQL uses memory for various purposes, and efficient memory management can significantly impact how well your database performs. Monitoring memory usage and performance metrics is crucial for making adjustments as needed.

What role do triggers play in a MySQL database?

MySQL triggers are database objects designed to automatically execute predefined actions in response to specific events or conditions. Triggers uphold data integrity by enforcing rules and constraints, ensuring that data remains valid and consistent, and they also facilitate logging and auditing, enabling the tracking of changes made to the database for security and compliance purposes. Triggers are a means to implement complex business logic within the database, automating tasks and calculations.

How do triggers impact memory usage within a MySQL database system?

If a trigger is invoked in a MySQL database, it utilizes memory resources to perform its defined actions. When multiple triggers are triggered simultaneously, memory overhead grows, and triggers that are inefficiently designed or overly complex can put significant pressure on memory resources, potentially resulting in performance bottlenecks. MySQL triggers must be created carefully in order to strike a balance between functionality and memory efficiency.

Are there different types of triggers in MySQL? If so, how do they differ in terms of memory allocation?

MySQL supports two primary types of triggers: ‘BEFORE’ triggers and ‘AFTER’ triggers. These types are distinguished by when they execute in relation to the triggering event.

BEFORE Triggers: These triggers are activated before the triggering event (e.g., INSERT, UPDATE, etc.) happens. They serve a valuable role in modifying data before it is written to the database. Regarding memory allocation, BEFORE triggers can influence memory usage by buffering changes made to the data before they are finalized. This additional memory consumption is typically temporary and is released once the transaction is completed.

AFTER Triggers: AFTER triggers execute after the triggering event has taken place and are commonly used for purposes such as logging or auditing changes. The memory usage here may be affected by factors like the volume and complexity of the data being logged.
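
For example, a BEFORE trigger can normalize a value before it is stored; a sketch with a hypothetical customers table:

DELIMITER |
CREATE TRIGGER customers_bi BEFORE INSERT ON customers FOR EACH ROW
BEGIN
    -- Normalize the email address before the row is written
    SET NEW.email = LOWER(TRIM(NEW.email));
END|
DELIMITER ;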

February 1, 2013

Implications of Metadata Locking Changes in MySQL 5.5

While most of the recent talk has been about the new features in MySQL 5.6 (and that is understandable), I have lately had some very interesting cases to deal with involving the metadata locking changes introduced in MySQL 5.5.3. The implications of metadata locking have not been covered well, and since a large number of MySQL 5.0 and 5.1 installations still have to upgrade or are in the process of upgrading to MySQL 5.5, I thought it necessary to discuss what these implications exactly are.

To learn what metadata locking exactly is, please read this section of the MySQL manual.

Let’s start off by looking at the metadata locking behavior prior to MySQL 5.5.3.

Metadata Locking behavior prior to MySQL 5.5.3

Prior to MySQL 5.5.3, a statement that opened a table held its metadata locks only until the end of the statement, not the end of the transaction. This meant that transactions were not really isolated, because the same query could return different results if executed twice within a transaction with a DDL statement executed between the invocations. Let me give you an example via a simple test case where I add a new column to the table while a transaction in REPEATABLE-READ isolation mode is active.

session1 > select @@tx_isolation;
+-----------------+
| @@tx_isolation  |
+-----------------+
| REPEATABLE-READ |
+-----------------+
1 row in set (0.00 sec)

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)  

session1 > select * from test where id=1;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
+----+------+
1 row in set (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (0.57 sec)
Records: 2  Duplicates: 0  Warnings: 0

session1 > select * from test where id=1;
Empty set (0.00 sec)

session1 > rollback;
Query OK, 0 rows affected (0.00 sec)  

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)  

session1 > select * from test where id=1;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
+----+------+------------+
1 row in set (0.00 sec)

And you can see how isolation is broken, because the SELECT was not repeatable although the REPEATABLE-READ transaction isolation level was used. This pre-5.5 behavior also means that statements could be written to the binary log in a different order, breaking locking semantics and contradicting serialization concepts. For example, take a look at the following excerpt from the binary log of a case where an UPDATE transaction is mixed with an ALTER:

# at 536
#130201 11:21:03 server id 1  end_log_pos 658   Query   thread_id=8     exec_time=0     error_code=0
SET TIMESTAMP=1359714063/*!*/;
ALTER TABLE test add column id_2 int(11) default 0 after id
/*!*/;
# at 658
#130201 11:21:39 server id 1  end_log_pos 726   Query   thread_id=7     exec_time=0     error_code=0
SET TIMESTAMP=1359714099/*!*/;
BEGIN
/*!*/;
# at 726
# at 773
#130201 11:21:35 server id 1  end_log_pos 773   Table_map: `test`.`test` mapped to number 17
#130201 11:21:35 server id 1  end_log_pos 829   Update_rows: table id 17 flags: STMT_END_F

BINLOG '
L5cLURMBAAAALwAAAAUDAAAAABEAAAAAAAEABHRlc3QABHRlc3QAAwMD/gL+CQY=
L5cLURgBAAAAOAAAAD0DAAAAABEAAAAAAAEAA///+AIAAAAAAAAAA2JhcvgCAAAAAAAAAANob3A=
'/*!*/;
### UPDATE test.test
### WHERE
###   @1=2 /* INT meta=0 nullable=0 is_null=0 */
###   @2=0 /* INT meta=0 nullable=1 is_null=0 */
###   @3='bar' /* STRING(9) meta=65033 nullable=1 is_null=0 */
### SET
###   @1=2 /* INT meta=0 nullable=0 is_null=0 */
###   @2=0 /* INT meta=0 nullable=1 is_null=0 */
###   @3='hop' /* STRING(9) meta=65033 nullable=1 is_null=0 */
# at 829
#130201 11:21:39 server id 1  end_log_pos 856   Xid = 85
COMMIT/*!*/;

Note how ALTER is logged before the UPDATE, because ALTER did not block waiting for the transaction to commit.

For the reasons described above, the implementation of metadata locking was changed starting with MySQL 5.5.3. Let’s see how this works now.

Metadata Locking behavior starting MySQL 5.5.3

Starting with 5.5.3, DDL statements that modify table metadata are executed in an isolated fashion consistent with transactional behavior. An open transaction holds metadata locks on every table it has accessed for as long as it remains open, so a DDL operation on such a table cannot commence until every transaction that accessed the table has been closed. Let’s see this in effect via a simple test case:

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                                                        |
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
|  1 | msandbox | localhost | test | Sleep   |  140 |                                 | NULL                                                        |
|  2 | msandbox | localhost | test | Query   |    3 | Waiting for table metadata lock | ALTER TABLE test add column c char(32) default 'dummy_text' |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist                                            |
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
3 rows in set (0.00 sec)

You can see how the ALTER blocks, because the transaction in session1 is still open and once the transaction in session1 is closed, the ALTER proceeds through successfully:

session1 > rollback;
Query OK, 0 rows affected (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (46.77 sec)
Records: 2  Duplicates: 0  Warnings: 0

Let’s see where the ALTER spent most of its time:

session2 > show profiles;
+----------+-------------+-------------------------------------------------------------+
| Query_ID | Duration    | Query                                                       |
+----------+-------------+-------------------------------------------------------------+
|        1 | 46.78110075 | ALTER TABLE test add column c char(32) default 'dummy_text' |
+----------+-------------+-------------------------------------------------------------+
1 row in set (0.00 sec)

session2 > show profile for query 1;
+------------------------------+-----------+
| Status                       | Duration  |
+------------------------------+-----------+
| starting                     |  0.000060 |
| checking permissions         |  0.000003 |
| checking permissions         |  0.000003 |
| init                         |  0.000005 |
| Opening tables               |  0.000045 |
| System lock                  |  0.000006 |
| setup                        |  0.000016 |
| creating table               |  0.168283 |
| After create                 |  0.000061 |
| copy to tmp table            |  0.165808 |
| rename result table          | 46.446738 |
| end                          |  0.000035 |
| Waiting for query cache lock |  0.000003 |
| end                          |  0.000006 |
| query end                    |  0.000003 |
| closing tables               |  0.000008 |
| freeing items                |  0.000016 |
| cleaning up                  |  0.000004 |
+------------------------------+-----------+
18 rows in set (0.00 sec)

So the ALTER waited on the metadata locks just after the table with the new structure had been created and populated with data, but before the old table was swapped with the new one. Note that ALTER is a multi-step process: the old table is locked in shared mode, a new table with the new structure is created, the rows are copied with something like INSERT INTO new_table SELECT * FROM old_table, then the old table is renamed to a temporary name, the new table is renamed to the old table’s name, and finally the temporary table is dropped.
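
Sketched as plain SQL, the copy algorithm looks roughly like this; it is an illustration only (the server performs these steps internally), and the column types of the test table are assumed:

-- Rough sketch of the pre-5.6 ALTER TABLE copy algorithm described above
CREATE TABLE test_new (
    id INT NOT NULL,
    x  VARCHAR(9),
    c  CHAR(32) DEFAULT 'dummy_text'                  -- the column being added
);
INSERT INTO test_new (id, x) SELECT id, x FROM test;  -- copy rows; c takes its default
RENAME TABLE test TO test_tmp, test_new TO test;      -- the swap that needs the metadata lock
DROP TABLE test_tmp;
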
Let’s see another example, this time trying a RENAME:

session2 > RENAME TABLE test to test_2;

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
|  1 | msandbox | localhost | test | Sleep   |   49 |                                 | NULL                        |
|  2 | msandbox | localhost | test | Query   |   35 | Waiting for table metadata lock | RENAME TABLE test to test_2 |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist            |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
3 rows in set (0.00 sec)

And you can see that the RENAME is also blocked, because a transaction that accessed the table “test” is still open.

So we have an interesting conclusion: the ALTER waits only at the last stage, when it is about to make changes to the table metadata. An ALTER on a big table can keep executing without any hindrance, copying rows from the table with the old structure to the table with the new structure, and will only wait at the last step, when it is about to swap in the new table.

Let’s see another interesting side effect of metadata locking.

When can ALTER render the table inaccessible?

Now there is another interesting side effect: when the ALTER reaches the state where it needs to wait for metadata locks, it simply blocks any type of query against the table. We know that writes are blocked anyhow for the entire duration of the ALTER, but reads are blocked as well while the ALTER is waiting for metadata locks. Let’s see this in action via another test case:

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test_2 order by id;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
|  2 | bar  | dummy_text |
+----+------+------------+
2 rows in set (0.00 sec)

session6 > ALTER TABLE test_2 DROP COLUMN c;

session7 > select * from test_2 order by id;
session8 > select * from test_2 order by id;
session9 > select * from test_2 order by id;
session10 > select * from test_2 order by id;

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                             |
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
|  1 | msandbox | localhost | test | Sleep   |  403 |                                 | NULL                             |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist                 |
|  6 | msandbox | localhost | test | Query   |  229 | Waiting for table metadata lock | ALTER TABLE test_2 DROP COLUMN c |
|  7 | msandbox | localhost | test | Query   |  195 | Waiting for table metadata lock | select * from test_2 order by id |
|  8 | msandbox | localhost | test | Query   |  180 | Waiting for table metadata lock | select * from test_2 order by id |
|  9 | msandbox | localhost | test | Query   |  169 | Waiting for table metadata lock | select * from test_2 order by id |
| 10 | msandbox | localhost | test | Query   |   55 | Waiting for table metadata lock | select * from test_2 order by id |
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
7 rows in set (0.00 sec)

And you can see that the table is blocked for any kind of operation. Let’s see the profiling information for one of the queries that was blocked to see where the query spent most of its time:

session10 > show profile for query 1;
+----------------------+------------+
| Status               | Duration   |
+----------------------+------------+
| starting             |   0.000058 |
| checking permissions |   0.000006 |
| Opening tables       | 213.028481 |
| System lock          |   0.000009 |
| init                 |   0.000014 |
| optimizing           |   0.000002 |
| statistics           |   0.000005 |
| preparing            |   0.000006 |
| executing            |   0.000001 |
| Sorting result       |   0.000002 |
| Sending data         |   0.000040 |
| end                  |   0.000003 |
| query end            |   0.000002 |
| closing tables       |   0.000003 |
| freeing items        |   0.000007 |
| logging slow query   |   0.000002 |
| cleaning up          |   0.000002 |
+----------------------+------------+
17 rows in set (0.00 sec)

And you can see how the query spent nearly all its time waiting in the “Opening tables” state. This behavior, where ALTER makes the table inaccessible in some cases, is not really documented, and as such I have reported a bug: http://bugs.mysql.com/bug.php?id=67647

Metadata locking behaves differently for queries that are serviced from the query cache; let’s see what happens in that case.

Metadata Locking and Query Cache

How does metadata locking behave with the query cache? That is an important question. If the query cache is enabled and the SELECT can be serviced from it, the SELECT will not block on the ALTER even though the ALTER is waiting for metadata locks. Why? Because in that case no table open operation has to be performed. Let’s see this scenario via a test case:

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test_2 order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session6 > RENAME TABLE test_2 to test;

session10 > select * from test_2 order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
|  1 | msandbox | localhost | test | Sleep   |   22 |                                 | NULL                        |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist            |
|  6 | msandbox | localhost | test | Query   |    3 | Waiting for table metadata lock | RENAME TABLE test_2 to test |
| 10 | msandbox | localhost | test | Sleep   |   37 |                                 | NULL                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
4 rows in set (0.00 sec)

The query proceeds without being blocked on anything while the RENAME is still waiting for metadata locks. Let’s see the profiling information for this query:

session10 > show profile for query 11;
+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000018 |
| Waiting for query cache lock   | 0.000003 |
| checking query cache for query | 0.000007 |
| checking privileges on cached  | 0.000003 |
| checking permissions           | 0.000005 |
| sending cached result to clien | 0.000011 |
| logging slow query             | 0.000002 |
| cleaning up                    | 0.000002 |
+--------------------------------+----------+
8 rows in set (0.00 sec)

You can see that no table open operation was performed and hence no wait.

Does the fact that the table has already been opened, and its table object is in the table cache, change anything with respect to metadata locks?

Metadata Locking and Table Cache

It does not matter whether a connection accesses a table that is already in the table cache: any query against a table that has a DDL operation waiting will block. Why? Because MySQL sees that the old entries in the table cache have to be invalidated, so any query that accesses the table has to reopen the modified table, creating new entries in the table cache. Let’s see this phenomenon in action:

session6 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (59.80 sec)

session10 > show status like 'Open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 30    |
| Opened_tables | 0     |
+---------------+-------+
2 rows in set (0.00 sec)

session10 > select * from test order by id;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
|  2 | bar  | dummy_text |
+----+------+------------+
2 rows in set (53.78 sec)

session10 > show status like 'Open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 30    |
| Opened_tables | 1     |
+---------------+-------+
2 rows in set (0.00 sec)

session10 > show profile for query 18;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| starting             |  0.000059 |
| checking permissions |  0.000010 |
| Opening tables       | 53.786685 |
| System lock          |  0.000009 |
| init                 |  0.000012 |
| optimizing           |  0.000003 |
| statistics           |  0.000007 |
| preparing            |  0.000006 |
| executing            |  0.000001 |
| Sorting result       |  0.000004 |
| Sending data         |  0.000033 |
| end                  |  0.000003 |
| query end            |  0.000002 |
| closing tables       |  0.000004 |
| freeing items        |  0.000009 |
| logging slow query   |  0.000002 |
| cleaning up          |  0.000002 |
+----------------------+-----------+
17 rows in set (0.00 sec)

As you can see, the SELECT query still blocks, and the Opened_tables status counter is incremented once the query finishes.

So much for the explanation, now let’s take a look at the consequences.

Consequences

The consequence of these changes in metadata locking is that if you have some really hot tables (in web applications, for example, it is typical to see a “sessions” table that is accessed on every request), care should be taken when you have to ALTER the table; otherwise it can easily cause a stall, as many threads pile up waiting for the table metadata lock, bringing down the MySQL server or depleting all available connections.
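
A practical mitigation, sketched below on the assumption that failing fast is preferable to a pile-up (the column being added is hypothetical): look for long-running open transactions first, then cap how long the ALTER may wait for its metadata lock.

-- Look for long-running open transactions before attempting the DDL
SELECT trx_id, trx_started, trx_mysql_thread_id
FROM information_schema.innodb_trx
ORDER BY trx_started;

-- Fail fast instead of letting threads pile up behind the metadata lock
SET SESSION lock_wait_timeout = 5;
ALTER TABLE sessions ADD COLUMN last_seen DATETIME;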

There are some other interesting consequences as well for applications that were developed against MySQL versions prior to 5.5:

  • I remember a customer case with a reporting slave that runs a long transaction daily; these transactions tend to run for hours. Every day, one of the tables was renamed and swapped, and that table was the one read by the long-running transaction. When the slave tried to execute the rename query, it would simply block waiting for the long-running transaction to finish, causing the slave to lag for hours, because the slave is single-threaded and cannot apply any other event in the meantime. This was never an issue when the application was using a MySQL version < 5.5 as the datastore.
  • There was another interesting case, this time with how ActiveMQ uses MySQL when in HA mode. In HA mode there are two ActiveMQ servers, and both try to do something similar to the following sequence of events:
    session1 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
    Query OK, 0 rows affected (0.09 sec)
    
    session1 > insert into t1 values(null);
    Query OK, 1 row affected (0.21 sec)
    
    session1 > start transaction;
    Query OK, 0 rows affected (0.00 sec)
    
    session1 > select * from t1 where i=1 for update;
    +---+
    | i |
    +---+
    | 1 |
    +---+
    1 row in set (0.00 sec)
    
    session2 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
    

    When using MySQL 5.1 the second CREATE would just fail immediately with the error “ERROR 1050 (42S01): Table 't1' already exists”, but because of how metadata locking works in 5.5 this is no longer the case: the second CREATE will simply block, with unintended consequences. A workaround here is to set the lock_wait_timeout variable to a very low value before executing the CREATE TABLE; this makes sure that the CREATE fails immediately (although due to a different reason):

    session2 > set session lock_wait_timeout=1;CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
    

    However, I feel that the CREATE TABLE should fail in such a case, when the table already exists and there is no other DDL (like a DROP TABLE) waiting to run on the same table, and as such I have reported a bug: http://bugs.mysql.com/bug.php?id=67873

