In this blog post, we’ll look at how using thread_statistics can cause high memory usage.
I was recently working on a high memory usage issue for one of our clients and made an interesting discovery: memory usage that grew with no apparent bound. It was really tricky to diagnose.
Below, I am going to show you how to identify cases where having thread_statistics enabled causes high memory usage on busy systems with many threads.
Part 1: Issue Background
I had a server with 55.0G of memory, running the following Percona Server for MySQL version:
     Version | 5.6.35-80.0-log Percona Server (GPL), Release 80.0, Revision f113994f31
    Built On | debian-linux-gnu x86_64
We calculated approximately how much memory MySQL could use in the worst-case scenario with max_connections=250:
mysql> select ((@@key_buffer_size+@@innodb_buffer_pool_size+@@innodb_log_buffer_size+@@innodb_additional_mem_pool_size+@@net_buffer_length+@@query_cache_size)/1024/1024/1024)+((@@sort_buffer_size+@@myisam_sort_buffer_size+@@read_buffer_size+@@join_buffer_size+@@read_rnd_buffer_size+@@thread_stack)/1024/1024/1024*250);
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ((@@key_buffer_size+@@innodb_buffer_pool_size+@@innodb_log_buffer_size+@@innodb_additional_mem_pool_size+@@net_buffer_length+@@query_cache_size)/1024/1024/1024)+((@@sort_buffer_size+@@myisam_sort_buffer_size+@@read_buffer_size+@@join_buffer_size+@@read_rnd |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 12.445816040039 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
So in our case, this shouldn’t be more than ~12.5G.
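For readability, the same estimate can be split into its two parts: buffers allocated once globally, and buffers that may be allocated per connection, multiplied by max_connections. The sketch below expresses the same arithmetic, reading @@max_connections from the server instead of hard-coding 250; keep in mind it is only a rough upper bound, since per-connection buffers are not always fully allocated.

-- Global buffers (allocated once) plus per-connection buffers * max_connections
select
  (@@key_buffer_size + @@innodb_buffer_pool_size + @@innodb_log_buffer_size
   + @@innodb_additional_mem_pool_size + @@net_buffer_length
   + @@query_cache_size) / 1024 / 1024 / 1024 as global_buffers_gb,
  (@@sort_buffer_size + @@myisam_sort_buffer_size + @@read_buffer_size
   + @@join_buffer_size + @@read_rnd_buffer_size
   + @@thread_stack) / 1024 / 1024 / 1024 * @@max_connections as per_connection_gb;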
After the MySQL server was restarted, it allocated about 7G. After running for a week, it had reached 44G:
# Memory #####################################################
       Total | 55.0G
        Free | 8.2G
        Used | physical = 46.8G, swap allocated = 0.0, swap used = 0.0, virtual = 46.8G
      Shared | 560.0k
     Buffers | 206.5M
      Caches | 1.0G
       Dirty | 7392 kB
     UsedRSS | 45.3G
  Swappiness | 60
 DirtyPolicy | 20, 10
 DirtyStatus | 0, 0
# Top Processes ##############################################
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2074 mysql     20   0  53.091g 0.044t   8952 S 126.0 81.7  34919:49 mysqld
I checked everything that could be related to the high memory usage, such as operating system settings like Transparent Huge Pages (THP), but still couldn't find the cause (THP was already disabled on this server). So I asked my teammates if they had any ideas.
Part 2: The Team to the Rescue
After brainstorming and reviewing the status, metrics and profiles again and again, my colleague Yves Trudeau pointed out that User Statistics was enabled on the server.
User Statistics adds several INFORMATION_SCHEMA tables, several commands, and the userstat variable. The tables and commands can be used to better understand server activity and to identify the different load sources. Check out the documentation for more information.
mysql> show global variables like 'user%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| userstat      | ON    |
+---------------+-------+

mysql> show global variables like 'thread_statistics%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| thread_statistics | ON    |
+-------------------+-------+
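For reference, both variables are dynamic in Percona Server, so they can be switched on at runtime as well as set in the [mysqld] section of my.cnf. A minimal sketch of how they are typically enabled:

-- Enable the User Statistics feature, then per-thread statistics on top of it
set global userstat = ON;
set global thread_statistics = ON;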
Since we saw many threads running on this server, it made sense to verify this as the cause of the issue.
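For context, a quick way to gauge how many client threads a server is handling (a generic sketch, not the exact output from this investigation):

-- Currently connected and currently executing client threads
show global status like 'Threads_connected';
show global status like 'Threads_running';

-- Connection attempts since startup; with thread_statistics enabled,
-- rows accumulate per thread in INFORMATION_SCHEMA.THREAD_STATISTICS
show global status like 'Connections';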
Part 3: Cause Verification – Did It Really Eat Our Memory?
I decided to run some calculations, and the following test cases, to verify the cause:
- Looking at the THREAD_STATISTICS table in the INFORMATION_SCHEMA, we can see that for each connection there is a row like the following:
mysql> select * from THREAD_STATISTICS limit 1\G
*************************** 1. row ***************************
             THREAD_ID: 3566
     TOTAL_CONNECTIONS: 1
CONCURRENT_CONNECTIONS: 0
        CONNECTED_TIME: 30
             BUSY_TIME: 0
              CPU_TIME: 0
        BYTES_RECEIVED: 495
            BYTES_SENT: 0
  BINLOG_BYTES_WRITTEN: 0
          ROWS_FETCHED: 27
          ROWS_UPDATED: 0
       TABLE_ROWS_READ: 0
       SELECT_COMMANDS: 11
       UPDATE_COMMANDS: 0
        OTHER_COMMANDS: 0
   COMMIT_TRANSACTIONS: 0
 ROLLBACK_TRANSACTIONS: 0
    DENIED_CONNECTIONS: 0
      LOST_CONNECTIONS: 0
         ACCESS_DENIED: 0
         EMPTY_QUERIES: 0
 TOTAL_SSL_CONNECTIONS: 1
1 row in set (0.00 sec)
- We have 22 columns, each of them a BIGINT (8 bytes), which gives us 22 × 8 = ~176 bytes per row.
- Let’s count how many rows we have in this table right now, and then check again in an hour:
mysql> select count(*) from information_schema.thread_statistics;
+----------+
| count(*) |
+----------+
|  7864343 |
+----------+
1 row in set (15.35 sec)
An hour later:
mysql> select count(*) from information_schema.thread_statistics;
+----------+
| count(*) |
+----------+
| 12190801 |
+----------+
1 row in set (24.46 sec)
- Now let’s check how much memory is currently in use:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2096 mysql     20   0  12.164g 0.010t  16036 S 173.4 18.9   2274:51 mysqld
- We have 12190801 rows in the THREAD_STATISTICS table, which at ~176 bytes per row works out to roughly 2G in memory (see the sketch below this list for the arithmetic).
- Issuing the following statement cleans up the statistics:
mysql> flush thread_statistics;

mysql> select count(*) from information_schema.thread_statistics;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
- Now, let’s check again how much memory is in use:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2096 mysql     20   0  12.164g 7.855g  16036 S  99.7 14.3   2286:24 mysqld
As we can see, memory usage drops by roughly the ~2G we had calculated earlier!
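For convenience, the back-of-the-envelope estimate from the list above can be computed in a single query, run before flushing. This is only a sketch: it counts the raw column data (22 BIGINT columns × 8 bytes ≈ 176 bytes per row) and ignores any per-row overhead of the in-memory structures.

select count(*)                               as thread_rows,
       count(*) * 22 * 8 / 1024 / 1024 / 1024 as approx_size_gb
from information_schema.thread_statistics;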
That was the root cause of the high memory usage in this case.
Conclusion
User Statistics (and thread_statistics in particular) is a great feature that lets us identify load sources and better understand server activity. At the same time, though, it can be dangerous to use as a permanent monitoring solution, because it places no limit on the memory it can consume.
As a reminder, thread_statistics is NOT enabled by default when you enable User Statistics. If you have enabled thread_statistics for monitoring purposes, please don’t forget to keep an eye on its memory footprint.
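If you need to keep it enabled, one practical precaution is to flush the accumulated rows periodically (from cron or the event scheduler, for example), or to turn per-thread collection off when it is not needed. A sketch of both options, assuming thread_statistics is the only part of User Statistics you want to drop:

-- Discard the accumulated per-thread rows and release the memory backing them
flush thread_statistics;

-- Or stop per-thread collection entirely while keeping the rest of
-- User Statistics (CLIENT_STATISTICS, USER_STATISTICS, and so on)
set global thread_statistics = OFF;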
As a next step, we are considering submitting a feature request to implement some default limits that can prevent Out of Memory issues on busy systems.