If you use Percona Server 5.5 and have configured it to use multiple buffer pool instances, then sooner or later you’ll see lines like the following in the server’s error log, and chances are you’ll be worried about them:
InnoDB: detected cycle in LRU for buffer pool 5, skipping to next buffer pool.
InnoDB: detected cycle in LRU for buffer pool 3, skipping to next buffer pool.
InnoDB: detected cycle in LRU for buffer pool 7, skipping to next buffer pool.
Worry not, as this is mostly harmless. It’s becoming a February tradition for me (Fernando) to face a question about this subject (OK, maybe it’s a coincidence) and this time I’ve teamed up with my dear colleague and software engineer George Lorch to provide you with the most complete blog post ever published on this topic … (with a belated thank you! to Ernie Souhrada, with whom I also discussed this same matter one year ago).
InnoDB internals: what is “LRU” ?
There’s a short and to-the-point section of the MySQL manual that explains clearly what the InnoDB buffer pool is, how it operates and why it plays such an important role in MySQL performance. If you’re interested in understanding InnoDB internals then that page is a good start. In this section we’ll restrict ourselves to explaining what the “LRU” in our subject message is, so we’ll only dig slightly into InnoDB internals, just enough to provide some context. Here’s a quick introduction to the buffer pool, quoting from the above manual page:
InnoDB maintains a storage area called the buffer pool for caching data and indexes in memory. (…) Ideally, you set the size of the buffer pool to as large a value as practical, leaving enough memory for other processes on the server to run without excessive paging. The larger the buffer pool, the more InnoDB acts like an in-memory database, reading data from disk once and then accessing the data from memory during subsequent reads.
In practice, however, we can rarely fit our whole dataset inside the InnoDB buffer pool, so there must be a process to manage this limited pool of memory pages:
InnoDB manages the pool as a list, using a variation of the least recently used (LRU) algorithm. When room is needed to add a new block to the pool, InnoDB evicts the least recently used block and adds the new block to the middle of the list.
There you go: InnoDB employs a variation of the Least Recently Used algorithm called the midpoint insertion strategy to manage the pages within the buffer pool. We should mention that it does make exceptions, such as during a full table scan, when it knows the loaded pages might end up being read only a single time.
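As an aside, you can inspect (and tune) how this midpoint insertion behaves from SQL. The snippet below is only a quick illustration using the stock server variables that control the size of the “old” sublist and how long a page must sit there before being promoted to the “new” (hot) end of the list:

SHOW GLOBAL VARIABLES LIKE 'innodb_old_blocks%';
-- innodb_old_blocks_pct: percentage of the LRU list reserved for the "old" sublist (37 by default)
-- innodb_old_blocks_time: time in ms a page must stay in the old sublist before it can be promoted

-- example: make one-off scans less likely to push genuinely hot pages out of the buffer pool
SET GLOBAL innodb_old_blocks_time = 1000;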
Dumping and reloading the buffer pool
Before we can get to the main point of this article, let’s first examine why you would want to dump the buffer pool to disk in the first place, since that operation is at the core of the matter here: it is while it runs that the warning messages we’re discussing may appear.
When you start a MySQL server the buffer pool is empty by default. Performance is at its worst at this point because no data can be found in memory, so in practice each request for data results in an I/O operation to retrieve it from disk and bring it into memory. With time the buffer pool gets filled and performance improves – more and more data can now be found in memory. With yet more time we reach a peak performance state: the buffer pool is not only full but filled with the most popular data. The time between the start of the server and reaching this optimum state is called the server warm up. How long it takes depends mostly on two things: the size of the buffer pool and the level of activity of the server – the less busy it is, the fewer requests it gets and thus the more time is needed until the popular data is fully loaded.
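If you want a rough idea of how warm the buffer pool is at a given moment, the standard InnoDB status counters can help. A minimal sketch (what counts as “warm enough” will depend on your workload):

SHOW GLOBAL STATUS WHERE Variable_name IN
    ('Innodb_buffer_pool_pages_data', 'Innodb_buffer_pool_pages_total',
     'Innodb_buffer_pool_read_requests', 'Innodb_buffer_pool_reads');
-- pages_data vs pages_total shows how full the pool is;
-- read_requests (logical reads) vs reads (reads that had to hit the disk)
-- shows how often requests are already being served from memory.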
Now, there could be a shortcut: what if we save the buffer pool to a disk file before we stop MySQL? We could later use it to reload the buffer pool to an optimum state when we restart the server, thus decreasing the warm up period dramatically.
Percona was a pioneer in this field compared to other MySQL distributions and implemented this functionality in Percona Server 5.5. Later on, MySQL 5.6 was released with similar functionality, which also allowed preloading the buffer pool for a faster warm up. Percona Server 5.6 incorporates this upstream feature, effectively replacing its own implementation. However, while in Percona Server 5.5 we could periodically dump the buffer pool, in MySQL and Percona Server 5.6 it is only dumped at shutdown or on request.
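For reference, enabling the upstream implementation in MySQL or Percona Server 5.6 typically looks like the configuration sketched below (in Percona Server 5.5 the equivalent behavior is driven by innodb_buffer_pool_restore_at_startup, which also sets the periodic dump interval in seconds – check the Percona Server documentation for your exact version):

[mysqld]
# dump the buffer pool fingerprint when the server shuts down ...
innodb_buffer_pool_dump_at_shutdown = ON
# ... and reload it when the server starts, shortening the warm up
innodb_buffer_pool_load_at_startup  = ON
# name of the dump file, created in the datadir
innodb_buffer_pool_filename         = ib_buffer_pool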
“Detected cycle in LRU”
In the section above we introduced a functionality that allows us to dump a fingerprint of the buffer pool to disk so we can later reload it at server restart (note that even though the buffer pool might be very large, the fingerprint will be small enough to make this practical). What we didn’t mention is that this is even more useful outside of maintenance time and planned shutdowns – that is, when the server crashes. When a crash happens it’s all the more important to bring the server back to a warmed up state soon, so it can resume serving data fast enough. And given that we cannot predict a crash, the only way we can arrange to have a reasonably recent dump of the buffer pool on disk is by taking one often.
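In MySQL and Percona Server 5.6, where there is no built-in periodic dump, you can approximate “dumping often” by triggering a dump yourself, for example from a cron job or an event. A rough sketch of the statements involved:

-- trigger a dump of the current LRU lists right away
SET GLOBAL innodb_buffer_pool_dump_now = ON;
-- check how the last dump (or load) went
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_dump_status';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status';
-- an existing dump can also be reloaded on demand
SET GLOBAL innodb_buffer_pool_load_now = ON;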
While the buffer pool is divided into pages for the efficiency of high-volume read operations, it is implemented as a linked list of pages for the efficiency of cache management. During the process of dumping the buffer pool to disk a mutex is acquired on the LRU list. However, this mutex is not held for the duration of the process – it is periodically released to prevent stalling the system. The problem is that in between the release of the mutex and the moment it is acquired again the list may get reshuffled. Since the dump keeps a pointer to its position across the mutex boundary, it can end up following an artificial cycle.
Let’s consider a linked list:
A > B > C > D > E
where each letter corresponds to a memory page. Now let’s say the initial dump was partially taken and covered the first three pages, “A > B > C”, placing a pointer on “C” before releasing the mutex. Once the mutex is reacquired the list has been reshuffled: “A > C > B > D > E”. Resuming from the stored pointer, the dump continues with the pages that now follow “C” – joining the partial list already copied with the reshuffled one, “(A > B > C) > B > D > E” – and visits “B” a second time. When the dumping process detects such a cycle in the LRU list it stops copying from that buffer pool instance, emits the warning message, and moves on to the next buffer pool instance – otherwise it could keep dumping in an infinite loop.
How harmless are those messages ?
It is fairly harmless, except for the fact that you will only have a partial LRU list dump for that buffer pool instance – that is, until the next dump occurs. If the server crashes or is shut down before the next dump takes place, the existing one won’t be fully up to date for the server warm up – it will still be used and will still provide a partially filled, somewhat “warm” buffer pool, just not as optimal as it could have been if the last dump had been taken in full.