Jun 10, 2024

A Practical Guide to PostgreSQL Indexes

PostgreSQL is known for its reliability and feature-rich environment, but as with any database, when datasets grow larger and query complexity increases, efficient data retrieval becomes crucial for maintaining optimal database performance. This is where indexes play a vital role, enhancing performance by allowing for faster data retrieval. Indexes are data structures that store a […]

Jul 05, 2023

The Impacts of Fragmentation in MySQL


Fragmentation is a common concern in some database systems. Highly fragmented tables can affect performance and resource allocation. But reducing fragmentation often involves rebuilding the table completely. This blog post will discuss fragmentation and its impact on InnoDB.

What is fragmentation?

We say that something is fragmented when it is formed by parts that are separate or placed in an order different from the natural one. In databases, we can encounter different types of fragmentation:

  • Segment Fragmentation: segments are stored out of data order, or there are gaps of empty pages between the data pages.
  • Tablespace Fragmentation: the tablespace is stored in non-consecutive filesystem blocks.
  • Table Fragmentation: data is not stored in primary key order (heap tables), or table pages have a significant amount of free space (heap tables or index tables).
  • Index Fragmentation: indexes usually store the data following the index order (b-tree) or in a random order (hash). In both cases, fragmentation means free space inside the index pages.
  • Row Fragmentation: single rows are spread across multiple pages.

Not all types of fragmentation can happen in every database technology, and sometimes a certain degree of fragmentation cannot be avoided. InnoDB tables are index-organized tables, meaning data is organized following the primary key order.

The principle of locality

While the principle of locality is usually related to processors and cache access patterns, it also applies to data access in general. This principle describes two data access patterns: spatial and temporal locality.

Temporal locality means that recently retrieved data is likely to be required again within a short period of time. Spatial locality tells us that related data (data stored nearby) tends to be accessed together. If your data is organized following the principle of locality, data access will be more efficient.

How does fragmentation affect the locality of data?

Table and index fragmentation often lead to database pages containing significant free space. This reduces the probability of storing frequently accessed data together and goes against the temporal locality principle.

Table fragmentation also affects the spatial locality by storing related data on different database pages. Regarding tablespace and segment fragmentation, modern storage systems tend to reduce the impact of these types of fragmentation.

The situation with row fragmentation is a bit different. We usually have three possibilities: row fragmentation is not supported, it is supported through row chaining (the partial row has a pointer to another page where the row continues), or it is supported only for large datatypes. In the last case, blobs are usually stored out-of-band in a separate area. This type of fragmentation can improve performance and efficiency.

Fragmentation in InnoDB

In InnoDB, everything is an index. Primary keys are important because they define how data will be sorted in the table. One of the effects of this design is that there is no fragmentation due to unordered data. But we still can have fragmentation caused by free space within the table pages.

To split or not to split, that is the question.

InnoDB stores the rows in pages. A new row is placed on a specific page based on the primary key. But what happens when a page becomes full? InnoDB must allocate a new page to store the new row. Here InnoDB is quite clever. Most RDBMSs perform a page split: a new page is created, and half the contents of the full page are moved to the newly allocated page, leaving two half-full pages. What InnoDB does instead is analyze the insertion pattern and, if it is sequential, create a new page and place the new row there. This is very efficient for sequential primary key inserts.

The inserts don’t need to be purely sequential; they only need to follow a direction: incremental or decremental. We will not cover the internals here; suffice it to say that each index leaf page has metadata indicating the direction of recent inserts and how many rows were inserted following it.

Random vs. Sequential inserts, effect on fragmentation

As we explained before, InnoDB has a clever method to decide whether a new row has to be inserted into a newly allocated empty page or whether it makes sense to perform a page split. This method is extremely efficient for sequential insertions because it generates the minimum number of additional pages; traditionally, it has been considered harmful for non-sequential insertions (when the primary key is random or unpredictable).

We will review the process with sequential and random insertions to understand how rows are inserted. But first, let’s see the contents of an empty page. Initially, we have some metadata and free space for new data.

InnoDB empty leaf page

Once we start inserting data into this page, it does not matter if the data is sequential or not; the page will start filling.

InnoDB Leaf with data

But once free space falls below 1/16th of the page, we must allocate new pages. How new pages are allocated and filled depends on whether the insertions are sequential or random. For sequential insertions, we have this pattern:

InnoDB sequential insertion pattern

The new data is inserted in the new leaf page. But for random insertions, we have a different behavior:

InnoDB random insertion pattern

We must reserve free space on both pages because we can’t assume where new rows will be inserted.

As new rows are inserted, the sequential inserts will continue with a low fragmentation:

InnoDB sequential primary key insertion impact on storage allocation

But what happens with random insertions? For simplicity, we are assuming that the primary keys are uniformly distributed.

Impact of random primary key insertion on storage allocation

Here we see an interesting phenomenon: when fragmentation is low, new inserts may trigger page splits that increase fragmentation. But once we reach a certain level of fragmentation, almost all pages will have enough free space to accept new rows without splits, until the fill threshold is reached again and new splits happen.

This means that random insertions lead to fragmentation that is temporary and fluctuates around a certain level.

Random inserts and deletes

The previous case covered situations where we only insert data, but what happens when we also remove it? Often we have tables where old rows are periodically purged. If the purged rows are ordered by primary key, there is no problem: emptied pages at the beginning of the index-organized table are removed completely.

Impact of InnoDB sequential primary key insertion and deletion on storage allocation

What we see here is that old rows belong to the same pages, and once they are removed, it is possible to return that page to the tablespace. Later this page will be allocated again to be used for new rows.

But what happens when inserts and deletions are random? Assuming that deletes are also random is reasonable, since the data itself is distributed randomly.

Impact of InnoDB random primary key insertion and deletion on storage allocation.

As we can see, as long as the number of rows deleted and inserted is roughly equal, this pattern will not significantly increase (or decrease) fragmentation.

The ideal situation with random inserts and deletions is having enough space to insert new rows without reaching the split point.

Additional causes of fragmentation

Three factors cause fragmentation. We analyzed the first one, how data is inserted, above. The other two are data modification (updates) and data removal (deletes).

When a null field is filled with data or a varchar field content is replaced by a longer text, the page must make room for this extra data. What happens if there is not enough free space? InnoDB will split the page into two half-full pages. This increases fragmentation. To avoid this, InnoDB reserves 1/16th of each page for data modification. This reservation is made regardless of the insertion pattern.

If you increase the size of many rows, you will generate fragmentation.

And what happens with deletes? Deletes increase fragmentation by reducing the number of records stored on affected pages. To fix this, if a page goes below 50% utilization, InnoDB will look at adjacent pages (following the index order), and if one of those pages is below 50%, it will merge both pages into one. The freed space will not return to the file system but will be reused by new pages.
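As an illustrative aside, the 50% figure above is InnoDB’s default MERGE_THRESHOLD, and it can be tuned per table or per index through the COMMENT clause so that pages are merged more or less eagerly. A minimal sketch, with placeholder table and index names:

-- Lower the merge threshold for a whole table: pages are merged only
-- when they fall below 40% utilization instead of the default 50%.
ALTER TABLE t1 COMMENT='MERGE_THRESHOLD=40';

-- The threshold can also be set for a single index.
ALTER TABLE t1 DROP INDEX idx_a,
               ADD INDEX idx_a (a) COMMENT 'MERGE_THRESHOLD=40';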

Detecting fragmentation

Currently, there is no easy way to measure page fragmentation. There is a column (avg_row_length) in the table information_schema.tables that is supposed to contain the average row length.

https://dev.mysql.com/doc/refman/8.0/en/information-schema-tables-table.html

But after some verification, we found that it is not properly calculated. We opened a bug with MySQL and Percona Support to get this fixed.

https://bugs.mysql.com/bug.php?id=107739

To calculate the fragmentation, it is feasible to write an SQL script that returns the total size of the data in the table using the information from the MySQL documentation and the actual data length of variable-length columns. We can estimate the average fragmentation by comparing that data with the number of pages allocated for data. Unfortunately, this would not provide enough information to detect fragmentation hot spots.

https://dev.mysql.com/doc/refman/8.0/en/storage-requirements.html
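As a rough illustration of that approach, the sketch below compares the allocated clustered-index size (data_length) with an expected data size computed from the column definitions and the actual length of the variable-length column. The table, schema, and per-row overhead figure are hypothetical placeholders (a table with a CHAR(32) key in a single-byte character set and a VARCHAR payload), so treat the result only as an approximation:

-- Expected bytes per row: CHAR(32) key + 1 length byte + actual VARCHAR bytes
-- + roughly 18 bytes of InnoDB row overhead (record header, TRX_ID, roll pointer).
SELECT t.data_length AS allocated_bytes,
       d.expected_bytes,
       ROUND(100 * (1 - d.expected_bytes / t.data_length), 2) AS estimated_free_pct
FROM information_schema.tables AS t
CROSS JOIN (
    SELECT SUM(32 + 1 + LENGTH(payload) + 18) AS expected_bytes
    FROM test.frag_test
) AS d
WHERE t.table_schema = 'test'
  AND t.table_name   = 'frag_test';

As noted above, a single percentage per table cannot point at fragmentation hot spots; it only tells you how much of the allocated space is not occupied by row data.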

Measuring page splits

An indirect method to identify that fragmentation is happening is measuring the number of page splits. Unfortunately, it is impossible to measure the number of page splits created during inserts or updates on a specific table.

The global statistics about InnoDB page splits are stored in the information_schema table innodb_metrics.

These statistics must be enabled by using the innodb_monitor_enable global variable.

https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_monitor_enable

https://dev.mysql.com/doc/refman/8.0/en/innodb-information-schema-metrics-table.html
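For example, the split and merge counters used later in this post can be enabled and read like this (counter names as documented for MySQL 8.0; they are disabled by default):

-- Enable the page split/merge counters
SET GLOBAL innodb_monitor_enable = 'index_page_splits';
SET GLOBAL innodb_monitor_enable = 'index_page_merge_attempts';
SET GLOBAL innodb_monitor_enable = 'index_page_merge_successful';

-- Read the server-wide values
SELECT name, count, status
FROM information_schema.innodb_metrics
WHERE name IN ('index_page_splits',
               'index_page_merge_attempts',
               'index_page_merge_successful');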

InnoDB Ruby

It is possible to analyze InnoDB’s on-disk structures using innodb_ruby, an external open source tool developed by Jeremy Cole. It is available here:

https://github.com/jeremycole/innodb_ruby

There is also a wiki page that documents application usage:

https://github.com/jeremycole/innodb_ruby/wiki

To get an overview of fragmentation for a specific table, you can use the following command:

innodb_space -f <table>.ibd space-extents-illustrate

This command returns a graphical representation of the tablespace using different formats to display space allocation on each page.

InnoDB Ruby

Reducing fragmentation

Once a table is fragmented, the only way to reduce fragmentation is to rebuild the table. The problem is that, with random inserts, the table will become fragmented again quickly: right after the rebuild there is no free space left in the pages for new rows, so randomly placed inserts soon trigger page splits.

Rebuilding the table could lead to a massive increase in fragmentation shortly after the rebuild, as page splits would bring us to a point where almost all the pages are half full.

Innodb_fill_factor

Ideally, if we perform random inserts, we must allocate enough space for new inserts after a full table rebuild. There is a global variable that does exactly this: innodb_fill_factor.

https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_fill_factor

Innodb_fill_factor defines the percentage of space on each B-tree page that is filled during a sorted index build, with the remaining space reserved for future index growth. For example, setting innodb_fill_factor to 80 reserves 20 percent of the space on each B-tree page for future index growth. Actual percentages may vary. The innodb_fill_factor setting is interpreted as a hint rather than a hard limit.

This means that if we perform random inserts and deletes and we rebuild the tables using a fill factor big enough to hold all the inserts before the purge, the table will maintain a low level of fragmentation.
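As an illustration (the table name is a placeholder), the setting only takes effect while indexes are built, so it is applied together with a table rebuild:

-- Reserve roughly 17% of each page for future growth during the next rebuild
SET GLOBAL innodb_fill_factor = 83;

-- Rebuild the clustered and secondary indexes
OPTIMIZE TABLE mytable;
-- or equivalently: ALTER TABLE mytable ENGINE=InnoDB;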

Random insert and delete tests and recommended fill factor

We performed multiple tests with different fill factors. Each test consisted of the following steps:

  1. Create a table.
  2. Insert 2,000,000 records using md5 as the function to generate the hash.
  3. Set the fill factor to the test value.
  4. Optimize the table.
  5. Repeat 400 times
    1. Insert 10,000 rows into the table.
    2. Remove 10,000 rows from the table.
    3. Measure the fragmentation.

We tested with these fill factors: 75, 80, 82, 83, 85, and 100.
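For reference, here is a simplified sketch of the kind of table and per-iteration workload used in the test; the real schema and row generation differ, and the names below are placeholders:

-- A random (md5-based) primary key, so inserts land on arbitrary pages
CREATE TABLE frag_test (
  id      CHAR(32)     NOT NULL PRIMARY KEY,
  payload VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

-- One iteration of step 5: insert 10,000 random rows...
-- (IGNORE sidesteps the tiny chance of a duplicate md5 value)
SET SESSION cte_max_recursion_depth = 10000;
INSERT IGNORE INTO frag_test (id, payload)
WITH RECURSIVE seq (n) AS (
  SELECT 1 UNION ALL SELECT n + 1 FROM seq WHERE n < 10000
)
SELECT MD5(RAND()), REPEAT('x', 100) FROM seq;

-- ...and remove roughly the same number of rows
-- (ORDER BY RAND() is slow on large tables; shown only for illustration)
DELETE FROM frag_test ORDER BY RAND() LIMIT 10000;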

Total space file size

This chart shows the initial and final space allocation.

Initial and final size of innodb file after multiple insertions and deletions.

As we can see, using a fill factor of 83 provides the best results for this test.

Page splits and merges

We also analyzed the number of page splits (the number of times a row does not fit in the corresponding page and the page needs to be split into two pages) and the number of page merges (the number of times that, after a delete operation, a page goes below 50% and InnoDB tries to merge it with adjacent pages).

Fill factor | Page splits | Merge attempts | Merge successful
75          | 63          | 4              | 3
80          | 565         | 108            | 34
82          | 1363        | 348            | 106
83          | 2063        | 658            | 203
85          | 4350        | 1324           | 318
100         | 44771       | 15527          | 2323

As we can see, page splits occur at every fill factor. With a fill factor of 100, we had one page split for every 89 rows processed, while with a fill factor of 83, we had one page split for every 1,930 rows processed (400 iterations × 10,000 inserted rows = 4,000,000 rows processed).

Fragmentation maps

We provide fragmentation maps for 75, 83, and 100 fill factors after 400 iterations.

Conclusions

Fragmentation is usually not a problem for InnoDB tables. InnoDB deals with fragmentation quite efficiently, and table rebuilds are seldom needed.

The only edge case is when data is inserted following a random primary key. In that case, results will depend on the table’s structure, the distribution of the keys, and how frequently data is inserted or removed.

For our tests, an innodb_fill_factor value of around 83 was optimal: it kept fragmentation under control, and smaller fill factors did not provide additional benefits. Your mileage may vary.

If you have a large table with random insertions, we recommend using a tool like innodb_ruby to monitor fragmentation and analyze if the table needs a rebuild with a different fill factor.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

 

Try Percona Distribution for MySQL today!

Jan 19, 2021

The MySQL Clone Wars: Plugin vs. Percona XtraBackup


Large replication topologies are quite common nowadays, and this kind of architecture often requires a quick method to rebuild a replica from another server.

The Clone Plugin, available since MySQL 8.0.17, is a great feature that allows cloning databases out of the box. It is easy to rebuild a replica or to add new nodes to a cluster using the plugin. Before the release of the plugin, the best open-source alternative was Percona XtraBackup for MySQL Databases.

In this blog post, we compare both alternatives for cloning purposes. If you need to perform backups, Percona XtraBackup is a better tool as it supports compression and incremental backups, among other features not provided by the plugin. The plugin supports compression only for network transmission, not for storage.

But one of the plugin’s strong points is simplicity. Once installed and configured, cloning a database is straightforward. Just issuing a command from the destination database is enough.
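For illustration, here is a minimal sketch of that flow on the recipient, assuming the plugin is available and the donor already has an account with the required privileges (host, user, and password below are placeholders):

-- Load the clone plugin (needed on both donor and recipient)
INSTALL PLUGIN clone SONAME 'mysql_clone.so';

-- Tell the recipient which donor it may clone from...
SET GLOBAL clone_valid_donor_list = 'donor-host:3306';

-- ...and pull a full copy of the donor, replacing the local data
CLONE INSTANCE FROM 'clone_user'@'donor-host':3306 IDENTIFIED BY 'clone_password';

The donor-side account needs the BACKUP_ADMIN privilege and the recipient-side account needs CLONE_ADMIN; after the clone finishes, the recipient is restarted (assuming a monitoring process such as systemd manages it) and replication can be configured as usual.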

Percona XtraBackup, on the other hand, is a more complex tool. The cloning process involves several stages: backup, stream, write, and prepare. Most of these stages can take place in parallel: we can stream the backup to the new server using netcat and, at the same time, write it into the destination directory. The only stage that must run sequentially is the last one: prepare.

Test Characteristics

We used sysbench to create 200 tables of 124 MB each, for a total of about 24 GB. Both source and replica virtual machines have 4 cores, 8 GB of RAM, and 60 GB of storage. We created the disks on the same datastore.

During the tests, we did not generate additional operations on the database. We measured only the clone process, reducing the benchmark complexity. Otherwise, we would have to take into consideration things like application response time, or the number of transactions executed. This is beyond the scope of this assessment.

We tested different combinations of clone and Percona XtraBackup operations. For XtraBackup, we tested 1 to 4 threads, with and without compression. In the case of compression, we allocated the same number of threads to compression and decompression. For the clone plugin, we tested auto (which lets the server decide how many threads will perform the clone) and 1 to 4 threads, also with and without compression. Finally, we executed all the tests using three different network limits: 500 Mbps, 1,000 Mbps, and 4,000 Mbps. This makes a total of 54 tests, each executed 12+1 times.

All times are in seconds. In the graphs below, lower values are better.

Method

Clone

Of the parameters required to operate the clone plugin, the following were set up on the recipient server:

  • clone_max_concurrency=<maximum number of threads> Defines the maximum number of threads used for a remote cloning operation when autotuning is enabled. Otherwise, this is the exact number of threads that remote cloning uses.
  • clone_autotune_concurrency If enabled, the clone operation uses up to clone_max_concurrency threads. The default is 16.
  • clone_enable_compression If enabled, the remote clone operation will use compression.

Percona XtraBackup

To stream the backup we used the xbstream format and sent the data to the remote server using netcat. We applied the following parameters:

  • parallel=<number of threads> Xtrabackup and xbstream parameter that defines the number of threads used for backup and restore operations.
  • rebuild-threads The number of threads used for the rebuild (prepare) operation.
  • decompress_threads and compress_threads Xtrabackup and xbstream parameters that define the number of threads used for compression operations.

Some people use additional parameters like innodb-read-io-threads, innodb-write-io-threads, or innodb-io-capacity, but these parameters only affect the behavior of InnoDB background threads. They have no impact during backup and restore operations.

Results

Clone

No compression

For the lower-bandwidth tests, the number of threads used does not make a difference. Once we increase the bandwidth, we see that the time is cut in half when we move from one thread to two. Going beyond that improves things only slightly; we probably reach the disk I/O limit.

Clone Plugin performance without compression.

The auto option is consistently the fastest one.

Compression

Compression is supposed to improve performance for lower-bandwidth connections, but we see that this is not the case here: bandwidth has no impact on execution time, and compression makes the clone slower. Again, auto gives the best results, equivalent to 4 threads.
clone plugin with compression

Percona XtraBackup

No Compression

Without compression, we see again that the number of threads does not make any difference in the lower-bandwidth test. When we increase the bandwidth, the number of threads matters, but we quickly reach I/O limits.

Percona Xtrabackup stream without compression

Compression

When using compression, the process takes less time to complete in almost every case compared with the uncompressed runs, even when bandwidth is not the limiting factor.

Percona Xtrabackup stream with compression

Conclusion

We see that, when using compression, the clone plugin is the slower option, while Percona XtraBackup gives great results at all bandwidths. Without compression, the clone plugin is faster when using more than two threads, while XtraBackup is faster with fewer threads.

Xtrabackup vs. Clone plugin - results summary

Below, we have a chart comparing the worst and best results. As expected, the worst results correspond to single-thread executions.

The Clone Plugin is a great option for simplicity. Percona XtraBackup is excellent to save bandwidth and provides better results with fewer threads. With enough threads and bandwidth available, both solutions provide comparable results.

Jan 05, 2021

MongoDB 101: 5 Configuration Options That Impact Performance and How to Set Them


As with any database platform, MongoDB performance is of paramount importance to keeping your application running quickly. In this blog post, we’ll show you five configuration options that can impact the performance of your MongoDB deployment and help keep your database fast and performing at its peak.

MongoDB Performance Overview

MongoDB performance depends on several factors: OS settings, DB configuration settings, DB internal settings, memory settings, and application settings. This post focuses on the MongoDB database configuration options that affect performance and how to set them. These are options set in the database configuration itself that can impact your performance.

Configuration Options

So how do we ensure our performance configuration options are enabled or set up correctly?  And which ones are the most important?  We’ll now go through five configuration options that will help your MongoDB environment be performant!

MongoDB uses a configuration file in the YAML file format.  The configuration file is usually found in the following locations, depending on your Operating System:

DEFAULT CONFIGURATION FILE

  • On Linux, a default /etc/mongod.conf configuration file is included when using a package manager to install MongoDB.
  • On Windows, a default <install directory>/bin/mongod.cfg configuration file is included during the installation.
  • On macOS, a default /usr/local/etc/mongod.conf configuration file is included when installing from MongoDB’s official Homebrew tap.

 

storage.wiredTiger.engineConfig.cacheSizeGB

Our first configuration option to help with your MongoDB performance is storage.wiredTiger.engineConfig.cacheSizeGB.

storage:
   wiredTiger:
       engineConfig:
           cacheSizeGB: <value>

Since MongoDB 3.0, MongoDB has used WiredTiger as its default storage engine, so we’ll be examining MongoDB memory performance from a WiredTiger perspective. By default, MongoDB reserves 50% of (available memory - 1 GB) for the WiredTiger cache, or 256 MB, whichever is greater. For example, a system with 16 GB of RAM would have a WiredTiger cache size of 7.5 GB ( 0.5 * (16 - 1) ).

The size of this cache is important to ensure WiredTiger is performant. It’s worth taking a look to see if you should alter it from the default. A good rule of thumb is that the size of the cache should be large enough to hold the entire application working set.

Note that if you’re in a containerized environment, you may also need to set the configuration option explicitly, to 50% of (the container’s memory - 1 GB). MongoDB may adhere to the container’s memory limit or it may pick up the host’s memory limit, depending on what the system call returns when MongoDB asks. You can verify what MongoDB believes the memory limit is by running:

db.hostInfo()

and checking the hostInfo.system.memLimitMB value. This is available from MongoDB 3.6.13 and 4.0.9 onward.

How do we know whether to increase or decrease our cache size? Look at the cache usage statistics:

db.serverStatus().wiredTiger.cache
{
"application threads page read from disk to cache count" : 9,
"application threads page read from disk to cache time (usecs)" : 17555,
"application threads page write from cache to disk count" : 1820,
"application threads page write from cache to disk time (usecs)" : 1052322,
"bytes allocated for updates" : 20043,
"bytes belonging to page images in the cache" : 46742,
"bytes belonging to the history store table in the cache" : 173,
"bytes currently in the cache" : 73044,
"bytes dirty in the cache cumulative" : 38638327,
"bytes not belonging to page images in the cache" : 26302,
"bytes read into cache" : 43280,
"bytes written from cache" : 20517382,
"cache overflow score" : 0,
"checkpoint blocked page eviction" : 0,
"eviction calls to get a page" : 5973,
"eviction calls to get a page found queue empty" : 4973,
"eviction calls to get a page found queue empty after locking" : 20,
"eviction currently operating in aggressive mode" : 0,
"eviction empty score" : 0,
"eviction passes of a file" : 0,
"eviction server candidate queue empty when topping up" : 0,
"eviction server candidate queue not empty when topping up" : 0,
"eviction server evicting pages" : 0,
"eviction server slept, because we did not make progress with eviction" : 735,
"eviction server unable to reach eviction goal" : 0,
"eviction server waiting for a leaf page" : 2,
"eviction state" : 64,
"eviction walk target pages histogram - 0-9" : 0,
"eviction walk target pages histogram - 10-31" : 0,
"eviction walk target pages histogram - 128 and higher" : 0,
"eviction walk target pages histogram - 32-63" : 0,
"eviction walk target pages histogram - 64-128" : 0,
"eviction walk target strategy both clean and dirty pages" : 0,
"eviction walk target strategy only clean pages" : 0,
"eviction walk target strategy only dirty pages" : 0,
"eviction walks abandoned" : 0,
"eviction walks gave up because they restarted their walk twice" : 0,
"eviction walks gave up because they saw too many pages and found no candidates" : 0,
"eviction walks gave up because they saw too many pages and found too few candidates" : 0,
"eviction walks reached end of tree" : 0,
"eviction walks started from root of tree" : 0,
"eviction walks started from saved location in tree" : 0,
"eviction worker thread active" : 4,
"eviction worker thread created" : 0,
"eviction worker thread evicting pages" : 902,
"eviction worker thread removed" : 0,
"eviction worker thread stable number" : 0,
"files with active eviction walks" : 0,
"files with new eviction walks started" : 0,
"force re-tuning of eviction workers once in a while" : 0,
"forced eviction - history store pages failed to evict while session has history store cursor open" : 0,
"forced eviction - history store pages selected while session has history store cursor open" : 0,
"forced eviction - history store pages successfully evicted while session has history store cursor open" : 0,
"forced eviction - pages evicted that were clean count" : 0,
"forced eviction - pages evicted that were clean time (usecs)" : 0,
"forced eviction - pages evicted that were dirty count" : 0,
"forced eviction - pages evicted that were dirty time (usecs)" : 0,
"forced eviction - pages selected because of too many deleted items count" : 0,
"forced eviction - pages selected count" : 0,
"forced eviction - pages selected unable to be evicted count" : 0,
"forced eviction - pages selected unable to be evicted time" : 0,
"forced eviction - session returned rollback error while force evicting due to being oldest" : 0,
"hazard pointer blocked page eviction" : 0,
"hazard pointer check calls" : 902,
"hazard pointer check entries walked" : 25,
"hazard pointer maximum array length" : 1,
"history store key truncation calls that returned restart" : 0,
"history store key truncation due to mixed timestamps" : 0,
"history store key truncation due to the key being removed from the data page" : 0,
"history store score" : 0,
"history store table insert calls" : 0,
"history store table insert calls that returned restart" : 0,
"history store table max on-disk size" : 0,
"history store table on-disk size" : 0,
"history store table out-of-order resolved updates that lose their durable timestamp" : 0,
"history store table out-of-order updates that were fixed up by moving existing records" : 0,
"history store table out-of-order updates that were fixed up during insertion" : 0,
"history store table reads" : 0,
"history store table reads missed" : 0,
"history store table reads requiring squashed modifies" : 0,
"history store table remove calls due to key truncation" : 0,
"history store table writes requiring squashed modifies" : 0,
"in-memory page passed criteria to be split" : 0,
"in-memory page splits" : 0,
"internal pages evicted" : 0,
"internal pages queued for eviction" : 0,
"internal pages seen by eviction walk" : 0,
"internal pages seen by eviction walk that are already queued" : 0,
"internal pages split during eviction" : 0,
"leaf pages split during eviction" : 0,
"maximum bytes configured" : 8053063680,
"maximum page size at eviction" : 376,
"modified pages evicted" : 902,
"modified pages evicted by application threads" : 0,
"operations timed out waiting for space in cache" : 0,
"overflow pages read into cache" : 0,
"page split during eviction deepened the tree" : 0,
"page written requiring history store records" : 0,
"pages currently held in the cache" : 24,
"pages evicted by application threads" : 0,
"pages queued for eviction" : 0,
"pages queued for eviction post lru sorting" : 0,
"pages queued for urgent eviction" : 902,
"pages queued for urgent eviction during walk" : 0,
"pages read into cache" : 20,
"pages read into cache after truncate" : 902,
"pages read into cache after truncate in prepare state" : 0,
"pages requested from the cache" : 33134,
"pages seen by eviction walk" : 0,
"pages seen by eviction walk that are already queued" : 0,
"pages selected for eviction unable to be evicted" : 0,
"pages selected for eviction unable to be evicted as the parent page has overflow items" : 0,
"pages selected for eviction unable to be evicted because of active children on an internal page" : 0,
"pages selected for eviction unable to be evicted because of failure in reconciliation" : 0,
"pages walked for eviction" : 0,
"pages written from cache" : 1822,
"pages written requiring in-memory restoration" : 0,
"percentage overhead" : 8,
"tracked bytes belonging to internal pages in the cache" : 5136,
"tracked bytes belonging to leaf pages in the cache" : 67908,
"tracked dirty bytes in the cache" : 493,
"tracked dirty pages in the cache" : 1,
"unmodified pages evicted" : 0
}

There’s a lot of data here about WiredTiger’s cache, but we can focus on a few fields from the output above, such as "maximum bytes configured", "bytes currently in the cache", "tracked dirty bytes in the cache", and the "pages read into cache" / "pages written from cache" counters.

Looking at those values, we can determine if we need to increase the size of the WiredTiger cache for our instance. Additionally, we can look at the wiredTiger.cache.pages read into cache value for read-heavy applications. If this value is consistently high, increasing the cache size may improve overall read performance.

storage.wiredTiger.engineConfig.directoryForIndexes

Our second configuration option is storage.wiredTiger.engineConfig.directoryForIndexes.

storage:
   wiredTiger:
       engineConfig:
           directoryForIndexes: <true or false>

Setting this value to true creates two directories in your storage.dbPath directory: one named collection, which will hold your collection data files, and another named index, which will hold your index data files. This allows you to create separate storage volumes for collections and indexes if you wish, which can spread the disk I/O across the volumes, although with most modern storage options you can get the same performance benefit by simply striping your disk across two volumes (RAID 0). Separating index I/O from collection I/O can reduce storage-based latencies, although index I/O is unlikely to be costly due to its smaller size.

storage.wiredTiger.collectionConfig.blockCompressor

Our third configuration option is storage.wiredTiger.collectionConfig.blockCompressor.

storage:
   wiredTiger:
       collectionConfig:
           blockCompressor: <value>

This option sets the compression options for all of your collection data.  Possible values for this parameter are none, snappy (default), zlib, and zstd.  So how does compression help your performance?  The WiredTiger cache generally stores changes uncompressed, with the exception of some very large documents. Now we need to write that uncompressed data to disk.

Compression Types:

Snappy compression is fairly straightforward: it gathers your data up to a maximum of 32 KB, compresses it, and, if compression is successful, writes the block rounded up to the nearest 4 KB.

Zlib compression works a little differently; it will gather more data and compress enough to fill a 32KB block on disk. This is more CPU-intensive but generally results in better compression ratios (independent of the inherent differences between snappy and zlib).

Zstd is a newer compression algorithm, developed by Facebook, that offers improvements over zlib (better compression rates, lower CPU usage, faster performance).

Which compression algorithm to choose depends greatly on your workload.  For most write-heavy workloads, snappy compression will perform better than zlib and zstd but will require more disk space.  For read-heavy workloads, zstd is often the best choice because of its better decompression rates.

storage.directoryPerDB

Another configuration option to help with MongoDB performance is storage.directoryPerDB.

storage:
   directoryPerDB: <true or false>

Similar to the storage.wiredTiger.engineConfig.directoryForIndexes option above, setting this value to true creates a separate directory in your storage.dbPath for each database in your MongoDB instance. This allows you to create separate storage volumes for each database if you wish, which can spread the amount of disk I/O across each volume. This can help when you have multiple databases with intensive I/O needs. Additionally, if you use this parameter in tandem with storage.wiredTiger.engineConfig.directoryForIndexes, your directory structure will look like this:

-Database_name
    -Collection
    -Index

net.compression.compressors

Our final configuration option that can help keep your database performant is the net.compression.compressors configuration option.

net:
   compression:
       compressors: <value>

This option allows you to compress the network traffic between your mongos, mongod, and even your mongo shell. There are currently three types of compression available, snappy, zlib, and zstd. Since MongoDB 3.6, compression has been enabled by default.  In MongoDB 3.6 and 4.0, snappy was the default.  Since MongoDB 4.2, the default is now snappy, zstd, and zlib compressors, in that order.  It’s also important to note that you must have at least one mutual compressor on each side of your network conversation for compression to happen.  For example, if your shell uses zlib compression but you have your mongod set to only accept snappy compression, then no compression will occur between the two. If both accept zstd compression, then zstd compression would be used between them.  When compression is set it can be very helpful in reducing replication lag and overall reducing network latency as the size of the data moving across the network is decreased, sometimes dramatically.  In cloud environments, setting this configuration option can also lead to decreased data transfer costs.

Summary

In this blog post, we’ve gone over five MongoDB configuration options that can help you build a more performant MongoDB deployment and avoid slowdowns and bottlenecks. Thanks for reading!

Additional Resources: MongoDB Best Practices 2020 Edition

Aug 05, 2019

Webinar 8/7: Performance Analyses and Troubleshooting Technologies for Databases


Please join Percona CEO Peter Zaitsev as he presents “Performance Analyses and Troubleshooting Technologies for Databases” on Wednesday, August 7th, 2019 at 11:00 AM PDT (UTC-7).

Register Now

Have you heard about the USE Method (Utilization – Saturation – Errors), RED (Rate – Errors – Duration), or Golden Signals (Latency – Traffic – Errors – Saturation)?

In this presentation, we will talk briefly about these different-but-similar “focuses” and discuss how we can apply them to data infrastructure performance analysis, troubleshooting, and monitoring.

We will use MySQL as an example, but most of this talk applies to other database technologies as well.

If you can’t attend, sign up anyway; we’ll send you the slides and recording afterward.

Feb 22, 2019

Measuring Percona Server for MySQL On-Disk Decryption Overhead


Percona Server for MySQL 8.0 comes with enterprise-grade total data encryption features. However, there is always the question of how much overhead – or performance penalty – comes with data decryption. As we saw in my networking performance post, SSL under high concurrency might be problematic. Is this the case for data decryption?

To measure any overhead, I will start with a simplified read-only workload, where data gets decrypted during read IO.

MySQL decryption schematic

During query execution, the data in memory is already decrypted so there is no additional processing time. The decryption happens only for blocks that require a read from storage.

For the benchmark I will use the following workload:

sysbench oltp_read_only --mysql-ssl=off --tables=20 --table-size=10000000 --threads=$i --time=300 --report-interval=1 --rand-type=uniform run

The data size for this workload is about 50GB, so to emulate heavy disk read IO during the benchmark I will use:

innodb_buffer_pool_size = 5GB

In the second run, I will use:

innodb_buffer_pool_size = 60GB

so that all data is kept in memory and there are no disk read IO operations.

I will only use table-level encryption at this time (i.e., no encryption for the binary log, system tablespace, or redo and undo logs).
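For context, here is a minimal sketch of what table-level encryption looks like, assuming a keyring plugin is already configured (table names are placeholders):

-- Requires a keyring plugin (e.g. keyring_file) to be loaded first
CREATE TABLE t_encrypted (
  id      BIGINT PRIMARY KEY,
  payload VARCHAR(255)
) ENGINE=InnoDB ENCRYPTION='Y';

-- An existing table can be encrypted (it is rebuilt) the same way
ALTER TABLE t_plain ENCRYPTION='Y';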

The server I am using has AES hardware CPU acceleration. Read more at https://en.wikipedia.org/wiki/AES_instruction_set

Benchmark N1, heavy read IO

benchmark heavy IO percona server for mysql 8 encryption

Threads | Encrypted storage | No encryption | Encryption overhead
1       | 389.11            | 423.47        | 1.09
4       | 1531.48           | 1673.2        | 1.09
16      | 5583.04           | 6055          | 1.08
32      | 8250.61           | 8479.61       | 1.03
64      | 8558.6            | 8574.43       | 1.00
96      | 8571.55           | 8577.9        | 1.00
128     | 8570.5            | 8580.68       | 1.00
256     | 8576.34           | 8585          | 1.00
512     | 8573.15           | 8573.73       | 1.00
1024    | 8570.2            | 8562.82       | 1.00
2048    | 8422.24           | 8286.65       | 0.98

Benchmark N2, data in memory, no read IO

benchmark data in memory percona server for mysql 8 encryption

Threads | Encryption | No encryption
1       | 578.91     | 567.65
4       | 2289.13    | 2275.12
16      | 8304.1     | 8584.06
32      | 13324.02   | 13513.39
64      | 20007.22   | 19821.48
96      | 19613.82   | 19587.56
128     | 19254.68   | 19307.82
256     | 18694.05   | 18693.93
512     | 18431.97   | 18372.13
1024    | 18571.43   | 18453.69
2048    | 18509.73   | 18332.59

Observations

For a high number of threads, there is no measurable difference between encrypted and unencrypted storage. This is because a lot of CPU resources are spent on contention and waits, so the relative time spent in decryption is negligible.

However, we can see some performance penalty for a low number of threads: up to a 9% penalty, even with hardware-accelerated decryption. When the data fully fits into memory, there is no measurable difference between encrypted and unencrypted storage.

So if you have hardware support, you should see little impact when using storage encryption with MySQL. The easiest way to check whether you have support for it is to look at the CPU flags and search for the ‘aes’ string:

> lscpu | grep aes
Flags: ... tsc_deadline_timer aes xsave avx f16c ...

Aug 07, 2018

Resource Usage Improvements in Percona Monitoring and Management 1.13


In Percona Monitoring and Management (PMM) 1.13 we have adopted Prometheus 2, and with this comes a dramatic improvement in resource usage, along with performance improvements!

What does it mean for you? This means you can have a significantly larger number of servers and database instances monitored by the same PMM installation. Or you can reduce the instance size you use to monitor your environment and save some money.

Let’s look at some stats!

CPU Usage

PMM 1.13 reduction in CPU usage by 5x

Percona Monitoring and Management 1.13 reduction in CPU usage after adopting Prometheus 2 by 8x

We can see an approximate 5x and 8x reduction of CPU usage on these two PMM Servers. Depending on the workload, we see CPU usage reductions to range between 3x and 10x.

Disk Writes

There is also less disk write bandwidth required:

PMM 1.13 reduction in disk write bandwidth

On this instance, the bandwidth reduction is “just” 1.5x. Note that this is disk IO for the entire PMM system, which includes more than the Prometheus component. Prometheus 2 itself promises a much more significant IO bandwidth reduction, according to the official benchmarks.

According to the same benchmark, you should expect disk space usage reduction by 33-50% for Prometheus 2 vs Prometheus 1.8. The numbers will be less for Percona Monitoring and Management, as it also stores Query Statistics outside of Prometheus.

Resource usage on the monitored hosts

Also, resource usage on the monitored hosts is significantly reduced:

Percona Monitoring and Management 1.13 reduction of resource usage by Prometheus 2

Why does CPU usage go down on a monitored host with a Prometheus 2 upgrade? This is because PMM uses TLS for the Prometheus to monitored host communication. Before Prometheus 2, a full handshake was performed for every scrape, taking a lot of CPU time. This was optimized with Prometheus 2, resulting in a dramatic CPU usage decrease.

Query performance is also a lot better with Prometheus 2, meaning dashboards load visibly faster, though we did not run any specific benchmarks here to share hard numbers. Note, though, that this improvement only applies when you’re querying data stored in Prometheus 2.

If you’re querying data that was originally stored in Prometheus 1.8, it will be queried through the “Remote Read” interface, which is quite a bit slower and uses a lot more CPU and memory resources.

If you love better efficiency and performance, consider upgrading to PMM 1.13!


Aug 01, 2018

Saving With MyRocks in The Cloud

The main focus of a previous blog post was the performance of MyRocks when using fast SSD devices. However, I figured that MyRocks would be beneficial for use in cloud workloads, where storage is either slow or expensive.

In that earlier post, we demonstrated the benefits of MyRocks, especially for heavy IO workloads. Meanwhile, Mark wrote in his blog that the CPU overhead in MyRocks might be significant for CPU-bound workloads, but this should not be the issue for IO-bound workloads.

In the cloud the cost of resources is a major consideration. Let’s review the annual cost for the processing and storage resources.

Resource               | Resource cost/year, $ | IO cost, $/year | Total, $/year
c5.9xlarge             | 7881                  | -               | 7881
1TB io1 5000 IOPS      | 1500                  | 3900            | 5400
1TB io1 10000 IOPS     | 1500                  | 7800            | 9300
1TB io1 15000 IOPS     | 1500                  | 11700           | 13200
1TB io1 20000 IOPS     | 1500                  | 15600           | 17100
1TB io1 30000 IOPS     | 1500                  | 23400           | 24900
3.4TB GP2 (10000 IOPS) | 4800                  | -               | 4800

 

The scenario

The server version is Percona Server 5.7.22.

I used c5.9xlarge instances. The reason for c5 is that it provides high-performance Nitro virtualization: Brendan Gregg describes this in his blog post. The rationale for the 9xlarge size was to be able to utilize io1 volumes with 30000 IOPS throughput – smaller instances cap io1 throughput at a lower level.

I also used huge gp2 volumes: 3400GB, as a volume of this size provides a guaranteed 10000 IOPS even if we do not use io1 volumes. This is a cheaper alternative to io1 volumes for achieving 10000 IOPS.

For the workload, I used sysbench-tpcc 5000W (50 tables * 100W), which for InnoDB used about 471GB of storage.

For the cache, I used 27GB and 54GB buffer pool sizes, so the workload is IO-heavy.

I wanted to compare how InnoDB and RocksDB performed under this scenario.

If you are curious I prepared my terraform+ansible deployment files here: https://github.com/vadimtk/terraform-ansible-percona

Before jumping to the results, I should note that for MyRocks I used LZ4 compression for all levels, which results in a final size of 91GB. That is about five times smaller than the InnoDB size. This alone provides operational benefits: for example, copying the InnoDB files (471GB) from a backup volume takes longer than one hour, while it is about five times faster for MyRocks.

The benchmark results

So let’s review the results.

InnoDB versus MyRocks throughput in the cloud

Or presenting average throughput in a tabular form:

Cache size (GB) | IOPS     | Engine  | Avg TPS
27              | 5000     | innodb  | 132.66
27              | 5000     | rocksdb | 481.03
27              | 10000    | innodb  | 285.93
27              | 10000    | rocksdb | 1224.14
27              | 10000gp2 | innodb  | 227.19
27              | 10000gp2 | rocksdb | 1268.89
27              | 15000    | innodb  | 436.04
27              | 15000    | rocksdb | 1839.66
27              | 20000    | innodb  | 584.69
27              | 20000    | rocksdb | 2336.94
27              | 30000    | innodb  | 753.86
27              | 30000    | rocksdb | 2508.97
54              | 5000     | innodb  | 197.51
54              | 5000     | rocksdb | 667.63
54              | 10000    | innodb  | 433.99
54              | 10000    | rocksdb | 1600.01
54              | 10000gp2 | innodb  | 326.12
54              | 10000gp2 | rocksdb | 1559.98
54              | 15000    | innodb  | 661.34
54              | 15000    | rocksdb | 2176.83
54              | 20000    | innodb  | 888.74
54              | 20000    | rocksdb | 2506.22
54              | 30000    | innodb  | 1097.31
54              | 30000    | rocksdb | 2690.91

 

We can see that MyRocks outperformed InnoDB in every single combination, but it is also important to note the following:

MyRocks on io1 5000 IOPS matched the performance that InnoDB showed on io1 15000 IOPS.

That means InnoDB requires three times more storage throughput. If we look at the storage cost, that corresponds to storage that is three times more expensive. Given that MyRocks also requires less storage space, it is possible to save even more on storage capacity.

On the most economical storage (3400GB gp2, which will provide 10000 IOPS) MyRocks showed 4.7 times better throughput.

For the 30000 IOPS storage, MyRocks was still better by 2.45 times.

However, it is worth noting that MyRocks showed a greater variance in throughput during the runs. Let’s review the charts with 1-second resolution for GP2 and io1 30000 IOPS storage:

Throughput at 1-second resolution for GP2 and io1 30000 IOPS storage, MyRocks versus InnoDB

Such variance might be problematic for workloads that require stable throughput and where periodical slowdowns are unacceptable.

Conclusion

MyRocks is suitable and beneficial not only for fast SSD, but also for cloud deployments. By requiring less IOPS, MyRocks can provide better performance and save on the storage costs.

However, before evaluating MyRocks, make sure that your workload is IO-bound, i.e., the working set is much bigger than the available memory. For CPU-intensive workloads (where the working set fits into memory), MyRocks will be less beneficial or even perform worse than InnoDB (as described in the blog post A Look at MyRocks Performance).


Jun
18
2018
--

Webinar Tues 19/6: MySQL: Scaling and High Availability – Production Experience from the Last Decade(s)

Please join Percona’s CEO, Peter Zaitsev, as he presents MySQL: Scaling and High Availability – Production Experience Over the Last Decade(s) on Tuesday, June 19th, 2018 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

 

Percona is known as the MySQL performance experts. With over 4,000 customers, we’ve studied, mastered and executed many different ways of scaling applications. Percona can help ensure your application is highly available. Come learn from our playbook, and leave this talk knowing your MySQL database will run faster and more optimized than before.

Register Now

About Peter Zaitsev, CEO

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014, 2015 and 2016.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. He has also been tapped as a contributor to Fortune and DZone, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of percona.com’s most popular downloads. Peter lives in North Carolina with his wife and two children. In his spare time, Peter enjoys travel and spending time outdoors.


May 07, 2018

Webinar Wednesday, May 9, 2018: MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM)


Please join Percona’s CEO, Peter Zaitsev, as he presents MySQL Troubleshooting and Performance Optimization with PMM on Wednesday, May 9, 2018, at 11:00 AM PDT (UTC-7) / 2:00 PM EDT (UTC-4).

Optimizing MySQL performance and troubleshooting MySQL problems are two of the most critical and challenging tasks for MySQL DBAs. The databases powering your applications must handle heavy traffic loads while remaining responsive and stable so that you can deliver an excellent user experience. Further, DBAs’ bosses expect solutions that are cost-efficient.

In this webinar, Peter discusses how you can optimize and troubleshoot MySQL performance and demonstrates how Percona Monitoring and Management (PMM) enables you to solve these challenges using free and open source software. We will look at specific, common MySQL problems and review the essential components in PMM that allow you to diagnose and resolve them.

Register for the webinar now.

Peter Zaitsev, CEO

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. The Inc. 5000 recognized Percona in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High-Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High-Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. He was also tapped as a contributor to Fortune and DZone, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

