PostgreSQL 14 – Performance, Security, Usability, and Observability

Percona PostgreSQL 14

Today saw the launch of PostgreSQL 14. For me, the most important area to focus on is what the community has done to improve performance for heavy transactional workloads, along with the additional support for distributed data.

The amount of data that enterprises have to process continues to grow exponentially. Being able to scale up and out to support these workloads on PostgreSQL 14 through better handling of concurrent connections and additional features for query parallelism makes a lot of sense for performance, while the expansion of logical replication will also help.

Performance Enhancements

Server Side Enhancements

Reducing B-tree index bloat

Frequently updated indexes tend to have dead tuples that cause index bloat. Typically, these tuples are removed only when a vacuum is run. Between vacuums, as the page gets filled up, an update or insert will cause a page split – something that is not reversible. This split would occur even though dead tuples inside the existing page could have been removed, making room for additional tuples.

PostgreSQL 14 improves on this by detecting and removing dead tuples even between vacuums, allowing for a reduced number of page splits and thereby reducing index bloat.

Eagerly delete B-tree pages

To reduce B-tree overhead, the vacuum system has been enhanced to eagerly remove deleted pages. Previously, it took two vacuum cycles to do so, with the first one marking the page as deleted and the second one actually freeing up that space.

Enhancements for Specialized Use Cases

Pipeline mode for libpq

High latency connections with frequent write operations can slow down client performance as libpq waits for each transaction to be successful before sending the next one. With PostgreSQL 14, ‘pipeline mode’ has been introduced to libpq allowing the client to send multiple transactions at the same time, potentially giving a tremendous boost to performance. What’s more – because this is a client-side feature, PostgreSQL 14’s libpq can even be used with older versions of the PostgreSQL server.

Replicating in-progress transactions

Logical replication has been expanded to allow streaming in-progress transactions to subscribers. Previously, large transactions were written to disk until the transaction completed before being replicated to the subscriber. By allowing in-progress transactions to be streamed, users gain significant performance benefits along with more confidence in their distributed workloads.

Add LZ4 compression to TOAST

PostgreSQL 14 adds support for LZ4 compression for TOAST, a system used to efficiently store large data. LZ4 is a lossless compression algorithm that focuses on the speed of compression and decompression. LZ4 can be configured at the column as well as the system level. Previously, the only option was pglz compression – which is fast but has a low compression ratio.
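As a sketch, assuming a server built with LZ4 support (--with-lz4) and a hypothetical documents table, the compression method can be chosen system-wide or per column:

```sql
-- Set LZ4 as the system-wide default TOAST compression method
SET default_toast_compression = 'lz4';

-- Or choose the compression method for an individual column
CREATE TABLE documents (
    id   bigint PRIMARY KEY,
    body text COMPRESSION lz4
);
```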

Enhancements for Distributed Workloads

The PostgreSQL foreign data wrapper – postgres_fdw – has been instrumental in easing the burden of handling distributed workloads. With PostgreSQL 14, there are two major improvements in the FDW that boost performance for such transactions:

  • Query parallelism can now be leveraged for parallel table scans on foreign tables
  • Bulk insert of data is now allowed on foreign tables

Both these improvements further cement the increasing ability of PostgreSQL to scale horizontally and natively handle distributed databases.

Improvement to SQL parallelism

PostgreSQL’s support for query parallelism allows the system to engage multiple CPU cores, via additional worker processes, to execute queries in parallel, thereby drastically improving performance. PostgreSQL 14 brings more refinement to this system by allowing RETURN QUERY and REFRESH MATERIALIZED VIEW to execute queries in parallel. Improvements have also been rolled out to the performance of parallel sequential scans and nested loop joins.


SCRAM as default authentication

SCRAM-SHA-256 authentication was introduced in PostgreSQL 10 and has now been made the default in PostgreSQL 14. The previous default, MD5 authentication, has weaknesses that have been exploited in the past. SCRAM is much stronger, and it allows for easier regulatory compliance for data security.

Predefined roles

Two predefined roles have been added in PostgreSQL 14 – pg_read_all_data and pg_write_all_data. The former makes it convenient to grant a user read-only access to all tables, views, and schemas in the database. This role will have read access by default to any new tables that are created. The latter makes it convenient to grant write access to all data in the database – privileges close to a superuser’s – which means that one needs to be quite careful when using it!
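A minimal sketch of how these roles might be granted (the reporting_user and etl_user names are hypothetical):

```sql
-- Read-only access to all tables, views, and sequences in the database
GRANT pg_read_all_data TO reporting_user;

-- Write access to all data – use with care!
GRANT pg_write_all_data TO etl_user;
```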

Convenience for Application Developers

Access JSON using subscripts

From an application development perspective, I have always found support for JSON in PostgreSQL very interesting. PostgreSQL has supported this semi-structured data form since version 9.2, but it has had a unique syntax for retrieving data. In version 14, support for subscripts has been added, making it easier for developers to retrieve JSON data using a commonly recognized syntax.
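For illustration, assuming a hypothetical customers table with a jsonb column named details, the old operator syntax and the new subscript syntax look like this:

```sql
-- Before version 14: operator syntax
SELECT details -> 'address' ->> 'city' FROM customers;

-- Version 14: subscript syntax, familiar from most programming languages
SELECT details['address']['city'] FROM customers;

-- Subscripts also work for assignment
UPDATE customers SET details['status'] = '"active"';
```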

Multirange types

PostgreSQL has had Range types since version 9.2. PostgreSQL 14 now introduces ‘multirange’ support which allows for non-contiguous ranges, helping developers write simpler queries for complex sequences. A simple example of practical use would be specifying the ranges of time a meeting room is booked throughout the day.
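A sketch of the meeting-room example, using PostgreSQL 14’s multirange input syntax:

```sql
-- The non-contiguous times a meeting room is booked, as a single value
SELECT '{[2021-10-01 09:00, 2021-10-01 10:30), [2021-10-01 14:00, 2021-10-01 15:00)}'::tsmultirange;

-- Range operators work on multiranges too, e.g. containment
SELECT '{[1,5), [10,15)}'::int4multirange @> 12;  -- true
```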

OUT parameters in stored procedures

Stored procedures were added in PostgreSQL 11, giving developers transactional control in a block of code. PostgreSQL 14 implements the OUT parameter, allowing developers to return data using multiple parameters in their stored procedures. This feature will be familiar to Oracle developers and is a welcome addition for folks trying to migrate from Oracle to PostgreSQL.
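A minimal sketch, assuming a hypothetical order_items table, of a procedure that returns two values through OUT parameters:

```sql
CREATE PROCEDURE order_totals(IN  in_order_id bigint,
                              OUT item_count int,
                              OUT total numeric)
LANGUAGE plpgsql
AS $$
BEGIN
  SELECT count(*), sum(price)
    INTO item_count, total
    FROM order_items
   WHERE order_id = in_order_id;
END;
$$;

-- From plain SQL, CALL takes NULL placeholders for the OUT arguments
-- and returns their values as a result row
CALL order_totals(42, NULL, NULL);
```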


Observability is one of the biggest buzzwords of 2021, as developers want more insight into how their applications are performing over time. PostgreSQL 14 adds more features to help with monitoring, with one of the biggest changes being the move of query ID computation from the pg_stat_statements extension into the core database. This allows a query to be monitored using a single ID across several PostgreSQL systems and logging functions. This version also adds new features to track the progress of COPY, WAL activity, and replication slot statistics.

The above is a small subset of the more than 200 features and enhancements that make up the PostgreSQL 14 release. Taken altogether, this version update will help PostgreSQL continue to lead the open source database sector.

As more companies look at migrating away from Oracle or implementing new databases alongside their applications, PostgreSQL is often the best option for those who want to run on open source databases.

Read Our New White Paper:

Why Customers Choose Percona for PostgreSQL


Percona Monthly Bug Report: September 2021

Percona Bug Report Sept 2021

Here at Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open source database products and to help you manage any issues that arise in any of the databases that we support. And, in true open source form, we report back on any issues or bugs you might encounter along the way.

We constantly update our bug reports and monitor other boards to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. These posts are a central place to get information on the most noteworthy open and recently resolved bugs. 

In this September 2021 edition of our monthly bug report, we have the following list of bugs:

Percona Server for MySQL/MySQL Bugs

PS-7866 (MySQL#104961): MySQL crashes when running a complex query with JOINs. The MySQL error log contains only a single line – “Segmentation fault (core dumped)” – with no further details such as backtraces. Further analysis of the core dump shows that the crash happened in CreateIteratorFromAccessPath.

Affects Version/s: 8.0 [Tested/Reported version 8.0.23, 8.0.25]


PS-7778 (MySQL#104168): A prepared statement can fail when triggers with a DEFINER clause are used. This is a regression introduced by the WL#9384 implementation. The issue is not reproducible in lower versions (tested with 8.0.20).

Affects Version/s: 8.0 [Tested/Reported version 8.0.22, 8.0.25]

Fixed version: PS-8.0.26


MySQL#102586: When doing a multiple-table DELETE that anticipates a foreign key ON DELETE CASCADE, the statement works on the primary but breaks row-based replication.

Affects Version/s: 8.0, 5.7  [Tested/Reported version 8.0.23, 5.7.33]


Percona XtraDB Cluster

PXC-3724: A PXC node crashes with a long semaphore wait. This issue occurs due to lock contention when writing to multiple nodes in a PXC cluster. It is critical because it blocks all nodes from performing any transactions and eventually crashes the node.

Affects Version/s: 8.0  [Tested/Reported version 8.0.22]


PXC-3729: A PXC node fails with two applier threads due to a conflict. With a certain combination of a table having both a primary key and a (multi-column) unique key, and a highly concurrent write workload updating similar UK values, parallel appliers on secondary nodes may conflict with each other, causing nodes to abort in an inconsistent state. This can be seen even when writes go to a single writer node only. Since the transactions were already certified and replicated, they should never fail on the appliers.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]


PXC-3449: When ALTER TABLE (TOI) is executed in a user session, it sometimes conflicts (on MDL) with a high-priority transaction, which causes a BF-BF abort and server termination.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

Fixed Version/s: 8.0.25


Percona XtraBackup

PXB-2375: In some cases, xtrabackup writes the wrong binlog filename, position, and GTID combination in xtrabackup_binlog_info. Because of this, xtrabackup might not work as expected with GTID.

If such a backup, with the GTID position details in xtrabackup_binlog_info, is used to create a new replica, replication will most likely break due to the incorrect GTID position.

The GTID position is not consistent with the binlog file position; they are captured separately and later printed together in the xtrabackup_binlog_info file.

Workarounds to avoid this bug:

  • Use binary log coordinates instead of the GTID position
  • Take the backup during off-peak hours, since this issue mostly occurs when MySQL is under heavy write load

Affects Version/s:  8.0 [Tested/Reported version 8.0.14]


Percona Toolkit

PT-1889: pt-show-grants produces incorrect output for users based on MySQL roles; as a result, the grants cannot be applied back properly on a MySQL server. Until this issue is fixed, pt-show-grants cannot be used with MySQL roles.

Affects Version/s:  3.2.1


PT-1747: pt-online-schema-change brought the database into a broken state when applying the “rebuild_constraints” foreign key modification method if any of the child tables were blocked by a metadata lock.

Affects Version/s:  3.0.13

Fixed Version: 3.3.2


PMM  [Percona Monitoring and Management]

PMM-8421: Upgrading pmm-server to 2.19.0 breaks existing external service monitoring; internally supported services like MySQL are not impacted.

Affects Version/s: 2.19.0  [Tested/Reported version 2.19]

Fixed Version:  2.22.0


PMM-7116: MongoDB ReplSet Summary Dashboard shows incorrect replset member’s state: STARTUP instead of PRIMARY.

Affects Version/s: 2.x  [Tested/Reported version 2.12,2.20]


PMM-8824: If custom dashboards are tagged with tags different from those used for the default PMM dashboards, the upgrade to the next PMM version fails with an error.

Affects Version/s: 2.21.0  [Tested/Reported version 2.21.0]

Fixed Version:  2.22.0


PMM-4665: Frequent error messages appear in pmm-agent.log for components like the TokuDB storage engine, which is not supported by upstream MySQL. These additional messages increase the overall log file size.

Affects Version/s:  2.x  [Tested/Reported version 2.13.0]

Fixed version: 2.19.0


PMM-7846: Adding a MongoDB instance via pmm-admin with the tls option does not work, failing with the error Connection check failed: timeout (context deadline exceeded).

Affects Version/s: 2.x  [Tested/Reported version 2.13, 2.16]



We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, How to Report Bugs, Improvements, New Feature Requests for Percona Products.

For the most up-to-date information, be sure to follow us on Twitter, LinkedIn, and Facebook.

Quick References:

Percona JIRA  

MySQL Bug Report

Report a Bug in a Percona Product

MySQL 8.0.24 Release notes


About Percona:

As the only provider of distributions for all three of the most popular open source databases—PostgreSQL, MySQL, and MongoDB—Percona provides expertise, software, support, and services no matter the technology.

Whether it’s enabling developers or DBAs to realize value faster with tools, advice, and guidance, or making sure applications can scale and handle peak loads, Percona is here to help.

Percona is committed to being open source and preventing vendor lock-in. Percona contributes all changes to the upstream community for possible inclusion in future product releases.


New Experimental Environment Dashboards for Percona Monitoring and Management

Environment Dashboards Percona Monitoring and Management

As Technical Product Manager, I get a lot of user feedback, both positive and negative. In recent months many people have complained about the Home dashboard. It went something like this:

“Hey, the Home page is useless! We have several hundred services monitored by a single Percona Monitoring and Management (PMM), so this list of metrics is not providing any value when I have this many servers”.

We were happy to note that people were using PMM for big deployments, and we decided to create new types of dashboards for these users. I will describe them below. But first, what exactly is the problem with the current Home dashboard? Let’s take a look.

Percona Monitoring and Management Dashboard


The red box represents the visible part of the screen on my laptop. So to see any data, I need to scroll to the bottom of the page. But even if I do, I will see a set of small graphs not related to one another.

If I have more than three nodes, I get something like the example below:

Percona Monitoring and Management Nodes
Remember, on the laptop screen, you can only see this much:

PMM Dashboard
You can’t see the complete picture and compare the performance of individual servers; there is nothing actionable.

This dashboard is clearly not meeting the goal of giving the user a high-level overview of their infrastructure. To achieve this goal, we have created two new experimental dashboards – Environment Overview and Environment Summary.

The Environment label is already present in the current version of PMM. It lets users group their Services. There is a dedicated flag to add an environment label when you add a Service to PMM:

# pmm-admin add mongodb … --environment=environment1 ..

Here are some ideas of what you can use as an environment:

  • “production”, “development”, “testing”
  • “departmentA”, “departmentB”, “subDepartmentC”
  • “Datacenter1”, “datacenter2”, “cloud-region-east2”
  • <Your ideas here>

Setting the environment for your services will simplify the search/selection on all dashboards. The new dashboards let you group your nodes by their environment label.

Environment Overview Dashboard

The Environment Overview Dashboard is designed to be a possible replacement for the default Home dashboard. This dashboard aims to give the user a high-level view of all Environments and how they are behaving.

This dashboard presents the main parameters of all environments. It shows six graphs – three for Node metrics (CPU, Memory, Disk) per environment and three for Service metrics (Used Connections, QPS, Latency) per environment. The Service metrics are inspired by the RED Method for MySQL Performance Analyses post on the Percona Database Performance Blog.

You get some valuable data on the first screen. It provides you with a summary of the parameters of your environments and highlights apparent problems. The second screen shows the relationships between the main metrics and how they’ve changed over a selected period of time, and allows you to click on graphs and drill down for more detailed information. This will help you spot the environments where you might have a problem and know where to dig deeper to investigate it.

PMM Environment Overview Dashboard

PMM Dashboard

Environment Summary Dashboard

The Environment Summary Dashboard is designed to give you information about one specific Environment and an overview of the activities and behaviors of the Services and Nodes inside that particular environment.

Environment Summary Dashboard PMM

You can also drill down to the specific Services, Nodes, and their parameters for more details from this dashboard. This will help you see the unhealthy Service or Node.

Percona Monitoring and Management Disk Space Usage

How to Install Dashboards

  1. Get dashboards:
  2. Import dashboards to PMM2

What’s Next?

These two experimental dashboards are not yet ready to be a part of the standard PMM release, but we would LOVE to make them the default. We would appreciate your feedback on making them not just a step forward from the current home dashboard but a HUGE step forward.

If you have many servers, please test the dashboards and let us know if these dashboards provide better visibility over your infrastructure. If not, we would love to hear what sort of data you want to see in these dashboards to speed up decision-making.

You can leave your feedback on our Percona Community Forum. Please help us make Percona Monitoring and Management more useful!

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today


Horizontal Scaling in MySQL – Sharding Followup

Horizontal Scaling in MySQL Sharding

In a previous post, A Horizontal Scalability Mindset for MySQL, I discussed the concerns around growing individual MySQL instances too large and some basic strategies:

  • Optimizing/minimizing size with proper data types
  • Removing unused/duplicate indexes
  • Keeping your Primary Keys small
  • Pruning data

Finally, if those methods have been exhausted, I touched on horizontal sharding as the approach to keep individual instances at a reasonable size. When discussing my thoughts across our internal teams, there was lots of feedback that we needed to dive into the sharding topic in more detail. This post aims to give more theory and considerations around sharding along with a lightweight ProxySQL sample implementation.

What is Sharding?

Sharding is a word that is frequently used but is often misunderstood. Classic vertical scaling (often just called “scaling”) is when you increase the resources on a single server to handle more data and/or load. Horizontal scaling (aka sharding) is when you actually split your data into smaller, independent buckets and keep adding new buckets as needed.

A sharded environment has two or more groups of MySQL servers that are isolated and independent from each other. While specific implementation details are dependent on your environment, here are some general guidelines:

  • A shard can be a single server or a replication hierarchy
  • Each shard has identical schemas – you can run the same query pattern against any shard and expect to get a result set (of course varying the WHERE clause of the identifier)
  • A given identifier belongs to exactly one shard at a time – you should NOT have a Customer ID spanning shards (unless only temporarily during migration between shards for example – this is an edge case that requires special handling)

With this understanding of sharding, let’s look at some of the key considerations and challenges for actual implementation.

Sharding Considerations and Challenges

While sharding often becomes the only way to manage exploding data, most experts and architects advise that it be delayed as long as possible. If it is such a great way to handle data, then why is that? The primary reason is complexity. While vertical scaling is trivial (just add more hardware and make no other changes), sharding requires significant thought and planning. The main challenges fall into a few primary buckets:

  • How should I split my data?
  • What if my data spans shards?
  • How do I find the data I need?

Each bucket has its own set of challenges, but each can also be an overall blocker when trying to implement sharding.

How Do I Split My Data?

Picking the shard key is the MOST IMPORTANT part of implementing sharding. The main concept is to pick an identifier that is shared across records, such as CustomerID, ClientID, or UserID. Generally, you’ll want to pick the smallest unit possible (generally at the UserID level), but this also depends on what level of aggregation is needed (more on this later). The risk here is that the key isn’t selective enough and you end up not splitting the data evenly. However, if you are too selective, then you risk breaking up units that logically should remain together (think sharding on ForumPostID vs UserID).

What if My Data Spans Shards?

This is often a deciding factor that prevents an organization from sharding. Splitting the data and retrieving individual records, while not trivial, is relatively straightforward. In many workloads, this is enough. Locate my small subset of data and fetch it. When the use case dictates aggregate queries, however, this workflow breaks down. Take this common query to find the newest users across the platform:

SELECT user_id, date_created FROM users ORDER BY date_created DESC LIMIT 10;

When all the users are stored in a single server, this query is trivial. However, when the data is split across multiple shards, we now need a way to aggregate this data. Implementing this is outside the scope of this post, but the general approach looks something like this:

  1. Issue the query on every shard
  2. Collect the results from each shard in a central aggregation service
  3. Combine/filter the individual results into a single result set
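The three steps above can be sketched in Python, with hypothetical per-shard result sets standing in for real connections:

```python
import heapq

def newest_users(shard_results, limit=10):
    """Merge per-shard result sets (each already sorted newest-first)
    and return the globally newest `limit` users."""
    # Each shard contributes a list of (user_id, date_created) tuples,
    # sorted descending by date_created, at most `limit` rows each.
    merged = heapq.merge(*shard_results, key=lambda row: row[1], reverse=True)
    return [row for _, row in zip(range(limit), merged)]

# Hypothetical results gathered from three shards
shard0 = [(14, "2021-09-30"), (8, "2021-09-27")]
shard1 = [(7, "2021-09-29"), (13, "2021-09-25")]
shard2 = [(12, "2021-09-28")]

print(newest_users([shard0, shard1, shard2], limit=3))
```

The key point is that each shard only returns its own local top-N, and the aggregation layer does the final sort and cut.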

If your use case relies heavily on aggregate queries, implementing this aggregation layer is critical and often very complicated. This is also the use case that discourages so many from sharding, even when it is necessary.

In an ideal scenario, the active OLTP workload can be contained within individual shards. Reporting and data warehousing require the implementation of distributed queries, which can be managed through different systems or via custom aggregation using multi-source MySQL asynchronous replication.

How Do I Find My Data?

Now, assuming that you have chosen a sharding key, you need to be able to retrieve records when needed. This is the next major hurdle that needs to be overcome. There are two distinct components to this challenge:

  • Determining which shard holds the data
  • How to connect to that shard from my application

Finding the ShardID

Determining which shard holds the data can be as simple or complex as you want. However, with each approach, there are tradeoffs. The simplest approach uses a hash/modulus to determine the shard and looks something like this:

shardID = identifier % numShards
return shardID

The most basic example would be sharding by userID across 2 shards. UserIDs that are even would be on shard 0 and odd userIDs would be on shard 1. While incredibly simplistic, what do I do when 2 shards aren’t enough? The goal of this architecture is to just add more shards when individual shards get too big. So now, when I add new shards, the number of shards increases so the shardID may change for many of the records.
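A quick sketch (with illustrative numbers, not from the post) shows how disruptive adding a shard is under plain modulus sharding:

```python
def shard_for(identifier, num_shards):
    # Simple modulus sharding, as described above
    return identifier % num_shards

# How many of the first 1000 user IDs land on a different shard
# when we grow from 2 shards to 3?
moved = sum(1 for uid in range(1000)
            if shard_for(uid, 2) != shard_for(uid, 3))
print(moved)  # most IDs change shards, forcing a large data migration
```

This rehashing cost is exactly why more flexible lookup schemes, like the directory service below, are often preferred.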

A more flexible approach would be to use a directory service. In this approach, there is a very simple datastore that maps an identifier to a shard. To determine which shard holds the data, you simply query the datastore with the identifier and it returns the shardID. This gives incredible flexibility for moving data between shards, but also adds complexity to the system. Naturally, this service itself would need to be highly available, but doesn’t have to be in MySQL. In addition to complexity, you have potentially introduced additional latency to the system with an extra lookup.
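A directory service can be sketched as little more than a lookup table; the class below is a hypothetical in-memory stand-in for that datastore:

```python
class ShardDirectory:
    """Minimal directory service sketch: maps an identifier to a shardID.
    In production this would be a small, highly available datastore."""

    def __init__(self):
        self._map = {}

    def assign(self, identifier, shard_id):
        self._map[identifier] = shard_id

    def lookup(self, identifier):
        # One extra lookup per request: the latency cost mentioned above
        return self._map[identifier]

    def move(self, identifier, new_shard_id):
        # The flexibility modulus sharding lacks: rebalancing is one update
        self._map[identifier] = new_shard_id

directory = ShardDirectory()
directory.assign(5, 2)
directory.move(5, 0)        # migrate user 5 to shard 0
print(directory.lookup(5))  # 0
```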

Connecting to the Shard

Once we have the proper shardID, we need to connect to the database cluster in question. Again, we have options and the implementation is dependent on the application. We can look at one of two main approaches:

  • Use a different connection string for each shard
  • Leverage a proxy layer that routes intelligently

Using a different connection string is generally self-explanatory: I have a collection of shardIDs and their corresponding connection strings. When I know the shardID, I then fetch the connection string, connect to the cluster, and away we go.

But we know developers don’t want to (and shouldn’t) mess with connection strings and credentials. So a proxy layer that can route the query based on the query itself or some metadata in the query is ideal. Being a layer 7 service, ProxySQL is able to inspect the queries themselves and make decisions based on defined rules. While definitely not the only way, let’s look at a sample ProxySQL implementation that uses query comments to route to the correct shard.

Sample ProxySQL Implementation

For our example, here is the basic architecture:

  • Sharding users by UserID
  • Directory service that does simple routing based on modulus
  • ProxySQL is already used for HA
  • Data is split across 3 shards

A typical user query may look something like this:

SELECT *
FROM users
WHERE user_id = 5

With UserID being the shard key and simple modulus sharding strategy, we can identify the shardID easily for this query:

Identifier % numShards = shardID
5 % 3 = 2

With the shardID defined, we can now inject a comment into the original query that can be used by ProxySQL to properly route the query to the correct shard:

SELECT /* shard=shard_2 */ *
FROM users
WHERE user_id = 5
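The two steps – computing the shardID and injecting the hint – can be sketched as a small helper (hypothetical, not part of ProxySQL):

```python
def add_shard_hint(query, identifier, num_shards):
    # Compute the shardID via modulus and inject a routing comment
    # right after the first SELECT keyword
    shard_id = identifier % num_shards
    return query.replace("SELECT", f"SELECT /* shard=shard_{shard_id} */", 1)

query = "SELECT * FROM users WHERE user_id = 5"
print(add_shard_hint(query, identifier=5, num_shards=3))
# SELECT /* shard=shard_2 */ * FROM users WHERE user_id = 5
```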

ProxySQL uses the concept of hostgroups to define routing internally. In many HA cases, this is simply a reader and a writer hostgroup. For the sharding architecture, however, we can expand that to define hostgroups as actual shards:

INSERT INTO mysql_servers (hostgroup_id, hostname, comment) VALUES
(1,'','Shard 0'),
(2,'','Shard 1'),
(3,'','Shard 2');

With the hostgroups in place, we just need to add a ProxySQL rule to choose the hostgroup based on the injected comment:

INSERT INTO mysql_query_rules (rule_id,active,match_pattern,destination_hostgroup,apply) VALUES
(5,1,'\S*\s*\/\*\s*shard=shard_0\s*\*\/\S*\s*',1,1),
(6,1,'\S*\s*\/\*\s*shard=shard_1\s*\*\/\S*\s*',2,1),
(7,1,'\S*\s*\/\*\s*shard=shard_2\s*\*\/\S*\s*',3,1);

Now, after we issue a few queries, you can see in the ProxySQL stats that we are matching the shardID and routing accordingly:

*************************** 1. row ***************************
active: 1
hits: 3
rule_id: 6
match_digest: NULL
match_pattern: \S*\s*\/\*\s*shard=shard_1\s*\*\/\S*\s*
replace_pattern: NULL
cache_ttl: NULL
apply: 1
flagIN: 0
*************************** 2. row ***************************
active: 1
hits: 5
rule_id: 7
match_digest: NULL
match_pattern: \S*\s*\/\*\s*shard=shard_2\s*\*\/\S*\s*
replace_pattern: NULL
cache_ttl: NULL
apply: 1
flagIN: 0

Obviously, this is a massive simplification, and you’d want more generic rules and more complex hostgroups. For example, your deployment would need to account for queries where no shard hint is provided, or the case where an invalid shardID is supplied. However, it demonstrates the potential of writing a custom routing layer within the very lightweight tool ProxySQL. This approach, while it requires some additional thought and logic, could be a viable first step towards splitting your data horizontally.
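As a side note, the match patterns from the stats output can be sanity-checked outside ProxySQL, for example in Python:

```python
import re

# Match pattern from the ProxySQL rule for shard_2 (as shown in the stats output)
pattern = r"\S*\s*\/\*\s*shard=shard_2\s*\*\/\S*\s*"

hinted   = "SELECT /* shard=shard_2 */ * FROM users WHERE user_id = 5"
unhinted = "SELECT * FROM users WHERE user_id = 5"

print(bool(re.search(pattern, hinted)))    # True
print(bool(re.search(pattern, unhinted)))  # False
```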

Again, this is just a sample proof-of-concept implementation. Each use case is different. The main takeaway from this post is that manual sharding is possible, but requires significant planning and implementation work.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure, open-source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

Download Percona Distribution for MySQL Today


Percona Software Recognized with Product Awards

Percona Software Awards 2021

It’s been a while since Percona started to develop its own software products, but collecting reviews is a new experience for us. We started this activity about a year ago, and the results we got are stunning.

Your reviews said more about us than we ever could imagine. From everyone at Percona, we’d like to send a big thank you to all the users of our products. All your efforts allowed our software to be recognized by several marketplaces as the market leaders.

Percona Monitoring and Management (PMM) received 38 reviews on SourceForge and is recognized in its category as:

  • Leader 2020
  • Leader Winter 2021, Spring 2021, and Summer 2021

PMM (Percona Monitoring and Management) Sourceforge Award

Our MySQL products (Percona XtraDB, Percona XtraBackup, and Percona Server for MySQL) achieved the leading positions in the following nominations on SourceForge:

  • Top Performer Spring 2021
  • Top Performer Summer 2021
  • Percona Server for MySQL also became the Leader of Summer 2021

They collected their well-deserved 11, 14, and 47 reviews, respectively.

MySQL Sourceforge Awards

It is a pleasure to notice that SourceForge and G2 awarded Percona MongoDB products as well. Percona Server for MongoDB received 53 reviews on SourceForge and became:

  • Leader 2020
  • Leader Winter 2021, Spring 2021, and Summer 2021

G2 also highly recognized Percona Server for MongoDB. The list of well-deserved badges is extensive:

  • High Performer Spring 2021
  • High Performer in Small Businesses Spring 2021
  • High Performer Fall 2021 in Document Databases and NoSQL Databases
  • High Performer Summer 2021 in Document Databases and NoSQL Databases
  • High Performer in Small Businesses Fall 2021
  • Best Meets Requirements Fall 2021

G2 Percona Server for MongoDB Award

Percona Backup for MongoDB has 18 reviews from you on G2. G2 users consider it one of the most convenient and reliable tools for backups, earning it the following awards:

  • Leader Fall 2021
  • Leader Spring 2021
  • Leader Summer 2021

On SourceForge, it also got 29 reviews and became Leader 2020, Leader Winter 2021, Spring 2021, and Summer 2021.

Percona Backup for MongoDB G2 Award

The team at Percona highly appreciates your support and trust in our software products. This success is built on the efforts of everyone supporting open source and helping us along the way for 15 years. These achievements would not be possible without you, our loyal users, and your constant feedback. Thank you, and keep leaving your honest reviews! It encourages us in our work and gives us the strength to keep open source open.


Talking Drupal #313 – DevPanel

Welcome to Talking Drupal. Today we are talking about DevPanel with Salim Lakhani.


  • John – Nedcamp is going virtual!
  • Tara – Made lots of bread
  • Salim
    • NYC Drupal camp talks / Devops summit
    • Govcon talk
    • Book Pandemic by AG Riddle
  • Nic – Cannot use patches for Drupal 9 compatibility (more news soon)
  • DevPanel elevator pitch
  • How it works
  • Hosting providers (AWS only at the moment)
  • Adding other AWS services
  • Billing structure
  • Certifications
  • Update process
  • Is it open source
  • Features
  • Roadmap
  • DevPanel is closed source but a Public Benefit Company – what that means



Salim Lakhani – @salimlakhani


Nic Laflin – @nicxvan

John Picozzi – @johnpicozzi

Tara King – @sparklingrobots



A Horizontal Scalability Mindset for MySQL

Horizontal Scalability for MySQL

As a Technical Account Manager at Percona, I get to work with many of our largest clients. While the industry verticals vary, one core challenge generally remains the same – what do I do with all this data? Dealing with massive data sets in MySQL isn’t a new challenge, but the best approach still isn’t trivial. Each application is obviously different, but I wanted to discuss some of the main best practices around dealing with lakes of data.

Keep MySQL Instances Small

First and foremost, the architecture needs to be designed to keep each MySQL instance relatively small. A very common question I get from teams new to working with MySQL is: “So what is the largest instance size MySQL supports?”. My answer goes back to my time in consulting: “It depends”. Can my MySQL instance support a 20TB dataset? Maybe, but it depends on the workload pattern. Should I store 20TB of data in a single MySQL instance? In most cases, absolutely not.

MySQL can certainly store massive amounts of data, but an RDBMS is designed to store, write, and read that data. When the data grows that large, read performance often starts to suffer. But what if my working dataset still fits in RAM? This is often the critical consideration when sizing an instance. In that case, active read/write operations may remain fast, but what happens when you need to take a backup or ALTER a table? You are reading (and writing out) 20TB, which will always be bounded by I/O.
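To make the I/O-bound point concrete, here is a back-of-envelope sketch. The 500 MB/s sustained-read figure is an illustrative assumption, not a benchmark; plug in your own hardware numbers.

```python
def backup_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to stream `data_tb` terabytes of data at a sustained
    `throughput_mb_s` MB/s - a floor for any full physical backup."""
    total_mb = data_tb * 1024 * 1024  # TB -> MB
    return total_mb / throughput_mb_s / 3600  # seconds -> hours

# Assumed 500 MB/s sequential read throughput:
print(f"20 TB instance: ~{backup_hours(20, 500):.1f} hours")  # ~11.7 hours
print(f" 2 TB instance: ~{backup_hours(2, 500):.1f} hours")   # ~1.2 hours
```

Even with generous hardware, the 20TB backup runs for the better part of a day, while the 2TB instance finishes in roughly an hour – the same ratio applies to restores, and therefore to your RTO.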

So what is the magic number for sizing? Many large-scale shops try to keep individual instance sizes under the 2-3TB mark. This results in a few major advantages:

  • Predictable operational times (backups, alters, etc)
  • Allows for optimized and standardized hardware
  • Potential for parallelism in loading data

If I know my instance will never exceed a couple of terabytes, I can fully optimize my systems for that data size. The result is predictable, repeatable operational actions. Now, when a backup is “slow”, it is almost certainly due to hardware rather than an outlier instance that is double the size. This is a huge win for the operations team in managing the overall infrastructure. In addition to backups, consider restore time: massive backups slow restoration and have a negative impact on RTO.

Store Less Data

Now that the negative impact of large individual instances is known, let’s look at how we keep the sizes down. While seemingly obvious, the best way to keep data sizes small is to store less data. There are a few ways to approach this:

  • Optimize data types
    • If data types are bigger than needed, the result is excess disk footprint (e.g., using BIGINT when INT will suffice)
  • Review indexes for bloat
    • Limit composite Primary Keys (PKs)
    • Find and remove redundant indexes (using pt-duplicate-key-checker)
    • Avoid PKs using varchar
  • Purge old data
    • When possible, remove records not being read
    • Tools like pt-archiver can really help in this process
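The first two bullets compound each other in InnoDB, where every secondary index carries a copy of the primary key. A rough sketch of the savings from right-sizing just the PK type (the row count and index count below are hypothetical, and the 4/8-byte figures are InnoDB's storage sizes for INT and BIGINT):

```python
ROWS = 1_000_000_000   # hypothetical 1B-row table
SECONDARY_INDEXES = 4  # hypothetical index count

def pk_footprint_gb(pk_bytes: int) -> float:
    """PK storage: once in the clustered index, plus one copy
    appended to every secondary index entry (InnoDB behavior)."""
    copies = 1 + SECONDARY_INDEXES
    return ROWS * pk_bytes * copies / 1024**3

saved = pk_footprint_gb(8) - pk_footprint_gb(4)  # BIGINT PK vs INT PK
print(f"~{saved:.1f} GB saved by using INT instead of BIGINT")  # ~18.6 GB
```

Roughly 18 GB reclaimed from a single column choice – and the same multiplier penalizes wide composite or VARCHAR primary keys even harder.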

These techniques can help you delay the need for more advanced approaches. However, in some cases (due to compliance, limited flexibility, etc.), the above options aren’t possible. In other cases, you may already be doing them and are still hitting size limits.

Horizontal Sharding

So what is another way to deal with massive data sets in MySQL? When all other options are exhausted, you need to look at splitting the data horizontally and spreading it across multiple equally sized instances. Unfortunately, this is much easier said than done. While there are some tools and options out there for MySQL (such as Vitess), often the best and most flexible approach is building this sharding logic into your application directly. Sharding can be done statically (key modulus for example) or more dynamically (via a dictionary lookup) or some hybrid approach of the two:
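A minimal sketch of the two routing styles mentioned above. The shard count, shard names, and directory contents are all made up for illustration; in a real application the directory would live in a lookup table or service, not a literal dict.

```python
SHARD_COUNT = 4  # hypothetical fixed shard count

def static_shard(user_id: int) -> str:
    """Static routing: a key modulus maps each key to a fixed shard."""
    return f"shard_{user_id % SHARD_COUNT}"

# Dynamic routing: an explicit dictionary maps each key (or key range)
# to its shard, so shards can be rebalanced by editing the directory.
SHARD_DIRECTORY = {"acme_corp": "shard_2", "globex": "shard_0"}

def dynamic_shard(tenant: str) -> str:
    return SHARD_DIRECTORY[tenant]

print(static_shard(42))         # -> shard_2
print(dynamic_shard("globex"))  # -> shard_0
```

The trade-off: static routing needs no extra infrastructure but makes resharding painful (changing the modulus moves most keys), while a directory adds a lookup hop but lets you move individual tenants between shards. The hybrid approach mentioned above typically hashes keys into many logical buckets and uses a directory to map buckets to physical shards.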

Horizontal Scalability Mindset for MySQL

Sharding Considerations

When you finally have to bite the bullet and split data horizontally, there are definitely some things to keep in mind. First and foremost, picking the correct sharding key is imperative. With the wrong key, shards won’t be balanced, and you’ll end up with sizes all over the board. That recreates the original problem, where a single shard can grow too large.

Once you have the correct key, you need to understand that different workloads are impacted by sharding differently. When the data is split across shards, individual lookups are generally the easiest to implement: take the key, map it to a shard, and fetch the results. However, if the workload requires aggregate access (think reports, totals, etc.), you are now combining results from multiple shards. This is the primary challenge of horizontal sharding. As with most architectures, the business requirements and workload will dictate the design.
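The aggregate-access problem described above amounts to scatter-gather: fan the query out to every shard, then merge the partial results in the application. A toy sketch, with in-memory lists standing in for per-shard MySQL queries (all data below is invented):

```python
# Each "shard" holds (month, order_count) partials, as if returned by
# SELECT month, COUNT(*) FROM orders GROUP BY month on that shard.
shards = [
    [("2021-09", 120), ("2021-10", 80)],   # shard 0
    [("2021-09", 45)],                     # shard 1
    [("2021-10", 200), ("2021-09", 10)],   # shard 2
]

def orders_per_month() -> dict:
    """Scatter the query to every shard, then gather and sum partials."""
    totals = {}
    for shard in shards:  # in practice: one query per shard connection
        for month, count in shard:
            totals[month] = totals.get(month, 0) + count
    return totals

print(orders_per_month())  # {'2021-09': 175, '2021-10': 280}
```

Sums and counts merge cleanly like this; averages, percentiles, or DISTINCT counts do not, which is why aggregate-heavy workloads often end up feeding a separate analytics store instead of querying the shards directly.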

If your team is struggling with an exploding data set, the Professional Services team at Percona can help you design a more flexible and scalable solution. Each case is unique and our team can work with your specific use case and business requirements to guide you in the right direction. The biggest thing to remember: please don’t just keep adding hard disk space to your instances while expecting it to scale. Proper design and horizontal sharding is the critical factor as your data grows! Check out the next post in the series for more theory and considerations around sharding, along with a lightweight ProxySQL sample implementation.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

Download Percona Distribution for MySQL Today


Percona and PostgreSQL: Better Together

Percona PostgreSQL

The future of technology adoption belongs to developers.

The future of application deployment belongs in the cloud.

The future of databases belongs to open source.

That’s essentially the summary of Percona’s innovation strategy and that’s the reason I decided to join Percona – I believe the company can provide me with the perfect platform for my vision of the future of PostgreSQL.

So What Does That Mean, Really?

Gone are the days when deals were closed by executives on a golf course. That was Sales-Led Growth. The days of inbound leads based on content are also numbered. That is Marketing-Led Growth.

We are living in an age where tech decision-making is increasingly bottom-up. Where end-users get to decide which technology will suit their needs best. Where the only way to ensure a future for your PostgreSQL offerings is to delight the end users – the developers.

That’s Product-Led Growth.

PostgreSQL has been rapidly gaining popularity. The Stack Overflow Developer Survey ranked it as the most wanted database, and DB-Engines declared it the DBMS of the Year 2020. DB-Engines shows steady growth in PostgreSQL’s popularity, outpacing any other database out there. It is ACID-compliant, secure, fast, and reliable, with a liberal license, and backed by a vibrant community.

PostgreSQL is the World’s Most Advanced Open Source Relational Database.

Being a leader in open source databases, I believe Percona is well-positioned to drive a Product Led Growth strategy for PostgreSQL. We want to offer PostgreSQL that is freely available, fully supported, and certified. We want to offer you the flexibility of using it yourself, using it with our help, or letting us completely handle your database.

Our plan is to focus on the end-users, make it easy for developers to build applications using our technology, delight with our user experience, and deliver value with our expertise to all businesses – large or small.

Data is exciting. Data has the ability to tell stories. Data is here to stay. 

Data is increasingly stored in open source databases. 

PostgreSQL is the most wanted database by developers. 

That is why I believe: Percona and PostgreSQL – Better Together.

In our latest white paper, we explore how customers can optimize their PostgreSQL databases, tune them for performance, and reduce complexity while achieving the functionality and security your business needs to succeed.

Download to Learn Why Percona and PostgreSQL are Better Together


Top 4 Club Chairs For Kids That You Can Get Today

Club chairs are designed to offer comfort and style without compromising one for the other. These sophisticated pieces of furniture can easily fit into any décor – be it your living room, bedroom, or even the lobby space in your home business. They effortlessly blend classic design with modern styles to create a look that is both nostalgic and chic. Club chairs for kids are no different than adult club chairs in this regard – they offer great looks with the added perk of being super comfy.

This article will go into depth on what a club chair is, why it is called a club chair, and show you 4 astonishing club chairs for kids available at an affordable price.

What is a Club Chair?

The club chair is simply a comfortable, mid-sized upholstered seat with armrests. The term “club chair” can be used to describe any type of armchair that has a back and arms – not just the comfy mid-sized pieces. In other words, it’s a bit of a catch-all for this category of furniture. If you are looking to find the right club chair for your home or office, it is essential to know what separates a club chair from other styles.

The following qualities usually define an armchair as a “club chair”:

1) Arms that curve inwards towards the body, away from where they meet the outside edge of the chair’s frame

2) A generally sleek, straight back with no curvature

3) Low-key sides (even if they are padded), without excessive decoration like buttons or tassels that detract from its simplicity

Club chairs for kids follow these same basic rules but often feature clever designs on the front of the seat cushion or back of the chair. If you want to incorporate club chairs into your home or office décor, it is important to find pieces that follow these rules while also featuring designs and patterns that match your overall style.

Why is it Called a Club Chair?

The term “club chair” is often thought of as a dusty, old-fashioned term. However, if you take a closer look at the term, it may become clear why they gave this type of chair this name in the first place.

You see, club chairs were initially designed for use in gentlemen’s clubs and organizations way back in the mid-1700s. These pieces of furniture were a beacon of luxury and comfort during this period, which is why they used them in exclusive clubs that only members could enter.

In particular, club chairs were designed with the comfort of smokers in mind. The armrests on these types of chairs often allowed for cigar storage and places to rest one’s arms comfortably. There is no record of whether or not these chairs were ever used for drinking tea, which was a popular pastime during this period.

Regardless of its origins, the name “club chair” has come to represent any type of armchair with a sleek, mid-sized appearance and comfortable armrests.

4 Club Chairs for Kids that are Perfect for Your Child’s Bedroom or Playroom

If you are looking to spice up your child’s bedroom or playroom with a touch of class, it can be pretty tricky to find pieces that match the sophistication of adult-sized club chairs. However, there are plenty of sumptuous club chairs for kids out there that feature flying saucers, racing cars, and other fun designs – making them perfect additions to any bedroom or playroom.

The following are four unique club chairs for kids currently available at a highly affordable price:

Single Upholstered Kids Armchair

This single upholstered kids armchair features a sleek, mid-sized body with curved arms and sides. The seat cushion is slightly angled away from the chair’s frame to ensure that your child can sit comfortably in it for long periods without getting tired. It looks fantastic in any living room, bedroom or playroom.


Colors: Pink, Blue

Buy Now

Christopher Knight Home Freemont Leather Club Chair

This leather club chair features a classic mid-sized design with curved arms and sides. The brown leather of the seat cushion is easy to coordinate with existing decor, allowing you to place it in any living room or bedroom without worry. There are also silver nailhead accents on the armrests that add character to this piece of furniture. It looks great in a child’s playroom or bedroom.


Colors: Leather (Brown)

Buy Now

Keet Roundy Children’s Chair Microsuede

This round children’s chair is perfect for encouraging your child to sit in one place. It features a plush microsuede cushion with an ergonomic design that ensures your child will be sitting comfortably at all times, day or night. The dark brown microsuede blends with most existing decor, making it easy to incorporate into any bedroom.

ASIN: B07K57DH67

Colors: Charcoal, Brown, Hot Pink, Kiwi Green, Lavender, Pink

Buy Now

Christopher Knight Home Evete Tufted Fabric Club Chair

This tufted fabric club chair has a sleek, classic design that is perfectly suited to add a touch of old-school luxury to your home. It features tufted buttons on the seat cushion and curved arms with elegant rolled edges for an old-fashioned appearance. The brown puckered fabric looks great in any living room or bedroom.


Colors: Beige/Brown Pattern

Buy Now

Final Words on Kids’ Club Chairs

If you are looking to add a touch of class and sophistication to your child’s bedroom or playroom, shopping for club chairs for kids is the way to go. The four models listed above offer impressive designs that will look amazing in any bedroom or playroom – no matter what color palette you prefer. You can even opt for a more old-fashioned appearance with one of the brown club chairs for kids! All you need to do now is decide which chair is suitable for your child.

The post Top 4 Club Chairs For Kids That You Can Get Today appeared first on Comfy Bummy.


Percona Distribution for MySQL, DBaaS in Percona Monitoring and Management: Release Roundup September 27, 2021

Percona Releases Sept 27

It’s release roundup time again here at Percona!

Percona is a leading provider of unbiased open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive.

Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. They offer highlights and critical information, as well as links to the full release notes and direct links to download the software or service itself.

Today’s post includes those releases and updates that have come out since September 13, 2021. Take a look!


Percona Distribution for MySQL (PXC Variant) 8.0.23

Percona Distribution for MySQL (PXC variant) 8.0.23 was released on September 15, 2021. It provides better performance and concurrency for even the most demanding workloads. It delivers greater value to MySQL server users with optimized performance, greater scalability and availability, enhanced backups, and increased visibility.

This update of the Percona XtraDB Cluster-based variant of the Percona Distribution for MySQL includes the HAProxy version 2.3.14, which fixes the security vulnerability CVE-2021-40346.

Download Percona Distribution for MySQL (PXC variant) 8.0.23


Percona Monitoring and Management 2.22.0

Percona Monitoring and Management 2.22.0 was released on September 24, 2021. It is a best-of-breed open source database monitoring solution, helping you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed. A Release Highlight in this version is a Technical Preview, where DBaaS users can now use the PMM UI to upgrade existing Clusters to the newer version of the operator without interacting directly with Kubernetes. In addition, there are several new features, improvements, and bug fixes, which can be read about in the release notes.

Download Percona Monitoring and Management 2.22.0


Percona XtraBackup 2.4.24

On September 14, 2021, we released Percona XtraBackup 2.4.24, a free, online, open source, and complete database backup solution for all versions of Percona Server for MySQL and MySQL, with enterprise features included at no cost. It provides a method of performing a hot backup of your MySQL data while the system is running. In this release, the xbcloud binary now retries on error using incremental backoff (thanks to Baptiste Mille-Mathias for reporting the issue), and a chunk upload that failed with an SSL connect error to Amazon S3 is now retried (thanks to Tim Vaillancourt for providing the patch).

Download Percona XtraBackup 2.4.24


That’s it for this roundup, and be sure to follow us on Twitter to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments.
