Percona XtraBackup Changing to Strict by Default

Backups are a key part of a disaster recovery strategy, ensuring you can restore or continue your business after an unwanted event affects your data.

We are always working to improve Percona XtraBackup's reliability, favoring consistency and surfacing unwanted outcomes as early as possible in the process.

Enabling --strict by Default

As of the upcoming 8.0.27 release, XtraBackup will no longer accept invalid parameters. Parameter validation has always been a difficult task for XtraBackup because it mixes server-side and XtraBackup-only parameters.

Starting with Percona XtraBackup 8.0.7, we implemented PXB-1493, which added the --strict option, defaulting to false.

This option issues a warning for each parameter/option that XtraBackup does not recognize. 

Having this as a warning is not good enough for a few reasons:

  1. It’s easy for it to go unnoticed, as we rarely read every line of the backup output, paying attention only to the end to validate that the backup completed OK.
  2. Having it as a warning doesn’t prevent us from shooting ourselves in the foot. As an example, when running --prepare on an incremental backup that is not the last one, mistyping the --apply-log-only parameter means we execute the rollback phase, rolling back in-flight transactions that might have been completed in the next incremental backup.
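To illustrate the second point, the usual incremental --prepare sequence looks roughly like this (a sketch with hypothetical paths; `xtrabackup` is stubbed with echo so the listing is safe to display anywhere — remove the stub to actually run it). Mistyping --apply-log-only in the middle step previously produced only a warning, and the rollback phase would then run too early.

```shell
#!/bin/sh
# Stub so the sketch runs without xtrabackup installed (assumption: real tool on PATH in production).
xtrabackup() { echo "xtrabackup $*"; }

# 1) Prepare the base backup; --apply-log-only skips the rollback phase:
xtrabackup --prepare --apply-log-only --target-dir=/backups/full
# 2) Apply every incremental except the last one, still with --apply-log-only:
xtrabackup --prepare --apply-log-only \
  --target-dir=/backups/full --incremental-dir=/backups/inc1
# 3) Last incremental: no --apply-log-only, so the rollback phase finally runs:
xtrabackup --prepare --target-dir=/backups/full --incremental-dir=/backups/inc2
```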

And there are more.

With all the above in mind, we have decided to enable --strict mode by default.

What Does it Mean For You?

From the 8.0.27 release onward, XtraBackup will fail if you use an invalid option, whether in the [xtrabackup] group of your configuration file or passed as a command-line argument.

For the next few releases, we will still allow you to switch back to the old behavior by passing --skip-strict, --strict=0, or --strict=OFF.
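For example, keeping the old behavior via the configuration file might look like this (a sketch; the option can equally be passed on the command line):

```ini
[xtrabackup]
# Temporarily revert to pre-8.0.27 behavior: unknown options warn instead of failing.
# This escape hatch is slated for removal in a future release.
skip-strict
```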

In the future, --strict will be the only behavior, and disabling it will not be possible.


Fisher-Price Space Saver High Chair – When Space Is At The Essence Of Our Everyday Living

Fisher-Price has been a good name in the baby products industry for quite some time now. They are known for producing high-quality products that are safe, durable, and efficient. The Fisher-Price Space Saver High Chair is one such product, and it has managed to establish itself as a part of our daily lives.

About the Fisher-Price Space Saver High Chair

The Fisher-Price SpaceSaver High Chair is one of the most popular high chairs in Europe and the US. It lets you keep your child close when they need it most: a full-size high chair is too big for a tiny apartment or a small space in a guest room, but a regular booster seat isn’t safe enough when they start to feed themselves.

When the Fisher-Price high chair was introduced in 2009, it quickly landed on top of the ‘must have’ lists for moms with mobile little ones who prefer to roam around when eating.

The Fisher-Price Space Saver High Chair is simple to use, super safe, and it’s easy to clean. It also comes in a variety of colors so you can find the perfect match for your décor. But let’s take a closer look at this great device:
From a safety point of view, the Fisher-Price Space Saver High Chair has many features that make it safe for you and your child. The five-point harness keeps the baby safely in place while feeding themselves or being fed by others, so there is no need to worry.

You can choose from three different height settings according to what is comfortable for you when using this high chair which makes it easy on your back, and the adjustable tray is removable so you can clean it easily. The Fisher-Price Space Saver High Chair is also easy to transport from home to a grandparent’s house or even on holiday, making it one of the best high chairs for travel.

The size of this chair makes it possible to keep your baby close while they are eating or playing on their own. This means that you can enjoy your meals together while at the same time keeping an eye on them to make sure they are safe and happy.

From a practical point of view, having a high chair in your dining room is not ideal when you have little space. But with this baby device, you don’t sacrifice safety for living space: the Fisher-Price Space Saver High Chair takes up only a little corner of your dining room.

The price of this product is very reasonable, and it’s more affordable than some other high chairs, which also makes it good value for money compared to others on the market. Some people opt for another type of chair, like a booster seat, because they don’t want to spend so much; but when you do, think about whether that is the safest option for your child. The fact that Fisher-Price has considered all this when designing this product is evident.

There are some disadvantages to owning this chair. One of which is that it can be a bit tricky to assemble, and another is that some people find it hard to adjust the height settings. But considering its many benefits, I think this device is definitely worth having in your home.

If you are thinking about purchasing a high chair for your baby, then you should take a good look at the Fisher-Price Space Saver High Chair. It’s easy to use, has excellent features, and does the job perfectly.

The Fisher-Price Space Saver High Chair is a baby feeding chair that fits seamlessly into your modern-day family life. The chair’s design was in direct response to moms who wanted convenience and simplicity without compromising on safety, comfort, or style for their babies.

The Fisher-Price Space Saver High Chair features:

  • Adjustable seat: 2 height adjustments & 3 recline positions
  • Machine-washable seat pad
  • Dishwasher-safe, segmented tray
  • Reusable tray liner
  • Safety harness
  • Toddler booster mode

Is The Fisher-Price Space Saver High Chair worth it?

The Fisher-Price Space Saver High Chair is easy to clean and has adjustable heights and reclines, making it practical for feeding or playing. The price is reasonable compared to other chairs with the same features.

The Space Saver High Chair has a removable dishwasher-safe tray and a 3-point harness system that keeps your baby safe and secure. While seated, your baby gets a deep seat with soft polyester fabric, a machine-washable seat pad, a 5-point restraint belt for securing the child in the booster seat, and an adjustable height feature that allows you to bring the tray table to your baby.

The Fisher-Price Space Saver High Chair folds down flat for storage or travel. You can use this chair until your little one reaches up to 35 pounds, usually around 1 year of age. The Fisher-Price Space Saver High Chair will give you many years of enjoyment and help to feed your little one in complete safety and comfort.

This high chair is recommended by pediatricians and moms due to its safety standards & compact size. But some parents had issues with the height settings and putting it together.

However, we think that considering all the pros and some minor cons, this chair is still worth having in your home. And if you’re looking for a high chair without all those extra bells & whistles, then this may be something to consider for your family.

If this sounds just like what you have been looking for, don’t wait any longer! Buy the ultimate high chair from Amazon and enjoy your baby feeding time to the fullest!

How to assemble The Fisher-Price Space Saver High Chair

Assembling the Fisher-Price Space Saver High Chair can be a pain if you don’t know what you are doing. It’s not rocket science, but every little part of this device must be in its place.

This is what you need to do to assemble the chair:

  1. Place the seat pad on top of the metal frame and push the two pieces together to attach them.
  2. Place the shoulder straps in between the seat pad and metal frame to connect them.
  3. Feed both armrests through the holes on the back of the metal frame, and you should hear a ‘click’ sound indicating that they are firmly attached.
  4. Make sure that all joints and bolts are correctly tightened.

Now that your chair is assembled, you can start using it as soon as possible. You just have to put the adjustable tray in place and strap your child in with the safety buckle.

Our opinion on Fisher-Price Space Saver High Chair

The Fisher-Price Space Saver High Chair is a great product. It’s simple and does what it’s supposed to do well. Is there anything else to say? We love this baby chair because it allows you to eat dinner as a family without sacrificing your child’s safety. After all, the most important thing is that they are secure and happy.

A quick look at some websites that I trust for buying baby gear reveals a fantastic 95% very-positive review rate, which is definitely a good score, showing how much people love this product.

We love this high chair because it is the perfect solution for small homes, as well as a great alternative to standard high chairs or those bulky baby feeding gadgets that take up too much space. The Fisher-Price Space Saver High Chair is safe and easy to use, so I highly recommend it.

The post Fisher-Price Space Saver High Chair – When Space Is At The Essence Of Our Everyday Living appeared first on Comfy Bummy.


Attack No-PK Replication Lag with MySQL/Percona Server 8 Invisible Columns!

The most common issue when using row-based replication (RBR) is replication lag due to the lack of primary keys.

The problem is that any replicated DML will do a full table scan for each modified row on the replica. This bug report explains it more in-depth: https://bugs.mysql.com/bug.php?id=53375

For example, if a delete is executed on the following table definition:

CREATE TABLE `joinit` (
  `i` int NOT NULL,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int NOT NULL
);


With this amount of rows:

mysql> select count(*) from joinit;
| count(*) |
|  1048576 |


The delete being:

mysql> flush status ;

mysql> delete from joinit where i > 5 and i < 150;
Query OK, 88 rows affected (0.04 sec)

mysql> show status like '%handler%';
| Variable_name              | Value   |
| Handler_commit             | 2       |
| Handler_delete             | 1       |
| Handler_read_rnd_next      | 1048577 |

It can be seen that the delete on the primary requires a full table scan (Handler_read_rnd_next equals the row count + 1) to delete just 88 rows.

The additional problem is that each of the rows being deleted will be recorded in the binary log individually like this:

#220112 18:29:05 server id 1  end_log_pos 3248339 CRC32 0xdd9d1cb2 Delete_rows: table id 106 flags: STMT_END_F
### DELETE FROM `test2`.`joinit`
###   @1=6
###   @2='764d302b-73d5-11ec-afc8-00163ef3b519'
###   @3='18:28:39'
###   @4=27
### DELETE FROM `test2`.`joinit`
###   @1=7
###   @2='764d30bc-73d5-11ec-afc8-00163ef3b519'
###   @3='18:28:39'
###   @4=5
{88 items}

Which will result in 88 full table scans on the replica, and hence the performance degradation.

For these cases, the recommendation is to add a primary key to the table, but sometimes adding a PK might not be easy because:

  • There are no existing columns that could be considered a PK.
  • Or adding a new column (as the PK) is not possible as it might impact queries from a 3rd party tool that we have no control over (or too complex to fix with query rewrite plugin).
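Before deciding, it helps to know which tables are affected. Candidate tables can be located with an information_schema anti-join along these lines (a sketch, using standard MySQL 8.0 system tables):

```sql
-- Find user tables that have no PRIMARY KEY
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON  c.table_schema    = t.table_schema
       AND c.table_name      = t.table_name
       AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN
      ('mysql', 'sys', 'information_schema', 'performance_schema');
```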

The solution is to use MySQL/Percona Server for MySQL 8 and add an invisible primary key column.

Adding a new invisible column (named “newc”) as the primary key can be done with a single ALTER TABLE statement.
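Inferred from the resulting table definition shown below, the statement would be along these lines (a sketch; the INT type and FIRST placement are assumptions):

```sql
ALTER TABLE joinit
  ADD COLUMN newc INT NOT NULL AUTO_INCREMENT INVISIBLE PRIMARY KEY FIRST;
```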


Note: adding a PK is an expensive operation that requires a table rebuild.

After adding an invisible PK, the table will look like this:

CREATE TABLE `joinit` (
  `newc` int NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
  `i` int NOT NULL,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int NOT NULL,
  PRIMARY KEY (`newc`)
);


Deleting a row now will be recorded in the binary log like this:

### DELETE FROM `test`.`joinit`
###   @1=1048577
###   @2=1
###   @3='string'
###   @4='17:23:04'
###   @5=5
# at 430
#220112 17:24:56 server id 1  end_log_pos 461 CRC32 0x826f3af6 Xid = 71

Where @1 is the first column (the PK in this case), which the replica can use to find the matching row without having to do a full table scan.

The operation executed on the replica would be similar to the following which requires only one scan to find the matching row:

mysql> flush status ;
Query OK, 0 rows affected (0.01 sec)

mysql> delete from joinit where newc = 1048578;
Query OK, 1 row affected (0.00 sec)

mysql> show status like '%handler%';
| Variable_name              | Value |
| Handler_commit             | 2     |
| Handler_read_key           | 1     |
| Handler_read_rnd_next      | 0     |


Also, as the name suggests, an invisible column won’t show up, nor does it need to be referenced when doing operations on the table, e.g.:

mysql> select * from joinit limit 2; 
| i | s                                    | t        | g  |
| 2 | ecc6cbed-73c9-11ec-afc8-00163ef3b519 | 17:06:03 | 58 |
| 3 | ecc7d9bb-73c9-11ec-afc8-00163ef3b519 | 17:06:03 | 56 |
2 rows in set (0.00 sec)

mysql> insert into joinit values (4, "string", now(), 5);
Query OK, 1 row affected (0.01 sec)


But if needed, the new column (newc) can be fetched if explicitly queried:

mysql> select newc, i, s, t, g from joinit limit 2; 
| newc | i | s                                    | t        | g  |
|    1 | 2 | ecc6cbed-73c9-11ec-afc8-00163ef3b519 | 17:06:03 | 58 |
|    2 | 3 | ecc7d9bb-73c9-11ec-afc8-00163ef3b519 | 17:06:03 | 56 |
2 rows in set (0.00 sec)


What If…?

What if MySQL automatically detected that the PK is missing for InnoDB tables and added the invisible PK itself?

Taking into account that an internal six-byte PK is already added when the PK is missing, it might be a good idea to also allow making that PK visible if you need it.

This means that when you execute this CREATE TABLE statement:

CREATE TABLE `joinit` (
  `i` int NOT NULL,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int NOT NULL
);


it would be automatically translated to:

CREATE TABLE `joinit` (
  `newc` int NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
  `i` int NOT NULL,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int NOT NULL,
  PRIMARY KEY (`newc`)
);


And then we can execute a command to make it visible.
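In MySQL 8.0, column visibility is toggled with ALTER TABLE … SET VISIBLE; for the example above, presumably it would be:

```sql
ALTER TABLE joinit ALTER COLUMN newc SET VISIBLE;
```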


Missing primary keys are a problem for scaling databases, as replication will require a full table scan for each updated/deleted row, and the more data, the more lag.

Adding a PK might not always be possible because of 3rd party tools or restrictions, but adding an invisible primary key will do the trick and deliver the benefits of a PK without impacting the syntax and operations of 3rd party clients/tools. What would be awesome is for MySQL to detect the missing PK, add it automatically, and let you change it to visible if you need to.


Talking Drupal #330 – Remote Development on a LAN

Today we are talking about Remote Development on a LAN.



  • John – Don’t Look Up
  • Stephen – Docker Desktop no longer free
  • Nic – Snowman – Computer
  • Remote development on a LAN
  • What remote development is
  • History behind approach
    • Mixed env
    • Solution for overheating on Mac and performance
    • BLT
    • 30 seconds on Linux 20 minutes on Mac
  • Key concepts
    • Best tool
    • Desktop of choice
    • Linux box
      • LAMP
      • Code
      • Git
    • Desktop
      • Browser
      • IDE
      • Terminal
  • Benefits
  • Challenges
  • Key takeaways
  • Getting started



Nic Laflin – www.nLighteneddevelopment.com – @nicxvan
John Picozzi – www.epam.com – @johnpicozzi
Stephen Cross – @stephencross


  • External Links: a small module used to differentiate between internal and external links. Using jQuery, it will find all external links on a page and add an external icon indicating it will take you offsite, or a mail icon for mailto: links.

Comparing Graviton (ARM) Performance to Intel and AMD for MySQL (Part 3)

Recently we published the first part (m5, m5a, m6g) and the second part (C5, C5a, C6g) of our research comparing Graviton (ARM) with AMD and Intel CPUs on AWS. We selected general-purpose EC2 instances with the same configurations (the same number of vCPUs in the first part); in the second part, we compared compute-optimized EC2 instances under the same conditions. The main goal was to see the trend and make a general comparison of CPU types on the AWS platform for MySQL only; we did not set out to benchmark the CPU types themselves. Our expertise is in MySQL performance tuning. We share the research “as is” with all scripts, and anyone interested can rerun and reproduce it.
All scripts, raw logs, and additional plots are available on GitHub (arm_cpu_comparison_m5, csv_file_with_all_data_m5, arm_cpu_comparison_m6, csv_file_with_all_data_m6).

We were happy to see the reactions of our Percona Blog readers to our research, and we are open to any feedback. If anyone has ideas on improving our methodology, we would be happy to incorporate them.

This post is a continuation of the research, based on our interest in general-purpose EC2 (and, of course, because we saw that our audience wanted to see it). The main inspiration for this research was readers’ feedback that we had compared different generations of instances, especially old AMD instances (m5a.*), against the latest Graviton instances (m6g.*). Additionally, we decided to use the latest Intel instances (m6i.*) too.

Today, we will talk about the latest general-purpose EC2 instances on AWS: m6i, m6a, and m6g (complete list in the appendix).

Short Conclusion:

  1. In most cases for m6i, m6g, and m6a instances, Intel shows better throughput for MySQL read transactions. However, AMD instances come pretty close to Intel’s results.
  2. In some cases Intel shows a significant advantage: up to almost 200k rps (almost 45% better) than Graviton. However, AMD’s gap wasn’t as significant as in previous results.
    Unfortunately, we compared Graviton with the others, so we didn’t concentrate on comparing AMD with Intel.
  3. In a few words: m6i instances (with Intel) are the best in their class for MySQL performance among m6a and m6g instances. This advantage starts at 5%-10% and can reach up to 45% compared with the other CPUs.
  4. But Graviton instances are still cheaper.

Details, or How We Got Our Short Conclusion:


  1. Tests were run on m6i.* (Intel), m6a.* (AMD), and m6g.* (Graviton) EC2 instances in the us-east-1 region (see the list of EC2 instances in the appendix). We selected only the same class of instances, without additional upgrades; the main goal was to take instances that differ only in CPU type and identify their performance for MySQL.
  2. Monitoring was done with Percona Monitoring and Management (PMM).
  3. OS: Ubuntu 20.04 LTS 
  4. Load tool (sysbench) and target DB (MySQL) installed on the same EC2 instance.
  5. Oracle MySQL Community Server 8.0.26-0, installed from the official Ubuntu repository packages.
  6. Load tool: sysbench —  1.0.18
  7. innodb_buffer_pool_size=80% of available RAM
  8. Test duration was five minutes for each thread count, followed by a 90-second cool-down before the next iteration.
  9. Tests were run four times independently (to smooth outliers and make the results more reproducible), and the results were averaged for the graphs. The graphs also show the min and max values observed during the tests, which indicate the range of variance.
  10. We use the “high-concurrency” scenario definition for scenarios where the number of threads is bigger than the number of vCPUs, and the “low-concurrency” definition for scenarios where the number of threads is less than or equal to the number of vCPUs on the EC2 instance.
  11. We are comparing MySQL behavior on the same class of EC2, not CPU performance.
  12. We got some feedback regarding our methodology and will update it in the next iteration with a different configuration, but for this particular research we kept the previous one so we could compare “apples to apples”.
  13. The post is not sponsored by any external company; it was produced using only Percona resources. We do not control which CPUs AWS uses in its instances; we only work with what they offer.

Test Case:


To use only the CPU (without disk and network), we decided to run only read queries served from memory. To do this, we took the following steps.

1. Create a DB with 10 tables of 10,000,000 rows each

sysbench oltp_read_only --threads=10 --mysql-user=sbtest --mysql-password=sbtest --table-size=10000000 --tables=10 --db-driver=mysql --mysql-db=sbtest prepare

2. Load all the data into the buffer pool (warm-up run)

sysbench oltp_read_only --time=300 --threads=10 --table-size=1000000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run


3. Run the same scenario in a loop with different concurrency THREAD values (1,2,4,8,16,32,64,128) on each EC2 instance

sysbench oltp_read_only --time=300 --threads=${THREAD} --table-size=100000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run


Result reviewing was split into four parts:

  1. For “small” EC2 with 2, 4, and 8 vCPU
  2. For “medium” EC2 with 16  and 32 vCPU
  3. For  “large” EC2 with 48 and 64 vCPU
  4. For all scenarios to see the overall picture.

There would be four graphs for each test:

  1. Throughput (queries per second) that the EC2 instance could deliver for each scenario (number of threads)
  2. Latency (95th percentile) for each scenario (number of threads)
  3. Relative comparison of Graviton vs. Intel and Graviton vs. AMD
  4. Absolute comparison of Graviton vs. Intel and Graviton vs. AMD

Validation that all the load goes to the CPU, and not to disk I/O or the network, was also done using PMM (Percona Monitoring and Management).


pic 0.1. OS monitoring during all test stages


Result for EC2 with 2, 4, and 8 vCPU:


Plot 1.1.  Throughput (queries per second) for EC2 with 2, 4 and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 1.2.  Latencies (95 percentile) during the test for EC2 with 2, 4 and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 1.3.1 Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2, 4 and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 1.3.2  Percentage comparison Graviton and AMD CPU in throughput (queries per second) for EC2 with 2, 4 and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 1.4.1. Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2, 4 and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 1.4.2. Numbers comparison Graviton and AMD CPU in throughput (queries per second) for EC2 with 2, 4 and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads



  1. Based on plot 1.1, we can say that EC2 with Intel does not have an absolute advantage over Graviton and AMD.
  2. Still, Intel and AMD show an advantage of a little over 20% against Graviton.
  3. In numbers, that is five thousand or more requests per second.
  4. AMD showed better results for the two-vCPU instances.
  5. It looks like, in the M6 class, Graviton CPUs show the worst results compared with the others.

Result for EC2 with 16 and 32 vCPU:


Plot 2.1. Throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 2.2. Latencies (95 percentile) during the test for EC2 with 16 and 32 vCPU for scenarios with 1,2 4,8,16,32,64,128 threads


Plot 2.3.1 Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 2.3.2  Percentage comparison Graviton and AMD CPU in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 2.4.1. Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 2.4.2. Numbers comparison Graviton and AMD CPU in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads



  1. Plot 2.1 shows that Intel vCPUs are the most performant; AMD is second, and Graviton is third.
  2. According to plots 2.3.1 and 2.3.2, Intel is better than Graviton by up to 30%, and AMD is better than Graviton by up to 20%. Graviton has an exceptional performance advantage over AMD in some scenarios, but with this configuration and these instance classes it is an exception, per the plot 2.3.2 scenarios with 8 and 16 concurrent threads.
  3. In real numbers, Intel could execute up to 140k more read transactions than Graviton, and AMD up to 70k more read transactions than Graviton (plots 2.1 and 2.4.1).
  4. In most cases, AMD and Intel are better than Graviton EC2 instances (plots 2.1, 2.3.2, and 2.4.2).


Result for EC2 with 48 and 64 vCPU:


Plot 3.1. Throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 3.2. Latencies (95 percentile) during the test for EC2 with 48 and 64 vCPU for scenarios with 1,2 4,8,16,32,64,128 threads


Plot 3.3.1 Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 3.3.2  Percentage comparison Graviton and AMD CPU in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 3.4.1. Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 3.4.2. Numbers comparison Graviton and AMD CPU in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


  1. For “large” instances, Intel is still better than the other vCPUs. AMD remained in second place, except for a few cases that Graviton took (plot 3.1).
  2. According to plot 3.3.1, Intel showed an advantage over Graviton of up to 45%. In the same cases, AMD showed an advantage over Graviton of up to 20%.
  3. There were two cases where Graviton showed better results, but they are exceptions.
  4. In real numbers: Intel could generate 150k-200k more read transactions than Graviton, and AMD could execute 70k-130k more read transactions than Graviton.


Full Result Overview:

Plot 4.1.1. Throughput (queries per second) – bar plot for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.1.2. Throughput (queries per second) – line plot for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.2.1. Latencies (95 percentile) during the test – bar plot for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.2.2. Latencies (95 percentile) during the test – line plot for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.3.1 Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.3.2 Percentage comparison Graviton and AMD CPU in throughput (queries per second) for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.4.1. Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.4.2. Numbers comparison Graviton and AMD CPU in throughput (queries per second) for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.5.1. Percentage comparison INTEL and AMD CPU in throughput (queries per second) for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads


Plot 4.5.2. Numbers comparison INTEL and AMD CPU in throughput (queries per second) for EC2 with 2, 4, 8, 16, 32, 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

Final Thoughts

  1. We compared general-purpose EC2 instances (m6i, m6a, m6g) on the AWS platform and their behavior for MySQL.
  2. In this competition, Graviton instances (m6g) do not provide competitive results for MySQL.
  3. There was some strange behavior: AMD and Intel showed their best performance when the load (number of threads) equals the number of vCPUs. On plot 4.1.2 we can see a jump in performance when the load matches the vCPU count; this is hard to see on the bar chart, but it is very interesting. Graviton, however, scaled more smoothly, without such “jumps”, which is why it showed better results than AMD in some scenarios.
  4. Last point: everyone wants to see an AMD vs. Intel comparison (plots 4.5.1 and 4.5.2). The result: Intel is better in most cases, and AMD was better only in one case, with 2 vCPUs. The advantage of Intel over AMD could rise up to 96% for “large” instances (in some cases), which is remarkable. In most cases, though, this advantage means Intel can run about 30% more MySQL read transactions than AMD.
  5. The economic efficiency of all these EC2 instances is still an open question; we will research this topic and answer it a little later.


List of EC2 used in research:

CPU type | CPU model name | EC2 | Memory (GB) | vCPU | Price per hour (USD)
AMD | AMD EPYC 7R13 Processor 2650 MHz | m6a.large | 8 | 2 | $0.0864
AMD | | m6a.xlarge | 16 | 4 | $0.1728
AMD | | m6a.2xlarge | 32 | 8 | $0.3456
AMD | | m6a.4xlarge | 64 | 16 | $0.6912
AMD | | m6a.8xlarge | 128 | 32 | $1.3824
AMD | | m6a.12xlarge | 192 | 48 | $2.0736
AMD | | m6a.16xlarge | 256 | 64 | $2.7648
Graviton | ARMv8 AWS Graviton2 2500 MHz | m6g.large | 8 | 2 | $0.077
Graviton | | m6g.xlarge | 16 | 4 | $0.154
Graviton | | m6g.2xlarge | 32 | 8 | $0.308
Graviton | | m6g.4xlarge | 64 | 16 | $0.616
Graviton | | m6g.8xlarge | 128 | 32 | $1.232
Graviton | | m6g.12xlarge | 192 | 48 | $1.848
Graviton | | m6g.16xlarge | 256 | 64 | $2.464
Intel | Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz | m6i.large | 8 | 2 | $0.096
Intel | | m6i.xlarge | 16 | 4 | $0.192
Intel | | m6i.2xlarge | 32 | 8 | $0.384
Intel | | m6i.4xlarge | 64 | 16 | $0.768
Intel | | m6i.8xlarge | 128 | 32 | $1.536
Intel | | m6i.12xlarge | 192 | 48 | $2.304
Intel | | m6i.16xlarge | 256 | 64 | $3.072







server_id = 7

# general

table_open_cache = 200000






# files





# buffers






innodb_flush_log_at_trx_commit = 1

innodb_doublewrite = 1

innodb_flush_method = O_DIRECT

innodb_file_per_table = 1





bind_address =




DBaaS and the Enterprise

Install a database server. Give the application team an endpoint. Set up backups and monitor in perpetuity. This is a pattern I hear about regularly from DBAs at most of my enterprise clients. Rarely do they get to troubleshoot or work with application teams to tune queries or design schemas. This is what triggers database administrators’ interest in a DBaaS platform.

What is DBaaS?

DBaaS stands for “Database as a Service”. When this acronym is thrown out, the first thought is generally a cloud offering such as RDS. While this is a very commonly used service, a DBaaS is really just a managed database platform that offloads much of the operational burden from the DBA team. Tasks handled by the platform include:

  • Installing the database software
  • Configuring the database
  • Setting up backups
  • Managing upgrades
  • Handling failover scenarios

A common misconception is that a DBaaS is limited to the public cloud. As many enterprises already have large data centers and heavy investments in hardware, an on-premises DBaaS can also be quite appealing. Keeping the database in-house is often favored when the hardware and resources are already available. In addition, there are extra compliance and security concerns when looking at a public cloud offering.

DBaaS also represents a difference in mindset. In conventional deployments, systems and architecture are often designed in very exotic ways making automation a challenge. With a DBaaS, automation, standardization, and best practices are the priority. While this can be seen as limiting flexibility, this approach can lead to larger and more robust infrastructures that are much easier to manage and maintain.

Why is DBaaS Appealing?

From a DBA perspective (and being a former DBA myself), I always enjoyed working on more challenging issues. Mundane operations like launching servers and setting up backups make for a less-than-exciting daily work experience. When managing large fleets, these operations make up the majority of the work.

As applications grow more complex and data sets grow rapidly, it is much more interesting to work with the application teams to design and optimize the data tier. Query tuning, schema design, and workflow analysis are much more interesting (and often beneficial) when compared to the basic setup. DBAs are often skilled at quickly identifying issues and understanding design issues before they become problems.

When an enterprise adopts a DBaaS model, this can free up the DBAs to work on more complex problems. They are also able to better engage and understand the applications they are supporting. A common comment I get when discussing complex tickets with clients is: “well, I have no idea what the application is doing, but we have an issue with XYZ”. If this could be replaced with a detailed understanding from the design phase to the production deployment, these discussions would be very different.

From an application development perspective, a DBaaS is appealing because new servers can be launched much faster. Ideally, with development or production deployment options, an application team can have the resources they need ready in minutes rather than days. It greatly speeds up the development life cycle and makes developers much more self-reliant.

DBaaS Options

While this isn’t an exhaustive list, the main options when looking to move to a DBaaS are:

  • Public cloud
    • Amazon RDS, Microsoft Azure SQL, etc
  • Private/Internal cloud
    • Kubernetes (Percona DBaaS), VMWare, etc
  • Custom provisioning/operations on bare-metal

Looking at public cloud options for a DBaaS, security and compliance are generally the first concern. While they are incredibly easy to launch and generally offer some pay-as-you-go options, managing access is a major consideration.

Large enterprises with existing hardware investments often want to explore a private DBaaS. I’ve seen clients work to create their own tooling within their existing infrastructure. While this is a viable option, it can be very time-consuming and require many development cycles. Another alternative is to use an existing DBaaS solution. For example, Percona currently has a DBaaS deployment as part of Percona Monitoring and Management in technical preview. The Percona DBaaS automates PXC deployments and management tasks on Kubernetes through a user-friendly UI.

Finally, a custom deployment is just what it sounds like. I have some clients that manage fleets (1000s of servers) of bare metal servers with heavy automation and custom scripting. To the end-user, it can look just like a normal DBaaS (an endpoint with all operations hidden). On the backend, the DBA team spends significant time just supporting the infrastructure.

How Can Percona help?

Percona works to meet your business where you are. If that is supporting Percona Server for MySQL on bare metal or a fleet of RDS instances, we can help. If your organization is leveraging Kubernetes for the data tier, the Percona Private DBaaS is a great option to standardize and simplify your deployments while following best practices. We can help from the design phase through the entire life cycle. Let us know how we can help!


Help Percona Make Better Databases


Hello everyone!  The new year is upon us and we are planning some awesome new features, enhancements, and ideas for 2022.  One thing we could really use your help with is feedback on how we are doing and what we can do to improve our database products.  We have put together a short survey for each of our main products (covering MySQL, MongoDB, PostgreSQL, Percona Monitoring and Management, and our Kubernetes Operators).  Each survey is a quick 5-7 questions and should take no more than a minute or two to complete.

Percona Server for MySQL & PXC https://per.co.na/lnsTqM
Percona Server for MongoDB https://per.co.na/7BGhFe
Percona Distribution for PostgreSQL https://per.co.na/1KfvRt
Percona Monitoring and Management https://per.co.na/JQ7yAt
Percona Operators (either for PostgreSQL, MySQL, or MongoDB) https://per.co.na/Gxwcrj

Note: everyone who fills in a survey is eligible for a drawing for a Percona T-shirt; 10 people will be selected at random. You don’t have to leave an email when you fill out the survey, but you do need to in order to be entered into the drawing.

Thank you for helping us out!


Toddler wooden chair – cute and worth your trust!

There is nothing better than a classic, trusty wooden chair when you need a place for your child to sit. Made of natural materials, able to take significant weight, and easy to clean with a damp cloth – you can’t go wrong with this.

As a parent, the biggest problem you may come across is finding a reliable toddler wooden chair. You will likely find many options on the market, but it may be confusing to choose which one would suit your kid’s needs.

You don’t need to worry, as we have compiled all the necessary information about toddler wooden chairs and their pros and cons here. First, let us start with what to consider when buying a wooden chair for your toddler.

Things To Consider Before Buying A Wooden Toddler Chair

You may find many stylish and durable toddler wooden chairs, but it’s crucial to keep certain things in your mind before buying one.

• Your kid’s comfort. Make sure that the seat is deep enough to accommodate your toddler. The bottom of the chair must be sturdy to bear heavy pressure on it.

• Durability. The wooden toddler chair should be sturdy and long-lasting so that your child can use it for many years without any problems. A strong base is advisable because the child’s full weight will rest on the chair, so avoid chairs made of plastic or metal, as they are not as durable. A wooden one is best, as it can be used for many years.

• Material. Wooden toddler chairs are very durable and comfortable, which is why most parents prefer them over other types. There are various kinds of wood available on the market, but go for ones with a veneer, as it is more durable and long-lasting.

• Price. You may find many wooden toddler chairs within your budget, so do not worry about the price.

These are some things to consider before buying a wooden toddler chair for your child. A good chair not only gives you moments to cherish with your kid but also helps develop skills like creativity and independence.

Best wooden chairs for your toddler

Do you want to buy a wooden toddler chair for your child? Here are some of the best ones available in the market.

Famobay Bunny Ears Wooden Toddler Chair

Just look at how cute this toddler chair is! The backrest, shaped like bunny ears, is an adorable feature for little ones. Its solid, smooth wooden construction makes it long-lasting and sturdy.

Based on its innovative design, you would think that this toddler chair would be much more expensive than any other option on the market. However, it is not the case here!

This bunny ears toddler chair is available on Amazon at a very reasonable price. It is solid, durable, and easy to keep clean, making it the best choice for your household.

Famobay Antlers Wooden Toddler Chair

If you like the Bunny Ears toddler wooden chair, you will be happy to learn that a similar toddler chair is available. This time the design is based on antlers, to make your little one feel like a wild animal.

Like the previous product, this antlers toddler chair is made of natural smooth wood, making it very durable. It can carry weights of up to 400 lbs, which you can’t say about many other products on the market. There is no doubt that your child can use this chair for many years.

Melissa & Doug Wooden Chairs, Set of 2

Melissa & Doug’s kids’ furniture is the way to go for those wanting a simple and classic design for kids’ room. The playroom chairs set was created with safety in mind – something Melissa & Doug is well-known for. The chairs feature a reinforced, tip-resistant design.

They are made from durable wood, with materials that hold up to 100 pounds. The Melissa & Doug Solid Wood Chairs set makes an excellent gift for kids from 3 to 8 years old.

Toddler Chair CXRYLZ

If you like a simple design but still wish to bring your toddler a little bit of color, take a look at this chair. This little chair is made of wood and will surely look lovely in any kids’ room. It also has round edges, which are much safer for your toddler than sharp ones.

This chair is so cute, you might not want to give it away when your kid grows up! Beautiful colors are not the only thing that makes this chair so unique. It is also made of quality materials which make it durable and long-lasting.

Its compact size will help you fit it almost anywhere in your house, even if you have limited space for kids’ furniture: 25cm/10in legs, 30 x 30cm/12” x 12” seat surface, and 26.5 x 20cm/10.5” x 7.9” backrest. This cutie can hold up to 250 lbs.

HOUCHICS Wooden Toddler Chair

Unique toddler wooden chair! This piece has a curved back to better protect your child from being injured by the prismatic edge of the wooden stool. It is not only for safety reasons but also to give your child a more comfortable sitting position.

This toddler chair has an additional feature of non-slip pads that will help prevent the stool from sliding on the floor surface. You can also adjust its height depending on what you need at the moment.

The HOUCHICS Wooden Toddler Chair is very easy to clean – you can use a damp cloth and soap. A wide handle at the back provides a comfortable grip to lift the toddler chair off the floor.

It’s suitable for children from 3 years of age up to 6-7 years. Be sure that your child will be comfortable sitting on this stool!

iPlay iLearn 10 Inch Kids Solid Hard Wood Animal Chair

Cute animal chairs with vivid, engaging characters are appealing and attractive to children aged 2 and up. Not only will your child be excited to sit on the iPlay iLearn animal chairs, but they will be learning at the same time.

Each chair has a sturdy 10-inch solid wood base designed to prevent tipping when your child leans against it. The triangular structure of the chair legs helps prevent tipping, and anti-slip pads on each leg prevent falls.

The iPlay iLearn chairs can be stacked for easy storage, and they are made of 100% natural smooth hardwood. They are hand-painted with non-toxic paints to ensure complete safety.

This is the perfect gift for toddlers aged 2-4! Parents love it because it is sturdy, safe, and easy to clean; kids love them because of the vibrant colors and animal characters. Among the patterns, you’ll find a giraffe visible above, a frog, and a cow.

Why is it necessary to buy a wooden toddler chair?

As we all know, kids look forward to spending time with their parents, so anything that encourages them to sit with you is worth doing. As a parent, you must have noticed that your child prefers being beside you rather than playing alone. It’s a sign that your child wants to spend more time with you, which is absolutely normal.

The best way to encourage them is to buy them a wooden toddler chair because it would allow your child to sit beside you and enjoy being together while reading books or watching exhibits on display cabinets. This can be good practice for your child to be more responsible and understand the concept of time, as it also encourages them to play independently.

Children prefer wooden chairs because they are much more comfortable than other types. Kids like splashes of color in their surroundings, so if you buy a wooden toddler chair in a vibrant color, your child will love it. A wooden toddler chair is a great way to encourage your child and develop their creative side.

Apart from all these benefits, toddlers learn many things by watching us, so if you want them to become responsible and obedient in the future, teach them how to sit correctly on a wooden toddler chair. They will get an idea about sitting straight, and it would be beneficial for them in the future.

The post Toddler wooden chair – cute and worth your trust! appeared first on Comfy Bummy.


How Patroni Addresses the Problem of the Logical Replication Slot Failover in a PostgreSQL Cluster

PostgreSQL Patroni Logical Replication Slot Failover

Failover of the logical replication slot has always been a pain point when using logical replication in PostgreSQL. The lack of this feature undermined the use of logical replication and acted as one of its biggest deterrents. The stakes and impact were so high that many organizations had to discard their plans around logical replication, and it affected many plans for migration to PostgreSQL. It was painful to see that many had to opt for proprietary/vendor-specific solutions instead.

At Percona, we have written about this in the past: Missing Piece: Failover of the Logical Replication Slot.  In that post, we discussed one of the possible approaches to solve this problem, but there was no reliable mechanism to copy the slot information to a Physical standby and maintain it.

The problem, in a nutshell, is that the replication slot is always maintained on the primary node. If there is a switchover/failover that promotes one of the standbys, the new primary won’t have any idea about the replication slot maintained by the previous primary node. This breaks logical replication from the downstream systems, or, if a new slot is created, it becomes unsafe to use.

The good news is that Patroni developers and maintainers addressed this problem starting with version 2.1.0 and provided a working solution without any invasive methods/extensions. To me, this work deserves a big round of applause from the Patroni community, and the intention of this blog post is to make sure a bigger crowd is aware of it.

How to Set it Up

A ready-to-use Patroni package is available from the Percona repository. But you are free to use Patroni from any source.

Basic Configuration

In case you are excited about this and want to try it, the following steps might be helpful.

The entire discussion is about logical replication, so the minimum requirement is to have wal_level set to “logical”. If the existing Patroni configuration has wal_level set to “replica” and you want to use this feature, you can simply edit the Patroni configuration.

Patroni configuration

However, this change requires the PostgreSQL restart:

A “Pending restart” flag marked with * indicates this.

You may use Patroni’s “switchover” feature to restart the node and make the change take effect, because the demoted node goes through a restart.
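For example, a switchover can be triggered from the command line (the node names below are placeholders for your own cluster members):

```
$ patronictl -c /etc/patroni/patroni.yml switchover --leader <current-leader> --candidate <target-standby>
```

Without --force, patronictl will ask for confirmation before performing the switchover.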

If there are any remaining nodes, they can be restarted later.

Creating Logical Slots

Now we can add a permanent logical replication slot to PostgreSQL which will be maintained by Patroni.

Edit the patroni configuration:

$ patronictl -c /etc/patroni/patroni.yml edit-config

A slot specification can be added as follows:

slots:
  logicreplia:
    database: postgres
    plugin: pgoutput
    type: logical

The “slots:” section defines permanent replication slots. These slots will be preserved during switchover/failover. “pgoutput” is the decoding plugin for PostgreSQL logical replication.

Once the change is applied, the logical replication slot will be created on the primary node, which can be verified by querying:

select * from pg_replication_slots;

The following is a sample output:

patroni output

Now here is the first level of magic: the same replication slot is created on the standbys, too. Yes, Patroni does it – it internally copies the replication slot information from the primary to all eligible standby nodes!

We can run the same query against pg_replication_slots on the standby and see similar information.

The following is an example showing the same replication slot reflecting on the standby side:

replication slot

A subscription can use this slot by explicitly specifying the slot name when the subscription is created.

CREATE SUBSCRIPTION sub2 CONNECTION '<connection_string>' PUBLICATION <publication_name> WITH (copy_data = true, create_slot = false, enabled = true, slot_name = logicreplia);

Alternatively, an existing subscription can be modified to use the new slot which I generally prefer to do.

For example:

ALTER SUBSCRIPTION name SET (slot_name=logicreplia);

Corresponding PostgreSQL log entries can confirm the slot name change:

2021-12-27 15:56:58.294 UTC [20319] LOG:  logical replication apply worker for subscription "sub2" will restart because the replication slot name was changed
2021-12-27 15:56:58.304 UTC [20353] LOG:  logical replication apply worker for subscription "sub2" has started

From the publisher side, we can confirm the slot usage by checking the active_pid and the advancing LSN for the slots.
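For instance, a query along these lines on the primary shows whether the slot is in use and its confirmed position (using the slot name created above):

```
SELECT slot_name, active, active_pid, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'logicreplia';
```

Re-running it while the subscriber is consuming changes should show active = t and a steadily advancing confirmed_flush_lsn.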

The second level of surprise: the replication slot information on all the standby nodes of the Patroni cluster is also advanced as the logical replication progresses on the primary side.

At a higher level, this is exactly what this feature is doing:

  1. Automatically creates/copies the replication slot information from the primary node of the Patroni cluster to all eligible standby nodes.
  2. Automatically advances the LSN on the slots of the standby nodes as the LSN advances on the corresponding slot on the primary.
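Purely as a mental model (this is not Patroni’s actual code – the real implementation reads slot state over libpq and calls pg_replication_slot_advance() on the standbys), the two steps can be pictured like this:

```python
# Toy model of Patroni's slot maintenance, for illustration only.
def sync_slots(primary_slots: dict, standby_slots: dict) -> dict:
    for name, info in primary_slots.items():
        # Step 1: create/copy any slot missing on the standby.
        if name not in standby_slots:
            standby_slots[name] = dict(info)
        # Step 2: advance the standby slot's LSN to match the primary.
        standby_slots[name]["lsn"] = max(standby_slots[name]["lsn"], info["lsn"])
    return standby_slots

primary = {"logicreplia": {"plugin": "pgoutput", "database": "postgres", "lsn": 1000}}
standby = sync_slots(primary, {})
print(standby["logicreplia"])  # the standby slot now mirrors the primary's
```

Because the standby slots are kept both present and advanced, whichever node gets promoted already has a usable slot waiting for the subscriber.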

After a Switchover/Failover

In the event of a switchover or failover, we are not losing any slot information as they are already maintained on the standby nodes.

After the switchover, the topology looks like this:

Now, any downstream logical replica can be repointed to the new primary.

postgres=# ALTER SUBSCRIPTION sub2 CONNECTION 'host= port=5432 dbname=postgres user=postgres password=vagrant';                                                                                                                                                  

This continues the replication, and pg_replication_slot information can confirm this.

Summary + Key Points

A logical replication slot is conceptually possible only on the primary instance, because that is where logical decoding happens. With this improvement, Patroni makes sure the slot information is also available on the standbys, ready to take over the connection from the subscriber.

  • This solution requires PostgreSQL 11 or above, because it uses the pg_replication_slot_advance() function (available from PostgreSQL 11 onwards) for advancing the slot.
  • The downstream connection can use HAProxy so that it is automatically routed to the primary (not covered in this post). No modification to PostgreSQL code and no extension is required.
  • The copying of the slot happens over the PostgreSQL protocol (libpq) rather than any OS-specific tools/methods. Patroni uses rewind or superuser credentials and the pg_read_binary_file() function to read the slot information. Source code reference.
  • Once the logical slot is created on the replica side, Patroni uses pg_replication_slot_advance() to move the slot forward.
  • The permanent slot information is added to the DCS and continuously maintained by the primary instance of the Patroni cluster. A new DCS key named “status” was introduced and is supported across all DCS options (ZooKeeper, etcd, Consul, etc.).
  • hot_standby_feedback must be enabled on all standby nodes where the logical replication slot needs to be maintained.
  • Patroni parameter postgresql.use_slots must be enabled to make sure that every standby node uses a slot on the primary node.
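The last two requirements map to settings along the following lines in the Patroni dynamic configuration (a sketch – the exact layout depends on how your patronictl edit-config is organized):

```
postgresql:
  use_slots: true
  parameters:
    hot_standby_feedback: "on"
```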

Creating a Standby Cluster With the Percona Distribution for PostgreSQL Operator

Standby Cluster With the Percona Distribution for PostgreSQL Operator

A customer recently asked if our Percona Distribution for PostgreSQL Operator supports the deployment of a standby cluster, which they need as part of their Disaster Recovery (DR) strategy. The answer is yes – as long as you are making use of an object storage system for backups, such as AWS S3 or GCP Cloud Storage buckets, that can be accessed by the standby cluster. In a nutshell, it works like this:

  • The primary cluster is configured with pgBackRest to take backups and store them alongside archived WAL files in a remote repository;
  • The standby cluster is built from one of these backups and it is kept in sync with the primary cluster by consuming the WAL files that are copied from the remote repository.

Note that the primary node in the standby cluster is not a streaming replica of any of the nodes in the primary cluster; it relies on archived WAL files to replicate events. For this reason, this approach cannot be used as a High Availability (HA) solution. Even though the primary use of a standby cluster in this context is DR, it can also be employed for migrations.

So, how can we create a standby cluster using the Percona operator? We will show you next. But first, let’s create a primary cluster for our example.

Creating a Primary PostgreSQL Cluster Using the Percona Operator

You will find a detailed procedure on how to deploy a PostgreSQL cluster using the Percona operator in our online documentation. Here we want to highlight the main steps involved, particularly regarding the configuration of object storage, which is a crucial requirement and is best done during the initial deployment of the cluster. In the following example, we will deploy our clusters using the Google Kubernetes Engine (GKE), but you can find similar instructions for other environments in the previous link.

Considering you have a Google account configured as well as the gcloud (from the Google Cloud SDK suite) and kubectl command-line tools installed, authenticate yourself with gcloud auth login, and off we go!

Creating a GKE Cluster and Basic Configuration

The following command will create a default cluster named “cluster-1” and composed of three nodes. We are creating it in the us-central1-a zone using e2-standard-4 VMs but you may choose different options. In fact, you may also need to indicate the project name and other main settings if you do not have your gcloud environment pre-configured with them:

gcloud container clusters create cluster-1 --preemptible --machine-type e2-standard-4 --num-nodes=3 --zone us-central1-a

Once the cluster is created, use your IAM identity to control access to this new cluster:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value core/account)

Finally, create the pgo namespace:

kubectl create namespace pgo

and set the current context to refer to this new namespace:

kubectl config set-context $(kubectl config current-context) --namespace=pgo

Creating a Cloud Storage Bucket

Remember for this setup we need a Google Cloud Storage bucket configured as well as a Service Account created with the necessary privileges/roles to access it. The respective procedures to obtain these vary according to how your environment is configured so we won’t be covering them here. Please refer to the Google Cloud Storage documentation for the exact steps. The bucket we created for the example in this post was named cluster1-backups-and-wals.

Likewise, please refer to the Creating and managing service account keys documentation to learn how to create a Service Account and download the corresponding key in JSON format – we will need to provide it to the operator so our PostgreSQL clusters can access the storage bucket.

Creating the Kubernetes Secrets File to Access the Storage Bucket

Create a file named my-gcs-account-secret.yaml with the following structure:

apiVersion: v1
kind: Secret
metadata:
  name: cluster1-backrest-repo-config
type: Opaque
data:
  gcs-key: <VALUE>

replacing the <VALUE> placeholder with the output of the following command, according to the OS you are using:

On Linux:

base64 --wrap=0 your-service-account-key-file.json

On macOS:

base64 your-service-account-key-file.json
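Equivalently, if you prefer a scripted approach, the same single-line base64 value can be produced with a few lines of Python (the file name below matches the example above):

```python
import base64

def encode_key(raw: bytes) -> str:
    # Single-line base64, equivalent to `base64 --wrap=0` on Linux.
    return base64.b64encode(raw).decode("ascii")

# In practice, read your real service account key file:
# raw = open("your-service-account-key-file.json", "rb").read()
print(encode_key(b'{"type": "service_account"}'))
```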

Installing and Deploying the Operator

The most practical way to install our operator is by cloning the Git repository, and then moving inside its directory:

git clone -b v1.1.0 https://github.com/percona/percona-postgresql-operator
cd percona-postgresql-operator

The following command will deploy the operator:

kubectl apply -f deploy/operator.yaml

We have already prepared the secrets file to access the storage bucket so we can apply it now:

kubectl apply -f my-gcs-account-secret.yaml

Now, all that is left is to customize the storages options in the deploy/cr.yaml file to indicate the use of the GCS bucket as follows:

        type: gcs
        bucket: cluster1-backups-and-wals

We can now deploy the primary PostgreSQL cluster (cluster1):

kubectl apply -f deploy/cr.yaml

Once the operator has been deployed, you can run the following command to do some housekeeping:

kubectl delete -f deploy/operator.yaml

Creating a Standby PostgreSQL Cluster Using the Percona Operator

After this long preamble, let’s look at what brought you here: how to deploy a standby cluster, which we will refer to as cluster2, that will replicate from the primary cluster.

Copying the Secrets Over

Considering you probably have customized the passwords you use in your primary cluster and that they differ from the default values found in the operator’s git repository, we need to make a copy of the secrets files, adjusted to the standby cluster’s name. The following procedure facilitates this task, saving the secrets files under /tmp/cluster1-cluster2-secrets (you can choose a different target directory):

NOTE: make sure you have the yq tool installed in your system.
mkdir -p /tmp/cluster1-cluster2-secrets/
export primary_cluster_name=cluster1
export standby_cluster_name=cluster2
export secrets="${primary_cluster_name}-users"
kubectl get secret/$secrets -o yaml \
| yq eval 'del(.metadata.creationTimestamp)' - \
| yq eval 'del(.metadata.uid)' - \
| yq eval 'del(.metadata.selfLink)' - \
| yq eval 'del(.metadata.resourceVersion)' - \
| yq eval 'del(.metadata.namespace)' - \
| yq eval 'del(.metadata.annotations."kubectl.kubernetes.io/last-applied-configuration")' - \
| yq eval '.metadata.name = "'"${secrets/$primary_cluster_name/$standby_cluster_name}"'"' - \
| yq eval '.metadata.labels.pg-cluster = "'"${standby_cluster_name}"'"' - \
> /tmp/cluster1-cluster2-secrets/$secrets
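The pipeline above simply strips the instance-specific metadata from the Secret and renames/relabels it for the standby cluster. Conceptually (illustrative Python, not a replacement for the yq commands):

```python
# Conceptual equivalent of the yq pipeline: strip instance-specific
# metadata from a Secret object and re-label it for the standby cluster.
def adapt_secret(secret: dict, old: str, new: str) -> dict:
    meta = dict(secret["metadata"])
    for key in ("creationTimestamp", "uid", "selfLink", "resourceVersion", "namespace"):
        meta.pop(key, None)
    annotations = dict(meta.get("annotations", {}))
    annotations.pop("kubectl.kubernetes.io/last-applied-configuration", None)
    if annotations:
        meta["annotations"] = annotations
    else:
        meta.pop("annotations", None)
    meta["name"] = meta["name"].replace(old, new)
    meta.setdefault("labels", {})["pg-cluster"] = new
    return {**secret, "metadata": meta}

secret = {"apiVersion": "v1", "kind": "Secret",
          "metadata": {"name": "cluster1-users", "uid": "123",
                       "resourceVersion": "42"}}
print(adapt_secret(secret, "cluster1", "cluster2")["metadata"]["name"])
```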

Deploying the Standby Cluster: Fast Mode

Since we have already covered the procedure used to create the primary cluster in detail in a previous section, we will be presenting the essential steps to create the standby cluster below and provide additional comments only when necessary.

NOTE: the commands below are issued from inside the percona-postgresql-operator directory hosting the git repository for our operator.

Deploying a New GKE Cluster Named cluster-2

This time using the us-west1-b zone here:

gcloud container clusters create cluster-2 --preemptible --machine-type e2-standard-4 --num-nodes=3 --zone us-west1-b
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value core/account)
kubectl create namespace pgo
kubectl config set-context $(kubectl config current-context) --namespace=pgo
kubectl apply -f deploy/operator.yaml

Apply the Adjusted Kubernetes Secrets:

export standby_cluster_name=cluster2
export secrets="${standby_cluster_name}-users"
kubectl create -f /tmp/cluster1-cluster2-secrets/$secrets

The list above does not include the GCS secret file; the key contents remain the same, but the backrest-repo secret name needs to be adjusted. Make a copy of that file:

cp my-gcs-account-secret.yaml my-gcs-account-secret-2.yaml

then edit the copy to indicate “cluster2-” instead of “cluster1-”:

name: cluster2-backrest-repo-config

You can apply it now:

kubectl apply -f my-gcs-account-secret-2.yaml

The cr.yaml file of the Standby Cluster

Let’s make a copy of the cr.yaml file we customized for the primary cluster:

cp deploy/cr.yaml deploy/cr-2.yaml

and edit the copy as follows:

1) Change all references (that are not commented out) from cluster1 to cluster2 – including current-primary but excluding the bucket reference, which in our example is prefixed with “cluster1-”; the storage section must remain unchanged. (We know it’s not very practical to replace so many references; we still need to improve this part of the routine.)

2) Enable the standby option:

standby: true

3) Provide a repoPath that points to the GCS bucket used by the primary cluster (just below the storages section, which should remain the same as in the primary cluster’s cr.yaml file):

repoPath: "/backrestrepo/cluster1-backrest-shared-repo"

And that’s it! All that is left now is to deploy the standby cluster:

kubectl apply -f deploy/cr-2.yaml

With everything working on the standby cluster, do some housekeeping:

kubectl delete -f deploy/operator.yaml

Verifying it all Works as Expected

Remember that the standby cluster is created from a backup and relies on archived WAL files to be kept in sync with the primary cluster. If you make a change in the primary cluster, such as adding a row to a table, that change won’t reach the standby cluster until the WAL file it was recorded in is archived and consumed by the standby cluster.

When checking if all is working with the new setup, you can force the rotation of the WAL file (and subsequent archival of the previous one) in the primary node of the primary cluster to accelerate the sync process by issuing:

psql> SELECT pg_switch_wal();

The Percona Kubernetes Operators automate the creation, alteration, or deletion of members in your Percona Distribution for MySQL, MongoDB, or PostgreSQL environment.

Learn More About Percona Kubernetes Operators
