Oct
15
2021
--

Comparing Graviton (ARM) Performance to Intel and AMD for MySQL

Recently, AWS presented its own CPU built on the ARM architecture for server solutions: Graviton. As a result, AWS updated some of its EC2 instance lines with a new “g” postfix (e.g., the m6g and r6g families). In its reviews and presentations, AWS showed impressive results, claiming Graviton is up to 20 percent faster in some benchmarks. On the other hand, some reviewers said that Graviton does not show any significant gains and, in some cases, performs worse than Intel.

We decided to investigate this and do our own research on Graviton performance, comparing it directly with other CPUs (Intel and AMD) for MySQL.

Disclaimer

  1. The test is designed to be CPU-bound only, so we use a read-only test and make sure there is no I/O activity during the test.
  2. Tests were run on m5.* (Intel), m5a.* (AMD), and m6g.* (Graviton) EC2 instances in the us-east-1 region. (The list of EC2 instances is in the appendix.)
  3. Monitoring was done with Percona Monitoring and Management (PMM).
  4. OS: Ubuntu 20.04 LTS.
  5. The load tool (sysbench) and the target DB (MySQL) were installed on the same EC2 instance.
  6. MySQL 8.0.26-0, installed from official packages.
  7. Load tool: sysbench 1.0.18.
  8. innodb_buffer_pool_size=80% of available RAM.
  9. Test duration is five minutes for each thread count, followed by a 90-second cooldown before the next iteration.
  10. Tests were run three times (to smooth outliers and get more reproducible results), and the results were averaged for the graphs.
  11. We call scenarios where the number of threads is bigger than the number of vCPUs “high-concurrency” scenarios, and scenarios where the number of threads is less than or equal to the number of vCPUs “low-concurrency” scenarios.
  12. Scripts to reproduce the results are available on our GitHub.

Test Case

Prerequisite:

1. Create a DB with 10 tables, 10 000 000 rows each:

sysbench oltp_read_only --threads=10 --mysql-user=sbtest --mysql-password=sbtest --table-size=10000000 --tables=10 --db-driver=mysql --mysql-db=sbtest prepare

2. Load all data into the InnoDB buffer pool:

sysbench oltp_read_only --time=300 --threads=10 --table-size=1000000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run

Test:

Run the same scenario in a loop with different concurrency levels THREAD (1, 2, 4, 8, 16, 32, 64, 128) on each EC2 instance:

sysbench oltp_read_only --time=300 --threads=${THREAD} --table-size=100000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run
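For reference, the loop itself can be as simple as the sketch below (the actual scripts on our GitHub do a bit more; the result file name here is just an illustration):

#!/bin/bash
# Run the read-only scenario for each concurrency level,
# with a 90-second cooldown between iterations (see the disclaimer above).
for THREAD in 1 2 4 8 16 32 64 128; do
  sysbench oltp_read_only --time=300 --threads=${THREAD} --table-size=100000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run > result_${THREAD}_threads.txt
  sleep 90
done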

Results:

The results review is split into three parts:

  1. for “small” EC2 with 2, 4, and 8 vCPU
  2. for “medium” EC2 with 16 and 32 vCPU
  3. for  “large” EC2 with 48 and 64 vCPU

The “small”, “medium”, and “large” labels are just synthetic names to make the review easier to follow, based on the number of vCPUs per EC2 instance.
There are four graphs for each test:

  1. Throughput (queries per second) that the EC2 instance could deliver for each scenario (number of threads)
  2. Latency (95th percentile) during the test for each scenario (number of threads)
  3. Relative (percentage) comparison of Graviton and Intel
  4. Absolute (numbers) comparison of Graviton and Intel

Validation that all the load goes to the CPU, and not to disk I/O, was also done using PMM (Percona Monitoring and Management).

pic 0.1. OS monitoring during all test stages

From pic 0.1, we can see that there was no disk I/O activity during the tests, only CPU activity. The main disk activity happened during the DB creation stage.

Result for EC2 with 2, 4, and 8 vCPU

plot 1.1. Throughput (queries per second) for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.2.  Latencies (95 percentile) during the test for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.3.  Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2, 4, and 8  vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.4.  Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. AMD has the biggest latencies in all scenarios and on all EC2 instances. We won’t repeat this observation in the following overviews, and this is why AMD is excluded from the percentage and absolute comparisons with the other CPUs (plots 1.3 and 1.4, etc.).
  2. On instances with two and four vCPUs, Intel shows a small advantage of less than 10 percent in all scenarios.
  3. However, on the instance with 8 vCPUs, Intel shows an advantage only in scenarios where the number of threads is less than or equal to the number of vCPUs.
  4. On the EC2 instance with eight vCPUs, Graviton starts to show an advantage. It shows good results in scenarios where the number of threads is larger than the number of vCPUs, growing up to 15 percent in high-concurrency scenarios with 64 and 128 threads, which is 8 and 16 times the number of available vCPUs.
  5. Graviton starts showing an advantage on EC2 instances with eight vCPUs in scenarios where there are more threads than vCPUs. This pattern appears in all further scenarios: the more the load exceeds the number of vCPUs, the better Graviton performs.

 

Result for EC2 with 16 and 32 vCPU

plot 2.1.  Throughput (queries per second)  for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.2.  Latencies (95 percentile) during the test for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.3.  Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.4.  Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. In the same scenarios on EC2 instances with 16 and 32 vCPUs, Graviton continues to have an advantage when the number of threads is larger than the number of available vCPUs.
  2. Graviton shows an advantage of up to 10 percent in high-concurrency scenarios, while Intel shows up to 20 percent in low-concurrency scenarios.
  3. In high-concurrency scenarios, Graviton can show a difference of up to 30 000 (read) transactions per second over Intel.

Result for EC2 with 48 and 64 vCPU

plot 3.1.  Throughput (queries per second)  for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.2.  Latencies (95 percentile) during the test for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.3.  Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.4.  Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. Intel shows a significant advantage in most scenarios where the number of threads is less than or equal to the number of vCPUs; it seems Intel is really good at this kind of task, especially when it has some free vCPUs to spare, and this advantage can reach up to 35 percent.
  2. However, Graviton shows outstanding results when the number of threads is larger than the number of vCPUs, with an advantage of 5 to 14 percent over Intel.
  3. In absolute numbers, the Graviton advantage can be up to 70 000 transactions per second over Intel in high-concurrency scenarios.

Total Result Overview

 

plot 4.2.  Latencies (95 percentile) during the test for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 4.3.  Percentage comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 4.4.  Numbers comparison Graviton and Intel CPU in throughput (queries per second) for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

Conclusions

  1. ARM CPUs show better results on EC2 instances with more vCPUs and under higher load, especially in high-concurrency scenarios.
  2. On small EC2 instances with a small load, ARM CPUs are less impressive, so we cannot see their benefits compared with Intel-based EC2 instances.
  3. Intel is still the leader in low-concurrency scenarios, and it is definitely winning on EC2 instances with a small number of vCPUs.
  4. AMD did not show competitive results in any of the cases.

Final Thoughts

  1. AMD: we have a lot of questions about the EC2 instances on AMD, so it would be a good idea to check what was going on on those instances during the test and check the general performance of their CPUs.
  2. We found out that under some specific conditions, Intel and Graviton can compete with each other. The other side of the coin is economics: what is cheaper to use in each situation? The next article will be about that.
  3. It would be a good idea to try EC2 instances with Graviton for real high-concurrency databases.
  4. It seems worth running additional scenarios with 256 and 512 threads to check the hypothesis that Graviton works better when there are more threads than vCPUs.

APPENDIX:

List of EC2 used in research:

CPU type   EC2 instance    Price per hour (USD)   vCPU   RAM
Graviton   m6g.large       0.077                  2      8 GB
Graviton   m6g.xlarge      0.154                  4      16 GB
Graviton   m6g.2xlarge     0.308                  8      32 GB
Graviton   m6g.4xlarge     0.616                  16     64 GB
Graviton   m6g.8xlarge     1.232                  32     128 GB
Graviton   m6g.12xlarge    1.848                  48     192 GB
Graviton   m6g.16xlarge    2.464                  64     256 GB
Intel      m5.large        0.096                  2      8 GB
Intel      m5.xlarge       0.192                  4      16 GB
Intel      m5.2xlarge      0.384                  8      32 GB
Intel      m5.4xlarge      0.768                  16     64 GB
Intel      m5.8xlarge      1.536                  32     128 GB
Intel      m5.12xlarge     2.304                  48     192 GB
Intel      m5.16xlarge     3.072                  64     256 GB
AMD        m5a.large       0.086                  2      8 GB
AMD        m5a.xlarge      0.172                  4      16 GB
AMD        m5a.2xlarge     0.344                  8      32 GB
AMD        m5a.4xlarge     0.688                  16     64 GB
AMD        m5a.8xlarge     1.376                  32     128 GB
AMD        m5a.12xlarge    2.064                  48     192 GB
AMD        m5a.16xlarge    2.752                  64     256 GB

 

my.cnf:
[mysqld]
ssl=0
performance_schema=OFF
skip_log_bin
server_id = 7

# general
table_open_cache = 200000
table_open_cache_instances=64
back_log=3500
max_connections=4000
join_buffer_size=256K
sort_buffer_size=256K

# files
innodb_file_per_table
innodb_log_file_size=2G
innodb_log_files_in_group=2
innodb_open_files=4000

# buffers
innodb_buffer_pool_size=${80%_OF_RAM}
innodb_buffer_pool_instances=8
innodb_page_cleaners=8
innodb_log_buffer_size=64M

default_storage_engine=InnoDB
innodb_flush_log_at_trx_commit  = 1
innodb_doublewrite= 1
innodb_flush_method= O_DIRECT
innodb_file_per_table= 1
innodb_io_capacity=2000
innodb_io_capacity_max=4000
innodb_flush_neighbors=0
max_prepared_stmt_count=1000000 
bind_address = 0.0.0.0
[client]
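The innodb_buffer_pool_size placeholder above was filled per instance with 80% of the available RAM (see the disclaimer). As a rough sketch of how such a value can be derived (our own illustration, not part of the original test scripts):

#!/bin/bash
# Compute 80% of the total RAM (in MB) for innodb_buffer_pool_size.
TOTAL_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
BUFFER_POOL_MB=$((TOTAL_KB * 80 / 100 / 1024))
echo "innodb_buffer_pool_size=${BUFFER_POOL_MB}M"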

 

Oct
14
2021
--

Percona Is a Finalist for Best Use of Open Source Technologies in 2021!


Percona has been named a finalist in the Computing Technology Product Awards for Best Use of Open Source Technologies. If you’re a customer, partner, or just a fan of Percona and what we stand for, we’d love your vote.

With Great Power…

You know the phrase. We’re leaving it to you and your peers in the tech world to push us to the top.

Computing’s Technology Product Awards are open to a public vote until October 29. Vote Here!


Thank you for supporting excellence in the open source database industry. We look forward to the awards ceremony on Friday, November 26, 2021.

Why We’re an Open Source Finalist

A contributing factor to our success has been Percona Monitoring and Management (PMM), an open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical MySQL, MongoDB, PostgreSQL, and MariaDB database environments, no matter where they are located or deployed. It’s impressing customers, and even competitors, in the industry.

If you want to see how Percona became a finalist, learn more about Percona Monitoring and Management, and be sure to follow @Percona on all platforms.

Vote Today!

Oct
14
2021
--

Custom Percona Monitoring and Management Metrics in MySQL and PostgreSQL

A few weeks ago we did a live stream talking about Percona Monitoring and Management (PMM) and showcased some of the fun things we were doing at the OSS Summit. During the live stream, we tried to enable some custom queries to track the number of comments being added to our movie database example. We ran into a bit of a problem live and did not get it to work. As a result, I wanted to follow up and show you all how to add your own custom metrics to PMM, and show you some gotchas to avoid when building them.

Custom metrics are defined in a file deployed on each server you are monitoring (not on the PMM server). You can add custom metrics by navigating to one of the following directories:

  • For MySQL:  /usr/local/percona/pmm2/collectors/custom-queries/mysql
  • For PostgreSQL:  /usr/local/percona/pmm2/collectors/custom-queries/postgresql
  • For MongoDB:  This feature is not yet available – stay tuned!

You will notice the following subdirectories under each of those directories:

  • high-resolution/  – every 5 seconds
  • medium-resolution/ – every 10 seconds
  • low-resolution/ – every 60 seconds

Note you can change the frequency of the default metric collections up or down by going to the settings and changing them there.  It would be ideal if in the future we added a resolution config in the YML file directly.  But for now, it is a universal setting:

Percona Monitoring and Management metric collections

In each directory you will find an example .yml file with a format like the following:

mysql_oss_demo: 
  query: "select count(1) as comment_cnt from movie_json_test.movies_normalized_user_comments;"
  metrics: 
    - comment_cnt: 
        usage: "GAUGE" 
        description: "count of the number of comments coming in"

Our error during the live stream was that we forgot to include the database in our query (i.e., database_name.table_name), and there was a bug that prevented us from seeing the error in the log files. There is no setting for the database in the YML, so take note.

This will create a metric named mysql_oss_demo_comment_cnt in whatever resolution you specify.  Each YML file executes separately with its own connection.  This is important to understand: if you deploy lots of custom queries, you will see a correspondingly steady number of open connections (something to consider if you are doing custom collections).  Alternatively, you can add queries and metrics to the same file, but they are executed sequentially.  If, however, the entire YML file cannot be completed in less time than the defined resolution (i.e., finished within five seconds for high resolution), then the data will not be stored, but the query will continue to run.  This can lead to a query pile-up if you are not careful.  For instance, the above query generally takes 1-2 seconds to return the count.  I placed this in the medium bucket.  As I added load to the system, the query time backed up.

You can see the slowdown.  You need to be careful here and choose the appropriate resolution.  Moving this over to the low resolution solved the issue for me.

That said, query response time is dynamic based on the conditions of your server.  Because these queries will run to completion (and in parallel if the run time is longer than the resolution time), you should consider limiting the query time in MySQL and PostgreSQL to prevent too many queries from piling up.

In MySQL you can use:

mysql>  select /*+ MAX_EXECUTION_TIME(4) */  count(1) as comment_cnt from movie_json_test.movies_normalized_user_comments ;
ERROR 1317 (70100): Query execution was interrupted

And on PostgreSQL you can use:

SET statement_timeout = '4s'; 
select count(1) as comment_cnt from movies_normalized_user_comments ;
ERROR:  canceling statement due to statement timeout

By forcing a timeout you can protect yourself.  That said, these are “errors” so you may see errors in the error log.
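Putting these pieces together, the execution-time cap can live right inside the collector file. The sketch below is only an illustration: the file name oss_demo.yml is made up, the metric definition mirrors the example above, and the 4000 ms cap is an arbitrary value to adjust for your own query.

#!/bin/bash
# Deploy a low-resolution custom query whose runtime is capped with the
# MAX_EXECUTION_TIME optimizer hint (value in milliseconds).
cat <<'EOF' > /usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution/oss_demo.yml
mysql_oss_demo:
  query: "select /*+ MAX_EXECUTION_TIME(4000) */ count(1) as comment_cnt from movie_json_test.movies_normalized_user_comments;"
  metrics:
    - comment_cnt:
        usage: "GAUGE"
        description: "count of the number of comments coming in"
EOF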

You can check the system logs (syslog or messages) for errors with your custom queries (note that at the time of writing, as of PMM 2.0.21, errors were not making it into these logs because of a potential bug).  If the data is being collected and everything is set up correctly, head over to the default Grafana explorer or the “Advanced Data Exploration” dashboard in PMM.  Look for your metric and you should be able to see the data graphed out:

Advanced Data Exploration PMM

In the above screenshot, you will notice some pretty big gaps in the data (in green).  These gaps were caused by our query taking longer than the resolution bucket.  You can see when we moved to 60-second resolution (in orange), the graphs filled in.

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Oct
13
2021
--

Migrating MongoDB to Kubernetes

This blog post is the last in the series of articles on migrating databases to Kubernetes with Percona Operators. Two previous posts can be found here:

As you might have guessed already, this time we are going to cover the migration of MongoDB to Kubernetes. In the 1.10.0 release of Percona Distribution for MongoDB Operator, we introduced a new feature (in tech preview) that enables users to execute such migrations through regular MongoDB replication capabilities. We have already shown how it can be used to provide cross-regional disaster recovery for MongoDB, and we encourage you to read that post.

The Goal

There are two ways to migrate the database:

  1. Take the backup and restore it.
    – This option is the simplest one, but unfortunately comes with downtime. The bigger the database, the longer the recovery time is.
  2. Replicate the data to the new site and switch the application once replicas are in sync.
    – This allows the user to perform the migration and switch the application with either zero or little downtime.

This blog post is a walkthrough of how to migrate a MongoDB replica set to Kubernetes using these replication capabilities.

MongoDB replica set to Kubernetes

  1. We have a MongoDB cluster somewhere (the Source). It can be on-prem or on some virtual machine. For demo purposes, I’m going to use a standalone replica set node. The migration procedure for a replica set with multiple nodes or for a sharded cluster is almost identical.
  2. We have a Kubernetes cluster with Percona Operator (the Target). The operator deploys 3 standalone MongoDB nodes in unmanaged mode (we will talk about it below).
  3. Each node is exposed so that the nodes on the Source can reach them.
  4. We are going to replicate the data to Target nodes by adding them into the replica set.

As always, all blog post scripts and configuration files are publicly available in this GitHub repository.

Prerequisites

  • MongoDB cluster – either on-prem or VM. It is important to be able to configure mongod to some extent and add external nodes to the replica set.
  • Kubernetes cluster for the Target.
  • kubectl to deploy and manage the Operator and database on Kubernetes.

Prepare the Source

This section explains what preparations must be made on the Source to set up the replication.

Expose

All nodes in the replica set must form a mesh and be able to reach each other. The communication between the nodes can go through the public internet or some VPN. For demonstration purposes, we are going to expose the Source to the public internet by editing mongod.conf:

net:
  bindIp: <PUBLIC IP>

If you have multiple replica sets, you need to expose all nodes of each of them, including the config servers.

TLS

We take security seriously at Percona, and this is why by default our Operator deploys MongoDB clusters with encryption enabled. I have prepared a script that generates self-signed certificates and keys with the openssl tool. If you already have a Certificate Authority (CA) in use in your organization, generate the certificates and sign them with your CA.

The list of alternative names can be found either in this document or in this openssl configuration file. Note DNS.20 entry:

DNS.20      = *.mongo.spron.in

I’m going to use this wildcard entry to set up the replication between the nodes. The script also generates an ssl-secret.yaml file, which we are going to use on the Target side.
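Before distributing the certificate, it may be worth double-checking that the wildcard actually made it into the subject alternative names. A minimal sketch (our own addition), run against the combined certificate and key file referenced as mongod.pem in mongod.conf below:

#!/bin/bash
# Print the Subject Alternative Name section of the certificate;
# *.mongo.spron.in should be listed there. If the file starts with the
# private key and openssl complains, point this at the certificate file
# generated by the script instead.
openssl x509 -in /path/to/mongod.pem -noout -text | grep -A1 "Subject Alternative Name"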

You need to upload the CA and the certificate with its private key to every Source replica set node and then define them in mongod.conf:

# network interfaces
net:
  ...
  tls:
    mode: preferTLS
    CAFile: /path/to/ca.pem
    certificateKeyFile: /path/to/mongod.pem

security:
  clusterAuthMode: x509
  authorization: enabled

Note that I also set clusterAuthMode to x509. It enforces the use of x509 authentication. Test it carefully on a non-production environment first as it might break your existing replication.

Create System Users

Our Operator needs system users to manage the cluster and perform health checks. Usernames and passwords for system users should be the same on the Source and the Target. This script is going to generate the user-secret.yaml to use on the Target and mongo shell code to add the users on the Source (it is an example, do not use it in production).

Connect to the primary node on the Source and execute mongo shell commands generated by the script.

Prepare the Target

Apply Users and TLS secrets

System users’ credentials and TLS certificates must be the same on both sides. The scripts we used above generate Secret object manifests to use on the Target. Apply them:

$ kubectl apply -f ssl-secret.yaml
$ kubectl apply -f user-secret.yaml

Deploy the Operator and MongoDB Nodes

Please follow one of the installation guides to deploy the Operator. Usually, it is a one-step operation through kubectl:

$ kubectl apply -f operator.yaml

MongoDB nodes are deployed with a custom resource manifest – cr.yaml. There are the following important configuration items in it:

spec:
  unmanaged: true

This flag instructs the Operator to deploy the nodes in unmanaged mode, meaning they are not configured to form a cluster. Also, the Operator does not generate TLS certificates and system users.

spec:
…
  updateStrategy: Never

Disable the Smart Update feature as the cluster is unmanaged.

spec:
…
  secrets:
    users: my-new-cluster-secrets
    ssl: my-custom-ssl
    sslInternal: my-custom-ssl-internal

This section points to the Secret objects that we created in the previous step.

spec:
…
  replsets:
  - name: rs0
    size: 3
    expose:
      enabled: true
      exposeType: LoadBalancer

Remember that the nodes need to be exposed and reachable. To do this, we create a service per Pod. In our case, it is a LoadBalancer object, but it can be any other Service type.

spec:
...
  backup:
    enabled: false

If the cluster and nodes are unmanaged, the Operator should not be taking backups. 

Deploy unmanaged nodes with the following command:

$ kubectl apply -f cr.yaml

Once the nodes are up and running, also check the services. We will need the IP addresses of the new replicas to add them later to the replica set on the Source.

$ kubectl get services
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)           AGE
…
my-new-cluster-rs0-0    LoadBalancer   10.3.252.134   35.223.104.224   27017:31332/TCP   2m11s
my-new-cluster-rs0-1    LoadBalancer   10.3.244.166   34.134.210.223   27017:32647/TCP   81s
my-new-cluster-rs0-2    LoadBalancer   10.3.248.119   34.135.233.58    27017:32534/TCP   45s
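If the EXTERNAL-IP column still shows <pending>, a small polling loop like the sketch below can help (service names are taken from the output above; adjust them to your cluster, and note that some clouds report a hostname instead of an IP):

#!/bin/bash
# Wait until every replica set service has an external address assigned.
for SVC in my-new-cluster-rs0-0 my-new-cluster-rs0-1 my-new-cluster-rs0-2; do
  until IP=$(kubectl get service "$SVC" -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$IP" ]; do
    sleep 5
  done
  echo "$SVC -> $IP"
done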

Configure Domains

X509 authentication is strict and requires that the certificate’s common name or alternative name match the domain name of the node. As you remember, we had the wildcard *.mongo.spron.in included in our certificate. It can be any domain that you use, but make sure a certificate is issued for this domain.

I’m going to create A records pointing to the public IP addresses of the MongoDB nodes:

k8s-1.mongo.spron.in -> 35.223.104.224
k8s-2.mongo.spron.in -> 34.134.210.223
k8s-3.mongo.spron.in -> 34.135.233.58

Replicate the Data to the Target

It is time to add our nodes in the Kubernetes cluster to the replica set. Log in to the mongo shell on the Source and execute the following:

rs.add({ host: "k8s-1.mongo.spron.in", priority: 0, votes: 0} )
rs.add({ host: "k8s-2.mongo.spron.in", priority: 0, votes: 0} )
rs.add({ host: "k8s-3.mongo.spron.in", priority: 0, votes: 0} )

If everything is done correctly, these nodes are going to be added as secondaries. You can check the status with the rs.status() command.
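A quick way to eyeball the member states from the shell is a one-liner like this sketch (replace the placeholder with your Source primary and add credentials if authentication is enabled):

#!/bin/bash
# List every replica set member with its current state (PRIMARY/SECONDARY).
mongo --host <SOURCE PRIMARY> --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr); })'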

Cutover

Check that the newly added nodes are synchronized. The more data you have, the longer the synchronization process is going to take. To understand if the nodes are synchronized, you should compare the values of optime and optimeDate of the Primary node with the values for the Secondary node in the rs.status() output:

{
        "_id" : 0,
        "name" : "147.182.213.59:27017",
        "stateStr" : "PRIMARY",
...
        "optime" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDate" : ISODate("2021-10-08T12:43:50Z"),
...
},
{
        "_id" : 1,
        "name" : "k8s-1.mongo.spron.in:27017",
        "stateStr" : "SECONDARY",
...
        "optime" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDurable" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDate" : ISODate("2021-10-08T12:43:50Z"),
...
},

When nodes are synchronized, we are ready to perform the cutover. Please ensure that your application is configured properly to minimize downtime during the cutover.

The cutover is going to have two steps:

  1. One of the nodes on the Target becomes the primary.
  2. Operator starts managing the cluster and nodes on the Source are no longer present in the replica set.

Switch the Primary

Connect with the mongo shell to the primary on the Source side and make one of the nodes on the Target the primary. It can be done by changing the replica set configuration:

cfg = rs.config()
cfg.members[1].priority = 2
cfg.members[1].votes = 1
rs.reconfig(cfg)

We enable voting and set the priority to two on one of the nodes in the Kubernetes cluster. The member id can be different for you, so please look carefully at the output of the rs.config() command.

Start Managing the Cluster

Once the primary is running in Kubernetes, we are going to tell the Operator to start managing the cluster. Change spec.unmanaged to false in the Custom Resource with the patch command:

$ kubectl patch psmdb my-cluster-name --type=merge -p '{"spec":{"unmanaged": false}}'

You can also do this by changing cr.yaml and applying it. That is it: you now have a cluster in Kubernetes that is managed by the Operator.

Conclusion

You truly start to appreciate Operators once you get used to them. When I was writing this blog post I found it extremely annoying to deploy and configure a single MongoDB node on a Linux box; I don’t want to think about the cluster. Operators abstract Kubernetes primitives and database configuration and provide you with a fully operational database service instead of a bunch of nodes. Migration of MongoDB to Kubernetes is a challenging task, but it is much simpler with Operator. And once you are on Kubernetes, Operator takes care of all day-2 operations as well.

We encourage you to try out our operator. See our GitHub repository and check out the documentation.

Found a bug or have a feature idea? Feel free to submit it in JIRA.

For general questions, please raise the topic in the community forum.

Are you a developer looking to contribute? Please read our CONTRIBUTING.md and send a pull request.

Percona Distribution for MongoDB Operator

The Percona Distribution for MongoDB Operator simplifies running Percona Server for MongoDB on Kubernetes and provides automation for day-1 and day-2 operations. It’s based on the Kubernetes API and enables highly available environments. Regardless of where it is used, the Operator creates a member that is identical to other members created with the same Operator. This provides an assured level of stability to easily build test environments or deploy a repeatable, consistent database environment that meets Percona expert-recommended best practices.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Oct
13
2021
--

The Best Delta Children Kids’ Chairs

Delta Children is a United States-based company that specializes in children’s furniture. The company was founded in 1968 and has since expanded into a wide array of products.

Delta Children’s Products produces furniture for children and toddlers. The company offers a wide range of styles, including contemporary, modern, traditional, transitional, and cottage kids’ chairs.

Their goal is to provide parents with safe, high-quality products for their children. Delta pieces are known for being sturdy, attractive additions to your child’s room or play area.

The best thing is: you can find them online on Amazon! Here are some of our favorite chairs.

Delta Children Upholstered Chair


First up, we have the Delta Children Upholstered Chair – a fan favorite on Amazon! The fun character design of this chair makes it an excellent choice for boys and girls. It’s upholstered in microfiber fabric with a sturdy wood frame and metal legs. The chair can hold children weighing as much as 100 pounds.

Many reviewers love this colorful chair for their kids’ bedrooms and play areas. Others have purchased several chairs to create a fun reading nook in their child’s room. It can be hard to find a comfortable place for children to read, but the Delta Children Upholstered Chair makes a perfect spot.

It’s also a great choice for kids who are too big for their high chairs but not ready to sit at the table. Children can use this chair at playtime or as extra seating when they need it. Since this Delta piece is so affordable, you might even consider buying more than one!

Overall, this chair is an excellent value for the price. It’s an affordable piece that can be used in several situations!

Regardless of which cartoon is your child’s favorite, you will likely find a Delta Children Upholstered Kids Chair featuring their beloved character:

Delta Children High Back Upholstered Chair


Another good choice for kids is the Delta Children High Back Upholstered Chair. This budget-friendly chair features a thickly padded seat and backrest. It’s upholstered in polyester with a sturdy wood frame. The frame is built to hold up to 100 pounds of weight.

Many of these chairs feature characters that children love, whether from favorite TV shows or comic books.

It also makes a good spare chair for when you have company over! Children can place it in their rooms or at the dining table for added seating.

Delta Children Chair Desk with Storage Bin


Some kids need a little extra help with their homework. A chair desk with a storage bin is a great solution for children who have difficulty staying organized.

The Delta Children Chair Desk is helpful in any room of the house, from the bedroom to the playroom. The wide surface adds ample space for your child’s books, calculators, pens, and paper. Plus, the storage bin is a great place to keep crayons, books, and other supplies.

This piece is a great way to give your child their own little work area. For years, this chair desk has been a top choice on Amazon. It’s super functional, and the attractive design makes it a great addition to any child’s room.

Delta Children Cozee Fluffy Chair with Memory Foam


The next item on our list is the Delta Children Cozee Fluffy Chair with Memory Foam. This chair is excellent for lounging or watching TV.

The memory foam provides your child with an extra boost of comfort for those cozy movie nights. It’s also perfect for video game marathons.

The sturdy wood frame and foam filling will keep your child sitting comfortably for hours. Side pockets are designed to hold your child’s game console or snacks.

This piece is both comfortable and practical. It’s available in several colors and patterns, including:

Delta Children Chelsea Kids Upholstered Chair with Cup Holder

If you’re looking for a stylish and practical kids’ chair, then take a look at this next option: Delta Children Chelsea Kids Upholstered Chair with Cup Holder.

This piece is ideal for smaller children who may need additional support when sitting. It features an upholstered seat and backrest in polyester fabric. The seat and backrest are both padded for your child’s comfort.

The armrests on this chair will help keep younger kids from falling forward as they slouch down in their seats. Plus, the cup holder is a great place to store drinks or snacks.

This chair can also be considered a kids accent chair, as it comes in stylish colors: navy, soft pink, or grey.

Delta Children Cozee Cube Chair with Memory Foam


Delta Children Cozee Cube Chair with Memory Foam is another great option on our list of the best chairs for kids.

The memory foam filling inside the chair offers extra comfort to your child as they sit down after a long day at school. It combines the comfort of a bean bag chair with the stability and support of traditional kids’ chairs.

This chair is available in three colors: navy, grey, and pink. It’s not designed for big kids, so it’s perfect for preschoolers or smaller children. If you’re looking for a gift idea that your child will love, then you have to check out this next chair.

Delta Children Saucer Chair

The final item on our list features a Delta Children Saucer Chair.

This comfy piece is made from a foam-filled cushion covered in durable polyester fabric. It’s designed to give your child extra comfort when lounging around or watching TV.

The durable metal frame is built to last. This piece is suitable for children ages three and up.

This stylish chair comes in designs that pay tribute to popular franchises:

Summary


Delta Children Chairs are great gift ideas for your kids. They are available in several attractive designs and colors.

These chairs are particularly useful if you’re looking for something to make your child feel more comfortable while watching TV or playing video games.

They are comfortable and practical while also giving every kid their own special place to sit.

More interesting reads

The post The Best Delta Children Kids’ Chairs appeared first on Comfy Bummy.

Oct
12
2021
--

Why Linux HugePages are Super Important for Database Servers: A Case with PostgreSQL

Linux HugePages PostgreSQL

Often users come to us with incidents of database crashes due to OOM Killer. The Out Of Memory killer terminates PostgreSQL processes and remains the top reason for most of the PostgreSQL database crashes reported to us. There could be multiple reasons why a host machine could run out of memory, and the most common problems are:

  1. Poorly tuned memory on the host machine.
  2. A high value of work_mem specified globally (at the instance level). Users often underestimate the multiplying effect of such blanket decisions.
  3. A high number of connections. Users ignore the fact that even a non-active connection can hold a good amount of allocated memory.
  4. Other programs co-hosted on the same machine consuming resources.

Even though we regularly assist in tuning both host machines and databases, we do not always take the time to explain how and why HugePages are important and justify it with data. Thanks to repeated probing by my friend and colleague Fernando, I couldn’t resist doing so this time.

The Problem

Let me explain the problem with a testable and repeatable case. This might be helpful if anyone wants to test the case in their own way.

Test Environment

The test machine is equipped with 40 CPU cores (80 vCPUs) and 192 GB of installed memory. I don’t want to overload this server with too many connections, so only 80 connections are used for the test. Yes, just 80 connections, which we should expect in any environment and is very realistic. Transparent HugePages (THP) is disabled. I don’t want to divert from the topic by explaining why it is not a good idea to have THP on a database server, but I promise to prepare another blog post on that.

In order to have relatively persistent connections, just like the ones from application-side poolers (or even from external connection poolers), pgBouncer is used to keep all 80 connections persistent throughout the tests. The following is the pgBouncer configuration used:

[databases]
sbtest2 = host=localhost port=5432 dbname=sbtest2

[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
logfile = /tmp/pgbouncer.log
pidfile = /tmp/pgbouncer.pid
admin_users = postgres
default_pool_size=100
min_pool_size=80
server_lifetime=432000

As we can see, the server_lifetime parameter is set to a high value so that the connections from the pooler to PostgreSQL are not destroyed. The following PostgreSQL parameter modifications are incorporated to mimic some common customer environment settings.

logging_collector = 'on'
max_connections = '1000'
work_mem = '32MB'
checkpoint_timeout = '30min'
checkpoint_completion_target = '0.92'
shared_buffers = '138GB'
shared_preload_libraries = 'pg_stat_statements'

The test load is created using sysbench

sysbench /usr/share/sysbench/oltp_point_select.lua --db-driver=pgsql --pgsql-host=localhost --pgsql-port=6432 --pgsql-db=sbtest2 --pgsql-user=postgres --pgsql-password=vagrant --threads=80 --report-interval=1 --tables=100 --table-size=37000000 prepare

and then

sysbench /usr/share/sysbench/oltp_point_select.lua --db-driver=pgsql --pgsql-host=localhost --pgsql-port=6432 --pgsql-db=sbtest2 --pgsql-user=postgres --pgsql-password=vagrant --threads=80 --report-interval=1 --time=86400  --tables=80 --table-size=37000000  run

The first (prepare) stage puts a write load on the server, and the second one a read-only load.

I am not attempting to explain the theory and concepts behind HugePages, but concentrate on the impact analysis. Please refer to the LWN article: Five-Level Page Tables and Andres Freund’s blog post Measuring the Memory Overhead of a Postgres Connection for understanding some of the concepts.

Test Observations

During the test, memory consumption was checked using the Linux free utility. When using the regular pool of memory pages, consumption started at a really low value but kept steadily increasing (please see the screenshot below), and the “Available” memory was depleted at a faster rate.

Available Memory

Towards the end, swap activity also started. It is captured in the vmstat output below:

swap activity

Information from /proc/meminfo reveals that the total page table size has grown to more than 25 GB from the initial 45 MB.
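A simple way to watch this growth during a test run is to poll the standard /proc/meminfo counters (a minimal sketch):

#!/bin/bash
# Print the total page table size and the available memory every 5 seconds.
watch -n 5 'grep -E "^(PageTables|MemAvailable)" /proc/meminfo'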

This is not just memory wastage; it is a huge overhead impacting the overall execution of the program and the operating system. This size is the total of the lower page table entries of the 80+ PostgreSQL processes.

The same can be verified by checking each PostgreSQL process individually; a sample is shown below.
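The per-process page table size is reported as VmPTE in /proc/<pid>/status; a minimal sketch (our own addition, not the screenshot from the test) to dump it for every PostgreSQL process:

#!/bin/bash
# Show the page table size (VmPTE) of each PostgreSQL process.
for PID in $(pgrep "postgres|postmaster"); do
  echo -n "$PID: "; grep ^VmPTE /proc/$PID/status
done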

So the total page table size (25 GB) should be approximately this per-process value multiplied by 80 (connections). Since this synthetic benchmark sends an almost identical workload through all connections, all individual processes have values very close to the one captured above.

The following shell line can be used for checking the Pss (Proportional set size). Since PostgreSQL uses Linux shared memory, focusing on Rss won’t be meaningful.

for PID in $(pgrep "postgres|postmaster") ; do awk '/Pss/ {PSS+=$2} END{getline cmd < "/proc/'$PID'/cmdline"; sub("\0", " ", cmd);printf "%.0f --> %s (%s)\n", PSS, cmd, '$PID'}' /proc/$PID/smaps ; done|sort -n

Without Pss information, there is no easy method to understand the memory responsibility per process.

In a typical database system where we have a considerable DML load, the background processes of PostgreSQL such as the checkpointer, background writer, or autovacuum workers will be touching more pages in the shared memory. The corresponding Pss will be higher for those processes.

Postgres Process

This should explain why the checkpointer, background writer, or even the postmaster often becomes the usual victim/target of the OOM Killer. As we can see above, they carry the biggest responsibility for the shared memory.

After several hours of execution, the individual sessions touched more shared memory pages. As a consequence, the per-process Pss values were rearranged: the checkpointer is responsible for less, as other sessions now share the responsibility.

However, the checkpointer retains the highest share.

Even though it is not important for this test, it is worth mentioning that this kind of load pattern is specific to synthetic benchmarking, because every session does pretty much the same job. That is not a good approximation of a typical application load, where we usually see the checkpointer and background writers carry the major responsibility.

The Solution: Enable HugePages

The solution for such bloated page tables and the associated problems is to make use of HugePages instead. We can figure out how much memory should be allocated to HugePages by checking the VmPeak of the postmaster process. For example, if 4357 is the PID of the postmaster:

grep ^VmPeak /proc/4357/status

This gives the amount of memory required in KB:

VmPeak: 148392404 kB

This much memory needs to fit into huge pages. Converting this value into 2 MB pages:

postgres=# select 148392404/1024/2;
?column?
----------
    72457
(1 row)

Specify this value for vm.nr_hugepages in /etc/sysctl.conf, for example:

vm.nr_hugepages = 72457

Now shut down the PostgreSQL instance and execute:

sysctl -p

We can now verify whether the requested number of huge pages was created:

grep ^Huge /proc/meminfo
HugePages_Total:   72457
HugePages_Free:    72457
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:        148391936 kB

If we start up PostgreSQL at this stage, we can see that HugePages_Rsvd is allocated.

$ grep ^Huge /proc/meminfo
HugePages_Total:   72457
HugePages_Free:    70919
HugePages_Rsvd:    70833
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:        148391936 kB

If everything is fine, I prefer to make sure that PostgreSQL always uses HugePages, because I would rather have PostgreSQL fail to start than run into problems or crashes later:

postgres=# ALTER SYSTEM SET huge_pages = on;

The above change needs a restart of the PostgreSQL instance.
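A restart-and-verify sketch (the service name is an assumption and differs between distributions and versions):

#!/bin/bash
# Restart PostgreSQL and confirm that huge pages are actually in use.
sudo systemctl restart postgresql      # service name may differ, e.g. postgresql-13
sudo -u postgres psql -c "SHOW huge_pages;"
grep ^Huge /proc/meminfo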

Tests with HugePages “ON”

The HugePages are created in advance, even before PostgreSQL startup. PostgreSQL just allocates them and uses them, so there won’t be any noticeable change in the free output before and after the startup. PostgreSQL allocates its shared memory into these HugePages if they are already available. PostgreSQL’s shared_buffers is the biggest occupant of this shared memory.

HugePages PostgreSQL

The first output of free -h in the above screenshot was generated before the PostgreSQL startup and the second one after it. As we can see, there is no noticeable change.

I ran the same test for several hours and there wasn’t any change; the only noticeable change, even after many hours of running, was the shift of “free” memory to the filesystem cache, which is expected and what we want to achieve. The total “available” memory remained pretty much constant, as we can see in the following screenshot.

Available Memory PostgreSQL

The total page table size also remained pretty much the same:

As we can see, the difference is huge: just 61 MB with HugePages instead of the 25+ GB previously. The Pss per session is also reduced drastically:

Pss per session

The biggest advantage I observed is that the checkpointer and background writer are no longer accountable for several GB of RAM.

Instead, they are accountable for only a few MB of consumption. Obviously, they won’t be a candidate victim for the OOM Killer anymore.

Conclusion

In this blog post, we discussed how Linux Huge Pages can potentially save the database server from OOM Killers and associated crashes. We could see two improvements:

  1. Overall memory consumption was reduced by a big margin. Without HugePages, the server almost ran out of memory (the available memory was completely depleted, and swapping activity started). However, once we switched to HugePages, 38-39 GB remained as Available/Linux filesystem cache. This is a huge saving.
  2. With HugePages enabled, PostgreSQL background processes are not accountable for a large amount of shared memory, so they won’t easily become candidate victims/targets for the OOM Killer.

These improvements can potentially save the system if it is on the brink of OOM condition, but I don’t want to make any claim that this will protect the database from all OOM conditions forever.

HugePages (hugetlbfs) originally landed in the Linux kernel in 2002 to address the requirements of database systems that need to address a large amount of memory. I can see that the design goals are still valid.

There are other additional indirect benefits of using HugePages:

  1. HugePages never get swapped out. When PostgreSQL shared buffers are in HugePages, this can produce more consistent and predictable performance. I shall discuss that in another article.
  2. Linux uses a multi-level page lookup method. HugePages are implemented using direct pointers to pages from the middle layer (a 2 MB huge page would be found directly at the PMD level, with no intervening PTE page). The address translation becomes considerably simpler. Since this is a high-frequency operation in a database server with a large amount of memory, the gains are multiplied.

Note: The HugePages discussed in this blog post are fixed-size (2 MB) huge pages.

Additionally, as a side note, I want to mention that there have been a lot of improvements in Transparent HugePages (THP) over the years which allow applications to use HugePages without any code modification. THP is often considered a replacement for regular HugePages (hugetlbfs) for generic workloads. However, the usage of THP is discouraged on database systems as it can lead to memory fragmentation and increased delays. I want to cover that topic in another post and just want to mention that these are not PostgreSQL-specific problems but affect every database system. For example:

  1. Oracle recommends disabling THP. Reference link
  2. MongoDB recommends disabling THP. Reference link
  3. “THP is known to cause performance degradation with PostgreSQL for some users on some Linux versions.” Reference link.

As more companies look at migrating away from Oracle or implementing new databases alongside their applications, PostgreSQL is often the best option for those who want to run on open source databases.

Read Our New White Paper:

Why Customers Choose Percona for PostgreSQL

Oct
12
2021
--

ProxySQL 2.3.0: Enhanced Support for MySQL Group Replication

ProxySQL 2.3.0 was recently released, and when I was reading the release notes, I was really impressed with the Group Replication enhancements and features. I thought of experimenting with them and writing a blog post about it. Here, I have focused on the following two topics:

  • When the replication lag threshold is reached, ProxySQL moves the server to the SHUNNED state instead of moving it to the offline hostgroup. Shunning a server is performed gracefully and does not immediately drop all backend connections.
  • Servers can be put into maintenance through ProxySQL using “OFFLINE_SOFT”.

Test Environment

To test this, I have configured a three-node GR cluster (gr1, gr2, gr3) in my local environment, set up as a single-primary cluster (1 writer, 2 readers).

mysql> select member_host,member_state,member_role,member_version from performance_schema.replication_group_members;
+-------------+--------------+-------------+----------------+
| member_host | member_state | member_role | member_version |
+-------------+--------------+-------------+----------------+
| gr1         | ONLINE       | PRIMARY     | 8.0.26         |
| gr2         | ONLINE       | SECONDARY   | 8.0.26         |
| gr3         | ONLINE       | SECONDARY   | 8.0.26         |
+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Currently, there is no transaction delay in the GR cluster.

mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.00 sec)

To compare the results with older ProxySQL versions, I have configured two ProxySQL instances. One is the latest version (2.3.0) and the other is an older version (2.2.2).

ProxySQL 1: (2.3.0)

mysql>  show variables like '%admin-version%';
+---------------+------------------+
| Variable_name | Value            |
+---------------+------------------+
| admin-version | 2.3.1-8-g794b621 |
+---------------+------------------+
1 row in set (0.00 sec)

ProxySQL 2: ( < 2.3.0 )

mysql> show variables like '%admin-version%';
+---------------+-------------------+
| Variable_name | Value             |
+---------------+-------------------+
| admin-version | 2.2.2-11-g0e7630d |
+---------------+-------------------+
1 row in set (0.01 sec)

The GR nodes are configured on both ProxySQL instances:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 3            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

Host group settings are:

mysql> select writer_hostgroup,reader_hostgroup,offline_hostgroup from runtime_mysql_group_replication_hostgroups\G
*************************** 1. row ***************************
 writer_hostgroup: 2
 reader_hostgroup: 3
offline_hostgroup: 4
1 row in set (0.00 sec)

Scenario 1: When the replication lag threshold is reached, ProxySQL moves the server to the SHUNNED state instead of moving it to the offline hostgroup.

Here, the replication lag threshold configured in ProxySQL is “20”.

mysql> select @@mysql-monitor_groupreplication_max_transactions_behind_count;
+----------------------------------------------------------------+
| @@mysql-monitor_groupreplication_max_transactions_behind_count |
+----------------------------------------------------------------+
| 20                                                             |
+----------------------------------------------------------------+
1 row in set (0.00 sec)
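If you need a different threshold, it can be changed through the ProxySQL admin interface. A sketch, assuming the default admin port 6032 and admin credentials (adjust to your setup):

#!/bin/bash
# Change the Group Replication lag threshold and make it effective.
mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
UPDATE global_variables SET variable_value='20'
 WHERE variable_name='mysql-monitor_groupreplication_max_transactions_behind_count';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;"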

As per my current setting:

  • On ProxySQL 2.3.0, if transactions_behind reaches 20, the node will be put into the “SHUNNED” state.
  • On ProxySQL versions older than 2.3.0, if transactions_behind reaches 20, the node will be put into the offline hostgroup.

To manually create replication lag, I am going to start a read/write load on the GR cluster using sysbench:

sysbench oltp_read_write --tables=10 --table_size=1000000  --mysql-host=172.28.0.96 --mysql-port=6033 --mysql-user=monitor --mysql-password="Monitor@321" --mysql-db=jc --time=30000 --threads=50 --report-interval=1 run

As expected, now I can see the transaction delay in the cluster.

mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                 457 |                    0 |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.00 sec)

Let’s see how the different ProxySQL versions are behaving now.

At ProxySQL 2.3.0:

mysql> select hostgroup_id,hostname, status from runtime_mysql_servers;
+--------------+----------+---------+
| hostgroup_id | hostname | status  |
+--------------+----------+---------+
| 2            | gr1      | ONLINE  |
| 3            | gr2      | SHUNNED |
| 3            | gr3      | SHUNNED |
+--------------+----------+---------+
3 rows in set (0.00 sec)

As expected, both reader nodes (gr2, gr3) are moved to the “SHUNNED” state, and the servers are still present in the reader hostgroup.

At “< ProxySQL 2.3.0”:

mysql> select hostgroup_id,hostname, status from runtime_mysql_servers;
+--------------+----------+---------+
| hostgroup_id | hostname | status  |
+--------------+----------+---------+
| 2            | gr1      | ONLINE  |
| 4            | gr2      | ONLINE  |
| 4            | gr3      | ONLINE  |
+--------------+----------+---------+
3 rows in set (0.00 sec)

The server status is still ONLINE, but the hostgroup_id has changed from 3 to 4, which is the offline hostgroup.

So, when comparing both results, it seems the latest release (2.3.0) has the correct implementation. Shunning the node just temporarily takes the server out of use until the replication lag issue is fixed. Shunning is performed gracefully and does not immediately drop all backend connections, and you can see the servers are still available in the reader hostgroup. With the previous implementation, the servers are moved to the offline hostgroup immediately.

Again, from ProxySQL 2.3.0, whether nodes are shunned during lag depends on the parameter “mysql-monitor_groupreplication_max_transactions_behind_for_read_only”. The parameter has three values (0, 1, 2):

  • “0” means only servers with “read_only=0” are placed as SHUNNED.
  • “1” means only servers with “read_only=1” are placed as SHUNNED. This is the default.
  • “2” means both servers with “read_only=1” and “read_only=0” are placed as SHUNNED.

Scenario 2: Servers can be put into maintenance through ProxySQL using “OFFLINE_SOFT”.

Personally, I would say this is one of the nicest implementations. From ProxySQL 2.3.0, ProxySQL itself can put servers into maintenance mode using “OFFLINE_SOFT”. In older versions you could also set it, but it was not stable. Let me explain the behavior of the latest and the older versions.

Both ProxySQLs have the following configuration:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 3            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

— Now, I am going to put the server “gr3” into maintenance mode on both ProxySQL instances.

After putting it into maintenance mode, both ProxySQL instances have the following output:

mysql> update mysql_servers set status='offline_soft' where hostname='gr3'; load mysql servers to runtime; save mysql servers to disk;
Query OK, 1 row affected (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.04 sec)

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------------+
| hostgroup_id | hostname | status       |
+--------------+----------+--------------+
| 2            | gr1      | ONLINE       |
| 3            | gr3      | OFFLINE_SOFT |
| 3            | gr2      | ONLINE       |
+--------------+----------+--------------+
3 rows in set (0.00 sec)

— Now, I am going to stop the group replication service on the “gr3”.

mysql> stop group_replication;
Query OK, 0 rows affected (4.59 sec)

Let’s check the ProxySQL status now.

At ProxySQL 2.3.0:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------------+
| hostgroup_id | hostname | status       |
+--------------+----------+--------------+
| 2            | gr1      | ONLINE       |
| 3            | gr3      | OFFLINE_SOFT |
| 3            | gr2      | ONLINE       |
+--------------+----------+--------------+
3 rows in set (0.00 sec)

The latest release still maintains the same status. “gr3” is still in maintenance mode.

At “< ProxySQL 2.3.0”:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 4            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

The older ProxySQL release removed the “OFFLINE_SOFT” flag from “gr3” and put it into the offline hostgroup (hg 4).

— Now, I am again going to start the group_replication service on gr3.

mysql> start group_replication;
Query OK, 0 rows affected (2.58 sec)

At ProxySQL 2.3.0:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------------+
| hostgroup_id | hostname | status       |
+--------------+----------+--------------+
| 2            | gr1      | ONLINE       |
| 3            | gr3      | OFFLINE_SOFT |
| 3            | gr2      | ONLINE       |
+--------------+----------+--------------+
3 rows in set (0.00 sec)

The latest release still maintains the same state.

At “< ProxySQL 2.3.0”:

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 3            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

With the older release, the server “gr3” came back ONLINE automatically. This is not the expected behavior because we manually put that node into maintenance mode.

As you can see from the comparison of the latest and older releases, the latest release has the right implementation. To take the node out of maintenance, we have to manually update the status to “ONLINE” as shown below.

mysql> update mysql_servers set status='online' where hostname='gr3'; load mysql servers to runtime; save mysql servers to disk;
Query OK, 1 row affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.04 sec)

mysql> select hostgroup_id,hostname,status from runtime_mySQL_servers;
+--------------+----------+--------+
| hostgroup_id | hostname | status |
+--------------+----------+--------+
| 2            | gr1      | ONLINE |
| 3            | gr3      | ONLINE |
| 3            | gr2      | ONLINE |
+--------------+----------+--------+
3 rows in set (0.00 sec)

I believe these two new implementations are very helpful to those who are running a GR + ProxySQL setup. Apart from GR, the recent major releases have other important features as well. I will try to write a blog about them in the future, so be on the lookout for that.


Oct
12
2021
--

The Ultimate Guide To Moon Chairs For Kids

Moon chairs for kids have been popular for a long time, and there are plenty of reasons why. If you have found this article, we assume that you might be interested in buying one as well.

If you are keen to learn more about moon chairs, we suggest you keep reading. You will know what a kids’ moon chair is, the benefits of owning one, how to clean it, and where you can purchase them. We have also tested their safety, so stay tuned.

What is a Kids’ Moon Chair?

A kids’ moon chair is a small foldable chair for toddlers and small kids. It resembles the kids’ egg chair, but the size is smaller, and the weight is lower.

Moon chairs can be used indoors and outdoors because they’re made from sturdy material that’s easy to clean with a damp rag. That being said, they have been known to fade in the sun, so it’s best to give them some protection from the elements when possible.

The Moon Chair, also known as a “Kinder Egg Chair” or “Moon Seat,” is a trendy piece of furniture for toddlers because it fits their tiny bodies so well. In addition to being fun and stylish, these chairs provide a healthy and comfortable way for your toddler to sit.

What Are The Benefits Of Owning A Moon Chair?

Now that you know what a moon chair for kids is, we will look at the benefits of owning a moon chair. As you can imagine, there are many, but we have limited this article to the most common features people love about the moon chair.

Foldable

Most toddler chairs are small and can easily be folded for storage when not in use. This makes moon chairs perfect for anyone with limited space or looking to save money by minimizing their purchases.

Easy To Carry

The chair’s smaller size makes it easy to carry around your house, outside on the porch, or from room to room.

In addition, the moon chair’s low weight makes it easy for toddlers to pick up and carry themselves. It also means you won’t have difficulty loading the chair in and out of a car trunk when going on a family road trip.

Comfortable

Moon chairs for kids provide a comfortable place to lean back and relax. With a thick cushion, you can be sure that your young child will sit comfortably for as long as they like.

In addition, the thick cushion makes it easier to clean because you can remove most spills with a damp rag. In other words, you won’t have to drag out the vacuum when your child spills their juice on the chair!

Note that not all moon chairs are plushy. Some versions may be made for a lean-back feel like the ones adults would use, but most kids prefer something soft and cozy. It’s really up to your toddler which one they prefer.

Fun Designs

The great thing about kids’ moon chairs is that you can find them in practically any color, shape, and design. The sky’s the limit when it comes to choosing your child’s favorite style because there are so many varieties available.

Parents typically buy their children one or two styles based on their favorite color or how well they like them. For example, your child may like the green one with racing stripes because that is the color of their favorite sports team.

You can also choose styles based on how many kids will be sharing the chair. If you’re buying two chairs for your children, it’s a good idea to purchase similar moon chairs. If you’re buying only one chair, then it’s OK to get any style you like.

Cheap

You can typically find kids’ moon chairs for under $30, which is a steal considering how well-made and durable they are. Most kids’ moon chairs will last until your child has outgrown them.

Despite their lower price, you won’t have to sacrifice quality to save money. Keep in mind that price is just one small factor when deciding which chair is right for you and your child.

How Safe Are Moon Chairs?

With so many benefits, you may wonder if moon chairs are safe. The answer is yes! Moon chairs for kids are entirely safe when used correctly and under normal circumstances.

That being said, there are safety precautions you should take to ensure your child’s safety when using their chair, just like with any other piece of furniture or toy they’ll play with.

The first thing you should do is check the weight limit for your moon chair. Most chairs are rated for children weighing up to 100-125 pounds, but the limit may vary depending on how sturdy or well-made your specific model is.

As long as you don’t purchase a chair that’s too small for your child’s size and weight, you won’t have to worry about it breaking.

You should also ensure that your child doesn’t climb on top of their chair or use it as a step ladder. Be sure to keep an eye on them as they play and keep the chairs out of rooms where children aren’t allowed, such as bathrooms, kitchens, or laundry rooms.

Can I Buy Moon Chairs for Kids on Amazon?

Of course, you can buy children’s moon chairs on Amazon! The great thing about shopping online is that it makes finding the chair of your child’s dreams a cinch.

You’ll be able to shop for moon chairs by searching through different categories such as most popular, top-rated, and best selling. You may also see some suggested items or other products that may interest you.

In addition, Amazon has a helpful customer review section where parents leave feedback about the moon chair. The reviews are valuable because they provide real-world insight from other customers who have already purchased and used the product.

If you want to read more about the product before buying it, then be sure to check out the customer reviews.

You can browse for children’s moon chairs here.

How To Clean a Moon Chair

Moon chairs can get dirty quickly with a child playing in them constantly. Luckily, it’s easy to clean when you have the right tools.

To effectively remove any dirt from your children’s moon chair, you will need:

  • A vacuum cleaner
  • Rubber gloves
  • A spray bottle filled with a soap and water solution

To start, use your vacuum cleaner to remove any large chunks of dirt. Then, put on rubber gloves and spray the moon chair with the soap and water mixture.

You can also add in some vinegar or bleach, depending on how dirty the chair is. The solution will help break down any grime so that it’s easier to clean up.

Next, go over the chair with a damp sponge. Be sure to rub off any tough dirt spots and stains that remain on your children’s toy. You may need to repeat this step several times, depending on how dirty the toy is.

When you’re done cleaning it, be sure to let the moon chair air dry before allowing your child to play with it again.


Oct
11
2021
--

Talking Drupal #315 – WordPress vs Drupal Communities

Today we are talking with Tara King about what the Drupal and WordPress communities could learn from one another.

TalkingDrupal.com/315

Topics

  • What makes the WP community different?
  • Automattic influence on the community.
  • What could the Drupal community learn from the WP community?
  • Is there a lot of crossover in people in each community?
  • Do you think there is anything the WP community could learn from Drupal?
  • What are some struggles shared across the two communities?
  • How does platform sustainability differ between the two communities?
  • Why do you think so many more people gravitate towards WP?

Module of the Week

Resources

Hosts

  • Nic Laflin – www.nLighteneddevelopment.com – @nicxvan
  • John Picozzi – www.epam.com – @johnpicozzi
  • Tara King – @sparklingrobots

Oct
11
2021
--

Upgrade Fix in Percona Distribution for PostgreSQL, Percona Server for MongoDB Updates: Release Roundup October 11, 2021

It’s release roundup time again here at Percona!

Percona is a leading provider of unbiased open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive.

Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. They offer highlights and critical information, as well as links to the full release notes and direct download links for the software or service itself.

Today’s post includes those releases and updates that have come out since September 27, 2021. Take a look!

 

Percona Distribution for PostgreSQL 13.4 Update

The Percona Distribution for PostgreSQL 13.4 Update was released on September 30, 2021. It installs PostgreSQL and complements it with a selection of extensions that enable solving essential practical tasks efficiently. This update fixes an issue that prevented users from upgrading from the previous PGDG (PostgreSQL Global Development Group) version of PostgreSQL to Percona Distribution for PostgreSQL on Ubuntu.

Download Percona Distribution for PostgreSQL 13.4 Update

 

Percona Distribution for PostgreSQL Operator 1.0.0

On October 7, 2021, Percona Distribution for PostgreSQL Operator 1.0.0 was released. It automates the lifecycle of, and simplifies deploying and managing, open source PostgreSQL clusters on Kubernetes. Release highlights include the ability to configure scheduled backups declaratively in the deploy/cr.yaml file, OpenShift compatibility, and, for the first time, functional test coverage of the Operator's main functionality, which ensures overall quality and stability.

Download Percona Distribution for PostgreSQL Operator 1.0.0

 

Percona Server for MongoDB 4.4.9-10

On October 7, 2021, we released Percona Server for MongoDB 4.4.9-10. It is an enhanced, source-available, and highly scalable database that is a fully compatible, drop-in replacement for MongoDB 4.4.9 Community Edition, supporting MongoDB 4.4.9 protocols and drivers. Note: Beginning with MongoDB 4.4.2, several data-impacting or corrupting bugs were introduced; they are fixed in MongoDB 4.4.9, and Percona Server for MongoDB 4.4.9-10 includes the upstream fixes of these bugs.

Download Percona Server for MongoDB 4.4.9-10

 

Percona Server for MongoDB 4.0.27-22

September 28, 2021, saw the release of Percona Server for MongoDB 4.0.27-22. An improvement in this release disables the deletion of the mongod user in RPM packages.

Download Percona Server for MongoDB 4.0.27-22

 

Percona Distribution for MongoDB Operator 1.10.0

Percona Distribution for MongoDB Operator 1.10.0 was released on September 30, 2021. The Operator contains the necessary Kubernetes settings to maintain a consistent Percona Server for MongoDB instance and simplifies the deployment, modification, or deletion of items in your Percona Server for MongoDB environment.

Download Percona Distribution for MongoDB Operator 1.10.0

 

That’s it for this roundup, and be sure to follow us on Twitter to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments.

