Dec
31
2021
--

Percona Support Team New Year Greetings!

Happy New Year from Percona Support

At Percona Support, we spend a few hours per week doing Labs: short collective projects not directly related to tickets.
One of our latest labs was about getting ready for the winter holiday season, which means preparing the tree and decorating it. To do it we used our favorite tools: MySQL, and ASCII art (https://en.wikipedia.org/wiki/ASCII_art). As a result, we created a function that prints a New Year Tree (https://en.wikipedia.org/wiki/New_Year_tree) into the terminal and decorates it with the symbols we want.
For example, to create a three-level tree, decorated with stars, call the function as follows:

$ mysql ny_tree -e "select ny_tree(3, '*')" --vertical --skip-column-names -s
*************************** 1. row ***************************

   /\
  /__\
  /* \
 /____\
 /  * \
/______\
   ||

Since MySQL supports Unicode, you can use any symbol, provided it has the same width as the space character. For example, you can decorate your tree with wax candles without any risk of setting your home on fire:
$ mysql ny_tree -e "select ny_tree(4, '?')" --vertical --skip-column-names -s
*************************** 1. row ***************************

    /\
   /__\
   /? \
  /____\
  /?  ?\
 /______\
 /?? ?  \
/________\
    ||
You can change your terminal colors to ones that look more festive, align the size of the terminal with the number of levels in the tree, and call the function in a loop to get an animated picture.
For example, the animation at the beginning of this post was created by a few calls like:
$ for i in `seq 1 1 1000`; do mysql ny_tree -e "select '\\nHappy New 2022 Year\!', ny_tree(12, '?')" --vertical --skip-column-names -s; sleep 1; done
I only adjusted the number of levels for each terminal size.
Code for the ny_tree function and its two helper functions is located at https://github.com/Percona-Lab/ny-tree

We, the Percona Support Team, wish you Happy Holidays and a Happy New Year without serious database issues!
Dec
30
2021
--

MongoDB Config Server Upgrade From SCCC to CSRS ReplicaSet

MongoDB Config Server Upgrade

Recently, some of our customers have been upgrading to v4.x from v3.0 (yes, we still see older MongoDB versions out there, and we suggest everyone upgrade to stay on the current GA!). There were some pain points in the upgrade process; in particular, before migrating to v3.4, you will need to change from the SCCC config setup to the CSRS replicaSet setup. I ran some interesting tests during that process and would like to share them here. Even though this topic is pretty dated, it should be beneficial to people who still need to migrate from an older version and are stuck at this config server upgrade.

In this blog post, we will see how to migrate a mirrored config server configuration (SCCC) to the replicaSet configuration (CSRS), along with changing the storage engine from MMAPv1 to WiredTiger. Starting with MongoDB 3.2.4, the replicaSet configuration is supported for config servers too, and support for the SCCC (Sync Cluster Connection Configuration) setup is removed in MongoDB 3.4. So it is important to make sure the config servers are configured as a replicaSet before upgrading to MongoDB 3.4. Also, the config servers must run on WiredTiger when configured as a replicaSet. There are two ways to do this: one when you want to replace the existing config servers in place, and another to migrate to a new set of servers/ports. Let’s see them in the following topics.

Migrate From SCCC to CSRS on the Different Hosts/Ports:

This section gives a brief overview of migrating the existing mirrored hosts/ports to different hosts or ports. It is always recommended to back up your config servers before these steps and to schedule downtime to avoid any writes during this crucial step. The test sharded cluster setup runs on Percona Server for MongoDB (PSMDB) v3.2.22 with SCCC set up on the following ports:

47080 – mongos
47087, 47088, 47089 – mongoc – config servers with mmapv1 engine (sccc)
47081, 47082, 47083 – shard01 replicaset
47084, 47085, 47086 – shard02 replicaset

New Config servers to migrate (for testing purposes, using localhost but different ports to migrate):

47090, 47091, 47092 – new mongoc (csrs)

Now stop the balancer:

mongos> sh.stopBalancer()
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "balancer" })
mongos> 
mongos> sh.getBalancerState()
false

 

Initiate the replicaSet on one of the config servers (here using the config DB running on 47087). Use this member as the base for migrating from SCCC to CSRS:

$ mongo32 localhost:47087/admin
MongoDB shell version: 3.2.21-2-g105acca0d4
connecting to: localhost:47087/admin
configsvr> 
configsvr> rs.initiate({ _id: "configRepl", configsvr: true, version:1, members: [ {_id:0, host: "localhost:47087"} ] } )
{ "ok" : 1 }

 

Start the new config DB instances using the same replicaSet name used in the initiate command above:

$ /home/balaguru/mongodb/mongodb-linux-x86_64-3.2.21-2-g105acca0d4/bin/mongod --dbpath /home/balaguru/mongodb/testshard32/data/configrepl/rs1/db/ --logpath /home/balaguru/mongodb/testshard32/data/configrepl/rs1/mongod.log --port 47090 --fork --configsvr --storageEngine wiredTiger --wiredTigerCacheSizeGB 1 --replSet configRepl
about to fork child process, waiting until server is ready for connections.
forked process: 476923
child process started successfully, parent exiting

 

$ /home/balaguru/mongodb/mongodb-linux-x86_64-3.2.21-2-g105acca0d4/bin/mongod --dbpath /home/balaguru/mongodb/testshard32/data/configrepl/rs2/db/ --logpath /home/balaguru/mongodb/testshard32/data/configrepl/rs2/mongod.log --port 47091 --fork --configsvr --storageEngine wiredTiger --wiredTigerCacheSizeGB 1 --replSet configRepl
about to fork child process, waiting until server is ready for connections.
forked process: 478193
child process started successfully, parent exiting

 

$ /home/balaguru/mongodb/mongodb-linux-x86_64-3.2.21-2-g105acca0d4/bin/mongod --dbpath /home/balaguru/mongodb/testshard32/data/configrepl/rs3/db/ --logpath /home/balaguru/mongodb/testshard32/data/configrepl/rs3/mongod.log --port 47092 --fork --configsvr --storageEngine wiredTiger --wiredTigerCacheSizeGB 1 --replSet configRepl
about to fork child process, waiting until server is ready for connections.
forked process: 478254
child process started successfully, parent exiting

 

Now restart 47087 with the options --replSet and --configsvrMode, which allow this mirrored config DB to be part of the replicaSet while still running the MMAPv1 engine.

$ /home/balaguru/mongodb/mongodb-linux-x86_64-3.2.21-2-g105acca0d4/bin/mongod --dbpath /home/balaguru/mongodb/testshard32/data/config/db --logpath /home/balaguru/mongodb/testshard32/data/config/mongod.log --port 47087 --fork --configsvr --storageEngine mmapv1 --configsvrMode sccc --replSet configRepl
about to fork child process, waiting until server is ready for connections.
forked process: 477682
child process started successfully, parent exiting

 

Add the new config DB instances to this replicaSet (set priority and votes to 0 to avoid triggering an election):

configRepl:PRIMARY> rs.add({host: "localhost:47090", priority:0, votes:0})
{ "ok" : 1 }

configRepl:PRIMARY> rs.add({host: "localhost:47091", priority:0, votes:0})
{ "ok" : 1 }

configRepl:PRIMARY> rs.add({host: "localhost:47092", priority:0, votes:0})
{ "ok" : 1 }

configRepl:PRIMARY> rs.status()
{
        "set" : "configRepl",
        "date" : ISODate("2021-11-23T08:11:13.065Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "configsvr" : true,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:47087",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 154,
                        "optime" : {
                                "ts" : Timestamp(1637655072, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-11-23T08:11:12Z"),
                        "electionTime" : Timestamp(1637654919, 1),
                        "electionDate" : ISODate("2021-11-23T08:08:39Z"),
                        "configVersion" : 4,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "localhost:47090",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 69,
                        "optime" : {
                                "ts" : Timestamp(1637655069, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-11-23T08:11:09Z"),
                        "lastHeartbeat" : ISODate("2021-11-23T08:11:11.608Z"),
                       "lastHeartbeatRecv" : ISODate("2021-11-23T08:11:11.611Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "localhost:47087",
                        "configVersion" : 4
                },
                {
                        "_id" : 2,
                        "name" : "localhost:47091",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 6,
                        "optime" : {
                                "ts" : Timestamp(1637655069, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-11-23T08:11:09Z"),
                        "lastHeartbeat" : ISODate("2021-11-23T08:11:11.608Z"),
                        "lastHeartbeatRecv" : ISODate("2021-11-23T08:11:09.609Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "localhost:47090",
                        "configVersion" : 4
                },
                {
                        "_id" : 3,
                        "name" : "localhost:47092",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 3,
                        "optime" : {
                                "ts" : Timestamp(1637655069, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-11-23T08:11:09Z"),
                        "lastHeartbeat" : ISODate("2021-11-23T08:11:11.608Z"),
                        "lastHeartbeatRecv" : ISODate("2021-11-23T08:11:11.620Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "localhost:47091",
                        "configVersion" : 4
                }
        ],
        "ok" : 1
}

 

Shut down one of the other mirrored config servers (either 47088 or 47089). If one of the mirrored DBs is down in the middle of the migration, the sharded cluster goes into read-only mode, i.e., there will be a failure when issuing DDL (as shown in the example below). So it is advised to do the activity during downtime to avoid any unexpected errors from the application side.

// try to create new db and collection as follows to test when 47088 is down
mongos> use vinodh  
switched to db vinodh
mongos> db.testCreateColl.insert({id:1})
WriteResult({
        "writeError" : {
                "code" : 13663,
                "errmsg" : "unable to target write op for collection vinodh.testCreateColl :: caused by :: Location13663: exception creating distributed lock vinodh/balaguru-UbuntuPC:47080:1637654366:1487823871 :: caused by :: SyncClusterConnection::update prepare failed:  localhost:47088 (127.0.0.1) failed:9001 socket exception [CONNECT_ERROR] server [couldn't connect to server localhost:47088, connection attempt failed] "
        }
})

 

Once the replicaSet is set, change the priority and votes of the other nodes so they can participate in elections. First fetch the current configuration into cfg:

configRepl:PRIMARY> cfg = rs.conf()
configRepl:PRIMARY> cfg.members[1].host
localhost:47090
configRepl:PRIMARY> cfg.members[1].priority
0
configRepl:PRIMARY> cfg.members[1].priority = 1
1
configRepl:PRIMARY> cfg.members[1].votes 
0
configRepl:PRIMARY> cfg.members[1].votes = 1
1
configRepl:PRIMARY> cfg.members[2].priority = 1
1
configRepl:PRIMARY> cfg.members[3].priority = 1
1
configRepl:PRIMARY> cfg.members[2].votes = 1
1
configRepl:PRIMARY> cfg.members[3].votes = 1
1
configRepl:PRIMARY> rs.reconfig(cfg)
{ "ok" : 1 }

 

Now move the PRIMARY role to one of the new members and make 47087 SECONDARY:

configRepl:PRIMARY> rs.stepDown()
2021-11-23T13:58:37.885+0530 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host 'localhost:47087'  :
DB.prototype.runCommand@src/mongo/shell/db.js:135:1
DB.prototype.adminCommand@src/mongo/shell/db.js:152:1
rs.stepDown@src/mongo/shell/utils.js:1202:12
@(shell):1:1

2021-11-23T13:58:37.887+0530 I NETWORK  [thread1] trying reconnect to localhost:47087 (127.0.0.1) failed
2021-11-23T13:58:37.887+0530 I NETWORK  [thread1] reconnect localhost:47087 (127.0.0.1) ok
configRepl:SECONDARY>

 

Restart the old config member 47087 without the --configsvrMode option. Here 47087 will go into the REMOVED state, which is totally fine, as it still runs the MMAPv1 engine without the configsvrMode option. Then you can restart mongos with a new --configdb option mentioning only the new config servers (--configdb "configRepl/localhost:47090,localhost:47091,localhost:47092"):

$ /home/balaguru/mongodb/mongodb-linux-x86_64-3.2.21-2-g105acca0d4/bin/mongos --logpath /home/balaguru/mongodb/testshard32/data/mongos.log --port 47080 --configdb "configRepl/localhost:47090,localhost:47091,localhost:47092" --fork

 

Now 47087 is finally ready to be removed from the replicaSet:

configRepl:PRIMARY> rs.remove("localhost:47087")
{ "ok" : 1 }

You can then check the shard status and the data through mongos. Once everything looks good, stop the other mirrored config DBs on ports 47088 and 47089, and then re-enable the balancer, as shown below.
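For example, a quick sanity check from mongos followed by re-enabling the balancer (a minimal sketch using the standard shell helpers):

mongos> sh.status()
mongos> sh.setBalancerState(true)
mongos> sh.getBalancerState()
true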

 

Migrate From SCCC to CSRS on the Same Hosts:

This section gives a brief overview of replacing the existing mirrored hosts/ports in place. It shares the same steps as above up to enabling the replicaSet on one of the mirrored config servers. For this test case, the sharded cluster setup runs on PSMDB v3.2.22 with SCCC set up on the following ports:

27100 – mongos
27107, 27108, 27109 – mongoc – config servers with mmapv1 engine
27101, 27102, 27103 – shard01 replicaset
27104, 27105, 27106 – shard02 replicaset

Here we use the instance running on port 27107 to start the migration. As described in the method above (sketched below):

  • Initiate the replicaSet
  • Restart the instance with the --replSet and --configsvrMode options
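A minimal sketch of these two steps, mirroring the first method (<dbpath> and <logpath> are placeholders; adjust to your environment):

$ mongo32 localhost:27107/admin
configsvr> rs.initiate({ _id: "csReplSet", configsvr: true, version: 1, members: [ {_id: 0, host: "localhost:27107"} ] })
{ "ok" : 1 }

$ mongod --dbpath <dbpath> --logpath <logpath> --port 27107 --fork --configsvr --storageEngine mmapv1 --configsvrMode sccc --replSet csReplSet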

The replicaSet status on the 27107 instance:

csReplSet:PRIMARY> rs.status()
{
 "set" : "csReplSet",
 "date" : ISODate("2021-10-07T09:04:20.266Z"),
 "myState" : 1,
 "term" : NumberLong(1),
 "configsvr" : true,
 "heartbeatIntervalMillis" : NumberLong(2000),
 "members" : [
  {
   "_id" : 0,
   "name" : "localhost:27107",
   "health" : 1,
   "state" : 1,
   "stateStr" : "PRIMARY",
   "uptime" : 9,
   "optime" : {
    "ts" : Timestamp(1633597452, 1),
    "t" : NumberLong(1)
   },
   "optimeDate" : ISODate("2021-10-07T09:04:12Z"),
   "infoMessage" : "could not find member to sync from",
   "electionTime" : Timestamp(1633597451, 1),
   "electionDate" : ISODate("2021-10-07T09:04:11Z"),
   "configVersion" : 1,
   "self" : true
  }
 ],
 "ok" : 1
}

 

Then stop the second config instance, 27108, and clear the dbpath to remove the MMAPv1 files. Start it with the replicaSet + WiredTiger engine options like below:

$ rm -rf data/config2/db/*

$ /home/vinodh.krishnaswamy/mongo/mongodb-linux-x86_64-3.2.22/bin/mongod --dbpath /home/vinodh.krishnaswamy/mongo/testshard32/data/config2/db --logpath /home/vinodh.krishnaswamy/mongo/testshard32/data/config2/mongod.log --port 27108 --fork --configsvr  --wiredTigerCacheSizeGB 1  --replSet csReplSet
about to fork child process, waiting until server is ready for connections.
forked process: 42341
child process started successfully, parent exiting

At this point, as said earlier, let me remind you that the sharded cluster goes into read-only mode when an existing mirrored instance is down or not reachable. So don’t be surprised if you get any warnings or DDL errors while in this situation.

Now add 27108 to the replicaSet from the PRIMARY (27107) as follows, with priority: 0 and votes: 0 to prevent this server from taking part in elections:

$ mongo32 localhost:27107
MongoDB shell version: 3.2.22
connecting to: localhost:27107/test
csReplSet:PRIMARY> rs.add({host: "localhost:27108", priority: 0, votes: 0 })
{ "ok" : 1 }

 

Then repeat the same for the third config server, 27109: remove the DB files and restart it with the --replSet and WiredTiger options, followed by adding it to the replicaSet from the PRIMARY:

csReplSet:PRIMARY> rs.add({host: "localhost:27109", priority:0 , votes: 0})
{ "ok" : 1 }

 

Once they are in sync and the replicaSet looks healthy, you can adjust the priority and votes for the added members (27108, 27109) as follows to allow them to participate in elections.

csReplSet:PRIMARY> cfg = rs.conf()
{
 "_id" : "csReplSet",
 "version" : 3,
 "configsvr" : true,
 "protocolVersion" : NumberLong(1),
 "members" : [
  {
   "_id" : 0,
   "host" : "localhost:27107",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 1,
   "tags" : {
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 1
  },
  {
   "_id" : 1,
   "host" : "localhost:27108",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 0,
   "tags" : {
   },
   "slaveDelay" : NumberLong(0),
  "votes" : 0
  },
  {
   "_id" : 2,
   "host" : "localhost:27109",
   "arbiterOnly" : false,
   "buildIndexes" : true,
   "hidden" : false,
   "priority" : 0,
   "tags" : {
   },
   "slaveDelay" : NumberLong(0),
   "votes" : 0
  }
 ],

 "settings" : {
  "chainingAllowed" : true,
  "heartbeatIntervalMillis" : 2000,
  "heartbeatTimeoutSecs" : 10,
  "electionTimeoutMillis" : 10000,
  "getLastErrorModes" : {
  },
  "getLastErrorDefaults" : {
   "w" : 1,
   "wtimeout" : 0
  },
  "replicaSetId" : ObjectId("615eb78fdb1ed4092c166786")
 }
}

csReplSet:PRIMARY> cfg.members[1].priority = 1
1
csReplSet:PRIMARY> cfg.members[2].priority = 1
1
csReplSet:PRIMARY> cfg.members[1].votes = 1
1
csReplSet:PRIMARY> cfg.members[2].votes = 1
1
csReplSet:PRIMARY> cfg.members[2].votes
1
csReplSet:PRIMARY> rs.reconfig(cfg)
{ "ok" : 1 }

 

Then go to the first config server, 27107, and step it down so another member becomes PRIMARY. You can then take it down and change its storage engine to WiredTiger as above, this time without the configsvrMode option. When it joins back, it does an initial sync from its sync source in the replicaSet.

csReplSet:PRIMARY> rs.stepDown()
2021-10-07T05:14:26.617-0400 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host 'localhost:27107'  :
DB.prototype.runCommand@src/mongo/shell/db.js:135:1
DB.prototype.adminCommand@src/mongo/shell/db.js:153:16
rs.stepDown@src/mongo/shell/utils.js:1202:12
@(shell):1:1

2021-10-07T05:14:26.619-0400 I NETWORK  [thread1] trying reconnect to localhost:27107 (127.0.0.1) failed
2021-10-07T05:14:26.619-0400 I NETWORK  [thread1] reconnect localhost:27107 (127.0.0.1) ok
csReplSet:SECONDARY>

 

A reminder: if you restart it while it still runs MMAPv1 but without the --configsvrMode option, it will go into the REMOVED state, so please make note of it. (Here it was purposely started that way, as an example.)

$ mongo32 localhost:27107
MongoDB shell version: 3.2.22
connecting to: localhost:27107/test
csReplSet:REMOVED>

 

Now bring down the DB instance on 27107, clear the DB files, and start it back up with the WiredTiger and replSet options (removing the configsvrMode option this time):

$ rm -rf data/config/db/*

$ /home/vinodh.krishnaswamy/mongo/mongodb-linux-x86_64-3.2.22/bin/mongod --dbpath /home/vinodh.krishnaswamy/mongo/testshard32/data/config/db --logpath /home/vinodh.krishnaswamy/mongo/testshard32/data/config/mongod.log --port 27107 --fork --configsvr --wiredTigerCacheSizeGB 1 --replSet csReplSet
about to fork child process, waiting until server is ready for connections.
forked process: 35202
child process started successfully, parent exiting

 

Then restart the mongos instances with the --configdb option below for the new config to take effect:

mongos <options> --configdb csReplSet/localhost:27107,localhost:27108,localhost:27109

From the mongos logfile, you will see the message below if the config server replicaSet was connected successfully:

2021-10-07T05:48:25.679-0400 I NETWORK  [conn6] Successfully connected to csReplSet/localhost:27107,localhost:27108,localhost:27109 (1 connections now open to csReplSet/localhost:27107,localhost:27108,localhost:27109 with a 0 second timeout)

Conclusion

We at Percona don’t recommend running EOL versions, as they no longer get bug fixes and you cannot use recently added features. I hope this blog helps those who are still looking to migrate from older versions.

Dec
30
2021
--

Percona Monitoring and Management 2 Test Drive Using VirtualBox and SSH Tunnels

PMM using VirtualBox and SSH

Percona Monitoring and Management 2 (PMM2) is the database monitoring suite assembled and developed by Percona. It is based on standard open source components and custom-made software integrations. PMM2 helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

This blog post describes a method to test PMM2 using VirtualBox on your laptop and SSH tunnels, without installing any agents on the database servers. This is a testing and evaluation environment, not intended for production. If you want to perform a full-fledged test of PMM2, we recommend an environment as similar as possible to your final production setup: enterprise virtualization, Docker containers, or AWS. We assume that your laptop doesn’t have direct connectivity to the databases, which is why we use SSH tunnels.

PMM2 architecture consists of 2+1 components:

PMM2 High Level Architecture
Two components run on your infrastructure: PMM Agents and PMM Server. The agents gather the metrics at the database and operating system levels. PMM Server takes care of processing, storing, grouping, and displaying these metrics. It can also perform additional operations like capturing serverless databases metrics, backups, and sending alerts (the last two features are in technical preview as of this writing). The other component to complete the formula is the Percona Platform, which adds more features to PMM, from advisors to DBaaS. Disclaimer: Percona Platform is in preview release with limited functionality – suitable for early adopters, development, and testing. Besides the extended features added to PMM, the Percona Platform brings together distributions of MySQL, PostgreSQL, and MongoDB including a range of open-source tools for data backup, availability, and management. You can learn more about the Platform here.

To make setup easier, PMM2 Server can be run either as a docker container or importing an OVA image, executed using VMWare VSphere, VirtualBox, or any other hypervisor. If you run your infrastructure in AWS, you can deploy PMM from the AWS Marketplace.

To run the agents, you need a Linux box. We recommend running the agents and the database on the same node. PMM can also gather metrics using a direct connection to a serverless database, or to a server running an operating system that does not support the agent.

Often, installing the agents is a stopper for some DBAs who would like to test PMM2. Also, while containers are frequent in large organizations, we find virtualization and containers relegated to development and quality assurance environments. These environments usually don’t have direct access to production databases.

TCP Forwarding Across SSH Connections

AllowTcpForwarding is the ssh daemon configuration option that allows forwarding TCP ports across the ssh connection. At first sight, this may seem a security risk, but as the ssh documentation states: “disabling TCP forwarding does not improve security unless users are also denied shell access, as they can always install their forwarders.”
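For reference, the server-side directive in sshd_config looks like this (it defaults to yes; sshd must be reloaded for a change to take effect):

AllowTcpForwarding yes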

If your system administrators do not allow TCP forwarding, other options available to accomplish the same results are socat or netcat. But we will not cover them here.

If your laptop has direct access to the databases, you can skip all the ssh tunnels and use the direct access method described later in this post.

Install the PMM2 OVA

Download the Open Virtualization Format compatible image from https://www.percona.com/downloads/pmm2/ or use the command line:

$ wget https://www.percona.com/downloads/pmm2/2.25.0/ova/pmm-server-2.25.0.ova

You can import the OVA file using the UI, with the import option from the file menu, or using the command line:

$ VBoxManage import pmm-server-2.25.0.ova --vsys 0 --vmname "PMM Testing"
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interpreting /Users/pep/Downloads/pmm-server-2.25.0.ova...
OK.
Disks:
vmdisk1 42949672960 -1 http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized PMM2-Server-2021-12-13-1012-disk001.vmdk -1 -1
vmdisk2 429496729600 -1 http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized PMM2-Server-2021-12-13-1012-disk002.vmdk -1 -1

Virtual system 0:
0: Suggested OS type: "RedHat_64"
(change with "--vsys 0 --ostype "; use "list ostypes" to list all possible values)
1: VM name specified with --vmname: "PMM Testing"
2: Suggested VM group "/"
(change with "--vsys 0 --group ")
3: Suggested VM settings file name "/Users/Pep/VirtualBox VMs/PMM2-Server-2021-12-13-1012/PMM2-Server-2021-12-13-1012.vbox"
(change with "--vsys 0 --settingsfile ")
4: Suggested VM base folder "/Users/Pep/VirtualBox VMs"
(change with "--vsys 0 --basefolder ")
5: Product (ignored): Percona Monitoring and Management
6: Vendor (ignored): Percona
7: Version (ignored): 2021-12-13
8: ProductUrl (ignored): https://www.percona.com/software/database-tools/percona-monitoring-and-management
9: VendorUrl (ignored): https://www.percona.com
10: Description "Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance"
(change with "--vsys 0 --description ")
11: Number of CPUs: 1
(change with "--vsys 0 --cpus ")
12: Guest memory: 4096 MB
(change with "--vsys 0 --memory ")
13: Network adapter: orig NAT, config 3, extra slot=0;type=NAT
14: SCSI controller, type LsiLogic
(change with "--vsys 0 --unit 14 --scsitype {BusLogic|LsiLogic}";
disable with "--vsys 0 --unit 14 --ignore")
15: Hard disk image: source image=PMM2-Server-2021-12-13-1012-disk001.vmdk, target path=PMM2-Server-2021-12-13-1012-disk001.vmdk, controller=14;channel=0
(change target path with "--vsys 0 --unit 15 --disk path";
disable with "--vsys 0 --unit 15 --ignore")
16: Hard disk image: source image=PMM2-Server-2021-12-13-1012-disk002.vmdk, target path=PMM2-Server-2021-12-13-1012-disk002.vmdk, controller=14;channel=1
(change target path with "--vsys 0 --unit 16 --disk path";
disable with "--vsys 0 --unit 16 --ignore")
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully imported the appliance.

Once the machine is imported, we will connect it to a host-only network. This network restricts network traffic only between the host and the virtual machines. But first, let’s find a suitable network:

$ VBoxManage list hostonlyifs
Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.56.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Wireless: No
Status: Up
VBoxNetworkName: HostInterfaceNetworking-vboxnet0

Select the first one that has Status up, and write down its name and IP address. Then make sure that there is a DHCP server assigned to that interface:

$ VBoxManage list dhcpservers
NetworkName: HostInterfaceNetworking-vboxnet0
Dhcpd IP: 192.168.56.100
LowerIPAddress: 192.168.56.101
UpperIPAddress: 192.168.56.254
NetworkMask: 255.255.255.0
Enabled: Yes
Global Configuration:
minLeaseTime: default
defaultLeaseTime: default
maxLeaseTime: default
Forced options: None
Suppressed opts.: None
1/legacy: 255.255.255.0
Groups: None
Individual Configs: None

Now we will assign two network interfaces to our PMM virtual machine. One is attached to the host-only network, and the other uses NAT to connect to the internet, for example, to check for upgrades.

$ VBoxManage modifyvm "PMM Testing" --nic1 hostonly --hostonlyadapter1 vboxnet0
$ VBoxManage modifyvm "PMM Testing" --nic2 natnetwork

Once networking is configured, we may start the virtual machine.

$ VBoxManage startvm "PMM Testing"

The next step is to retrieve the IP address assigned to our PMM box. First, we will obtain the MAC address of the network card we recently added:

$ VBoxManage showvminfo "PMM Testing" | grep -i vboxnet0
NIC 1: MAC: 08002772600D, Attachment: Host-only Interface 'vboxnet0', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none

Using the retrieved MAC address we can look for the DHCP leases:

$ VBoxManage dhcpserver findlease --interface=vboxnet0 --mac-address=08002772600D
IP Address: 192.168.56.112
MAC Address: 08:00:27:72:60:0d
State: acked
Issued: 2021-12-21T22:15:54Z (1640124954)
Expire: 2021-12-21T22:25:54Z (1640125554)
TTL: 600 sec, currently 444 sec left

This is the IP address we will use to access the PMM server. Open a browser to connect to https://192.168.56.112 with the default credentials: admin/admin.

PMM2 Login window
The following step configures the tunnels to connect to the databases we monitor.

Set Up the SSH Tunnels

This is the topology of our network:

Private network topology
We will open two ssh connections per server we want to access from PMM. Open a terminal session and execute the following command, replacing <user> with the username you normally use to connect to your jump host:

$ ssh -L 192.168.56.1:3306:10.0.0.1:3306 <user>@192.168.55.100

This creates a tunnel that connects the MySQL server port 3306 with the same port on our local internal address. If you want to connect to more than one MySQL instance, you must use different local ports. To open the tunnel for the MongoDB server, use the following command:

$ ssh -L 192.168.56.1:27017:10.0.0.2:27017 <user>@192.168.55.100
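If you need to reach a second MySQL instance, a sketch using a different local port (10.0.0.3 is a hypothetical second database host):

$ ssh -L 192.168.56.1:3307:10.0.0.3:3306 <user>@192.168.55.100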

Test the tunnel connectivity to the MySQL host using netcat:

$ nc -z 192.168.56.1 3306
Connection to 192.168.56.1 port 3306 [tcp/mysql] succeeded!

And also test the connectivity to the MongoDB host:

$ nc -z 192.168.56.1 27017
Connection to 192.168.56.1 port 27017 [tcp/*] succeeded!

This is the topology of our network including the ssh tunnels.
SSH tunnels

Configure Accounts

Follow the PMM documentation and create a MySQL account (or use an already existing account) with the required privileges:

CREATE USER 'pmm'@'10.0.0.100' IDENTIFIED BY '<password>' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'10.0.0.100';

Note that we need to use the internal IP address of the jump host. If you don’t know the IP address, use the wildcard ‘%’ instead, as shown below.
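In that case, the equivalent statements would be (same privileges, any source host):

CREATE USER 'pmm'@'%' IDENTIFIED BY '<password>' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'%';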

Also add the credentials for MongoDB; run this in a mongo session:

db.getSiblingDB("admin").createRole({
    role: "explainRole",
    privileges: [{
        resource: { db: "", collection: "" },
        actions: [
            "listIndexes",
            "listCollections",
            "dbStats",
            "dbHash",
            "collStats",
            "find"
        ]
    }],
    roles: []
})

db.getSiblingDB("admin").createUser({
    user: "pmm_mongodb",
    pwd: "<password>",
    roles: [
        { role: "explainRole", db: "admin" },
        { role: "clusterMonitor", db: "admin" },
        { role: "read", db: "local" }
    ]
})

Add the Services to PMM

We can’t install the agents because we don’t have access to our PMM testing environment from the database servers. Instead, we will configure both services as remote instances. Go to the “Configuration” menu, select “PMM Inventory”, then “Add Instance”, and choose MySQL, Add a remote instance.
Complete the following fields:
Hostname: 192.168.56.1 (This is the internal Host-Only VirtualBox address)
Service name: MySQL8
Port: 3306
Username: pmm
Password: <password>

Then press the button. It will check the connectivity and, if everything is correct, the MySQL service will be added to the inventory. If there is an error, double-check that the ssh connection is still open and that you entered the correct credentials. Also make sure that the host you specified when creating the MySQL user is correct.

We will use a similar process for MongoDB:

These are the fields you have to complete with the correct information:
Hostname: 192.168.56.1 (Again, the internal Host-Only VirtualBox address)
Service name: MongoDB
Port: 27017
Username: pmm_mongodb
Password: <password>

Then press the button. It will check the connectivity and, if everything is correct, the MongoDB service will be added to the inventory. If there is an error, double-check again that the ssh connection is open and that you entered the correct credentials. You can also use the MongoDB client application to check access.

Once you have added both services, you just need to wait a few minutes to give PMM time to collect data, and then you can start testing PMM2.

PMM2 Query Analyzer

Dec
29
2021
--

Q & A on Webinar “MySQL Performance for DevOps”

MySQL Performance for DevOps

First, I want to thank everyone who attended my November 16, 2021 webinar “MySQL Performance for DevOps”. The recording and slides are available on the webinar page.

Here are answers to the questions from participants which I was not able to provide during the webinar.

Q: Hi! We have trouble with DELETE queries. We have to remove some data periodically (hourly or daily), and we have short-term server stalls during these DELETEs. The server is running on modern NVMe drives, so we wonder why we have this situation. Those DELETEs are not so large, around 10,000 to 15,000 records, but the tables on which the DELETEs are performed are updated frequently.

A: I would test whether a similar DELETE statement is slow when you run it on a development server in an isolated environment, while no other session is connected to the MySQL server instance. If it is slow in this case too, check whether MySQL uses indexes to resolve the WHERE condition of the DELETE statement. You can use the EXPLAIN statement for the DELETE, or convert the DELETE into a similar SELECT query and experiment.

If the DELETE statement runs fast when called in the isolated environment, check how parallel sessions affect its performance. If the tables you are deleting from are updated frequently, DELETE statements can cause, and be affected by, locking conflicts. To resolve this situation, study how MySQL works with locks. A great presentation about InnoDB locks, “InnoDB Locking Explained with Stick Figures”, can be found at https://www.slideshare.net/billkarwin/innodb-locking-explained-with-stick-figures. Then you need to optimize the DELETE and UPDATE statements so they finish faster. Alternatively, you can separate them in time so they have less effect on each other. You may also split the DELETE statements so they update fewer records at a time.
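For example, with a hypothetical events table and retention condition, both checks could look like this:

EXPLAIN DELETE FROM events WHERE created_at < NOW() - INTERVAL 1 DAY;
-- or rewritten as a SELECT to experiment with indexes:
EXPLAIN SELECT COUNT(*) FROM events WHERE created_at < NOW() - INTERVAL 1 DAY;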

Q: Question 2. We have innodb_buffer_pool_size set to around 260GB on a dedicated server with about 320GB of total RAM. Still, memory is 99.9% full, and there are no other large memory consumers, only MySQL (Percona Server 8.0.23). The server starts, and within around 3 hours it takes all available memory regardless of the innodb_buffer_pool_size setting. We never had anything like this with 5.7. Do you have any ideas?

A: MySQL uses memory not only for the InnoDB buffer pool but also for other data, such as session-based and operation-based buffers. For example, if you have 100 connections that use underlying temporary tables to resolve queries, and you set the size of the internal temporary table to 100MB, you will use around 10G of additional memory for these tables. Query the memory digest tables in the Performance Schema, and the views on these tables in the sys schema, to find the operations that allocate memory in your MySQL server.
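For example, the sys schema view below (available in MySQL 5.7 and later) lists the top current memory consumers:

SELECT * FROM sys.memory_global_by_current_bytes LIMIT 10;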

Q: Can we get a copy of this presentation?

A: You should have received a copy of the slides. If you did not, they are attached to this blog post: DevOps_Perf_202111

Q: buffer_pool_size should be what percentage of the host RAM?

A: A percentage of the host RAM is a very rough estimate of the ideal amount of memory you need to allocate for the InnoDB buffer pool. For example, the MySQL user manual in the past recommended an InnoDB buffer pool size of up to 80% of the available RAM. But 80% of RAM is very different if the host has, say, 8G or 1024G. In the former case, 80% is 6.4G, and the host will have 1.6G left for other MySQL buffers and the operating system, which may not be enough. In the latter case, 80% is 819.2G, and the host will have 204.8G for other needs; depending on your workload, that could be a huge waste of resources. I recommend you read this blog post: https://www.percona.com/blog/2015/06/02/80-ram-tune-innodb_buffer_pool_size/ and follow the links at the end, then choose the size appropriate for your data set and workload.

Q: How can we fit RAM size to data size?

For example: if I have 1G of data, how much RAM do I need to get 100 QPS, and if I have 100G of data, how much RAM do I need to get 100 QPS?

A: RAM size, dataset size, and the number of queries per second that your server can handle are not directly related. You need to test your queries and follow how they are executed. For example, if you select everything from an InnoDB table holding either 1G or 100G of data, and you do not access any other table on the server, the very first run will be slower than the following ones because InnoDB has to read the data into the buffer pool. After that, performance and the number of queries per second will be limited only by network speed and bandwidth between your client and server, provided you can allocate about 100G for your buffer pool. The cached size will stay almost the same as the table size no matter how many connections you have: your MySQL server will only use a small amount of additional memory for new connections’ buffers.

In another case, however, you may have a comparatively small table that you access with quite a complicated query. For example, if you try to repeat the test case for the still-valid https://bugs.mysql.com/bug.php?id=29423, a single query on the 184M table will run for a much longer time than you expect. In this case, the number of queries per second will also be very low.

Q: Do you have a recommendation parameter list for MySQL RDS on AWS?

A: It is the same as for a dedicated MySQL server, but you may not be able to change some of the options.

Q: If you know you have SSD’s, but ROTA = 1, what has to be configured to make use of the SSDs?

A: For SSD ROTA should be 0. If you are sure you have SSDs but they are shown as rotational disks this means that your storage is configured incorrectly. Depending on the configuration you may still have the same performance as if the disks were recognized properly. If this is not the case, check your storage, RAID controller, and system configuration.

MySQL just sends system commands for reading, writing, and syncing data. It does not care whether the disk is rotational or not. For MySQL performance, the value of ROTA does not really matter.

Q: Suppose you believe you have tuned both master and slave for the best performance, but seconds behind master continues to increase, so you decide to split the shard, but xtrabackup fails with log wrap. Even if you were to get a good backup, once it is online, the slave will never catch up. The Kobayashi Maru, a no-win situation: have you been there? What did you do?

A: First, make sure you have configured a multi-threaded replica. If you use parallel type LOGICAL_CLOCK, study the option binlog_transaction_dependency_tracking, particularly how it works in practice when set to WRITESET or WRITESET_SESSION. To avoid log wrap during the backup, increase the redo log file size. If you can stop the source server, stop it and set up a replica by copying the datadir: it is faster than using XtraBackup, because you would not need to copy the changes in the redo log files while the backup is running.

Q: In MySQL 5.7, the tmp tablespace is now InnoDB. How can you tune tmp to take advantage of RAM and not use disk?

A: The tablespace file on disk is used only when an in-memory temporary table is converted into a disk-based table. Otherwise, temporary tables continue using memory.

Q: What are the top six variables for getting the best performance? How can you verify how effective their settings are by looking at the global status, and how can you know when those variables can be increased to get the best utilization of CPU/RAM/disk/network?

A: While I showed variables that can improve performance in most cases on my “Conclusion” slides, I recommend you start from the issue you are trying to solve and adjust variables only when you understand what you are doing.

Some of these variables can be measured for effectiveness. For example, if the number of free buffers in the output of SHOW ENGINE INNODB STATUS is small, and the buffer pool hit rate shows that the number of disk accesses is consistently greater than the number of buffer pool hits, it indicates that the buffer pool size may be too small for your workload and data.

Regarding CPU, if the number of active threads is high, and you see a performance drop when concurrency increases while the operating system shows low CPU usage, it may be a symptom that either:

– you limited the upper limit of the number of active engine threads, or

– the disk does not support so many parallel operations, and the active threads are waiting for IO.

Another issue with CPU performance can happen if the upper limit of the number of active engine threads is not set or is too high, and threads spend time doing nothing while waiting in the priority queue.

The only option that directly limits IO activity is innodb_io_capacity, which limits the speed of background InnoDB operations. If set too low, InnoDB may underuse your fast disk; if set too high, InnoDB could start writing too fast, so each write request will waste time waiting in its queue.

Q: What was the last InnoDB setting, the one which should be set up to the number of CPU cores?

A: This is innodb_thread_concurrency, which limits the number of InnoDB threads that can run in parallel. You should set it either to 0 or to the number of CPU cores.
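For example, a minimal my.cnf fragment (32 here is an assumption for a 32-core host; 0 means unlimited):

[mysqld]
innodb_thread_concurrency = 32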

Q: Which is more secure and faster: Community MySQL, Percona Server for MySQL, or AWS RDS?

A: Percona Server for MySQL has performance and diagnostic improvements, as well as Enterprise-level features, available as open source. AWS RDS supports hardware scaling on demand and physical replication that uses InnoDB redo log files instead of binary logs. However, it does not give you the same control over the server as your own physical instance. Community MySQL works on a higher number of platforms and thus uses function calls that work on all of them, where Percona Server or AWS RDS may use optimized variants. So each of them has its own advantages and disadvantages.

Q: In a case where open tables >>> open files (and you cannot change “open files”), how should you set table_open_cache? “As big as possible”?

A: The status variable Open_files is “the number of files that are open. This count includes regular files opened by the server. It does not include other types of files such as sockets or pipes. Also, the count does not include files that storage engines open using their own internal functions rather than asking the server level to do so.” (https://dev.mysql.com/doc/refman/8.0/en/server-status-variables.html#statvar_Open_files) The status variable Open_tables is “the number of tables that are open”. They are not related to each other. You need to watch that the value of Opened_tables (“the number of tables that have been opened”) does not keep growing much beyond Open_tables.

There is an operating system limit, “open files”, that is visible if you run the command ulimit -n. This limit should be greater than the maximum number of files that your MySQL instance can open simultaneously. Speaking about Open_tables: you cannot have this value set to a number that is larger than the operating system “open files” limit unless your tables are stored in a shared or general tablespace.

Q: How can we tell if we should tune join_buffer_size? Are there wait events anywhere?

A: Tune it if you use JOIN queries that do not use indexes and they perform slowly because of this. Start with regular query tuning using the slow query log, Performance Schema, and Query Analyzer in PMM to find the queries that require optimization. In Query Analyzer, add the “Full Join” column to your query list. In the Performance Schema, search for statements where the value of SELECT_FULL_JOIN is greater than 0 in the events_statements_* tables.
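For example, a sketch of that Performance Schema search using the statement digest summary table:

SELECT DIGEST_TEXT, SUM_SELECT_FULL_JOIN
FROM performance_schema.events_statements_summary_by_digest
WHERE SUM_SELECT_FULL_JOIN > 0
ORDER BY SUM_SELECT_FULL_JOIN DESC
LIMIT 10;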

Check also my “Introduction to MySQL Query Tuning for Dev[Op]s” webinar.

Q: How can we measure the memory consumption of table_open_cache? Is it 15K per table? FRM-related? Is there some way to estimate it?

A: This is the event “memory/sql/TABLE_SHARE::mem_root”. Check also this blog post.
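For example, a sketch that checks the current allocation for that event (memory instrumentation must be enabled in the Performance Schema):

SELECT EVENT_NAME, CURRENT_NUMBER_OF_BYTES_USED
FROM performance_schema.memory_summary_global_by_event_name
WHERE EVENT_NAME = 'memory/sql/TABLE_SHARE::mem_root';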

Q: Hello guys!

Do we need to prepare different optimizations depending on the MySQL engine, e.g., XtraDB vs. InnoDB? If yes, could you please explain the differences?

Best regards,

Oleg Stelmach

A: XtraDB is an enhanced version of InnoDB in Percona Server: https://www.percona.com/doc/percona-server/8.0/percona_xtradb.html. So the differences are the added features in Percona Server, namely the options that exist in Percona Server and do not exist in upstream Community MySQL.

Q: Regarding threads: is it better to use hyperthreading/multithreading for a MySQL instance, or do we need to turn this function off?

Best regards,

Oleg Stelmach

A: You do not have to turn this option off, but you may see that MySQL performance is not linearly predictable in highly concurrent workloads. I recommend you check this blog post with hyperthreading benchmarks on MySQL, and the comments on it, for a better understanding of how hyperthreading can affect MySQL performance.

Q: Besides setting OS swappiness correctly, would you also recommend enabling memlock in my.cnf?

A: Normally you do not need it.

Dec
27
2021
--

Talking Drupal #327 – Layout Builder vs Paragraphs

Today we are talking about Layout Builder vs Paragraphs.

TalkingDrupal.com/327

Topics

  • Nic – Family visiting
  • Stephen – Behat
  • John – Vacation time
  • Why choose Layout Builder
    • In core
    • UI
  • Josh Miller Q1 – Content model concerns
  • Why choose paragraphs
    • Known
    • Structured content
  • Why not choose layout builder
    • Core integrations
    • Xlation
    • Content split
    • Needs deep core patches
    • Needs UI customization
  • Why not choose paragraphs
    • Performance
    • Hard to use UI
  • Why choose – use both
  • Josh Miller Q2 – Risks about moving into and out of each platform
  • Josh Miller Q3 – Templating concerns

Resources

Hosts

  • Nic Laflin – www.nLighteneddevelopment.com – @nicxvan
  • John Picozzi – www.epam.com – @johnpicozzi
  • Stephen Cross – @stephencross

MOTW

Linkit Linkit provides an easy interface for internal and external linking with wysiwyg editors by using an autocomplete field. Linkit has by default support for nodes, users, taxonomy terms, files, comments and basic support for all types of entities.

Dec
27
2021
--

Backup Performance Comparison: mysqldump vs MySQL Shell Utilities vs mydumper vs mysqlpump vs XtraBackup

MySQL Backup Performance Comparison

In this blog post, we will compare the performance of taking a backup of a MySQL database using mysqldump, the MySQL Shell feature called Instance Dump, mysqlpump, mydumper, and Percona XtraBackup. All these options are open source and free to use for the entire community.

To start, let’s see the results of the test.

Benchmark Results

The benchmark was run on an m5dn.8xlarge instance with 128GB RAM, 32 vCPUs, and 2x NVMe disks of 600GB (one for the backup and the other for MySQL data). The MySQL version was 8.0.26, configured with 89GB of buffer pool, 20GB of redo log, and a sample database of 177GB (more details below).

We can observe the results in the chart below:

MySQL Backup Results

And if we analyze the chart only for the multi-threaded options:

multi-threaded options

As we can see, for each tool I ran the command three times in order to experiment with 16, 32, and 64 threads. The exception to this is mysqldump, which does not have a parallel option and only runs in single-threaded mode.

We can observe interesting outcomes:

  1. When using zstd compression, mydumper really shines in terms of performance. This option was added not long ago (MyDumper 0.11.3).
  2. When mydumper is using gzip, MySQL Shell is the fastest backup option.
  3. In third place, we have Percona XtraBackup.
  4. mysqlpump is the fourth fastest, followed closely by mydumper when using gzip.
  5. mysqldump is the classic old-school way to perform dumps and is the slowest of the tools tested.
  6. In a server with more CPUs, the potential parallelism increases, giving even more advantage to the tools that can benefit from multiple threads.

Hardware and Software Specs

These are the specs of the benchmark:

  • 32 CPUs
  • 128GB Memory
  • 2x NVMe disks 600 GB
  • Centos 7.9
  • MySQL 8.0.26
  • MySQL shell 8.0.26
  • mydumper 0.11.5 – gzip
  • mydumper 0.11.5 – zstd
  • Xtrabackup 8.0.26

The my.cnf configuration:

[mysqld]
innodb_buffer_pool_size = 89G
innodb_log_file_size = 10G

Performance Test

For the test, I used sysbench to populate MySQL. To load the data, I choose the tpcc method:

$ ./tpcc.lua  --mysql-user=sysbench --mysql-password='sysbench' --mysql-db=percona --time=300 --threads=64 --report-interval=1 --tables=10 --scale=100 --db-driver=mysql prepare

Before starting the comparison, I ran mysqldump once and discarded the results to warm up the cache; otherwise, our test would be biased because the first backup would have to fetch data from disk rather than from the cache.

With everything set, I started the mysqldump with the following options:

$ time mysqldump --all-databases --max-allowed-packet=4294967295 --single-transaction -R --master-data=2 --flush-logs | gzip > /backup/dump.dmp.gz

For the Shell utility:

$ mysqlsh
MySQL JS > shell.connect('root@localhost:3306');
MySQL localhost:3306 ssl test JS > util.dumpInstance("/backup", {ocimds: true, compatibility: ["strip_restricted_grants","ignore_missing_pks"],threads: 16})

For mydumper:

$ time mydumper  --threads=16  --trx-consistency-only --events --routines --triggers --compress --outputdir /backup/ --logfile /backup/log.out --verbose=2

PS: To use zstd, there are no changes in the command line, but you need to download the zstd binaries.

For mysqlpump:

$ time mysqlpump --default-parallelism=16 --all-databases > backup.out

For xtrabackup:

$ time xtrabackup --backup --parallel=16 --compress --compress-threads=16 --datadir=/mysql_data/ --target-dir=/backup/
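A side note beyond the benchmark: unlike the logical dumps, a compressed XtraBackup copy has to be decompressed and prepared before it can be restored, along these lines (a sketch; qpress is required for the default quicklz compression in 8.0.26):

$ xtrabackup --decompress --parallel=16 --target-dir=/backup/
$ xtrabackup --prepare --target-dir=/backup/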

Analyzing the Results

And what do the results tell us?

The parallel methods have similar performance throughput. The mydumper tool cut execution time by 50% when using zstd instead of gzip, so the compression method makes a big difference for mydumper.

For the util.dumpInstance utility, one advantage is that the tool stores data in both binary and text format and uses zstd compression by default. Like mydumper, it uses multiple files to store the data and has a good compression ratio. 

XtraBackup got third place with a few seconds of difference from MySQL shell. The main advantage of XtraBackup is its flexibility, providing PITR and encryption for example. 

Next, mysqlpump is more efficient than mydumper with gzip, but only by a small margin. Both are logical backup methods and work in the same way. I tested mysqlpump with zstd compression, but the results were the same, hence the reason I didn’t add it to the chart. One possible explanation is that mysqlpump streams the data to a single file.

Lastly, for mysqldump, we can say that it has the most predictable behavior, with similar execution times across different runs. The lack of parallelism and built-in compression is a disadvantage for mysqldump; however, since it has been present since the earliest MySQL versions, based on Percona support cases, it is still widely used as a logical backup method.

Please leave a comment below with what you thought about this blog post, whether I missed something, or whether it helped you. I will be glad to discuss it!

Useful Resources

Finally, you can reach us through the social networks, our forum, or access our material using the links presented below:

Dec
27
2021
--

At What Age Can Babies Use A Bumbo Chair?

The Bumbo chair can be used from around 4 months old (when your baby has good head control and can sit unassisted) until they can climb out of it (usually about 12 months old). It should not be an alternative to an infant seat or bouncer. The chair helps support your baby in a seated position, which can help with their development.

However, it is essential to remember that the age recommendation is just a recommendation. There is no definitive answer as different babies will reach the developmental milestones necessary to use a bumbo chair at different ages. However, most babies can use a bumbo chair safely by around 4 months old.

Remember that it is always important to consult with your pediatrician before giving your baby any new type of seat.

Can older babies use bumbo chairs? (8 months and up)

Yes, if your child is over eight months and meets the guidelines above, they can still use a bumbo chair. Many parents report that their children enjoy using the bumbo well into the second year.

What is a Bumbo chair?

A Bumbo chair, also known as a bumbo seat, is a molded plastic chair with a round, flat bottom. There is a three-point harness to secure the baby, and the child has no control over the direction they are facing. The Bumbo chair can be used on any flat surface. The manufacturer also makes other products designed for use with the bumbo, such as playpens, activity gyms, bath seats, and stools.

However, there are concerns that a baby seated in a bumbo is not visible to those around them, which has led to several cases of accidental suffocation.

The American Academy of Pediatrics (AAP) recommends against using the bumbo as it “poses a risk for serious head injuries and ejection from the chair” as well as other problems such as aspiration. The AAP has also stated that the bumbo chair “should not be used for routine feeding.”

How to use a bumbo chair safely

Despite the risks associated with using the bumbo chair, many parents find it helpful in supporting their baby in a seated position. If you do choose to use a bumbo chair, there are steps parents can take to use the bumbo chair safely.

  • First, be sure that your baby can sit up unassisted before placing them in the bumbo seat.
  • The bumbo should not be used on an elevated surface.
  • Always use the harness when your baby is seated in the bumbo.
  • The bumbo should not be used for longer than 30 minutes at a time.

If you are concerned about your baby’s safety, remember that it is essential to contact your pediatrician. It is also helpful to talk with other parents who have used the chair before, as they may offer additional advice based on their personal experiences.

Bumbo chair alternatives

If you are not comfortable using a bumbo chair, other options are available to help your baby sit up. You can try an infant seat or bouncer, which will provide more support for your baby.

Fisher-Price Sit-Me-Up Floor Seat

This Fisher-Price floor seat is an excellent alternative to the Bumbo chair. Invite your child to sit up and play in a comfortable, sturdy seat that gives them a view of the world around them! The two linked toys, a turtle spinner with a mirrored side and entertaining clackers, keep your cuddle bug occupied. And the comfy seat cushion is detachable and machine-washable, making cleanup simple!

The Sit-Me-Up Floor Seat’s supportive upright seat and broad, stable base allow your child to observe and interact with the world around them, providing comfort and security.

The lightweight and ultra-compact design of this foldable baby chair makes it a breeze to transport your loved one along with you to grandma’s house or wherever you’re going.

It can be used for babies up to 25 pounds or until they can sit up on their own without assistance.

Summer Learn-to-Sit Stages 3-Position Floor Seat

This Summer 3-position floor seat is a great alternative to the Bumbo chair. It is designed to help your baby learn to sit up on their own, and it can be used for babies up to 40 pounds. The Summer Learn-to-Sit Stages 3-Position Floor Seat has three different height levels, so it can grow with your child as they gain new skills.

  1. The first position is lower to the ground and is intended for babies with complete neck control.
  2. When your baby learns to sit up, transition them to the second posture, which will assist in developing balance.
  3. The highest position is for babies who begin to sit up.

The Learn-to-Sit Stages 3-position floor seat features detachable toys and a meal tray to provide everything you need. This chair also folds completely flat, making it ideal for storage and travel.

This baby chair is extra-safe and easy to use. It supports up to 25 pounds so that you can use it for a long time! It also comes with a comfortable cushion that makes this perfect for playtime, nap time, or anytime.

The ergonomic design of this sturdy baby floor seat distributes your child’s weight evenly, making it more comfortable for them to sit up. And the broad base prevents your little one from tipping over.

Fisher-Price Healthy Care Deluxe Booster Seat

This Fisher-Price healthy care booster seat is perfect for when your baby begins to sit up on their own all the way to toddlerhood. The Healthy Care Booster Seat supports babies up to 50 pounds and helps them develop the skills they need to transition to a regular chair.

It has a three-point harness for safety, a removable tray with two cup holders, and a snap-on lid that doubles as a serving tray. The Booster Seat folds up for easy storage and transport, and it wipes clean easily.

The easy cleaning is what makes this kids’ chair special. There are no nooks or crannies to retain crumbs, so you can keep your baby’s space clean. The feeding tray can be cleaned in the dishwasher to reduce germs.

This Fisher-Price kids’ chair is durable enough to last through long family gatherings. It can be used for up to six years, so it can grow with your child. The lightweight design makes it easy to move from one place to another.

Infantino 3-in-1 Booster Seat

This Infantino 3-in-1 booster seat is perfect for your growing child. It can be used in 3 different ways:

  • as a seated positioner
  • as a feeding booster
  • as a table booster seat

The Infantino 3-in-1 Booster Seat is designed to support babies up to 33 pounds so that it can be used for a long time.

The Infantino Grow-With-Me Discovery Seat and Booster is a three-in-one product that provides an interactive play area, snack time seat, and secure booster with harnesses.

Babies sit and play with toys: spinners, mirror play, and silly squeakers, encouraging sensory development. When it’s time for a snack, just remove the detachable toy pods to reveal a snack tray with a cup holder. Use the security straps to attach the booster seat to a kitchen chair as your baby grows.

Skip Hop 2-in-1 Sit-up Activity Baby Chair

This Skip Hop 2-in-1 Sit-up Activity Baby Chair is perfect for when your baby begins to sit up on their own. The Skip Hop 2-in-1 Sit-up Activity Baby Chair has a soft, comfortable seat with an adjustable harness to keep your child safe. It also comes with a removable tray with a built-in cup holder and an activity center with various activities to keep your baby entertained.

The 2-in-1 design makes this baby chair super convenient. Parents can alternate between a multipurpose tray (perfect for snacks!) and three interesting toys, depending on whether it is time for a meal or play.

The sturdy, wide-leg foundation provides stability, while the soft fabric seat cover makes the Skip Hop chair comfy. The seat cover may be removed for simple machine washing.

Ingenuity Baby Base 2-in-1 Booster Feeding and Floor Seat with Self-Storing Tray

This product shows how quickly a playtime chair can transform into a snack-time seat!

The Baby Base securely clamps onto dining chairs, allowing your child to sit higher and see more during family dinners. Cute in design, this tiny baby feeding chair is lightweight and portable. It’s a fantastic option for at-home or on-the-go dining with your child!

The detachable tray allows your youngster to have their own perfectly sized table, so they can easily reach food or toys. The tray conveniently folds up underneath the booster seat when not in use. The tray is also dishwasher safe, making mealtimes a pleasure. Wipe the base and foam insert clean with a damp cloth and mild soap to remove dust.

The Baby Base may be used once your baby can hold their head up independently. The safety strap fastens securely to your dining chair, and the 3-point harness keeps the baby safe.

Remove the foam seat insert as your baby gets older and use the chair as a toddler seat. Suitable for 6 to 36 months.

Upseat Baby Chair Booster Seat with Tray

Upseat Baby Chair Booster Seat was designed by physical therapists with developing babies in mind and targets the muscles responsible for good upright posture while also being safe for a child’s developing hips.

Upseat baby seats allow room for chubby legs and keep babies’ hips in a safe and correct posture. This chair is recommended for kids who can sit on their own, typically from around 3 months until 12 months. The Upseat Baby Chair Booster Seat holds up to 30 pounds.

This baby chair can be used in 3 ways:

  • as a floor seat
  • as a booster seat
  • as a feeding chair

The Upseat baby chair booster seat comes with a detachable tray, so it is super easy to clean up after mealtime.

The post At What Age Can Babies Use A Bumbo Chair? appeared first on Comfy Bummy.

Dec
22
2021
--

Baby Shark Potty – The Secret To Successful Potty Training

There are many ways to toilet train your child. Some might say that modern technology has made this task easier for parents today than ever before, while others may argue it has only complicated the process even further. There is no denying, though, that potty training is a serious challenge for children and parents alike, regardless of how far technology has come to change it.

However, one of the best methods for potty training is also a really old-fashioned technique: The Potty Dance. It’s usually considered comical and silly, but you can actually use this dance to your advantage when toilet training a child. What happens if you connect The Potty Dance with Baby Shark?

Baby Shark Potty

If you’re looking for a potty training method that is fun and will keep your child engaged, then the Baby Shark Potty may be just what you need! This unique potty training tool is based on the popular children’s song, “Baby Shark.”

The Baby Shark Potty is an award-winning potty training seat that uses the familiar “Baby Shark” song to encourage your child to use the potty. Children exposed to this unique method become excited about using their baby shark potty seat. They enjoy hearing music play while using the bathroom, and parents find success in their potty training efforts.

Baby Shark Potty – Features

  • comfortable
  • fun design
  • realistic flushing and cheering sounds
  • easy to use
  • easy to clean lift-out pot
  • reward chart and stickers included

The secret to the Baby Shark Potty’s success is its fun and engaging design. The image of Baby Shark supports the connection between your child and the goal of using their potty. The music is a bonus that makes the experience more fun and ensures they stay on task while using their baby shark potty.

The Baby Shark Potty is also made from durable materials that can withstand vigorous use. It is easy to clean and can be used with any standard toilet. Plus, it’s comfortable for your child to sit on, making it an excellent choice for potty training toddlers.

Together with a Baby Shark Potty, you will receive a reward chart and stickers to celebrate each victory of your child using their potty. This way, you can maintain their enthusiasm and encourage them to keep up the good work. Those who use a reward chart find a lot of success in their potty training efforts.

Baby Shark Potty is available now on Amazon. Get your child excited about using the potty with this fun and engaging potty training seat!

Baby Shark Potty – a perfect gift?

The Baby Shark Potty is a great gift to welcome your child into the world. It is a great shower or birthday gift that will bring excitement and joy into your child’s life. It can be used from toddlers to young children, which means it is an investment in their future potty training success.

The Baby Shark Potty has been reviewed on popular sites like Huffington Post, The Giggle Guide, and The Daily Dot. You can be sure that this potty training tool is one of the best methods available to help your child succeed in their potty training goals.

Baby Shark Potty Training

As funny as it might sound, the dance can bring about a breakthrough in toilet training your child. In most cases, parents find that they have to repeat the Potty Dance for months on end before their child finally gets it. Every time you perform this dance, you let your child know that they need to go to the bathroom and that they are expected to follow suit.

The Baby Shark Potty Dance is a great place to start if you are looking for a potty training method that is both fun and effective. It is based on the Baby Shark song by Pinkfong, which has become a global phenomenon with more than 2 billion views on YouTube.

The Baby Shark Potty Dance is simple, but it gets the job done. The dance consists of three simple steps that are repeated over and over again:

  1. Sing the Baby Shark song
  2. Do the Baby Shark Dance
  3. Reward your child with a sticker, a toy, or a special treat

You need to keep this dance routine going for at least a few weeks before you see any visible results, though, so this method is not exactly a quick fix. The key is to be consistent with this routine and keep things fun for your child. If you do, then toilet training should be a breeze for everyone.

The post Baby Shark Potty – The Secret To Successful Potty Training appeared first on Comfy Bummy.

Dec
21
2021
--

Quick Guide on Azure Blob Storage Support for Percona Distribution for MongoDB Operator


If you have ever used backups with Percona Distribution for MongoDB Operator, you should already know that backed-up data is stored outside the Kubernetes cluster – on Amazon S3 or any S3-compatible storage. Storage types not compatible with the S3 protocol were supported only indirectly, through an existing S3 wrapper/gateway. A good example of such a solution is running MinIO Gateway on Azure Kubernetes Service to store backups on Azure Blob Storage.

Starting with Operator version 1.11, it is now possible to use Azure Blob Storage for backups directly:

Backups on Azure Blob Storage

The following steps will allow you to configure it.

1. Get Azure Blob Storage Credentials

As with most other S3-compatible storage types, the first thing to do is to obtain credentials the Operator will use to access Azure Blob storage.

If you are new to Azure, the official Azure tutorials will help you configure your storage.

When you have a container to store your backups, getting the credentials to access it involves the following steps:

  1. Go to your storage account settings,
  2. Open the “Access keys” section,
  3. Copy and save both the account name and account key as shown in the screenshot below:

Azure credentials
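
If you prefer the command line, the same values can be retrieved with the Azure CLI. This is a minimal sketch, assuming the CLI is installed and you are logged in; the resource group and account names are placeholders:

$ az storage account keys list \
    --resource-group <your-resource-group> \
    --account-name <your-storage-account-name> \
    --query "[0].value" --output tsv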

2. Create a Secret with Credentials

The Operator will use a Kubernetes Secrets Object to obtain the needed credentials. Create this Secret with credentials using the following command:

$ kubectl create secret generic azure-secret \
 --from-literal=AZURE_STORAGE_ACCOUNT_NAME=<your-storage-account-name> \
 --from-literal=AZURE_STORAGE_ACCOUNT_KEY=<your-storage-key>
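
To double-check that the Secret holds what you expect, you can decode it back (base64 -d is the GNU coreutils flag; on macOS use base64 -D):

$ kubectl get secret azure-secret \
    -o jsonpath='{.data.AZURE_STORAGE_ACCOUNT_NAME}' | base64 -d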

3. Set Up the spec.backup Section in Your deploy/cr.yaml File

As usual, backups are configured via the section of the same name in the deploy/cr.yaml configuration file.

Make sure that backups are enabled (the backup.enabled key set to true), and add the following lines to the backup.storages subsection (use the proper name of your container):

azure-blob:
  type: azure
  azure:
    container: <your-container-name>
    prefix: psmdb
    credentialsSecret: azure-secret

If you want to schedule a regular backup, add the following lines to the backup.tasks subsection:

tasks:
  - name: weekly
    enabled: true
    schedule: "0 0 * * 0"
    compressionType: gzip
    storageName: azure-blob

The backup schedule is specified in crontab format (the above example runs backups at 00:00 on Sunday). If you are not familiar with cron schedule expressions, you can use one of the many online crontab generators.
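
For reference, here is how the five crontab fields map onto the schedule used above:

# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
  0 0 * * 0    # 00:00 every Sunday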

The full backup section in our example will look like this:

backup:
  enabled: true
  restartOnFailure: true
  image: percona/percona-server-mongodb-operator:1.10.0-backup
  storages:
    azure-blob:
      type: azure
      azure:
        container: <your container name>
        prefix: psmdb
        credentialsSecret: azure-secret
  tasks:
    - name: weekly
      enabled: true
      schedule: "0 0 * * 0"
      compressionType: gzip
      storageName: azure-blob

You can find more information on backup options in the official backups documentation and the backups section of the Custom Resource options reference.
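
Besides scheduled backups, an on-demand backup to the same storage can be requested by applying a PerconaServerMongoDBBackup resource. The sketch below follows the documented on-demand backup flow; the resource and cluster names are placeholders, and field names can differ between Operator versions, so verify them against the reference for your release:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup1
spec:
  psmdbCluster: my-cluster-name
  storageName: azure-blob

Once applied with kubectl apply -f backup1.yaml, the backup status can be followed with kubectl get psmdb-backup.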

Percona Distribution for MongoDB is a freely available MongoDB database alternative, giving you a single solution that combines the best and most important enterprise components from the open source community, designed and tested to work together.

Download Percona Distribution for MongoDB Today!

Dec
21
2021
--

Data at Rest Encryption Support in Percona Distribution for MongoDB Operator

As we all know, security is very important these days, and we regularly read about data leaks. Security has many aspects, but one of the most important is securing data since it is a vital asset to companies. When we speak about data, it can be encrypted at rest (transparent data encryption – TDE, full disk encryption, column/field-level encryption) or in transit (TLS).

In this blog post, we will concentrate on data at rest encryption (specifically TDE): how it is currently supported in Percona Distribution for MongoDB Operator, what the limitations are, and which features are coming in upcoming releases.

TDE basically means that any data which is not actively used is encrypted at the storage engine level (WiredTiger in this case); this does not include logs or data in transit for replication.

TDE in Percona Distribution for MongoDB Operator

TDE in Operator is based on options that Percona Server for MongoDB (PSMDB) supports and which were developed to be mostly the same or similar as in MongoDB Enterprise edition. The differences are that PSMDB doesn’t support KMIP or Amazon AWS key management services, but instead offers the ability to store the master key inside HashiCorp Vault.

The Operator currently doesn’t support storing keys in HashiCorp Vault as PSMDB does; instead, the master key is stored in a Kubernetes Secret and mounted locally in the database pods. I will say more about this in the limitations section and future plans below.

There are only a few options for data at rest encryption in the Operator, and the defaults are:

  • security.enableEncryption: true
  • security.encryptionCipherMode: AES256-CBC
  • security.encryptionKeySecret: optional (needs to be a 32-character string encoded in base64)

You can read about these options in the Operator documentation, but as you can see, encryption is enabled by default in the Operator, and if you don’t specify a custom security.encryptionKeySecret, the Operator will create one for you.
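
For illustration, a custom key could be generated and stored like this. This is a minimal sketch: the Secret name is a placeholder, the data key name encryptionKey is an assumption (check the Operator documentation for the exact format expected by security.encryptionKeySecret), and openssl rand -base64 24 yields exactly 32 base64 characters:

$ kubectl create secret generic my-mongodb-encryption-key \
    --from-literal=encryptionKey=$(openssl rand -base64 24)  # data key name is an assumption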

Limitations

Dynamically Enabling/Disabling

We cannot simply change the option to enable or disable encryption dynamically. If we try to do that, the Operator will try to restart the MongoDB pods with the new options, and the pods will fail to start. The reason is that MongoDB would start with the new option against the old data directory and would not be able to read or write the data.

One way this can be overcome is by creating a logical backup and then restoring it on a cluster that has the desired option enabled. The second option is to create a second cluster with the desired option, set up cross-cluster replication, and then switch the main cluster to the new one.

Key Rotation

One of the most important aspects of encryption is the periodic rotation of keys, but at the moment this is not easy with the Operator. It is basically the same issue as above: if we try to just update the secret and restart the cluster, the effect will be the same – MongoDB will not be able to read/write the data.

It can be overcome with the same workarounds as above, but it will become really easy with the ability to store the keys in Vault. If you are interested in this functionality, you can track the progress in this Jira ticket.

Storing the Key in the Vault

Currently, the Operator supports only storing the master key as a secret which is presented to PSMDB as a local file. This is not a recommended setup for production and is very limiting.

PSMDB has integration for storing the keys in HashiCorp Vault key management, which is much more secure and also adds the ability to rotate the master key. Basically, how it works is that PSMDB is restarted with the option “rotateMasterKey: true”, and then it generates a new key in Vault and re-encrypts the per-database encryption keys (the data itself is not re-encrypted).
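
For context, this is roughly what the Vault-related configuration looks like on a standalone PSMDB node. A sketch based on the security.vault.* options from the PSMDB documentation; the server address, token file, and secret path are placeholders:

security:
  enableEncryption: true
  vault:
    serverName: vault.example.com   # placeholder Vault address
    port: 8200
    tokenFile: /etc/mongodb/vault-token
    secret: secret/data/psmdb/my-cluster   # placeholder secret path
    rotateMasterKey: false   # set to true for a one-time master key rotation at restart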

Support for this is definitely on the roadmap, and it will be a huge deal for data at rest encryption support in the Operator, so stay tuned for upcoming changes. The request for implementing support for HashiCorp Vault integration can be tracked here.

Conclusion

As you can see, data at rest encryption in our Kubernetes Operator is supported, but currently only at the most basic level. Our Percona Distribution for MySQL Operator already supports integration with HashiCorp Vault, and since we like to keep feature parity between our different operators, this functionality will soon be available in our MongoDB operator as well.

The Percona Kubernetes Operators automate the creation, alteration, or deletion of members in your Percona Distribution for MySQL, MongoDB, or PostgreSQL environment.

Learn More About Percona Kubernetes Operators
