Nov
30
2022
--

Upload Ongoing MyDumper Backups to S3


If you are using MyDumper as your logical backup solution and you store your backups on S3, you need to take a local backup and then upload it to S3. But what if there is not enough space to hold the backup on the server where you are taking it? And even if there is enough disk space, you have to wait until the dump finishes before starting to upload the files, making the whole process longer.

MyDumper implemented stream backup in v0.11.3 and we have been polishing the code since then. We also implemented two ways of executing external commands:

--exec-per-thread: The worker that is getting the data from the database writes and redirects to the STDIN of the external command. It is similar to executing cat FILE | command for every file that is written and closed.

--exec: In this case, the worker writes to local storage and, when the file is closed, the filename is enqueued. The exec threads pop filenames from the queue and execute the command on them. FILENAME is a reserved word that is replaced in the command; for instance, --exec='/usr/bin/ls -l FILENAME' will execute ls -l on every single file. The command must be an absolute path.

Both implementations have different use cases, pros, and cons. We are going to use --exec, as the current --exec-per-thread implementation doesn’t allow us to dynamically change the command with the filename, which changes on each iteration.

Execution

For this example I created a table named myd_test.mydumper2S3 with millions of rows. You also need to configure a valid AWS account, install the AWS CLI, and have a bucket.
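If you don’t have those prerequisites in place yet, a minimal sketch of the setup could look like the following (the bucket name and region are the ones used in this post; yours will differ, and bucket names must be globally unique):

aws configure                                  # enter your access key, secret key, and default region
aws s3 mb s3://davidducos --region us-east-1   # create the bucket that will receive the backups
aws s3 ls                                      # quick sanity check that credentials and bucket work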

As I stated before, there are two ways of uploading the files; the main difference is the number of executions of the AWS command, or the number of threads that you want to use. A stream will be only one process, while --exec can control the number of threads or executions with --exec-threads.

With stream

This might be the simplest way if you are familiar with piping your commands. In the example you will find the table name, the split-by-rows value, the path where the temporary files will reside, and finally the --stream option:

mydumper -T myd_test.mydumper2S3 -r 20000 \
  -o data --stream | aws s3 cp - s3://davidducos/mydumper_backup.sql --region us-east-1

On the AWS CLI command, we specify the S3 service and the cp command; the - means that it will read from STDIN, and then we give the location of the single file (s3://davidducos/mydumper_backup.sql) that is going to be uploaded.

In the log, you will see entries like this:

…
2022-11-13 21:18:09 [INFO] - Releasing FTWR lock
2022-11-13 21:18:09 [INFO] - Releasing binlog lock
2022-11-13 21:18:09 [INFO] - File data/myd_test-schema-create.sql transferred | Global: 0 MB/s
2022-11-13 21:18:09 [INFO] - File data/myd_test.mydumper2S3-schema.sql transferred | Global: 0 MB/s
2022-11-13 21:18:09 [INFO] - Thread 1 dumping data for `myd_test`.`mydumper2S3`  WHERE `id` IS NULL OR `id` = 1 OR( 1 < `id` AND `id` <= 2001)       into data/myd_test.mydumper2S3.00000.sql| Remaining jobs: -3
………
2022-11-13 21:18:10 [INFO] - Thread 4 dumping data for `myd_test`.`mydumper2S3`  WHERE ( 1740198 < `id` AND `id` <= 1760198)       into data/myd_test.mydumper2S3.00009.sql| Remaining jobs: 0
2022-11-13 21:18:10 [INFO] - File data/myd_test.mydumper2S3.00002.sql transferred | Global: 27 MB/s
2022-11-13 21:18:10 [INFO] - Thread 1 dumping data for `myd_test`.`mydumper2S3`  WHERE ( 2283598 < `id` AND `id` <= 2303598)       into data/myd_test.mydumper2S3.00003.sql| Remaining jobs: 0
………
2022-11-13 21:18:10 [INFO] - Thread 3 dumping data for `myd_test`.`mydumper2S3`  WHERE ( 2424197 < `id` AND `id` <= 2424797)       into data/myd_test.mydumper2S3.00007.sql| Remaining jobs: 1
2022-11-13 21:18:10 [INFO] - Thread 3: Table mydumper2S3 completed
2022-11-13 21:18:10 [INFO] - Thread 3 shutting down
2022-11-13 21:18:10 [INFO] - Releasing DDL lock
2022-11-13 21:18:10 [INFO] - Queue count: 0 0 0 0 0
2022-11-13 21:18:10 [INFO] - Main connection closed
2022-11-13 21:18:10 [INFO] - Finished dump at: 2022-11-13 21:18:10
2022-11-13 21:18:32 [INFO] - File data/myd_test.mydumper2S3.00009.sql transferred in 22 seconds at 0 MB/s | Global: 2 MB/s
2022-11-13 21:18:36 [INFO] - File data/myd_test.mydumper2S3.00003.sql transferred in 4 seconds at 4 MB/s | Global: 2 MB/s
2022-11-13 21:18:39 [INFO] - File data/myd_test.mydumper2S3.00001.sql transferred in 2 seconds at 9 MB/s | Global: 2 MB/s
2022-11-13 21:18:41 [INFO] - File data/myd_test.mydumper2S3.00007.sql transferred in 1 seconds at 4 MB/s | Global: 3 MB/s
2022-11-13 21:18:41 [INFO] - File data/myd_test.mydumper2S3-metadata transferred | Global: 3 MB/s
2022-11-13 21:18:41 [INFO] - File data/metadata transferred | Global: 3 MB/s
2022-11-13 21:18:41 [INFO] - All data transferred was 104055843 at a rate of 3 MB/s

As you see from the log, the files are being streamed as soon as they are closed. However, it took more than 30 seconds after the dump finished for all the files to be streamed. Finally, the command returned a couple of seconds after the “All data transferred…” entry, as the buffer needs to flush the data and upload it to S3.

With --exec

If you need to upload every single file individually, this is the option that you should use. For instance, you can use --load-data or the --csv option to allow another process to consume the files.

Let’s see the example:

mydumper -T myd_test.mydumper2S3 -o data -v 3 \
  --exec="/usr/bin/aws s3 cp FILENAME s3://davidducos/mydumper_backup/ --region us-east-1" --exec-threads=8

In this case, the AWS CLI will send the status of the files that are being uploaded to STDERR:

upload: data/myd_test-schema-create.sql to s3://davidducos/mydumper_backup/myd_test-schema-create.sql
upload: data/myd_test.mydumper2S3-schema.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3-schema.sql
upload: data/myd_test.mydumper2S3.00042.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00042.sql
upload: data/myd_test.mydumper2S3.00010.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00010.sql
upload: data/myd_test.mydumper2S3.00026.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00026.sql
upload: data/myd_test.mydumper2S3-metadata to s3://davidducos/mydumper_backup/myd_test.mydumper2S3-metadata
upload: data/metadata to s3://davidducos/mydumper_backup/metadata
upload: data/myd_test.mydumper2S3.00006.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00006.sql
upload: data/myd_test.mydumper2S3.00000.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00000.sql
upload: data/myd_test.mydumper2S3.00004.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00004.sql
upload: data/myd_test.mydumper2S3.00005.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00005.sql
upload: data/myd_test.mydumper2S3.00001.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00001.sql
upload: data/myd_test.mydumper2S3.00002.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00002.sql
upload: data/myd_test.mydumper2S3.00003.sql to s3://davidducos/mydumper_backup/myd_test.mydumper2S3.00003.sql

And the log will be the traditional mydumper log.

Conclusion

This is an example with S3, but it is also possible to use this approach with other vendors. If you need encryption, just pipe to your encryption command and then pipe again to AWS or any other command. I didn’t use ZSTD compression, which is another option that you should explore.
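As a rough sketch of what that could look like, assuming the zstd and openssl binaries are installed (the key file path and object name are placeholders, and this pipes through the zstd command rather than using MyDumper’s own compression option), the stream example above could be extended like this:

mydumper -T myd_test.mydumper2S3 -r 20000 -o data --stream \
  | zstd \
  | openssl enc -aes-256-cbc -salt -pbkdf2 -pass file:/path/to/keyfile \
  | aws s3 cp - s3://davidducos/mydumper_backup.sql.zst.enc --region us-east-1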

Nov
30
2022
--

PHP 8.0 Reaches End of Life


While the LAMP stack – Linux/Apache/MySQL/PHP – is not the dominant platform it was a decade ago, there are still many websites that depend on it. One of the pillars of this quartet hit a milestone last week when PHP 8.0 passed into End of Life status. While it will still have limited security support for another year, 8.0 should be phased out of your environments.

By the way, 8.1 has another year until it reaches End of Life status.

So please update your PHP 8.0 as soon as you can, double-check that the connector you are using to access your database is updated too, and make sure you are ready for 8.1’s future.
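If you are not sure what is currently deployed, a quick check from the command line (a hedged sketch, assuming a typical Linux install with the PHP CLI) might look like this:

php -v                                   # confirm the running PHP version
php -m | grep -i -E 'mysqli|pdo_mysql'   # confirm which MySQL connector extensions are loaded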

And if you are currently running MySQL 5.7, make sure you upgrade by October 2023, when it too reaches End of Life.

Nov
29
2022
--

MySQL Data Archival With Minimal Disruption

MySQL Data Archival

We all know that data is important, and some businesses need historical data to be available all the time. The problem is that queries on large tables perform poorly if they are not properly optimized. We get many customer requests in Managed Services to purge/archive large tables, and to achieve this, we use pt-archiver.

Recently, we received a request to archive a large table, and the customer was worried about the downtime and performance issues during the archival.

We proposed a solution to the customer to archive the table using pt-archiver. The idea is to archive old data to another table and keep the latest data in the current table with minimal performance issues. All of the data will remain available and can be queried anytime.

Percona Toolkit

In this blog, I will not explain how to use pt-archiver; instead, we will discuss one particular use case for it.

If you are unfamiliar with pt-archiver, please refer to “Want to archive tables? Use Percona Toolkit’s pt-archiver”; it explains how pt-archiver works and covers its various arguments.

We will test data archival to keep 100 days’ worth of data with a few simple steps for demonstration.

Note: This method uses timestamp datatype to filter the data.

  1. Create two dummy tables.
  2. Insert records in the source table.
  3. Archive the records from the source to the destination table using a --where condition per business requirements.
  4. Rename the tables.
  5. Add pt-archiver as a cron job.

This diagram better illustrates the process.

Remember, the date and --where condition in this example are just a reference. Use an archiving condition in pt-archiver that matches your business requirements.

Let’s create a source table and insert records using mysql_random_data_load:

Create Table: CREATE TABLE `blogpost` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `created_at` timestamp NULL DEFAULT NULL,
  `updated_at` timestamp NULL DEFAULT NULL,
  `deleted_at` timestamp NULL DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `blogpost_created_at_index` (`created_at`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
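For reference, a hedged sketch of loading sample rows with mysql_random_data_load could look like the command below (host, credentials, and the row count are placeholders; adjust them to your environment):

mysql_random_data_load --host=127.0.0.1 --user=root --password=secret test blogpost 1000000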

Let’s create the destination table:

(Using the following table for demonstration purposes only)

create table if not exists blogpost_new like blogpost;

Query OK, 0 rows affected (0.02 sec)

Create the following triggers using pt-online-schema-change.

Why triggers?

Any modifications to data in the original table during the copy will be reflected in the new table, because pt-online-schema-change creates triggers on the original table to update the corresponding rows in the new table.

(For demonstration purposes, I have added triggers created from the pt-online-schema-change test run)

# Event: DELETE
delimiter //
CREATE TRIGGER `pt_osc_test_blogpost_del` AFTER DELETE ON `test`.`blogpost` FOR EACH ROW BEGIN DECLARE CONTINUE HANDLER FOR 1146 begin end; DELETE IGNORE FROM `test`.`blogpost_new` WHERE `test`.`blogpost_new`.`id` <=> OLD.`id`; END
//
delimiter ;

# Event: UPDATE
delimiter //
CREATE TRIGGER `pt_osc_test_blogpost_upd` AFTER UPDATE ON `test`.`blogpost` FOR EACH ROW BEGIN DECLARE CONTINUE HANDLER FOR 1146 begin end; DELETE IGNORE FROM `test`.`blogpost_new` WHERE !(OLD.`id` <=> NEW.`id`) AND `test`.`blogpost_new`.`id` <=> OLD.`id`; REPLACE INTO `test`.`blogpost_new` (`id`, `created_at`, `updated_at`, `deleted_at`) VALUES (NEW.`id`, NEW.`created_at`, NEW.`updated_at`, NEW.`deleted_at`); END
//
delimiter ;

# Event: INSERT
delimiter //
CREATE TRIGGER `pt_osc_test_blogpost_ins` AFTER INSERT ON `test`.`blogpost` FOR EACH ROW BEGIN DECLARE CONTINUE HANDLER FOR 1146 begin end; REPLACE INTO `test`.`blogpost_new` (`id`, `created_at`, `updated_at`, `deleted_at`) VALUES (NEW.`id`, NEW.`created_at`, NEW.`updated_at`, NEW.`deleted_at`); END
//
delimiter ;

Verify that all triggers have been created.

SELECT TRIGGER_NAME FROM INFORMATION_SCHEMA.TRIGGERS WHERE TRIGGER_SCHEMA='test' AND EVENT_OBJECT_TABLE IN ('blogpost');

The output should be something like this:

+--------------------------+
| trigger_name             |
+--------------------------+
| pt_osc_test_blogpost_ins |
| pt_osc_test_blogpost_upd |
| pt_osc_test_blogpost_del |
+--------------------------+
3 rows in set (0.01 sec)

Copy the last 100 days of data using pt-archiver. Verify the statements first with --dry-run.

(A screen session can be used to run pt-archiver if the table is large.)

pt-archiver --source h=localhost,D=test,t=blogpost,i=blogpost_created_at_index \
--dest h=localhost,D=test,t=blogpost_new \
--where "created_at >= date_sub(curdate(), interval 100 day)" --no-delete --no-check-columns \
--limit=10000 --progress=10000 --no-check-charset --dry-run

The output should be something like this:

SELECT /*!40001 SQL_NO_CACHE */ `id`,`created_at`,`updated_at`,`deleted_at` FROM `test`.`blogpost` FORCE INDEX(`blogpost_created_at_index`) WHERE (created_at >= date_sub(curdate(), interval 100 day)) ORDER BY `created_at` LIMIT 10000
SELECT /*!40001 SQL_NO_CACHE */ `id`,`created_at`,`updated_at`,`deleted_at` FROM `test`.`blogpost` FORCE INDEX(`blogpost_created_at_index`) WHERE (created_at >= date_sub(curdate(), interval 100 day)) AND (((? IS NULL AND `created_at` IS NOT NULL) OR (`created_at` > ?))) ORDER BY `created_at` LIMIT 10000
INSERT INTO `test`.`blogpost_new`(`id`,`created_at`,`updated_at`,`deleted_at`) VALUES (?,?,?,?)

Let’s execute the pt-archiver:

(The following pt-archiver run will copy 100 days’ worth of data to the _new table, and the triggers will keep the _new table up to date.)

pt-archiver --source h=localhost,D=test,t=blogpost,i=blogpost_created_at_index \
--dest h=localhost,D=test,t=blogpost_new \
--where "created_at >= date_sub(curdate(), interval 100 day)" --no-delete --no-check-columns \
--limit=10000 --replace --progress=10000 --no-check-charset

The pt-archiver output should be something like this:

TIME                ELAPSED   COUNT
2022-09-28T23:50:20       0       0
2022-09-28T23:50:44      24   10000
2022-09-28T23:51:08      48   20000
2022-09-28T23:51:25      65   27590

Once pt-archiver finishes, check its exit code:

echo $?

(should be 0)

The next step is to check if the rows have been inserted into a new table and compare it with the original table. 

NOTE: The below results are just examples of tests; use the right date or where condition:

select max(created_at) from test.blogpost_new;
+---------------------+
| max(created_at)     |
+---------------------+
| 2022-09-27 02:11:00 |
+---------------------+

select count(1) from test.blogpost where created_at <= date_sub('2022-09-27', interval 100 day);
+----------+
| count(1) |
+----------+
|    65366 |
+----------+
1 row in set (0.02 sec)

select count(1) from test.blogpost_new where created_at >= date_sub('2022-09-26', interval 100 day);
+----------+
| count(1) |
+----------+
|    23758 |
+----------+
1 row in set (0.01 sec)

Why rename tables? 

The idea is to keep 100 days’ worth of data in the new table and rename it as the source table; the original table, renamed with the _archive suffix, will have all the data, including the last 100 days.

RENAME TABLE blogpost TO blogpost_archive, blogpost_new to blogpost;

It should look something like this:

+------------------+
| Tables_in_test   |
+------------------+
| blogpost         |
| blogpost_archive |
+------------------+

Drop the triggers created using pt-online-schema-change.

DROP TRIGGER IF EXISTS test.pt_osc_test_blogpost_ins;
DROP TRIGGER IF EXISTS test.pt_osc_test_blogpost_upd;
DROP TRIGGER IF EXISTS test.pt_osc_test_blogpost_del;

Once the table rename is completed, copy data from the blogpost table to the _archive table.

Let’s add the pt-archiver command to cron to make the process automatic. (It is advisable to wrap the below archiver command in a script and test it first; a sketch follows the command.)

pt-archiver --source h=localhost,D=test,t=blogpost,i=blogpost_created_at_index \
--dest h=localhost,D=test,t=blogpost_archive,b=0,L=yes \
--where "created_at >= date_sub(curdate(), interval 100 day)" --no-delete --no-check-columns \
--limit=10000 --progress=10000 --replace --bulk-insert --no-check-charset > /path/to/logs 2>&1
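As a hedged sketch of that script and cron entry (the script path, schedule, and log location are placeholders), something along these lines can be adapted:

cat > /usr/local/bin/archive_blogpost.sh <<'EOF'
#!/bin/bash
# Run the pt-archiver command shown above; cron appends the output to a log file.
/usr/bin/pt-archiver --source h=localhost,D=test,t=blogpost,i=blogpost_created_at_index \
  --dest h=localhost,D=test,t=blogpost_archive,b=0,L=yes \
  --where "created_at >= date_sub(curdate(), interval 100 day)" --no-delete --no-check-columns \
  --limit=10000 --progress=10000 --replace --bulk-insert --no-check-charset
EOF
chmod +x /usr/local/bin/archive_blogpost.sh

# crontab entry: run daily at 02:00
0 2 * * * /usr/local/bin/archive_blogpost.sh >> /var/log/archive_blogpost.log 2>&1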

Now it is time to check if the cron was successful.

Check whether the archived rows have been copied to the test.blogpost_archive table and deleted from test.blogpost:

SELECT COUNT(1) FROM test.blogpost_archive WHERE created_at <= date_sub('2022-09-02', interval 100 day);

Verify if the blogpost table has 100 days of data, and the following query should return 0 rows:

SELECT COUNT(1) FROM test.blogpost WHERE created_at <= date_sub('2022-09-02', interval 100 day);

Hope you found this use case of the pt-archiver helpful when you need to purge/archive large tables!

Nov
29
2022
--

PMM, Federated Tables, Table Stats, and Lots of Connections!

Percona Monitoring and Management Federated Tables

Earlier in the year, I was working on an issue where one of my clients had reported a massive influx of connections on their hosts after enabling Percona Monitoring and Management (PMM). This was something I had not seen before, and after researching for a couple of days I discovered that if you monitor a MySQL instance with PMM configured to collect table statistics, and the tables it is gathering statistics from are Federated, it will generate a connection on the remote host for each Federated table in the instance. Let’s go over the details and provide some examples so we can understand this a bit better.

First, I’ll offer a reminder that a Federated table is simply a table that you can put in your MySQL instance that is empty locally and uses a network connection to get the data from another MySQL host when the table is queried. For example, if I have a normal table called peter_data on host mysql1, I can set up a Federated table on mysql2 that points to mysql1. Each time that mysql2 has a query on the peter_data table, it connects to mysql1, gets the data, and then returns it locally. This feature is a lot less common now than it once was given how MySQL replication has improved over time, but as you can see here in the MySQL reference guide, it’s still supported.
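To make that mysql1/mysql2 example concrete, a minimal sketch of the setup on mysql2 might look like the SQL below (host, credentials, database name, and columns are illustrative, and the FEDERATED engine must be enabled on mysql2; the lab later in this post uses a server object named fedlink the same way):

-- On mysql2: define a server object that points at mysql1, then a FEDERATED table that uses it.
CREATE SERVER fedlink
  FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST 'mysql1', DATABASE 'mydb', USER 'root', PASSWORD 'password', PORT 3306);

CREATE TABLE peter_data (
  id INT NOT NULL PRIMARY KEY,
  note VARCHAR(255)
) ENGINE=FEDERATED CONNECTION='fedlink/peter_data';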

So how does this impact our issue where PMM was establishing so many connections? Let’s set this up in my lab and have a look!

Lab setup

Let’s start by setting up my first host centos7-1 as the “remote host”. This is the host that has the actual data on it.

[root@centos7-1 ~]# mysql
.....
mysql> use ftest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show create table t1 \G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(11) NOT NULL,
  `c2` int(11) DEFAULT NULL,
  `c3` int(11) DEFAULT NULL,
  PRIMARY KEY (`c1`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> select * from t1;
+----+------+------+
| c1 | c2   | c3   |
+----+------+------+
|  1 |    1 |    1 |
|  2 |    2 |    2 |
|  3 |    3 |    3 |
+----+------+------+
3 rows in set (0.00 sec)

Now that’s done, I’ll set up my second host centos7-2 to act as the host that has the Federated table.

[root@centos7-2 ~]# mysql -u root -ppassword
mysql> select * from mysql.servers;
+-------------+-----------+-------+----------+----------+------+--------+---------+-------+
| Server_name | Host      | Db    | Username | Password | Port | Socket | Wrapper | Owner |
+-------------+-----------+-------+----------+----------+------+--------+---------+-------+
| fedlink     | 10.0.2.12 | ftest | root     | password | 3306 |        | mysql   |       |
+-------------+-----------+-------+----------+----------+------+--------+---------+-------+
1 row in set (0.00 sec)

mysql> use ftest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show create table t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(11) NOT NULL,
  `c2` int(11) DEFAULT NULL,
  `c3` int(11) DEFAULT NULL,
  PRIMARY KEY (`c1`)
) ENGINE=FEDERATED DEFAULT CHARSET=latin1 CONNECTION='fedlink/t1'
1 row in set (0.00 sec)

mysql> select * from t1;
+----+------+------+
| c1 | c2   | c3   |
+----+------+------+
|  1 |    1 |    1 |
|  2 |    2 |    2 |
|  3 |    3 |    3 |
+----+------+------+
3 rows in set (0.01 sec)

Recreating the issue

Now that we have our Federated table set up, let’s test and see how querying table metadata on centos7-2, the instance with the Federated table, impacts connections on centos7-1, the remote host. What I did was connect to centos7-2, query the information_schema.tables table much in the same way that PMM does, disconnect, and then connect a second time running the same query.

[root@centos7-2 ~]# mysql -u root -ppassword
...
mysql> select * from information_schema.tables where table_schema = 'ftest';
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | ENGINE    | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH | MAX_DATA_LENGTH | INDEX_LENGTH | DATA_FREE | AUTO_INCREMENT | CREATE_TIME | UPDATE_TIME         | CHECK_TIME | TABLE_COLLATION   | CHECKSUM | CREATE_OPTIONS | TABLE_COMMENT |
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
| def           | ftest        | t1         | BASE TABLE | FEDERATED |      10 | Fixed      |          3 |           5461 |       16383 |               0 |            0 |         0 |           NULL | NULL        | 1969-12-31 19:33:42 | NULL       | latin1_swedish_ci |     NULL |                |               |
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
1 row in set (0.01 sec)

mysql> exit
Bye

[root@centos7-2 ~]# mysql -u root -ppassword
....
mysql> select * from information_schema.tables where table_schema = 'ftest';
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | ENGINE    | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH | MAX_DATA_LENGTH | INDEX_LENGTH | DATA_FREE | AUTO_INCREMENT | CREATE_TIME | UPDATE_TIME         | CHECK_TIME | TABLE_COLLATION   | CHECKSUM | CREATE_OPTIONS | TABLE_COMMENT |
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
| def           | ftest        | t1         | BASE TABLE | FEDERATED |      10 | Fixed      |          3 |           5461 |       16383 |               0 |            0 |         0 |           NULL | NULL        | 1969-12-31 19:33:42 | NULL       | latin1_swedish_ci |     NULL |                |               |
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
1 row in set (0.01 sec)

mysql> exit
Bye

As you can see below, this resulted in two connections on centos7-1 that did not drop despite disconnecting and reconnecting on centos7-2.

mysql> show processlist;
+----+------+-----------------+-------+---------+------+----------+------------------+-----------+---------------+
| Id | User | Host            | db    | Command | Time | State    | Info             | Rows_sent | Rows_examined |
+----+------+-----------------+-------+---------+------+----------+------------------+-----------+---------------+
| 23 | root | localhost       | NULL  | Query   |    0 | starting | show processlist |         0 |             0 |
| 25 | root | 10.0.2.13:33232 | ftest | Sleep   |  112 |          | NULL             |         1 |             1 |
| 27 | root | 10.0.2.13:33236 | ftest | Sleep   |   71 |          | NULL             |         1 |             1 |
+----+------+-----------------+-------+---------+------+----------+------------------+-----------+---------------+
3 rows in set (0.00 sec)

This doesn’t sound like that big of a deal, especially considering that PMM usually remains connected to the host. So if you have one Federated table in your system and PMM is monitoring table stats, it will only add one connection on the remote host, right? That’s true, but in my lab I expanded this to create 145 Federated tables, and as a result, despite only querying the information_schema.tables table, 145 connections were created on centos7-1.

[root@centos7-2 ~]# mysql -u root -ppassword
...
mysql> select * from information_schema.tables where table_schema = 'ftest';
....
145 rows in set (0.08 sec)

[root@centos7-1 ~]# mysql -u root -ppassword
mysql> select * from information_schema.processlist where substring_index(host,':',1) = '10.0.2.13' and user = 'root';
+------+------+------------------+-------+---------+------+-------+------+---------+-----------+---------------+
| ID   | USER | HOST             | DB    | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED |
+------+------+------------------+-------+---------+------+-------+------+---------+-----------+---------------+
| 2120 | root | 10.0.2.13:60728  | ftest | Sleep   |    7 |       | NULL |    6477 |         1 |             1 |
| 2106 | root | 10.0.2.13:60700  | ftest | Sleep   |    7 |       | NULL |    6701 |         1 |             1 |
....
| 2117 | root | 10.0.2.13:60722  | ftest | Sleep   |    7 |       | NULL |    6528 |         1 |             1 |
| 2118 | root | 10.0.2.13:60724  | ftest | Sleep   |    7 |       | NULL |    6512 |         1 |             1 |
+------+------+------------------+-------+---------+------+-------+------+---------+-----------+---------------+
145 rows in set (0.00 sec)

This can be a big problem if you have a host that doesn’t support a lot of connections and you need those connections to be available for your app!

Conclusion

Based on the lab testing above, we can see how PMM queries against the information_schema.tables table can cause issues with a lot of connections being created on a Federated remote host. This probably will not be a problem for most MySQL users considering that Federated tables aren’t that common, but if you have Federated tables and you’re considering adding PMM monitoring, or any other monitoring that collects table statistics, be warned! The maintenance of Federated connections on a remote host is not a bug; this is how it’s supposed to behave.

If you have Federated tables and want to avoid this problem, make sure that you use the --disable-tablestats flag when adding MySQL to your local PMM client using the “pmm-admin add mysql” command.
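For reference, a hedged example of what that might look like with PMM 2 (the service name, credentials, and address below are placeholders, not values from the lab above):

pmm-admin add mysql --username=pmm --password=secret \
  --query-source=perfschema --disable-tablestats \
  mysql-federated-host 127.0.0.1:3306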

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Nov
28
2022
--

Talking Drupal #375 – Being A Creative Director

Today we are talking about Being a Creative Director with Randy Oest.

For show notes visit: www.talkingDrupal.com/375

Topics

  • What is a Creative Director?
  • How is being a CD at a technical company different?
  • Do Drupal CDs need additional skills?
  • Sometimes things get lost in translation between design and development; how do you bridge that gap?
  • How do you mentor?
  • How do you interview for creative positions?
  • Do you hire developers too?
  • Optimal makeup for a team.
  • Guiding the Four Kitchens team
  • Inspiration

Resources

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Randy Oest – randyoest.com @amazingrando

MOTW Correspondent

Martin Anderson-Clutz – @mandclu

ECA – ECA is a powerful, versatile, and user-friendly rules engine for Drupal 9+. The core module is a processor that validates and executes event-condition-action plugins. Integrated with graphical user interfaces like BPMN.iO, Camunda, ECA Core Modeller, or other possible future modellers, ECA is a robust system for building conditionally triggered action sets.

Nov
28
2022
--

Cloud of Serfdom or Cloud of Freedom – What Would You Choose?

open source cloud

At its core, the cloud is a fantastic innovation — I do not think many of us will long for the days when it required days, weeks, or even months to provision a new server.  Programmable, instantly accessible infrastructure comes with so much flexibility. It allows new deployment and management approaches we could not even have imagined in the “old world.”

In the early days of the cloud, vendors very much embraced it, and focused on the new commodity the cloud was to become:

cloud computing

Electricity, mobile networks, and Internet access are wonderful commodities driving so much innovation purely because they are commodities. These commodities are accessible to everyone and (in general) have low switching costs. This is due, to a large extent, to the fact they are based on commonly agreed official or defacto standards.

The problem with a commodity though is, while it is of value for consumers and society at large, it is not so great for the company providing it, as competition inevitably reduces profitability and is not rewarded well by the stock market either.  Just compare AT&T’s P/E of around seven to Amazon’s, which is close to 90.

While initially clouds were providing largely commoditized “Infrastructure as a Service,” they have since all moved on to pushing proprietary services which exist only on their cloud.  Even more, they have invested in educational content that presents using such proprietary services as “best practice”.  If you’re an Amazon Certified Architect, you need to know you should be running Amazon Aurora, DynamoDB, or Redshift rather than using IaaS and rolling out your database on Kubernetes.

If you follow this path of serfdom, you will find yourself in a situation where your vendor has all the power.  It is not only pricing power but also the power to shut your business down, whether because of international politics or because of the influence of cancel culture zealots.

Students of history may remember what happened when a young company, which was to become known as Oracle, took on IBM.

Larry Ellison

In the early days, Oracle was saving customers from the hardware vendor lock-in that came from the de facto monopoly of Big Blue (IBM). As enough customers were sufficiently “saved” (deep enough into Oracle technology that they were not moving anywhere), Oracle gradually moved on to become the hardball company it is known as today. There is a saying in the industry: “Oracle does not have customers, Oracle has hostages.”

If you choose to lock yourself in, with highly differentiated proprietary technologies, you should expect the same, maybe not tomorrow or next year, but it is inevitable. As an example, you can check for yourself the difference between EC2, RDS, and Aurora and how it only gets larger as time goes on and new instance types become available.

There is one more point to consider — over the last decade, cloud spending (and so revenues for cloud providers) was growing in the double digits because so many workloads were moving to the cloud.  As we complete this move though, the growth cloud vendors seek will need to come from charging existing customers more.

This will especially be a problem for larger companies that have substantially locked themselves in with the cloud of their choice and are expected to pay the ransom, as they have no other choice. Early-stage companies and startups will continue to be pampered, including receiving generous free credits to get started. Hey, drug lords already figured it out – to build the most loyal customer base, you better give the first dose of heroin for free.

Cloud of Freedom

There is another way though, and like embracing open source in the late 90s, it requires taking the road less traveled and putting in more elbow grease, but it comes with substantial cost savings and most importantly freedom of controlling your infrastructure, your data, and being in a position to pick and choose cloud providers to your liking.

The idea is simple — use a Cloud Vendor as a commodity infrastructure provider and use Open Source Software for higher levels of the stack.

You may rightly point out that open source solutions are not as polished as a well-integrated set of services you get from AWS and other cloud vendors. It is true, of course. If you have been in the industry for a while it might remind you of a situation with Linux in the late nineties.

open source solaris and linux

I remember in those years, I chose Linux as the operating system for the startup I was building. My friends working for serious companies were laughing at me — Linux was so immature in terms of usability and performance compared to Solaris, HP-UX, AIX, and Windows NT that it was almost a toy.  At one point you could not even have files larger than 2GB.

Yet we all know how that story ends — the industry came together and improved Linux to the point where it is a very mature, scalable, and high-performance operating system that underpins most of the Internet.

One important contributing factor that pushed many companies, especially startups, to embrace open source was the Dot Com Crash of 2001.  Linux, Apache, MySQL, and PHP (LAMP) did not magically become much better than the Microsoft or Java stacks.  Many companies just could not afford those anymore, and in the end had to use open source technologies and improve them along the way.

Where we used to have Linux, in the cloud age we need more than that, and we are getting it with Kubernetes and the cloud native ecosystem.

The cloud native ecosystem is still young compared to the cloud, but it is getting a lot of investment and growing rapidly.  You just need to attend KubeCon/CloudNativeCon to see this ecosystem is going places!  You see hundreds of companies innovating in their own way — it is messy at times, and there are often multiple options and multiple opinions on how to approach a specific problem. But we know this kind of approach tends to provide a better outcome in the long term; for example, democracy, with all of its inefficiencies, tends to win against dictatorship.

Yes, of course, AWS, GCP, Azure, and other clouds also have wonderful partnership ecosystems, but it is different. It is never about working on the best solution to the problem or achieving the best outcome for the customer. Instead, it is about one dominating player setting the rules of the game with their own commercial interests and others choosing to play, or not.

Path to the Cloud of Freedom

So, let’s say I have convinced you that embracing open source and using the cloud as a commodity is the better path, even though it might not be easy. How should you embrace it?

Let’s be real — in its current state, even though fantastic progress is made every year, the cloud native ecosystem may not have robust solutions for everything major clouds have to offer.  If you have been on the proprietary cloud bandwagon, it is likely your team has more knowledge of proprietary cloud approaches and, as such, would need to go out of its comfort zone to embrace cloud native solutions. The practices to follow are basically the same ones that helped open source come to dominate the data center:

  • Strategically embrace open source solutions in the cloud, create incentives to overcome friction, and create situations where only what is really not feasible with open source uses proprietary cloud services. Focus on new applications first — it is much easier to build new software on a new stack than retrofit the old one.
  • Support open source developers, and vendors doing development.  Software does not build itself and we collectively need to fund the development and maintenance of software if we want robust, scalable, secure software to exist.
  • Participate in the open source ecosystem yourself.  The best way to make sure open source software meets your needs is to make it so. Even if you can’t contribute code, contribute bugs, ideas, documentation, and content.

As often happens with open source, we have had proprietary vendors figuring out what exactly works in the market and educating the masses about new approaches; now is the time for the open source community to build on that and bring freedom into the cloud.  I’m sure that 10 years from now, the cloud will look nothing like it does today!

Nov
28
2022
--

Percona Distribution for PostgreSQL 15.1, 14.6, 13.9; Percona Server for MySQL 8.0.30-22: Release Roundup November 28, 2022

Percona Releases

It’s time for the release roundup!

Percona is a leading provider of unbiased open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive.

Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. They offer highlights and critical information, as well as links to the full release notes and direct links to download the software or service itself.

Today’s post includes those releases and updates that have come out since November 14, 2022. Take a look!

Percona Distribution for PostgreSQL 15.1

Percona Distribution for PostgreSQL 15.1 was released on November 21, 2022. It is a collection of tools from the PostgreSQL community that are tested to work together and assist you in deploying and managing PostgreSQL. The aim of Percona Distribution for PostgreSQL is to address operational issues like high availability, disaster recovery, security, performance and scalability, and others that enterprises are facing.

Download Percona Distribution for PostgreSQL 15.1

 

Percona Distribution for PostgreSQL 14.6

Percona Distribution for PostgreSQL 14.6 was released on November 23, 2022. This release of Percona Distribution for PostgreSQL is based on PostgreSQL 14.6. Percona Distribution for PostgreSQL now includes the meta-packages that simplify its installation. The percona-ppg-server meta-package installs PostgreSQL and the extensions, while percona-ppg-server-ha package installs high-availability components that are recommended by Percona.

Download Percona Distribution for PostgreSQL 14.6

 

Percona Distribution for PostgreSQL 13.9

On November 24, 2022, Percona Distribution for PostgreSQL 13.9 was released. Percona Distribution for PostgreSQL is based on PostgreSQL 13.9 and now includes the meta-packages that simplify its installation. The percona-ppg-server meta-package installs PostgreSQL and the extensions, while percona-ppg-server-ha package installs high-availability components that are recommended by Percona.

Download Percona Distribution for PostgreSQL 13.9

 

Percona Server for MySQL 8.0.30-22

Percona Server for MySQL 8.0.30-22 was released on November 21, 2022. It includes all the features and bug fixes available in the MySQL 8.0.30 Community Edition in addition to enterprise-grade features developed by Percona. Percona Server for MySQL is a free, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior performance, scalability, and instrumentation. Please be sure to check out all the release highlights and new features.

Download Percona Server for MySQL 8.0.30-22

 

Percona XtraBackup 8.0.30-23

Released on November 14, 2022, Percona XtraBackup 8.0.30-23 enables MySQL backups without blocking user queries. Percona XtraBackup is ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

Along with bug fixes, the following features and improvements introduced in Percona XtraBackup 8.0.30-23 are available only in tech preview:

  • Percona XtraBackup adds support for the Zstandard (ZSTD) compression algorithm. ZSTD is a fast lossless compression algorithm, targeting real-time compression scenarios at zlib-level and better compression ratios (see the example after this list).
  • Percona XtraBackup implements the Smart memory estimation feature to compute the memory required to --prepare a backup. Percona XtraBackup has extended the crash recovery logic to extract the formula used to allocate memory. Now, in the backup phase, while copying redo log entries, Percona XtraBackup computes the required memory for --prepare.
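A hedged sketch of trying the new compression support could look like the following (the target directory and credentials are placeholders, and it assumes the --compress=zstd value described in the release notes):

xtrabackup --backup --user=root --password=secret \
  --compress=zstd --compress-threads=4 \
  --target-dir=/backups/full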

Download Percona XtraBackup 8.0.30-23

 

That’s it for this roundup, and be sure to follow us on Twitter to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments.

Nov
25
2022
--

Percona Operator for MongoDB Backup and Restore on S3-Compatible Storage – Backblaze

Percona Operator for MongoDB Backblaze

One of the main features that I like about the Percona Operator for MongoDB is the integration with the Percona Backup for MongoDB (PBM) tool and the ability to back up/restore the database without manual intervention. The Operator allows backing up the DB to S3-compatible cloud storage, so you can use AWS, Azure, etc.

One of our customers asked about the integration between Backblaze and the Operator for backup and restore purposes. I checked it out and found that Backblaze B2 is S3-compatible and provides a free account with 10GB of cloud storage, so I jumped into testing it with our Operator. I also saw in our forum that a few users are using Backblaze cloud storage, so I am writing this blog post for anyone who wants to test or use Backblaze S3-compatible cloud storage with our Operator and PBM.

S3-compatible storage configuration

The Operator supports backup to S3-compatible storage. The steps for backing up to AWS or Azure Blob are given here, so you can try those as well. In this blog, let me focus on configuring B2 cloud storage as the backup location and restoring it to another deployment.

Let’s configure the Percona Server for MongoDB (PSMDB) sharded cluster using the Operator (with the minimal config explained here). I used PSMDB Operator v1.12.0 and PBM 1.8.1 for the test below. You can sign up for the free account here – https://www.backblaze.com/b2/cloud-storage-b.html. Then log in to your account. First, create a key pair to access the storage from your Operator in the “App Keys” tab:

Add Application Key GUI

 

Then you can create a bucket with your desired name and note down the S3-compatible storage details, such as the bucket name (shown in the picture below) and the endpointUrl to which the backup files will be sent. The endpointUrl can be obtained from the provider, and the region is specified in the prefix of the endpointUrl value.

Create Bucket

 

Deploy the cluster

Now let’s download the Operator from GitHub (I used v1.12.0) and configure the files for deploying the MongoDB sharded cluster. Here, I am using cr-minimal.yaml to deploy a very minimal setup: a single-member replica set for the shard, the config DB, and one mongos.

#using an alias for the kubectl command
$ alias "k=kubectl"
$ cd percona-server-mongodb-operator

# Add a backup section in the cr file as shown below. Use the appropriate values from your setup
$ cat deploy/cr-minimal.yaml
apiVersion: psmdb.percona.com/v1-12-0
kind: PerconaServerMongoDB
metadata:
  name: minimal-cluster
spec:
  crVersion: 1.12.0
  image: percona/percona-server-mongodb:5.0.7-6
  allowUnsafeConfigurations: true
  upgradeOptions:
    apply: 5.0-recommended
    schedule: "0 2 * * *"
  secrets:
    users: minimal-cluster
  replsets:
  - name: rs0
    size: 1
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  sharding:
    enabled: true
    configsvrReplSet:
      size: 1
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    mongos:
      size: 1

  backup:
    enabled: true
    image: percona/percona-backup-mongodb:1.8.1
    serviceAccountName: percona-server-mongodb-operator
    pitr:
      enabled: false
      compressionType: gzip
      compressionLevel: 6
    storages:
      s3-us-west:
        type: s3
        s3:
          bucket: psmdbbackupBlaze
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-004
          endpointUrl: https://s3.us-west-004.backblazeb2.com/
#          prefix: ""
#          uploadPartSize: 10485760
#          maxUploadParts: 10000
#          storageClass: STANDARD
#          insecureSkipTLSVerify: false

 

The backup-s3.yaml file contains the key details to access the B2 cloud storage. Encode the Key ID and Access Key (retrieved from Backblaze as mentioned here) as follows to use them inside the backup-s3.yaml file. The secret name my-cluster-name-backup-s3 should be unique, as it is referenced from the other yaml files:

# First use base64 to encode your keyid and access key:
$ echo "key-sample" | base64 --wrap=0
XXXX==
$ echo "access-key-sample" | base64 --wrap=0
XXXXYYZZ==

$ cat deploy/backup-s3.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-name-backup-s3
type: Opaque
data:
 AWS_ACCESS_KEY_ID: XXXX==
 AWS_SECRET_ACCESS_KEY: XXXXYYZZ==

 

Then deploy the cluster as mentioned below and deploy backup-s3.yaml as well.

$ k apply -f ./deploy/bundle.yaml
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com created
role.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created

$ k apply -f ./deploy/cr-minimal.yaml
perconaservermongodb.psmdb.percona.com/minimal-cluster created

$ k apply -f ./deploy/backup-s3.yaml 
secret/my-cluster-name-backup-s3 created

After starting the Operator and applying the yaml files, the setup looks like the below:

$ k get pods
NAME                                               READY   STATUS    RESTARTS   AGE
minimal-cluster-cfg-0                              2/2     Running   0          39m
minimal-cluster-mongos-0                           1/1     Running   0          70m
minimal-cluster-rs0-0                              2/2     Running   0          38m
percona-server-mongodb-operator-665cd69f9b-44tq5   1/1     Running   0          74m

$ k get svc
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
kubernetes               ClusterIP   10.96.0.1     <none>        443/TCP     76m
minimal-cluster-cfg      ClusterIP   None          <none>        27017/TCP   72m
minimal-cluster-mongos   ClusterIP   10.100.7.70   <none>        27017/TCP   72m
minimal-cluster-rs0      ClusterIP   None          <none>        27017/TCP   72m

 

Backup

After deploying the cluster, the DB is ready for backup anytime. Besides the scheduled backups, you can create a backup-custom.yaml file to take a backup whenever you need it (you will need to provide a unique backup name each time, or else a new backup will not work). Our backup yaml file looks like the one below:

$ cat deploy/backup/backup-custom.yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  finalizers:
  - delete-backup
  name: backup1
spec:
  clusterName: minimal-cluster
  storageName: s3-us-west
#  compressionType: gzip
#  compressionLevel: 6

Now load some data into the database and then start the backup:

$ k apply -f deploy/backup/backup-custom.yaml 
perconaservermongodbbackup.psmdb.percona.com/backup1 configured

The backup progress looks like the below:

$ k get perconaservermongodbbackup.psmdb.percona.com
NAME      CLUSTER           STORAGE      DESTINATION            STATUS      COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:21:58Z   requested               43s
$ k get perconaservermongodbbackup.psmdb.percona.com
NAME      CLUSTER           STORAGE      DESTINATION            STATUS      COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:22:19Z   requested               46s
$ k get perconaservermongodbbackup.psmdb.percona.com
NAME      CLUSTER           STORAGE      DESTINATION            STATUS    COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:22:19Z   running               49s

Here, if you have any issues with the backup, you can view the backup logs from the backup agent sidecar as follows:

$ k logs pod/minimal-cluster-rs0-0 -c backup-agent

To start another backup, edit backup-custom.yaml and change the backup name followed by applying it (using name:backup2):

$ k apply -f deploy/backup/backup-custom.yaml 
perconaservermongodbbackup.psmdb.percona.com/backup2 configured 

Monitor the backup process (you can use the -w option to watch the progress continuously). It should show the status as ready:

$ k get perconaservermongodbbackup.psmdb.percona.com -w
NAME      CLUSTER           STORAGE      DESTINATION            STATUS   COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:22:19Z   ready    12m         14m
backup2   minimal-cluster   s3-us-west                                               8s
backup2   minimal-cluster   s3-us-west   2022-09-08T03:35:56Z   requested               21s
backup2   minimal-cluster   s3-us-west   2022-09-08T03:35:56Z   running                 26s
backup2   minimal-cluster   s3-us-west   2022-09-08T03:35:56Z   ready       0s          41s

From the bucket on Backblaze, the backup files are listed as they were sent from the backup:

Backup Files in B2 cloud Storage

 

Restore

You can restore the cluster from the backup into another similar deployment or into the same cluster. List the backups and restore one of them as follows. The restore-custom.yaml configuration has the backup information needed for the restore. If you are restoring into another deployment, you can also include the backupSource section, which I have commented out below for your reference; the restore process uses it to find the source of the backup. In that case, make sure you also create the secret my-cluster-name-backup-s3 before restoring, so the backup can be accessed.

$ cat deploy/backup/restore-custom.yaml 
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore2
spec:
  clusterName: minimal-cluster
  backupName: backup2
#  pitr:
#    type: date
#    date: YYYY-MM-DD HH:MM:SS
#  backupSource:
#    destination: s3://S3-BACKUP-BUCKET-NAME-HERE/BACKUP-DESTINATION
#    s3:
#      credentialsSecret: my-cluster-name-backup-s3
#      region: us-west-004
#      bucket: S3-BACKUP-BUCKET-NAME-HERE
#      endpointUrl: https://s3.us-west-004.backblazeb2.com/
#      prefix: ""
#    azure:
#      credentialsSecret: SECRET-NAME
#      prefix: PREFIX-NAME
#      container: CONTAINER-NAME

Listing the backup:

$ k get psmdb-backup
NAME      CLUSTER           STORAGE      DESTINATION            STATUS   COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:22:19Z   ready    3h5m        3h6m
backup2   minimal-cluster   s3-us-west   2022-09-08T03:35:56Z   ready    171m        172m
backup3   minimal-cluster   s3-us-west   2022-09-08T04:16:39Z   ready    130m        131m

 

To verify the restore process, I wrote some data into the collection vinodh.testData after the backup and before the restore. The newly inserted document shouldn’t be there after the restore:

# Using mongosh from the mongo container to see the data
# Listing data from collection vinodh.testData
$ kubectl run -i --rm --tty mongo-client --image=mongo:5.0.7 --restart=Never -- bash -c "mongosh --host=10.96.30.92 --username=root --password=password --authenticationDatabase=admin --eval \"db.getSiblingDB('vinodh').testData.find()\" --quiet "
If you don't see a command prompt, try pressing enter.
[ { _id: ObjectId("631956cc70e60e9ed3ecf76d"), id: 1 } ]
pod "mongo-client" deleted

Inserting a document into it:

$ kubectl run -i --rm --tty mongo-client --image=mongo:5.0.7 --restart=Never -- bash -c "mongosh --host=10.96.30.92 --username=root --password=password --authenticationDatabase=admin --eval \"db.getSiblingDB('vinodh').testData.insert({id:2})\" --quiet "
If you don't see a command prompt, try pressing enter.
DeprecationWarning: Collection.insert() is deprecated. Use insertOne, insertMany, or bulkWrite.
{
  acknowledged: true,
  insertedIds: { '0': ObjectId("631980fe07180f860bd22534") }
}
pod "mongo-client" delete

Listing it again to verify:

$ kubectl run -i --rm --tty mongo-client --image=mongo:5.0.7 --restart=Never -- bash -c "mongosh --host=10.96.30.92 --username=root --password=password --authenticationDatabase=admin --eval \"db.getSiblingDB('vinodh').testData.find()\" --quiet "
If you don't see a command prompt, try pressing enter.
[
  { _id: ObjectId("631956cc70e60e9ed3ecf76d"), id: 1 },
  { _id: ObjectId("631980fe07180f860bd22534"), id: 2 }
]
pod "mongo-client" deleted

Running restore as follows:

$ k apply -f deploy/backup/restore-custom.yaml
perconaservermongodbrestore.psmdb.percona.com/restore2 created

Now check the data again in vinodh.testData collection and verify whether the restore is done properly. The below data proves that the collection was restored from the backup as it is listing only the record from the backup:

$ kubectl run -i --rm --tty mongo-client --image=mongo:5.0.7 --restart=Never -- bash -c "mongosh --host=minimal-cluster-mongos --username=root --password=password --authenticationDatabase=admin --eval \"db.getSiblingDB('vinodh').testData.find()\" --quiet "
If you don't see a command prompt, try pressing enter.
[ { _id: ObjectId("631956cc70e60e9ed3ecf76d"), id: 1 } ]

 

Hope this helps you! Now you can try the same from your end to check and use Backblaze in your production if it suits your requirements. I haven’t tested the performance of the network yet. If you have used Backblaze or similar S3-compatible storage for backup, then you can share your experience with us in the comments.

The Percona Kubernetes Operators automate the creation, alteration, or deletion of members in your Percona Distribution for MySQL, MongoDB, or PostgreSQL environment.

Learn More About Percona Kubernetes Operators

Nov
25
2022
--

Small Business Saturday: Supercharging Your Database for the Holiday Shopping Surge

Small Business Saturday Database Prep Managed Services

Shopping mayhem on the day after Thanksgiving exploded in the 1950s and ’60s, with cops dubbing it Black Friday. Internet shopping put Cyber Monday in the retail lexicon 17 years ago. In 2010, a credit card company coined (pun intended) the title Small Business Saturday.

But enough of the history. The names — commercial, ominous, or otherwise — are here to stay, and the biggest shopping days of 2022 are upon us. There’s a lot at stake, but you know that. Hopefully, your database is running at peak performance to match the high customer demand of the peak shopping days ahead. If not, there’s time left to supercharge your database operations … and make a difference in your bottom line this holiday season.

Speaking of your bottom line, this is probably the time of year when you’re most focused on core business. But database problems (i.e. delayed or broken customer buying experiences) can steal that focus. Options for addressing those problems include hiring DBAs or using managed services. When choosing, consider the differences in time and money:

In-house DBAs

Sounds nice to have DBA expertise on-hand, with people on your staff who can tune performance and monitor your systems around the clock. Nice, indeed, but it’s a tall and costly order. 

There’s the monetary cost: The median annual salary for a database administrator is around $100,000 USD (according to U.S. News & World Report, the U.S. Bureau of Labor Statistics, and others), and that figure can shoot much higher when factoring in the DBA professional’s geographic salary requirements and the specialty needs of your company. Further, if you need more than one DBA, or even an entire team, the salary pain can become unbearable.

There’s also the cost of your time and effort: Data is king, and DBAs are among the most coveted IT professionals. Seeking and landing the DBA talent you need can be frustrating. With a mere 1.5% of DBAs reportedly unemployed (U.S. News), there isn’t much of a candidate pool. The shortage isn’t a new trend, either, and it’s likely to worsen. A 2015 Unisphere Research report showed that more than 40% of data professionals intended to leave the field within the next decade. Since then, there have been thousands of additional DBA openings without the professionals to fill them. According to the Bureau of Labor, there’ll be about 11,500 additional openings for database administrators and architects in the coming decade, with many of those openings attributable to DBAs entering other occupations or retiring.

Managed services for meeting your specific DBA needs

Percona offers an alternative to the cost and frustration of hiring full-time DBAs to keep databases operating at optimal levels. Percona Managed Services keep you in control of your budget. It’s a partnership that provides database management support at a fraction of the full-time DBA cost. You decide the level of support, and you can adjust that level as needs change.

Regardless of your support level, you’ll always have the reassurance of in-depth technical expertise on demand and at an affordable price. Percona DBAs will keep your MySQL, PostgreSQL, MongoDB, and MariaDB databases running at peak performance so your technical team can focus on your core business. In addition to deep operational knowledge of the aforementioned databases, Percona DBAs also have expertise in the cloud platforms of Amazon AWS, Microsoft Azure, and Google Compute Engine, as well as Amazon’s DBaaS offering of RDS and Aurora.

Again, you get to decide how and when to apply that expertise. You can use Percona to augment your existing resources, or you can have Percona fully manage your database requirements from start to finish. You also get to define the relationship itself: If you choose to adopt our software and end the relationship, the software is yours. (There’s no lock-in and never any licensing fees.) 

Regardless of how you shape the relationship, Percona Managed Services can help your company:

  • Stay competitive and agile.
  • Operate on-premises and in the cloud.
  • Achieve business performance goals.
  • Maximize the value of your open source database.
  • Reduce cost and complexity.

Keeping the partnership cost-effective, comprehensive, and simple

The final attribute in the list above – reducing cost and complexity – often resonates most loudly. So Percona strives continually to keep the partnership simple and transparent. Along those lines, Percona Managed Services are now part of Percona Platform Subscription, which delivers unified access to open source Percona software, advanced Percona Advisors, Private DBaaS, knowledge base, self-paced training, and more — at no additional cost.

Additionally, with our Percona Monitoring and Management (PMM) tool, which is included at no cost, you can monitor your database health and behavior, and you can manage and improve the performance of your databases, wherever they are located or deployed. We can do that for you, of course, but as always, it’s up to you.

This time of year can be chaotic, and Percona Managed Services provide an extended team with top database talent so you can focus on strategic initiatives.

Embrace the holiday traffic

Mayhem. Frenzy. Expensive. Sure, they’re potentially scary words during this make-or-break season. But like the names of the Big Three shopping days, they’re just words that paint scenarios, words that represent urgency. Use them as inspiration. Heck, back in the day, “traffic” was a dirty word, especially if it was snowy and muddy and slushy out there.

Now, “traffic” is a beautiful word, as long as you keep it flowing and your database operations tuned up. You can. Seize the day and the season with Percona Managed Services.

Learn more about Percona Managed Services

Nov
25
2022
--

Exploring Data Dump and Load Utility With MySQL Shell

Data Dump and Load Utility With MySQL Shell

In this blog post, I will cover a few of the dump and data-loading utilities offered by MySQL Shell.

What is MySQL Shell?

It is a robust client and code editor for MySQL. In addition to its APIs for working with MySQL, MySQL Shell offers scripting capabilities for JavaScript and Python. It provides the AdminAPI to manage MySQL servers and create InnoDB Cluster and ReplicaSet deployments, and the X DevAPI to work with both SQL and NoSQL interfaces via the X Protocol. Finally, it provides utilities that make working with MySQL in Python and JavaScript mode easier.
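
For example, an interactive session can switch between these languages with the built-in mode commands: \sql switches to SQL mode, \py to Python mode, and \js back to JavaScript mode. The connection URI below is just a placeholder; note that in Python mode the utilities use snake_case names such as util.dump_instance().

shell> mysqlsh --uri root@localhost
mysql shell > \sql
mysql shell > \py
mysql shell > \js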

Now, let’s jump into some practical stuff and see what these tools offer us.

Dump Utility

In the dump utility, there are three tools: util.dumpInstance(), util.dumpSchemas(), and util.dumpTables(), so let’s discuss them one by one.

1.  “util.dumpInstance()” – This was introduced in MySQL Shell 8.0.21 and supports the export of all databases, excluding the information_schema, mysql, ndbinfo, performance_schema, and sys schemas. The backup can be kept locally, or it can be taken from a remote host running the MySQL Shell utility.

Performing a complete instance dump with “util.dumpInstance()”:

a) Log in to the database via MySQL Shell.

shell> mysqlsh --uri root@localhost

b) Prior to the actual backup, we will first perform a dry run to check for any potential problems or compatibility issues.

mysql shell > util.dumpInstance("/home/backup/", {dryRun: true})

Output:

dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done 
1 out of 5 schemas will be dumped and within them 16 tables, 0 views.
3 out of 5 users will be dumped.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Writing users DDL
Writing DDL - done         
Starting data dump
0% (0 rows / ~1.60M rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed

Here we didn’t get any issues or warnings, so we are good to proceed with the actual backup process.

c) Running the full instance backup and saving it in the target location.

mysql shell >util.dumpInstance("/home/backup/")

Output:

dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done 
1 out of 5 schemas will be dumped and within them 16 tables, 0 views.
3 out of 5 users will be dumped.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Writing users DDL
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
101% (1.63M rows / ~1.60M rows), 114.56K rows/s, 22.13 MB/s uncompressed, 10.09 MB/s compressed                 
Dump duration: 00:00:15s                                                                       
Total duration: 00:00:15s                                                                      
Schemas dumped: 1                                                                              
Tables dumped: 16                                                                              
Uncompressed data size: 313.55 MB                                                              
Compressed data size: 142.87 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 1629368                                                                          
Bytes written: 142.87 MB                                                                       
Average uncompressed throughput: 20.01 MB/s                                                    
Average compressed throughput: 9.12 MB/s

We have successfully finished the backup here. By default, the utility uses four threads and applies the zstd compression algorithm to the dump files.

d) Running the instance dump with more advanced options.

mysql shell > util.dumpInstance("/home/backup/", {threads: 8, maxRate: "100M", consistent: true, chunking: true, bytesPerChunk: "64M", compression: "zstd"})

Output:

Acquiring global read lock
Global read lock acquired
Initializing - done 
1 out of 5 schemas will be dumped and within them 16 tables, 0 views.
3 out of 5 users will be dumped.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Writing users DDL
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
101% (1.63M rows / ~1.60M rows), 468.47K rows/s, 84.22 MB/s uncompressed, 38.41 MB/s compressed                  
Dump duration: 00:00:03s                                                                       
Total duration: 00:00:03s                                                                      
Schemas dumped: 1                                                                              
Tables dumped: 16                                                                              
Uncompressed data size: 313.55 MB                                                              
Compressed data size: 142.87 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 1629368                                                                          
Bytes written: 142.87 MB                                                                       
Average uncompressed throughput: 85.24 MB/s                                                    
Average compressed throughput: 38.84 MB/s

There are various options available to control and enhance the dump process.

“dryRun”: lists all of the compatibility issues before starting the dump, without writing any data.
“consistent”: acquires a global read lock so that the backup is consistent. It is enabled by default.
“chunking”: splits each table’s data into a number of files. It is enabled by default.
“bytesPerChunk”: specifies the approximate number of bytes that should be written to each data chunk.
“threads”: the number of parallel threads to use when dumping data from the MySQL instance. The default is 4.
“maxRate”: the maximum data read throughput for each thread during the dump, in bytes per second.
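
For instance, these options can be combined; the sketch below (the values are only illustrative) would perform a dry run of a throttled, re-chunked dump without writing any data:

mysql shell > util.dumpInstance("/home/backup/", {dryRun: true, threads: 8, maxRate: "50M", bytesPerChunk: "128M"})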

2) “util.dumpSchemas()” – This was introduced in MySQL Shell 8.0.21 and supports the export of selected databases, either locally or remotely.

Performing a database dump with “util.dumpSchemas()”. Here we will back up only a specific database (“sbtest”).

mysql shell > util.dumpSchemas(["sbtest"],"/home/backup/sbtest/",{threads :2})

Output:

dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done 
1 schemas will be dumped and within them 16 tables, 0 views.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Running data dump using 2 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
104% (1.63M rows / ~1.56M rows), 424.56K rows/s, 81.96 MB/s uncompressed, 37.35 MB/s compressed                  
Dump duration: 00:00:03s                                                                       
Total duration: 00:00:03s                                                                      
Schemas dumped: 1                                                                              
Tables dumped: 16                                                                              
Uncompressed data size: 313.55 MB                                                              
Compressed data size: 142.87 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 1629368                                                                          
Bytes written: 142.87 MB                                                                       
Average uncompressed throughput: 81.82 MB/s                                                    
Average compressed throughput: 37.28 MB/s

3) “util.dumpTables()” – The table dump utility “util.dumpTables()” was released in MySQL Shell 8.0.22.

Performing a selective table dump. Here we dump only the table “sbtest2” from the “sbtest” database.

mysql shell> util.dumpTables("sbtest", [ "sbtest2"], "/home/backup/sbtest/",{threads: 4})

Output:

Writing schema metadata - done       
Writing DDL - done       
Writing table metadata - done       
Starting data dump
101% (101.89K rows / ~100.58K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed                
Dump duration: 00:00:00s                                                                    
Total duration: 00:00:00s                                                                   
Schemas dumped: 1                                                                           
Tables dumped: 1                                                                            
Uncompressed data size: 19.61 MB                                                            
Compressed data size: 8.93 MB                                                               
Compression ratio: 2.2                                                                      
Rows written: 101892                                                                        
Bytes written: 8.93 MB                                                                      
Average uncompressed throughput: 19.61 MB/s                                                 
Average compressed throughput: 8.93 MB/s
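
The same utility also accepts a list of several tables from one schema in a single call; a quick sketch (the table names and output path here are just placeholders) could look like this:

mysql shell> util.dumpTables("sbtest", ["sbtest1", "sbtest2", "sbtest3"], "/home/backup/sbtest_tables/", {threads: 4})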

Data load utility

So far, we’ve seen how to take a dump using various methods. We will now demonstrate how to restore them using a single restore utility that works for all of these dump types.

The utility “util.loadDump()” for loading dumps was introduced in MySQL Shell 8.0.21. It enables the parallel loading of tables, or chunks of tables, during the data-loading process.
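
Because the utility loads data with LOAD DATA LOCAL INFILE statements, the target server must have local_infile enabled before the restore. One way to turn it on from within MySQL Shell is:

mysql shell > \sql SET GLOBAL local_infile = ON;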

Load the dump files using util.loadDump().

a) Running a dry run process to validate any compatibility issues or errors prior to the actual restoration process.

mysql shell > util.loadDump("/home/backup/", {dryRun: true})

Output:

Loading DDL and Data from '/home/backup/' using 4 threads.
Opening dump...
dryRun enabled, no changes will be made.
Target is MySQL 5.7.38-41-57-log. Dump was produced from MySQL 5.7.38-41-57-log
Scanning metadata - done       
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done       
Executing view DDL - done       
Starting data load
Executing common postamble SQL                        
0% (0 bytes / 313.55 MB), 0.00 B/s, 16 / 16 tables done
Recreating indexes - done 
No data loaded.                                        
0 warnings were reported during the load.

Note that no data was loaded; the dry run only prints information about any warnings or errors that would occur during the restoration process.

b) Running the data restoration process.

mysql shell > util.loadDump("/home/backup/", {progressFile :"/home/backuplog.json",threads:4,backgroundThreads:4,maxBytesPerTransaction:"4096"})

Output:

Loading DDL and Data from '/home/backup/' using 4 threads.
Opening dump...
Target is MySQL 5.7.38-41-57-log. Dump was produced from MySQL 5.7.38-41-57-log
Scanning metadata - done       
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done       
Executing view DDL - done       
Starting data load
Executing common postamble SQL                                              
100% (313.55 MB / 313.55 MB), 3.03 MB/s, 16 / 16 tables done                
Recreating indexes - done       
16 chunks (1.63M rows, 313.55 MB) for 16 tables in 1 schemas were loaded in 2 min 43 sec (avg throughput 1.92 MB/s)
0 warnings were reported during the load.

Here, we have successfully completed the restoration process.

There are various options available to control and enhance the data loading process.

“threads”: the number of parallel threads to use when loading data into the target MySQL instance. The default is 4.
“backgroundThreads”: the number of threads in the pool of background threads used to fetch the contents of the dump files.
“maxBytesPerTransaction”: the maximum number of bytes that can be loaded from a data file with a single LOAD DATA statement.
“loadIndexes”: when this option is set to false, secondary indexes are not built during the import; you must build them afterwards. This may come in handy if you are loading the DDL files and data files separately and want to modify the table structure after the DDL files have been loaded.
“deferTableIndexes”: postpones the creation of secondary indexes until after each table’s data is loaded, which may speed up loading. Setting it to “off” means all indexes are built while the table is loaded.
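
As an illustration, the sketch below (not taken from the original run) loads the dump without building secondary indexes at all, so they can be created manually afterwards:

mysql shell > util.loadDump("/home/backup/", {threads: 4, loadIndexes: false})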

Taking backups on cloud storage (GCP and AWS S3)

MySQL Shell also provides the capability to store dumps remotely on S3-compatible cloud storage such as Amazon S3 and Google Cloud Storage. Here, we’ll talk about how to do it with Google Cloud Storage and an Amazon S3 environment.

Let’s start by configuring it for Google Cloud storage.

Prerequisites

  • Storage Bucket should exist in Google Cloud
  • Access key and secret defined in a credentials file (“/home/credentials”):
[gcb]
aws_access_key_id=xxxxxxxx
aws_secret_access_key=xxxxxxxx

 

a) Running a full backup and storing it in the GCP bucket “ajpxctest”.

mysql shell> util.dumpInstance("sbtest",{s3BucketName: "ajpxctest",s3EndpointOverride: "https://storage.googleapis.com", s3CredentialsFile:"/home/credentials", s3Profile: "gcb", threads: 4})

Output:

dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done 
1 out of 5 schemas will be dumped and within them 16 tables, 0 views.
3 out of 5 users will be dumped.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Writing users DDL
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
104% (1.63M rows / ~1.56M rows), 172.12K rows/s, 34.60 MB/s uncompressed, 15.75 MB/s compressed                  
Dump duration: 00:00:12s                                                                       
Total duration: 00:00:14s                                                                      
Schemas dumped: 1                                                                              
Tables dumped: 16                                                                              
Uncompressed data size: 313.55 MB                                                              
Compressed data size: 142.87 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 1629368                                                                          
Bytes written: 142.87 MB                                                                       
Average uncompressed throughput: 25.21 MB/s                                                    
Average compressed throughput: 11.49 MB/s

b) Then, using these same details, we will restore the data from the S3 bucket into the local database.

mysql shell> util.loadDump("sbtest",{s3BucketName: "ajpxctest", s3EndpointOverride: "https://storage.googleapis.com",s3CredentialsFile:"/home/credentials", s3Profile: "gcb", threads: 4})

Output:

Loading DDL and Data from AWS S3 bucket=ajpxctest, prefix='sbtest' using 4 threads.
Opening dump...
Target is MySQL 5.7.38-41-57-log. Dump was produced from MySQL 5.7.38-41-57-log
Fetching dump data from remote location...
Listing files - done 
Scanning metadata - done         
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done         
Executing view DDL - done       
Starting data load
1 thds loading / 100% (313.55 MB / 313.55 MB), 6.23 MB/s, 14 / 16 tables done
Recreating indexes - done       
Executing common postamble SQL                                               
16 chunks (1.63M rows, 313.55 MB) for 16 tables in 1 schemas were loaded in 1 min 8 sec (avg throughput 7.41 MB/s)
0 warnings were reported during the load.      

S3 options:

“s3BucketName”: the name of the S3 bucket where the dump will be kept.
“s3EndpointOverride”: the S3 endpoint URL to use.
“s3CredentialsFile”: the credentials file containing the access key and secret details.
“s3Profile”: the name of the S3 profile that identifies the credential details.
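
These same options can be combined with dryRun to confirm that MySQL Shell can reach the bucket and read the dump metadata before running the actual load; a sketch reusing the Google Cloud Storage details from above:

mysql shell> util.loadDump("sbtest", {s3BucketName: "ajpxctest", s3EndpointOverride: "https://storage.googleapis.com", s3CredentialsFile: "/home/credentials", s3Profile: "gcb", dryRun: true, threads: 4})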

Let’s configure the same for Amazon S3 

Prerequisites

  • AWS S3 bucket should exist
  • Configure AWS CLI
###~/.aws/credentials### 
[default]
aws_access_key_id=xxxxxxx
aws_secret_access_key=xxxxxxx

###~/.aws/config### 
[default]
region = us-east-1

a) Now, let’s perform the instance backup, which stores the dump files in the “ajtestbkp” S3 bucket.

mysql shell> util.dumpInstance("sbtest", {s3BucketName: "ajtestbkp", threads: 4})

Output:

Acquiring global read lock
Global read lock acquired
Initializing - done 
1 out of 5 schemas will be dumped and within them 16 tables, 0 views.
3 out of 5 users will be dumped.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Writing users DDL
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
101% (1.63M rows / ~1.61M rows), 287.53K rows/s, 53.81 MB/s uncompressed, 24.51 MB/s compressed                  
Dump duration: 00:00:06s                                                                       
Total duration: 00:00:07s                                                                      
Schemas dumped: 1                                                                              
Tables dumped: 16                                                                              
Uncompressed data size: 313.55 MB                                                              
Compressed data size: 142.87 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 1629368                                                                          
Bytes written: 142.87 MB                                                                       
Average uncompressed throughput: 49.93 MB/s                                                    
Average compressed throughput: 22.75 MB/s

b) Now, let’s restore the dump from S3 into our local database.

mysql shell> util.loadDump("sbtest",{s3BucketName: "ajtestbkp", threads: 4})

Output:

Loading DDL and Data from AWS S3 bucket=ajtestbkp, prefix='sbtest' using 4 threads.
Opening dump...
Target is MySQL 5.7.38-41-57-log. Dump was produced from MySQL 5.7.38-41-57-log
Fetching dump data from remote location...
Listing files - done 
Scanning metadata - done        
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done         
Executing view DDL - done       
Starting data load
1 thds loading / 100% (313.55 MB / 313.55 MB), 8.80 MB/s, 15 / 16 tables done
Recreating indexes - done       
Executing common postamble SQL                                               
16 chunks (1.63M rows, 313.55 MB) for 16 tables in 1 schemas were loaded in 38 sec (avg throughput 9.14 MB/s)
0 warnings were reported during the load.

Summary

These utilities are a great addition to MySQL Shell and are very useful for backing up and restoring large tables, thanks to the different control options. We can get better handling of the dump/load process by defining options such as threads, maxRate, and maxBytesPerTransaction. The dump utilities support MySQL 5.6, 5.7, and 8.0 as the source; however, for data loading, the target database should be either MySQL 5.7 or 8.0.
