Aug
30
2022
--

Percona Server for MongoDB Data Directory Resize/Increase

Running a customer’s technical infrastructure often necessitates making changes in response to the shifting requirements of the industry. Thankfully, with the introduction of cloud computing, adapting the infrastructure has never been easier. These days, we can quickly add or remove resources as conditions dictate.

Historically, businesses’ storage capacity needs have fluctuated, and responding to these ever-changing needs has required regular attention. In the past, altering storage capacity typically meant expanding it—a tedious and time-consuming process. Recently, however, cloud computing has made increasing capacity much easier.

In this blog, we will look at the process of resizing EBS volumes in AWS. Once you increase the volume size at the cloud provider level, the commands to increase the size at the OS level are the same irrespective of the cloud provider.

Suppose a customer is running an environment on an EC2 instance—perhaps a small application that doesn’t require many resources. Since this application uses little storage, a small 15 GB general purpose AWS EBS volume has been attached. As the business grows, the customer needs to add more AWS resources, making it necessary to grow the EBS volume to support more data. Needless to say, they would prefer to do this without any downtime. I have divided the process into three parts. First, we will extend the underlying volume. Then, we will resize the volume at the OS level depending on the use case: whether the disk is partitioned, not partitioned, or using LVM. Finally, we will expand the file system.

Extending underlying volume

The steps below are performed on AWS EBS; check and adapt them as needed for your cloud service provider.

First, log in to your AWS account, go to the volumes section, and choose the volume you want to resize. After selecting the volume, click “Modify volume” under “Actions”.

You are then given the option to change both the disk size and the volume type. You can increase the size now. 

Once you increase the volume, wait until the change is reflected at the instance level and in the OS. In this blog, we’ll extend the volume to 20 GB and retain the volume type. After the volume has been extended, the partition and file system on the instance need to be adjusted to use the new size. Note that you can do this adjustment either as root or as a user with sudo privileges. Below, I will do it as the root user.

Resizing volume

  • Below are the steps you have to perform when you are using LVM.
    1. The first step in this process is checking the volume, partition, and LVM size.

                    Before extending the underlying volume:

# lsblk
NAME                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                        202:0    0  25G  0 disk
└─xvda1                     202:1    0  25G  0 part /
xvdf                        202:80   0  15G  0 disk
└─xvdf1                     202:81   0  15G  0 part
  └─mongovg_new-mongolv_new 253:0    0  15G  0 lvm  /mongodb

                    After extending the underlying volume:

# lsblk
NAME                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                        202:0    0  25G  0 disk
└─xvda1                     202:1    0  25G  0 part /
xvdf                        202:80   0  20G  0 disk
└─xvdf1                     202:81   0  15G  0 part
  └─mongovg_new-mongolv_new 253:0    0  15G  0 lvm  /mongodb

The above shows that the volume has been extended when the disk is using LVM. However, the partition and the logical volume on it are still at the original size.

                2. To expand the partition, use the below command. After you do so, you will see that the partition has grown to match the volume size.

# growpart /dev/xvdf 1
CHANGED: partition=1 start=2048 old: size=31455199 end=31457247 new: size=41940959 end=41943007

# lsblk
NAME                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                        202:0    0  25G  0 disk
└─xvda1                     202:1    0  25G  0 part /
xvdf                        202:80   0  20G  0 disk
└─xvdf1                     202:81   0  20G  0 part
  └─mongovg_new-mongolv_new 253:0    0  15G  0 lvm  /mongodb

Note that there is a space between “/dev/xvdf” and “1”; the “1” refers to the partition number.

Also, if you do not have growpart installed, you can install it using the below command:

# yum install -y cloud-utils-growpart

Above you can see that the partition has been expanded. 

                3. Now we are going to expand the LVM. To do that, we first need to resize the physical volume.

                    Resize physical volume:

# pvresize /dev/xvdf1
 Physical volume "/dev/xvdf1" changed
 1 physical volume(s) resized or updated / 0 physical volume(s) not resized

                    Extending the size of the LV:
                    Below, we extend the LV to use 100% of the free space:

# lvextend -l +100%FREE /dev/mongovg_new/mongolv_new
  Size of logical volume mongovg_new/mongolv_new changed from <15.00 GiB (3839 extents) to <20.00 GiB (5119 extents).
Logical volume mongovg_new/mongolv_new successfully resized

# lsblk
NAME                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                        202:0    0  25G  0 disk
└─xvda1                     202:1    0  25G  0 part /
xvdf                        202:80   0  20G  0 disk
└─xvdf1                     202:81   0  20G  0 part
  └─mongovg_new-mongolv_new 253:0    0  20G  0 lvm  /mongodb

If you want to extend the LV by a particular size instead, you can use the below command (for example, I have increased it by 3G; note the uppercase -L, which takes a size rather than a number of extents):

# lvextend -L +3G /dev/mongovg_new/mongolv_new

# lsblk
NAME                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                        202:0    0  25G  0 disk
└─xvda1                     202:1    0  25G  0 part /
xvdf                        202:80   0  20G  0 disk
└─xvdf1                     202:81   0  20G  0 part
  └─mongovg_new-mongolv_new 253:0    0  18G  0 lvm  /mongodb

  • Below are the steps when you don’t have a partition:
    1. First, you need to check the volume size.

                    Before extending the volume:

# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  25G  0 disk
└─xvda1 202:1    0  25G  0 part /
xvdf    202:80   0  15G  0 disk /mongodb

                    After extending the volume:

# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  25G  0 disk
└─xvda1 202:1    0  25G  0 part /
xvdf    202:80   0  20G  0 disk /mongodb

The above shows that the volume has been extended when the disk does not have any partition. 

  • Below are the steps when you have a partition.
    1. The first step in this process is checking the volume and partition size.

                    Before extending the underlying volume:

# lsblk
NAME                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                        202:0    0  25G  0 disk
└─xvda1                     202:1    0  25G  0 part /
xvdf                        202:80   0  15G  0 disk
└─xvdf1                     202:81   0  15G  0 part /mongodb

                    After extending the underlying volume:

# lsblk
NAME                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                        202:0    0  25G  0 disk
└─xvda1                     202:1    0  25G  0 part /
xvdf                        202:80   0  20G  0 disk
└─xvdf1                     202:81   0  15G  0 part /mongodb

The above shows that the volume has been extended when the disk has a partition. However, the partition itself is still at the original size.

                    2. To expand the partition, use the below command. After you do so, you will see that the partition has grown to match the volume size.

# growpart /dev/xvdf 1
CHANGED: partition=1 start=2048 old: size=31455199 end=31457247 new: size=41940959 end=41943007

# lsblk
NAME                        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                        202:0    0  25G  0 disk
└─xvda1                     202:1    0  25G  0 part /
xvdf                        202:80   0  20G  0 disk
└─xvdf1                     202:81   0  20G  0 part /mongodb

Note that there is a space between “/dev/xvdf” and “1”; the “1” refers to the partition number.

Also, if you do not have growpart installed, you can install it using the below command:

# yum install -y cloud-utils-growpart

Above you can see that the partition has been expanded.

Expanding file system

Now the file system size needs to be checked. Below you will notice that it is still only 15GB, even though the volume/partition/LVM has been resized above.

# df -h
Filesystem                           Size  Used Avail Use% Mounted on
devtmpfs                             484M     0  484M   0% /dev
tmpfs                                492M     0  492M   0% /dev/shm
tmpfs                                492M  428K  491M   1% /run
tmpfs                                492M     0  492M   0% /sys/fs/cgroup
/dev/xvda1                            20G  4.1G   16G  21% /
tmpfs                                 99M     0   99M   0% /run/user/1000
/dev/mapper/mongovg_new-mongolv_new   15G   48M   15G   1% /mongodb

In our case, since the file system is XFS, we have to rely on the “xfs_growfs” tool, which should already be in the system. If not, you can install it yourself as part of the “xfsprogs” package.

You can run the below command to expand the XFS file system:

# sudo xfs_growfs /mongodb
meta-data=/dev/xvdf              isize=512    agcount=4, agsize=983040 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0
data     =                       bsize=4096   blocks=3932160, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 3932160 to 5242880

# df -h
Filesystem                           Size  Used Avail Use% Mounted on
devtmpfs                             484M     0  484M   0% /dev
tmpfs                                492M     0  492M   0% /dev/shm
tmpfs                                492M  428K  491M   1% /run
tmpfs                                492M     0  492M   0% /sys/fs/cgroup
/dev/xvda1                            20G  4.1G   16G  21% /
tmpfs                                 99M     0   99M   0% /run/user/1000
/dev/mapper/mongovg_new-mongolv_new   20G   48M   20G   1% /mongodb

If you are using an ext4 file system, you could extend it with resize2fs instead, pointing it at the device or logical volume that backs the mount point, for example:

resize2fs /dev/xvdf1

The volume is now fully resized and ready to be used, and no downtime resulted from this process. You can extend any other partition with the steps shown above; you just have to know whether you’re using LVM and which partition you’re extending.

Aug
30
2022
--

Efficient Data Archiving in MySQL

Recently, I have been working with a few customers with multiple terabytes of transactional data on their MySQL clusters. These very large datasets are not really needed for their daily operations, but they are very convenient because they allow them to query historical data easily. The convenience, however, comes at a high price: you pay a lot more for storage, and backups take much longer to create and restore and are, of course, much larger. So, the question is: how can they perform “efficient data archiving”?

Let’s try to define what an efficient data archiving architecture would look like. We can lay out some key requirements:

  • The archive should be on an asynchronous replica
  • The archive replica should be using a storage configuration optimized for large datasets
  • The regular cluster should just be deleting data normally
  • The archiving system should remove delete statements from the replication stream and keep only the inserts and updates
  • The archiving system should be robust and able to handle failures and resume replication

Key elements

Our initial starting point is something like this:

The cluster is composed of a source (S) and two replicas (R1 and R2) and we are adding a replica for the archive (RA). The existing cluster is pretty much irrelevant in all the discussions that will follow, as long as the row-based replication format is used with full-row images.

The above setup is, in theory, sufficient to archive data, but in order to do so, we must not allow the delete statements on the tables we want to archive to flow through the replication stream. The deletions must be executed with sql_log_bin = 0 on all the normal servers. Although this may look simple, it has a number of drawbacks. A cron job or a SQL event must be called regularly on all the servers, and these jobs must delete the same data on all the production servers. This process will likely introduce some differences between the tables, and verification tools like pt-table-checksum may start to report false positives. As we’ll see, there are other options.
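
For illustration, here is a minimal sketch of such a purge job, assuming a hypothetical table t with a created_at column and a 90-day retention (the schema, schedule, and batch size are made up for the example):

-- Run on the source AND on every regular replica, e.g., from a cron job or SQL event
SET SESSION sql_log_bin = 0;   -- keep these deletes out of the binary log / replication stream
DELETE FROM mydb.t
WHERE created_at < NOW() - INTERVAL 90 DAY
LIMIT 10000;                   -- delete in small batches to keep transactions short
SET SESSION sql_log_bin = 1;

Every production server has to run the same job, which is exactly where the drift and the pt-table-checksum false positives mentioned above come from.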

Capturing the changes (CDC)

An important component we need is a way to capture the changes going to the table we want to archive. The MySQL binary log, when used with the row-based format and full row image, is perfect for the purpose.  We need a tool that can connect to a database server like a replica, convert the binary log event into a usable form, and keep track of its position in the binary log.

For this project, we’ll use Maxwell, a tool developed by Zendesk. Maxwell connects to a source server like a regular replica and outputs the row-based events in JSON format. It keeps track of its replication position in a table on the source server.

Removing deletions

Since the CDC component will output the events in JSON format, we just need to filter for the tables we are interested in and then ignore the delete events. You can use any programming language that has decent JSON and MySQL support. In this post, I’ll be using Python.

Storage engine for the archives

InnoDB is great for transactional workloads but far less optimal for archiving data. MyRocks is a much better option, as it is write-optimized and much more efficient at data compression.

Architectures for efficient data archiving

Shifted table

We have a few architectural options for our archiving replica. The first architecture, shown below, hooks the CDC to the archiving replica. This means if we are archiving table t, we’ll need to have on the archiving replica both the production t, from which data is deleted, and the archived copy tA, which keeps its data long term.

The main advantage of this architecture is that all the components related to the archiving process only interact with the archiving replica. The negative side is, of course, the presence of duplicate data on the archiving replica as it has to host both t and tA. One could argue that the table t could be using the blackhole storage engine but let’s not dive down such a rabbit hole.

Ignored table

Another architectural option is to use two different replication streams from the source. The first stream is the regular replication link, but the replica has the replication option replicate-ignore-table=t. The replication events for table t are handled by a second replication link controlled by Maxwell. The deletion events are removed, and the inserts and updates are applied to the archiving replica.

While this latter architecture stores only a single copy of t on the archiving replica, it needs two full replication streams from the source.

Example

The application

My present goal is to provide an example that is as simple as possible while still working. I’ll be using the shifted table approach with the sysbench TPC-C script. This script has an option, enable_purge, that removes old orders that have been processed. Our goal is to create the table tpccArchive.orders1, which contains all the rows, even the deleted ones, while the table tpcc.orders1 is the regular orders table. They have the same structure, but the archive table uses MyRocks.

Let’s first prepare the archive table:

mysql> create database tpccArchive;
Query OK, 1 row affected (0,01 sec)

mysql> use tpccArchive;
Database changed

mysql> create table orders1 like tpcc.orders1;
Query OK, 0 rows affected (0,05 sec)

mysql> alter table orders1 engine=rocksdb;
Query OK, 0 rows affected (0,07 sec)
Records: 0  Duplicates: 0  Warnings: 0

Capturing the changes

Now, we can install Maxwell. Maxwell is a Java-based application so a compatible JRE is needed. It will also connect to MySQL as a replica so it needs an account with the required grants.  It also needs its own maxwell schema in order to persist replication status and position.

root@LabPS8_1:~# apt-get install openjdk-17-jre-headless 
root@LabPS8_1:~# mysql -e "create user maxwell@'localhost' identified by 'maxwell';"
root@LabPS8_1:~# mysql -e 'create database maxwell;'
root@LabPS8_1:~# mysql -e 'grant ALL PRIVILEGES ON maxwell.* TO maxwell@localhost;'
root@LabPS8_1:~# mysql -e 'grant SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO maxwell@localhost;'
root@LabPS8_1:~# curl -sLo - https://github.com/zendesk/maxwell/releases/download/v1.37.6/maxwell-1.37.6.tar.gz| tar zxvf -
root@LabPS8_1:~# cd maxwell-1.37.6/
root@LabPS8_1:~/maxwell-1.37.6# ./bin/maxwell -help
Help for Maxwell:

Option                   Description                                                                       
------                   -----------                                                                       
--config <String>        location of config.properties file                                                
--env_config <String>    json object encoded config in an environment variable                             
--producer <String>      producer type: stdout|file|kafka|kinesis|nats|pubsub|sns|sqs|rabbitmq|redis|custom
--client_id <String>     unique identifier for this maxwell instance, use when running multiple maxwells   
--host <String>          main mysql host (contains `maxwell` database)                                     
--port <Integer>         port for host                                                                     
--user <String>          username for host                                                                 
--password <String>      password for host                                                                 
--help [ all, mysql, operation, custom_producer, file_producer, kafka, kinesis, sqs, sns, nats, pubsub, output, filtering, rabbitmq, redis, metrics, http ]


In our example, we’ll use the stdout producer to keep things as simple as possible. 

Filtering script

In order to add and update rows in the tpccArchive.orders1 table, we need a piece of logic that will identify events for the table tpcc.orders1 and ignore the delete statements. Again, for simplicity, I chose to use a Python script. I won’t present the whole script here; feel free to download it from my GitHub repository. It is essentially a loop over the lines written to stdin. Each line is loaded as a JSON string, and then some decisions are made based on the values found. Here’s a small section of code at its core:

...
for line in sys.stdin:
    j = json.loads(line)
    if j['database'] == dbName and j['table'] == tableName:
        debug_print(line)
        if j['type'] == 'insert':
            # Let's build an insert ignore statement
            sql += 'insert ignore into ' + destDbName + '.' + tableName
...

The above section creates an “insert ignore” statement when the event type is ‘insert’. The script connects to the database using the user archiver and the password tpcc and then applies the event to the table tpccArchive.orders1.

root@LabPS8_1:~# mysql -e "create user archiver@'localhost' identified by 'tpcc';"
root@LabPS8_1:~# mysql -e 'grant ALL PRIVILEGES ON tpccArchive.* TO archiver@localhost;'
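
To make the transformation concrete, a Maxwell insert event for tpcc.orders1 ends up being applied to the archive as a statement along these lines (the column names follow the standard TPC-C orders schema; the values are made up for illustration):

-- Illustrative only: the kind of statement the script builds from a Maxwell 'insert' event
insert ignore into tpccArchive.orders1
  (o_id, o_d_id, o_w_id, o_c_id, o_entry_d, o_carrier_id, o_ol_cnt, o_all_local)
values
  (3001, 1, 1, 1287, '2022-08-30 12:34:56', 5, 10, 1);

The insert ignore means that an event replayed after a restart is simply skipped instead of failing on a duplicate key.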

All together

Just to make it easy to reproduce the steps, here’s the application (tpcc) side:

yves@ThinkPad-P51:~/src/sysbench-tpcc$ ./tpcc.lua --mysql-host=10.0.4.158 --mysql-user=tpcc --mysql-password=tpcc --mysql-db=tpcc \
        --threads=1 --tables=1 --scale=1 --db-driver=mysql --enable_purge=yes --time=7200 --report-interval=10 prepare
yves@ThinkPad-P51:~/src/sysbench-tpcc$ ./tpcc.lua --mysql-host=10.0.4.158 --mysql-user=tpcc --mysql-password=tpcc --mysql-db=tpcc \
        --threads=1 --tables=1 --scale=1 --db-driver=mysql --enable_purge=yes --time=7200 --report-interval=10 run

The database is running on a VM whose IP is 10.0.4.158. The enable_purge option causes old rows in orders1 to be deleted. For the archiving side, running on the database VM:

root@LabPS8_1:~/maxwell-1.37.6# bin/maxwell --user='maxwell' --password='maxwell' --host='127.0.0.1' \
        --producer=stdout 2> /tmp/maxerr | python3 ArchiveTpccOrders1.py

After the two-hour tpcc run, we have:

mysql> select  TABLE_SCHEMA, TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH, ENGINE from information_schema.tables where table_name='orders1';
+--------------+------------+-------------+--------------+---------+
| TABLE_SCHEMA | TABLE_ROWS | DATA_LENGTH | INDEX_LENGTH | ENGINE  |
+--------------+------------+-------------+--------------+---------+
| tpcc         |      48724 |     4210688 |      2310144 | InnoDB  |
| tpccArchive  |    1858878 |    38107132 |     14870912 | ROCKSDB |
+--------------+------------+-------------+--------------+---------+
2 rows in set (0,00 sec)

A more realistic architecture

The above example is, well, an example. Any production system will need to be hardened much more than my example. Here are a few requirements:

  • Maxwell must be able to restart and continue from the correct replication position
  • The Python script must be able to restart and continue from the correct replication position
  • The Python script must be able to reconnect to MySQL and retry a transaction if the connection is dropped.

Maxwell already takes care of the first point: it uses the database to store its current position.

The next logical step would be to add a more robust queuing system than a simple process pipe between Maxwell and the Python script. Maxwell supports many queuing systems like Kafka, Kinesis, RabbitMQ, Redis, and many others. For our application, I tend to like a solution using Kafka and a single partition. Kafka doesn’t manage the offset of the messages; it is up to the application. This means the Python script could update a row of a table as part of every transaction it applies, to keep track of its position in the Kafka stream. If the archive tables are using RocksDB, the queue position tracking table should also use RocksDB so the database transaction does not span storage engines.
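
A rough sketch of that idea, with a hypothetical offset-tracking table (the table name and offset value are made up), could look like this:

-- Offset-tracking table, on the same storage engine as the archive tables
CREATE TABLE tpccArchive.archiver_position (
  id           TINYINT UNSIGNED PRIMARY KEY,
  kafka_offset BIGINT NOT NULL
) ENGINE=ROCKSDB;

-- For every Kafka message applied, commit the row change and the new offset atomically
START TRANSACTION;
INSERT IGNORE INTO tpccArchive.orders1
  (o_id, o_d_id, o_w_id, o_c_id, o_entry_d, o_carrier_id, o_ol_cnt, o_all_local)
VALUES
  (3002, 1, 1, 901, NOW(), NULL, 12, 1);
UPDATE tpccArchive.archiver_position SET kafka_offset = 123456 WHERE id = 1;
COMMIT;

On restart, the script would read the last committed kafka_offset and resume consuming from there.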

Conclusion

In this post, I provided a solution to archive data using the MySQL replication binary logs. Archiving fast-growing tables is a frequent need, and hopefully, such a solution can help. It would be great to have a MySQL plugin on the replica able to filter the replication events directly. This would remove the need for an external solution like Maxwell and my Python script. Generally speaking, however, this archiving solution is just a specific case of a summary table. In a future post, I hope to present a more complete solution that will also maintain a summary.

Aug
29
2022
--

Talking Drupal #362 – Progressive Web Apps

Today we are talking about Progressive Web Apps with Alex Borsody and Wilfred Arambhan.

www.talkingDrupal.com/362

Topics

  • What are Progressive Web Apps (PWAs)
  • How is a PWA different from a native app or cross platform app
  • What features point towards a PWA
  • What features are difficult to implement
  • Where do they store their data
  • What are some use cases
  • What does the PWA module do
  • Why would you use the PWA module
  • Potential enhancements to the PWA module
  • Google’s Plans
  • Security

Resources

Guests

Alex Borsody – @alexborsody Wilfred Arambhan – @wilfredarambhan

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan John Picozzi – www.epam.com @johnpicozzi Cathy Theys – @YesCT

MOTW

Responsive Tables Filter – Makes the following tables responsive:

  • Any tables within fields where Drupal text format filters can be applied
  • Views tables
Aug
29
2022
--

PostgreSQL for MySQL DBAs: Watch Command

Those coming to PostgreSQL from other databases will find little gems sprinkled liberally through the software. One of those gems is the watch command, which can be used to run a query repeatedly.

Pretend you are watching the progress of a CSV file import and want to sample the progress every so often. Before you start the import, type a query to sample the incoming data.

blog=# SELECT MAX(id) FROM x1;
max
------

(1 row)

To get updates three times a minute, you would then do the following while the import runs.

\watch 20
8/24/2022 11:22:34 AM (every 20s)

 max
------
 1000
(1 row)

For MySQL-ers who miss the information from SHOW PROCESSLIST, try this combination instead. First, check the latest queries from the various sessions.

blog=# SELECT datname, query, usename FROM pg_stat_activity ORDER BY query_start;

 datname |                                   query                                    | usename
---------+----------------------------------------------------------------------------+----------
 blog    | insert into x1 (id) values (generate_series(1300,1499));                   | percona
 blog    | SELECT datname, query, usename FROM pg_stat_activity ORDER BY query_start; | percona

Then set a fifteen-second timer!

blog=# \watch 15
8/24/2022 11:39:00 AM (every 15s)

 datname |                                   query                                    | usename
---------+----------------------------------------------------------------------------+----------
 blog    | insert into x1 (id) values (generate_series(1300,1499));                   | percona
 blog    | SELECT datname, query, usename FROM pg_stat_activity ORDER BY query_start; | percona


8/24/2022 11:39:15 AM (every 15s)

 datname |                                   query                                    | usename
---------+----------------------------------------------------------------------------+----------
 blog    | insert into x1 (id) values (generate_series(2100,2499));                   | percona
 blog    | SELECT datname, query, usename FROM pg_stat_activity ORDER BY query_start; | percona

The watch command comes in handy whenever you need to run a query repeatedly.

Aug
29
2022
--

Two Extremely Useful Tools (pt-upgrade and checkForServerUpgrade) for MySQL Upgrade Testing

My last blog, Percona Utilities That Make Major MySQL Version Upgrades Easier, detailed the tools available in the Percona Toolkit that assist us with major MySQL version upgrades. The pt-upgrade tool aids in testing application queries and generates reports on how each query performs on servers running different versions of MySQL.

MySQL Shell Upgrade Checker, part of the mysql-shell-utilities, is a utility that helps with compatibility testing of MySQL 5.7 instances before an upgrade to MySQL 8.0. The util.checkForServerUpgrade() function checks whether the MySQL 5.7 instance is ready for the MySQL 8.0 upgrade and generates a report with warnings, errors, and notices for preparing the current MySQL 5.7 setup for upgrading to MySQL 8.0.

We can run this Upgrade Checker Utility in the current MySQL 5.7 environment to generate the report; I would recommend running it on any of the replica instances that have the same configuration as the production.

Up to MySQL Shell 8.0.20, the user account used to execute the Upgrade Checker must have ALL privileges. As of MySQL Shell 8.0.21, the account requires the RELOAD, PROCESS, and SELECT privileges.

How to generate a report using Upgrade Checker Utility

To generate a report using the Upgrade Checker Utility, we can either log in to the MySQL Shell prompt or execute it directly from the command line.

mysqlsh -- util checkForServerUpgrade 'root@localhost:3306' --target-version=8.0.29 --config-path=/etc/my.cnf > CheckForServerUpgrade_Report.txt
Please provide the password for 'mysqluser@localhost:3306':

$ mysqlsh
MySQL  JS > util.checkForServerUpgrade('root@localhost:3306', { "targetVersion":"8.0.29", "configPath":"/etc/my.cnf"})
Please provide the password for 'mysqluser@localhost:3306':

To quit the mysqlsh command prompt, type \exit.

MySQL  JS > \exit
Bye!

Do pt-upgrade and Upgrade Checker Utility do the same tests?  No!

Don’t confuse the Upgrade Checker Utility with the pt-upgrade tool since they are used for different kinds of major version upgrade testing. The Upgrade Checker Utility performs a variety of tests on the selected MySQL server to ascertain whether the upgrade will be successful; however, the tool does not confirm whether the upgrade is compatible with the application queries or routines.

Does it check both the my.cnf file and the MySQL server variables?

The utility can look for system variables declared in the configuration file (my.cnf) but removed in the target MySQL Server release, as well as system variables not defined in the configuration file but with a different default value in the target MySQL Server release. You must give the path to the configuration file when executing checkForServerUpgrade() for these checks. However, the tool is unable to identify variables that have been removed from the my.cnf file but are still set in the running MySQL server.

Let us remove query_cache_type from /etc/percona-server.conf.d/mysqld.cnf and run the command.

]# mysql -uroot -p -e "SHOW VARIABLES WHERE Variable_Name IN ('query_cache_type','query_cache_size')"
Enter password:
+------------------+---------+
| Variable_name    | Value   |
+------------------+---------+
| query_cache_size | 1048576 |
| query_cache_type | ON      |
+------------------+---------+

]# cat /etc/my.cnf
#
# The Percona Server 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/my.cnf.d/
!includedir /etc/percona-server.conf.d/
]#

Remove query_cache_type variable from mysqld.cnf:

]# sed -i '/query_cache_type/d' /etc/percona-server.conf.d/mysqld.cnf
]#

]# grep -i query /etc/my.cnf /etc/percona-server.conf.d/mysqld.cnf
/etc/percona-server.conf.d/mysqld.cnf:query_cache_size=5058320
]#

As the query_cache_type variable has been deleted from my.cnf, the tool is unable to detect it.

#  mysqlsh -- util checkForServerUpgrade 'root@localhost:3306' --target-version=8.0.29 --config-path=/etc/my.cnf | grep  -B 6  -i "query_cache"
15) Removed system variables
  Error: Following system variables that were detected as being used will be
    removed. Please update your system to not rely on them before the upgrade.
  More information:
    https://dev.mysql.com/doc/refman/8.0/en/added-deprecated-removed.html#optvars-removed

  query_cache_size - is set and will be removed
ERROR: 1 errors were found. Please correct these issues before upgrading to avoid compatibility issues.

In JSON format, the report looks like this:

Note: To make the blog more readable, I shortened the report.

# mysqlsh -- util checkForServerUpgrade 'root@localhost:3306' --target-version=8.0.29 --config-path=/etc/my.cnf --output-format=JSON
{
    "serverAddress": "localhost:3306",
    "serverVersion": "5.7.39-42 - Percona Server (GPL), Release 42, Revision b0a7dc2da2e",
    "targetVersion": "8.0.29",
    "errorCount": 1,
    "warningCount": 27,
    "noticeCount": 1,
    "summary": "1 errors were found. Please correct these issues before upgrading to avoid compatibility issues.",
    "checksPerformed": [
        {
            "id": "oldTemporalCheck",
            "title": "Usage of old temporal type",
            "status": "OK",
            "detectedProblems": []
        },
        {
            "id": "reservedKeywordsCheck",
            "title": "Usage of db objects with names conflicting with new reserved keywords",
            "status": "OK",
            "detectedProblems": []
        },
…
        {
            "id": "sqlModeFlagCheck",
            "title": "Usage of obsolete sql_mode flags",
            "status": "OK",
            "description": "Notice: The following DB objects have obsolete options persisted for sql_mode, which will be cleared during upgrade to 8.0.",
            "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals",
            "detectedProblems": [
                {
                    "level": "Notice",
                    "dbObject": "global system variable sql_mode",
                    "description": "defined using obsolete NO_AUTO_CREATE_USER option"
                }
            ]
        },
        {
            "id": "enumSetElementLenghtCheck",
            "title": "ENUM/SET column definitions containing elements longer than 255 characters",
            "status": "OK",
            "detectedProblems": []
        },
…
        {
            "id": "removedSysVars",
            "title": "Removed system variables",
            "status": "OK",
            "description": "Error: Following system variables that were detected as being used will be removed. Please update your system to not rely on them before the upgrade.",
            "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/added-deprecated-removed.html#optvars-removed",
            "detectedProblems": [
                {
                    "level": "Error",
                    "dbObject": "query_cache_size",
                    "description": "is set and will be removed"
                }
            ]
        },
        {
            "id": "sysVarsNewDefaults",
            "title": "System variables with new default values",
            "status": "OK",
            "description": "Warning: Following system variables that are not defined in your configuration file will have new default values. Please review if you rely on their current values and if so define them before performing upgrade.",
            "documentationLink": "https://mysqlserverteam.com/new-defaults-in-mysql-8-0/",
            "detectedProblems": [
                {
                    "level": "Warning",
                    "dbObject": "back_log",
                    "description": "default value will change"
                },
                {
                    "level": "Warning",
                    "dbObject": "innodb_max_dirty_pages_pct",
                    "description": "default value will change from 75 (%)  90 (%)"
                }
            ]
        },
        {
            "id": "zeroDatesCheck",
            "title": "Zero Date, Datetime, and Timestamp values",
            "status": "OK",
            "detectedProblems": []
        },
…
    ],
    "manualChecks": [
        {
            "id": "defaultAuthenticationPlugin",
            "title": "New default authentication plugin considerations",
            "description": "Warning: The new default authentication plugin 'caching_sha2_password' offers more secure password hashing than previously used 'mysql_native_password' (and consequent improved client connection authentication). However, it also has compatibility implications that may affect existing MySQL installations.  If your MySQL installation must serve pre-8.0 clients and you encounter compatibility issues after upgrading, the simplest way to address those issues is to reconfigure the server to revert to the previous default authentication plugin (mysql_native_password). For example, use these lines in the server option file:\n\n[mysqld]\ndefault_authentication_plugin=mysql_native_password\n\nHowever, the setting should be viewed as temporary, not as a long term or permanent solution, because it causes new accounts created with the setting in effect to forego the improved authentication security.\nIf you are using replication please take time to understand how the authentication plugin changes may impact you.",
            "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-compatibility-issues\nhttps://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-replication"
        }
    ]
}

Please read Daniel Guzmán Burgos’ blog post to find out more about the Upgrade Checker Utility and click the link to learn more about the pt-upgrade testing.

Prior to a major version upgrade, application query testing and configuration checks are essential tasks, and pt-upgrade and the Upgrade Checker Utility are quite helpful.

Aug
26
2022
--

PostgreSQL 15: Stats Collector Gone? What’s New?

Anyone trying the upcoming PostgreSQL 15 might observe that one of the background processes is missing.

postgres    1710       1  0 04:03 ?        00:00:00 /usr/pgsql-15/bin/postmaster -D /var/lib/pgsql/15/data/
postgres    1711    1710  0 04:03 ?        00:00:00 postgres: logger 
postgres    1712    1710  0 04:03 ?        00:00:00 postgres: checkpointer 
postgres    1713    1710  0 04:03 ?        00:00:00 postgres: background writer 
postgres    1715    1710  0 04:03 ?        00:00:00 postgres: walwriter 
postgres    1716    1710  0 04:03 ?        00:00:00 postgres: autovacuum launcher 
postgres    1717    1710  0 04:03 ?        00:00:00 postgres: logical replication launcher

If we compare this with PostgreSQL 14:

postgres    1751       1  0 04:04 ?        00:00:00 /usr/pgsql-14/bin/postmaster -D /var/lib/pgsql/14/data/
postgres    1752    1751  0 04:04 ?        00:00:00 postgres: logger 
postgres    1754    1751  0 04:04 ?        00:00:00 postgres: checkpointer 
postgres    1755    1751  0 04:04 ?        00:00:00 postgres: background writer 
postgres    1756    1751  0 04:04 ?        00:00:00 postgres: walwriter 
postgres    1757    1751  0 04:04 ?        00:00:00 postgres: autovacuum launcher 
postgres    1758    1751  0 04:04 ?        00:00:00 postgres: stats collector 
postgres    1759    1751  0 04:04 ?        00:00:00 postgres: logical replication launcher

Yes, the “stats collector” is missing, and it is gone for good. One of the major bottlenecks and headaches is gone forever.

What does the stats collector do?

Novice users might be wondering what it is and why it is needed in PG 14 and older versions. At least a few users confuse it with table-level statistics collection (ANALYZE), which is used for query planning, but this is different. PostgreSQL tracks the activity of each process to build cumulative stats, like how many times a table or index has been scanned, when the last vacuum or autovacuum ran on a table, or how many times autovacuum has run on a table. All the information collected by the stats collector is available through the different pg_stat_* views.
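
For example, the cumulative per-table counters can be read with an ordinary query against one of those views:

-- Cumulative per-table activity exposed through the statistics views
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_del,
       last_autovacuum, autovacuum_count
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 5;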

What was wrong?

Since each session backend is an individual process in PostgreSQL, collecting stats and transmitting them is not an easy task. Each backend sends information about its activity to a single “stats collector” process, and this communication used to happen over a UDP socket. There were a lot of problems with this approach; it is not a scalable model. Users often reported different types of issues, like 1. stale statistics, 2. the stats collector not running, 3. autovacuum not working/starting, etc.

It used to be really hard to understand what was wrong when the stats collector had a problem on a specific machine.

Another adverse effect of the stats collector is the IO it causes. If you enable DEBUG level 2, you might see messages like these continuously arriving in the PostgreSQL log:

2022-08-22 03:49:57.153 UTC [736] DEBUG:  received inquiry for database 0
2022-08-22 03:49:57.153 UTC [736] DEBUG:  writing stats file "pg_stat_tmp/global.stat"
2022-08-22 03:49:57.153 UTC [736] DEBUG:  writing stats file "pg_stat_tmp/db_0.stat"
2022-08-22 03:49:57.168 UTC [1278] DEBUG:  autovacuum: processing database "postgres"
2022-08-22 03:49:57.168 UTC [736] DEBUG:  received inquiry for database 13881
2022-08-22 03:49:57.168 UTC [736] DEBUG:  writing stats file "pg_stat_tmp/global.stat"
2022-08-22 03:49:57.168 UTC [736] DEBUG:  writing stats file "pg_stat_tmp/db_13881.stat"
2022-08-22 03:49:57.169 UTC [736] DEBUG:  writing stats file "pg_stat_tmp/db_0.stat"

This can cause considerable IO on the mount point where your data directory is located, which is the location pointed to by the parameter stats_temp_directory. On many systems, it will be the pg_stat_tmp directory within the data directory.

On Ubuntu/Debian, it will be in /var/run/postgresql, for example:

postgres=# show stats_temp_directory ;
          stats_temp_directory           
-----------------------------------------
 /var/run/postgresql/14-main.pg_stat_tmp
(1 row)

What is new in PostgreSQL 15?

Instead of using the files and filesystem, statistics now use dynamic shared memory.

You can refer to the commit by Andres Freund for a summary:

Previously the statistics collector received statistics updates via UDP and shared statistics data by writing them out to temporary files regularly. These files can reach tens of megabytes and are written out up to twice a second. This has repeatedly prevented us from adding additional useful statistics.

Now statistics are stored in shared memory. Statistics for variable-numbered objects are stored in a dshash hashtable (backed by dynamic shared memory). Fixed-numbered stats are stored in plain shared memory.

The header for pgstat.c contains an overview of the architecture.

The stats collector is not needed anymore, remove it.

By utilizing the transactional statistics drop infrastructure introduced in a prior commit statistics entries cannot “leak” anymore. Previously leaked statistics were dropped by pgstat_vacuum_stat(), called from [auto-]vacuum. On systems with many small relations pgstat_vacuum_stat() could be quite expensive.

Now that replicas drop statistics entries for dropped objects, it is not necessary anymore to reset stats when starting from a cleanly shut down replica.

Obviously, the parameter stats_temp_directory is gone. So we no longer need the pg_stat_tmp directory, which used to be created within the data directory (or another location) and where all the stats files were generated and read from. However, this directory is retained so as not to break the many extensions, like pg_stat_statements, that depend on it. The directory remains empty until the extension libraries are loaded. For example, if we load the pg_stat_statements library, a file appears in the directory.

$ ls pg_stat_tmp/
pgss_query_texts.stat

Of course, the extensions are not free. They carry their own cost.

In the new architecture, most stats updates are first accumulated locally in each process as “pending” (each backend has a backend-local hashtable). “Pending” means they are accumulated but not yet submitted to the shared stats system. They are later flushed to shared memory just after a commit or on a timeout.

Since stats are updated concurrently while someone tries to read them, read consistency comes into the picture. So PostgreSQL 15 introduces a new parameter, stats_fetch_consistency, which can take three values: none, cache, or snapshot.

“none” is the most efficient, but it won’t give read consistency if monitoring queries expect it; it should still be fine for most uses. “cache” ensures repeat accesses yield the same values, which is essential for queries involving, e.g., self-joins. “snapshot” can be useful when interactively inspecting statistics but has a higher overhead. The default is “cache”.
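
The parameter can be set per session, so a monitoring session that needs a stable view could, for example, do something like this:

-- Choose the read-consistency behavior for this session only (PostgreSQL 15+)
SET stats_fetch_consistency = 'snapshot';
SELECT datname, xact_commit, tup_inserted, tup_deleted
FROM pg_stat_database;
RESET stats_fetch_consistency;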

If it is in shared memory, how does it survive a restart?

They are written out to the filesystem by the checkpointer process just before shutdown and loaded back by the startup process during startup. As usual, stats will be discarded if there is a crash.

Will this affect my monitoring tool/script?

All the pg_stat_* monitoring views will continue to work as they do now, but please make sure to select the appropriate value for stats_fetch_consistency. As mentioned above, the pg_stat_tmp directory is preserved so as not to break extensions developed using this approach. However, it is up to the extension developer to thoroughly test the extension against PostgreSQL 15.

What else?

People like me use PostgreSQL wait events to understand where PostgreSQL and its sessions are spending their time. Data collection and analysis tools like pg_gather, which we use in our day-to-day work, make use of wait event analysis to understand problems. Three new wait events have been introduced for better monitoring:

  • PgStatsDSA – Waiting for stats dynamic shared memory allocator access
  • PgStatsHash – Waiting for stats shared memory hash table access
  • PgStatsData – Waiting for shared memory stats data access
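
If sessions are waiting on the new shared-memory stats machinery, these events show up in pg_stat_activity, for example:

-- Sessions currently waiting on the shared-memory statistics infrastructure
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE wait_event IN ('PgStatsDSA', 'PgStatsHash', 'PgStatsData');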

With all the overhead of the stats collector and its maintenance going away, other subsystems like autovacuum have less work to do.

Additionally, monitoring tools that query the stats information frequently are expected to cause much less load on the system.

Thanks to the community

Thanks to the entire PostgreSQL community, especially the hackers, for this amazing improvement. The whole discussion began four years back, when Kyotaro Horiguchi started discussing the idea and patches. It finally materialized through the great work of Andres Freund, Melanie Plageman, and the team. We can see that it was indeed great teamwork of many contributors like Alvaro Herrera, David G Johnston, Thomas Munro, Tomas Vondra, Arthur Zakirov, Antonin Houska, Justin Pryzby, Tom Lane, Fujii Masao, Greg Stark, Robert Haas, Stephen Frost, Bertrand Drouvot, Magnus Hagander, and many others.

It is time to celebrate that PostgreSQL is becoming slim and trim while acquiring many more capabilities.

Aug
25
2022
--

Book Review: PostgreSQL 14 Administration Cookbook

The open source database communities have been blessed over the last several months with many books hitting the marketplace. The PostgreSQL 14 Administration Cookbook by Simon Riggs and Gianni Ciolli has proven to be a valuable resource. Reference books are often overlooked since you can always find the information online and ‘RTFM’. But the ability to peruse a text often provides insights and examples that you are probably not going to find in the manual pages, and good reference books have many examples to guide you through the rough spots of a system in a way that a bulleted list of options on a manual page cannot.

This book is over five hundred and fifty pages long and starts with an introduction and installation guide before covering many subjects and finishing with replication. There are over one hundred and seventy-five ‘recipes’ that will be handy for answering questions like ‘How many tables are in a schema?’ or ‘How do you remove a user without dropping their data?’. Database administration (or Site Reliability Engineering) is made up of many complex functions and operations that require skill and knowledge. Picking up that skill and knowledge has a steep learning curve, and this book does an outstanding job with the areas one needs to cover for a modern PostgreSQL server.

The book is divided into sections – First Steps, Exploring the Database, Server Configuration, Server Control, Tables and Data, Security, Database Administration, Monitoring and Diagnosis, Regular Maintenance, Performance and Concurrency, Backup and Recovery, and Replication and Recovery.  Each section is well-covered and there are numerous code examples.  I would not hesitate to hand this book to a DBA/SRE with experience in another relational database and ask them to administer a PostgreSQL server.

Anyone new to PostgreSQL will find this book useful, and I highly recommend it. The recipes and tips are very handy. If you are a long-time user of PostgreSQL, you may still find many sections interesting, such as creating a time series table with partitioning.

Aug
25
2022
--

Seven Ways To Reduce MySQL Costs in the Cloud

With the economy slowing down and inflation raging in many parts of the world, your organization will love you if you find ways to reduce the costs of running its MySQL databases. This is especially true if you run MySQL in the cloud, as the cloud often allows you to see the immediate effect of those savings, which is what this article will focus on.

With so many companies announcing layoffs or hiring freezes, optimizing your costs may free enough budget to keep a few team members on or hire folks your team needs so much. 

1. Optimize your schema and queries

While optimizing schema and queries is only going to do so much to help you save on MySQL costs in the cloud, it is a great place to start. Suboptimal schemas and queries can require a much larger footprint, and fixing them is something “fully managed” Database as a Service (DBaaS) solutions from major cloud vendors do not really help you with. It is not uncommon for a system with a suboptimal schema and queries to require 10x or more resources to run than an optimized system.

At Percona, we’ve built Percona Monitoring and Management (PMM), which helps you find out which queries need attention and how to optimize them. If you need more help, Percona Professional Services can often get your schema and queries in shape in a matter of days, providing long-term saving opportunities with a very reasonable upfront cost.
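
If you want a quick look without any extra tooling, the performance_schema statement digest summary is a reasonable starting point (a sketch; the ordering and limit are arbitrary):

-- Top statements by total execution time (sum_timer_wait is in picoseconds)
SELECT schema_name, digest_text, count_star,
       ROUND(sum_timer_wait/1000000000000, 2) AS total_latency_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 5;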

2. Tune your MySQL configuration

Optimal MySQL configuration depends on the workload, which is why I recommend tuning your queries and schema first. While the gains from MySQL configuration tuning are often smaller than those from fixing your queries, they are still significant. We have an old but still very relevant article on the basic MySQL settings you’ll want to tune, which you can check out. You can also consider using tools like Releem or Ottertune to help you get a better MySQL configuration for your workload.

More advanced tuning might include exploring alternative storage engines, such as MyRocks (included in Percona Distribution for MySQL). MyRocks can offer fantastic compression and minimize required IO, thereby drastically reducing storage costs. For a more in-depth look at MyRocks performance in the cloud, check out the blog Scaling IO-Bound Workloads for MySQL in the Cloud. 

3. Implement caching

Caching is cheating — and it works great! The most state-of-the-art caching for MySQL is available by rolling out ProxySQL. It can provide some additional performance benefits, such as connection pooling and read-write splitting, but I think caching is most generally useful for MySQL cost reduction. ProxySQL is fully supported by Percona with a Percona Platform subscription and is included in Percona Distribution for MySQL.  

Enabling query cache for your heavy queries can be truly magical — often enough it is the heaviest queries that do not need to provide the most up-to-date information and can have their results cached for a significant time.
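
As a rough illustration, caching a specific heavy query in ProxySQL comes down to a query rule with a cache_ttl; the digest pattern and TTL below are placeholders:

-- On the ProxySQL admin interface: cache result sets of matching SELECTs for 60 seconds
INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
VALUES (100, 1, '^SELECT .* FROM sales_summary', 60000, 1);   -- cache_ttl is in milliseconds
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;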

You can read more about how to configure ProxySQL caching on the Percona Blog, as well as these articles:

4. Rightsize your resources

Once you have optimized your schema and queries and tuned MySQL configuration, you can check what resources your MySQL instances are using and where you can put them on a diet without negatively impacting performance. CPU, memory, disk, and network are four primary resources that impact MySQL performance and often can be managed semi-independently in the cloud.  For example, if your workload needs a lot of CPU power but does not need a lot of memory for caching, you can consider CPU-intensive instances. PMM has some great tools to understand which resources are most demanded by your workload.  

You can also read some additional resource-specific tips in my other article on MySQL performance.

In our experience, due to the simplicity of “scaling with credit cards”, many databases in the cloud become grossly over-provisioned over time, and there can be a lot of opportunity for savings with instance size reduction!

5. Ditch DBaaS for Kubernetes

The price differential between DBaaS and comparable resources continues to grow.  With the latest Graviton instances, you will pay double for Amazon RDS compared to the cost of the underlying instance. Amazon Aurora, while offering some wonderful features, is even more expensive. If you are deploying just a couple of small nodes, this additional cost beats having to hire people to deploy and manage “do it yourself” solutions. If you’re spending tens of thousands of dollars a month on your DBaaS solution, the situation may be different.   

A few years ago, building your own database service from building blocks like EC2 and EBS, or using a DBaaS solution such as Amazon RDS for MySQL, were the only options. Now, another opportunity has emerged — using Kubernetes.

You can get Amazon EKS – Managed Kubernetes Service, Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS) for a relatively small premium over the infrastructure costs. Then, you can use Percona’s MySQL Operator to deploy and manage your database at a fraction of the complexity of traditional deployments. If you’re already using Kubernetes for your apps or use an infrastructure as code (IaC) deployment approach, it may even be handier than DBaaS solutions, and it is 100% open source.

Need help? Percona has your back with Percona Platform.

6. Consider lower-cost alternatives

A few years ago, only major cloud providers (in the U.S.: AWS, GCP,  Azure) had DBaaS solutions for MySQL.  Now the situation has changed, with MySQL DBaaS being available from second-tier and typically lower-cost providers, too. You can get MySQL DBaaS from Linode, Digital Ocean, and Vultr at a significantly lower cost (though, with fewer features). You can also get MySQL from independent providers like Aiven.    

If you’re considering deploying databases on a cloud vendor different than the one your application uses, make sure you pick a nearby location for deployment and check the network latency between your database and your application — poor network latency or reliability issues can negate all the savings you’ve achieved.

7. Let experts manage your MySQL database

If you’re spending more than $20,000 a month on your cloud database footprint, or if your application is growing or changing rapidly, you’ll often have better database performance, security, and reliability at lower cost by going truly “fully managed” instead of “cloud fully managed.”  The latter relies on “shared responsibility” in many key areas of MySQL best practices.  

“Cloud fully managed” will ensure the database infrastructure is operating properly (if scaled with a credit card), but will not fully ensure you have optimal MySQL configuration and optimized queries, are following best security practices, or picked the most performant and cost-effective instance for your MySQL deployment. 

Percona Managed Services is a solution from Percona to consider, though there are many other experts on the market that can take better care of your MySQL needs than DBaaS at major clouds.  

Summary

If the costs of your MySQL infrastructure in the cloud are starting to bite, do not despair. There are likely plenty of savings opportunities available, and I hope some of the above tips will apply to your environment.  

It’s also worth noting that my above advice is not theory, but is based on the actual work we’ve done at Percona. We’ve helped many leading companies realize significant cost savings when running MySQL in the cloud, including Patreon, which saved more than 50% on their cloud database infrastructure costs with Percona. 

 


Aug
23
2022
--

Installing PMM Server from RHEL Repositories

We currently provide multiple ways to install Percona Monitoring and Management (PMM) Server, with the primary way being to use Docker:

Install Percona Monitoring and Management

or Podman:

Podman – Percona Monitoring and Management

We implemented it this way to simplify deployments, as the PMM server uses multiple components like Nginx, Grafana, PostgreSQL, ClickHouse, VictoriaMetrics, etc. So we want to avoid the pain of configuring each component individually, which is labor intensive and error-prone.

For this reason, we also provide bundles in a virtual box:

Virtual Appliance – Percona Monitoring and Management

or as Amazon AMI:

AWS Marketplace – Percona Monitoring and Management

Recently, for even simpler and effort-free deployments, we partnered with HostedPMM. They will install and manage the PMM server for you with a 1-Click experience.

However, we still see some interest in having the PMM Server installed from repositories, using, for example, rpm packages. For this reason, we have crafted Ansible scripts that you can use on a RedHat 7-compatible system.

The scripts are located here:

Percona-Lab/install-repo-pmm-server (github.com)

Please note these scripts are ONLY for EXPERIMENTATION and quick evaluation of PMM-server, and this way IS NOT SUPPORTED by Percona.

The commands to install PMM Server:

git clone https://github.com/Percona-Lab/install-repo-pmm-server
cd install-repo-pmm-server/ansible
ansible-playbook -v -i 'localhost,' -c local pmm2-docker/main.yml
ansible-playbook -v -i 'localhost,' -c local /usr/share/pmm-update/ansible/playbook/tasks/update.yml

Note that you should start with an empty RedHat 7 system, without any database or web-server packages installed.

Let us know if you have interest in this way of deploying PMM server!

Aug
22
2022
--

Talking Drupal #361 – Drupal Credit System

Today we are talking about The Drupal Credit System with Matthew Tift.

www.talkingDrupal.com/361

Topics

  • What is the Drupal Credit system
  • How is credit given
  • How is credit tracked on the backend
  • What is the trickiest part of integration
  • Contributions are weighted, how is that handled
  • Why are contributions weighted
  • Are non code contributions included in the Drupal Credit system
  • How do you run analytics on the data
  • What is changing with the credit system
  • Other communities are thinking of integrating a credit system – what lessons can be shared?

Resources

Guests

Matthew Tift – matthewtift.com @matthewtift

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan John Picozzi – www.epam.com @johnpicozzi Cathy Theys – @YesCT

MOTW

Entity Redirect – Adds a configurable redirect after saving a node or other entity. The redirect is configurable per bundle. Also, given sufficient permissions (and presuming it is enabled for that specific content/bundle), individual users can configure their own redirects (on their profile edit page).
