Sep
29
2023
--

Using MySQL Offline Mode To Disconnect All Client Connections

MySQL Offline Mode

As a DBA, one of the most frequent tasks is stopping and starting the MySQL service for patching or other maintenance activities. Before stopping MySQL, we may need to check whether there are any active connections; if there are, we may need to kill them all. Generally, we use pt-kill to kill the application connections or build kill statements with a SELECT query.

Example commands:

pt-kill --host=192.168.11.11 --user=percona -p --sentinel /tmp/pt-kill.sentinel2 --pid /tmp/pt-kill.pid  --victims all --match-command 'Query' --ignore-user 'pmm|rdsadmin|system_user|percona' --busy-time 10 --verbose --print --kill 

select concat('kill ',id,';') from information_schema.processlist where user='app_user';

MySQL has a variable called offline_mode that puts the server into maintenance mode. When you set it, the server immediately disconnects all client connections whose users lack the SYSTEM_VARIABLES_ADMIN and CONNECTION_ADMIN privileges, and it refuses new connections except from users who have them. If you are killing connections manually or with pt-kill, you cannot prevent new connections from being created, but this mode does. It is a global, dynamic variable, so we can enable it while the server is running.

To enable the offline_mode, the user account must have the SYSTEM_VARIABLES_ADMIN privilege and the CONNECTION_ADMIN privilege (or the deprecated SUPER privilege, which covers both of these privileges). CONNECTION_ADMIN is required from MySQL 8.0.31 and recommended in all releases to prevent accidental lockout. Let’s test this.
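For example, the required privileges could be granted to a dedicated maintenance account along these lines (the account name is hypothetical and assumed to exist already):

mysql> GRANT SYSTEM_VARIABLES_ADMIN, CONNECTION_ADMIN ON *.* TO 'maint_admin'@'localhost';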

To test this, create a new user, “app_user,” with only the privileges for DDL/DML.

mysql> create user app_user identified by 'App@!234TEst';
Query OK, 0 rows affected (0.20 sec)

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP , REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE,CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE on *.* to app_user;
Query OK, 0 rows affected (0.00 sec)

Start the sysbench tool using the app_user user.

[root@centos12 vagrant]# sysbench /usr/share/sysbench/oltp_read_write.lua --threads=10 --time=100  --mysql-db=sysbench --mysql-user=app_user --mysql-password='App@!234TEst' run

sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:

Number of threads: 10

Initializing random number generator from current time

Initializing worker threads...

 

mysql> show processlist;
+----+-----------------+---------------------+----------+-------------+------+-----------------------------------------------------------------+------------------------------------------------------------------------------------------------------+---------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Time_ms | Rows_sent | Rows_examined |
+----+-----------------+---------------------+----------+-------------+------+-----------------------------------------------------------------+------------------------------------------------------------------------------------------------------+---------+-----------+---------------+
| 5 | event_scheduler | localhost | NULL | Daemon | 2151 | Waiting for next activation | NULL | 2151034 | 0 | 0 |
| 9 | bhuvan | 192.168.33.11:50642 | NULL | Binlog Dump | 2102 | Source has sent all binlog to replica; waiting for more updates | NULL | 2102317 | 0 | 0 |
| 14 | bhuvan | localhost | NULL | Query | 0 | init | show processlist | 0 | 0 | 0 |
| 20 | app_user | localhost | sysbench | Execute | 0 | waiting for handler commit | COMMIT | 11 | 0 | 0 |
| 21 | app_user | localhost | sysbench | Execute | 0 | updating | DELETE FROM sbtest1 WHERE id=5000 | 6 | 0 | 0 |
| 23 | app_user | localhost | sysbench | Execute | 0 | waiting for handler commit | COMMIT | 8 | 0 | 0 |
| 24 | app_user | localhost | sysbench | Execute | 0 | waiting for handler commit | COMMIT | 18 | 0 | 0 |
| 25 | app_user | localhost | sysbench | Execute | 0 | updating | UPDATE sbtest1 SET c='99153469917-25523144931-18125321038-96151238215-88445737418-14906501975-136014 | 13 | 0 | 0 |
| 27 | app_user | localhost | sysbench | Execute | 0 | waiting for handler commit | COMMIT | 7 | 0 | 0 |
| 28 | app_user | localhost | sysbench | Execute | 0 | statistics | SELECT c FROM sbtest1 WHERE id=5003 | 0 | 0 | 0 |
| 29 | app_user | localhost | sysbench | Execute | 0 | updating | UPDATE sbtest1 SET c='84180675456-88426921120-90373546373-84823361786-77912396694-08592771856-912331 | 13 | 0 | 0 |
+----+-----------------+---------------------+----------+-------------+------+-----------------------------------------------------------------+------------------------------------------------------------------------------------------------------+---------+-----------+---------------+
13 rows in set (0.00 sec)

While sysbench is running, set offline_mode=ON, and all connections from sysbench will get terminated. You will see errors in sysbench.

mysql> select @@offline_mode;
+----------------+
| @@offline_mode |
+----------------+
| 0 |
+----------------+
1 row in set (0.15 sec)
mysql> set global offline_mode=1;
Query OK, 0 rows affected (0.15 sec)
mysql> show processlist;
+----+-----------------+---------------------+------+-------------+------+-----------------------------------------------------------------+------------------+---------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Time_ms | Rows_sent | Rows_examined |
+----+-----------------+---------------------+------+-------------+------+-----------------------------------------------------------------+------------------+---------+-----------+---------------+
| 5 | event_scheduler | localhost | NULL | Daemon | 2178 | Waiting for next activation | NULL | 2178008 | 0 | 0 |
| 9 | bhuvan | 192.168.33.11:50642 | NULL | Binlog Dump | 2129 | Source has sent all binlog to replica; waiting for more updates | NULL | 2129291 | 0 | 0 |
| 14 | bhuvan | localhost | NULL | Query | 0 | init | show processlist | 0 | 0 | 0 |
+----+-----------------+---------------------+------+-------------+------+-----------------------------------------------------------------+------------------+---------+-----------+---------------+
3 rows in set (0.01 sec)

If you try to connect to the database with app_user while offline_mode=1, the server will not allow the connection, and you will get an error message saying that the server is currently in offline mode. offline_mode does not affect replication: as the processlist output above shows, the replication thread is not disconnected when we set offline_mode=1. To disable offline mode, set the value back to 0.
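For example, a connection attempt from a non-admin account while offline mode is enabled typically fails with an error along these lines (the exact wording may vary by version):

$ mysql -u app_user -p
ERROR 3032 (HY000): The server is currently in offline mode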

mysql> set global offline_mode=0;
Query OK, 0 rows affected (0.00 sec)

Conclusion

offline_mode is a good option for putting the server into maintenance mode. Just make sure the application users don’t have admin privileges and only the admins do. We can use offline_mode in the following situations:

  1. Before taking the DB server out for maintenance or related activities, make the change in the configuration file so the server stays in this mode until all the activities are complete.
  2. While taking the backup from the server, we can set this offline_mode to avoid the load on the server and make the backup faster.
  3. If the replica has a huge replication lag due to a large volume of queries, we can set this mode until the replica gets synchronized with the primary.
  4. When you want to terminate all the application connections immediately.

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

 

Download Percona Monitoring and Management Today

Sep
28
2023
--

Don’t Start Using Your MySQL Server Until You’ve Configured Your OS

Don’t Start Using Your MySQL Server Until You’ve Configured Your OS

Whenever you install your favorite MySQL server on a freshly created Ubuntu instance, you usually start by updating the MySQL configuration: sizing the buffer pool, changing the default datadir directory, and disabling the query cache on versions that still have it. That is a nice thing to do, but first things first. Let’s review the best practices we usually follow in Managed Services before using your MySQL server in production or staging environments, or even for home use.

Memory

We usually recommend the following memory-related settings to ensure optimal performance:

  • To prevent out-of-memory (OOM) episodes, the OOM Score has to be set to -800.
  • vm.swappiness = 1
  • Disable Transparent Huge Pages
  • Install and enable jemalloc

Let’s briefly go through each setting to understand why these adjustments are needed. Afterward, we will see how to configure them on your OS.

OOM

The OOM killer checks oom_score_adj to adjust its final calculated score. This file is present at /proc/$pid/oom_score_adj. You can write a sizable negative score to this file to make the OOM killer less likely to pick and terminate your process. The oom_score_adj value can vary from -1000 to 1000. If you assign -1000, the process can use 100% of memory and still avoid being terminated by the OOM killer; on the other hand, if you assign 1000, the Linux kernel will keep killing the process even when it uses minimal memory.
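As a quick sketch, you can check and adjust the value manually for a running mysqld (the systemd override shown later in this post makes the adjustment persistent):

# check the current adjustment for mysqld
cat /proc/$(pidof mysqld)/oom_score_adj
# make mysqld much less likely to be picked by the OOM killer
echo -800 | sudo tee /proc/$(pidof mysqld)/oom_score_adj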

Swappiness

Swappiness is a Linux kernel parameter that determines how aggressively the kernel swaps memory pages to the swap space on the system’s disk. The default value of vm.swappiness is 60; roughly speaking, it controls how willing the kernel is to swap out pages rather than keep them in physical memory. Lower values reduce swapping and keep more memory pages in physical memory. Changing the value directly influences the performance of the Linux system. Common values are:

  • 0: swap is disabled
  • 1: Minimum amount of swapping without disabling it entirely
  • 10: recommended value to improve performance when sufficient memory exists in a system
  • 100: aggressive swapping

Transparent Huge Pages and Jemalloc

When it comes to Transparent Huge Pages (THP), they can take up more memory. The kernel’s memory allocation function allocates the requested page size, and sometimes more, rounded up to fit within the available memory. In other words, even if your application requires a small amount of memory, it will still be allocated at least a full page.

Additionally, pages must be contiguous in memory, which applies to ‘huge pages.’ This means that if the server cannot find a full page available in a row, it will defragment the memory before allocating it. This can negatively impact performance and cause delays.

InnoDB is built on a B*-Tree of indices, meaning that its workload will usually have sparse rather than contiguous memory access; as such, it will likely perform noticeably worse with THP.

If you use jemalloc in conjunction with THP, the server may run out of memory over time because unused memory cannot be freed. Therefore, disabling Transparent Huge Pages for database servers is advisable to avoid this situation.

Using jemalloc instead of glibc memory allocator for MySQL results in less memory fragmentation and more efficient resource management. This is especially true when Transparent Huge Pages are disabled.

Action steps for memory settings

Before we change what needs to be adjusted, we need to know the current situation on our DB instance. By the way, I assume you have installed Percona Toolkit and your favorite MySQL server to make your life easier. If you haven’t, please install it (Percona Toolkit documentation).

echo " -- THP check";cat /sys/kernel/mm/transparent_hugepage/enabled;  cat /sys/kernel/mm/transparent_hugepage/defrag;echo " --Swappiness";sysctl vm.swappiness ; cat /etc/sysctl.conf | grep -i swap;echo " -- OOM for MySQL";cat /proc/$(pidof mysqld)/oom_score ; cat /proc/$(pidof mysqld)/oom_score_adj;echo " -- jemalloc"; sudo pt-mysql-summary | grep -A5 -i "memory management" ; sudo grep -i jem /proc/$(pidof mysqld)/maps

We want to see something like the output below, but on a fresh install, we most likely will not.

> echo " -- THP check";cat /sys/kernel/mm/transparent_hugepage/enabled;  cat /sys/kernel/mm/transparent_hugepage/defrag;echo " --Swappiness";sysctl vm.swappiness ; cat /etc/sysctl.conf | grep -i swap;echo " -- OOM for MySQL";cat /proc/$(pidof mysqld)/oom_score ; cat /proc/$(pidof mysqld)/oom_score_adj;echo " -- jemalloc"; sudo pt-mysql-summary | grep -A5 -i "memory management" ; sudo grep -i jem /proc/$(pidof mysqld)/maps

 -- THP check
always madvise [never]
always defer defer+madvise madvise [never]
 --Swappiness
vm.swappiness = 1
vm.swappiness = 1
 -- OOM for MySQL
0
-800
 -- jemalloc
# Memory management library ##################################
jemalloc enabled in mysql config for process with id 29584
Using jemalloc from /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
# The End ####################################################
7f3456ac1000-7f3456af4000 r-xp 00000000 08:01 63812                      /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
7f3456af4000-7f3456cf3000 ---p 00033000 08:01 63812                      /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
7f3456cf3000-7f3456cf5000 r--p 00032000 08:01 63812                      /usr/lib/x86_64-linux-gnu/libjemalloc.so.1
7f3456cf5000-7f3456cf6000 rw-p 00034000 08:01 63812                      /usr/lib/x86_64-linux-gnu/libjemalloc.so.1

Let’s quickly fix it.

Disable THP

Let’s create a service which will disable THP for us:

sudo su -
cat <<EOF > /usr/lib/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target
EOF

And the below command to enable this service:

sudo systemctl daemon-reload
sudo systemctl start disable-thp
sudo systemctl enable disable-thp

vm.swappiness = 1

Change swappiness at runtime.

echo 1 > /proc/sys/vm/swappiness

And let’s persist it in the config file:

echo "# Swappiness" >> /etc/sysctl.conf
echo "vm.swappiness = 1" >> /etc/sysctl.conf

And enable this change.

sudo sysctl -p

OOM and Jemalloc

We are halfway through improving things, but let’s keep pushing for better memory usage. Let’s install jemalloc.

sudo apt-get install libjemalloc1

Please confirm that we have it on the correct path:

ls -l /usr/lib/x86_64-linux-gnu/libjemalloc.so.1

The last thing we need is to make our MySQL service use the jemalloc library, so let’s create a systemd override:

Note: Depending on the system, it can be shown as mysql or mysqld. You can use systemctl | grep mysql to get the proper mysql service name.

sudo systemctl edit mysql

Add the following content to the override file:

[Service]
Environment= "LD_PRELOAD=/usr/lib64/libjemalloc.so.1"
OOMScoreAdjust=-800

To apply this change, we need to reload daemon and mysql service.

sudo systemctl daemon-reload
sudo systemctl restart mysql

The optimization of our memory settings has been completed successfully. You can verify it by executing the same check above.

echo " -- THP check";cat /sys/kernel/mm/transparent_hugepage/enabled;  cat /sys/kernel/mm/transparent_hugepage/defrag;echo " --Swappiness";sysctl vm.swappiness ; cat /etc/sysctl.conf | grep -i swap;echo " -- OOM for MySQL";cat /proc/$(pidof mysqld)/oom_score ; cat /proc/$(pidof mysqld)/oom_score_adj;echo " -- jemalloc"; sudo pt-mysql-summary | grep -A5 -i "memory management" ; sudo grep -i jem /proc/$(pidof mysqld)/maps

Mount point option for disk

Another thing I want to address in this article is how to reduce IO stress on our disks. It’s one of the most straightforward tasks we have, but it can give a significant performance boost and keep our databases healthy and durable.

By default, most disks are mounted with the relatime option, and the system updates file access-time metadata when files on the mount point are read or changed. This can generate a significant amount of IO, which is particularly problematic when running a database on that mount point. Given that MySQL typically reads and writes numerous files concurrently, we want to prioritize IO for the database itself rather than for updating metadata. Therefore, it is advisable not to rely on the relatime default in such scenarios; instead, we update the mount options to noatime,nodiratime.

To check the current options, I assume you are using a separate mount point for the MySQL data attached to the /var/lib/mysql path.

sudo mount | grep "/var/lib/mysql"

The result you will most likely get is:

/dev/sdb on /var/lib/mysql type ext4 (rw,relatime)

Action steps to apply best practices for disk settings

Let’s find out where these disk settings are defined; /etc/fstab comes to help here.

> cat /etc/fstab |  grep "/var/lib/mysql"
/dev/sdb      /var/lib/mysql     ext4 defaults     0   0

So it’s easy to update the fstab file and add the required mount options noatime,nodiratime.

sudo vim  /etc/fstab
/dev/sdb         /var/lib/mysql     ext4 defaults,noatime,nodiratime    0   0

From that moment, we are almost done, but we can’t apply these changes while our MySQL server is running, so we need to stop the mysql service, unmount the datadir directory, and mount it with the new options.

sudo systemctl stop mysql

Once the MySQL service is stopped, we can unmount our /var/lib/mysql directory,

sudo umount /var/lib/mysql

and mount it again using updated /etc/fstab settings:

sudo mount -av

At that point, disk settings should be good, but it’s worth verifying that we have the desired mount point options. Afterward, we can start the MySQL service:

> sudo mount | grep "/var/lib/mysql"
/dev/sdb on /var/lib/mysql type ext4 (rw,noatime,nodiratime)

We see the options are correct, so we can start the mysql service.

sudo systemctl start mysql

Conclusion

Optimizing memory and disk settings for MySQL can greatly improve the performance and stability of your database. Following the steps outlined in this article, you can reduce IO stress on your disks, prioritize IO for critical processes within the database, and improve memory usage. Remember always to verify your changes and consult with a professional if you have any questions or concerns. With these optimizations in place, your MySQL database will be better equipped to handle the demands of your applications and users.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

 

Try Percona Distribution for MySQL today!

Sep
28
2023
--

Replace a Live Mongo Node With a New Node

replace a live mongo node

In this blog post, we will discuss replacing a Mongo node. Consider a scenario where a node is going to be decommissioned, or where a node’s data partition is almost full and cannot be extended. In either case, we need to replace that node with a new node.

Note: Make sure you have enough oplog window to cover this activity, and make sure you have an up-to-date backup of your data set. To get a consistent backup, stop the writes to the MongoDB instance and take a snapshot of the volume. Here, we will take a hot backup or an EBS snapshot of the node and use it to replace the node.

If the node that is going to be replaced is part of a Primary-Secondary-Arbiter (PSA) replica set with three voting members, then make sure writeConcern is not set to majority; otherwise, writes may time out or never be acknowledged.

To replace a live node from shards/replica set

Let’s say it’s an EBS snapshot. An EBS snapshot can be created by selecting the EC2 instance and its respective volume.

EBS snapshot

Select the respective data volume /dev/sdb and create a snapshot. Make sure to select the correct volume ID.

Note: Creating the snapshot may take several hours for large initial snapshots or for subsequent snapshots where many blocks have changed.

Create a new instance of an instance type similar to the one being replaced by using the AMI of one of the current EC2 nodes. 

AMI of one of the current EC2 nodes

Create volume from the EBS snapshot by choosing the correct Snapshot ID. Define the EBS size that suffices for your use case.

 Attach the EBS volume to the newly created instance.

Once attached, connect to the EC2 instance and confirm the volume is visible (make sure to keep the correct device name).

Initialize the volume created from the snapshot. For more details, check the Initialize Amazon EBS volumes on Linux doc.

sudo dd if=/dev/xvdf of=/dev/null bs=1M

And then mount the volume to a directory.

sudo mkdir /mongo_data
sudo mount /dev/xvdf /mongo_data/

If it’s a hot backup (learn how to take a hot backup), launch an instance for MongoDB using the AMI of one of the current EC2 nodes and attach the desired volume size to this instance. Copy the hot backups to it.

Here, there may be two cases.

Case 1: When you have members in the replica set as DNS names. In this case, follow the below steps:

  • Add the newly launched node to the shard from the existing primary node.
rs.add("newly-launch-host:27018")

  • Once it comes in sync, reduce its priority, as we don’t want this node to become the primary for now. Note down the correct member “_id” of the new node.
cfg = rs.conf()
cfg.members[_id].priority = 0.5
rs.reconfig(cfg)

  • Remove the old node from the replica set, shut down its mongo services, and verify it’s completely removed.
rs.remove("old-shard-host:27018")
rs.conf()

Shut down the mongo service from the old node.

sudo systemctl stop mongod

  • Change the hostname of the newly launched node to that of the node we are going to remove.

So, the desired hostname of the newly-launched host would become:

Newly-launch-host ->  old-shard-host

Once the newly launched node’s hostname is changed, update its hostname entry in /etc/hosts on all nodes of the sharded cluster and on the application.
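For example, the /etc/hosts entry on each node might look like the following (the IP address is hypothetical):

192.168.33.25    old-shard-host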

Note: Make sure all connectivity is tested between all nodes to the new nodes on mongo port.

  • Connect to the existing primary node of the shard or replica set, and change the hostname of the newly launched node via rs.conf(). Make sure to use the correct _id to adjust the hostname of the newly launched node only.
cfg = rs.conf()
cfg.members[_id].host = "old-shard-host:27018"
rs.reconfig(cfg)

  • Revert the priority of the newly launched node to 1. Again, make sure to use the correct _id to adjust the priority of the newly launched node only.
cfg = rs.conf()
cfg.members[_id].priority = 1
rs.reconfig(cfg)

Case 2: When you have members in the shard/replica set as IPs, not as DNS name.

  • Start the new node in standalone mode with the attached backup, edit the system.replset collection in the local database, and update it with the new node IPs by replacing the old node IPs that are going to be removed (a sketch follows the note below). A “role” : “__system” should be granted to the user you are using to connect to the node. 

Note: It gives access to every resource in the system and is intended for internal use only. Do not use this role other than in exceptional circumstances.  
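As a rough sketch of that edit (the member index and address are hypothetical), run something like the following against the node started in standalone mode:

use local
var cfg = db.system.replset.findOne()
cfg.members[2].host = "new-node-ip:27018"   // replace the old node's IP with the new node's IP
db.system.replset.replaceOne({_id: cfg._id}, cfg)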

  • Remove the old node from the replica set, shut down its mongo services, and verify it’s completely removed.
rs.remove("old-node-ips:27018")
rs.conf()

  • Shut down the mongo service from the old node.
sudo systemctl stop mongod

  • Re-start the new node in replication mode and add it from the primary.
rs.add("new-node-ips:27018")

Verify the replication status for the newly launched node. If everything goes well, your old instance has been replaced with the new node.

Conclusion

Nowadays, with workloads running in the cloud, it’s common for a node to get retired or decommissioned. With the above-mentioned approach, we can easily replace a Mongo node with a new node.

We encourage you to try our products for MongoDB, like Percona Server for MongoDB, Percona Backup for MongoDB, or Percona Operator for MongoDB. Also, we recommend reading our blog MongoDB: Why Pay for Enterprise When Open Source Has You Covered?

Sep
27
2023
--

PostgreSQL Partitioning Made Easy Using pg_partman (TimeBased)

PostgreSQL Partitioning

PostgreSQL Partition Manager — ‘pg_partman’ — is an open source extension freely available, widely supported, and actively maintained by the PostgreSQL community.

pg_partman creates and manages both time-based and serial-based table partition sets. Users use it to automate and create partitions by breaking large tables into smaller partitions for better performance.

In this post, we will first discuss the creation of the pg_partman extension and then partitioning for newly created tables. First, we will need to install the OS packages, with the necessary control files for the appropriate PostgreSQL version, to install pg_partman on the database.

Installation of pg_partman on the database

I recommend creating a schema before creating the extension, so that the extension’s objects are installed in it. This step is not mandatory, but after installation, you cannot change it; by default, the extension creates its objects in the ‘public’ schema.

Establishing a dedicated schema for an extension serves the purpose of storing its objects, including tables, functions, procedures, and more. This approach facilitates smoother management and simplifies tasks like handling backups, upgrades, and migrations.

Below, I present the steps and failures encountered when attempting to change the schema after creating the extension.

Step one: As an example, let us try to create the extension without qualifying a schema.

CREATE EXTENSION pg_partman;
postgres=# \dx
                         List of installed extensions
Name    | Version |   Schema   |                     Description                      
------------+---------+------------+------------------------------------------------------
pg_partman | 4.6.0   | public     | Extension to manage partitioned tables by time or ID

Step two: Next, we demonstrate the failure when trying to change the schema of an already created extension.

CREATE schema testpartman;

ALTER EXTENSION pg_partman set schema testpartman ;
ERROR:  extension "pg_partman" does not support SET SCHEMA

Finally, here are the steps to install pg_partman along with a dedicated schema.

Step 1: Create schema for installation of the extension

partman=# CREATE SCHEMA partman;
CREATE SCHEMA

Step 2: Create extension pg_partman in schema partman

partman=# CREATE EXTENSION pg_partman SCHEMA partman;
CREATE EXTENSION

partman=# \dx
                          List of installed extensions
   Name    | Version |   Schema   |                     Description                      
------------+---------+------------+--------------------------------------------------
 pg_partman | 4.6.0   | partman    | Extension to manage partitioned tables by time or ID

Implementation of native partitioning

Native partitioning is also known as declarative partitioning. PostgreSQL allows you to declare that a table is divided into partitions with a list of columns or expressions to be used as the partition key.

However, you can only implement range partitioning using pg_partman, either with time-based or serial-based methods. This is because you can only predict when a new partition needs to be created in the case of range partitioning on dates or timestamps.

Time-based partitioning by range with template

Unique constraints on the partitioned table must include all partitioning columns. Extension pg_partman helps to manage this by using a template table to manage properties that currently are not supported by native partitioning.

Properties like indexes and unique constraints created on the template table are applied to all partitions.

partman=# CREATE TABLE employees (id INT PRIMARY KEY, fname VARCHAR(20) , lname VARCHAR (20) , dob DATE NOT NULL ,joined DATE) PARTITION BY RANGE (joined);
ERROR:  unique constraint on partitioned table must include all partitioning columns
DETAIL:  PRIMARY KEY constraint on table "employees" lacks column "joined" which is part of the partition key.

1. First, create a table with native partitioning type by range using DATE typed column

partman=# CREATE TABLE employees  (id INT, fname VARCHAR(20) , lname VARCHAR (20) , dob DATE NOT NULL ,joined DATE NOT NULL) PARTITION BY RANGE (joined);
CREATE TABLE

2. Next, create a template table

partman=# CREATE TABLE partman.employees_template (LIKE public.employees);
CREATE TABLE

3. Add constraints to the template table needed for all partitioned tables

partman=# ALTER TABLE partman.employees_template ADD PRIMARY KEY (ID);
ALTER TABLE

4. Run create_parent to create the initial child partitioned tables

partman=# SELECT partman.create_parent('public.employees', p_control := 'joined',p_type := 'native',p_interval := 'yearly',p_template_table := 'partman.employees_template',p_premake := 2, p_start_partition := (CURRENT_TIMESTAMP + '1 hour'::interval)::text);
create_parent 
---------------
t
(1 row)

Additionally, I have listed below the various options that can be used with the ‘create_parent’ function and their respective values:

  • p_parent_table (text): The parent partitioned table. The original table must already exist and MUST be schema-qualified, even if it is in the public schema. Values: schema.tablename
  • p_control (text): The column to partition on; it must be integer or time-based. Values: column name
  • p_type (text): The partitioning type to use, chosen based on your performance and flexibility requirements. Values: native, partman
  • p_interval (text): Integer range or time interval for each partition. The generic time intervals range from “yearly” to “quarter-hour” (yearly, quarterly, monthly, weekly, daily, hourly, half-hour, quarter-hour); any other interval that validates as a time interval is also accepted. For ID-based partitioning, an integer range.
  • p_template_table (text): Template table name; if not provided, one is created in the schema where the extension is installed. Values: schema.tablename
  • p_premake (integer): Additional partitions to stay ahead of the current partition. The default is 4 (resulting in the creation of 5 partitions, including the current one).
  • p_start_partition (text): Allows specifying the first partition of a set instead of it being automatically determined. It must be a valid timestamp (for time-based) or a positive integer (for id-based) value. Example: CURRENT_TIMESTAMP / (CURRENT_TIMESTAMP + ‘1 hour’::interval)::text
  • p_automatic_maintenance (text): Determines whether maintenance is automatically managed when run_maintenance() is called without a table parameter or by a background worker process. Default: ‘on’::text

 

Because we specified ‘2’ for premake in the create_parent function, you can observe that two partitions ahead have been created, as shown in the table definition below.

partman=# \d+ employees
                 Partitioned table "public.employees"
Column |         Type          | Collation | Nullable | Default | Storage  | Compression | Stats target | Description
--------+-----------------------+-----------+----------+---------+----------+-------------+--------------+-------------
id     | integer               |           |          |         | plain    |            
fname  | character varying(20) |           |          |         | extended |            
lname  | character varying(20) |           |          |         | extended |            
dob    | date                  |           | not null |         | plain    |            
joined | date                  |           | not null |         | plain    |            
Partition key: RANGE (joined)
Partitions: employees_p2023 FOR VALUES FROM ('2023-01-01') TO ('2024-01-01'),
           employees_p2024 FOR VALUES FROM ('2024-01-01') TO ('2025-01-01'),
           employees_p2025 FOR VALUES FROM ('2025-01-01') TO ('2026-01-01'),
          employees_default DEFAULT

To illustrate, we insert around 10,000 rows to see the data land in the appropriate partition if one exists, or move to the default partition otherwise.

partman=# INSERT INTO employees (id ,fname,lname,dob ,joined) VALUES ( generate_series(1,10000) ,(array['Oswald', 'Henry', 'Bob', 'Vennie'])[floor(random() * 4 + 1)], (array['Leo', 'Jack', 'Den', 'Daisy' ,'Woody'])[floor(random() * 5 + 1)], '1995-01-01'::date + trunc(random() * 366 * 3)::int,
'2023-01-01'::date + trunc(random() * 366 * 3)::int);
INSERT 0 10000

partman=# SELECT  employees_p2023  , employees_p2024 ,employees_p2025,employees_default,employees_totalcnt from
( SELECT COUNT(*) FROM   employees_p2023 ) AS employees_p2023,( SELECT COUNT(*) FROM   employees_p2024 ) AS employees_p2024,
( SELECT COUNT(*) FROM   employees_p2025) AS employees_p2025 ,( SELECT COUNT(*) FROM   employees_default) AS employees_default ,
( SELECT COUNT(*) FROM   employees ) AS employees_totalcnt ;
employees_p2023 | employees_p2024 | employees_p2025 | employees_default |    employees_totalcnt
-----------------+-----------------+-----------------+-------------------+------------
(3294)          | (3397)          | (3293)          | (16)              | (10000)
(1 row)

By utilizing functions, we can seamlessly create child partitions and efficiently transfer data from the default table to these child tables.

partman=# CALL partman.run_maintenance_proc();
ERROR:  updated partition constraint for default partition "employees_default" would be violated by some row
CONTEXT: SQL statement "ALTER TABLE public.employees ATTACH PARTITION public.employees_p2026 FOR VALUES FROM ('2026-01-01 00:00:00+00') TO ('2027-01-01 00:00:00+00')"

As an illustration, the usage of partman.partition_data_proc is shown below; it can move data batch-wise into the partitioned tables.

partman=#  CALL partman.partition_data_proc ('public.employees');
NOTICE:  Batch: 1, Rows moved: 16
NOTICE:  Total rows moved: 16
NOTICE:  Ensure to VACUUM ANALYZE the parent (and source table if used) after partitioning data
CALL

partman=#  VACUUM ANALYZE employees;
VACUUM

As a result, we can see below the newly created table structure along with the new child partition.

partman=# \d+ employees
                                        Partitioned table "public.employees"
Column |         Type          | Collation | Nullable | Default | Storage  | Compression | Stats target | Description 
--------+-----------------------+-----------+----------+---------+----------+-------------+--------------+-------------
id     | integer               |           |          |         | plain    |             |              
fname  | character varying(20) |           |          |         | extended |             |              
lname  | character varying(20) |           |          |         | extended |             |              
dob    | date                  |           | not null |         | plain    |             |              
joined | date                  |           | not null |         | plain    |             |              
Partition key: RANGE (joined)
Partitions: employees_p2023 FOR VALUES FROM ('2023-01-01') TO ('2024-01-01'),
           employees_p2024 FOR VALUES FROM ('2024-01-01') TO ('2025-01-01'),
           employees_p2025 FOR VALUES FROM ('2025-01-01') TO ('2026-01-01'),
           employees_p2026 FOR VALUES FROM ('2026-01-01') TO ('2027-01-01'),
           employees_default DEFAULT

Consequently, after using the function, you will notice that the data from the default table has been moved to the newly created partitioned table.

partman=#  SELECT  employees_p2023  , employees_p2024 ,employees_p2025, employees_p2026, employees_default,employees_totalcnt from
( SELECT COUNT(*) FROM   employees_p2023 ) AS employees_p2023,( SELECT COUNT(*) FROM   employees_p2024 ) AS employees_p2024, 
( SELECT COUNT(*) FROM   employees_p2025) AS employees_p2025,( SELECT COUNT(*) FROM   employees_p2026) AS employees_p2026,
( SELECT COUNT(*) FROM   employees_default) AS employees_default , ( SELECT COUNT(*) FROM   employees ) AS employees_totalcnt ;
employees_p2023 | employees_p2024 | employees_p2025 | employees_p2026 | employees_default | employees_totalcnt 
-----------------+-----------------+-----------------+-----------------+-------------------+--------------------
(3294)          | (3397)          | (3293)          | (16)            | (0)               | (10000)
(1 row)

Creating the partitioned child tables ahead

Subsequently, we can change the premake setting in the partman.part_config table, which controls how many partitions are created ahead.

partman=# update partman.part_config set premake = '4' where parent_table ='public.employees';
UPDATE 1

partman=# CALL partman.run_maintenance_proc();
CALL

partman=# \d+ employees
                       Partitioned table "public.employees"
Column |         Type          | Collation | Nullable | Default | Storage  | Compression | Stats target | Description
--------+-----------------------+-----------+----------+---------+----------+--------
id     | integer               |           |          |         | plain    |                        
fname  | character varying(20) |           |          |         | extended |                         
lname  | character varying(20) |           |          |         | extended |                         
dob    | date                  |           | not null |         | plain    |                        
joined | date                  |           | not null |         | plain    |             
Partition key: RANGE (joined)
Partitions: employees_p2023 FOR VALUES FROM ('2023-01-01') TO ('2024-01-01'),
           employees_p2024 FOR VALUES FROM ('2024-01-01') TO ('2025-01-01'),
           employees_p2025 FOR VALUES FROM ('2025-01-01') TO ('2026-01-01'),
           employees_p2026 FOR VALUES FROM ('2026-01-01') TO ('2027-01-01'),
           employees_p2027 FOR VALUES FROM ('2027-01-01') TO ('2028-01-01'),
           employees_p2028 FOR VALUES FROM ('2028-01-01') TO ('2029-01-01'),
           employees_p2029 FOR VALUES FROM ('2029-01-01') TO ('2030-01-01'),
           employees_p2030 FOR VALUES FROM ('2030-01-01') TO ('2031-01-01'),
           employees_default DEFAULT

Automatic maintenance of partitions

You can plan to create partitions ahead of the INSERTs to prevent data from landing in the default partition or the main table. You can achieve this by using functions that create the new partitions and move the data.

For instance, we can use the function run_maintenance() to create new partitions in advance, and there are two ways to do it.

1. Using background worker
2. Cron Scheduler

1. Setting up background worker

To start with, we set shared_preload_libraries so that partition maintenance operations run automatically. You can set the parameter either by using ALTER SYSTEM or by manually modifying the config file.

partman=# ALTER SYSTEM set shared_preload_libraries = 'pg_partman_bgw';
ALTER SYSTEM

Afterward, perform a restart of the cluster either using systemctl or pg_ctl.

ubuntu@ip-172-31-47-134:~$ sudo systemctl restart postgresql@14-main.service

partman=# show shared_preload_libraries;
 shared_preload_libraries 
--------------------------
pg_partman_bgw
(1 row)

Configuration options for the background worker

1. pg_partman_bgw.interval: Number of seconds between calls to run_maintenance(). The default is 3600 (1 hour).
– For time-based partitions, intervals can be chosen based on the partition interval at least daily or twice daily.
– For serial-based partitions, you can choose the partition interval based on the data inserted on a daily basis.
2. pg_partman_bgw.dbname: Required. The database(s) that run_maintenance() will run on. If multiple, use a comma-separated list. If not set, BGW will do nothing.
3. pg_partman_bgw.role: The role that run_maintenance() will run as. The default is “postgres”. You are allowed to use only a single role name.
4. pg_partman_bgw.analyze: Optional argument, by default whenever a new child table is created, an analyze is run on the parent table of the partition set to ensure constraint exclusion works. This analyze can be skipped by setting this to false and help increase the speed of moving large amounts of data. If you set this to false, we strongly recommend that you perform a manual analyze of the partition upon completion to ensure the statistics are updated properly.
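For example, a minimal postgresql.conf snippet for the background worker might look like the following (the database and role names are illustrative):

pg_partman_bgw.interval = 3600
pg_partman_bgw.dbname = 'partman'
pg_partman_bgw.role = 'postgres'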

2. Cron scheduler

   00 01,12 * * * psql -d partman -c "SELECT partman.run_maintenance()"

Advantages of pg_partman

  • Using pg_partman, you can define data retention to automatically detach old partitions (keeping or archiving them) or drop them; a configuration sketch follows this list.
  • Query Performance will improve when dealing with partitioned data specifically.
  • Vacuum efficiency will improve as data resides in smaller partitions, reducing the time required for table maintenance compared to dealing with large sets of data.
  • You can attach or detach partitions without downtime concurrently, and it will not impact ongoing database operations.
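As a sketch of the retention feature mentioned above (the interval is illustrative), retention is configured per parent table in partman.part_config:

UPDATE partman.part_config
   SET retention = '3 years',
       retention_keep_table = false
 WHERE parent_table = 'public.employees';

With retention_keep_table set to true, old partitions are only detached from the parent instead of being dropped.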

Limitations of pg_partman

  • pg_partman only supports partitioning of data types that are integer or date/timestamp.
  • Generally, when opting for native partitioning, the parent table should already be created as range partitioned but not list partitioned.
  • Unique constraints on partitioned tables must include all partitioning columns.

Conclusion

To summarize, we discussed creating and managing partitions effectively using PostgreSQL Partition Manager (pg_partman), along with creating partitions ahead of inserting data using time-based partitioning. I will provide more detailed usage of pg_partman in upcoming blog posts.

Please review the blogs below for reference:

Performing ETL Using Inheritance in PostgreSQL

PostgreSQL Partitioning Using Traditional Methods

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

 

Download Percona Distribution for PostgreSQL Today!

Sep
26
2023
--

The MySQL Clone Plugin Is Not Your Backup

MySQL clone plugin

This blog post discusses the limitations of the MySQL Clone plugin.

The MySQL clone plugin significantly simplifies the process of replica provisioning. All you need to do is:

  • Ensure that the source server has binary logs enabled
  • Grant appropriate permissions
  • Execute the CLONE INSTANCE command on the recipient
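A minimal sketch of that flow (host, account, and password are hypothetical):

-- on the donor
INSTALL PLUGIN clone SONAME 'mysql_clone.so';
CREATE USER 'clone_user'@'%' IDENTIFIED BY 'clone_pass';
GRANT BACKUP_ADMIN ON *.* TO 'clone_user'@'%';

-- on the recipient (the user running CLONE INSTANCE needs CLONE_ADMIN)
INSTALL PLUGIN clone SONAME 'mysql_clone.so';
SET GLOBAL clone_valid_donor_list = 'source-host:3306';
CLONE INSTANCE FROM 'clone_user'@'source-host':3306 IDENTIFIED BY 'clone_pass';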

This works extremely easily when you provision a new replica that doesn’t have any data.

Due to its simplicity, you may want to use the clone plugin instead of a backup to restore a server that has suffered data inconsistency or corruption, e.g., after a crash.

However, if you have data on your replica, you need to consider how you will recover if the CLONE INSTANCE command fails with an error.

By default, the CLONE INSTANCE command works as follows:

  • Checks prerequisites on the replica
  • Wipes out the data directory
  • Copies data from the source server to the replica
  • Finalizes data on the replica

Let’s discuss these steps in detail.

Checking prerequisites on the replica

This is a safe operation that stops the clone if any of the prerequisites listed at https://dev.mysql.com/doc/refman/8.0/en/clone-plugin-remote.html#clone-remote-prerequisites are not met. Data and user settings are left intact if failure happens at this stage.

However, not every option affecting the future clone operation can be verified. Debugging options, such as innodb_force_recovery, can cause the clone operation to fail at later steps, as reported in MySQL Bug 109035. Unfortunately, even if Oracle adds this option to the prerequisites check, it would not remove all risk of option-incompatibility failures occurring during the CLONE INSTANCE operation.

Wiping the data directory

By default, after the prerequisites are checked, the clone plugin wipes out the content of the data directory. As a result, any subsequent failure during the CLONE INSTANCE operation will leave your server unusable.

This means that you may have to re-install the MySQL server and re-apply all clone-related settings in order to repeat the CLONE INSTANCE operation.

To avoid the negative effect of possible failures, use the DATA DIRECTORY clause of the CLONE INSTANCE command. It adds an extra step to the finalization stage but will save you in case of failure. Note that you must have enough disk space to store your current data on the replica together with the data copied from the source server.
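A minimal sketch of such an invocation (the host, credentials, and path are hypothetical):

CLONE INSTANCE FROM 'clone_user'@'source-host':3306 IDENTIFIED BY 'clone_pass'
      DATA DIRECTORY = '/var/lib/mysql-clone';

When cloning into a named directory, the recipient is not restarted automatically; you start the server against the new directory (or move it into place) yourself.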

Copying data from the source server to the replica

This stage is the most risky because MySQL has limited control over data transfer. If something bad happens to your network while MySQL copies data, the clone operation will be stopped with an error. You need to be prepared for this, and if you decide not to use the DATA DIRECTORY option, be ready to re-install the replica MySQL server from scratch.

Note: Some options control how the transfer is retried in case of a network failure; the copy operation will resume if the network recovers within clone_donor_timeout_after_network_failure minutes. However, this option does not protect you from all possible disaster scenarios.

Finalizing data on the replica

If the clone operation completes successfully, the replica is automatically restarted. Otherwise, an error is reported.

Recovering from error if data is wiped out

After the error is reported, you need to fix the error before restarting the clone operation. If data is already wiped, you should try fixing the error without restarting the replica while your privileges are still in the memory cache. If fixing the error requires a replica restart, as in the case of a mistakenly set innodb_force_recovery option, your only option is to re-initialize the MySQL instance on the replica from scratch, as you would do for any new MySQL installation.

Recovering from error if you used option DATA DIRECTORY

In this case, just remove the data in the directory specified by the DATA DIRECTORY option, fix the error, and repeat the CLONE INSTANCE command. You can then safely restart the replica without the risk of losing your clone-related privileges and settings.

Conclusion

The MySQL clone plugin simplifies new replica provisioning but does not simplify server restore after failure unless you are ready to re-install your MySQL instance from scratch. Using the MySQL clone plugin on servers with data is not easier than using traditional backup tools, such as Percona XtraBackup.

MySQL clone plugin resources

Provisioning Replication With Clone Plugin

MySQL 8.0.17 Clone Plugin: How to Create a Slave from Scratch

The MySQL Clone Wars: Plugin vs. Percona XtraBackup

Sep
26
2023
--

Explore the New Feature of MySQL To Restrict Users From Creating a Table Without a Primary Key

Restrict Users From Creating a Table Without a Primary Key

As MySQL database administrators, we are well aware of the significance of implementing a primary key within a table. Throughout our careers, most of us have encountered situations where the absence of a primary key has led to operational challenges. Primary keys play an indispensable role in sound database design by uniquely identifying individual rows and significantly enhancing data retrieval, manipulation, and overall system performance.

From the MySQL documentation:

The PRIMARY KEY clause is a critical factor affecting the performance of MySQL queries and the space usage for tables and indexes. The primary key uniquely identifies a row in a table. Every row in the table should have a primary key value, and no two rows can have the same primary key value.

It is common for tables to be inadvertently created without a primary key, often leading to regrettable consequences that we only recognize when issues arise. One prominent issue we face is replication delay in row-based replication. When a delete or update operation is executed on the primary database, affecting multiple rows, it may execute swiftly on the primary server. However, this action can introduce delays on replica servers due to the need to scan each modified row. Even a powerful tool like pt-online-schema-change refuses to function without a primary key or unique key, which can be problematic when you need to make alterations to a table without disrupting read and write operations.

To mitigate such scenarios and the ensuing frustrations, MySQL has introduced a valuable feature in the form of the ‘sql_require_primary_key’ variable. This feature was introduced in MySQL 8.0.13 and serves as a safeguard against the creation of tables without a primary key. By default, this variable is set to OFF, allowing users to create tables without a primary key if they choose to do so. However, both session-level and global-level configurations are available for this variable.
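As a minimal sketch of enabling it (MySQL 8.0.13 or later):

mysql> SET PERSIST sql_require_primary_key = ON;   -- global, survives restarts
mysql> SET SESSION sql_require_primary_key = ON;   -- current session only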

To enable the ‘sql_require_primary_key’ variable, your MySQL version should be 8.0.13 or later. It is a dynamic variable that can be adjusted at both the global and session levels. By default, this variable remains disabled, as we can observe in this demonstration using MySQL version 8.0.25.

As we can see, by default, the sql_require_primary_key variable is disabled.

mysql1 > select @@version;
+-----------+
| @@version |
+-----------+
| 8.0.25    |
+-----------+
mysql1 > select @@sql_require_primary_key;
+---------------------------+
| @@sql_require_primary_key |
+---------------------------+
|                         0 |
+---------------------------+

I have created a table named students, which doesn’t have a primary key:

mysql1 > show create table students\G
*************************** 1. row ***************************
       Table: students
Create Table: CREATE TABLE `students` (
  `id` int unsigned NOT NULL,
  `name` varchar(30) NOT NULL,
  `Branch` varchar(30) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (0.01 sec)

I have enabled sql_require_primary_key; let’s see how this affects operations.

mysql1 > select @@sql_require_primary_key;
+---------------------------+
| @@sql_require_primary_key |
+---------------------------+
|                         1 |
+---------------------------+

Creating a table without a primary key fails with an error. 

mysql1 > CREATE TABLE `employee` (
    ->   `id` int unsigned NOT NULL AUTO_INCREMENT,
    ->   `name` varchar(30) NOT NULL,
    ->   `place` varchar(30) NOT NULL
    -> ) ENGINE=InnoDB;
ERROR 3750 (HY000): Unable to create or change a table without a primary key, when the system variable 'sql_require_primary_key' is set. Add a primary key to the table or unset this variable to avoid this message. Note that tables without a primary key can cause performance problems in row-based replication, so please consult your DBA before changing this setting.

The next intriguing question to ponder is the fate of tables that already exist without a primary key. To shed light on this, let’s consider a table called “students” that already resides on the node and carry out some operations on it.

INSERT/UPDATE/DELETE statements work without any errors or warnings on the table, but an ALTER statement will fail with an error.

mysql1 > INSERT INTO students VALUES (5, 'Lilly', 'CCE');
Query OK, 1 row affected (0.00 sec)

mysql1 > UPDATE students set Branch='CSE' where name='lilly';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql1 > DELETE from students where id=5;
Query OK, 1 row affected (0.00 sec)

mysql1 > ALTER TABLE students ADD COLUMN dob int NOT NULL;
ERROR 3750 (HY000): Unable to create or change a table without a primary key, when the system variable 'sql_require_primary_key' is set. Add a primary key to the table or unset this variable to avoid this message. Note that tables without a primary key can cause performance problems in row-based replication, so please consult your DBA before changing this setting.

Replication considerations

In a source replica configuration, the behavior of ‘sql_require_primary_key’ is influenced by the ‘REQUIRE_TABLE_PRIMARY_KEY_CHECK’ parameter in the ‘CHANGE MASTER TO/CHANGE REPLICATION SOURCE’ statement, which empowers a replica to determine its own policy regarding primary key checks. By default, this parameter is set to ‘STREAM,’ causing the replica to adopt the primary key check value replicated from the source for each transaction. 

Let’s test this. I have disabled sql_require_primary_key on the source, enabled it on the replica, and created a table without a primary key. REQUIRE_TABLE_PRIMARY_KEY_CHECK is set to STREAM by default, and you can see the replica was able to replicate the table.

mysql1 > select @@sql_require_primary_key;
+---------------------------+
| @@sql_require_primary_key |
+---------------------------+
|                         0 |
+---------------------------+

mysql_s1 > show variables like '%prima%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| sql_require_primary_key | ON    |
+-------------------------+-------+

mysql1 > CREATE TABLE `hotels` (
    ->   `id` int unsigned NOT NULL,
    ->   `name` varchar(30) NOT NULL
    -> ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.01 sec)

mysql_s1 > show create table test.hotels\G
*************************** 1. row ***************************
       Table: hotels
Create Table: CREATE TABLE `hotels` (
  `id` int NOT NULL,
  `name` varchar(30) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (0.00 sec)

If ‘REQUIRE_TABLE_PRIMARY_KEY_CHECK’ is configured as ‘ON,’ the replica consistently enforces the ‘sql_require_primary_key’ system variable with the ‘ON’ value during replication operations, mandating the presence of a primary key.

Conversely, when ‘REQUIRE_TABLE_PRIMARY_KEY_CHECK’ is set to ‘OFF,’ the replica consistently uses the ‘OFF’ value for the ‘sql_require_primary_key’ system variable in replication operations, ensuring that a primary key is never required, even if the source initially mandated one.

Let’s test this. I have disabled sql_require_primary_key on the source, set REQUIRE_TABLE_PRIMARY_KEY_CHECK to OFF in CHANGE REPLICATION SOURCE, disabled sql_require_primary_key on the replica, and created a table on the source without a primary key, and it caused the replication error below on the replica. So it is better and recommended not to run any such changes directly on replicas.

mysql_s1 > STOP REPLICA;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql_s1 CHANGE REPLICATION SOURCE TO REQUIRE_TABLE_PRIMARY_KEY_CHECK = OFF ;
Query OK, 0 rows affected (0.01 sec)

mysql_s1 > STOP REPLICA;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql_s1 > select @@sql_require_primary_key;
+---------------------------+
| @@sql_require_primary_key |
+---------------------------+
|                         0 |
+---------------------------+
mysql1 > CREATE TABLE `test_3` (   `id` int unsigned NOT NULL,   `name` varchar(30) NOT NULL ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.02 sec)

Last_SQL_Error: Error 'Unable to create or change a table without a primary key, when the system variable 'sql_require_primary_key' is set. Add a primary key to the table or unset this variable to avoid this message. Note that tables without a primary key can cause performance problems in row-based replication, so please consult your DBA before changing this setting.' on query. Default database: 'test'. Query: 'CREATE TABLE `test_3` (   `id` int NOT NULL,   `name` varchar(30) NOT NULL ) ENGINE=InnoDB'

Conclusion

The sql_require_primary_key variable, added in MySQL 8.0.13, is a useful addition that allows database administrators to ensure that tables are created only with a primary key. Enforcing the presence of primary keys promotes better database design and minimizes the potential pitfalls associated with missing primary keys. With this feature, MySQL encourages users to adopt best practices, leading to more robust and efficient database systems.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

 

Try Percona Distribution for MySQL today!

Sep
25
2023
--

Talking Drupal #417 – The Recipes Initiative

Today we are talking about The Recipes Initiative, the future of install profiles, if distros are still a thing, and answering a bunch of listener questions with our guest Jim Birch. We’ll also cover Quick Links Kit as our module of the week.

For show notes visit: www.talkingDrupal.com/417

Topics

  • What are recipes
  • How do you use recipes
  • Is it a module, configuration or something else
  • How do recipes compare to install profiles
  • Are you stuck with them
  • What happens if the config is changed
  • Are there namespace collisions
  • How do recipes compare with Distributions
  • Can you include content
  • Listener James: Can recipes uninstall modules
  • Can we use recipes now
  • When will recipes be in core
  • Can recipes be used by tests
  • Listener Andy: Can recipes and startkits interact
  • Can themes require recipes
  • Listener Matthieu: How do recipes compare with Symfony recipes
  • Listener James: How easy will it be to make custom recipes
  • Listener Matthieu: Should contrib maintainers be watching recipes
  • How can we get involved

Resources

Guests

Jim Birch – @jimbirch

Hosts

Nic Laflin – nLighteneddevelopment.com nicxvan, John Picozzi – epam.com johnpicozzi, Jen Lampton – jenlampton.com jenlampton

MOTW Correspondent

Martin Anderson-Clutz – @mandclu

Quick Links Kit

  • Brief description:
    • Have you ever needed to add a set of quick links, essentially visual navigation prompts, to the home page or section pages of your Drupal site?
    • Quick Links Kit (different from Quicklink module created by last week’s guest, Mike Herschel)
  • Brief history
    • How old: I created it in April 2021
  • Versions available:
    • 1.0.6 that supports Drupal 8, 9, and 10, and 2.0.1 that supports only D10
  • Maintainership
    • Sporadically maintained, but a pretty simple module
  • Number of open issues:
    • 1, and it’s not a bug
  • Usage stats:
    • 1 site
  • Maintainer(s):
    • Me!
  • Module features and usage
    • The module is really just a set of configuration, with an optional submodule that sets everything up, including the placement of the block on the home page, for sites using Olivero as their theme, so it’s perfect for a fresh install of Drupal
    • It allows for SVG icons to be set for each link, and sets their fill to inherit from the link style
    • The links can be created and managed without leaving the page on which they’re used, by using the settings tray, though it would be a quick configuration change to use a modal or a separate page instead, if preferred.
    • The 2.0 version also makes use of Drupal 10’s new responsive grid views display, so if you’ve been wanting to try that out, this is an easy way to get started
    • I thought this module was appropriate for today’s episode because it’s an example of a module that will be a recipe once the infrastructure for them is ready. That said, the Olivero submodule does currently contain a little CSS to improve the display of the links, but that could easily be copied into your site’s custom theme.
Sep
25
2023
--

Restrict MySQL Connections to Broken Replica in ProxySQL

ProxySQL is a high-performance SQL proxy, which runs as a daemon watched by a monitoring process. The process monitors the daemon and restarts it in case of a crash to minimize downtime.

The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.

The proxy is designed to run continuously without needing to be restarted. Most configurations can be done at runtime using queries similar to SQL statements in the ProxySQL admin interface. These include runtime parameters, server grouping, and traffic-related settings.
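
All of the ProxySQLAdmin> statements shown below are run from this admin interface. A minimal sketch of connecting to it, assuming the default admin credentials and admin port (both should be changed in any real deployment):

mysql -u admin -padmin -h 127.0.0.1 -P 6032 --prompt='ProxySQLAdmin> '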

Here, we will consider ProxySQL configured for async replication. Even when a replica is broken/stopped, ProxySQL still routes connections to it. This can be overcome by setting an appropriate value for mysql-monitor_slave_lag_when_null.

mysql-monitor_slave_lag_when_null

When the replication check returns that Seconds_Behind_Master=NULL, the value of mysql-monitor_slave_lag_when_null (in seconds) is assumed to be the current replication lag. This allows us to either shun or keep online a server where replication is broken/stopped.

ProxySQL stops routing connections to a replica whenever its lag exceeds the max_replication_lag value defined for that server in ProxySQL. When a replica is broken/stopped, the replication check returns Seconds_Behind_Master=NULL, and at that point the value of mysql-monitor_slave_lag_when_null (in seconds) is assumed to be the current replication lag.

MySQL connections to broken/stopped replicas can be restricted by setting mysql-monitor_slave_lag_when_null to a value greater than max_replication_lag. Let’s walk through the scenario below.
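
For reference, max_replication_lag is configured per backend server in the mysql_servers table of the admin interface. A minimal sketch of setting it to 600 for the replica used in this example (hostname and port taken from the output below):

ProxySQLAdmin> UPDATE mysql_servers SET max_replication_lag=600 WHERE hostname='127.0.0.2' AND port=3306;
ProxySQLAdmin> LOAD MYSQL SERVERS TO RUNTIME;
ProxySQLAdmin> SAVE MYSQL SERVERS TO DISK;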

max_replication_lag is set to 600. ProxySQL stops routing connections to a replica when its lag is greater than 600.

ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 127.0.0.1     | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 600                 | 0       | 0              |         |
| 2            | 127.0.0.2     | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 600                 | 0       | 0              |         |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)

Let’s stop replication on replica 127.0.0.2:

Replica_127.0.0.2>stop replica;
Query OK, 0 rows affected (0.01 sec)

ProxySQL still routes connections to the replica even though replication is stopped, because the default mysql-monitor_slave_lag_when_null of 60 seconds is below the configured max_replication_lag of 600.

ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 127.0.0.1     | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 600                 | 0       | 0              |         |
| 2            | 127.0.0.2     | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 600                 | 0       | 0              |         |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)

Now consider setting mysql-monitor_slave_lag_when_null = 610 as it is greater than the current value of max_replication_lag.

ProxySQLAdmin> select * from runtime_global_variables where variable_name='mysql-monitor_slave_lag_when_null';
+-----------------------------------+----------------+
| variable_name                     | variable_value |
+-----------------------------------+----------------+
| mysql-monitor_slave_lag_when_null | 60             |
+-----------------------------------+----------------+
1 row in set (0.002 sec)

ProxySQLAdmin> UPDATE global_variables SET variable_value=610 WHERE variable_name='mysql-monitor_slave_lag_when_null';
Query OK, 1 row affected (0.002 sec)

ProxySQLAdmin> load mysql variables to runtime;
Query OK, 0 rows affected (0.002 sec)

ProxySQLAdmin> save mysql variables to disk;
Query OK, 158 rows affected (0.006 sec)

ProxySQLAdmin> select * from runtime_global_variables where variable_name='mysql-monitor_slave_lag_when_null';
+-----------------------------------+----------------+
| variable_name                     | variable_value |
+-----------------------------------+----------------+
| mysql-monitor_slave_lag_when_null | 610            |
+-----------------------------------+----------------+
1 row in set (0.002 sec)

With replication running again, let’s stop it once more on replica 127.0.0.2:

Replica_127.0.0.2>stop replica;
Query OK, 0 rows affected (0.01 sec)

As soon as replication is stopped on replica 127.0.0.2, the replication check returns Seconds_Behind_Master=NULL. Because mysql-monitor_slave_lag_when_null is set to 610, ProxySQL assumes the replication lag is greater than 600 and shuns the replica, so it stops routing connections to it.

ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status  | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 127.0.0.1     | 3306 | 0         | ONLINE  | 1      | 0           | 1000            | 600                 | 0       | 0              |         |
| 2            | 127.0.0.2     | 3306 | 0         | SHUNNED | 1      | 0           | 1000            | 600                 | 0       | 0              |         |
+--------------+---------------+------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)
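
To see what the monitor observed when it shunned the replica, you can also check the replication lag history that ProxySQL keeps. A minimal sketch, assuming the default monitor database (the repl_lag column reflects what each check returned, including the checks made while replication was stopped):

ProxySQLAdmin> SELECT hostname, port, time_start_us, repl_lag, error FROM monitor.mysql_server_replication_lag_log ORDER BY time_start_us DESC LIMIT 5;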

When replication is started on replica 127.0.0.2:

Replica_127.0.0.2>start replica;
Query OK, 0 rows affected (0.01 sec)

ProxySQL will start routing connections to the replica again.

ProxySQLAdmin> select * from runtime_mysql_servers;
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1            | 127.0.0.1     | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 600                 | 0       | 0              |         |
| 2            | 127.0.0.2     | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 600                 | 0       | 0              |         |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
2 rows in set (0.002 sec)

Conclusion

By setting mysql-monitor_slave_lag_when_null in ProxySQL to a value greater than max_replication_lag, connections to broken/stopped replicas can be restricted.

Related links:

Download ProxySQL

How to Install ProxySQL From the Percona Repository

Install Percona Server for MySQL

How to set up a replica for replication in 6 simple steps with Percona XtraBackup

Sep
25
2023
--

pgBackRest’s Expire and Backup With Custom Lock-Path Files

PostgreSQL is a powerful and popular open source relational database management system. Ensuring the safety of your PostgreSQL data through regular backups is an important concern for database administrators. pgBackRest is a highly efficient and versatile tool for managing PostgreSQL backups. In this blog post, we’ll explore a technique to help your backup strategy: running pgBackRest’s expire and backup processes in parallel while utilizing custom lock-file paths.

This approach comes in handy when the “expire” command, responsible for cleaning up old backups, takes longer than expected and causes problems. It can clash with the next backup attempt, resulting in an error message like this:

ERROR: [050]: unable to acquire lock on file '/tmp/pgbackrest/backup/backup.lock': Resource temporarily unavailable
      HINT: is another pgBackRest process running?

This issue tends to occur when your backup storage is located offsite, your database is quite large, and the backup process generates a lot of files. Combined, these factors can make the “expire” step take a long time to finish, disrupting your backup schedule and causing conflicts with new backups.

To overcome these potential conflicts and further enhance your PostgreSQL backup strategy, we will delve into using different lock-paths for the parallel execution of pgBackRest’s expire and backup processes. Before we proceed with an example, let’s understand the idea behind a custom lock-path.

By default, pgBackRest uses the same lock-file path for the backup and expire processes. This can result in conflicts when these processes run in parallel. To avoid such conflicts, we can configure pgBackRest to use separate lock-file paths for each process. This way, they won’t interfere with each other, enabling efficient parallel execution.

Example: Parallelizing pgBackRest backup and expire with Custom Lock-File Paths

In the example below, we’ll set up pgBackRest with minimal settings to illustrate its functionality:

primary database server: 172.31.45.240
pgbackrest server: 172.31.32.62

On the primary database:

We have the following pgBackRest configuration, which includes information about the repository path and repository host details.

postgres@ip-172-31-45-240:~$ cat /etc/pgbackrest/pgbackrest.conf
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
repo1-host=172.31.32.62
repo1-host-user=postgres

[demo]
pg1-path=/var/lib/postgresql/14/main
pg1-port=5432

On the pgBackRest server:

We maintain two configuration files, specifically /etc/pgbackrest/pgbackrest.conf and /etc/pgbackrest/expire.conf. These files play distinct roles in our backup and expiration processes.

The /etc/pgbackrest/pgbackrest.conf file contains the details of our primary database server. Notably, it sets lock-path=/tmp/pgbackrest/backup.lock, the location pgBackRest uses for its lock files during backup operations.

postgres@ip-172-31-32-62:~$ cat /etc/pgbackrest/pgbackrest.conf
[global]
repo1-path=/var/lib/pgbackrest
repo1-host-user=postgres

[demo]
pg1-path=/var/lib/postgresql/14/main
pg1-host=172.31.45.240
pg1-host-user=postgres
pg1-port=5432
lock-path=/tmp/pgbackrest/backup.lock

On the other hand, the second configuration file, /etc/pgbackrest/expire.conf, focuses on managing the retention and expiration of backups from our primary database server. It specifies repo1-retention-full=2, which controls how many full backups are retained; in this case, pgBackRest will keep the most recent two full backups. It also defines lock-path=/tmp/pgbackrest/expire.lock, which is used specifically during the expire operation. Below is the content of the expire configuration file:

postgres@ip-172-31-32-62:~$ cat /etc/pgbackrest/expire.conf
[global]
repo1-path=/var/lib/pgbackrest
repo1-host-user=postgres
repo1-retention-full=2

[demo]
pg1-path=/var/lib/postgresql/14/main
pg1-host=172.31.45.240
pg1-host-user=postgres
pg1-port=5432
lock-path=/tmp/pgbackrest/expire.lock

Moving forward, we will execute a pgBackRest check command to verify the correct configuration of the archive_command settings for both archiving and backups within the designated stanza. This command will conduct checks across all repositories and databases configured on the host where the command is executed.

postgres@ip-172-31-32-62:~$  pgbackrest check --stanza=demo --log-level-console=info
2023-09-11 13:41:43.794 P00   INFO: check command begin 2.47: --exec-id=49365-ba0eb42b --log-level-console=info --pg1-host=172.31.45.240 --pg1-host-user=postgres --pg1-path=/var/lib/postgresql/14/main --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --stanza=demo
2023-09-11 13:41:45.500 P00   INFO: check repo1 configuration (primary)
2023-09-11 13:41:45.703 P00   INFO: check repo1 archive for WAL (primary)
2023-09-11 13:41:46.407 P00   INFO: WAL segment 00000001000000010000000A successfully archived to '/var/lib/pgbackrest/archive/demo/14-1/0000000100000001/00000001000000010000000A-5f8afc66975f348a66165bbfe45aae6c38dae4ad.gz' on repo1
2023-09-11 13:41:46.511 P00   INFO: check command end: completed successfully (2720ms)

Repository info:

On the backup server, we have three full backups in the repo:

postgres@ip-172-31-32-62:~$ pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info

stanza: demo
    status: ok
    cipher: none

    db (current)
        wal archive min/max (14): 000000010000000100000005/00000001000000010000000C

        full backup: 20230911-084334F
            timestamp start/stop: 2023-09-11 08:45:36+00 / 2023-09-11 08:46:54+00
            wal start/stop: 000000010000000100000005 / 000000010000000100000005
            database size: 1.7GB, database backup size: 1.7GB
            repo1: backup set size: 295MB, backup size: 295MB

        full backup: 20230911-092018F
            timestamp start/stop: 2023-09-11 09:21:33+00 / 2023-09-11 09:23:23+00
            wal start/stop: 000000010000000100000009 / 000000010000000100000009
            database size: 1.7GB, database backup size: 1.7GB
            repo1: backup set size: 295MB, backup size: 295MB

        full backup: 20230911-135947F
            timestamp start/stop: 2023-09-11 13:59:47+00 / 2023-09-11 14:01:34+00
            wal start/stop: 00000001000000010000000C / 00000001000000010000000C
            database size: 1.7GB, database backup size: 1.7GB
            repo1: backup set size: 295MB, backup size: 295MB

Next, we will simultaneously execute the backup and expire processes using their respective configuration files. This action will trigger both the backup and expiration operations. The expire operation will remove a full backup, specifically the one named 20230911-084334F, in accordance with the repo1-retention-full=2 setting.

postgres@ip-172-31-32-62:~$ date;pgbackrest --config=/etc/pgbackrest/expire.conf --stanza=demo --log-level-console=detail expire
Mon Sep 11 14:04:49 UTC 2023
2023-09-11 14:04:49.030 P00   INFO: expire command begin 2.47: --config=/etc/pgbackrest/expire.conf --exec-id=49974-34c92e8e --lock-path=/tmp/pgbackrest/expire.lock --log-level-console=detail --repo1-path=/var/lib/pgbackrest --repo1-retention-full=2 --stanza=demo
2023-09-11 14:04:49.032 P00   INFO: repo1: expire full backup 20230911-084334F
2023-09-11 14:04:49.040 P00   INFO: repo1: remove expired backup 20230911-084334F
2023-09-11 14:04:49.086 P00 DETAIL: repo1: 14-1 archive retention on backup 20230911-092018F, start = 000000010000000100000009
2023-09-11 14:04:49.087 P00   INFO: repo1: 14-1 remove archive, start = 000000010000000100000005, stop = 000000010000000100000008
2023-09-11 14:04:49.087 P00   INFO: expire command end: completed successfully (64ms)

The full backup was started at the exact same time and ran in parallel:

postgres@ip-172-31-32-62:~$ date; pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail backup --type=full

Mon Sep 11 14:04:49 UTC 2023 INFO: backup command begin 2.47: --config=/etc/pgbackrest/pgbackrest.conf --exec-id=50270-c22c874e --lock-path=/tmp/pgbackrest/backup.lock --log-level-console=detail --pg1-host=172.31.45.240 --pg1-host-user=postgres --pg1-path=/var/lib/postgresql/14/main --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --stanza=demo --type=full

        full backup: 20230911-140450F
            timestamp start/stop: 2023-09-11 14:04:50+00 / 2023-09-11 14:06:36+00
            wal start/stop: 00000001000000010000000E / 00000001000000010000000E
            database size: 1.7GB, database backup size: 1.7GB
            repo1: backup set size: 295MB, backup size: 295MB

Note: When a backup begins and the pgbackrest.conf file lacks a retention setting, pgBackRest still runs the expire command at the end of the backup, but it does not remove anything; it only prints a space-constraint warning, which can be disregarded since we handle the expire operation independently.

INFO: backup command begin 2.47: --config=/etc/pgbackrest/pgbackrest.conf --exec-id=53321-0623da5d --lock-path=/tmp/pgbackrest/backup.lock --log-level-console=detail --pg1-host=172.31.45.240 --pg1-host-user=postgres --pg1-path=/var/lib/postgresql/14/main --pg1-port=5432 --repo1-path=/var/lib/pgbackrest --stanza=demo --type=full
WARN: option 'repo1-retention-full' is not set for 'repo1-retention-full-type=count', the repository may run out of space
HINT: to retain full backups indefinitely (without warning), set option 'repo1-retention-full' to the maximum.
<backup starts >

P00 INFO: expire command begin 2.47: --config=/etc/pgbackrest/pgbackrest.conf --exec-id=53321-0623da5d --lock-path=/tmp/pgbackrest/backup.lock --log-level-console=detail --repo1-path=/var/lib/pgbackrest --stanza=demo
P00 INFO: option 'repo1-retention-archive' is not set - archive logs will not be expired
P00 INFO: expire command end: completed successfully (9ms)

Using this approach, backup and expiration can be initiated as separate processes, preventing the prolonged duration of the expiration operation from hindering subsequent backup attempts. However, maintaining effective monitoring of the repository’s storage capacity is essential to prevent it from running out of space.
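
A minimal sketch of initiating them as separate processes from the postgres user’s crontab on the pgBackRest server (the schedule is illustrative; the two configuration files are the ones created above):

# Daily full backup at 01:00, using the backup configuration and its lock-path
0 1 * * * pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --type=full backup
# Daily expire at 01:05, using the expire configuration and its separate lock-path
5 1 * * * pgbackrest --config=/etc/pgbackrest/expire.conf --stanza=demo expire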

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

 

Download Percona Distribution for PostgreSQL Today!

Sep
22
2023
--

You Need to Actively Support Open Source Software or It Will Disappear

open source software

Percona is dedicated to open source software. But recently, open source software has come under attack. Software that was once open source is being locked away by changing licenses, taking code that you depended on with it. You either get to pay for the privilege of having less freedom or find yourself stuck with rapidly aging software.

Before open source was available, there was little to no interoperability between computer systems. If you ran IBM hardware, then you ran IBM software, and that handy calendaring program that ran on Data General or Digital hardware/software was unavailable to you. You could not compile from scratch, as the code was proprietary and could not be shared. If, by some miracle, you had source code, the chances are that you would be lacking the compiler, libraries, or another component that would keep the program from running the way you wanted. If your vendor’s code lacked a feature and you had the skills to make the required modifications, you probably could not make the changes due to your licensing restrictions. Most of the low-level development tools were either unavailable or extremely expensive. You had two choices: like it or lump it, and in both cases, you paid through the nose for the privilege.

If your one vendor of hardware, software, training, and all tangential things discontinued your sole-sourced system in favor of a newer, better-margined system of their manufacture, then all your investment was suddenly worth next to nothing. Your software, your knowledge, your books, and your career options were also suddenly similarly valued. If you do not believe me, visit your favorite career website and try to find openings for HPUX, Tops-10, RSTS/E, or SUNoS personnel. And the marketplace for RCA, Singer, Apollo, and Wang computing equipment is pretty sad.

Blindsided by a license change

When your open source software suddenly morphs into something with a restrictive license, the possibilities are several. You can stay with the last non-restrictive version, hoping that no CVEs appear for that product, your staff likes working on obsolete software, and your other software does not develop dependencies on later versions.

You could fork your own version, starting with the last open release. But do you really have the skills, the knowledge, and the time to create a new version of an existing product? Is the passion there for you to continue support? Adopting a project is a big step that not too many can take.

Options? Do you have any other choices?

Maybe work-alikes can do what you need? MongoDB changed its license, and many were caught unaware, facing a decision thrust upon them: accept the more restrictive license or move elsewhere. The choice could be to switch to another document database, or maybe to a relational database like PostgreSQL or MySQL that supports JSON data types, but that requires a big rewrite of your application to support the new database. New technologies like FerretDB, which implements the MongoDB protocol on top of PostgreSQL, may not need a major rewrite of the application if the MongoDB features you depend upon are available in FerretDB. FerretDB just might be the answer to your problem, or close enough to satisfy your requirements. If close enough is not good enough, you need to consider a major architectural change and the resulting costs in time, money, and sanity.

When a vendor changes their license, they may be saying that they need a better financial return on their investment, but they are also heavily discounting your existing investment in your technology. Your sunk costs in time, personnel, and product were just heavily reduced by their actions. You get no appeal. If this incenses you, then you need to do something about it.

Keeping open source software is no longer a passive act. If you like to use open source software, then you need to actively support it.

What can you do?

This is a plea for you to support open source software by participating in an event next March in Pasadena, California: the Southern California Linux Expo, or SCaLE. SCaLE is the largest community-run open source and free software conference in North America. The presentations have always come from the community and feature the gamut from well-known individuals in the community to those less famous but still passionate about their subjects. Percona has sponsored this show for many years, provided presenters, and participated in the expo hall. Thousands of people attend each year, and their support helps keep open source software flourishing.

As you would expect, Percona is involved in several of the presentation tracks that feature database or Kubernetes topics. We are instrumental in helping launch a Data on Kubernetes track. And we will be back in the PostgreSQL and MySQL tracks as well.

What you need to do is to participate actively

If you fear the future of open source software is waning and want to show your support for it, you need to be in Pasadena next March. Your attendance shows those on the forefront that you support open-source software. Submitting a talk allows the SCaLE track committees to present the best information to our community. Please make no mistake – open source software is vulnerable, and simply hoping all goes well may be disastrous. You need to participate.

And tell the people in the projects used in your organization that you appreciate their work. This simple act of acknowledging their efforts may not repay them for lost time with their families, make up for the lack of a Porsche in their driveway, or even help them to own part of a football team, but it does show your appreciation of what they do.

And look for your local user groups, Meetups, and conferences. The days of passive support must end, or you will find yourself with a huge increase in operational costs or be stuck with old software. Either of those two choices takes freedom, options, and your future away.
