Nov
30
2021
--

Kids’ Fire Truck Beds For Future Firefighters!

Many children love to play firefighters, but it’s not just about pretending to fight fires. Firefighters are also responsible for saving people’s lives and keeping their communities safe. If you have a son or daughter who loves pretend play, why not encourage their interest in saving lives? Wouldn’t you feel better knowing that when they do fall asleep in a comfortable firetruck bed, they’ll be ready for any emergency?

A firetruck bed can open up a wide range of imaginative play experiences for your children. And having a child who is interested in becoming a firefighter one day will definitely benefit them in the future.

So here are some fun, creative firetruck beds for your children to enjoy!

Kidkraft Fire Truck Toddler Bed

This bright red wooden firetruck bed will be a hit with any boy or girl who loves all things emergency service. It’s the perfect size for toddlers, and it has everything they need to feel safe at bedtime. This Kidkraft fire truck bed is absolutely worth considering if your children are between 15 months and 8 years old.

The Kidkraft Fire Truck Toddler Bed makes moving from a cot to a regular bed as simple as possible. It’s low enough to the ground so that children can get in and out with ease.

The design of this fire truck bed is very impressive. It can be set up in just minutes, and it looks great in any bedroom. As with all Kidkraft products, their fire truck beds are made of top-quality materials safe for children.

Because of its size, this fire truck bed is also excellent value for money. It’s the best option available in terms of price-to-quality ratio! If you have little boys or girls who love to play firefighters at home, the Kidkraft Fire Truck Toddler Bed is a safe choice.

Just keep in mind that a mattress is not included with the bed, so you’ll have to buy one separately. However, this fire truck bed fits most crib mattresses.

DHP Junior Silver Metal Loft Bed with White Slide and Fire Department Curtain Set

This is the perfect gift for the firefighter in training! It’s ideal for children between 4 and 12 years old. The bed is designed to look like a fire truck, complete with a ladder and a slide.

This loft bed is very stable, so you don’t have to worry about it swaying while your children are playing. It’s a wise choice if you want something that will last for years! One of the most impressive things about this fire truck bed is its versatility. No matter how many times your children play with it, they’ll always find new things to do.

This is a good-looking bed that will get your children excited about going to sleep. It’s big enough for kids who weigh more than 50 pounds, and the ladder is sturdy yet comfortable.

The DHP Junior Fire Truck Loft Bed is straightforward to set up, which is something all parents are looking for. It comes with the necessary tools required to ensure that the construction process goes smoothly.

If you’re looking for a fire truck bed and slide combo, this is the way to go. It’s exceptionally well-built, and it comes with everything your children need to have fun while getting comfortable at night.

Delta Children Wood Toddler Bed, Nick Jr. PAW Patrol

If your children are PAW Patrol fans, they’ll definitely love this bed! This bed will also be perfect for any kid who dreams of being a firefighter when they grow up. Perhaps they’ll even want to become a firefighter just like Marshall, one of the main characters in the PAW Patrol series.

This toddler bed is very durable, and it’s great for children who are between 15 months and 7 years old. It has all the features they need to feel secure in their room, and it’s an enjoyable way for them to master going from a cot to a bed.

The Delta Children PAW Patrol Wood Toddler Bed is very safe because of its low height. The slats are low enough so that your children can get in and out of bed on their own. It’s double-sided, so there are no sharp edges.

The construction of this fire truck toddler bed is made to last. It has a very smooth finish, and it’s sturdy enough to support kids who weigh up to 50 pounds!

If you have a Paw Patrol fan at home, the Delta Children PAW Patrol Wood Toddler Bed is a wise choice. It’s a great bed that will last for years, and it’s perfect for children who are transitioning to a regular bed.

Why do kids want to be firefighters when they grow up?

Kids often want to be firefighters when they grow up because firefighters are the guardians of our cities and towns. As well as rescuing people from danger, firefighters also ensure that everyone follows fire safety rules.

Firefighters usually spend their days at the fire station, where all communication between stations and other emergency services happens. In the event of an emergency, firefighters take action. They drive their fire engines to where they’re needed and rescue the people who need to be saved.

Firefighters are also responsible for putting out fires with their hoses or extinguishers. Sometimes they must use special apparatus, like water-filled backpacks, that let them control exactly where and how the water is applied. Getting to use cool-looking equipment is probably another reason why kids want to be firefighters.

More importantly, firefighters also work with other emergency services like police and ambulance officers to reduce accidents and injuries.

Firefighters need to be brave and responsible, and they must always do their best even when there’s a risk of danger. They may have been born with characteristics such as these, or they may learn them as they grow up.

My child wants to become a firefighter – what can I do?

If you’ve noticed that your child is interested in becoming a firefighter when they grow up, that shouldn’t come as a surprise. Letting your children know about the duties of firefighters and showing them fire trucks and other emergency vehicles will help them prepare for a future career.

It’s also okay to let your children play with toy fire trucks and equipment. It will help them imagine being a firefighter when they grow up, and it will help them prepare as well. Just be sure that the toys are appropriate for their age and sturdy enough to withstand hours of play.

To inspire your children to become a firefighter when they grow up, it’s also a good idea for you to know more about the job. In addition, you should join fire safety campaigns in your neighborhood and city.

By showing your support for firefighters, teaching your kids about fire safety, and joining campaigns to educate everyone on how to prevent fires, you’re helping the next generation realize their dream.

The post Kids’ Fire Truck Beds For Future Firefighters! appeared first on Comfy Bummy.

Nov
29
2021
--

Talking Drupal #323 – Pantheon Autopilot

Welcome to Talking Drupal. Today we are talking about Pantheon Autopilot with Nathan Tyler.

TalkingDrupal.com/323

Topics

  • Nic – Firefly iii
  • Kevin – Visiting Asheville
  • Nathan – Working on 227 open source repos, soccer league
  • John – Drupal Providence
  • Pantheon Autopilot
  • Visual regression tests
  • How it works
  • Comparison with backstop js
  • Deployment workflow
  • Composer integration
  • Compatible Drupal versions
  • Other platforms
  • Pantheon upstreams
  • Development process
  • Acquisition
  • Automatic updates initiative in Drupal core
  • Developer reactions
  • Need for autopilot once automatic updates are supported
  • Roadmap
  • Adding it to your account
    • cost
  • Most surprising project from pantheon

Resources

Guests

Nathan Tyler – @getpantheon

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Kevin Thull – @kevinjthull

MOTW

Webform – The Webform module allows you to build any type of form to collect any type of data, which can be submitted to any application or system. Every single behavior and aspect of your forms and their inputs is customizable. Whether you need a multi-page form containing a multi-column input layout with conditional logic or a simple contact form that pushes data to Salesforce/CRM, it is all possible using the Webform module for Drupal 8/9.

Nov
29
2021
--

PostgreSQL 14 Database Monitoring and Logging Enhancements


PostgreSQL-14 was released in September 2021, and it contained many performance improvements and feature enhancements, including some features from a monitoring perspective. As we know, monitoring is the key element of any database management system, and PostgreSQL keeps updating and enhancing the monitoring capabilities. Here are some key ones in PostgreSQL-14.

Query Identifier

A query identifier is used to identify a query and can be cross-referenced between extensions. Prior to PostgreSQL-14, extensions used their own algorithms to calculate the query_id; usually they all used the same algorithm, but any extension could use its own. PostgreSQL-14 now optionally computes a query_id in the core, and monitoring extensions and utilities like pg_stat_activity, EXPLAIN, and pg_stat_statements use this query_id instead of calculating their own. The query_id can also be seen in csvlog after specifying it in the log_line_prefix. From a user perspective, there are two benefits to this feature.

  • All the utilities/extensions will use the same query_id calculated by the core, which makes it easy to cross-reference the query_id. Previously, all the utilities/extensions needed to implement the same algorithm in their code to achieve this.
  • The second benefit is that extensions/utilities can use the core-calculated query_id and don’t need to compute it again, which is a performance benefit.

PostgreSQL introduces a new GUC configuration parameter, compute_query_id, to enable/disable this feature. The default is auto; it can be turned on/off in the postgresql.conf file or using the SET command.

  • pg_stat_activity

SET compute_query_id = off;

SELECT datname, query, query_id FROM pg_stat_activity;
 datname  |                                 query                                 | query_id 
----------+-----------------------------------------------------------------------+----------
 postgres | select datname, query, query_id from pg_stat_activity;                |         
 postgres | UPDATE pgbench_branches SET bbalance = bbalance + 2361 WHERE bid = 1; |

SET compute_query_id = on;

SELECT datname, query, query_id FROM pg_stat_activity;
 datname  |                                 query                                 |      query_id       
----------+-----------------------------------------------------------------------+---------------------
 postgres | select datname, query, query_id from pg_stat_activity;                |  846165942585941982
 postgres | UPDATE pgbench_tellers SET tbalance = tbalance + 3001 WHERE tid = 44; | 3354982309855590749

  • Log

In the previous versions, there was no mechanism to compute the query_id in the server core. The query_id is especially useful in the log files. To enable that, we need to configure the log_line_prefix configuration parameter. The “%Q” option is added to show the query_id; here is the example.

log_line_prefix = 'query_id = [%Q] -> '

query_id = [0] -> LOG:  statement: CREATE PROCEDURE ptestx(OUT a int) LANGUAGE SQL AS $$ INSERT INTO cp_test VALUES (1, 'a') $$;
query_id = [-6788509697256188685] -> ERROR:  return type mismatch in function declared to return record
query_id = [-6788509697256188685] -> DETAIL:  Function's final statement must be SELECT or INSERT/UPDATE/DELETE RETURNING.
query_id = [-6788509697256188685] -> CONTEXT:  SQL function "ptestx"
query_id = [-6788509697256188685] -> STATEMENT:  CREATE PROCEDURE ptestx(OUT a int) LANGUAGE SQL AS $$ INSERT INTO cp_test VALUES (1, 'a') $$;

  • Explain

The EXPLAIN VERBOSE will show the query_id if compute_query_id is true.

SET compute_query_id = off;

EXPLAIN VERBOSE SELECT * FROM foo;
                          QUERY PLAN                          
--------------------------------------------------------------

 Seq Scan on public.foo  (cost=0.00..15.01 rows=1001 width=4)
   Output: a
(2 rows)

SET compute_query_id = on;

EXPLAIN VERBOSE SELECT * FROM foo;
                          QUERY PLAN                          
--------------------------------------------------------------
 Seq Scan on public.foo  (cost=0.00..15.01 rows=1001 width=4)
   Output: a
 Query Identifier: 3480779799680626233
(3 rows)

Autovacuum and Auto-analyze Logging Enhancements

PostgreSQL-14 improves the logging of autovacuum and auto-analyze. Now we can see the I/O timings in the log, showing how much time has been spent reading and writing.

automatic vacuum of table "postgres.pg_catalog.pg_depend": index scans: 1
pages: 0 removed, 67 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 89 removed, 8873 remain, 0 are dead but not yet removable, oldest xmin: 210871
index scan needed: 2 pages from table (2.99% of total) had 341 dead item identifiers removed
index "pg_depend_depender_index": pages: 39 in total, 0 newly deleted, 0 currently deleted, 0 reusable
index "pg_depend_reference_index": pages: 41 in total, 0 newly deleted, 0 currently deleted, 0 reusable

I/O timings: read: 44.254 ms, write: 0.531 ms

avg read rate: 13.191 MB/s, avg write rate: 8.794 MB/s
buffer usage: 167 hits, 126 misses, 84 dirtied
WAL usage: 85 records, 15 full page images, 78064 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.07 s

These logs are only available if track_io_timing is enabled.
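
Note that track_io_timing is off by default because reading the clock adds overhead on some platforms. A minimal way to enable it cluster-wide (assuming superuser access) is:

ALTER SYSTEM SET track_io_timing = on;   -- persisted in postgresql.auto.conf
SELECT pg_reload_conf();                 -- apply without a server restart
SHOW track_io_timing;                    -- verify the new value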

Connection Logging

PostgreSQL already logs connections/disconnections if log_connections/log_disconnections is on. PostgreSQL-14 now also logs the actual username supplied by the user. When external authentication is used and a mapping is defined in pg_ident.conf, it can be hard to identify the actual user name; before PostgreSQL-14, you would only see the mapped user instead of the actual user.

pg_ident.conf

# MAPNAME       SYSTEM-USERNAME         PG-USERNAME

pg              vagrant                 postgres

pg_hba.conf

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     peer map=pg

Before PostgreSQL-14

LOG:  database system was shut down at 2021-11-19 11:24:30 UTC
LOG:  database system is ready to accept connections
LOG:  connection received: host=[local]
LOG:  connection authorized: user=postgres database=postgres application_name=psql

PostgreSQL-14

LOG:  database system is ready to accept connections
LOG:  connection received: host=[local]
LOG:  connection authenticated: identity="vagrant" method=peer (/usr/local/pgsql.14/bin/data/pg_hba.conf:89)
LOG:  connection authorized: user=postgres database=postgres application_name=psql

Conclusion

Every major PostgreSQL release carries significant enhancements, and PostgreSQL-14 was no different.

Monitoring is a key feature of any DBMS, and PostgreSQL keeps improving its logging and monitoring capabilities. With these newly added features, you have more insight into connections, you can easily track queries and observe performance, and you can identify how much time the vacuum process spends in read/write operations. This can significantly help you configure vacuum parameters better.


As more companies look at migrating away from Oracle or implementing new databases alongside their applications, PostgreSQL is often the best option for those who want to run on open source databases.

Read Our New White Paper:

Why Customers Choose Percona for PostgreSQL

Nov
29
2021
--

Magical Christmas for your kids? Here’s what you need!

Hey, Moms and Dads! Good news for you: Santa’s on his way! He’ll be here any minute now, and you don’t need to worry about your children waiting patiently for him. And while we wait, why not get ready and bring the Christmas spirit into your home now?

Christmas time is magical, indeed! It’s the most wonderful time of the year when everyone has that warm-fuzzy feeling inside them. It’s also a time for sharing and caring, as well as spending some family time together. Everything can become magical when you put the right effort into it! Here are some tips to help you make this Christmas truly remarkable for your kids!

1. Let there always be light!

With cold weather outside, it’s essential that you make the atmosphere inside as cozy and warm as possible to give your kids a good start to the day. Start with some colorful lights to brighten up the house!

Cozy lights are easily accessible, and they don’t have to break your bank either. You can find them in all shapes and sizes, so there’s definitely something for everyone!

Our personal favorite is the LANFU LED Icicle Lights; they make your house look like it’s decorated with sparkly icicles! These outdoor Christmas lights have eight different modes, and you can switch between them with just one button. The ambiance of the lights changes to suit various situations and moods, making you feel warm and cheerful.

2. Create a cozy atmosphere for your kids

The fireplace is a perfect centerpiece for creating a warm and welcoming environment in your home. With some help from today’s technology, you don’t need to worry about fire being dangerous for children – you can now get electric fireplaces!

An electric fireplace gives you warmth and coziness while increasing the overall aesthetics of your home. It creates a Christmas feeling without any smoke or fuss that comes with real fireplaces. Flames look super realistic too!

What else? Add some Christmas-themed decorations to make it even cozier. You could also spoon up some hot chocolate or other winter-themed drinks with your kids while you enjoy some nice music in the background. If you’re looking for an excellent way to start this day, then nothing can go wrong with listening to Christmas carols or singing together!

3. Involve the whole family in decorating

Let’s be honest here: who doesn’t like getting the house dressed up for Christmas? But don’t forget that the fun shouldn’t be just for the grown-ups – let your kids help you out in decorating, too!

Having them be involved in making the house look nice will also give them self-confidence and pride for their home. They’ll always have the memories of getting home Christmas-ready.

Christmas-themed chair covers are a great way to give your kids the ability to help you out. They are effortless to put on but give a strong effect!

You can choose from many different styles of chair covers for your home so that the kids will love the process even more! They’re inexpensive, unlike other decorations, but they surely add a lot of flavor to any interior. Here are some excellent examples!

Jhua Christmas Back Chair Covers (Set of 3)

The Christmas tree, snowflake, and gnome elf patterns on these linen dining chair covers match the holiday season. The Christmas-themed red, white, and green colors will brighten up your day.

Linen and plaid cloth are used to make those chair back covers, which are long-lasting, wear-resistant, and pleasant to the touch.

CCINEE Christmas Chair Covers Santa Claus Hat (Set of 6)

Your kids will love these chair covers! They are made of the highest quality of fabric, making them super soft and comfortable to use. The Christmas chair back coverings are designed with a red Santa Claus hat and a white plush pom-pom on the top. It’s a lovely touch to your dining area. Cute and fun!

WYSRJ Christmas Chair Back Cover for Dining Room (Set of 6)

Three different styles of Christmas chair covers in one pack: Santa, Reindeer, and a Snowman. The set of 6 covers is a fantastic value for this purchase price. Cute and functional! These chair covers can make your interior look very stylish and festive. It’s a great addition to the holiday table!

4. Let’s bake some cookies!

There’s nothing more welcoming in a home than the smell of freshly baked cookies! But your kids will have even more fun if they get to help you out in making them, too! Also, this way, you can be sure that the ingredients are healthy and natural.

If you’re planning on baking some gingerbread cookies, then your house will smell like Christmas for sure. Your kids will also remember this day as a special family moment, and the time they got to spend with you making those tasty treats!

5. Climb into the Christmas spirit with some special activities

It’s important to take your kids out of the house for a while so they can get back all their energy. Down at the park, you could organize some snowman contests or have them build a snow fort!

Taking a walk is a great way to connect with nature and let your kids enjoy the outdoors. They will love walking across a snowy field, especially if there’s some freshly fallen snow from last night! You can even take a hiking trip if you need some outdoor adventure. Don’t forget to take an outdoor chair if your kids need some rest!

Kids’ outdoor chairs are specially designed to stand the test of time. They are comfortable, lightweight, and waterproof, making them perfect for use outside in any weather conditions.

Coleman Kids Quad Chair is a staple in this category. It comes at a low price, but it will definitely make your kids’ time outside extra comfy! It is so great that we wrote an entire article about it – Coleman Kids Quad Chair review!

6. Don’t forget to have a heart-to-heart talk with your kids

What’s the best way to understand what your kid is thinking? By asking them questions! This can be especially helpful in case one of them is feeling lonely since it’s Christmas. You could also ask about their wishes for this year so you can try to make them come true! Also, remind them that they are exceptional and that you love them very much.

7. Relax and enjoy the holiday spirit!

Christmas is a festive time of year, so it’s not wrong to relax and have some fun together with your family! For example, watching Christmas movies on TV could be a great way to finish this special day together. You can also try playing some board games together, like Jenga for instance. The important thing is to make sure that your kids are happy and safe! After all, this time should be all about your family!

The post Magical Christmas for your kids? Here’s what you need! appeared first on Comfy Bummy.

Nov
29
2021
--

MyDumper 0.11.3 is Now Available


The new MyDumper 0.11.3 version, which includes many new features and bug fixes, is now available. You can download the code from here.

We are very proud to announce that we achieved the two main objectives for this milestone: ZSTD and Stream support! We added four packages with ZSTD support because not all the distributions have support for v1.4 or higher. The libzstd package is required to use ZSTD compression. The ZSTD Bullseye package is only available with libraries for Percona Server for MySQL 8.0. There are two main use cases for the Stream functionality:

  • Importing while you are exporting
  • Remote backups

The drawback is that it relies on the network throughput as we are using a single thread to send the files that have been closed. We are going to explain how this functionality works in another blog post!
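
As a rough sketch of the first use case (host and database names here are placeholders, not from the release notes), the export can be piped straight into myloader:

# export from the source server and import on the target in one pipeline,
# without materializing the backup files on disk first
mydumper --host source-db --database app --stream \
| myloader --host target-db --database app --stream --overwrite-tables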

Enhancement:

Bug/Fixes:

  • Escape double and float values because of -0 #326 #30
  • Fixing const issues after merge zstd #444 #134
  • WITH_ZSTD needs to be set before being used #442
  • Adding better error handling #441 #440
  • Revert #426 #433
  • Database schema creation control added #432 #431
  • Adding anonymized function [Phase I] #427 #428
  • Fixing comment error log in restore_data_in_gstring_from_file #426
  • Adding LC_ALL to allow multi-byte parameters #415 #199
  • Needs to notify main thread to go ahead when "--tables-list" is used without "-B"! #396 #428

Refactoring:

  • Fixing headers #425
  • sync_before_add_index is not needed anymore #416
  • Using generic functions to build filenames #414

Documentation:

  • Modify the log of error #430
  • Fixing readme #420
  • Warning for inconsistencies using multisource #417 #144
  • docs: add brew build dependencies instruction #412
  • [Document] add example #408 #407

Question Addressed:

  • [BUG] Can’t connect to MySQL server using host and port. #434
  • Could not execute query: Unknown error #335

Download MyDumper 0.11.3 Today!

Nov
26
2021
--

Percona Monthly Bug Report: November 2021


Here at Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open source form, report back on any issues or bugs you might encounter along the way.

We constantly update our bug reports and monitor other boards to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. These posts are a central place to get information on the most noteworthy open and recently resolved bugs. 

In this November 2021 edition of our monthly bug report, we have the following list of bugs:

Percona Server for MySQL/MySQL Bugs

PS-7919:  Percona Server crashes during CREATE TABLE statements. The problem was seen when tables have many partitions and there were already many tablespaces present in the instance.

Affects Version/s: 8.0 [Tested/Reported version 8.0.25]

Fixed Version/s: 8.0.26


PS-7866 (MySQL#104961): MySQL crashes when running a complex query with JOINs. The MySQL error log only contains a single line, “Segmentation fault (core dumped)”, with no further details such as backtraces. Further analysis of the core dump shows that something happened in CreateIteratorFromAccessPath.

Affects Version/s: 8.0 [Tested/Reported version 8.0.23, 8.0.25]


PS-1653/PS-4935 (MySQL#78612, MySQL#97001): Optimizer issue for queries with ORDER BY + LIMIT. In some cases, the optimizer chooses a full table scan instead of an index range scan for queries that order by primary key with a limit.

  • It happens only when ORDER BY and LIMIT are used together.
  • The issue is reproducible for tables both with and without a primary key.

Affects Version/s: 5.7,8.0 [Tested/Reported version 5.7.27]

Fixed Version/s: 8.0.21, 5.7.33


MySQL 8.0.27 was recently released with some major changes and improvements. The following are a few noticeable ones:

  • Previously, MySQL user accounts were authenticated to the server using a single authentication method. MySQL now supports multifactor authentication (MFA), which makes it possible to create accounts that have up to three authentication methods.
  • The default_authentication_plugin variable is deprecated as of MySQL 8.0.27. Its replacement is the authentication_policy system variable, introduced in MySQL 8.0.27 with the multifactor authentication feature.
  • MySQL MTS Replication: Multithreading is now enabled by default for replica servers.

Default values for the MTS variables are as follows:

replica_parallel_workers=4

replica_preserve_commit_order=1

replica_parallel_type=LOGICAL_CLOCK
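
You can verify these defaults on an 8.0.27 replica, for example:

-- inspect the multithreaded replication defaults
SHOW VARIABLES LIKE 'replica_parallel%';
SHOW VARIABLES LIKE 'replica_preserve_commit_order';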


Percona XtraDB Cluster

PXC-3724: PXC node crashes with a long semaphore. This issue occurs due to locking when writing on multiple nodes in a PXC cluster. It is critical as it blocks all nodes from performing any transactions and finally crashes the PXC node.

Affects Version/s: 8.0  [Tested/Reported version 8.0.22,8.0.25]


PXC-3729: PXC node fails with two applier threads due to conflict. In a certain combination of a table having both a primary key and a (multi-column) unique key, and a highly concurrent write workload updating similar UK values, parallel appliers on secondary nodes may conflict with each other, causing nodes to abort in an inconsistent state. This can be seen even when writes are going to a single writer node only. As the transactions were already certified and replicated, they should never fail on the appliers.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

Fixed Version/s: 8.0.25


PXC-3449: When ALTER TABLE (TOI) is executed in a user session, it sometimes conflicts (MDL) with a high-priority transaction, which causes a BF-BF abort and server termination.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

Fixed Version/s: 8.0.25


PXC-3387: The server hits an assertion while a query performs an intermediate commit during an update of table stats in the Data Dictionary. You will see this issue when autocommit=0.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

Fixed Version/s: 8.0.25


Percona XtraBackup

PXB-2629:  Downloading a backup using xbcloud on Debian 10 does not work. The download gets stuck at some point forever, or it fails with a segmentation fault message at the end.

The issue was traced to the curl version. The default curl version on Debian 10 is curl 7.64.0 (libcurl/7.64.0); upgrading it to curl 7.74.0 (x86_64-pc-linux-gnu, libcurl/7.74.0) fixes the issue.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

Fixed Version/s: 8.0.25


Percona Toolkit

PT-1889: Incorrect output when using pt-show-grants for users based on MySQL roles. As a result, the grants cannot be applied back properly on the MySQL server, so pt-show-grants cannot be used for MySQL roles until this issue is fixed.

Affects Version/s:  3.2.1


PT-1747: pt-online-schema-change was bringing the database into a broken state when applying the “rebuild_constraints” foreign key modification method if any of the child tables were blocked by a metadata lock.

Affects Version/s:  3.0.13

Fixed Version: 3.4.0


PMM  [Percona Monitoring and Management]

PMM-7116: MongoDB ReplSet Summary Dashboard shows incorrect replset member’s state: STARTUP instead of PRIMARY.

Affects Version/s: 2.x  [Tested/Reported version 2.12,2.20]

Fixed Version:  2.25.0


PMM-9085: After upgrading to 2.22, the pmm-managed component crashes with the following error: panic: interface conversion: agentpb.AgentResponsePayload is nil, not *agentpb.GetVersionsResponse

Affects Version/s: 2.x  [Tested/Reported version 2.22]

Fixed Version:  2.25.0


PMM-9156: The pmm-agent paths-base option does not work for the pmm2-client binary installation in PMM 2.23.0. Starting the pmm-agent process gives “level=error msg=”Error reading textfile collector directory”

Affects Version/s: 2.x  [Tested/Reported version 2.23]


PMM-7846:  Adding a MongoDB instance via pmm-admin with the tls option does not work, failing with the error Connection check failed: timeout (context deadline exceeded).

Affects Version/s: 2.x  [Tested/Reported version 2.13, 2.16]


Percona Server for MongoDB

PSMDB-892:  RWC defaults pollute the logs with duplicate “Refreshed RWC defaults” messages; as a result, the log is saturated with the message in the title.

Affects Version/s:  4.4.6

Fixed Version: 4.4.8


WT-7984:  This issue in MongoDB 4.4.8 causes a checkpoint thread to read and persist an incomplete version of data to disk. Data in memory remains correct unless the server crashes or experiences an unclean shutdown. Then, the inconsistent checkpoint is used for recovery and introduces corruption.

The bug is triggered on cache pages that receive an update during a running checkpoint and which are evicted during the checkpoint.

Affects Version/s:  4.4.8

Fixed Version: 4.4.9


PSMDB-671: createBackup returns ok:1 for archived backup even when there is no disk space available.

Affects Version/s: 4.0.12-6, 4.2.1-1, 3.6.15-3.5

Fixed Version: 3.6.19-7.0, 4.0.20-13, 4.2.9-9


Percona Distribution for PostgreSQL

DISTPG-317:  When installing Percona PostgreSQL 13 from its repository, the package dependencies are such that an already installed PostgreSQL Community 12 will be removed.

Affects Version/s: 13.4


Summary

We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, How to Report Bugs, Improvements, New Feature Requests for Percona Products.

For the most up-to-date information, be sure to follow us on Twitter, LinkedIn, and Facebook. 

Quick References:

Percona JIRA

MySQL Bug Report

Report a Bug in a Percona Product

MySQL 8.0.27 Release notes

https://jira.mongodb.org/

___

About Percona:

As the only provider of distributions for all three of the most popular open source databases—PostgreSQL, MySQL, and MongoDB—Percona provides expertise, software, support, and services no matter the technology.

Whether it’s enabling developers or DBAs to realize value faster with tools, advice, and guidance, or making sure applications can scale and handle peak loads, Percona is here to help.

Percona is committed to being open source and preventing vendor lock-in. Percona contributes all changes to the upstream community for possible inclusion in future product releases.

Nov
24
2021
--

Querying Archived RDS Data Directly From an S3 Bucket


A recommendation we often give to our customers is along the lines of “archive old data” to reduce your database size. There is a tradeoff between keeping all our data online and archiving part of it to cold storage.

There could also be legal requirements to keep certain data online, or you might want to query old data occasionally without having to go through the hassle of restoring an old backup.

In this post, we will explore a very useful feature of AWS RDS/Aurora that allows us to export data to an S3 bucket and run SQL queries directly against it.

Archiving Data to S3

Let’s start by describing the steps we need to take to put our data into an S3 bucket in the required format, which is called Apache Parquet.

Amazon states the Parquet format is up to 2x faster to export and consumes up to 6x less storage in S3, compared to other text formats.

1. Create a snapshot of the database (or select an existing one)

2. Create a customer-managed KMS key to encrypt the exported data


3. Create an IAM role (e.g. exportrdssnapshottos3role)

4. Create an IAM policy for the export task and assign it to the role

{
    "Version": "2012-10-17",
    "Id": "Policy1636727509941",
    "Statement": [
        {
            "Sid": "Stmt1636727502144",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789:role/service-role/exportrdssnapshottos3role"
            },
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::test-athena",
                "arn:aws:s3:::test-athena/exports/*"
            ]
        }
    ]
}

5. Optional: Create an S3 bucket (or use an existing one)

6. Set a trust policy on the IAM role so that the RDS export service can assume it, e.g.:

{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
            "Service": "export.rds.amazonaws.com"
          },
         "Action": "sts:AssumeRole"
       }
     ] 
   }

7. Export the snapshot to Amazon S3 as an Apache Parquet file. You can choose to export specific sets of databases, schemas, or tables
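
The same export can also be started from the AWS CLI instead of the console; the identifiers and ARNs below are placeholders to adapt to your account:

# start the snapshot export task; progress can be followed with describe-export-tasks
aws rds start-export-task \
    --export-task-identifier log-archive-export \
    --source-arn arn:aws:rds:us-east-1:123456789:snapshot:mydb-snapshot \
    --s3-bucket-name test-athena \
    --s3-prefix exports \
    --iam-role-arn arn:aws:iam::123456789:role/service-role/exportrdssnapshottos3role \
    --kms-key-id arn:aws:kms:us-east-1:123456789:key/my-kms-key-id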


Querying the Archived Data

When you need to access the data, you can use Amazon Athena to query the data directly from the S3 bucket.

1. Set a query result location


2. Create an external table in the Athena Query editor. We need to map the MySQL column types to equivalent Athena types

CREATE EXTERNAL TABLE log_requests (
  id DECIMAL(20,0),
  name STRING, 
  is_customer TINYINT,
  created_at TIMESTAMP,
  updated_at TIMESTAMP
)
STORED AS PARQUET
LOCATION 's3://aurora-training-s3/exports/2021/log/'
tblproperties ("parquet.compression"="SNAPPY");

3. Now we can query the external table from the Athena Query editor

SELECT name, COUNT(*)
FROM log_requests
WHERE created_at >= CAST('2021-10-01' AS TIMESTAMP)
  AND created_at < CAST('2021-11-01' AS TIMESTAMP)
GROUP BY name;

Removing the Archived Data from the Database

After testing that we can query the desired data from the S3 bucket, it is time to delete archived data from the database for good. We can use the pt-archiver tool for this task.
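
A minimal sketch of such a purge (connection details and the cutoff date are placeholders), deleting in small batches to limit locking and replication lag:

# purge rows that are already archived in S3, 1000 rows at a time
pt-archiver \
    --source h=mydb-host,D=mydb,t=log_requests \
    --purge \
    --where "created_at < '2021-10-01'" \
    --limit 1000 --commit-each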

Having a smaller database has several benefits. To name a few: your backup/restore will be faster, you will be able to keep more data in memory so response times improve, you may even be able to scale down your server specs and save some money.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Nov
24
2021
--

The Best Animal Chairs for Kids

Children of all ages can benefit from adding a chair to their bedroom, playroom, or dorm. There are many factors parents should consider when choosing an appropriate seating option for kids, including safety and comfort levels. Animal chairs are not only adorable – they also add color and character to any space. Whether your child is just starting out on their own or growing into a teenager, you can purchase animal chairs in various styles to suit any size and decor.

1. Animal Adventure | Sweet Seats | Teal Unicorn Children’s Plush Chair

If your child loves unicorns, this teal plush chair is the perfect addition to their bedroom. The soft beige head of the unicorn features a mane and wings that truly stand out against its vibrant color scheme.

Your little one will fall in love with this eye-catching piece from Sweet Seats. Featuring a comfortable and plush cushion seat, this chair is sure to become one of your child’s favorites. This piece, along with other Sweet Seats animal chairs, can be purchased from Amazon.

2. DEMDACO Polly Pink Puppy Large Children’s Plush Stuffed Animal Chair

If your child loves puppies, then this piece from DEMDACO should be added to their collection. The plush, upholstered puppy features a pink and brown color scheme and a sturdy base that can withstand significant weight.

Like other animal chairs from DEMDACO, this chair is an outstanding option for kids of all ages. This piece can be purchased from Amazon.

3. Soft Landing | Sweet Seats | Premium Monkey Children’s Plush Chair

If your child loves monkeys, they’ll fall in love with this plush piece from Sweet Seats. The adorable design features a monkey face and eyes on one side of the chair and a comfy back cushion. Your little one will love curling up in this cute seat to read or relax after a long day of play.

4. Delta Children Cozy Children’s Chair – Fun Animal Character, Panda

This fun, white, black and pink chair will fit perfectly with your little girl’s princess room. The rectangular design is attractive and straightforward yet still includes a cute panda smile. Your daughter will love curling up in this adorable piece for storytime or simply to relax after a long day of play.

5. Fantasy Fields – Happy Farm Animals Hand Crafted Kids Wooden Chair – Piggy

Although this piece is not plush, it is just as adorable for your child’s bedroom. The smiling pig design makes the perfect addition to a farm-themed room or any place where your little one needs a comfortable seat.

What to Look for in a Good Animal Chair

While there are many animal chairs out on the market today, it can be challenging to find one that is both safe and comfortable for your child. Here are a few things you should look for when shopping around:

Safety first

Your child’s safety comes first! The most important thing to look for in an animal chair is its safety rating. While most chairs on the market today come with a safety rating, you should always check to make sure that it matches your child’s age and weight. If you are concerned about your child using the chair unsupervised, look for pieces that have safety restraints included.

Design

Another essential thing to look for in an animal chair is a comfortable design. Children can get bored and frustrated if the seating options they have available are uncomfortable or impossibly small. Look for animal chair designs that include larger dimensions and a soft cushion seat to keep your child comfortable and happy throughout their playtime.

Practicality

The best animal chairs are those that you can use for more than just a play piece. Some of the most popular styles on the market today include storage options, extra seating, and even reading stands to make these pieces practical additions to any child’s room or play area.

Additional Features

More and more animal chairs are equipped with additional features to make them even more exciting for children. Some of the most popular add-ons include sounds, lights, and music players that allow your child to become part of a fun adventure while sitting in their new chair. If your child loves these types of activities, look for an animal chair equipped with these features.

How to Keep Your Child’s Animal Chairs and Children’s Chair Covers Clean

Just like any other piece of furniture or toy, your child’s animal chairs and children’s chair covers will need regular cleaning to keep them looking great for years to come. Fortunately, most products on the market today are made from low-maintenance materials and can be easily wiped clean with a damp cloth. If your child’s chair cover is machine washable, always check the manufacturer’s care instructions before washing to ensure that you are using the best cleaning methods possible.

Where to Buy Animal Chairs for Kids

There are dozens of different places to purchase animal chairs for kids online and in stores near you. When looking for a retailer, keep in mind that not all of them will offer the same quality or customer service level. Here at Comfy Bummy, we chose to partner with Amazon.com because we feel they offer the best ratio between price and quality.

Final Thoughts on Animal Chairs for Kids

If your child loves bright and colorful furniture, animal chairs are sure to be some of their favorite pieces in the home. These fun pieces come in different colors and styles, so you should have little trouble finding something that meets your needs. Animal chairs can also help keep kids entertained throughout their playtime. Look for new animal chair covers to help make your little one’s bedroom more exciting.

The post The Best Animal Chairs for Kids appeared first on Comfy Bummy.

Nov
23
2021
--

Multi-Tenant Kubernetes Cluster with Percona Operators


There are cases where multiple teams, customers, or applications run in the same Kubernetes cluster. Such an environment is called multi-tenant and requires some preparation and management. Multi-tenant Kubernetes deployment allows you to utilize the economy of scale model on various levels:

  • Smaller compute footprint – one control plane, dense container deployments
  • Ease of management – one cluster, not hundreds

In this blog post, we are going to review multi-tenancy best practices and recommendations, and see how Percona Kubernetes Operators can be deployed and managed in such Kubernetes clusters.

Multi-Tenancy

Generic

Multi-tenancy usually means a lot of Pods and workloads in a single cluster. You should always remember that there are certain limits when designing your infrastructure. For vanilla Kubernetes, these limits are quite high and hard to reach:

  • 5000 nodes
  • 10 000 namespaces
  • 150 000 pods

Managed Kubernetes services have their own limits that you should keep in mind. For example, GKE allows a maximum of 110 Pods per node on a standard cluster and only 32 on GKE Autopilot nodes.

The older AWS EKS CNI plugin limited the number of Pods per node to the number of IP addresses an EC2 instance can have. Even with prefix assignment enabled in the CNI, you are still going to hit the limit of 110 Pods per node.
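
For reference, with the AWS VPC CNI (version 1.9.0 or newer) prefix assignment is toggled through an environment variable on the aws-node DaemonSet; a minimal sketch:

# enable prefix delegation so nodes hand out /28 prefixes instead of individual IPs
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true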

Namespaces

Kubernetes Namespaces provide a mechanism for isolating groups of resources within a single cluster. K8s objects are either cluster-scoped or namespace-scoped. Objects which are accessible across all the namespaces, like ClusterRole, are cluster-scoped, and those which are accessible only in a single namespace, like Deployments, are namespace-scoped.


Deploying a database with Percona Operators creates pods that are namespace scoped. This provides interesting opportunities to run workloads on different namespaces for different teams, projects, and potentially, customers too. 

Example: Percona Distribution for MongoDB Operator and Percona Server for MongoDB can be run on two different namespaces by adding namespace metadata fields. Snippets are as follows:

# Team 1 DB running in team1-db namespace
apiVersion: psmdb.percona.com/v1-11-0
kind: PerconaServerMongoDB
metadata:
 name: team1-server
 namespace: team1-db

# Team 1 deployment running in team1-db namespace
apiVersion: apps/v1
kind: Deployment
metadata:
 name: percona-server-mongodb-operator-team1
 namespace: team1-db


# Team 2 DB running in team2-db namespace
apiVersion: psmdb.percona.com/v1-11-0
kind: PerconaServerMongoDB
metadata:
 name: team2-server
 namespace: team2-db

# Team 2 deployment running in team2-db namespace
apiVersion: apps/v1
kind: Deployment
metadata:
 name: percona-server-mongodb-operator-team2
 namespace: team2-db

Suggestions:

  1. Avoid using the standard namespaces like kube-system or default.

  2. It’s always better to run independent workloads on different namespaces unless there is a specific requirement to do it in a shared namespace.

Namespaces can be used per team, per application environment, or any other logical structure that fits the use case.

Resources

The biggest problem in any multi-tenant environment is this – how can we ensure that a single bad apple doesn’t spoil the whole bunch of apples?

ResourceQuotas

Thanks to ResourceQuotas, we can restrict the resource utilization of namespaces. ResourceQuotas also allows you to restrict the number of k8s objects which can be created in a namespace.

Example of the YAML manifest with resource quotas:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team1-quota         
  namespace: team1-db    # Namespace where operator is deployed
spec:
  hard:
    requests.cpu: "10"     # Cumulative CPU requests of all k8s objects in the namespace cannot exceed 10vcpu
    limits.cpu: "20"       # Cumulative CPU limits of all k8s objects in the namespace cannot exceed 20 vcpu
    requests.memory: 10Gi  # Cumulative memory requests of all k8s objects in the namespace cannot exceed 10Gi
    limits.memory: 20Gi    # Cumulative memory limits of all k8s objects in the namespace cannot exceed 20Gi
    requests.ephemeral-storage: 100Gi  # Cumulative ephemeral storage request of all k8s objects in the namespace cannot exceed 100Gi
    limits.ephemeral-storage: 200Gi    # Cumulative ephemeral storage limits of all k8s objects in the namespace cannot exceed 200Gi
    requests.storage: 300Gi            # Cumulative storage requests of all PVC in the namespace cannot exceed 300Gi
    persistentvolumeclaims: 5          # Maximum number of PVC in the namespace is 5
    count/statefulsets.apps: 2         # Maximum number of statefulsets in the namespace is 2
    # count/psmdb: 2                   # Maximum number of PSMDB objects in the namespace is 2, replace the name with proper Custom Resource

Please refer to the Resource Quotas documentation and apply quotas that are required for your use case.

If resource quotas are applied to a namespace, it is required to set containers’ requests and limits, otherwise, you are going to have an error similar to the following:

Error creating: pods "my-cluster-name-rs0-0" is forbidden: failed quota: my-cpu-memory-quota: must specify limits.cpu,requests.cpu

All Percona Operators provide the capability to fine-tune the requests and limits. The following example sets CPU and memory requests for Percona XtraDB Cluster containers:

spec:
  pxc:
    resources:
      requests:
        memory: 4G
        cpu: 2

LimitRange

With ResourceQuotas we can control the cumulative resources in the namespaces, but if we want to enforce constraints on individual Kubernetes objects, LimitRange is a useful option.

For example, if Teams 1, 2, and 3 are each provided a namespace to run workloads, ResourceQuota will ensure that none of the teams can exceed the quotas allocated and over-utilize the cluster… but what if a badly configured workload (say an operator run from team 1 with a higher priority class) is utilizing all the resources allocated to the team?

LimitRange can be used to enforce resources like compute, memory, ephemeral storage, and storage with PVC. The example below highlights some of the possibilities.

apiVersion: v1
kind: LimitRange
metadata:
  name: lr-team1
  namespace: team1-db
spec:
  limits:
  - type: Pod                      
    max:                            # Maximum resource limit of all containers combined. Consider setting default limits
      ephemeral-storage: 100Gi      # Maximum ephemeral storage cannot exceed 100GB
      cpu: "800m"                   # Maximum CPU limits of the Pod is 800mVCPU
      memory: 4Gi                   # Maximum memory limits of the Pod is 4 GB
    min:                            # Minimum resource request of all containers combined. Consider setting default requests
      ephemeral-storage: 50Gi       # Minimum ephemeral storage should be 50GB
      cpu: "200m"                   # Minimum CPU request is  200mVCPU
      memory: 2Gi                   # Minimum memory request is 2 GB
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi                  # Maximum PVC storage limit
    min:
      storage: 1Gi                  # Minimum PVC storage request

Suggestions:

  1. When it’s feasible, apply ResourceQuotas and LimitRanges to the namespaces where the Percona operator is running. This ensures that tenants are not overutilizing the cluster.

  2. Set alerts to monitor objects and usage of resources in namespaces. Automation of ResourceQuotas changes may also be useful in some scenarios.

  3. It is advisable to use a buffer on maximum expected utilization before setting the ResourceQuotas.

  4. Set LimitRanges to ensure workloads are not overutilizing resources in individual namespaces.

Roles and Security

Kubernetes provides several modes to authorize an API request. Role-Based Access Control (RBAC) is a popular way of authorization. There are four important objects to provide access:

  • ClusterRole – represents a set of permissions across the cluster (cluster scope)
  • Role – represents a set of permissions within a namespace (namespace scope)
  • ClusterRoleBinding – grants permissions to subjects across the cluster (cluster scope)
  • RoleBinding – grants permissions to subjects within a namespace (namespace scope)

Subjects in the RoleBinding/ClusterRoleBinding can be users, groups, or service accounts. Every pod running in the cluster has an identity and a service account attached (the “default” service account in the same namespace is attached if not explicitly specified). Permissions granted to the service account with RoleBinding/ClusterRoleBinding dictate the access that pods will have.

Going by the principle of least privilege, it’s always advisable to use Roles with the least set of permissions and bind them to a service account with a RoleBinding. This service account can be used to run the operator or custom resource to ensure proper access and also restrict the blast radius.

Avoid granting cluster-level access unless there is a strong use case to do it.

Example: RBAC in the MongoDB Operator uses a Role and a RoleBinding, restricting access to a single namespace for the service account. The same service account is used for both the CustomResource and the Operator.
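
As an illustrative sketch (the names and the permission list are hypothetical, not the Operator’s actual RBAC manifest), a namespace-scoped Role bound to a service account looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team1-db-role
  namespace: team1-db
rules:
# least privilege: list only the resources and verbs that are actually needed
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team1-db-rolebinding
  namespace: team1-db
subjects:
- kind: ServiceAccount
  name: team1-operator-sa
  namespace: team1-db
roleRef:
  kind: Role
  name: team1-db-role
  apiGroup: rbac.authorization.k8s.io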

Network Policies

Network isolation provides additional security to applications and customers in a multi-tenant environment. Network policies are Kubernetes resources that allow you to control the traffic between Pods, CIDR blocks, and network endpoints, but the most common approach is to control the traffic between namespaces:

kubernetes network policies

Most Container Network Interface (CNI) plugins support the implementation of network policies; however, if they don’t and a NetworkPolicy is created, the resource is silently ignored. For example, AWS CNI does not support network policies, but AWS EKS can run Calico CNI, which does.

It is good practice to follow the least-privilege approach, whereby traffic is denied by default and access is granted granularly:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app1-db
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Allow traffic from Pods in namespace app1 to namespace app1-db:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-app1
  namespace: app1-db
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: app1
  policyTypes:
  - Ingress

Policy Enforcement

In a multi-tenant environment, policy enforcement plays a key role. Policy enforcement ensures that k8s objects pass the required quality gates set by administrators/teams. Some examples of policy enforcement could be:

  1. All the workloads have proper labels 
  2. Proper network policies are set for DB
  3. Unsafe configurations are not allowed (Example)
  4. Backups are always enabled (Example)

The K8s ecosystem offers a wide range of options to achieve this. Some of them are listed below:

  1. Open Policy Agent (OPA) is a CNCF graduated project which gives a high-level declarative language to author and enforces policies across k8s objects. (Examples from Google and OPA repo can be helpful)
  2. Mutating Webhooks can be used to modify API calls before they reach the API server. This can be used to set required properties for k8s objects. (Example: a mutating webhook to add a NetworkPolicy for Pods created in production namespaces)

  3. Validating Webhooks can be used to check if a k8s API request follows the required policy; any request which doesn’t follow the policy will be rejected. (Example: a validating webhook to ensure huge pages of 1GB are not used in the pod)

Cluster-Wide

Percona Distribution for MySQL Operator and Percona Distribution for PostgreSQL Operator both support cluster-wide mode, which allows a single Operator to deploy and manage databases across multiple namespaces (support for cluster-wide mode in Percona Operator for MongoDB is on the roadmap). It is also possible to have an Operator per namespace:

Operator per namespace

For example, a single deployment of Percona Distribution for MySQL Operator can monitor multiple namespaces in cluster-wide mode. The user can specify them in the WATCH_NAMESPACE environment variable in the cw-bundle.yaml file:

    spec:
      containers:
      - command:
        - percona-xtradb-cluster-operator
        env:
        - name: WATCH_NAMESPACE
          value: "namespace-a, namespace-b"

In a multi-tenant environment, the right choice depends on the amount of freedom you want to give to the tenants. Usually, when the tenants are highly trusted (for instance, internal teams), it is fine to choose a namespace-scoped deployment, where each team can deploy and manage the Operator themselves.

Conclusion

It is important to remember that Kubernetes is not a multi-tenant system out of the box. This blog post described various levels of isolation that will help you run your applications and databases securely and ensure operational stability.

We encourage you to try out our Operators:

CONTRIBUTING.md in every repository is there for those of you who want to contribute your ideas, code, and docs.

For general questions please raise the topic in the community forum.

Nov
23
2021
--

Upgrading PostGIS: A Proof Of Concept


My last blog introduced the issues one can face when upgrading PostGIS and PostgreSQL at the same time. The purpose of this blog is to walk through the steps with an example.

For our purposes, we will confine ourselves to working with the community versions of 9.6 and 11, respectively, and use LXD to create a working environment for prototyping the steps and profiling the issues.

Creating the Demo Environment Using LXD

The first step is creating a template container with the requisite packages and configurations. This template is a basic distribution of Ubuntu 18.04, which has already been installed in the development environment.

# creating the template container
lxc cp template-ubuntu1804 template-BetterView
lxc start template-BetterView
lxc exec template-BetterView bash

These packages provide the supporting tools needed to install PostgreSQL from the community repository:

apt install -y wget gnupg2

These steps are copied from the community download page for Ubuntu distributions:

echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list 
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

With the aforementioned repository in place, updating the package index makes it possible to install our two versions of PostgreSQL (i.e. 9.6 and 11). Installing pg_repack pulls in the requisite server packages while adding this very useful extension at the same time:

apt update
apt install -y postgresql-11-repack postgresql-9.6-repack

These next packages are useful. Midnight Commander, mc, is a terminal-based shell navigator and file manager while the other package installs utilities, such as netstat, to monitor the status of all network-based services on the localhost:

apt install -y mc net-tools

This last step merely updates the man pages database and the mlocate database. It makes it easier to locate files on the host. Beware this can be a security risk if used on a production host.

mandb && updatedb

This little snippet of code creates our simulated production host. Creating the instance from a template container makes it much easier to try different variations in quick order:

# creating the POC upgrade container
lxc rm --force pg1-BV
lxc cp template-BetterView pg1-BV
lxc start pg1-BV
lxc exec pg1-BV bash

As per our scenario, upgrading PostGIS requires two different versions to be installed on the host. Notice that PostgreSQL version 9.6 has the older version of PostGIS, while version 11 has the newer one.

For our purposes, this presentation assumes upgrading both PostgreSQL and PostGIS is the method to be used.

ATTENTION: Executing this upgrade operation in two distinct phases is preferred. Either upgrade PostgreSQL and then upgrade PostGIS, or upgrade PostGIS on the old version to match the new version on PostgreSQL and then upgrade the PostgreSQL data cluster.

The underlying assumption is that application code can break between PostGIS version upgrades therefore pursuing an incremental process can mitigate potential issues.

https://postgis.net/docs/PostGIS_Extensions_Upgrade.html

https://postgis.net/workshops/postgis-intro/upgrades.html

apt install -y postgresql-9.6-postgis-2.4 postgresql-11-postgis-3

About PostGIS

Available versions of PostGIS, as per the community repository at the time of this blog’s publication:

  • 9.6:
    • postgresql-9.6-postgis-2.4
    • postgresql-9.6-postgis-2.5
    • postgresql-9.6-postgis-3
  • 11:
    • postgresql-11-postgis-2.5
    • postgresql-11-postgis-3
  • PostGIS supported versions matrix

ATTENTION: Azure supports only PostgreSQL 9.6 with PostGIS 2.3.2.

Before You Upgrade

About

This query lists all user-defined functions that have been installed in your database. Use it to summarize not only what you’ve created but the entire suite of PostGIS function calls:

--
-- get list of all PostGIS functions
--
select nspname, proname
from pg_proc
join pg_namespace on pronamespace=pg_namespace.oid
where nspname not in ('pg_catalog','information_schema')
order by 1,2;

In order to validate your functions, you need to know which ones are being used; tracking the functions prior to the upgrade process will identify them. Please note there are two tracking settings, i.e. pl and all. Out of an abundance of caution, it is suggested to initially use all for an extended period of time:

--
-- postgresql.conf
-- track_functions = none                    # none, pl, all
--
alter system set track_functions=all;
select pg_reload_conf();
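A quick sanity check, using plain psql, confirms the reloaded setting took effect:

--
-- verify the new setting is active
--
show track_functions;
 track_functions
-----------------
 all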

This view collects all the statistics related to function calls:

--
-- track function activity
--
            View "pg_catalog.pg_stat_user_functions"
   Column   |       Type       | Collation | Nullable | Default
------------+------------------+-----------+----------+---------
 funcid     | oid              |           |          |
 schemaname | name             |           |          |
 funcname   | name             |           |          |
 calls      | bigint           |           |          |
 total_time | double precision |           |          |
 self_time  | double precision |           |          |

Example

This is a simple example demonstrating tracking of function call usage. Note there are two functions, one of which is invoked inside the other:

CREATE OR REPLACE FUNCTION f1 (
    in  a integer,
    out b integer
) AS
$$
BEGIN
    raise notice 'function f1 is called';
    perform pg_sleep(1);
    b := a + 1;
END
$$
LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION f2 (
    in  c integer,
    out d integer
) AS
$$
BEGIN
    raise notice 'function f2 is called';
    perform f1(c);
    raise notice 'returning from f2';
    d := 0;
END
$$
LANGUAGE plpgsql;

This SQL statement resets all statistics being tracked in the PostgreSQL database. Please note there are other functions that can be used to reset specific statistics while preserving others:

select * from pg_stat_reset();

And here are our functions’ invocations:

db01=# select * from f1(4);
NOTICE:  function f1 is called
 b
---
 5
db01=# select * from f2(4);
NOTICE:  function f2 is called
NOTICE:  function f1 is called
NOTICE:  returning from f2
 d
---
 0
db01=# select * from pg_stat_user_functions;
 funcid | schemaname | funcname | calls | total_time | self_time
--------+------------+----------+-------+------------+-----------
  17434 | public     | f1       |     2 |   2002.274 |  2002.274
  17437 | public     | f2       |     1 |   1001.126 |     0.599

Notice that f1 registers two calls, one direct and one made from inside f2, and that f2’s total_time includes the nested f1 call while its self_time does not.

An Upgrade Example Using pg_upgrade

SYNOPSIS

There are two discrete upgrades:

  1. pg_upgrade: pg 9.6 -> pg 11
  2. PostGIS upgrade: postgis-2.4 -> postgis-2.5 -> postgis-3 (a hedged SQL sketch follows this list)
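On the PostGIS side, here is a hedged SQL sketch of the second upgrade, to be run in each affected database once the newer postgis packages for the running cluster are installed; postgis_extensions_upgrade() is the helper documented at the PostGIS_Extensions_Upgrade link above and ships with PostGIS 2.5 and later:

--
-- upgrade the extension objects inside the database
--
ALTER EXTENSION postgis UPDATE;
SELECT postgis_extensions_upgrade();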

HOUSE CLEANING

An Ubuntu-based upgrade requires removing the target data cluster because installing PostgreSQL packages onto a Debian-based distro always includes creating a data cluster:

pg_lsclusters
Ver Cluster   Port   Status   Owner       Data directory
9.6 main      5432   online   postgres    /var/lib/postgresql/9.6/main
11  main      5434   online   postgres    /var/lib/postgresql/11/main  

pg_dropcluster --stop 11 main

For our purposes, we are simply adding the extension; no user-defined functions have been included:

su - postgres
createdb -p 5432 db01
psql -p 5432 db01 -c "create extension postgis"
exit
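An optional pre-upgrade check, run as the postgres user, records which extension version the source cluster carries. The catalog query is standard PostgreSQL; the reported version depends on the installed package:

# record the source cluster's extension version before upgrading
psql -p 5432 db01 -c "select extname, extversion from pg_extension where extname = 'postgis'"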

Shutting down the source data cluster is the last step before the upgrade process can begin:

systemctl stop postgresql@9.6-main

Debian-based distros provide a convenient CLI, making upgrades easy:

# /usr/bin/pg_upgradecluster [OPTIONS] <old version> <cluster name> [<new data directory>]
pg_upgradecluster -v 11 9.6 main

It’s important to check the upgrade logs before starting PostgreSQL version 11. This is a one-way process and once it’s active the old PostgreSQL 9.6 cluster is no longer available and must be destroyed:

systemctl start postgresql@11-main
pg_dropcluster --stop 9.6 main
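Because pg_upgradecluster reassigns the old cluster’s port to the new cluster by default, a final pg_lsclusters should now show only the version 11 cluster listening on 5432. The output below is a sketch of the expected state:

pg_lsclusters
Ver Cluster   Port   Status   Owner       Data directory
11  main      5432   online   postgres    /var/lib/postgresql/11/main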

Here’s confirmation of the PostgreSQL and PostGIS upgrade respectively:

su - postgres
psql -p 5432 db01
show server_version;
           server_version
------------------------------------
 11.14 (Ubuntu 11.14-1.pgdg18.04+1)
select * from postgis_version();
            postgis_version
---------------------------------------
 3.1 USE_GEOS=1 USE_PROJ=1 USE_STATS=1

PostGIS Function API, Inspection, and Review

This step is critical; the process validates that the application logic still works, or shows where it must be updated.

METHOD: inspect each function call used between all versions:

  • from 2.4 -> 2.5
  • from 2.5 -> 3.0
  • from 3.0 -> 3.1

TIP: The 3.1 documentation encapsulates all previous versions, i.e. section 9.12.


Regression Testing

  • In the current setup, pg 9.6
    • Identify all functions used in PostGIS
    • Execute a simple function call with every type of parameter typically used in your environment
    • Collect, record all variables returned
  • In the target setup, pg 11 or pg 13
    • Execute a simple function call with every type of parameter typically used in your environment
    • Collect, record all variables returned
  • Analysis
    • Compare values: similar values mean you don’t have a problem
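As a minimal sketch of this workflow (regression.sql is a hypothetical file standing in for your recorded function calls; ports 5432 and 5434 match the cluster listing shown earlier), run the same script against both setups and diff the results:

# regression.sql holds your representative PostGIS function calls
psql -p 5432 -d db01 -At -f regression.sql > results-old.txt
psql -p 5434 -d db01 -At -f regression.sql > results-new.txt
# identical output means the function API behaves the same across versions
diff results-old.txt results-new.txt && echo "no differences detected"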

Working With Cloud Provider Technology

Be advised, cloud environments are not ideal upgrade candidates. The aforementioned process is quite detailed and will help you carry out a successful upgrade.

  • AZURE
    • pg 9.6: PostGIS 2.3.2
    • pg 11: PostGIS 2.5.1
  • AMAZON
    • pg 11, 13: PostGIS 3.1.4
    • pg 9.6.*: PostGIS 2.3.[0247], 2.5.[25]

References:

https://docs.microsoft.com/en-us/azure/postgresql/concepts-extensions

https://docs.microsoft.com/en-us/azure/postgresql/concepts-extensions#postgres-96-extensions

https://docs.microsoft.com/en-us/azure/postgresql/concepts-extensions#postgres-11-extensions

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.PostGIS.html


As more companies look at migrating away from Oracle or implementing new databases alongside their applications, PostgreSQL is often the best option for those who want to run on open source databases.

Read Our New White Paper:

Why Customers Choose Percona for PostgreSQL
