Nov 23, 2021

Upgrading PostGIS: A Proof Of Concept


My last blog introduced the issues one can face when upgrading PostGIS and PostgreSQL at the same time. The purpose of this blog is to walk through the steps with an example.

For our purposes, we will confine ourselves to the community versions of PostgreSQL 9.6 and 11, and use LXD to create a working environment for prototyping the steps and profiling the issues.

Creating the Demo Environment Using LXD

The first step is creating a template container with the requisite packages and configurations. This template is a basic distribution of Ubuntu 18.04, which has already been installed in the development environment.

# creating the template container
lxc cp template-ubuntu1804 template-BetterView
lxc start template-BetterView
lxc exec template-BetterView bash

These supporting packages are required before installing PostgreSQL from the community repository:

apt install -y wget gnupg2

These steps are copied from the community download page for Ubuntu distributions:

echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list 
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

With the repository in place, apt is updated, making it possible to install our two versions of PostgreSQL (i.e. 9.6 and 11). Installing pg_repack pulls in the requisite PostgreSQL packages while installing this very useful extension at the same time:

apt update
apt install -y postgresql-11-repack postgresql-9.6-repack

These next packages are useful. Midnight Commander, mc, is a terminal-based shell navigator and file manager while the other package installs utilities, such as netstat, to monitor the status of all network-based services on the localhost:

apt install -y mc net-tools

This last step merely updates the man pages database and the mlocate database, making it easier to locate files on the host. Beware: this can be a security risk if used on a production host.

mandb && updatedb

This little snippet of code creates our simulated production host. Creating the instance from a template container makes it much easier to try different variations in short order:

# creating the POC upgrade container
lxc delete --force pg1-BV
lxc cp template-BetterView pg1-BV
lxc start pg1-BV
lxc exec pg1-BV bash

As per our scenario, upgrading PostGIS requires two different versions to be installed on the host. Notice that PostgreSQL version 9.6 has the older version of PostGIS, while version 11 has the newer one.

For our purposes, this presentation assumes upgrading both PostgreSQL and PostGIS is the method to be used.

ATTENTION: Executing this upgrade operation in two distinct phases is preferred: either upgrade PostgreSQL and then upgrade PostGIS, or upgrade PostGIS on the old PostgreSQL version to match the version shipped with the new PostgreSQL, and then upgrade the data cluster.

The underlying assumption is that application code can break between PostGIS version upgrades therefore pursuing an incremental process can mitigate potential issues.

https://postgis.net/docs/PostGIS_Extensions_Upgrade.html

https://postgis.net/workshops/postgis-intro/upgrades.html
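
For reference, the in-place extension upgrade on the old cluster looks something like this (a minimal sketch; the version string is illustrative and must match the PostGIS package actually installed):

-- upgrade the installed PostGIS extension in place
-- ('2.5.5' is illustrative; match it to your installed package)
ALTER EXTENSION postgis UPDATE TO '2.5.5';

-- PostGIS 2.5 and later also ship a helper that upgrades postgis and
-- its companion extensions (e.g. postgis_topology) in one call
SELECT postgis_extensions_upgrade();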

apt install -y postgresql-9.6-postgis-2.4 postgresql-11-postgis-3

About PostGIS

Available versions of PostGIS, as per the community repository at the time of this blog’s publication:

  • 9.6:
    • postgresql-9.6-postgis-2.4
    • postgresql-9.6-postgis-2.5
    • postgresql-9.6-postgis-3
  • 11:
    • postgresql-11-postgis-2.5
    • postgresql-11-postgis-3
  • PostGIS supported versions matrix

ATTENTION: Azure supports only PostgreSQL 9.6 with PostGIS 2.3.2.

Before You Upgrade

About

This query lists all user-defined functions that have been installed in your database. Use it to summarize not only what you’ve created but the entire suite of PostGIS function calls:

--
-- get list of all PostGIS functions
--
select nspname, proname
from pg_proc
join pg_namespace on pronamespace=pg_namespace.oid
where nspname not in ('pg_catalog','information_schema')
order by 1,2;
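
To narrow that list down to just the functions installed by PostGIS itself, one variation is to follow the extension dependency tracked in pg_depend (a sketch using the same catalogs):

--
-- get list of functions belonging to the postgis extension only
--
select p.proname
from pg_proc p
join pg_depend d on d.objid = p.oid
                and d.classid = 'pg_proc'::regclass
join pg_extension e on e.oid = d.refobjid
                   and d.refclassid = 'pg_extension'::regclass
where e.extname = 'postgis'
order by 1;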

In order to validate your functions, you need to know which ones are being used; tracking the functions prior to the upgrade will identify them. Please note there are two tracking settings, i.e. pl and all. Out of an abundance of caution, it is suggested to initially use all for an extended period of time:

--
-- postgresql.conf
-- track_functions = none                    # none, pl, all
--
alter system set track_functions=all;
select pg_reload_conf();

This view collects all the statistics related to function calls:

--
-- track function activity
--
            View "pg_catalog.pg_stat_user_functions"
   Column   |       Type       | Collation | Nullable | Default
------------+------------------+-----------+----------+---------
 funcid     | oid              |           |          |
 schemaname | name             |           |          |
 funcname   | name             |           |          |
 calls      | bigint           |           |          |
 total_time | double precision |           |          |
 self_time  | double precision |           |          |

Example

This is a simple example demonstrating tracking function call usage. Note there are two function calls and one of them is invoked by the other:

CREATE OR REPLACE FUNCTION f1 (
    in  a integer,
    out b integer
) AS
$$
BEGIN
    raise notice 'function f1 is called';
    perform pg_sleep(1);
    b = a+1;
END
$$
LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION f2 (
    in  c integer,
    out d integer
) as
$$
BEGIN
    raise notice 'function f2 is called';
    perform f1(c);
    raise notice 'returning from f2';
    d := 0;
END
$$
language plpgsql;

This SQL statement resets all statistics being tracked in the PostgreSQL database. Please note there are other functions that can be used to reset specific statistics while preserving others:

select * from pg_stat_reset();

And here’s our functions’ invocations:

db01=# select * from f1(4);
NOTICE:  function f1 is called
 b
---
 5
db01=# select * from f2(4);
NOTICE:  function f2 is called
NOTICE:  function f1 is called
NOTICE:  returning from f2
 d
---
 0
db01=# select * from pg_stat_user_functions;
 funcid | schemaname | funcname | calls | total_time | self_time
--------+------------+----------+-------+------------+-----------
  17434 | public     | f1       |     2 |   2002.274 |  2002.274
  17437 | public     | f2       |     1 |   1001.126 |     0.599
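
Since these statistics survive neither pg_stat_reset() nor the upgrade itself, it is worth persisting them beforehand; a minimal sketch (the table name preupgrade_function_usage is illustrative):

--
-- persist the tracked usage before resetting stats or upgrading
--
create table preupgrade_function_usage as
    select now() as captured_at, *
    from pg_stat_user_functions;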

An Upgrade Example Using pg_upgrade

SYNOPSIS

There are two discrete upgrades:

  1. pg_upgrade: pg 9.6 -> pg 11
  2. PostGIS upgrade: postgis-2.4 -> postgis-2.5 -> postgis-3

HOUSE CLEANING

An Ubuntu-based upgrade requires removing the target data cluster because installing PostgreSQL packages onto a Debian-based distro always includes creating a data cluster:

pg_lsclusters
Ver Cluster   Port   Status   Owner       Data directory
9.6 main      5432   online   postgres    /var/lib/postgresql/9.6/main
11  main      5434   online   postgres    /var/lib/postgresql/11/main  

pg_dropcluster --stop 11 main

For our purposes we are simply adding the extension; no user-defined functions have been included:

su - postgres
createdb -p 5432 db01
psql -p 5432 db01 -c "create extension postgis"
exit

Shutting down the source data cluster is the last step before the upgrade process can begin:

systemctl stop postgresql@9.6-main

Debian-based distros provide a convenient CLI, making upgrades easy:

# /usr/bin/pg_upgradecluster [OPTIONS] <old version> <cluster name> [<new data directory>]
pg_upgradecluster -v 11 9.6 main
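
By default, pg_upgradecluster dumps and restores the databases; on large clusters the pg_upgrade-based method is usually much faster (a sketch; consult pg_upgradecluster(8) on your system for the options it supports):

# use pg_upgrade under the hood instead of dump/restore, with hard links
# to avoid copying data files (once the new cluster is started, the old
# one must not be started again)
pg_upgradecluster -v 11 -m upgrade --link 9.6 main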

It’s important to check the upgrade logs before starting PostgreSQL 11. This is a one-way process: once the new cluster is active, the old PostgreSQL 9.6 cluster should no longer be used and can be destroyed:

systemctl start postgresql@11-main
pg_dropcluster --stop 9.6 main

Here’s confirmation of the PostgreSQL and PostGIS upgrade respectively:

su - postgres
psql -p 5432 db01
show server_version;
           server_version
------------------------------------
 11.14 (Ubuntu 11.14-1.pgdg18.04+1)
select * from postgis_version();
            postgis_version
---------------------------------------
 3.1 USE_GEOS=1 USE_PROJ=1 USE_STATS=1

PostGIS Function API, Inspection, and Review

This is critical: the process validates that the application logic still works, or identifies what must be updated.

METHOD: inspect each function call used between all versions:

  • from 2.4 -> 2.5
  • from 2.5 -> 3.0
  • from 3.0 -> 3.1

TIP: the 3.1 documentation encapsulates all previous versions, i.e. section 9.12.


Regression Testing

  • In the current setup, pg 9.6
    • Identify all functions used in PostGIS
    • Execute a simple function call with every type of parameter typically used in your environment
    • Collect, record all variables returned
  • In the target setup, pg 11 or pg 13
    • Execute a simple function call with every type of parameter typically used in your environment
    • Collect, record all variables returned
  • Analysis
    • Compare values: similar values mean you don’t have a problem (a sketch of this record-and-compare approach follows below)
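
A minimal sketch of the record-and-compare approach, assuming both clusters are reachable; the ports, database name, and ST_Buffer probe are illustrative stand-ins for your own function list:

# run the same probe against both clusters and compare the results
psql -p 5432 db01 -Atc "select st_astext(st_buffer('POINT(0 0)'::geometry, 1, 2))" > old.out
psql -p 5434 db01 -Atc "select st_astext(st_buffer('POINT(0 0)'::geometry, 1, 2))" > new.out
diff old.out new.out && echo "values match" || echo "INVESTIGATE: behavior changed"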

Working With Cloud Provider Technology

Be advised, cloud environments are not ideal upgrade candidates since the available PostGIS versions, and therefore the upgrade path, are outside your control. The process detailed above will nevertheless facilitate a successful upgrade.

  • AZURE
    • pg 9.6: PostGIS 2.3.2
    • pg 11: PostGIS 2.5.1
  • AMAZON
    • pg 11, 13: PostGIS 3.1.4
    • pg 9.6.*: PostGIS 2.3.[0247], 2.5.[25]

References:

https://docs.microsoft.com/en-us/azure/postgresql/concepts-extensions

https://docs.microsoft.com/en-us/azure/postgresql/concepts-extensions#postgres-96-extensions

https://docs.microsoft.com/en-us/azure/postgresql/concepts-extensions#postgres-11-extensions

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.PostGIS.html



Nov 10, 2021

Issues Upgrading PostGIS and PostgreSQL


PostGIS is arguably the best GIS implementation in the world. There may be other database technologies with GIS but the value proposition of quality, performance, and sophistication, and, of course, cost, is nowhere near what PostgreSQL offers.

So let me set the stage for this blog of mine: you’ve been using PostGIS on your PostgreSQL server for a few years now but having reached its end-of-life, it’s time to upgrade the system.

How do you do it?

Most times upgrading PostgreSQL, although a significant activity, can be a pain-free experience. And for those people who’ve implemented their PostgreSQL database in the cloud, upgrading PostgreSQL is often a matter of just clicking a few buttons. The only real work is fleshing out the steps as you compose your run-book and rehearse drills before performing your scheduled maintenance window.

Upgrading a PostgreSQL server with extensions is a matter of ensuring the same versions are present on both source and target hosts. In the case of extensions bundled with PostgreSQL, such as pg_stat_statements, there’s nothing to worry about since compatibility is pretty much guaranteed. Things are just a touch more complex when using popular extensions such as pg_repack as you must install its package on the target host. Less popular extensions can be more challenging in that they may not have been as rigorously tested. But in the case of upgrading a PostGIS-enabled database engine, you can bid any hopes for a stress-free experience a sweet goodbye as it saunters out the door waving fondly at you.

Looking at the library dependencies for pg_stat_statements, pg_repack and postgis respectively should help you appreciate the issues a little better:

pg_stat_statements

ldd /usr/lib/postgresql/12/lib/pg_stat_statements.so
linux-vdso.so.1 (0x00007ffc3759c000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5e4ff60000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5e4fb6f000)
/lib64/ld-linux-x86-64.so.2 (0x00007f5e50508000)

pg_repack

ldd /usr/lib/postgresql/12/lib/pg_repack.so
linux-vdso.so.1 (0x00007ffeaaf8c000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9601226000)
/lib64/ld-linux-x86-64.so.2 (0x00007f960181e000)

postgis

ldd /usr/lib/postgresql/12/lib/postgis-3.so
linux-vdso.so.1 (0x00007fff7f736000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f953bf27000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f953bb9e000)
libgeos_c.so.1 => /usr/lib/x86_64-linux-gnu/libgeos_c.so.1 (0x00007f953b96d000)
libproj.so.12 => /usr/lib/x86_64-linux-gnu/libproj.so.12 (0x00007f953b704000)
libjson-c.so.3 => /lib/x86_64-linux-gnu/libjson-c.so.3 (0x00007f953b4f9000)
libprotobuf-c.so.1 => /usr/lib/x86_64-linux-gnu/libprotobuf-c.so.1 (0x00007f953b2f0000)
libxml2.so.2 => /usr/lib/x86_64-linux-gnu/libxml2.so.2 (0x00007f953af2f000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f953ad17000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f953a926000)
/lib64/ld-linux-x86-64.so.2 (0x00007f953c612000)
libgeos-3.7.1.so => /usr/lib/x86_64-linux-gnu/libgeos-3.7.1.so (0x00007f953a583000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f953a364000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f953a160000)
libicuuc.so.60 => /usr/lib/x86_64-linux-gnu/libicuuc.so.60 (0x00007f9539da8000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f9539b8b000)
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f9539965000)
libicudata.so.60 => /usr/lib/x86_64-linux-gnu/libicudata.so.60 (0x00007f9537dbc000)

PostGIS is probably the most sophisticated extension you will ever encounter on PostgreSQL. Its dependencies encompass not just the database system but a number of disparate libraries and projects to which the RDBMS, under normal circumstances, has no relation.

Let’s itemize some key issues:

  • The naming scheme protocol used in PostGIS is not the same as PostgreSQL. For example, PostgreSQL version 13.4 means the major version is “13” and the minor release or bug release is “4”. But PostGIS still uses the old numbering system that the PostgreSQL project followed before version 10. For example, PostGIS version 2.4.1 means the major version is “2.4” and the minor release is “1”.
  • All currently supported versions of PostGIS are NOT supported across all currently maintained versions of PostgreSQL. For example, in the case of Ubuntu, the PostgreSQL community repository currently makes available PostGIS 2.4 for PostgreSQL versions 10 and older but it isn’t found for versions 11 and newer. Referring to the PostGIS support matrix, which lists all currently supported versions of PostGIS and their end-of-life, one can see that technically speaking version 2.4 is supported on postgres version 11; on the Red Hat/CentOS side of things, by contrast, packages are available. This isn’t laziness on the part of the maintainers; rather, there’s so much in the way of advances that it’s simply not practical to maintain this technical debt for longer than a few years at a time.
  • Best practices require the same extension version to be present on both source and target data clusters; keep in mind that when it comes to major version differences some extensions are more flexible than others. Otherwise, one is forced to remove the extension from the source data cluster if it’s not present on the target (a quick way to inventory what’s installed is sketched after this list).
  • For some extensions, such as the internal ones that come bundled with the PostgreSQL packages and certain 3rd party extensions, such as pg_repack, it is possible to upgrade between extension major versions without issue.
  • In regards to PostGIS: when faced with different extension versions between the source and target data clusters one must begin by upgrading postgis on the source data cluster such that it matches BOTH the major and minor versions of the target data cluster and only thereafter performing the PostgreSQL upgrade. Alternatively, one can install a downgraded major/minor version of PostGIS on the target data cluster, matching the source data cluster’s major and minor version of PostGIS, and then upgrade PostgreSQL (yes, this is convoluted).
  • Some cloud providers do NOT offer more than one version of PostGIS for any particular version of PostgreSQL, thereby making the previous recommendation moot. The reason is simple: they are not prepared to handle PostGIS upgrades. There are ways of dealing with them but that’s for another blog. Please note that this is a rapidly moving target and can/will probably change in the future.
  • Because the PostgreSQL community repository removes all versions of PostgreSQL and their extensions once they have reached end-of-life, PostGIS package support becomes problematic when upgrading database servers that are past end-of-life.
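
A quick inventory of what’s installed, and what the packages on a host could offer, is available from the catalogs (a sketch):

-- what's installed in this database, and at what version
select extname, extversion from pg_extension order by 1;

-- what the installed packages make available, e.g. for postgis
select name, default_version, installed_version
from pg_available_extensions
where name like 'postgis%';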

Over the past twenty years, I’ve had occasion to upgrade some fairly large installations using PostGIS, the most complex being over 120TB data on a version of PostgreSQL that hadn’t been upgraded in several years and the PostGIS was so old that it wasn’t even installed using extensions. As a result of this and other experiences I have, where possible, adopted the practice of performing PostGIS and PostgreSQL upgrades as separate exercises.

I can’t emphasize enough that regression testing of each GIS function call must be performed thoroughly in a development environment before going into production. Because the PostGIS project’s development cycle advances at the same torrid pace as the demand for GIS, the intrinsic behavior of a function can change seemingly without warning. Therefore, merely validating the API via a document review is insufficient, i.e. TEST IT!

Here’s a tip: turn on the run-time parameter track_functions and you’ll be able to identify all those wonderful PostGIS functions your application stack depends upon.

Finally, by all means, take a look at an earlier blog, “Working with PostgreSQL and PostGIS: How To Become A GIS Expert”, for more information about PostGIS.


Jul 19, 2021

Performing ETL Using Inheritance in PostgreSQL


Good database maintenance includes not only the standard adding, updating, and deleting of records but periodic edits to the table schema too. Operations such as adding, editing, and removing table columns are part of today’s life-cycle reality, as new functionality is constantly being added.

In quieter, less demanding times, one could get away with schema updates with minimal impact by performing the operation during low-load periods. However, as database systems have become more critical to the business bottom line, maintenance windows have become, by necessity, much smaller and more time-sensitive.

So the question is asked: How can one update a multi-terabyte table with near-zero downtime?

Looking deep into our Postgres bag of tricks, we turn not to the most recent and advanced features but instead leverage a very old capability that’s been part of Postgres since it was first released as an open source project, i.e. its object-relational implementation of inheritance.

Use Case

Before going into the details of the solution let’s define just what we are trying to solve.

Strictly speaking, there are two use cases that come to mind when using inheritance as the prime ETL data migration mechanism:

  • Removing table OIDs, such as when moving to Postgres version 12 and greater.
  • Performing DML/DDL operations which include:
    • updating the data
    • adding or removing table columns

Life starts getting complicated when dealing with issues such as updating data types. But we’ll talk about this at the end of the blog.

About Inheritance

Unlike object-oriented programming languages, where the child inherits attributes from the parent, in Postgres it’s the parent that has the ability to inherit from the child (if only this were true in real life ;-)). Thus a table column from a child has the potential of becoming available in the parent relation.

Consider the following snippet of code: Two parents and three children are created and populated with records:

BEGIN;
    drop table if exists father, mother cascade;

    create table father(c1 int, c2 int, c3 int);
    create table mother(c1 int, c2 int, c4 int);

-- notice: although not declared here, column "c4" from mother is added to these tables
    create table son(c1 int, c2 int, c3 int) inherits (father,mother);
    create table daughter(c2 int, c4 int, c5 int) inherits (father,mother);

-- this table inherits only those columns from "father"
    create table cousin (c1 int, c2 int, c3 int, c6 int) inherits (father);
COMMIT;
BEGIN;
    insert into son values(1,1,1,1);
    insert into daughter values(2,2,2,2,2);
    insert into cousin values(3,3,3,3);
    insert into father values(10,10,10);
    insert into mother values(11,11,11);
COMMIT;

Columns declared in the parents must therefore exist in the child, i.e. they are merged. But notice those columns unique to the child are not necessarily propagated to the parents:

Table "public.father"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
Number of child tables: 3 (Use \d+ to list them.)

 

Table "public.mother"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c4     | integer |           |          |
Number of child tables: 2 (Use \d+ to list them.)

 

 

Table "public.son"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c4     | integer |           |          |
Inherits: father,
          mother
Table "public.daughter"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c4     | integer |           |          |
 c5     | integer |           |          |
Inherits: father,
          mother
Table "public.cousin"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c6     | integer |           |          |
Inherits: father
 

And even though records populated in the child can be seen by the parent, the reverse is NOT true: records populated into the parent are not seen by the child:

db02=# select * from father;
 c1 | c2 | c3
----+----+----
 10 | 10 | 10
  1 |  1 |  1
  2 |  2 |  2
  3 |  3 |  3
db02=# select * from mother;
 c1 | c2 | c4
----+----+----
 11 | 11 | 11
  1 |  1 |  1
  2 |  2 |  2
 
db02=# select * from son;
 c1 | c2 | c3 | c4
----+----+----+----
  1 |  1 |  1 |  1
db02=# select * from daughter;
 c1 | c2 | c3 | c4 | c5
----+----+----+----+----
  2 |  2 |  2 |  2 |  2
db02=# select * from cousin;
 c1 | c2 | c3 | c6
----+----+----+----
  3 |  3 |  3 |  3
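
When debugging this pattern, it’s worth knowing that the ONLY keyword restricts a query to the named table itself, excluding its children:

db02=# select * from only father;
 c1 | c2 | c3
----+----+----
 10 | 10 | 10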

Developing The ETL/Migration Model

Performing data migration under production conditions should take into consideration these four (4) distinct query operations:

  • SELECT from both the target and source tables at the same time.
  • UPDATE and/or DELETE records from both the target and source tables.
  • INSERT new records into the target table.
  • Moving data from the source to the target tables.

For the sake of discussion we’ll demonstrate using one source and target table respectively inheriting from a single parent:

create table parent(c1 int primary key, c2 int, c3 int);
create table source(like parent including all) inherits (parent);
create table target(like parent including all) inherits (parent);

Querying Both Target And Source Tables

Inheritance makes the SELECT query a straightforward operation. This query checks all the tables for the queried record(s):

explain select * from parent;
QUERY PLAN
--------------------------------------------------------------------------
Append (cost=0.00..81.21 rows=4081 width=12)
-> Seq Scan on parent parent_1 (cost=0.00..0.00 rows=1 width=12)
-> Seq Scan on source parent_2 (cost=0.00..30.40 rows=2040 width=12)
-> Seq Scan on target parent_3 (cost=0.00..30.40 rows=2040 width=12)

UPDATE and/or DELETE Records

Similarly to SELECT queries, one doesn’t have to worry about editing the existing application’s DML operation(s) when performing UPDATE and DELETE. Again, notice how both source and target tables are queried as well as the parent:

explain update parent set c2=0 where c1=1;
QUERY PLAN
---------------------------------------------------------------------------------
 Update on parent  (cost=0.00..16.34 rows=3 width=18)
   Update on parent
   Update on source
   Update on target
   ->  Seq Scan on parent  (cost=0.00..0.00 rows=1 width=18)
         Filter: (c1 = 1)
   ->  Index Scan using source_pkey on source  (cost=0.15..8.17 rows=1 width=18)
         Index Cond: (c1 = 1)
   ->  Index Scan using target_pkey on target  (cost=0.15..8.17 rows=1 width=18)
         Index Cond: (c1 = 1)

INSERT New Records Into The Target Table

The thing to keep in mind about INSERT is that without a redirect mechanism all records are inserted into the parent.

Since everybody already knows about triggers, I thought it would be fun to use a REWRITE RULE instead:

create rule child_insert as
on
    insert to parent
do instead
    insert into target values (NEW.*);

 

Table "public.parent"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 c1     | integer |           | not null |         | plain   |              |
 c2     | integer |           |          |         | plain   |              |
 c3     | integer |           |          |         | plain   |              |
Indexes:
    "parent_pkey" PRIMARY KEY, btree (c1)
Rules:
    child_insert AS
    ON INSERT TO parent DO INSTEAD  INSERT INTO target (c1, c2, c3)
  VALUES (new.c1, new.c2, new.c3)
Child tables: source,
              target

And here’s our validation; notice how the INSERT is redirected from parent to target:

EXPLAIN insert into parent (c1,c2,c3) values (1,1,1);
QUERY PLAN
-----------------------------------------------------
 Insert on target  (cost=0.00..0.01 rows=1 width=12)
   ->  Result  (cost=0.00..0.01 rows=1 width=12)
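
For completeness, the same redirect can be implemented with a trigger instead of a rule; a minimal sketch (the function and trigger names are illustrative, and EXECUTE FUNCTION requires Postgres 11 or later):

-- redirect inserts on "parent" into "target" using a trigger
create or replace function parent_insert_redirect()
returns trigger as
$$
BEGIN
    insert into target values (NEW.*);
    return null;   -- suppress the insert into "parent" itself
END
$$
language plpgsql;

create trigger parent_insert_redirect
    before insert on parent
    for each row execute function parent_insert_redirect();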

Moving Records Between Source And Target Tables

It’s time to introduce the last mechanism needed to perform the actual data migration. Essentially, the data is moved in batches; if you can afford the downtime of moving your records in one very large transaction, this dog and pony show is a bit redundant.

In the real world we need to anticipate multiple processes attempting simultaneous EXCLUSIVE LOCKS. In the case that one or more records are locked by another operation, the following example demonstrates how one can simply skip over them:

--
-- EX: insert a single record in table "source"
--
insert into source (c1,c2,c3) values (2,2,2);

-- move 1,000 records at a time
-- from table "source" to "target"
--
with a as (select * from source for update skip locked limit 1000),
     b as (delete from source using a where c1=a.c1
           returning    source.c1,
                        source.c2,
                        source.c3)
insert into target select * from b;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Insert on target  (cost=27.92..41.52 rows=680 width=12) (actual time=0.082..0.082 rows=0 loops=1)
   CTE b
     ->  Delete on source  (cost=9.42..27.92 rows=680 width=6) (actual time=0.050..0.053 rows=1 loops=1)
           ->  Bitmap Heap Scan on source  (cost=9.42..27.92 rows=680 width=6) (actual time=0.021..0.023 rows=1 loops=1)
                 Recheck Cond: (c1 >= 0)
                 Heap Blocks: exact=1
                 ->  Bitmap Index Scan on source_pkey  (cost=0.00..9.25 rows=680 width=0) (actual time=0.010..0.011 rows=1 loops=1)
                       Index Cond: (c1 >= 0)
   ->  CTE Scan on b  (cost=0.00..13.60 rows=680 width=12) (actual time=0.054..0.057 rows=1 loops=1)
 Planning Time: 0.237 ms
 Execution Time: 0.173 ms

Putting It All Together

It’s time to demonstrate a Proof Of Concept using pgbench.

SETUP

Initialize database db02:

dropdb --if-exists db02
createdb db02
pgbench -s 10 -i db02
List of relations
 Schema |       Name       | Type  |  Owner   |  Size   | Description
--------+------------------+-------+----------+---------+-------------
 public | pgbench_accounts | table | postgres | 128 MB  |
 public | pgbench_branches | table | postgres | 40 kB   |
 public | pgbench_history  | table | postgres | 0 bytes |
 public | pgbench_tellers  | table | postgres | 40 kB   |

Create the tables parent and child.

NOTE: In order to demonstrate data migration from a deprecated table, table pgbench_accounts is altered by adding OIDs.

create table parent (like pgbench_accounts including all) without oids;
create table child (like pgbench_accounts including all) inherits (parent) without oids;

alter table pgbench_accounts set with oids, inherit parent;

alter table pgbench_accounts rename to pgbench_accounts_deprecated;
alter table parent rename to pgbench_accounts;

TEST

This query is at the heart of the solution. Any exclusively locked record that it tries to move is automatically skipped and a new attempt can be made the next time this script is invoked.

with a as (select * from pgbench_accounts_deprecated order by 1 for update skip locked limit 10),
     b as (delete
                from pgbench_accounts_deprecated
                using a
                where pgbench_accounts_deprecated.aid=a.aid
                returning pgbench_accounts_deprecated.aid,
                          pgbench_accounts_deprecated.bid,
                          pgbench_accounts_deprecated.abalance,
                          pgbench_accounts_deprecated.filler)
insert into child select * from b;

VALIDATE

explain analyze select * from pgbench_accounts order by 1 limit 13;
QUERY PLAN
--------------------------------------------------------------------------------------------------
 Limit  (cost=0.72..1.45 rows=13 width=97) (actual time=0.012..0.016 rows=13 loops=1)
   ->  Merge Append  (cost=0.72..56212.42 rows=1000211 width=97) (actual time=0.011..0.013 rows=13 loops=1)
         Sort Key: pgbench_accounts.aid
         ->  Index Scan using parent_pkey on pgbench_accounts  (cost=0.12..8.14 rows=1 width=352) (actual time=0.002..0.002 rows=0 loops=1)
         ->  Index Scan using pgbench_accounts_pkey on pgbench_accounts_deprecated  (cost=0.42..43225.43 rows=1000000 width=97) (actual time=0.006..0.007 rows=3 loops=1)
         ->  Index Scan using child_pkey on child  (cost=0.14..51.30 rows=210 width=352) (actual time=0.002..0.003 rows=10 loops=1)
 Planning Time: 0.084 ms
 Execution Time: 0.030 ms

MIGRATION SCRIPT

The query has been incorporated into this script moving 1,000 records every 5 seconds.

#!/bin/bash

set -e

export PGHOST=/tmp PGPORT=10011 PGDATABASE=db02 PGUSER=postgres

SLEEP=5
REC=1000

SQL="
with a as (select * from pgbench_accounts_deprecated order by 1 for update skip locked limit $REC),
     b as (delete
                from pgbench_accounts_deprecated
                using a
                where pgbench_accounts_deprecated.aid=a.aid
                returning pgbench_accounts_deprecated.aid,
                          pgbench_accounts_deprecated.bid,
                          pgbench_accounts_deprecated.abalance,
                          pgbench_accounts_deprecated.filler)
insert into child select * from b;

with a(count_child)    as (select count(*) from child),
     b(count_accounts) as (select count(*) from pgbench_accounts_deprecated)
select a.count_child, b.count_accounts from a,b;
"

while true
do
    echo "--- $(date): Executing Query, moving $REC records now ... ---"
    psql <<<"$SQL"
    echo "sleeping: $SLEEP seconds ..." && sleep $SLEEP
done

BENCHMARKING

While the aforementioned script is active, pgbench runs the benchmark:

#
# doesn't block
#
pgbench -c 2 -j 4 --protocol=simple -T 120 db02

CAVEAT

This is a simple and powerful method but there is a limitation: common columns between tables must be of the same datatype.

For example, if column c1 in the table source is of datatype int and you want to migrate the data into table target with the same column c1 but with datatype bigint, then this method won’t work. An alternate solution, however, could take advantage of Updatable Views, which you can read more about here, together with the appropriate triggers and rewrite rules; see the sketch below.
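
Here is one possible shape of that alternate solution, sketched with an INSTEAD OF trigger on a view; all names are illustrative:

-- the columns differ in type, so inheritance can't merge them
create table source(c1 int primary key, c2 int);
create table target(c1 bigint primary key, c2 int);

-- a view papers over the type difference...
create view combined as
    select c1::bigint as c1, c2 from source
    union all
    select c1, c2 from target;

-- ...and an INSTEAD OF trigger redirects new rows into "target"
create or replace function combined_insert()
returns trigger as
$$
BEGIN
    insert into target values (NEW.*);
    return NEW;
END
$$
language plpgsql;

create trigger combined_insert
    instead of insert on combined
    for each row execute function combined_insert();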

Jun 28, 2021

Enabling SSL/TLS Sessions In PgBouncer


PgBouncer is a great piece of technology! Over the years I’ve put it to good use in any number of situations requiring a particular type of control over application processes connecting to a postgres data cluster. However, sometimes it’s been a bit of a challenge when it comes to configuration.

Today, I want to demonstrate one way of conducting a connection session using the Secure Socket Layer, SSL/TLS.

For our purposes we’re going to make the following assumptions:

  • We are using a typical installation found on CentOS 7.
  • PostgreSQL version 13 is used, but essentially any currently supported version of postgres will work.

Here are the steps enabling SSL connection sessions:

  1. Setup postgres
    • install RPM packages
    • setup remote access
    • create a ROLE with remote login privileges
  2. Setup pgbouncer
    • install RPM packages
    • setup the minimal configuration permitting remote login without SSL
  3. Generate SSL/TLS private keys and certificates
    • TLS certificate for postgres
    • TLS certificate for pgbouncer
    • Create a Certificate Authority (CA) capable of signing the aforementioned certificates
  4. Configure for SSL encrypted sessions
    1. postgres
    2. pgbouncer

Step 1: Setup Postgres

Setting up your postgres server is straightforward:

  • Add the appropriate repository for postgres version 13.
    yum install openssl
    yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
    yum update -y
    yum install -y postgresql13-server
  • The data cluster is initialized.
    /usr/pgsql-13/bin/postgresql-13-setup initdb
  • The data cluster configuration files “pg_hba.conf” and “postgresql.auto.conf” are edited. Note that both IPv4 and IPv6 protocols have been configured.
    echo "
    ##############################################################
    #PG_HBA.CONF
    #
    # TYPE DATABASE USER ADDRESS METHOD
    # "local" is for Unix domain socket connections only
    local all all trust
    # IPv4 local connections:
    host all all 127.0.0.1/32 md5
    host all all 0.0.0.0/0 md5
    # IPv6 local connections:
    host all all ::1/128 md5
    host all all ::0/0 md5
    # Allow replication connections from localhost, by a user with the
    # replication privilege.
    local replication all trust
    host replication all 127.0.0.1/32 md5
    host replication all ::1/128 md5
    #
    ##############################################################" > /var/lib/pgsql/13/data/pg_hba.conf

    # update runtime variable "listen_addresses"
    echo "listen_addresses='*' " >> /var/lib/pgsql/13/data/postgresql.auto.conf

    # as root: server start
    systemctl start postgresql-13

Step 2: Setup PgBouncer

# Install the postgres community package connection pooler
yum install -y pgbouncer
# Configure pgbouncer for non-SSL access
mv /etc/pgbouncer/pgbouncer.ini /etc/pgbouncer/pgbouncer.ini_backup

There’s not much to this first iteration configuring pgbouncer. All that is required is to validate that a connection can be made before updating the SSL configuration.

# edit pgbouncer.ini
echo "
[databases]
* = host=localhost

[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 6432
;;any, trust, plain, md5, cert, hba, pam
auth_type = plain
auth_file = /etc/pgbouncer/userlist.txt
admin_users = postgres " > /etc/pgbouncer/pgbouncer.ini

NOTE: best practice recommends hashing the passwords when editing the file userlist.txt. But for our purposes, keeping things simple, we’ll leave the passwords in the clear; a hashing sketch follows the snippet below.

# edit userlist.txt 
echo " 
\"usr1\" \"usr1\" 
\"postgres\" \"postgres\" 
" > /etc/pgbouncer/userlist.txt

# as root: server start 
systemctl start pgbouncer

# test connectivity to the postgres server
psql 'host=localhost dbname=postgres user=postgres password=postgres' -c 'select now()' 
psql 'host=localhost dbname=postgres user=usr1 password=usr1' -c 'select now()' 

# test connectivity to pgbouncer 
psql 'host=localhost dbname=postgres user=postgres password=postgres port=6432' -c 'select now()' 
psql 'host=localhost dbname=postgres user=usr1 password=usr1 port=6432' -c 'select now()'

Step 3: Setup SSL/TLS Certificates

Create a root certificate fit for signing certificate requests:

#!/bin/bash
set -e
#################
HOST='blog'
ROOT='root'
OPENSSL_CNF='/etc/pki/tls/openssl.cnf'
#################

# GENERATE CERTIFICATE REQUEST
openssl req -new -nodes -text -out $ROOT.pem -keyout $ROOT.key -subj "/CN=$ROOT.$HOST"

# SIGN THE REQUEST WITH THE KEY TO CREATE A ROOT CERTIFICATE AUTHORITY
openssl x509 -req -in $ROOT.pem -text -days 3650 -extfile $OPENSSL_CNF -extensions v3_ca -signkey $ROOT.key -out $ROOT.crt

chmod 600 root.key
chmod 664 root.crt root.pem

Create two sets of keys and certificate requests, one for pgbouncer and postgres respectively. The certificate requests are signed with the newly created root certificate:

#!/bin/bash
#
# usage
# ./02.mkcert.sh <key name>
#
set -e
#################
HOST='blog'
SUBJ="/C=US/ST=Washington/L=Seattle/O=Percona/OU=Professional Services/CN=$HOST/emailAddress=robert.bernier@percona.com"
REQ="$1.pem"
KEY="$1.key"
CRT="$1.crt"

ROOT="root"
#################
# GENERATE PRIVATE KEY
openssl genrsa -out $KEY 2048

# GENERATE CERTIFICATE REQUEST
openssl req -new -sha256 -key $KEY -out $REQ -subj "$SUBJ"

#
# CERTIFICATE SIGNED BY ROOT CA
# which was generated by script "mkcert_root.sh"
#
openssl x509 -req -in $REQ -text -days 365 -CA $ROOT.crt -CAkey $ROOT.key -CAcreateserial -out $CRT

chmod 600 $KEY
chmod 664 $REQ
chmod 664 $CRT

Validate the signed certificates:

#!/bin/bash
set -e
# check: private key
for u in $(ls *.key)
do
 echo -e "\n==== PRIVATE KEY: $u ====\n"
 openssl rsa -in $u -check
done

# check: certificate request
for u in $(ls *.pem)
do
 echo -e "\n==== CERTIFICATE REQUEST: $u ====\n"
 openssl req -text -noout -verify -in $u
done

# check: signed certificate
for u in $(ls *.crt)
do
 echo -e "\n==== SIGNED CERTIFICATE: $u ====\n"
 openssl req -text -noout -verify -in $u
done

Step 4: Install Certificates and Configure Servers For SSL Connectivity

Update ownership for keys and certificates:

#!/bin/bash
set -e
chown pgbouncer:pgbouncer pgbouncer.*
chown postgres:postgres server.*

Move keys and certificates into their respective locations:

#!/bin/bash
set -e
# pgbouncer
mv pgbouncer.* /etc/pgbouncer
cp root.crt /etc/pgbouncer

# postgres
mv server.* /var/lib/pgsql/13/data
cp root.crt /var/lib/pgsql/13/data

Update pgbouncer.ini:

echo "
;;;
;;; TLS settings for connecting to backend databases
;;;
;server_tls_sslmode = prefer | require | verify-ca | verify-full
server_tls_sslmode = require
server_tls_ca_file = /etc/pgbouncer/root.crt
server_tls_key_file = /etc/pgbouncer/pgbouncer.key
server_tls_cert_file = /etc/pgbouncer/pgbouncer.crt

;;;
;;; TLS settings for accepting client connections
;;;
;client_tls_sslmode = prefer | require | verify-ca | verify-full
client_tls_sslmode = require
client_tls_ca_file = /etc/pgbouncer/root.crt
client_tls_key_file = /etc/pgbouncer/pgbouncer.key
client_tls_cert_file = /etc/pgbouncer/pgbouncer.crt
" >> /etc/pgbouncer/pgbouncer.ini

Update postgresql.auto.conf: 

echo "
ssl = 'on'
ssl_ca_file = 'root.crt'
" >> /var/lib/pgsql/12/data/postgresql.auto.conf

# update runtime parameters by restarting the postgres server
systemctl restart postgresql-13

# restarting connection pooler
systemctl restart pgbouncer

And validate SSL connectivity:

#
# validate ssl connectivity, note the use of "sslmode"
#

# connect to pgbouncer
psql 'host=blog dbname=postgres user=postgres password=postgres port=6432 sslmode=require' \
    <<<"select 'hello world' as greetings"
/*
  greetings
 -------------
  hello world
*/

# connect to postgres server
psql 'host=blog dbname=postgres user=usr1 password=usr1 port=5432 sslmode=require' \
    <<<"select datname, usename, ssl, client_addr
        from pg_stat_ssl
        join pg_stat_activity on pg_stat_ssl.pid = pg_stat_activity.pid
        where datname is not null
          and usename is not null
        order by 2;"
/* ATTENTION:
-- host name resolution is via IPv6
-- 1st row is a server connection from pgbouncer established by the previous query
-- 2nd row is connection generating the results of this query

 datname | usename | ssl | client_addr
---------+----------+-----+--------------------------
postgres | postgres | t | ::1
postgres | postgres | t | fe80::216:3eff:fec4:7769
*/

CAVEAT: A Few Words About Those Certificates

Using certificates signed by a Certificate Authority offers one the ability to yet go even further than simply enabling SSL sessions. For example, although not covered here, you can dispense using passwords and instead rely on the certificate’s identity as the main authentication mechanism.
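
On the postgres side, that looks something like a pg_hba.conf line using the cert method, which matches the client certificate’s CN against the database user with no password involved (a sketch):

# pg_hba.conf: clients must present a certificate signed by our CA,
# whose CN matches the postgres user they connect as
hostssl all all 0.0.0.0/0 cert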

Remember: you can still conduct SSL sessions via the use of self-signed certificates, it’s just that you can’t leverage the other cool validation methods in postgres.

# #######################################################
# PGBOUNCER.INI
# Only try an SSL connection. If a root CA file is present,
# verify the certificate in the same way as if verify-ca was specified
#
client_tls_sslmode = require
server_tls_sslmode = require
#
# Only try an SSL connection, and verify that the server certificate
# is issued by a trusted certificate authority (CA)
#
client_tls_sslmode = verify-ca
server_tls_sslmode = verify-ca
#
# Only try an SSL connection, verify that the server certificate
# is issued by a trusted CA and
# that the requested server host name
# matches that in the certificate
#
client_tls_sslmode = verify-full
server_tls_sslmode = verify-full

And finally; don’t forget to save the root certificate’s private key, root.key, in a safe place!
