Apr 03, 2023

PostgreSQL Upgrade: Tricks With OID Columns and Extensions


We are all aware that PostgreSQL 10 has reached end of life, and there is a need to upgrade to the latest version or to an application-compatible PG version.

Below are some of the options available to upgrade to PG11 or above.

  • pg_dump and pg_restore:

          We can refer to PostgreSQL Upgrade Using pg_dump/pg_restore for the dump/restore method.

  • Logical replication:

          We can refer to Replication Between PostgreSQL Versions Using Logical Replication for upgrading via logical replication.

In this blog, as an example, we will use the native pg_upgrade utility to upgrade a PG10 cluster that has tables with OID columns and some extensions to PG13.

We have PostgreSQL 10 running with the below list of extensions and some tables with OID columns:

List of installed extensions:
        Name        | Version |   Schema   |                        Description
--------------------+---------+------------+------------------------------------------------------------
 pg_buffercache     | 1.3     | public     | examine the shared buffer cache
 pg_stat_statements | 1.6     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 postgis            | 3.2.3   | public     | PostGIS geometry and geography spatial types and functions
 postgres_fdw       | 1.0     | public     | foreign-data wrapper for remote PostgreSQL servers
(5 rows)

Table "public.oid_test"
 Column  |       Type        | Collation | Nullable | Default | Storage  | Stats target | Description
---------+-------------------+-----------+----------+---------+----------+--------------+-------------
 id      | integer           |           |          |         | plain    |              |
 name    | character varying |           |          |         | extended |              |
 address | character varying |           |          |         | extended |              |
 phone   | integer           |           |          |         | plain    |              |
Has OIDs: yes

Table "public.craft"
 Column |         Type          | Collation | Nullable | Default | Storage  | Stats target | Description
--------+-----------------------+-----------+----------+---------+----------+--------------+-------------
 name   | character varying(90) |           |          |         | extended |              |
Has OIDs: yes

Table "public.ports"
 Column |         Type          | Collation | Nullable | Default | Storage  | Stats target | Description
--------+-----------------------+-----------+----------+---------+----------+--------------+-------------
 name   | character varying(90) |           |          |         | extended |              |
Indexes:
    "ports_pkey" PRIMARY KEY, btree (oid)
Has OIDs: yes

What is an OID?

Object identifiers (OIDs) are used internally by PostgreSQL as primary keys for various system tables, and the type oid represents an object identifier. Until PostgreSQL 11, user tables could also be declared WITH OIDS, which added a hidden oid system column to every row; this feature was removed in PostgreSQL 12.
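
For illustration, this is how such a table behaves on the PostgreSQL 10 source (a minimal sketch; oid_demo is a hypothetical scratch table):

CREATE TABLE oid_demo (name text) WITH OIDS;
INSERT INTO oid_demo (name) VALUES ('example');
-- The hidden oid column is excluded from SELECT * but can be referenced explicitly:
SELECT oid, name FROM oid_demo;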

Because this support is gone, the pg_upgrade check fails with the below FATAL message if such tables exist:

postgres@xx.xx.xx.xx:~$ /usr/lib/postgresql/13/bin/pg_upgrade --old-bindir=/usr/lib/postgresql/10/bin/ --new-bindir=/usr/lib/postgresql/13/bin --old-datadir=/var/lib/postgresql/10/main --new-datadir=/var/lib/postgresql/13/main --check
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               fatal

Your installation contains tables declared WITH OIDS, which is not
supported anymore.  Consider removing the oid column using
    ALTER TABLE ... SET WITHOUT OIDS;
A list of tables with the problem is in the file:
    tables_with_oids.txt
Failure, exiting

postgres@xx.xx.xx.xx:~$ more tables_with_oids.txt
In database: abs_test
  public.craft
  public.ports
In database: postgres
  public.oid_test

We can use the below SQL to find tables with OIDs; it also generates the DDL to remove them.

SELECT 'ALTER TABLE "' || n.nspname || '"."' || c.relname || '" SET WITHOUT OIDS;'
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE 1=1
  AND c.relkind = 'r'
  AND c.relhasoids = true
  AND n.nspname <> 'pg_catalog' 
order by n.nspname, c.relname;

Connect to each database in the cluster and run the above SQL query:

abs_test=# SELECT 'ALTER TABLE "' || n.nspname || '"."' || c.relname || '" SET WITHOUT OIDS;'
abs_test-# FROM pg_catalog.pg_class c
abs_test-# LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
abs_test-# WHERE 1=1
abs_test-#   AND c.relkind = 'r'
abs_test-#   AND c.relhasoids = true
abs_test-#   AND n.nspname <> 'pg_catalog'
abs_test-# order by n.nspname, c.relname;
                    ?column?
------------------------------------------------
 ALTER TABLE "public"."craft" SET WITHOUT OIDS;
 ALTER TABLE "public"."ports" SET WITHOUT OIDS;
(2 rows)
======================================================
postgres=# SELECT 'ALTER TABLE "' || n.nspname || '"."' || c.relname || '" SET WITHOUT OIDS;'
postgres-# FROM pg_catalog.pg_class c
postgres-# LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
postgres-# WHERE 1=1
postgres-#   AND c.relkind = 'r'
postgres-#   AND c.relhasoids = true
postgres-#   AND n.nspname <> 'pg_catalog'
postgres-# order by n.nspname, c.relname;
                     ?column?
---------------------------------------------------
 ALTER TABLE "public"."oid_test" SET WITHOUT OIDS;
(1 row)

Now we can execute the generated DDL to ALTER these tables:

postgres=#  ALTER TABLE "public"."oid_test" SET WITHOUT OIDS;
ALTER TABLE
postgres=# \c abs_test
You are now connected to database "abs_test" as user "postgres".
abs_test=# ALTER TABLE "public"."craft" SET WITHOUT OIDS;
ALTER TABLE
abs_test=# ALTER TABLE "public"."ports" SET WITHOUT OIDS;
ALTER TABLE
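
In clusters with many affected tables or databases, copy-pasting the generated statements gets tedious. The generator query can instead be fed straight back to the server with psql's \gexec metacommand and looped over every database (a sketch; fix_oids.sql is a hypothetical file name, and local passwordless access as the postgres OS user is assumed):

-- fix_oids.sql: the same generator query, ended with \gexec so psql
-- executes each returned row as a SQL statement
SELECT 'ALTER TABLE "' || n.nspname || '"."' || c.relname || '" SET WITHOUT OIDS'
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r' AND c.relhasoids AND n.nspname <> 'pg_catalog'
\gexec

# Run it against every non-template database:
for db in $(psql -AtX -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
    psql -X -d "$db" -f fix_oids.sql
done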

Once the ALTERs are complete, we can run the pg_upgrade checks once again:

postgres@xx.xx.xx.xx:~$ /usr/lib/postgresql/13/bin/pg_upgrade --old-bindir=/usr/lib/postgresql/10/bin/ --new-bindir=/usr/lib/postgresql/13/bin --old-datadir=/var/lib/postgresql/10/main --new-datadir=/var/lib/postgresql/13/main --old-options='-c config_file=/etc/postgresql/10/main/postgresql.conf' --new-options='-c config_file=/etc/postgresql/13/main/postgresql.conf' --old-port=5432 --new-port=5433 --check
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Checking for presence of required libraries                 fatal

Your installation references loadable libraries that are missing from the
new installation.  You can add these libraries to the new installation,
or remove the functions using them from the old installation.  A list of
problem libraries is in the file:
    loadable_libraries.txt

Failure, exiting

postgres@xx.xx.xx.xx:~$ more loadable_libraries.txt
could not load library "$libdir/postgis-3": ERROR:  could not access file "$libdir/postgis-3": No such file or directory
In database: postgres

The error relates to the PostGIS extension, so we need to install the PostGIS package for PostgreSQL 13:

root@xx.xx.xx.xx:/home/ubuntu# sudo apt-get install postgresql-13-postgis-3
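
To confirm the library is now in place, we can check the PostgreSQL 13 library directory (the path assumes the Debian/Ubuntu packaging layout used in this post):

ls /usr/lib/postgresql/13/lib/postgis-3.so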

Re-run the pg_upgrade checks:

postgres@xx.xx.xx.xx:~$ /usr/lib/postgresql/13/bin/pg_upgrade --old-bindir=/usr/lib/postgresql/10/bin/ --new-bindir=/usr/lib/postgresql/13/bin --old-datadir=/var/lib/postgresql/10/main --new-datadir=/var/lib/postgresql/13/main --old-options='-c config_file=/etc/postgresql/10/main/postgresql.conf' --new-options='-c config_file=/etc/postgresql/13/main/postgresql.conf' --old-port=5432 --new-port=5433 --check
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok
Checking for new cluster tablespace directories             ok

*Clusters are compatible*

Now we will perform the actual upgrade as shown below:

postgres@xx.xx.xx.xx:~$ /usr/lib/postgresql/13/bin/pg_upgrade --old-bindir=/usr/lib/postgresql/10/bin/ --new-bindir=/usr/lib/postgresql/13/bin --old-datadir=/var/lib/postgresql/10/main --new-datadir=/var/lib/postgresql/13/main --old-options='-c config_file=/etc/postgresql/10/main/postgresql.conf' --new-options='-c config_file=/etc/postgresql/13/main/postgresql.conf' --old-port=5432 --new-port=5433
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok
Checking for new cluster tablespace directories             ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting oldest XID for new cluster                          ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok
Checking for extension updates                              notice

Your installation contains extensions that should be updated
with the ALTER EXTENSION command.  The file
    update_extensions.sql
when executed by psql by the database superuser will update
these extensions.
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh
Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

Start the target DB cluster:

postgres@xx.xx.xx.xx:~$ /usr/lib/postgresql/13/bin/pg_ctl -D /var/lib/postgresql/13/main --options='-c config_file=/etc/postgresql/13/main/postgresql.conf' start

Connect to the DB and run ALTER EXTENSION ... UPDATE for the extensions that need it:

postgres=# \dx
                                      List of installed extensions
        Name        | Version |   Schema   |                        Description
--------------------+---------+------------+------------------------------------------------------------
 pg_buffercache     | 1.3     | public     | examine the shared buffer cache
 pg_stat_statements | 1.6     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 postgis            | 3.2.3   | public     | PostGIS geometry and geography spatial types and functions
 postgres_fdw       | 1.0     | public     | foreign-data wrapper for remote PostgreSQL servers
(5 rows)

postgres=# ALTER EXTENSION "postgis" UPDATE;
ALTER EXTENSION
postgres=# ALTER EXTENSION "pg_stat_statements" UPDATE;
ALTER EXTENSION

postgres=# \dx
                                      List of installed extensions
        Name        | Version |   Schema   |                        Description
--------------------+---------+------------+------------------------------------------------------------
 pg_buffercache     | 1.3     | public     | examine the shared buffer cache
 pg_stat_statements | 1.8     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 postgis            | 3.3.2   | public     | PostGIS geometry and geography spatial types and functions
 postgres_fdw       | 1.0     | public     | foreign-data wrapper for remote PostgreSQL servers
(5 rows)
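
Alternatively, the update_extensions.sql file mentioned in the pg_upgrade output contains these same commands and can be executed in one step by a superuser (run from the directory where pg_upgrade was executed, against the new cluster's port):

psql -p 5433 -d postgres -f update_extensions.sql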

pg_upgrade does not transfer optimizer statistics, so we now run an analyze across all databases to rebuild them:

postgres@xx.xx.xx.xx:~$ /usr/lib/postgresql/13/bin/vacuumdb --all --analyze-only -p 5433
vacuumdb: vacuuming database "abs_test"
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming database "template1"

Note: In case we want to retain the OID values post upgrade, we need to follow a couple of extra steps to turn the hidden column into a regular one.

Before the upgrade, on the source database:

ALTER TABLE has_oids ADD newoid bigint;
UPDATE has_oids SET newoid = oid;
ALTER TABLE has_oids ALTER newoid SET NOT NULL;
ALTER TABLE has_oids SET WITHOUT OIDS;

After the upgrade, on the target database:

ALTER TABLE has_oids RENAME newoid TO oid;
CREATE SEQUENCE has_oids_oid_seq OWNED BY has_oids.oid;
ALTER TABLE has_oids ALTER oid SET DEFAULT nextval('has_oids_oid_seq');
SELECT setval('has_oids_oid_seq', ???);
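
The setval() placeholder should be the current maximum of the preserved column so that newly inserted rows continue the numbering (a sketch, assuming the table is non-empty):

SELECT setval('has_oids_oid_seq', (SELECT max(oid) FROM has_oids));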

Conclusion

Logical replication can also be used to upgrade from PostgreSQL 10 to PostgreSQL 13, but it comes with trade-offs, such as requiring a primary key on each table. Hence, we performed the upgrade using pg_upgrade. Though it required downtime, it helped us understand the entire cluster: tables with OID columns had to be altered before the upgrade, and the values can be re-created afterward if we really need OIDs, or the tables can stay without OIDs if the application code has been changed accordingly. We also upgraded the extensions manually.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

Dec 14, 2022

Generate a pgBadger Report From PostgreSQL Flexible Server Logs


In one of our previous posts, Detailed Logging for Enterprise-Grade PostgreSQL, we discussed the parameters that enable detailed logging and the log analyzer pgBadger. In this blog post, we will configure a Microsoft Azure provisioned PostgreSQL Flexible Server to populate logs and generate a pgBadger report from them.

The latest pgBadger utility supports JSON-format logs. Microsoft Azure PostgreSQL Flexible Server does not provide plain PostgreSQL log files as we used to get with a single server or an on-premises environment; instead, the logs are populated in JSON format once that is enabled.

In this blog, we will configure and generate a pgBadger report using the JSON-format logs directly, and, if we are using an older version of the pgBadger utility, by first converting them into regular logs.

Before downloading the logs, we need to tune the logging-related parameters in the postgresql.conf file and reload the configuration. You can download and install pgBadger from here.

You can also go through here and navigate to pgBadger to learn more about it.

Configuration

From the Microsoft Azure Cloud console — https://portal.azure.com/#home  — we need to create a storage account as shown below:

Microsoft Azure Cloud console

Click on the CREATE option and fill in details like name and resource group as shown below:

Configure the existing PostgreSQL Flexible Server to use the storage account for generating PostgreSQL logs.

Select diagnostic settings and add the already-created storage account.

PostgreSQL storage account

 

Log in to Microsoft Azure, navigate to the storage account that has been created for the PostgreSQL Flexible Server logs, and browse to the location of the logs as shown below:

Drill down to the date and time to choose the hourly JSON file(s) required, then right-click the .json file and download the log, which will be in JSON format.

Sample .json logs look like the below:

[sample JSON log entries]

Generate pgBadger report

Use the jump server provisioned for pgBadger, and copy the JSON file from the local machine to the jump server.
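
One way to perform the copy (a sketch; the user, host, and destination path are placeholders):

scp ./PT1H.json postgres@jump-server:/var/lib/postgresql/pgbadger/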

If you are using the latest pgBadger utility, you can pass the JSON-format logs with the -f jsonlog option to generate a pgBadger report:

pgbadger --prefix='%t-%c-' -f jsonlog PT1H.json -o pgbadger_report.html

If you do not have the option to use the latest pgBadger utility, then use the below Linux command to extract the PostgreSQL messages from the JSON file into a regular postgresql.log file. It keeps the colon-separated fields that contain the message value, drops everything from the closing brace onward, and strips the surrounding quotes; the field positions depend on the JSON schema, so they may need adjustment.

cut -f9- -d\: PT1H.json | cut -f1 -d\} | sed -e 's/^."//' -e 's/"$//' > postgresql.log

Generate the pgBadger report from the postgresql.log file and parse it into an HTML file:

pgbadger --prefix='%t-%c-' postgresql.log -o pgbadger_report.html

Copy pgbadger_report.html from the jump server to the local machine and review the pgBadger report.

Conclusion

The pgBadger utility keeps emerging as one of the best log analyzer tools with each release, as it adds more features and functionality. We can configure and generate pgBadger reports from Microsoft Azure Flexible Server logs, and it does make a DBA's life easier!
