Aug 05, 2022

An Illustration of PostgreSQL Bloat

I have been working with Postgres for many years now, most recently as a consultant at Percona, helping companies with new implementations of Postgres as they migrate from Oracle or some other legacy database engine. In some cases, these are Fortune 100 companies with many talented people working for them. However, not all databases work the same way, and one of the most common observations I make when reviewing a Postgres environment for a client is the amount of table bloat and index bloat, along with a lack of understanding of its impact on performance and of how to address it.

I wrote a blog about this topic a few years ago and never gave it much thought after its publication. However, with the large number of companies moving to Postgres and the shortage of genuine Postgres database management skills needed to support fairly large databases, I thought I would rewrite this blog and bring it back to life with some added clarity, to help you understand bloat and why it happens.

What causes bloat?

In PostgreSQL, the culprit is Multi-Version Concurrency Control, commonly referred to as MVCC.

MVCC ensures that a transaction against a database will return only data that’s been committed, in a snapshot, even if other processes are trying to modify that data.

Imagine a database with millions of rows in a table. Anytime you update or delete a row, Postgres has to keep track of that row based on a transaction ID. For example, you may be running a long query with transaction ID 100 while someone named John just updated the same table using transaction ID 101. At this point, since you are still inside transaction 100, which is older than 101 as far as your query is concerned, the changes made by John in transaction 101 are not relevant or visible to your query. You are in your own personal bubble of data, as it was before the data changed. Any new queries from you or anyone else with a transaction ID greater than 101 will see the changes made by John in transaction 101. Once there are no transactions still in play with IDs at or below 100, the old version of the data that your query saw will no longer be needed by the database and will be considered dead but not gone. Hence, bloat!

At a high level, vacuuming is used to free up dead rows in a table so the space can be reused. It also helps you avoid transaction ID wraparound.
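If you ever want to see how many dead rows a table is carrying before vacuuming, the statistics views expose this. A minimal sketch, using the percona table we will create below (any table name works):

SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'percona';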

Let’s go through a few steps to illustrate how all this takes place.

In order for Postgres to know which transactions’ changes should be in the result set of your query, the snapshot records which transactions were committed or still in progress at the time it was taken.

Essentially, if your transaction ID is 100, you will only see data from all transaction IDs leading up to 100. As stated above, you will not see data from transaction ID 101 or greater.
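If you are curious what transaction ID your own session is using, you can ask Postgres directly. A small sketch (txid_current() assigns your transaction an ID if it does not already have one, and txid_current_snapshot() shows the range of transactions your snapshot considers in progress):

BEGIN;
SELECT txid_current();          -- this transaction's ID
SELECT txid_current_snapshot(); -- xmin:xmax:list of in-progress transaction IDs
COMMIT;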

Setting up an example

Let’s start by creating a simple table for our example, called percona:

percona=# CREATE TABLE percona ( col1 int );
CREATE TABLE


percona=# INSERT INTO percona values (1);
INSERT 0 1


percona=# INSERT INTO percona values (2);
INSERT 0 1


percona=# INSERT INTO percona values (3);
INSERT 0 1


percona=# INSERT INTO percona values (4);
INSERT 0 1


percona=# INSERT INTO percona values (5);
INSERT 0 1

You can wrap multiple inserts into a single transaction with BEGIN and COMMIT:

percona=# BEGIN;
BEGIN


percona=*# INSERT INTO percona SELECT generate_series(6,10);
INSERT 0 5

percona=*# COMMIT;
COMMIT

Here we can see the 10 rows we inserted into the table, along with some hidden system columns:

percona=# SELECT xmin, xmax, * FROM percona;
   xmin   | xmax | col1
----------+------+------
 69099597 |    0 |    1
 69099609 |    0 |    2
 69099627 |    0 |    3
 69099655 |    0 |    4
 69099662 |    0 |    5
 69099778 |    0 |    6
 69099778 |    0 |    7
 69099778 |    0 |    8
 69099778 |    0 |    9
 69099778 |    0 |   10
(10 rows)

As you can see, values one through five (in the col1 column) have unique transaction IDs (shown in the xmin column); they were the result of individual INSERT statements, made one after the other. The rows with values six through ten share the same transaction ID of 69099778; they were all part of the one transaction we created with the BEGIN and COMMIT statements.

At this point, you may be asking yourself what this has to do with vacuum or autovacuum. We will get there. First, you need to understand the transaction ID logic, and seeing it laid out as above makes the rest much easier to follow.

How does the table bloat?

In Postgres, the heap is the file containing a table’s rows, stored as variable-sized records, in no particular order, across 8 kB pages. Each row’s location, that is, the page it lives in and its position within that page, is identified by a pointer called the CTID.
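Before inspecting the raw pages, note that you can see each row’s CTID directly, because ctid is a hidden system column just like xmin and xmax:

SELECT ctid, xmin, xmax, col1 FROM percona;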

To view the heap without having to read the raw data from the file, we need to create the following extension inside of our database:

CREATE EXTENSION pageinspect;

Now we can inspect the heap for our newly created table and rows:

percona=# SELECT * FROM heap_page_items(get_raw_page('percona',0));


 lp | lp_off | lp_flags | lp_len |  t_xmin  | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid |   t_data
----+--------+----------+--------+----------+--------+----------+--------+-------------+------------+--------+--------+-------+------------
  1 |   8160 |        1 |     28 | 69099597 |      0 |        0 | (0,1)  |           1 |       2304 |     24 |        |       | \x01000000
  2 |   8128 |        1 |     28 | 69099609 |      0 |        0 | (0,2)  |           1 |       2304 |     24 |        |       | \x02000000
  3 |   8096 |        1 |     28 | 69099627 |      0 |        0 | (0,3)  |           1 |       2304 |     24 |        |       | \x03000000
  4 |   8064 |        1 |     28 | 69099655 |      0 |        0 | (0,4)  |           1 |       2304 |     24 |        |       | \x04000000
  5 |   8032 |        1 |     28 | 69099662 |      0 |        0 | (0,5)  |           1 |       2304 |     24 |        |       | \x05000000
  6 |   8000 |        1 |     28 | 69099778 |      0 |        0 | (0,6)  |           1 |       2304 |     24 |        |       | \x06000000
  7 |   7968 |        1 |     28 | 69099778 |      0 |        0 | (0,7)  |           1 |       2304 |     24 |        |       | \x07000000
  8 |   7936 |        1 |     28 | 69099778 |      0 |        0 | (0,8)  |           1 |       2304 |     24 |        |       | \x08000000
  9 |   7904 |        1 |     28 | 69099778 |      0 |        0 | (0,9)  |           1 |       2304 |     24 |        |       | \x09000000
 10 |   7872 |        1 |     28 | 69099778 |      0 |        0 | (0,10) |           1 |       2304 |     24 |        |       | \x0a000000
(10 rows)

The table above shows 10 entries with a few columns:

  • lp is the ID of the row/tuple (its line pointer within the page)
  • t_xmin is the ID of the transaction that inserted the tuple
  • t_ctid is the pointer to the current version of the tuple
  • t_data is the actual row data

Currently, the pointer for each row is pointing to itself as determined by the form (page,tupleid). Pretty straightforward.

Now, let’s perform a few updates on a specific row. Let’s change the value of five to 20, then to 30, and finally back to five.

percona=# UPDATE percona SET col1 = 20 WHERE col1 = 5;
UPDATE 1


percona=# UPDATE percona SET col1 = 30 WHERE col1 = 20;
UPDATE 1


percona=# UPDATE percona SET col1 = 5 WHERE col1 = 30;
UPDATE 1

These three changes took place under three different transactions.

What does this mean?  We changed the values for a column three times but never added or deleted any rows. So we should still have 10 rows, right?

percona=# SELECT COUNT(*) FROM percona;
 count
-------
    10
(1 row)

Looks as expected. But wait! Let’s look at the heap now. The real data on disk.

percona=# SELECT * FROM heap_page_items(get_raw_page('percona',0));


 lp | lp_off | lp_flags | lp_len |  t_xmin  |  t_xmax  | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid |   t_data
----+--------+----------+--------+----------+----------+----------+--------+-------------+------------+--------+--------+-------+------------
  1 |   8160 |        1 |     28 | 69099597 |        0 |        0 | (0,1)  |           1 |       2304 |     24 |        |       | \x01000000
  2 |   8128 |        1 |     28 | 69099609 |        0 |        0 | (0,2)  |           1 |       2304 |     24 |        |       | \x02000000
  3 |   8096 |        1 |     28 | 69099627 |        0 |        0 | (0,3)  |           1 |       2304 |     24 |        |       | \x03000000
  4 |   8064 |        1 |     28 | 69099655 |        0 |        0 | (0,4)  |           1 |       2304 |     24 |        |       | \x04000000
  5 |   8032 |        1 |     28 | 69099662 | 69103876 |        0 | (0,11) |       16385 |       1280 |     24 |        |       | \x05000000
  6 |   8000 |        1 |     28 | 69099778 |        0 |        0 | (0,6)  |           1 |       2304 |     24 |        |       | \x06000000
  7 |   7968 |        1 |     28 | 69099778 |        0 |        0 | (0,7)  |           1 |       2304 |     24 |        |       | \x07000000
  8 |   7936 |        1 |     28 | 69099778 |        0 |        0 | (0,8)  |           1 |       2304 |     24 |        |       | \x08000000
  9 |   7904 |        1 |     28 | 69099778 |        0 |        0 | (0,9)  |           1 |       2304 |     24 |        |       | \x09000000
 10 |   7872 |        1 |     28 | 69099778 |        0 |        0 | (0,10) |           1 |       2304 |     24 |        |       | \x0a000000
 11 |   7840 |        1 |     28 | 69103876 | 69103916 |        0 | (0,12) |       49153 |       9472 |     24 |        |       | \x14000000
 12 |   7808 |        1 |     28 | 69103916 | 69103962 |        0 | (0,13) |       49153 |       9472 |     24 |        |       | \x1e000000
 13 |   7776 |        1 |     28 | 69103962 |        0 |        0 | (0,13) |       32769 |      10496 |     24 |        |       | \x05000000
(13 rows)

We have 13 rows, not 10. What the heck just happened?

Let’s examine our three separate update transactions (69103876, 69103916, 69103962) to see what’s happening with the heap:

t_xmin (69103876)

  • UPDATE percona SET col1 = 20 WHERE col1 = 5;
  • Logically DELETE tuple ID 5
  • Physically INSERT tuple ID 11
  • UPDATE tuple ID 5 pointer (t_ctid) to point to tuple ID 11

Tuple ID 5 becomes a dead row when its t_xmax is set to the ID of the new transaction, 69103876.

t_xmin (69103916)

  • UPDATE percona SET col1 = 30 WHERE col1 = 20;
  • Logically DELETE tuple ID 11
  • Physically INSERT tuple ID 12
  • UPDATE tuple ID 11 pointer (t_ctid) to point to tuple ID 12

Once again, tuple ID 11 becomes a dead row when its t_xmax is set to the ID of the new transaction, 69103916.

t_xmin (69103962)

  • UPDATE percona SET col1 = 5 WHERE col1 = 30;
  • Logically DELETE tuple ID 12
  • Physically INSERT tuple ID 13
  • UPDATE tuple ID 12 pointer (t_ctid) to point to tuple ID 13

Tuple ID 13 is live and visible to other transactions. It has no t_xmax and the t_ctid (0,13) points to itself.

The key takeaway from this is that we have not added or deleted any rows in our table. We still see 10 in the count, but our heap has grown to 13 tuples because each of the three update transactions left behind an old row version.

At a very high level, this is how PostgreSQL implements MVCC and why we have table bloat in our heap. In essence, every change to the data results in a new row reflecting its latest state. The old row versions need to be cleaned up or reused for efficiency.
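If you want to quantify bloat on a real table rather than eyeballing the heap, the contrib extension pgstattuple reports live versus dead tuple counts and free space. A minimal sketch, assuming the contrib packages are installed:

CREATE EXTENSION pgstattuple;

SELECT tuple_count, dead_tuple_count, dead_tuple_percent, free_percent
FROM pgstattuple('percona');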

Vacuuming the table

The way to deal with the table bloat is to vacuum the table:

percona=# vacuum percona;
VACUUM

Now, let’s examine the heap again:

percona=# SELECT * FROM heap_page_items(get_raw_page('percona',0));

lp | lp_off | lp_flags | lp_len |  t_xmin  | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid |   t_data
----+--------+----------+--------+----------+--------+----------+--------+-------------+------------+--------+--------+-------+------------
  1 |   8160 |        1 |     28 | 69099597 |      0 |        0 | (0,1)  |           1 |       2304 |     24 |        |       | \x01000000
  2 |   8128 |        1 |     28 | 69099609 |      0 |        0 | (0,2)  |           1 |       2304 |     24 |        |       | \x02000000
  3 |   8096 |        1 |     28 | 69099627 |      0 |        0 | (0,3)  |           1 |       2304 |     24 |        |       | \x03000000
  4 |   8064 |        1 |     28 | 69099655 |      0 |        0 | (0,4)  |           1 |       2304 |     24 |        |       | \x04000000
  5 |     13 |        2 |      0 |          |        |          |        |             |            |        |        |       |
  6 |   8032 |        1 |     28 | 69099778 |      0 |        0 | (0,6)  |           1 |       2304 |     24 |        |       | \x06000000
  7 |   8000 |        1 |     28 | 69099778 |      0 |        0 | (0,7)  |           1 |       2304 |     24 |        |       | \x07000000
  8 |   7968 |        1 |     28 | 69099778 |      0 |        0 | (0,8)  |           1 |       2304 |     24 |        |       | \x08000000
  9 |   7936 |        1 |     28 | 69099778 |      0 |        0 | (0,9)  |           1 |       2304 |     24 |        |       | \x09000000
 10 |   7904 |        1 |     28 | 69099778 |      0 |        0 | (0,10) |           1 |       2304 |     24 |        |       | \x0a000000
 11 |      0 |        0 |      0 |          |        |          |        |             |            |        |        |       |
 12 |      0 |        0 |      0 |          |        |          |        |             |            |        |        |       |
 13 |   7872 |        1 |     28 | 69103962 |      0 |        0 | (0,13) |       32769 |      10496 |     24 |        |       | \x05000000
(13 rows)

After vacuuming the table, slots 11 and 12 are now free to be used again, while slot five has become a simple redirect to tuple 13.

So let’s insert another row, with the value of 11 and see what happens:

percona=# INSERT INTO percona values (11);
INSERT 0 1

Let’s examine the heap once more:

percona=# SELECT * FROM heap_page_items(get_raw_page('percona',0));

 lp | lp_off | lp_flags | lp_len |  t_xmin  | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid |   t_data
----+--------+----------+--------+----------+--------+----------+--------+-------------+------------+--------+--------+-------+------------
  1 |   8160 |        1 |     28 | 69099597 |      0 |        0 | (0,1)  |           1 |       2304 |     24 |        |       | \x01000000
  2 |   8128 |        1 |     28 | 69099609 |      0 |        0 | (0,2)  |           1 |       2304 |     24 |        |       | \x02000000
  3 |   8096 |        1 |     28 | 69099627 |      0 |        0 | (0,3)  |           1 |       2304 |     24 |        |       | \x03000000
  4 |   8064 |        1 |     28 | 69099655 |      0 |        0 | (0,4)  |           1 |       2304 |     24 |        |       | \x04000000
  5 |     13 |        2 |      0 |          |        |          |        |             |            |        |        |       |
  6 |   8032 |        1 |     28 | 69099778 |      0 |        0 | (0,6)  |           1 |       2304 |     24 |        |       | \x06000000
  7 |   8000 |        1 |     28 | 69099778 |      0 |        0 | (0,7)  |           1 |       2304 |     24 |        |       | \x07000000
  8 |   7968 |        1 |     28 | 69099778 |      0 |        0 | (0,8)  |           1 |       2304 |     24 |        |       | \x08000000
  9 |   7936 |        1 |     28 | 69099778 |      0 |        0 | (0,9)  |           1 |       2304 |     24 |        |       | \x09000000
 10 |   7904 |        1 |     28 | 69099778 |      0 |        0 | (0,10) |           1 |       2304 |     24 |        |       | \x0a000000
 11 |   7840 |        1 |     28 | 69750201 |      0 |        0 | (0,11) |           1 |       2048 |     24 |        |       | \x0b000000
 12 |      0 |        0 |      0 |          |        |          |        |             |            |        |        |       |
 13 |   7872 |        1 |     28 | 69103962 |      0 |        0 | (0,13) |       32769 |      10496 |     24 |        |       | \x05000000
(13 rows)

Our new tuple (with transaction ID 69750201) reused tuple 11, and now the tuple 11 pointer (0,11) is pointing to itself.

As you can see, the heap did not grow to accommodate the new row. It reused an open slot in the existing page that was made available when we vacuumed the table and freed up the dead rows (rows that are no longer visible to any transaction).
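One way to confirm this is to check the physical size of the table before and after the insert; in this example it stays at a single 8 kB page:

SELECT pg_size_pretty(pg_relation_size('percona'));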

And there you have it. A step-by-step illustration of how bloat occurs in PostgreSQL!

Mar 11, 2022

PostgreSQL 101 for Non-Postgres DBAs (Simple Backup and Restore)


It’s no surprise that PostgreSQL is becoming the de facto go-to database for many. Just a few of the many reasons include advanced technology, scalability, and cost savings. With that said, we see many experienced DBAs being tasked with migrating existing databases from Oracle, MySQL, SQL Server, and others to Postgres. Fundamentally speaking, a good DBA should have a solid conceptual understanding of database fundamentals, but translating your existing way of performing daily tasks from one technology to another takes some guidance. With that in mind, this blog is addressed to those experienced DBAs who have a well-known and proven set of routines in their old technology and want to know how to perform them in Postgres.

Postgres offers several utilities for performing both physical and logical backups and restores. We will talk about these and how to use them here.

For the purpose of this mini-tutorial, we are assuming all tasks will be performed by the user “postgres”, which has superuser privileges on the database unless otherwise noted.

I Want To …..

 

Logical Backups

Logical backups are made with native tools such as pg_dump and pg_dumpall. These tools are included in the default bin directory of your Postgres installation, such as /usr/pgsql-11/bin. If that bin directory is not in your PATH, you may want to add it.

There are many options that can be used when running these tools to customize your data dumps. So, we will cover a few scenarios in this blog.

 

Physical Backups

Physical backups are made with native tools such as pg_basebackup. Again, these tools are included in the default bin directory of your Postgres installation, such as /usr/pgsql-11/bin. If that bin directory is not in your PATH, you may want to add it.

You can also use system tools for physical backups such as tar or other archiving tools at your disposal.

 

Prerequisite for Remote Backups

The source database server has to allow a remote connection for the user performing the task. Remember, we are assuming for our examples that the user is postgres. 

 

  1. Create an entry in the pg_hba.conf file similar to the following under the IPv4 connections section.

host    all        postgres        0.0.0.0/0               md5

  2. Edit your postgresql.conf file, or whatever file you load for runtime configs, and change the listen_addresses parameter to the following:

listen_addresses = '*'

Once the above changes are made, reload your configuration file or restart postgres. 
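For reference, a reload is enough to pick up pg_hba.conf changes, while a change to listen_addresses requires a restart. A reload can be done from psql as a superuser:

SELECT pg_reload_conf();

or from the shell as the postgres user (the data directory path is a placeholder for your own):

pg_ctl reload -D /path/to/data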

The above examples are pretty open. For security, you will most likely restrict the entry in the pg_hba.conf file to a more specific IP address or subnet.

In our example, we are allowing postgres to connect from anywhere with password authentication; hence the 0.0.0.0/0 and md5. You could change the 0.0.0.0/0 to the address of the other database server, such as 192.168.1.2/32. We also specify the user postgres with the -U option since it is the user we opened up in the pg_hba.conf file.

If the user running the commands has different credentials on the source and target servers, you will need to save the password to .pgpass or set the environment variable PGPASSWORD so you are not prompted for the password whenever it is needed.
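For reference, a .pgpass entry is a single line of the form hostname:port:database:username:password, and the file must be readable only by its owner. A hedged sketch with placeholder values:

echo '192.168.1.2:5432:*:postgres:mysecretpassword' >> ~/.pgpass
chmod 600 ~/.pgpass

Or, for a single session only:

export PGPASSWORD='mysecretpassword'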

I want to dump my entire database, including users and credentials, to a file.

This is quite a simple task to perform if you have the correct privileges and configuration settings along with the storage needed depending on your database size.

Performing the Data Dump Locally

If you have only one instance of postgres running on your server and have minimal / default configuration for the pg_hba.conf file and your path includes the postgres bin directory, all you need to do as user postgres is ….

pg_dumpall > savedfile.sql

The above works well for small databases where you have space on the local server and just want a quick and simple dump of your database.

If you are running multiple instances on the local server and want to dump a specific instance all you do is …

pg_dumpall -p port > savedfile.sql

Replace the port above with the port number the instance you wish to dump is running on.

Performing the data dump remotely.

Although this is pretty much the same thing as on a local server, there are a few things you need to have configured in order to execute this data dump remotely. Make sure the prerequisites above have been addressed.

Now from our remote client or server, we can run the following commands as long as the postgres tools are installed.

pg_dumpall -h host -p port -U postgres > savedfile.sql

Replace the host above with the address of the source DB and port with the port number it is running on.

There are other flags and options you can use; have a look at the pg_dumpall documentation for the full list of options.
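As one example of those options, --clean (together with its companion --if-exists) makes the dump include DROP statements before the CREATEs, which is handy when you plan to restore over an existing cluster; the host and port below are placeholders as before:

pg_dumpall --clean --if-exists -h host -p port -U postgres > savedfile.sql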

I want to dump a specific database only.

Performing the data dump locally.

Similar to the other commands with a slight variation

pg_dump -d dbname > savedfile.sql

Like in other scenarios, the above works well for small databases where you have space on the local server and just want a quick and simple dump of your database.

If you are running multiple instances on the local server and want to dump from a specific instance all you do is …

pg_dump -p port -d dbname > savedfile.sql

I want to dump a specific database and specific table or tables only.

On a local server

Similar to the other commands with a slight variation

pg_dump -d dbname -t tablename > savedfile.sql

Like in other scenarios, the above works well for small databases where you have space on the local server and just want a quick and simple dump of your database.

If you are running multiple instances on the local server and want to dump from a specific instance all you do is …

pg_dump -p port -d dbname -t tablename > savedfile.sql

If you want more than one table, list their names or patterns like so …

pg_dump -d dbname -t table1 -t table2 -t table3 > savedfile.sql

From a remote server

Just like in previous examples, specify the connection options with -h host -p port

I only want to dump the users and credentials to restore them somewhere else.

This is just as simple as the above data dumps. However, keep in mind that this will not get you what you need if your instance is an RDS instance. Amazon really locks down what you can do as a privileged user on an RDS instance, even as the postgres user.

From a local server

pg_dumpall -g > users.sql

From a remote server or client (saves the file locally)

pg_dumpall -g -h host -p port -U postgres > users.sql

You can edit the above dump file and remove any user you do not wish to apply when you restore the file to a different server.

Restoring a Logical Dump

Restoring the newly created backup is a simple task. There are several ways to accomplish this, and we will go over a few of them just to get you going. Keep in mind there is a pg_restore utility as well, which we will not be addressing in this blog; pg_restore lets you get more creative with your dumps and imports.

Again, we assume all actions here are executed as user postgres.

Restoring a pg_dumpall to a local server from a saved file.

psql postgres -f savedfile.sql

Restoring a pg_dumpall to a remote server from a saved file.

psql -h host -p port postgres -f savedfile.sql

Restoring a pg_dumpall to a remote server from the source server.

pg_dumpall | psql -h host -p port postgres

Restoring from a pg_dumpall from a remote server to a remote server.

pg_dumpall -h src_host -p src_port | psql -h target_host -p target_port postgres

Restoring a pg_dump of a specific database from a saved file.

psql dbname -f savedfile.sql

Restoring a pg_dump of a specific database to a remote server from a saved file.

psql -h host -p port dbname -f savedfile.sql

Restoring a pg_dump with a different owner on the target.

Sometimes you don’t have access to the users and credentials on a source database or want them to be different on your target/restored database. Follow these steps to achieve this.

  1. Perform your pg_dump command as noted previously but add the --no-owner option.
  2. Perform the restore as noted above but run the commands as the new owner. 

pg_dump -d database --no-owner > savedfile.sql

psql -U newowner dbname -f savedfile.sql

Remember for remote servers as noted in the other examples, use the -h host -p port and any other connection string option needed.

If the user’s credentials are different and you are prompted for passwords,  read the prerequisites section of this blog.

Let’s Get Physical with pg_basebackup

A common way of performing physical backups in Postgres is with the use of pg_basebackup. This tool allows us to generate a physical backup with the necessary WAL files needed to restore or stand up a stand-alone instance.

There are many flags and options for this tool including compression but for the sake of this blog, we will focus on the basic use of pg_basebackup with minimal options.

For the purpose of this document, we will cover physical backups using the native pg_basebackup tool.

NOTE: Typically, one specifies the destination path for the physical backup. This is noted with the -D option of pg_basebackup.

Saving the backup to destination path

pg_basebackup -D /destination/path -Pv --checkpoint=fast

Sending the backup as tar files to the directory path specified

pg_basebackup -D /destination/path -Pv --checkpoint=fast -F t

The above will generate two tar files: base.tar and pg_wal.tar.

Create a Physical Backup From a Remote Instance

Make sure you have set up the prerequisites explained earlier in this post.

The only difference between remote and local execution is that for remote, we specify the source server with -h remote_host and the port Postgres is running on with -p remote_port.

pg_basebackup -h host -p port -D /destination/path -Pv --checkpoint=fast

If the user executing pg_basebackup is not already trusted to connect from that server, add the -U username option. For example …

pg_basebackup -U postgres -h host -p port -D /destination/path -Pv --checkpoint=fast

Stand up a Local Instance of Postgres using pg_basebackup

Tar file method

If you execute pg_basebackup with the tar file option, it will generate two tar files: base.tar and pg_wal.tar.

Extract base.tar into the new data directory. If you do not have other WAL files to restore, extract pg_wal.tar and place the WAL segment files in the pg_wal directory, as sketched below.
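In shell terms, that is roughly the following, assuming /destination/path is the new, empty data directory and the commands are run as the postgres user:

mkdir -p /destination/path
chmod 700 /destination/path
tar -xf base.tar -C /destination/path
tar -xf pg_wal.tar -C /destination/path/pg_wal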

Directory method

Make sure the directory where the new cluster will be located exists with the proper permissions and storage capacity. Remember, this will consume the same amount of space as the source database.

Define where the target database will reside.

  • mkdir -p /destination/path
  • chmod 700 /destination/path
  • chown postgres:postgres  /destination/path

As user postgres, run the following command assuming pg_basebackup is in your path.

Source database is local

pg_basebackup -D /destination/path -Pv --checkpoint=fast -X stream

Source database is on a remote server

pg_basebackup -h host -p port -D /destination/path -Pv --checkpoint=fast -X stream

What does the above do?

  1. Assumes Postgres is running on the localhost using the default port of 5432 and that the user executing it has the necessary privileges to do so.
  2. Initiates a pg_basebackup of the currently running instance of Postgres.
  3. Saves the copy to the path specified after -D.
  4. Optionally, -Pv shows the progress and verbose output of the process.
  5. Performs a fast checkpoint rather than a spread-out one, which makes the backup start sooner.
  6. Streams the WAL changes that happen on the running cluster and saves them in the new cluster. This allows starting the new cluster without additional WALs.

The above applies whether the database is remote or not.

Starting the separate instance of postgres

When the pg_basebackup completes, to start up the new local instance, go into the new data directory /destination/path and modify the postgresql.conf file (or whatever file you defined your previous port in):

  • Set the port to a number not in use, such as 5433 (e.g., port = 5433).
  • Modify any memory parameters as necessary.
  • Make sure, if archiving is enabled, that it archives to a different location than the original cluster. A sketch of these edits follows this list.
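A hedged sketch of what those postgresql.conf edits might look like for the copy; the port number and archive location are placeholders you should adjust:

port = 5433
# only if archiving is enabled; the location must differ from the source cluster's archive
archive_mode = on
archive_command = 'cp %p /some/other/archive/location/%f'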

You can then proceed to start the new instance of postgres as follows:

pg_ctl -D /destination/path -o "-p 5433" start

You should now be able to connect to the new cluster with the same credentials as the source cluster:

psql -p 5433

Stand up a remote cluster

This process is pretty much identical to the local cluster process above. The only difference is you will specify a host and credentials.

From the remote target host 

pg_basebackup -h source_server -p port -U username -D /destination/path -Pv --checkpoint=fast -X stream

As you can see, we are simply adding a connection string to the original command we ran for the local copy. This takes the backup from the remote source host and saves it to the local destination path.

Once the copy is on the target host, change the port if necessary and, if archiving is enabled, the archive location, as mentioned above.

Last words

The above examples are meant to get you started with basic backups and restores. They do not cover more advanced options such as archiving of WAL files, point-in-time recovery, and so on; those will be addressed in a future blog, or you can find them by searching online. Furthermore, using backups to stand up replicas will also be addressed in future blog postings.
