
PostgreSQL streaming physical replication with slots simplifies setup and maintenance. Without slots, you have to estimate disk usage for the Write-Ahead Log (WAL), limit the number of retained segments, and set up a WAL archiving procedure. In this article, you will see how to use replication with slots and understand which problems it can solve.
Introduction
PostgreSQL physical replication is based on WAL. The Write-Ahead Log contains all database changes, saved in 16MB segment files. Normally, PostgreSQL keeps only the segments needed between checkpoints, so with default settings just about 1GB of WAL segment files is retained.
Replication requires all the WAL files created from the moment of the backup up to the current time. Previously, it was necessary to keep a huge archive directory (usually mounted via NFS on all slave servers). The slots feature, introduced in 9.4, allows PostgreSQL to track the latest segment downloaded by a slave server. Now, PostgreSQL can keep all the required segments on disk, even without archiving, if a slave is seriously behind its master due to downtime or networking issues. The drawback: disk space can be consumed without limit in the case of a configuration error. Before continuing, if you need a better understanding of physical and streaming replication, I recommend you read “Streaming Replication with PostgreSQL“.
Create a sandbox with two PostgreSQL servers
To set up replication, you need at least two PostgreSQL servers. I’m using pgcli (pgc) to set up both servers on the same host. It’s easy to install on Linux, Windows, and OS X, and provides the ability to download and run any version of PostgreSQL on your staging server or even on your laptop.
python -c "$(curl -fsSL https://s3.amazonaws.com/pgcentral/install.py)"
mv bigsql master
cp -r master slave
$ cd master
master$ ./pgc install pg10
master$ ./pgc start pg10
$ cd ../slave
slave$ ./pgc install pg10
slave$ ./pgc start pg10
First of all, allow replication connections in pg_hba.conf on the master:
master$ echo "host replication replicator 127.0.0.1/32 md5" >> ./data/pg10/pg_hba.conf
If you are running master and slave on different servers, please replace 127.0.0.1 with the slave’s address.
pgc also creates a shell environment file with PATH and all the other variables required for PostgreSQL. Source it next:
master$ source ./pg10/pg10.env
Allow connections from the remote host, and create a replication user and slot on master:
master$ psql
postgres=# CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'replicator';
CREATE ROLE
postgres=# ALTER SYSTEM SET listen_addresses TO '*';
ALTER SYSTEM
postgres=# SELECT pg_create_physical_replication_slot('slot1');
pg_create_physical_replication_slot
-------------------------------------
(slot1,)
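To confirm that the slot exists and is not yet in use, you can query pg_replication_slots. For a freshly created physical slot you should see something like this (active is false, and restart_lsn stays empty until a consumer connects or reserves WAL):

postgres=# SELECT slot_name, slot_type, active, restart_lsn FROM pg_replication_slots;
 slot_name | slot_type | active | restart_lsn
-----------+-----------+--------+-------------
 slot1     | physical  | f      |
(1 row)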
To apply the system variable changes and the pg_hba.conf changes, restart the PostgreSQL server:
master$ ./pgc stop ; ./pgc start
pg10 stopping
pg10 starting on port 5432
Test table
Create a table with lots of padding on the master:
master$ psql
psql (10.6)
Type "help" for help.
postgres=# CREATE TABLE t(id INT, pad CHAR(200));
postgres=# CREATE INDEX t_id ON t (id);
postgres=# INSERT INTO t SELECT generate_series(1,1000000) AS id, md5((random()*1000000)::text) AS pad;
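Each row carries a 200-character pad column, so a full-table UPDATE rewrites a substantial amount of data and generates plenty of WAL. If you want to see how big the table actually is, pg_size_pretty is a convenient optional check:

postgres=# SELECT pg_size_pretty(pg_total_relation_size('t'));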
Filling WAL with random data
To see the benefits of slots, we should fill the WAL with some data by running transactions. Repeat the update statement below to generate a huge amount of WAL data:
UPDATE t SET pad = md5((random()*1000000)::text);
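Rather than retyping the statement, one option is to let psql repeat it for you with \watch, which re-executes the last query every N seconds (press Ctrl+C to stop once enough WAL has been generated):

postgres=# UPDATE t SET pad = md5((random()*1000000)::text);
postgres=# \watch 1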
Checking the current WAL size
You can check total size for all WAL segments from the shell or from psql:
master$ du -sh data/pg10/pg_wal
17M data/pg10/pg_wal
master$ source ./pg10/pg10.env
master$ psql
postgres=# \! du -sh data/pg10/pg_wal
17M data/pg10/pg_wal
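Starting with PostgreSQL 10, you can also get this figure without shell access by summing the segment sizes reported by pg_ls_waldir():

postgres=# SELECT pg_size_pretty(sum(size)) FROM pg_ls_waldir();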
Check maximum WAL size without slots activated
Before configuring replication, we can fill the WAL with random data and find that after 1.1G, the data/pg10/pg_wal directory size does not increase, regardless of the number of update queries.
postgres=# UPDATE t SET pad = md5((random()*1000000)::text); -- repeat 4 times
postgres=# \! du -sh data/pg10/pg_wal
1.1G data/pg10/pg_wal
postgres=# UPDATE t SET pad = md5((random()*1000000)::text);
postgres=# \! du -sh data/pg10/pg_wal
1.1G data/pg10/pg_wal
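This cap is governed by max_wal_size: once checkpoints keep up, old segments are recycled instead of piling up, so pg_wal hovers around that limit. You can check the value in effect (1GB is the PostgreSQL 10 default, which matches the ~1.1G we observe):

postgres=# SHOW max_wal_size;
 max_wal_size
--------------
 1GB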
Backup master from the slave server
slave$ source ./pg10/pg10.env
slave$ ./pgc stop pg10
slave$ rm -rf data/pg10/*
# If you are running master and slave on different servers, replace 127.0.0.1 with master's IP address.
slave$ PGPASSWORD=replicator pg_basebackup -S slot1 -h 127.0.0.1 -U replicator -p 5432 -D $PGDATA -Fp -P -Xs -Rv
Unfortunately, pg_basebackup hangs with: initiating base backup, waiting for checkpoint to complete.
We can wait for the next checkpoint, or force a checkpoint on the master. A checkpoint happens every checkpoint_timeout seconds, which is set to five minutes by default.
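You can check the interval on the master with SHOW; alternatively, pg_basebackup accepts a --checkpoint=fast option that requests an immediate checkpoint at backup start:

postgres=# SHOW checkpoint_timeout;
 checkpoint_timeout
--------------------
 5min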
Forcing checkpoint on master:
master$ psql
postgres=# CHECKPOINT;
The backup continues on the slave side:
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/92000148 on timeline 1
pg_basebackup: starting background WAL receiver
1073986/1073986 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 0/927FDDE8
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: base backup completed
The backup copies settings from the master, including its TCP port value. I’m running both master and slave on the same host, so I need to change the port in the slave’s postgresql.conf:
slave$ vim data/pg10/postgresql.conf
# old value port = 5432
port = 5433
Now let’s go back to the master and run some queries:
slave$ cd ../master
master$ source pg10/pg10.env
master$ psql
postgres=# UPDATE t SET pad = md5((random()*1000000)::text);
UPDATE t SET pad = md5((random()*1000000)::text);
After running these queries, the WAL size is now 1.4GB, already bigger than the 1.1GB cap we saw earlier. Repeat the updates a few more times and the WAL grows to 2.8GB:
master$ du -sh data/pg10/pg_wal
2.8G data/pg10/pg_wal
Left unchecked, the WAL can keep growing until the whole disk space is consumed.
How do we find out the reason for this?
postgres=# SELECT redo_lsn, slot_name,restart_lsn,
round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
redo_lsn | slot_name | restart_lsn | gb_behind
------------+-----------+-------------+-----------
1/2A400630 | slot1 | 0/92000000 | 2.38
We have one slot that is 2.38GB behind the master.
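If you prefer human-readable units, an equivalent check can be written with pg_wal_lsn_diff() and pg_size_pretty(); this is just another way to express the same gap:

postgres=# SELECT slot_name,
       pg_size_pretty(pg_wal_lsn_diff(redo_lsn, restart_lsn)) AS retained_wal
       FROM pg_control_checkpoint(), pg_replication_slots;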
Let’s repeat the update and check again. The gap has increased:
postgres=# SELECT redo_lsn, slot_name,restart_lsn,
round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
redo_lsn | slot_name | restart_lsn | gb_behind
------------+-----------+-------------+-----------
1/8D400238 | slot1 | 0/92000000 | 3.93
Wait, though: we have already used slot1 for backup! Let’s start the slave:
master$ cd ../slave
slave$ ./pgc start pg10
Replication started without any additional change to recovery.conf:
slave$ cat data/pg10/recovery.conf
standby_mode = 'on'
primary_conninfo = 'user=replicator password=replicator passfile=''/home/pguser/.pgpass'' host=127.0.0.1 port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres target_session_attrs=any'
primary_slot_name = 'slot1'
The pg_basebackup -R option instructs the backup to write a recovery.conf file with all the required options, including primary_slot_name.
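To confirm that the slave is actually streaming through the slot, you can also check pg_stat_replication on the master; a row in the streaming state should appear shortly after the slave starts:

master$ psql
postgres=# SELECT client_addr, state, sent_lsn, replay_lsn FROM pg_stat_replication;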
WAL size, all slots connected
Several seconds after the slave started, the gap had already shrunk:
postgres=# SELECT redo_lsn, slot_name,restart_lsn,
round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
redo_lsn | slot_name | restart_lsn | gb_behind
------------+-----------+-------------+-----------
1/8D400238 | slot1 | 0/9A000000 | 3.80
And a few minutes later:
postgres=# SELECT redo_lsn, slot_name,restart_lsn,
round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
redo_lsn | slot_name | restart_lsn | gb_behind
------------+-----------+-------------+-----------
1/9E5DECE0 | slot1 | 1/9EB17250 | -0.01
postgres=# \! du -sh data/pg10/pg_wal
1.3G data/pg10/pg_wal
Slave server maintenance
Let’s simulate slave server maintenance with ./pgc stop pg10 executed on the slave. We’ll push some data onto the master again (execute the UPDATE query 4 times).
Now, “slot1” is again 2.36GB behind.
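An inactive slot that still holds a restart_lsn is exactly what pins these segments on disk. You can list such slots directly to spot the culprit:

postgres=# SELECT slot_name, active, restart_lsn FROM pg_replication_slots WHERE NOT active;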
Removing unused slots
By now, you might have realized that the problematic slot is not in use. In such cases, you can drop it, so that the retained WAL segments can be recycled:
master$ psql
postgres=# SELECT pg_drop_replication_slot('slot1');
Finally, the disk space is released:
master$ du -sh data/pg10/pg_wal
1.1G data/pg10/pg_wal
Important system variables
- archive_mode – not required for streaming replication with slots
- wal_level – replica by default
- max_wal_senders – 10 by default; a minimum of three for one slave, plus two for each additional slave
- wal_keep_segments – 32 by default; not important, because PostgreSQL keeps all segments required by a slot
- archive_command – not important for streaming replication with slots
- listen_addresses – the only option that you have to change, to allow remote slaves to connect
- hot_standby – on by default; important for enabling reads on the slave
- max_replication_slots – 10 by default (https://www.postgresql.org/docs/10/static/runtime-config-replication.html); you can check all of these settings with the query below
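One convenient way to see the values your servers are actually running with is to query pg_settings for these parameters:

postgres=# SELECT name, setting FROM pg_settings
       WHERE name IN ('wal_level', 'max_wal_senders', 'wal_keep_segments',
                      'listen_addresses', 'hot_standby', 'max_replication_slots');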
Summary
- Physical replication setup is really easy with slots. By default in pg10, all settings are already prepared for replication.
- Be careful with orphaned slots. PostgreSQL will not remove WAL segments for inactive slots with initialized restart_lsn.
- Check pg_replication_slots restart_lsn value and compare it with current redo_lsn.
- Avoid long downtime for slave servers with slots configured.
- Please use meaningful names for slots, as that will simplify debugging.