Nov
09
2020
--

Deep Dive Into PostgreSQL Indexes – Free Course at Percona University Online

Percona University Online has released its second free course, “A Deep Dive Into PostgreSQL Indexes” by Ibrar Ahmed, Senior Software Engineer at Percona.

Indexes are a basic feature of relational databases, and PostgreSQL offers a rich collection of index options for developers and designers. But users need to understand the basic concept of indexes before they can compare the different index types and how they apply to different application scenarios. Only then can you make the best decisions about index strategy and design.

This course consists of 13 short videos. Pass a brief quiz afterward and receive a Certificate of Completion from Percona. Begin the course on Google Classroom here. If you’re prompted for a class identifier, enter code xk5k6fz. The lesson list:

  • Lesson 1: Overview
  • Lesson 2: Heap vs Index
  • Lesson 3: How to Create Indexes?
  • Lesson 4: How to create an Expression Index?
  • Lesson 5: Advantages of a Partial Index
  • Lesson 6: Index Types: B-Tree
  • Lesson 7: Index Types: What is the HASH Index?
  • Lesson 8: Index Types: What is the BRIN Index?
  • Lesson 9: Index Types: GIN and GIST
  • Lesson 10: How to use different types of Indexes?
  • Lesson 11: Index Only Scans
  • Lesson 12: How to Find Duplicate Indexes?
  • Lesson 13: Unused Indexes
  • Course Quiz & Certificate of Completion
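As a small preview of the material, here is a sketch of three of the index techniques the lessons cover. The `users` table and its columns are hypothetical, invented purely for illustration:

```sql
-- hypothetical table for illustration
CREATE TABLE users (id serial PRIMARY KEY, email text, active boolean);

-- plain B-Tree index (Lesson 6), the default index type
CREATE INDEX idx_users_email ON users (email);

-- expression index (Lesson 4): matches queries on lower(email)
CREATE INDEX idx_users_email_lower ON users (lower(email));

-- partial index (Lesson 5): indexes only the rows you query most
CREATE INDEX idx_users_active ON users (id) WHERE active;
```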

You can view the lessons directly on YouTube but you’re eligible for the Certificate of Completion only by attending via Google Classroom. For questions or suggestions, visit the Percona Forum Training category.

In case you missed it, see our initial Percona University Online course How to Upgrade to MySQL 8.0.

PostgreSQL Indexes

Nov
05
2020
--

Deploying Percona Monitoring and Management 2 Without Access to the Internet


Normally it is quite easy to deploy Percona Monitoring and Management (PMM) Server as a Docker container, as per the official documentation. However, when working in very restrictive environments, the server may not have access to the public Internet, so pulling the image from Docker Hub is not possible. Fortunately, there are a few workarounds to get past this problem.

As previously described by Agustin for PMM 1, one way is to Docker pull and save the image somewhere else. Here I will show you another way to do it that doesn’t require a separate server running Docker, and also provide updated instructions for PMM 2.

1. Download the PMM Server image directly from the Percona website. Select the desired Version and choose ‘Server – Docker Image’ from the drop-down box, for example:

[Screenshot: Percona Monitoring and Management download page]

2. Copy the downloaded .docker file to the PMM server, for example via SCP:

scp -i my_private_key pmm-server-2.11.1.docker my_user@my_secure_server:

3. Load the image to the local Docker repository on your PMM server

sudo docker load < pmm-server-2.11.1.docker

4. Create the persistent data container. Normally we would use percona/pmm-server:2 as the image tag, but since we loaded a specific version we need to specify it as follows:

sudo docker create \
-v /srv \
--name pmm-data \
percona/pmm-server:2.11.1 \
/bin/true

5. If this is a production deployment, it is a good idea to move the data container to a dedicated volume.
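One way to do this (a sketch, not an official procedure) is to back /srv with a named Docker volume instead of an anonymous one; the volume name pmm-data-vol is arbitrary:

```shell
# create a dedicated named volume (the name is arbitrary)
sudo docker volume create pmm-data-vol

# create the data container with /srv backed by that volume
sudo docker create \
-v pmm-data-vol:/srv \
--name pmm-data \
percona/pmm-server:2.11.1 \
/bin/true
```

The server container in the next step attaches to it via --volumes-from exactly as shown there.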

6. Create the server container (again, specifying the image version we have loaded before):

sudo docker run \
--detach \
--restart always \
--publish 80:80 \
--publish 443:443 \
--volumes-from pmm-data \
--name pmm-server \
percona/pmm-server:2.11.1

7. Verify PMM Server installation by visiting server_hostname:80 or server_hostname:443 and reset the admin password. The default user/password is admin/admin.

All that is left now is to install the clients and start using your brand new Percona Monitoring and Management instance. If you have questions or run into trouble, feel free to reach out to us at the Percona Forums.

Oct
19
2020
--

Announcing Percona Distribution for PostgreSQL 13

Percona is pleased to announce the release of Percona Distribution for PostgreSQL 13. This release is available for immediate download and includes all of the latest features of PostgreSQL 13 Core Distribution.

Reduce Resource Usage

This release includes enhancements to existing features that deliver a more streamlined and optimized environment, helping you to reduce overall resource usage. This includes minimizing bloat with parallel vacuum, which helps the vacuum process run more quickly and efficiently, improving the use of storage and improving performance. Deduplication of B-Tree indexes also saves storage space by reducing the amount of storage needed for indexes. Resources are also used more efficiently during query execution, with additional opportunities to benefit from partition enhancements. This means that your queries process less data and free up resources.
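Parallel vacuum, for instance, can be invoked explicitly; the table name and worker count below are illustrative, not taken from the release:

```sql
-- ask VACUUM to clean this table's indexes with up to 4 parallel workers
-- (my_table is a hypothetical table name)
VACUUM (PARALLEL 4) my_table;
```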

Improve Response Times

With enhancements that deliver more efficient use of indexes and smarter sorting capabilities, you can expect better overall performance and improved response times. This includes incremental sort, which avoids re-sorting data that has already been sorted and delivers query results faster. New partitionwise joins break down a join between partitioned tables into joins between their partitions, so smaller data volumes are processed more quickly. Improvements to GiST, SP-GiST, and GIN indexes provide overall performance improvement and speed up query processing. There are also some new PostgreSQL commands and authentication changes.
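Partitionwise joins are governed by a planner setting that ships disabled; here is a quick sketch, where t1 and t2 are hypothetical tables partitioned on the same key:

```sql
SET enable_partitionwise_join = on;  -- off by default

-- with the setting on, EXPLAIN shows joins between matching partitions
EXPLAIN
SELECT *
FROM t1
JOIN t2 USING (part_key);
```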

Additionally, we are including a technical preview of pg_stat_monitor, a custom extension written by Percona.  This extension gathers and aggregates query performance data, enabling better and faster query analysis. It can be used alone, but its capabilities are best used when combined with the latest release of Percona Monitoring and Management. This enables you to easily analyze your PostgreSQL queries, using the pg_stat_monitor metrics, to quickly identify and remedy issues with scaling, bottlenecks, and potential outages.

Percona is also planning to extend our Managed Services offerings to include PostgreSQL early next year. This means that you will be able to have your MySQL, MongoDB, PostgreSQL, and MariaDB databases all managed by a single source. As always, we continue to provide support for these open-source database products and offer professional consulting and training services.

For more details on PostgreSQL 13, check out the release notes from PostgreSQL. To learn more about Percona Distribution for PostgreSQL, check out our release notes.

To provide feedback on the technical preview of pg_stat_monitor or its integration with Percona Monitoring and Management, visit the Percona Community Forum. Bugs should be submitted through Jira.

Download Percona Distribution for PostgreSQL

Download Percona Monitoring and Management

Oct
14
2020
--

Announcing pg_stat_monitor Tech Preview: Get Better Insights Into Query Performance in PostgreSQL


I am very passionate about database observability, and I believe query performance observability is the most important insight you can get into your database. Why? Because from an application developer’s point of view, once a database is provisioned and you can connect to it, responding to your queries promptly and correctly is essentially all that you need from the database. This applies both to databases you deploy on-prem and to cloud-based DBaaS offerings.

PostgreSQL has a fantastic extension for capturing query performance called pg_stat_statements, which captures a lot of query execution details; however, it does not capture all the details we wanted in order to provide deep query performance insights in Percona Monitoring and Management. So, we developed a new pg_stat_monitor plugin, which has slightly different design goals and usage patterns while providing everything you’re used to from pg_stat_statements, and more.

At this point, pg_stat_monitor is released as a separate project from Percona, so it is easier for us to experiment with new approaches and ideas. At the same time, we’re very much looking forward to working with the PostgreSQL community to see if there are any features that have enough consensus to be incorporated into pg_stat_statements, too.

pg_stat_monitor Differences

  • Time Bucketing: Instead of supplying one set of ever-increasing counters, pg_stat_monitor computes stats for a configured number of time intervals. This allows for much better data accuracy, especially in the case of high-resolution monitoring or an unreliable network.
  • Multi-Dimensional grouping:  While pg_stat_statements groups counters by (userid, dbid, queryid),  pg_stat_monitor uses (userid, clientip, dbid, queryid). This allows you to drill down into the performance of queries coming from particular client addresses, which we at Percona have found to be very valuable in a number of cases.
  • Capture Actual Parameters in the Queries:  pg_stat_monitor allows you to choose if you want to see queries with placeholders for parameters or actual query examples.  We found having the full query example is very helpful, as you can run EXPLAIN on it or easily play with modifying the query to make it run better, as well as making communication about the query clearer when discussing with other DBAs and application developers.
  • Store Information about tables accessed by statement: This allows us to identify all queries which accessed a given table easily.  Such an approach is more reliable than parsing queries to extract such information.
  • Response time distribution histogram:  While min/max/avg query execution statistics are great, especially when computed over a short time bucket, they are hard to understand when trying to look at them for a long period of time. For this reason, we have added a query response time histogram which can offer better insights.
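With the extension installed, these dimensions show up directly in the view it exposes. A hedged sketch of a query against it (column names reflect the preview at the time of writing and may change between releases):

```sql
-- per-bucket, per-client aggregation, assuming the preview's column names
SELECT bucket, client_ip, queryid, query, calls
FROM pg_stat_monitor
ORDER BY bucket, calls DESC
LIMIT 10;
```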

It is important to note that we are able to achieve these additional features while having performance overhead comparable to the original pg_stat_statements extension.

pg_stat_monitor Planned Features

There are a number of features we want to implement but have not yet figured out how to achieve them with the PostgreSQL extension (and if it is possible at all without patching PostgreSQL code). If you have ideas on how to achieve any of these, let us know.

  • Capturing All the Queries:  Capture all queries (not only successful queries) so the queries which terminated with errors are not missed (or require another data source to capture). Once we have this, we can measure performance for successful and failed queries separately, so that an elevated failure rate can be seen as a different signal than different query performance.
  • Errors and Warnings Histogram:  Not all queries succeed and when they fail you really want to know why. We would like to see not only the number of times a given query failed but what the cause of failure was. The same query may fail due to a permission error or foreign key violation which requires an entirely different action to remediate.  It is also very helpful to know which query triggers particular warnings.
  • Accurate Wait Event Attribution to Queries:  For a given query type it would be great to see where its response time comes from. Wait Events is one way to capture this and currently, query attribution can be done through a high-frequency sampling of the pg_stat_activity table. However, it is not accurate and not scalable with a large number of active backends.
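The pg_stat_activity sampling mentioned in the last bullet looks roughly like this when done by hand; it is exactly the imprecise approach that a built-in feature would improve on, since short waits fall between samples:

```sql
-- poll at high frequency and aggregate externally; misses short waits
SELECT pid, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE state = 'active';
```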

pg_stat_monitor is now available as a Technical Preview, and we’re releasing it to get feedback from the community on what we’re doing right and what we should do differently before we release a generally available version to be supported for years to come. Please check it out, drop us a note, file an issue, or make a pull request!


Our white paper “Why Choose PostgreSQL?” looks at the features and benefits of PostgreSQL and presents some practical usage examples. We also examine how PostgreSQL can be useful for companies looking to migrate from Oracle.

Download PDF

Oct
12
2020
--

Ask An Expert On Percona’s Community Forum!

Have you recently visited Percona’s Community Forum? It’s your hub for direct Q&A with top database experts, including Percona CEO Peter Zaitsev and CTO Vadim Tkachenko. Last quarter over 450 users participated, including 45 engineers from Percona’s staff. Since it was first launched in 2006 and revamped earlier this year, our Forum has built up a mountain of easily-searched database expertise.

This free online community is open to everyone from newbies to maestros. Register as a user and then ask your question, or visit the unanswered question list and help someone else. You’ll feel great doing so, plus you’ll earn points and advance in rank, building your online reputation as vaibhav_upadhyay40, Fan, Federico Razzoli, Ghan, djp, Björn, rdab100, Stateros, gordan, Venkat, and others have done.

Our Forum Q&A covers all flavors of MySQL, MongoDB, and PostgreSQL, as well as key utilities like ProxySQL and PMM. Plus it’s the only site announcing all of Percona’s new software releases and Percona’s occasional urgent software alerts. You can even personalize email notifications to track only certain categories and skip the rest.  And we promise to never spam you!

A few of our most popular posts illustrate how it all works:

So what’s the fine print? Most importantly, remember that it’s volunteers who answer, so answers may not be timely, might not be in-depth, or occasionally might never arrive at all. Response time now averages five days. Remember never to share confidential details, as everything on the Forum is public.

The Forum operates on a spirit of self-help, so do a reasonable amount of your own research before popping off a question. And if you get help, try to give back help too. Everything depends on a spirit of reciprocity and goodwill. The Forum is for those who love databases and want to share their enthusiasm with others, and for those who want to master the database craft.

Finally, as our lawyers make us say, answers are “as-is” meaning Percona does not guarantee accuracy and disclaims all liability. Our Help Articles, Code of Conduct, and Terms of Service explain it all.

So register as a user and give the Percona Forum a try! As always we welcome your suggestions and feedback to Community-Team@Percona.com.

Oct
05
2020
--

Grab Your Percona Swag – For Free!


Would you like to get the latest in Percona gear 100% free, shipped to you anywhere in the world? Maybe that sounds too good to be true, but it’s true!  It’s easy and takes as little as 20 minutes to earn your swag. Here are some examples of the swag items you can claim:

[Image: examples of Percona swag items]

So what’s the catch? Percona software products are now listed on four online software directories, but our listings are too new to have accumulated many user reviews. We need reviews!

So our offer is simple. You write one review, you pick one Percona swag item. You write two reviews, you pick two. Seven reviews, pick seven pieces of swag, our limit. But you must post your reviews by November 15, 2020!

Any meaningful review earns swag, be it positive, negative, or mixed. Write whatever you believe; only write something! There’s no swag for a review that gives a rating but says nothing at all or nothing meaningful, so make those reviews count!

Here’s where to post reviews:

  • Percona Monitoring and Management: Capterra, G2, TrustRadius, SourceForge
  • Percona Server for MySQL: Capterra, G2, TrustRadius, SourceForge
  • Percona XtraDB Cluster: Capterra, G2, TrustRadius, SourceForge
  • Percona XtraBackup: Capterra, G2, TrustRadius, SourceForge
  • Percona Distribution for PostgreSQL: G2, TrustRadius, SourceForge
  • Percona Backup for MongoDB: G2, TrustRadius, SourceForge
  • Percona Server for MongoDB: TrustRadius, SourceForge
  • Percona Kubernetes Operator for Percona XtraDB Cluster: G2, TrustRadius, SourceForge

You can review several different products and post them on one site, or you can write one product review and post it on multiple sites. Or post any combination of reviews, up to a max of seven. The more reviews you post, the more swag delivered to your home address, free, courtesy of Percona.

To claim your swag, write to <community-team@percona.com>.  Include:

  • Links to each review you posted.
  • Your postal mailing address.
  • Your phone number (for delivery use only, never for marketing)

For t-shirt orders, also state:

  • Color (White, Black, or Blue)
  • Size (Small, Medium, Large, or Extra Large)

It’s that simple! Start writing now!

Oct
01
2020
--

PostgreSQL 13 New Feature: dropdb –force


There have been many big features added to PostgreSQL 13, like parallel vacuum, de-duplication of indexes, etc., and a complete list can be found in the PostgreSQL 13 release notes. Along with the big features, there are also small ones added, including dropdb --force.

dropdb --force

A new command-line option, -f/--force, has been added to the dropdb command, and a corresponding FORCE option has been added to DROP DATABASE. When you drop a database with dropdb --force or DROP DATABASE ... WITH (FORCE), all existing connections to the database are terminated first.

In the first terminal, create a database named test and connect to it.

vagrant@vagrant:~$ createdb test;
vagrant@vagrant:~$ psql test
psql (13.0)
Type "help" for help.

In the second terminal, try to drop the test database and you will get the error message that the test database is being used by another user.

vagrant@vagrant:/usr/local/pgsql.13/bin$ psql postgres
psql (13.0)
Type "help" for help.
postgres=# drop database test;
ERROR:  database "test" is being accessed by other users
DETAIL:  There is 1 other session using the database.

Now try the same command with the FORCE option. You will see that the database is dropped successfully.

postgres=# drop database test WITH ( FORCE );
DROP DATABASE

Note: you can also use the command line dropdb test -f.

The session on the first terminal will be terminated.

test=# \d
FATAL:  terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?> 

Looking for more info on other PostgreSQL 13 changes? Check out Ibrar’s previous post, Postgresql_fdw Authentication Changes in PostgreSQL 13!

Sep
30
2020
--

Postgresql_fdw Authentication Changes in PostgreSQL 13


PostgreSQL 13 is released with some cool features, such as index enhancements, partition enhancements, and many others. Along with these enhancements, there are some security-related enhancements that require some explanation. There are two major ones: one is related to libpq and the other is related to postgres_fdw. As postgres_fdw is considered a “reference implementation” for other foreign data wrappers, all other foreign data wrappers follow in its footsteps in development. This is a community-supported foreign-data wrapper. This blog will explain the security changes in postgres_fdw.

1 – The superuser can permit non-superusers to establish a password-less connection over postgres_fdw

Previously, only the superuser could establish a password-less connection with PostgreSQL using postgres_fdw; no other password-less authentication method was allowed. It had been observed that in some cases no password is required, so it does not make sense to have that limitation. Therefore, PostgreSQL 13 introduces a new option (password_required), with which superusers can give non-superusers permission to use a password-less connection over postgres_fdw.

postgres=# CREATE EXTENSION postgres_fdw;
CREATE EXTENSION

postgres=# CREATE SERVER postgres_svr FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'postgres');
CREATE SERVER

postgres=# CREATE FOREIGN TABLE foo_for(a INT) SERVER postgres_svr OPTIONS(table_name 'foo');
CREATE FOREIGN TABLE

postgres=# create user MAPPING FOR vagrant SERVER postgres_svr;
CREATE USER MAPPING
postgres=# SELECT * FROM foo_for;
 a 
---
 1
 2
 3
(3 rows)

When we perform the same query as a non-superuser, we get this error message:

ERROR:  password is required
DETAIL:  Non-superusers must provide a password in the user mapping

postgres=# CREATE USER nonsup;
CREATE ROLE

postgres=# create user MAPPING FOR nonsup SERVER postgres_svr;
CREATE USER MAPPING

postgres=# grant ALL ON foo_for TO nonsup;
GRANT

vagrant@vagrant:/work/data$ psql postgres -U nonsup;
psql (13.0)
Type "help" for help.

postgres=> SELECT * FROM foo_for;
2020-09-28 13:00:02.798 UTC [16702] ERROR:  password is required
2020-09-28 13:00:02.798 UTC [16702] DETAIL:  Non-superusers must provide a password in the user mapping.
2020-09-28 13:00:02.798 UTC [16702] STATEMENT:  SELECT * FROM foo_for;
ERROR:  password is required
DETAIL:  Non-superusers must provide a password in the user mapping.

Now perform the same query as the non-superuser after setting the new parameter password_required to 'false' when creating the user mapping.

vagrant@vagrant:/work/data$ psql postgres
psql (13.0)
Type "help" for help.

postgres=# DROP USER MAPPING FOR nonsup SERVER postgres_svr;
DROP USER MAPPING

postgres=# CREATE USER MAPPING FOR nonsup SERVER postgres_svr OPTIONS(password_required 'false');
CREATE USER MAPPING

vagrant@vagrant:/work/data$ psql postgres -U nonsup;
psql (13.0)
Type "help" for help.

postgres=> SELECT * FROM foo_for;
 a 
---
 1
 2
 3
(3 rows)

2 – Authentication via an SSL certificate

A new option is provided to use an SSL certificate for authentication in postgres_fdw. Two new options, sslkey and sslcert, have been added to support this feature.

Before performing this task we need to configure SSL for the server and the client. There are many blogs available (How to Enable SSL authentication for an EDB Postgres Advanced Server and SSL Certificates For PostgreSQL) on setting up SSL for PostgreSQL; this post configures SSL with the minimum requirements.

Step 1: Generate Key in $PGDATA

vagrant@vagrant$  openssl genrsa -des3 -out server.key 1024
Generating RSA private key, 1024 bit long modulus (2 primes)
.+++++
..................+++++
e is 65537 (0x010001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:


vagrant@vagrant$ openssl rsa -in server.key -out server.key
Enter pass phrase for server.key:
writing RSA key

Step 2:  Change the mode of the server.key

vagrant@vagrant$  chmod og-rwx server.key

Step 3: Generate the certificate

vagrant@vagrant$ openssl req -new -key server.key -days 3650 -out server.crt -x509
-----
Country Name (2 letter code) [AU]:PK
State or Province Name (full name) [Some-State]:ISB
Locality Name (eg, city) []:Islamabad
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Percona
Organizational Unit Name (eg, section) []:Dev
Common Name (e.g. server FQDN or YOUR name) []:localhost
Email Address []:ibrar.ahmad@gmail.com


vagrant@vagrant$ cp server.crt root.crt

Now we need to generate the client certificate.

Step 4: Generate a Client key

vagrant@vagrant$ openssl genrsa -des3 -out /tmp/postgresql.key 1024
Generating RSA private key, 1024 bit long modulus (2 primes)
..........................+++++
.....................................................+++++
e is 65537 (0x010001)
Enter pass phrase for /tmp/postgresql.key:
Verifying - Enter pass phrase for /tmp/postgresql.key:



vagrant@vagrant$ openssl rsa -in /tmp/postgresql.key -out /tmp/postgresql.key
Enter pass phrase for /tmp/postgresql.key:
writing RSA key


vagrant@vagrant$ openssl req -new -key /tmp/postgresql.key -out 
-----
Country Name (2 letter code) [AU]:PK
State or Province Name (full name) [Some-State]:ISB
Locality Name (eg, city) []:Islamabad
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Percona
Organizational Unit Name (eg, section) []:Dev
Common Name (e.g. server FQDN or YOUR name) []:127.0.0.1
Email Address []:ibrar.ahmad@gmail.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:pakistan
An optional company name []:Percona 

Step 5:  Copy root.crt to the client

vagrant@vagrant$ cp data5555/root.crt /tmp/

Step 6: Test the connection using a certificate

vagrant@vagrant$ psql 'host=localhost port=5555 dbname=postgres user=ibrar sslmode=verify-full sslcert=/tmp/postgresql.crt sslkey=/tmp/postgresql.key sslrootcert=/tmp/root.crt'
psql (13.0)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=> \q

Now we are ready, and we can create a foreign server in PostgreSQL with certificates.

postgres=# CREATE server postgres_ssl_svr foreign data wrapper postgres_fdw options (dbname 'postgres', host 'localhost', port '5555', sslcert '/tmp/postgresql.crt', sslkey '/tmp/postgresql.key', sslrootcert '/tmp/root.crt');
CREATE SERVER

postgres=# create user MAPPING FOR vagrant SERVER postgres_ssl_svr;
CREATE USER MAPPING

postgres=# create foreign table foo_ssl_for(a int) server postgres_ssl_svr options(table_name 'foo');
CREATE FOREIGN TABLE

Now we are ready and set to query a foreign table by postgres_fdw using certificate authentication.

postgres=# select * from foo_ssl_for;

 a 
---
 1
 2
 3
(3 rows)

Note: Only superusers can modify the user mapping options sslcert and sslkey.



Sep
29
2020
--

PostgreSQL Configuration Changes You Need to Make Post-Installation

So you’ve installed postgres onto your machine, and you want to start working with it.

How?

The key to understanding the post-installation procedure is to realize that it “depends”.

  • It “depends” on the OS i.e. MSWindows vs Linux.
  • It “depends” on the flavor of Linux i.e. Debian vs RedHat.
  • It “depends” if it’s a package install or from source code.

Let’s start by working with the most basic steps common to all installs and we’ll break it down further from there.

A successfully installed postgres, no matter the version, is characterized by the following:

  1. a newly created datacluster is present
  2. a configuration file pg_hba.conf is to be edited
  3. a configuration file postgresql.conf is to be edited

There are other configuration files but we’ll work with these.

For the purposes of discussion let’s further assume you’ve started up the cluster and postgres is running on the host. Here’s an example of what you can see when you run a utility, such as netstat,  that reports the network connections:

$netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address  State     PID/Program name
tcp     0      0    127.0.0.1:5432  0.0.0.0:*        LISTEN     27929/postgres

The first thing to look at is the “Local Address”. Notice how it says 127.0.0.1:5432. Okay, so that means that the server is currently listening on the localhost on port 5432. But you want 0.0.0.0:5432, otherwise remote connections cannot be accepted. With an editor, open up the file pg_hba.conf and look at the “default” rules. Keep in mind that the configuration file can be located in one of several locations; we’ll cover that later.

ATTENTION: Setting the Address (CIDR) to 0.0.0.0 is for connectivity purposes only. As soon as you know everything works you should restrict this to as few permitted connections as possible. This is not something you should do on a production machine.

The actual “rules” per line can vary from one type of postgres installation to another. The good news is that RedHat/CentOS installs look alike, and Debian/Ubuntu installs have their own similar style too. The relevant settings are at the bottom of the file; all else above is commented documentation.

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 peer
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5

Look at the first line, where TYPE is “local”. So long as you can log in locally via UNIX DOMAIN SOCKETS and sudo as the superuser (postgres is the default), you can access your service without a password.

METHOD should be peer, but if it is something else, like md5, you’ll need to change the string. Alternatively, if you feel particularly trustful of the other user accounts on the host, you can use the METHOD trust, permitting free access to all locally logged-in UNIX accounts.

# ATTENTION:
# the service must be reloaded for any edits to pg_hba.conf to take effect
#
$sudo su - postgres
$psql -c "select 'hello world' as greetings"
greetings
-------------
hello world

Looking at the second line, one sees that TYPE is IPv4. This rule, as well as the rule for TYPE IPv6, prevents localhost logins unless one knows the password:

$psql -h localhost -c "select 'hello world' as greetings"
Password for user postgres:

So let’s fix this by assigning a password to ROLE postgres, logging in via UNIX DOMAIN SOCKETS since we already permit logins by METHOD peer:

--
-- example invocation, change the password to something real
--
ALTER ROLE postgres WITH PASSWORD 'mypassword';

TIP: edits to pg_hba.conf require the service to reload the file, i.e. SIGHUP
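Either of the standard reload mechanisms will do, for example from a psql superuser session:

```sql
-- signal the postmaster to re-read pg_hba.conf and postgresql.conf
SELECT pg_reload_conf();
```

From the shell, pg_ctl reload (or systemctl reload on systemd-managed installs) achieves the same thing.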

Now that we have connectivity for localhost connections (we’re using an IPv4 socket for this example), we can proceed to address remote connections.

You’re going to need to add another rule which should be placed after the localhost rule:

host all all 0.0.0.0/0 md5

And here’s a line you can write for IPv6:

host all all ::0/0 md5

TIP: The demonstrated example rules let everybody connect to the host. A knowledge of CIDR is key to enforcing network security.

Keeping in mind that your system will be unique, here’s what the pg_hba.conf should start to look like:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             0.0.0.0/0               md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
host    all             all             ::0/0                   md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5

You’re almost there!

Now that you’ve added a password to the superuser and updated the configuration file pg_hba.conf, it’s time to visit another configuration file, postgresql.conf.

Locate the file and edit the runtime parameter listen_addresses. The default setting prohibits remote connections; resetting the value either to a NIC's IP address or to the wildcard '*' makes the service reachable remotely.

TIP: As postgres, execute the following in a psql session in order to locate your configuration files.

select distinct sourcefile from pg_settings;
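Two SHOW commands will also return the paths of the two files of interest directly:

```sql
SHOW config_file;   -- full path of postgresql.conf
SHOW hba_file;      -- full path of pg_hba.conf
```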

For those people feeling fancy, one can bind the postgres service to more than one IP address as a comma-separated list; the wildcard '*' binds to all of them:

listen_addresses = '*'
#listen_addresses = 'localhost' # what IP address(es) to listen on;
                                # comma-separated list of addresses;
                                # defaults to 'localhost'; use '*' for all
                                # (change requires restart)
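And if you prefer an explicit list over the wildcard, the value might look like this (the second address is a made-up example for a local NIC):

```
listen_addresses = 'localhost,192.168.1.50'     # loopback plus one NIC (change requires restart)
```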

Alternatively, updating the runtime parameter can be accomplished with an SQL statement (ALTER SYSTEM writes the new value to postgresql.auto.conf):

postgres=# ALTER SYSTEM SET listen_addresses = '*';
ALTER SYSTEM

The final step, restarting the service, is where we start splitting hairs again:

  • Redhat distributions require data clusters to be manually created before they can be administered.
  • Debian-based PostgreSQL distributions, including Ubuntu, automatically create and start up the data cluster.

Systemd:

  • Redhat/Centos:
    /usr/pgsql-12/bin/postgresql-12-setup initdb
    systemctl start|stop postgresql-12
  • Debian/Ubuntu:
    systemctl restart postgresql

Debian-derived Linux distributions include a collection of command-line utilities to administer the PostgreSQL service:

# example CLI
#
pg_ctlcluster
Usage: /usr/bin/pg_ctlcluster <version> <cluster> <action> [-- <pg_ctl options>]
# restarting postgres version 12 on a Debian derived distribution
pg_ctlcluster 12 main restart

After a successful service restart, a socket listing (e.g., via netstat -ltnp) should show something similar to the following:

Active Internet connections (only servers)
Proto Recv-Q Send-Q  Local Address    Foreign Address  State   PID/Program name
tcp     0      0      0.0.0.0:5432    0.0.0.0:*        LISTEN  27929/postgres

And finally, the remote connectivity test:

#
# THE REMOTE LOGIN
#
psql 'host=myhost user=postgres password=mypassword' -c "select 'hello world' as greetings"

Then there’s replication, but that’s another blog altogether.

That’s it for now!


Our white paper “Why Choose PostgreSQL?” looks at the features and benefits of PostgreSQL and presents some practical usage examples. We also examine how PostgreSQL can be useful for companies looking to migrate from Oracle.

Download PDF

Sep
22
2020
--

Using Security Definer to Monitor PostgreSQL 9.6 or Earlier Using Percona Monitoring and Management

I have previously written a blog post on the detailed steps involved in enabling PostgreSQL monitoring using PMM. In that post, you could see me talking about the pg_monitor role that can be granted to monitoring users. The pg_monitor role restricts a monitoring user from accessing user data, granting access only to the statistics views needed for monitoring. The following are the simple steps to create a monitoring user for use with Percona Monitoring and Management (PMM).

CREATE USER pmm_user WITH ENCRYPTED PASSWORD 'secret'; 
GRANT pg_monitor to pmm_user;

However, this feature is only available from PostgreSQL 10. So, let’s discuss how to deal with the previous releases such as PostgreSQL 9.6 or earlier.

One of the easiest methods is to grant the SUPERUSER role to the monitoring user. But granting SUPERUSER access may not work in all environments, as it carries the privileges to access, modify, and alter database objects. For that reason, we can instead use SECURITY DEFINER to safely grant access to selected statistics views. Let us first understand the difference between a security invoker and a security definer in PostgreSQL functions.

Security Invoker vs Security Definer in PostgreSQL

Security Invoker

When you execute a function in PostgreSQL, it is executed with the privileges of the user calling it. So, if the calling user does not have SELECT access on a specific table, the SQL statements on that table fail, and so does the function.

Security Definer

When you execute a function created with SECURITY DEFINER, it runs with the privileges of the user who created it. Even if the calling user does not have access to the database objects queried inside the function, the execution succeeds as long as the function's owner has the required privileges on those objects.
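To make the contrast concrete, here is a minimal sketch; the table secret_table and the function names are hypothetical:

```sql
-- both functions run the same query; only the security clause differs
CREATE FUNCTION count_secrets_invoker() RETURNS bigint AS
$$ SELECT count(*) FROM secret_table; $$
LANGUAGE sql SECURITY INVOKER;   -- runs with the caller's privileges (the default)

CREATE FUNCTION count_secrets_definer() RETURNS bigint AS
$$ SELECT count(*) FROM secret_table; $$
LANGUAGE sql SECURITY DEFINER;   -- runs with the owner's privileges
```

A caller without SELECT on secret_table will see the first function fail and the second succeed, provided the functions' owner can read the table.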

Statistics views accessed by PMM that need to be exposed via a security definer:

To enable PostgreSQL monitoring using PMM, you need to grant access to the views that PMM queries today:

  1. pg_stat_activity
  2. pg_stat_statements

We shall now see the steps involved in creating a monitoring user (pmm_user) who is given indirect access to the above-mentioned views.

Step 1: Create the monitoring user and the schema in which the functions and views can be created.

CREATE USER pmm_user WITH ENCRYPTED PASSWORD 'secret';
CREATE SCHEMA pmm AUTHORIZATION pmm_user;
ALTER USER pmm_user SET SEARCH_PATH TO pmm, pg_catalog;


Step 2:
As a SUPERUSER, create the functions that access the views used by PMM. Make sure to specify SECURITY DEFINER so that the calling user can rely on the superuser's privileges to read the data from the views pg_stat_activity and pg_stat_statements.

CREATE OR REPLACE FUNCTION pmm.get_pg_stat_activity() RETURNS SETOF pg_stat_activity AS
$$ SELECT * FROM pg_catalog.pg_stat_activity; $$
LANGUAGE sql
VOLATILE
SECURITY DEFINER;

GRANT EXECUTE ON FUNCTION pmm.get_pg_stat_activity() TO pmm_user;

-- The Following function needs to be created when you need QAN (Query Analytics) with PMM. 

CREATE OR REPLACE FUNCTION pmm.get_pg_stat_statements() RETURNS SETOF pg_stat_statements AS
$$ SELECT * FROM public.pg_stat_statements; $$
LANGUAGE sql
VOLATILE
SECURITY DEFINER;

GRANT EXECUTE ON FUNCTION pmm.get_pg_stat_statements() TO pmm_user;

The last function, pmm.get_pg_stat_statements(), assumes that the extension pg_stat_statements has been created in the public schema. If you specified a different schema while creating this extension, use that schema prefix instead of public.


Step 3:
Create views with the same names as the original views in the pmm schema. Then, grant SELECT on the newly created views to the monitoring user.

CREATE VIEW pmm.pg_stat_activity AS
SELECT * FROM pmm.get_pg_stat_activity();

CREATE VIEW pmm.pg_stat_statements AS
SELECT * FROM pmm.get_pg_stat_statements();

GRANT SELECT ON pmm.pg_stat_activity TO pmm_user;
GRANT SELECT ON pmm.pg_stat_statements TO pmm_user;

So, when you connect to postgres as the monitoring user pmm_user, you automatically query the view pg_stat_activity in the pmm schema, which calls the function pmm.get_pg_stat_activity() with SUPERUSER privileges, rather than the pg_catalog.pg_stat_activity view. This is because we set the search_path of pmm_user to pmm, pg_catalog in Step 1. If the view being accessed is not present in the pmm schema, PostgreSQL looks for it in the pg_catalog schema.
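The resolution order can be pictured as a simple first-match lookup over the schemas in search_path. Here is a toy model in Python; the schema contents are illustrative, not queried from a real server:

```python
# toy model of how PostgreSQL resolves an unqualified name via search_path
def resolve(name, search_path, schemas):
    for schema in search_path:          # schemas are tried left to right
        if name in schemas.get(schema, set()):
            return f"{schema}.{name}"   # first match wins
    return None

schemas = {
    "pmm":        {"pg_stat_activity", "pg_stat_statements"},
    "pg_catalog": {"pg_stat_activity", "pg_class"},
}
search_path = ["pmm", "pg_catalog"]     # as set for pmm_user in Step 1

print(resolve("pg_stat_activity", search_path, schemas))  # pmm.pg_stat_activity
print(resolve("pg_class", search_path, schemas))          # pg_catalog.pg_class
```

Because pmm is listed first, its wrapper views shadow the catalog views of the same name, while anything not defined in pmm falls through to pg_catalog.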

Your monitoring user for Percona Monitoring and Management is successfully created once the above three steps are completed.

