Mar
23
2023
--

Why PostgreSQL Is a Top Choice for Enterprise-level Databases

postgresql enterprise

When it comes to enterprise-level databases, there are several options available in the market, but PostgreSQL stands out as one of the most popular and reliable choices. PostgreSQL is a free and open source object-relational database management system (ORDBMS) that has existed since the mid-1990s.

Over the years, it has evolved into a robust and feature-rich database that offers several advantages over other database management systems. In this blog post, we will explore some of the reasons why PostgreSQL is a top choice for enterprise-level databases.

Open source and free

  • PostgreSQL is an open source database, which means it is free to use, distribute, and modify.
  • This makes it an attractive option for businesses of all sizes, especially for startups and small businesses that may not have the budget to invest in expensive database management systems.
  • PostgreSQL is backed by a large community of developers contributing to its development, support, and documentation.

Advanced features

PostgreSQL offers a wide range of advanced features that make it a top choice for enterprise-level databases. Some of the notable features include:

  • Support for JSON and XML data types
  • Full-text search capabilities
  • Built-in replication and high availability
  • Extensible architecture with support for custom data types, functions, and operators
  • Support for foreign keys and referential integrity constraints
  • Transactions with ACID (Atomicity, Consistency, Isolation, Durability) compliance
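
To make a couple of these features concrete, here is a minimal, hedged SQL sketch that assumes a hypothetical products table; it shows a JSONB lookup and a simple full-text search:

-- A minimal sketch; the "products" table and its data are hypothetical
CREATE TABLE products (
    id          serial PRIMARY KEY,
    attrs       jsonb,
    description text
);

INSERT INTO products (attrs, description)
VALUES ('{"color": "red", "size": "M"}', 'A red cotton shirt');

-- Query the JSONB column by key
SELECT id FROM products WHERE attrs->>'color' = 'red';

-- Full-text search over the description
SELECT id FROM products
WHERE to_tsvector('english', description) @@ to_tsquery('english', 'shirt');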

Scalability

  • PostgreSQL is designed to scale with growing businesses. It can handle large datasets and complex queries with ease, and it can be easily scaled horizontally or vertically.
  • PostgreSQL supports sharding through table partitioning and foreign data wrappers (and extensions such as Citus), which allows data to be distributed across multiple servers, making it ideal for high-traffic websites and applications.

Reliability

  • PostgreSQL is known for its reliability and stability. It has a robust transactional system that ensures data integrity and consistency, even in high-transaction environments.
  • It has a proven track record of handling large volumes of data and high-traffic websites. PostgreSQL uses a multi-version concurrency control (MVCC) system, ensuring multiple users can access the same data simultaneously without conflicts or data loss.
  • It also supports multiple concurrent users and can handle complex database operations without downtime or data loss. This makes it an excellent choice for mission-critical applications that require high availability.

Flexibility

PostgreSQL offers a high degree of flexibility, allowing developers to customize it to their specific needs. It supports a wide range of data types, including JSON, and can handle complex data structures. It also offers a range of extensions and plugins that can be used to enhance its functionality, making it a versatile choice for production-ready databases.

Security

PostgreSQL offers robust security features to protect data from unauthorized access. It supports SSL encryption for secure communication and offers various authentication methods, including LDAP, Kerberos, and GSSAPI. It also supports role-based access control, which allows developers to control access to data at the user level. This makes it an ideal choice for businesses that handle sensitive data.
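
As a brief, hedged illustration of role-based access control, the following sketch creates a read-only reporting role; the role and schema names are made up for the example:

-- Hypothetical role and schema names, shown only to illustrate role-based access control
CREATE ROLE reporting_ro LOGIN PASSWORD 'change_me';
GRANT USAGE ON SCHEMA sales TO reporting_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA sales TO reporting_ro;
REVOKE CREATE ON SCHEMA sales FROM reporting_ro;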

Integration with other tools and technologies

PostgreSQL integrates seamlessly with other tools and technologies commonly used in enterprise-level applications. It has connectors for programming languages such as Java, Python, and PHP, as well as integrations with popular data visualization tools such as Tableau and Power BI.

Conclusion

PostgreSQL is a top choice for production-ready databases due to its scalability, reliability, flexibility, security, and community support. Its features make it an ideal choice for businesses that require a robust and versatile database system. If you are looking for a database system for your enterprise-level application, PostgreSQL is definitely worth considering.

Percona provides managed PostgreSQL services that include database monitoring, performance tuning, and backup and recovery services. Percona also provides PostgreSQL Support Services for businesses and organizations that use PostgreSQL as their primary database technology.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

 

Join the Percona PostgreSQL Community

Mar
21
2023
--

Multi-tenants and Branches in Neon Serverless PostgreSQL

serverless postgresql

Announcement

I will be speaking at Percona Live 2023 about serverless PostgreSQL. Join us at this event if you are interested!

Introduction

Recently, Percona introduced Percona Builds for Neon (Introducing Percona Builds for Serverless PostgreSQL), which makes it easy to install and experiment with serverless PostgreSQL. I followed it up with a post on how you can easily experiment with Neon using Docker (Using Docker To Deploy Neon Serverless PostgreSQL).

Before getting into more details about Neon functionality, we need to introduce two important concepts Neon operates with: Tenants and Data Branches.

Tenants

Multi-tenancy is a software architecture pattern where a single instance of a software application serves multiple tenants, allowing them to share resources like storage, processing power, and memory while maintaining separate, secure access to their respective data.

In the context of databases, a tenant’s data can be logically separated from other tenants’ data in the same database so that each tenant can only access and manage their own data. This is particularly useful in cloud-based applications, where multiple customers or organizations share the same underlying infrastructure to reduce costs and improve scalability.

The key aspect of multi-tenancy is the isolation of tenant data, ensuring that each tenant’s data remains secure, private, and separate from other tenants’ data.

In the case of Neon, the single pageserver already comes with multi-tenant functionality, and at the very least, you must have one tenant to create PostgreSQL compute node instances.

pageserver multi-tenant

The requirement is that each tenant will operate with its own compute node.

tenant compute-mode

Compute Nodes can be created/destroyed on demand; they are not required if there is no current need to access data for a given tenant.

Data branches

Neon enables you to seamlessly create branches of your Postgres database, enhancing your development workflow. With each code deployment, you can establish a separate branch for your testing environments. This way, developers and testers can obtain a data copy for testing or experimentation without impacting the main data.

Utilizing the “copy on write” method, branches are virtually cost-free. Neon refers to data branches as “timelines,” and these terms are often used interchangeably.

Neon serverless postgres

You can think (at least I do) of timelines as Git branches. Just keep in mind that data branches can’t be merged back.

In the example above:

  • Timeline 1 is branched from the Main timeline after two commits.
  • Timeline 2 is branched from the Main timeline after four commits.
  • Timeline 3 is branched from Timeline 1 after two more commits to Timeline 1.

And to expand on compute node requirements: each timeline requires a separate compute node to operate (and you can shut down the compute node if you do not need to work with the timeline right now).

There is a restriction to be aware of: each tenant/timeline can have only one compute node.

Schematically it looks like this:

postgres compute node

I will show the practical work with tenants/timelines in the following blog post, so subscribe for all updates.

Interested in serverless PostgreSQL?

You are welcome to experiment with Neon using our Docker images.

For feedback and questions, please use our Forums: (Latest Percona Labs/Percona Build for Serverless PostgreSQL topics – Percona Community Forum)

Follow our blog for more information about setups, performance, and unique features of serverless PostgreSQL.

If you want to talk with us about builds for serverless PostgreSQL or stay in touch for future updates – leave your contact details below.

Contact form

Also, if you are interested in possible Support for serverless PostgreSQL, fill out the Contact form, and I will get in touch with you.

Mar
13
2023
--

Using Docker To Deploy Neon Serverless PostgreSQL

serverless postgresql

Announcement

I will be speaking at Percona Live 2023 about serverless PostgreSQL. Join us at this event if you are interested!

Introduction

Recently, Percona introduced Percona Builds for Neon (Introducing Percona Builds for Serverless PostgreSQL), which makes it easy to install and experiment with serverless PostgreSQL. But now, there’s an even more convenient way to explore the capabilities of serverless PostgreSQL — Docker images. In this article, we’ll explore the benefits of using Docker images for experimenting with serverless PostgreSQL and show you how to get started with them.

Architecture

To understand more about deployment procedures, we need to look a little more at Neon architecture.

There is a section in our documentation (Introduction to Serverless PostgreSQL); here is a short overview of the primary components:

  • Page Server
    • The storage server with the primary goal of storing all data pages and WAL records
  • Safe Keeper
    • A component that stores WAL records in memory (to reduce latency). Because of the critical importance of these components, the recommendation is to deploy several of them; they form a Paxos group to guarantee data consistency.
    • The WAL records are later shipped to the pageserver, which uses them to update data pages.
  • Compute Nodes
    • The components that accept and handle user queries. Basically, you can view a compute node as a PostgreSQL instance without a storage layer.
  • Storage Broker

Postgres pageserver

Storage Broker is a coordination component between WAL Service and Pageserver.

Storage Broker

The Pageserver listens for GetPage@LSN requests from the Compute Nodes and responds with pages from the repository.

postgres GetPage@LSN

Deployment with Docker

We published a Docker image with all components in Docker Hub:

perconalab/neon Tags | Docker Hub

And the source code is located in our build repository:

Percona-Lab/serverless-postgresql-build (github.com)

Deployment steps

For my experiments, I follow these steps. I prefer to test a distributed deployment where each component is placed on different servers or virtual machines, which is why I do not put it into docker-compose.

The assumption is that I use a reachable server with IP address 172.16.0.9. Unfortunately, I have to use physical IP addresses for connectivity in a multi-server Docker deployment.

1. Deploy storage broker

docker run -d -t --name storagebroker --net=host \
  --entrypoint "storage_broker" \
  perconalab/neon:latest -l 0.0.0.0:50051

2. Deploy safekeeper (or several of them for redundancy)

docker run -d -t --name safekeeper1 --net=host \
  --entrypoint "safekeeper" \
  perconalab/neon:latest \
  --id=1 -D /data --broker-endpoint=http://172.16.0.9:50051 \
  -l 172.16.0.9:5454 --listen-http=0.0.0.0:7676

3. Deploy pageserver

docker run -d -t --name pageserver --net=host \
  --entrypoint "pageserver" \
  perconalab/neon:latest \
  -D /data -c "id=1" -c "broker_endpoint='http://172.16.0.9:50051'" \
  -c "listen_pg_addr='0.0.0.0:6400'" -c "listen_http_addr='0.0.0.0:9898'" \
  -c "pg_distrib_dir='/opt/neondatabase-neon/pg_install'"

4. Deploy compute nodes

The most interesting part is deploying compute nodes, the components that handle client requests. Before we dive in, I need to introduce some new concepts that I’ll describe in more detail in future blog posts, so bear with me for now.

Concept 1: Tenants – Neon has the capability to serve multiple tenants, all located on the same pageserver but visible as separate PostgreSQL instances to clients.

Concept 2: Timelines – Each tenant can have multiple timelines and the ability to branch the current state into a new timeline. At a minimum, a tenant must have one timeline.

Now, let’s get back to deploying compute nodes.

Deploying a compute node, creating a new tenant and timeline:

docker run -d -t --name compute \
  --entrypoint "/compute.sh" \
  -p55432:55432 -e PAGESERVER=172.16.0.9 \
  -e SAFEKEEPERS=172.16.0.9:5454 perconalab/neon:latest

This will create a lightweight PostgreSQL instance (compute node), accessible on port 55432. You can connect to the instance with a normal psql client:

psql -p55432 -h 127.0.0.1 -U cloud_admin postgres

Deploying a compute node with an existing tenant and timeline:

Assume we already have a tenant and timeline, and we want to attach a lightweight PostgreSQL instance (compute node):

To get the timeline and tenant IDs from the previous start, check the container logs:

docker logs compute

They are identified as:

{
  "name": "neon.timeline_id",
  "value": "4b4541ad75370114cd7956e457cc875f",
  "vartype": "string"
},
{
  "name": "neon.tenant_id",
  "value": "6c92c037a54c0e3a005cdd4a69d6e997",
  "vartype": "string"
},

docker run -d -t --name compute1 \
  --entrypoint "/compute.sh" -p55433:55432 \
  -e PAGESERVER=172.16.0.9 -e SAFEKEEPERS=172.16.0.9:5454 \
  -e TENANT=51021f53054316c6533d371c9d7e273c -e TIMELINE=e08a6f1526b3ad6249a7b08fc5585e0b \
  perconalab/neon:latest

Deploying a compute node by branching from an existing tenant and timeline:

This is the most exciting capability: we can fork (branch) existing data into a new timeline (more in the following blog posts):

docker run -d -t --name compute3 \
  --entrypoint "/compute.sh" -p55435:55432 \
  -e PAGESERVER=172.16.0.9 \
  -e SAFEKEEPERS=172.16.0.9:5454 \
  -e TENANT=6c92c037a54c0e3a005cdd4a69d6e997 -e TIMELINE=4b4541ad75370114cd7956e457cc875f \
  -e "CREATE_BRANCH=1" perconalab/neon:latest

Interested?

You are welcome to experiment with Neon using our Docker images.

For feedback and questions, please use our Forums: (Latest Percona Labs/Percona Build for Serverless PostgreSQL topics – Percona Community Forum)

Follow our blog for more information about setups, performance, and unique features of Serverless PostgreSQL.

If you would like to talk with us about builds for Serverless PostgreSQL or stay in touch for future updates – leave your contact details below.

Contact form

Also, if you are interested in possible Support for Serverless PostgreSQL, fill out the Contact form, and I will get in touch with you.

Mar
07
2023
--

Introducing Percona Builds for Serverless PostgreSQL

percona serverless postgres

Recently, “serverless” has become a buzzword, and for good reason. The goal is to simplify the provisioning and management of database capacity. One approach is to separate compute and storage to allow for independent scaling.

We are thrilled to announce our collaboration with Neon (Neon – Serverless, Fault-Tolerant, Branchable Postgres) to provide a Serverless PostgreSQL that you can control and manage yourself.

So, what is Neon? It’s an open source alternative to AWS Aurora Postgres that utilizes a serverless architecture. By separating storage and compute, Neon replaces the PostgreSQL storage layer with data nodes, and compute nodes are distributed across a cluster of nodes.

Architecture

A Neon installation consists of compute nodes and the Neon storage engine. Compute nodes are stateless PostgreSQL nodes backed by the Neon storage engine.

Neon Percona Serverless Postgres

One of the benefits of separating storage from compute nodes is the ability to scale them independently. For instance, if we run out of storage space, we can easily add additional storage nodes without affecting the PostgreSQL instances.

On the other hand, if we reach the limit of our compute nodes, we can simply add more compute nodes separately. This is because the compute nodes are stateless in this architecture, meaning they can be scaled without affecting the system’s performance.

By separating the storage from compute nodes, we’re able to optimize our resources and easily scale up as needed without disrupting the overall system. This approach allows for more efficient use of resources and improved performance.

Branches

Neon uses copy-on-write snapshots on Storage Nodes, which allows a very cheap way to branch data.

Have you ever wanted the ability to fork your data set and work on it independently, similar to using Git branches? This can now be achieved with ease using Serverless PostgreSQL. Simply fork your data, conduct experiments, and have peace of mind knowing that your primary data set remains unaffected.

main data tree

Please subscribe to our newsletter for more blog articles and documentation on how to use Branches!

Builds

Percona provides binary builds for serverless PostgreSQL based on Neon. This is the only place where you can get pre-built binaries, along with help on how to use them.

At this point, the binaries are EXPERIMENTAL and ONLY FOR TESTING purposes. Percona does not provide official support for the build at this moment.

Binary releases location

The binaries for releases are hosted on the GitHub release page:

Releases · Percona-Lab/neon (github.com)

Website location: https://percona.community/labs/serverless-postgresql/

Documentation: https://percona.community/labs/serverless-postgresql/docs/

Interested?

For feedback and questions, please use our Forums.

Follow our blog for more information about setups, performance, and unique features of Serverless PostgreSQL.

If you would like to talk with us about builds for Serverless PostgreSQL, are interested in support, or want to stay in touch for future updates, leave your contact details in the form below.

Contact form

Feb
24
2023
--

Reducing PostgreSQL Costs in the Cloud


If you’re using PostgreSQL in the cloud, there’s a good chance you’re spending more than you need to get the results your business requires.

Let’s take a look at how to get the benefits you need while spending less, based on the recommendations presented by Dani Guzmán Burgos, our Percona Monitoring and Management (PMM) Tech Lead, in this webinar (now available on demand), hosted in November of last year.

Usage reduction: What to look for to reduce PostgreSQL cloud costs

The first step in cost reduction is to use what you need and not more. Don’t pay for capacity you don’t use or need. 

Usage reduction is a continuous process. Identifying which resources you can trim to reduce your monthly bill can be difficult, but looking at the right metrics will help you understand the application’s actual requirements.

In the Home Dashboard of PMM, low CPU utilization on any of the monitored database services could mean that the server is inactive or over-provisioned. Marked in red in Figure 1 is a server with less than 30% CPU usage. PMM can also show you historical data that can help you identify how long a service has been in a given state. The configuration of the CPU metrics can be changed in the dashboard. These color-coded states on panels are available in PMM 2.32.0 and later.


Figure 1: PMM Home Dashboard

From the Amazon Web Services (AWS) documentation, an instance is considered over-provisioned when at least one specification of your instance, such as CPU, memory, or network, can be sized down while still meeting the performance requirements of your workload and no specification is under-provisioned. Over-provisioned instances may lead to unnecessary infrastructure costs.

Using resources efficiently, while ensuring that cloud computing stays within budget, is not a one-time fix but a continuous cycle of picking properly sized resources and eliminating over-provisioning.

Usage reduction at scale requires a cultural shift: engineers must treat cost as another deployment KPI, alongside metrics like memory or bandwidth.

Think of a gaming company whose game is gaining popularity: the resources needed to support more users increase considerably. But if the game loses popularity, the servers become over-provisioned, and the allocated resources must be re-sized to better fit the application’s needs.

Re-sizing to save costs

There are three approaches to usage reduction:

Regardless of the method you’re using to deploy your PostgreSQL instance, here are some metrics that would determine when re-sizing is needed:

  • CPU utilization
  • Memory usage
  • Network throughput
  • Storage usage

Remember that optimizing your infrastructure is about more than cost savings. You have to ensure that operations are not impacted when you make decisions based on these metrics. The primary goal is to ensure that the services themselves do not run out of the required operating capacity.

PostgreSQL in the cloud

CPU

Considering AWS as your cloud platform of choice, the configuration you make for your infrastructure will influence both the performance of your application and your monthly costs. For example, an Amazon Elastic Compute Cloud (EC2) instance with Graviton2 processors can be a better choice than non-ARM options: it’s cheaper, and you get real, faster cores, meaning the CPU cores are physical rather than hyper-threads. Combining Graviton2 instances with Reserved Instances helps save costs in the long run.

Benefits of Graviton2 Processors

  • Best price performance for a broad range of workloads
  • Extensive software support
  • Enhanced security for cloud applications
  • Available with managed AWS services
  • Best performance per watt of energy used in Amazon EC2

Storage

Continuing with the AWS example, choosing the right storage option will be key to performance. Amazon Elastic Block Store (EBS) is your go-to option for disk space.

From AWS documentation, Amazon EBS is an easy-to-use, scalable, high-performance block-storage service designed for Amazon EC2.


Figure 2: Amazon Elastic Block Storage

Running relational or NoSQL databases is one of the use cases for which EBS is recommended. You can deploy and scale your databases, including SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL, MySQL, Cassandra, and MongoDB.

With EBS, you can configure HDD-based volumes optimized for large streaming workloads or SSD-based volumes (recommended for database workloads) optimized for transactional workloads.

An SSD volume can be any of the following types:

  • io1
  • io2
  • io2 Block Express
  • gp2
  • gp3

Which one is a better choice for storage? It will depend on the requirements of your workload, including disk space, Input/Output Operations per Second (IOPS), and throughput rate (MB/s), and your configuration must be cost-optimized as well.

Avi Drabkin’s blog post is recommended reading on this matter, as he analyzes the configuration required for each volume type to satisfy the requirements of a particular use case. For more information on EBS volume types, check the Amazon EBS Volume Types page.
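
As a hedged illustration of how those requirements map to configuration, the AWS CLI lets you provision size, IOPS, and throughput independently on a gp3 volume; the values below are illustrative, not recommendations:

# Illustrative values only; size is in GiB, IOPS and throughput should match your workload
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --volume-type gp3 \
  --size 200 \
  --iops 6000 \
  --throughput 250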

Multi AZ deployments vs. read replicas

Multi-AZ deployment

In an Amazon RDS Multi-AZ deployment, Amazon RDS automatically creates a primary database (DB) instance and synchronously replicates the data to an instance in a different AZ. Amazon RDS automatically fails over to a standby instance without manual intervention when it detects a failure.


Figure 3: Amazon RDS Multi-AZ Deployment

Read replica

Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engine’s native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.


Figure 4: Amazon RDS Read Replicas

Which option is better?

Multi-AZ deployments offer advantages, especially for HA and disaster recovery. The trade-off is that multi-AZ deployments are expensive.

A better option would be to deploy reader instances and combine them with a reverse proxy, like Pgpool-II or PgBouncer. The reader instances also cost more than a standard setup, but you can use them in production to handle everyday database traffic.

Pgpool-II not only reduces connection usage, which helps lower CPU and memory consumption, but it can also do load balancing. With load balancing, you can redistribute the traffic, automatically sending read requests to your read replicas and write requests to your main database instance.
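
A minimal, hedged pgpool.conf sketch of that load-balancing setup might look like the following; the host names and weights are placeholders:

# pgpool.conf (excerpt); host names and weights are placeholders
load_balance_mode = on

backend_hostname0 = 'primary.example.com'    # writes go to the primary
backend_port0 = 5432
backend_weight0 = 1

backend_hostname1 = 'replica1.example.com'   # reads are balanced to the replica
backend_port1 = 5432
backend_weight1 = 2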

Regarding read replicas, in AWS you cannot promote an RDS PostgreSQL read replica in place, which means a read replica can’t take over as the primary instance of its cluster. Whenever you try to do this, the read replica detaches from the primary instance and becomes its own primary instance, and you end up with two different clusters.

One solution is using the pglogical extension to create replicas outside the RDS path. When combining pglogical replication with a reverse proxy, you still get the benefits of a managed database, including backups, minor upgrade maintenance, recovery support, and the Multi-AZ configuration, which translates to full control over planned failovers.

Also, converting a replica to the primary instance would be a better upgrade approach. For example, if you need to upgrade a database with a large amount of data, this process could take hours, and your instance won’t be available during this time. So, with this configuration, you can upgrade a replica and later convert that replica to the primary instance without interrupting operations.

Check this blog post for more information on how to use pglogical for upgrading your database instances.

Vacuum

As explained in this blog post, bloat in the database is created when tables or indexes are updated; an update is essentially a delete-and-insert operation. The disk space freed by the delete is available for reuse but not reclaimed, creating the bloat.

How to remove bloat? That’s what the vacuum process is intended for with the help of autovacuum and vacuumdb.

Autovacuum is a daemon that automates the execution of VACUUM and ANALYZE (to gather statistics) commands. Autovacuum checks for bloated tables in the database and reclaims the space for reuse.

vacuumdb is a utility for cleaning a PostgreSQL database. vacuumdb will also generate internal statistics used by the PostgreSQL query optimizer. vacuumdb is a wrapper around the SQL command VACUUM.

The vacuum process is best scheduled at a time when your database has low traffic, usually at night. So, you can disable autovacuum during the day and run vacuumdb at night with full power. This way, you guarantee that the resources will be available during the day, when most operations occur.
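
As a hedged sketch, a nightly cron entry could run vacuumdb across all databases with a few parallel jobs; the schedule, job count, and log path are assumptions to adapt to your environment:

# Illustrative entry in the postgres user's crontab:
# vacuum and analyze all databases at 02:00 with 4 parallel jobs
0 2 * * * /usr/bin/vacuumdb --all --jobs=4 --analyze >> /var/log/vacuumdb.log 2>&1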

Monitoring your database for dead tuples (bloat) is also recommended. For this matter, you can use the Experimental PostgreSQL Vacuum Monitoring. This experimental dashboard is not part of PMM, but you can try it and provide feedback.

What about serverless?

With serverless, you truly pay only for what you’re actively using, and unused resources typically aren’t left lying around, but the move to serverless isn’t without cost. The complexity of building any migration plan to serverless resides in execution and has very little to do with cost savings. There’s an entirely different lens through which you can evaluate serverless: total cost of ownership (TCO).

The TCO refers to the cost of engineering teams that are required to build a solution and the impact of time to market on the success and profitability of a service. Serverless allows you to delegate a lot of responsibility to the cloud provider.

Duties that DevOps engineers would typically perform (server management, scaling, provisioning, patching, etc.) become the responsibility of AWS, GCP, or Azure, leaving dev teams free to focus on shipping differentiated features faster.

With TCO, you must consider that people’s costs may cancel out any infrastructure savings when considering moving from a monolithic application to a serverless one.

Returning to the benefits versus effort, you should consider the overall cost of redesigning services for serverless against the potential for reducing costs.

Conclusion

Knowing your project’s database requirements will be essential when choosing the services and hardware configuration you will pay your cloud service provider for. The configuration you make must guarantee the proper functioning of your application and will determine the monthly costs.

The number of resources required may vary over time, and metrics like CPU usage, memory usage, and disk storage will help you determine when re-sizing your infrastructure is needed.

For example, if the number of database transactions decreases after a certain period of time, you will have more resources than you need, and it will be important to change your infrastructure configuration to meet the new requirements and pay only for what is really being used.

Following the recommendations presented in this webinar, you can design a cost-optimized infrastructure without affecting your database’s performance.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL today!

Feb
20
2023
--

PostgreSQL Migration From Digital Ocean DBaaS to Digital Ocean Droplet

PostgreSQL Migration

Recently, one of our customers approached us with a unique challenge: they needed to migrate their entire PostgreSQL cluster from DigitalOcean’s Database as a Service (DBaaS) to a DigitalOcean Droplet. The reason for their migration from DBaaS to Droplets was to lower their cost. This task proved to be quite challenging, as DigitalOcean’s documentation clearly states that “We do not currently support migrating databases from clusters inside of DigitalOcean to other clusters inside of DigitalOcean.”

In short, we have to migrate the database as per the client’s request, and we gave them two options:

1. pg_dump

2. Logical replication

The pg_dump method requires downtime as we must take the dump and restore it on the new server. Logical replication keeps the source database operational while transferring data to the target database. Once we reach the desired state, we can cut over to the target database.

To migrate using the logical replication method, all tables that need to be replicated must have a primary key or unique key.

Prerequisites for migration

To migrate an existing database to a DigitalOcean database cluster, we need to ensure logical replication is enabled on the source database, have the source database’s connection credentials and disable or update any firewalls between the databases.

Have superuser permissions: To prepare a database for migration and to migrate it, we need superuser permissions on the source database.

Make the database publicly accessible: To migrate a database, the source database’s hostname or IP address must be accessible from the public internet. Public connection information for DigitalOcean databases is in the database’s Connection Details in the control panel.

Allow remote connections: First, verify that the database allows all remote connections. This is determined by the database’s listen_addresses variable, which allows all remote connections when its value is set to *. To check its current value, run the following query in the PostgreSQL (psql) terminal:

SHOW listen_addresses;

If enabled, the command line returns:

listen_addresses
------------------
 *
(1 row)

If the output is different, allow remote connections in your database by running the following query:

ALTER SYSTEM SET listen_addresses = '*';

We must also change the local IPv4 connection rules to allow all incoming IPs. To do this, find the configuration file pg_hba.conf with the following query:

SHOW hba_file;

Open pg_hba.conf in your text editor, such as nano:

nano pg_hba.conf

Under IPv4 local connections, find and replace the IP address with 0.0.0.0/0, which allows all IPv4 addresses:

# TYPE DATABASE USER ADDRESS METHOD

# IPv4 local connections:
host all all 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::/0 md5

Enable logical replication:

Most cloud database providers have logical replication enabled by default. Logical replication may not be enabled if you migrate a database from an on-premises server. If your database is not set up for logical replication, the migration process will not work because the migration can only move your schemas, not your data.

To verify that logical replication has been enabled, run the following query in the PostgreSQL (psql) terminal:

show wal_level;

If enabled, the output returns:

wal_level
-----------
 logical
(1 row)

If the output is different, enable logical replication in your database by setting wal_level to logical:

ALTER SYSTEM SET wal_level = logical;

Change max replication slots:

After enabling logical replication, we need to verify that the database’s max_replication_slots value is equal to or greater than the number of databases in our PostgreSQL server. To check the current value, run the following query in the PostgreSQL (psql) terminal:

show max_replication_slots;

The output returns:

max_replication_slots
-----------

(1 row)

If it is smaller than the number of databases in our PostgreSQL server, adjust it by running the following query, where use_your_number is the number of databases in our server:

ALTER SYSTEM SET max_replication_slots = use_your_number;

And restart the server.

Challenges we face during migration

There are some challenges when implementing logical replication for tables without a primary key. There are two different methods to implement logical replication without a PK column; the second is to use a unique key.

This can be implemented with a similar set of steps to the ones we perform with a primary key, and it works in much the same way. Here, instead of the primary key, a unique key keeps track of updates.

Caveats

  • DELETE/UPDATE operations are not supported without a replica identity.
  • A unique index cannot be used as the replica identity if its columns allow NULLs.
  • Setting REPLICA IDENTITY to FULL:
    • When no appropriate index is found for the replica identity, we may set the replica identity to FULL. In this case, all the table columns collectively act as a primary key.
    • Due to supplemental logging, this generates a huge amount of WAL.
    • This may be slower than the traditional approach.

Things to consider

We need to set the replica identity to FULL for the tables that are logically migrated using only a UNIQUE key, as otherwise DELETE/UPDATE operations won’t be supported.
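
For illustration, setting the replica identity could look like the following hedged sketch; the table and index names are hypothetical:

-- Use a unique index (on NOT NULL columns) as the replica identity
ALTER TABLE orders REPLICA IDENTITY USING INDEX orders_order_no_key;

-- Or, when no suitable index exists, fall back to FULL (all columns act as the key)
ALTER TABLE order_notes REPLICA IDENTITY FULL;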

After the data gets synced from the DBaaS fork to the new Droplet VM, we need to use the pg_dump and pg_restore method for sequences. A question arises here: why do we need to dump the sequences, and why can’t we replicate them via logical replication?

Logical replication is designed to track WAL changes and report the current states and values to subscribers. It would be quite contradictory to replicate a sequence because the current sequence value does not equal the value stored in the WAL. To remedy this, the PostgreSQL documentation suggests manually copying over the sequence values or using a utility such as pg_dump to do the copying (a sketch follows the list below).

  • Dump the sequences from the DBaaS DB fork
  • Stop the DBaaS DB fork
  • Restore the sequences on the new droplet
  • Disable the logical subscriptions
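
A hedged sketch of that sequence hand-off, assuming the sequences live in the public schema and follow a *_seq naming pattern (adjust the pattern, hosts, and users to your setup):

# Dump only the sequence values from the source (the DBaaS fork)
pg_dump -h source-dbaas-host -U doadmin -d mydb --data-only -t 'public.*_seq' > sequences.sql

# Restore them on the new Droplet
psql -h droplet-host -U postgres -d mydb -f sequences.sql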

Below is the short summary that has been followed to migrate the environment:

Source: Digital Ocean DBaaS
Destination: Digital Ocean Droplets
Process:

  • The client has chosen migration via a logical replication process to reduce downtime.
  • On the target VM, we installed Percona Distribution for PostgreSQL 13.7.
  • Dumped the roles from the source, i.e., the DBaaS, to the destination.
  • Listed the tables that don’t have a PK and informed the client.
  • The client added a PK for some tables and a UNIQUE key for others.
  • Installed on the VM the extensions that were present on the source cluster.
  • Dumped only the schema from the source, i.e., the DBaaS.
  • Restored the schema on the destination, i.e., the Droplet.
  • Adjusted the logical replication-related parameters on the source and destination, such as max_replication_slots, max_logical_replication_workers, and max_wal_senders.
  • Configured logical replication by creating the publication and subscription between the source and destination (see the sketch after this list).
  • Once the destination is in sync, disable the subscribers.
  • Dumped the sequences from the source and restored them on the destination.
  • Adjusted the listen_address, pg_hba files on the destination.
  • Dropped the subscribers on the destination
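
For reference, the publication/subscription step boils down to statements like the following sketch; the object names, host, port, and credentials are placeholders:

-- On the source (DBaaS), publish all tables
CREATE PUBLICATION mig_pub FOR ALL TABLES;

-- On the destination (Droplet), subscribe to the publication
CREATE SUBSCRIPTION mig_sub
    CONNECTION 'host=source-dbaas-host port=25060 dbname=mydb user=doadmin password=secret sslmode=require'
    PUBLICATION mig_pub;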

Conclusion

As we all know, PostgreSQL is an open source, object-relational database built with a focus on extensibility, data integrity, and speed. Its concurrency support makes it fully ACID-compliant. We achieved the goal of migrating the customer’s data from DBaaS to Droplets by using one of PostgreSQL’s great features, logical replication, and by dumping the sequences from the source and restoring them on the destination.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

Feb
14
2023
--

End of Life Notification for PostgreSQL 10

PostgreSQL 10 end of life

The final PostgreSQL 10 release was published on November 10, 2022, according to the PostgreSQL versioning policy page. Please remember that the final PostgreSQL 11 release is planned for November 9, 2023.

Following Percona’s Release Lifecycle policies, we follow and recommend the PostgreSQL community timelines. There won’t be any public builds with bug or security fixes available from either the community or Percona. Since the upstream development of PostgreSQL 10 has ended, we are removing PostgreSQL 10 from the officially supported software list. For all existing customers, we will continue support on a best-effort basis.

Before updating, it is worth checking out our post on updating legacy PostgreSQL, made available as part of a webinar on Upgrading / Migrating Your Legacy PostgreSQL to Newer PostgreSQL Versions.

We recommend that you upgrade at least to PostgreSQL 11, but considering the upcoming End of Life for this version as well, it is worth considering PostgreSQL 14. Remember that the newest release, PostgreSQL 15, is already available in the Percona Distribution for PostgreSQL, giving you the most product life and the newest features.

While planning the database upgrade, remember that Percona offers a range of professional services from top market experts. Whether you need support, managed services, consulting, or training, Percona has your back!

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

Feb
13
2023
--

PostgreSQL for MySQL DBAs Episode 12: Transactions

PostgreSQL for MySQL DBAs Transactions

Yes, MySQL DBAs can learn PostgreSQL! This series is for those who know MySQL and want to expand their knowledge, see how another database works, or are looking to expand their career horizons. In this episode, we will look at transactions. Yes, MySQL with InnoDB does have the ability to perform transactions, and this is a case where MySQL and PostgreSQL work pretty much the same way. But with PostgreSQL, you get named rollback points and, when using psql, a handy reminder that you are in a transaction. The supporting video with bonus material can be found here.

So why emphasize transactions? There are a lot of developers who do not use them on a regular basis, if at all. It helps if a DBA not only has a solid knowledge of the subject but good examples to pass on to others.

So what is a transaction?

A transaction allows the grouping of one or more queries as a set. That set can be committed, or written to the database, as a complete unit, or it can be backed out without changing the database. The classic example is moving money from one bank account to another: if something goes wrong, the entire transaction is canceled; if all goes well, the money is deducted from the first account and added to the second.

And to make this even more powerful, you can have save points within a transaction, so you can roll back to a given point without losing all the work done before it.

test=# START TRANSACTION;
START TRANSACTION
test=*# insert into z (z) values (1);
INSERT 0 1
test=*# savepoint a;
SAVEPOINT
test=*# insert into z (z) values (2);
INSERT 0 1
test=*# select * from z;
z
---
1
2
(2 rows)

In the above example, we start our transaction by issuing the START TRANSACTION statement. From this point until the data is committed, the statements are treated as one group of commands to be executed together. Metaphorically, consider the data as being written on a dry-erase board in the server, so you can make sure it is correct before you commit to carving it in stone when happy with the result. The first statement after the transaction is started is an insert of some data. We insert a row where the value of z is 1.

You may note that psql changes the prompt from test=# to test=*# to remind you that you are in a transaction. You are not going to get that reminder with the standard MySQL clients.

Next, we create a save point, here creatively named ‘a’. Note that we can name save points, so we can pinpoint exactly what to back out of the transaction. This is another metaphorical dry-erase board for what comes after. For some reason, we are happy with the row where z = 1, but the next statement may be something we are uncertain about. We set a save point that we can return to, so we can erase the newest writing on our imaginary dry-erase board, and proceed to add a second row of data. This time we add a row where z = 2.

So maybe we are happy with these two rows, or maybe not. We are undecided but carry on anyway.

test=*# savepoint b;
SAVEPOINT
test=*# insert into z (z) values (3);
INSERT 0 1
test=*# select * from z;
z
---
1
2
3
(3 rows)

At this point, we logically want another one of those imaginary dry-erase boards, so we issue a command to set a save point named ‘b’. We add data and take a peek at it. But when we look at the data we have entered, something does not make us happy, and we want to remove the last two rows from the dry-erase board.

test=*# rollback to savepoint a;
ROLLBACK
test=*# select * from z;
z
---
1
(1 row)

Something tells us that we do not need the last two rows. We issue a ROLLBACK TO SAVEPOINT, and all commands issued since we set that save point are wiped off the dry-erase board. So, within our transaction, we have only the first row.

test=*# rollback;
ROLLBACK
test=# select * from z;
z
---
(0 rows)

test=#

Please note that just using ROLLBACK without a named save point wipes clean the dry-erase board, er, the activities in the transaction, back to the point where we typed START TRANSACTION. Usually you want to go back to a named save point and not wipe out everything; wiping it all is similar in effect to forgetting a WHERE clause on a DELETE statement. So know where you really want to go back to.

But how does one do a real transaction?

test=# start transaction;
START TRANSACTION
test=*# insert into z (z) values (101);
INSERT 0 1
test=*# commit;
COMMIT
test=# select * from z;
z
-----
101
(1 row)

When the COMMIT statement is processed, everything between the START TRANSACTION and the COMMIT is executed as a block. This blog is not the place to go into ACID compliance or isolation levels, but for now, take it for granted that they all go in together. Metaphorically, we like what we see on the dry-erase board and convert it to carved stone in the database.

test=# BEGIN;
BEGIN
test=*# update z set z = 102;
UPDATE 1
test=*# commit;
COMMIT
test=# select * from z;
z
-----
102
(1 row)

And you can use BEGIN to start a transaction if you so desire.

Summary

So transactions are transactions in both MySQL and PostgreSQL. But you get more flexibility for rollbacks with PostgreSQL. The past videos for PostgreSQL for MySQL Database Administrators (DBA) can be found here: episode one, episode two, episode three, episode four, episode five, episode six, episode seven, episode eight, episode nine, episode ten, and episode eleven.

Feb
02
2023
--

PostgreSQL Database Security: External Server-Based Authentication


Database security is crucial to protecting the sensitive information stored in a database. A critical aspect is authentication, which refers to verifying the identity of a user trying to access the database. Authentication is typically accomplished through usernames and passwords and may include other factors such as security tokens and one-time passwords. Database Administrators (DBAs) must implement robust authentication mechanisms to ensure that only authorized users can access the database and its data. Additionally, it is best practice to regularly monitor and audit database authentication logs to detect and prevent unauthorized access attempts.

PostgreSQL server authentication refers to verifying the identity of a user trying to connect to the PostgreSQL server. This can be done using various methods, including password-based authentication, a client certificate, and external authentication methods such as GSSAPI, LDAP, and RADIUS. The pg_hba.conf file is used to configure the authentication methods the PostgreSQL server uses and can specify different methods for different types of connections. By default, PostgreSQL uses trust-based authentication, where any connection from localhost is considered trusted and doesn’t require a password. However, it’s important to note that it’s recommended to use password-based or external authentication methods for remote connections and production environments to ensure secure access to the data.

When talking about security, authentication is the first line of defense. PostgreSQL provides various methods of authentication, which fall into three categories.

 

PostgreSQL internal authentication and OS-based authentication have already been discussed in previous blogs. Now it’s time to discuss external authentication methods.

PostgreSQL external authentication

PostgreSQL supports a variety of external authentication methods, including GSSAPI, LDAP, and RADIUS. GSSAPI, or Generic Security Service Application Program Interface, uses Kerberos to authenticate users. LDAP, or Lightweight Directory Access Protocol, authenticates users against an LDAP server. RADIUS, or Remote Authentication Dial-In User Service, authenticates users against a RADIUS server. These external authentication methods can secure a PostgreSQL database by providing a centralized way of managing user access and eliminating the need to store passwords in the database. However, it’s important to note that GSSAPI and RADIUS are only supported in PostgreSQL server versions 9.1 or later, and LDAP is a built-in feature for all versions.

LDAP (Lightweight Directory Access Protocol)

LDAP is a popular authentication method for enterprise environments, and it can also be used with PostgreSQL. In this section, we will go over how to set up LDAP authentication for PostgreSQL and provide an example of how it can be configured. Before you begin, you will need access to an LDAP server and a basic understanding of how it works. You will also need to have PostgreSQL installed and running on your server.

Step 1: Install the OpenLDAP server and client packages

sudo apt-get update 
sudo apt-get install slapd ldap-utils

Step 2: Configure the OpenLDAP server

sudo dpkg-reconfigure slapd

  • Select “No” when asked to configure the database with dbconfig-common.
  • Set the domain name for your LDAP server, for example, “example.com”.
  • Set the organization name for your LDAP server, for example, “Example Inc”.
  • Set the administrator password for your LDAP server.

Step 3: Create an LDAP directory structure

sudo ldapadd -x -D cn=admin,dc=example,dc=com -w <admin password> -f base.ldif

where “base.ldif” is a file containing the directory structure you want to create.

Step 4: Test the LDAP server

ldapsearch -x -b dc=example,dc=com -D cn=admin,dc=example,dc=com -w <admin password>

This command should return information about your LDAP directory.

Step 5: Add users and groups to the LDAP directory

ldapadd -x -D cn=admin,dc=example,dc=com -w <admin password> -f users.ldif

where “users.ldif” is a file containing the users and groups you want to add to the directory.

Step 6: Test the user authentication

ldapwhoami -x -D cn=<user>,dc=example,dc=com -w <user password>

This command should return the DN (Distinguished Name) of the user, indicating that the user authenticated successfully.

Step 7: Configure pg_hba.conf

The pg_hba.conf file controls the authentication methods used by PostgreSQL. To enable LDAP authentication, you’ll need to add a line to the pg_hba.conf file that specifies the LDAP server and method to be used.

For example:

host all all 0.0.0.0/0 ldap ldapserver=ldap.example.com ldapprefix="cn=" ldapsuffix=",dc=example,dc=com"

This line tells PostgreSQL to authenticate connections from any address against the LDAP server at ldap.example.com using simple bind mode: the supplied user name is wrapped with the prefix and suffix to build the DN (for example, cn=myldapuser,dc=example,dc=com).

Step 8: Restart PostgreSQL

After making changes to the pg_hba.conf file, you will need to restart the PostgreSQL service for the changes to take effect.

Step 9: Test the LDAP connection

You can test the connection to the LDAP server by running the following command:

ldapwhoami -h ldap.example.com -D "cn=admin,dc=example,dc=com" -w "adminpassword"

This command will bind to the LDAP server as the “cn=admin,dc=example,dc=com” user with the password “adminpassword”. If the connection is successful, you should see a message indicating the user you are currently bound as.

Step 10: Test LDAP authentication

To test LDAP authentication with PostgreSQL, you can connect to the database using the psql command-line interface. For example,

psql -U myldapuser -h localhost

If the LDAP authentication is set up correctly, you should be able to connect to the database using myldapuser’s LDAP credentials.
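
One hedged note worth adding: the LDAP server only verifies the password, so the role itself must already exist in PostgreSQL, for example:

-- The LDAP server checks the password, but the role must exist locally
CREATE ROLE myldapuser LOGIN;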

GSSAPI (Generic Security Service Application Program Interface)

GSSAPI is an authentication method that allows users to authenticate to PostgreSQL using Kerberos. In this section, we will go over how to set up GSSAPI authentication for PostgreSQL and provide an example of how it can be configured.

Before you begin, you will need to have a Kerberos infrastructure and a basic understanding of how it works. You will also need to have PostgreSQL installed and running on your server and the Kerberos libraries (gssapi and gssapi_krb5) installed on the server where PostgreSQL is running.

Step 1: Configure pg_hba.conf

The pg_hba.conf file controls the authentication methods used by PostgreSQL. To enable GSSAPI authentication, you’ll need to add a line to the pg_hba.conf file that specifies the GSSAPI method to be used. For example:

hostgssenc postgres postgres 192.168.0.102/32 gss include_realm=0

This line tells PostgreSQL to use GSSAPI for authentication and not to include the Kerberos realm in the username. 

Step 2: Restart PostgreSQL

After making changes to the pg_hba.conf file, you will need to restart the PostgreSQL service for the changes to take effect. If you get the following error, it means your server was not built with the --with-gssapi option.

2023-01-31 19:25:18.585 PKT [42302] LOG: hostgssenc record cannot match because GSSAPI is not supported by this build

Step 3: Kerberos configuration

You will need to configure Kerberos to work with PostgreSQL. You will need to create a service principal for the PostgreSQL server and add it to the Kerberos keytab file.
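
A hedged sketch of that step using MIT Kerberos kadmin; the admin principal, realm, host name, and keytab path are placeholders:

# Create the service principal and export it to a keytab (MIT Kerberos)
kadmin -p admin/admin -q "addprinc -randkey postgres/your.postgres.host@YOURREALM.COM"
kadmin -p admin/admin -q "ktadd -k /path/to/keytab postgres/your.postgres.host@YOURREALM.COM"

Make sure the resulting keytab file is readable by the user running the PostgreSQL server.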

Step 4: Test the GSSAPI connection

You can test the GSSAPI connection by running the following command:

kinit -k -t /path/to/keytab postgres/your.postgres.host@YOURREALM.COM

This command will acquire a Kerberos ticket for the service principal you created in step 3.

Step 5: Test GSSAPI authentication

To test GSSAPI authentication with PostgreSQL, you can connect to the database using the psql command-line interface. For example:

psql "dbname=mydb host=myhost user=myuser"

If the GSSAPI authentication is set up correctly, you should be able to connect to the database using your Kerberos credentials.

SSPI (Security Support Provider Interface)

SSPI is an authentication method that allows users to authenticate to PostgreSQL using Windows’ built-in security features. It is similar to GSSAPI but specific to the Windows operating system.

The PostgreSQL server uses the SSPI library to negotiate authentication with the client using the Kerberos or NTLM protocols. To set up SSPI authentication in PostgreSQL, you must configure the pg_hba.conf file to use the “sspi” method and restart the PostgreSQL service.

For example, to enable SSPI authentication for all connections from the local host, you would add the following line to the pg_hba.conf file:

host all all 127.0.0.1/32 sspi

This line tells PostgreSQL to use SSPI for authentication for all connections coming from the IP address 127.0.0.1, which corresponds to the localhost. Once the changes are made, you will need to restart the PostgreSQL service for the changes to take effect. To test the SSPI authentication, you can connect to the database using the psql command-line interface. SSPI authentication is only supported on Windows and is available on PostgreSQL server version 9.1 or later.

RADIUS (Remote Authentication Dial-In User Service)

RADIUS is a widely used protocol for authenticating remote users. It can also be used to authenticate users in a PostgreSQL database. In this blog post, we will go over how to set up RADIUS authentication for PostgreSQL and provide an example of how it can be configured.

Before you begin, you will need access to a RADIUS server and a basic understanding of how it works.

Step 1: Install FreeRADIUS  

Here are the steps to install and configure a RADIUS server on Ubuntu:

sudo apt-get update
sudo apt-get install freeradius freeradius-utils

Step 2: Configure the RADIUS server

  • The main configuration file for FreeRADIUS is located at /etc/freeradius/radiusd.conf.
  • The users file is located at /etc/freeradius/users.

Step 3: Add users to the RADIUS server

sudo nano /etc/freeradius/users

Add a new entry for each user in the following format:

username Auth-Type := Local, User-Password == "password"

Step 4: Restart the RADIUS server

sudo service freeradius restart

Step 5: Test the RADIUS server

sudo radtest username password 127.0.0.1 0 testing123

If the test is successful, you should receive an “Access-Accept” response.

Step 6: Configure pg_hba.conf

The pg_hba.conf file controls the authentication methods used by PostgreSQL. To enable RADIUS authentication, you’ll need to add a line to the pg_hba.conf file that specifies the RADIUS server and method to be used. For example:

host all all 0.0.0.0/0 radius radiusservers=192.168.1.1 radiussecrets=password radiusports=1812

This line tells PostgreSQL to use RADIUS for authentication.

Step 7: Restart PostgreSQL

After making changes to the pg_hba.conf and radius.conf files, you will need to restart the PostgreSQL service for the changes to take effect.

Step 8: Test RADIUS authentication

To test RADIUS authentication with PostgreSQL, you can try to connect to the database using the psql command-line interface. For example,

psql -U username -h localhost

If the RADIUS authentication is set up correctly, you should be able to connect to the database using your RADIUS credentials.

Conclusion

In conclusion, external authentication methods such as GSSAPI, LDAP, and RADIUS can enhance the security of a PostgreSQL database by providing a centralized way of managing user access and eliminating the need to store passwords in the database. GSSAPI uses Kerberos to authenticate users, LDAP authenticates users against an LDAP server, and RADIUS authenticates users against a RADIUS server.

It’s important to note that GSSAPI and RADIUS are only supported in PostgreSQL server versions 9.1 or later, and LDAP is a built-in feature for all versions. In addition to external authentication methods, the PostgreSQL server also supports password-based and client certificate-based authentication methods. It’s recommended to use password-based or external authentication methods for remote connections and production environments to ensure secure access to the data.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open source community in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

Feb
02
2023
--

An Argument for Logical Failover Slots

Logical Failover Slots PostgreSQL

These days, the typical PostgreSQL cluster consists not only of a multi-node replication cluster, which is ordinarily an asynchronous streaming replication model, but can sometimes also include a logical replication component.

 

logical replication postgresql

 

Recall that logical replication works using the PUB/SUB model, where individual tables are published and are then subscribed to by remotely connected databases/tables. DML operations, such as INSERT, UPDATE, DELETE, TRUNCATE, etc., are then replicated from one or more publishers to one or more subscribers.

For the most part it’s a pretty straightforward operation. The only real limitation is that one can only publish tables from a read-write server such as the PRIMARY.

 

There’s a problem, however, when it comes to executing failovers. While a STANDBY/REPLICA host can quickly take over with a simple SELECT pg_promote() SQL statement, the same cannot be said for logical replication. Instead, failing over is, in fact, not a single action but a collection of discrete steps that must be performed in an exact and precise manner.

Consider the following steps typifying a failover:

  1. Promote the REPLICA, thus becoming the new PRIMARY.
  2. When present, redirect other REPLICAs pointing to the new PRIMARY.
  3. Failover the logical replication slot (a SQL sketch follows this list):
    1. Block all logical replication from the failed PRIMARY.
    2. Create a logical replication slot on the newly promoted PRIMARY.
    3. Alter the subscription and point to the newly promoted PRIMARY.
    4. Resume/restore logical replication between publisher and subscriber.
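
A hedged SQL sketch of step 3; the slot, subscription, and connection values are placeholders, and the slot name must match the subscription’s slot_name (by default, the subscription name):

-- On the newly promoted PRIMARY: recreate the logical replication slot
SELECT pg_create_logical_replication_slot('my_sub', 'pgoutput');

-- On the subscriber: repoint the subscription to the new PRIMARY
ALTER SUBSCRIPTION my_sub DISABLE;
ALTER SUBSCRIPTION my_sub CONNECTION 'host=new-primary dbname=mydb user=repuser';
ALTER SUBSCRIPTION my_sub ENABLE;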

The challenge, of course, is reconstituting the logical replication process as quickly as possible, ideally without any data loss.

Two common solutions come to mind:

  1. Block all incoming connections before promoting the STANDBY while simultaneously installing the logical replication slot.
  2. Promote the STANDBY and then flush and recopy the entire contents of the subscribed table(s).

In both cases, these solutions make fast recovery problematic, not to mention the additional effort required by the SRE/DBA/DEV preparing the requisite instructions.

failover postgresql

In an ideal world, the perfect implementation of a logical slot failover would have one already in place on the STANDBY, thus guaranteeing data consistency without fear of data loss no matter how fast or slow the actual promotion or altering the subscription’s connectivity parameters may take.

The good news is that there are hacks that can speed up the entire recovery process, although they all have their own quirky limitations. Patroni, for example, implements one such technique by copying the file named state, which is located in the data cluster’s subdirectory pg_replslot, over to the REPLICA’s data cluster in the same location.

# PGDATA on an Ubuntu install
/var/lib/postgresql/15/main/pg_replslot/pg3
└── state

However, the caveat is that it requires a full server restart to the REPLICA, in addition to being promoted, before the logical slot is fully active. And, of course, timing is critical.

failover desired state postgresql

 

So there you have it.

Too bad such a feature doesn’t exist… or does it?

 

 

 

 

Stay tuned, folks; the game is afoot.

For the curious, I’ve included some references in past blogs, courtesy of Jobin:

And here’s another set of references: a valiant attempt to incorporate this feature into PostgreSQL a few years ago.

 

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open source community in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

 
