Aug
15
2022
--

Talking Drupal #360 – Backdrop Case Study

Today we are talking about Backdrop CMS with Eric Toupin.

www.talkingDrupal.com/360

Topics

  • What is Backdrop
  • How did you hear about it
  • Tell us about Aten and your clients
  • What type of work is Aten doing with Stanford
  • Why was Backdrop CMS considered
    • How long was Backdrop out before you considered it
  • Are there features Backdrop has that Drupal does not have
  • What are some limitations of Backdrop
  • If someone has Drupal 7 what do you consider the criteria for Backdrop vs Drupal 9
  • Are you working on other Backdrop sites
  • Do you consider Backdrop its own CMS
  • Have you contributed anything back to Drupal from Backdrop
  • Does Aten consider Backdrop a service it provides

Resources

Guests

Eric Toupin – www.drupal.org/u/erictoupin

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan John Picozzi – www.epam.com @johnpicozzi Cathy Theys – @YesCT

MOTW

hreflang The core Content Translation module adds hreflang tags only on content entity pages. This module, on the other hand, adds hreflang tags to all pages, and can be configured to defer to Content Translation module on content entity pages. If for some reason you’d like to modify the hreflang tags on a page, you can do so by implementing

Aug
08
2022
--

Talking Drupal #359 – Contribution Events

Today we are talking about Contribution Events.

www.talkingDrupal.com/359

Topics

  • What are contribution events
  • What is the contribution event
  • What are the key goals
  • Can you give us a quick overview of how you started the community initiative
  • Why did each of you feel this was important
  • How did you get involved
  • What was involved in the first event
  • What were lessons learned
  • What were the successes of the first event
  • How can someone have a contribution event
  • Are there differences in having events centered on various areas
  • What are the most important resources
  • How can someone get involved

Resources

Guests

Kristen Pol – www.drupal.org/u/kristen-pol @kristen_pol Surabhi Gokte – www.drupal.org/u/surabhi-gokte @SurabhiGokte

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan John Picozzi – www.epam.com @johnpicozzi Ryan Price – ryanpricemedia.com @liberatr

MOTW

Anonymous Login This is a very simple, lightweight module that will redirect anonymous users to the login page whenever they reach any admin-specified page paths, and will direct them back to the originally-requested page after successful login.

Aug
08
2022
--

Anti-Affinity Configuration in Percona Operator for PostgreSQL, Updated Grafana in PMM: Release Roundup August 8, 2022

Percona releases

It’s time for the release roundup!

Percona is a leading provider of unbiased open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive.

Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. They offer highlights and critical information, as well as links to the full release notes and direct links to download the software or service itself.

Today’s post includes those releases and updates that have come out since July 25, 2022. Take a look!

Percona Operator for PostgreSQL 1.3.0

On August 4, 2022, we released Percona Operator for PostgreSQL 1.3.0. It is based on best practices for the configuration and setup of a Percona Distribution for PostgreSQL cluster. The benefits of the Operator are many, but saving time and delivering a consistent and vetted environment is key. Release highlights include:

  • The automated upgrade is now disabled by default to prevent unplanned downtimes for user applications and to provide defaults more focused on strict user control over the cluster
  • Flexible anti-affinity configuration is now available, which allows the Operator to isolate PostgreSQL cluster instances on different Kubernetes nodes or to increase its availability by placing PostgreSQL instances in different availability zones

Download Percona Operator for PostgreSQL 1.3.0

Percona Monitoring and Management (PMM) 2.29.1

Percona Monitoring and Management 2.29.1 was released on July 28, 2022. It is an open source database monitoring, management, and observability solution for MySQL, PostgreSQL, and MongoDB. The release highlight: we have updated Grafana to fix critical CVEs impacting Alerting in PMM.

Download Percona Monitoring and Management 2.29.1

 

That’s it for this roundup, and be sure to follow us on Twitter to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments.

Aug
05
2022
--

Run MongoDB in Kubernetes: Solutions, Pros and Cons


Running MongoDB in Kubernetes is becoming the norm. The most common use cases for running databases in Kubernetes are:

  • Create a private Database as a Service (DBaaS).
  • Maintain coherent infrastructure, where databases and applications run on the same platform.
  • Avoid vendor lock-in and be able to run their databases anywhere utilizing the power of Kubernetes and containers.

There are multiple solutions that allow you to run MongoDB in Kubernetes, and in this blog post, we are going to compare these solutions and review the pros and cons of each of them.

Solutions that we are going to review are:

  • Bitnami Helm chart
  • KubeDB
  • MongoDB Community Operator
  • Percona Operator for MongoDB

The summary and comparison table can be found in our documentation.

Bitnami Helm chart

Bitnami was acquired by VMware in 2019 and is known for its Helm charts to deploy various applications in Kubernetes. The key word here is deploy, as there are no management capabilities in this solution.

All Bitnami Helm charts are under Apache 2.0 license, pure open source. The community around Bitnami is quite big and active.

Deployment

Installation is a usual two-step process: add Helm repository, and deploy the chart.

Add the repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

You can tweak the chart installation through values. I decided to experiment with it and deploy a replica set vs the default standalone setup:

$ helm install my-mongo bitnami/mongodb --set architecture="replicaset" --set replicaCount=2

This command deploys a MongoDB replica set with two nodes and one arbiter. 

It is worth noting that the GitHub readme is not the only piece of documentation; there are also official Bitnami docs with more examples and details.

Features

Management capabilities are not the strong part of this solution, but it is possible to scale the replica set horizontally (add more nodes to it) and vertically (add more resources to nodes). 

Monitoring is provided by mongodb-exporter running as a sidecar, described in the Metrics parameters section. It’s a straightforward approach that is enough for most use cases.

What I love about all Bitnami Helm charts is that they provide you with lots of flexibility for tuning Kubernetes primitives and components: labels, annotations, images, resources, and so on.

If we look into MongoDB-related functionality in this solution, there are a few that are worth noting:

  1. Create users and databases during database provisioning. It is a bit odd that you cannot create roles and assign them right away, but it is a nice touch.
  2. Hidden node – you will not find it in any other solution in this blog post. 
  3. TLS – you can encrypt the traffic flowing between the replica set nodes. Good move, but for some reason, it is disabled by default. 

I have not found any information about supported MongoDB versions. The latest Helm chart version was deploying MongoDB 5.0.

You can deploy sharded clusters, but that requires a separate Helm chart – MongoDB Sharded. Still, you cannot easily integrate this solution with enterprise systems such as LDAP or HashiCorp Vault; at the least, it is not going to be a straightforward integration.

KubeDB

KubeDB, created by AppsCode, is a Swiss-army knife Operator to deploy various databases in Kubernetes including MongoDB. 

This solution follows an open-core model, where limited functionality is available for free, but good stuff comes after you purchase the license. Here you can find a comparison of the Community vs Enterprise versions.

The focus of our review would be on the Community version, but we are going to mention which features are available in the Enterprise version.

Deployment

I found deployment a bit cumbersome as it requires downloading the license first, even for the Community version. 

Add the Helm repo:

$ helm repo add appscode https://charts.appscode.com/stable/

Install the Operator:

$ helm install kubedb appscode/kubedb \
 --version v2022.05.24 \
 --namespace kubedb --create-namespace \
 --set-file global.license=/path/to/the/license.txt

Deploy the database with the command below. It is worth mentioning that with the Community version you can deploy the database only in the demo namespace.

$ kubectl create -f https://github.com/kubedb/docs/raw/v2022.05.24/docs/examples/mongodb/clustering/mongo-sharding.yaml

This deploys a sharded cluster. You can find a bunch of examples on GitHub.

Features

The Community version comes with almost no management capabilities. You can add and remove resources and nodes to perform basic scaling, but nothing else.

Backups, restores, upgrades, encryption, and many more are all available in the Enterprise version only.

For monitoring, KubeDB follows the same approach as Bitnami and allows you to deploy a mongodb-exporter container as a sidecar. Read more here.

The Community version clearly comes with limited functionality, which is not enough to run it in production. At the same time, there are some interesting features:

  • One Operator, many databases. I like this approach as it can simplify your operational landscape if you run various database flavors.
  • Pick your MongoDB version. You can easily list the supported MongoDB versions and pick the one you need through a custom resource:
$ kubectl get mongodbversions
NAME             VERSION   DISTRIBUTION   DB_IMAGE                                 DEPRECATED   AGE
3.4.17-v1        3.4.17    Official       mongo:3.4.17                                          20m
3.4.22-v1        3.4.22    Official       mongo:3.4.22                                          20m
3.6.13-v1        3.6.13    Official       mongo:3.6.13                                          20m
3.6.8-v1         3.6.8     Official       mongo:3.6.8                                           20m
4.0.11-v1        4.0.11    Official       mongo:4.0.11                                          20m
4.0.3-v1         4.0.3     Official       mongo:4.0.3                                           20m
4.0.5-v3         4.0.5     Official       mongo:4.0.5                                           20m
4.1.13-v1        4.1.13    Official       mongo:4.1.13                                          20m
4.1.4-v1         4.1.4     Official       mongo:4.1.4                                           20m
4.1.7-v3         4.1.7     Official       mongo:4.1.7                                           20m
4.2.3            4.2.3     Official       mongo:4.2.3                                           20m
4.4.6            4.4.6     Official       mongo:4.4.6                                           20m
5.0.2            5.0.2     Official       mongo:5.0.2                                           20m
5.0.3            5.0.3     Official       mongo:5.0.3                                           20m
percona-3.6.18   3.6.18    Percona        percona/percona-server-mongodb:3.6.18                 20m
percona-4.0.10   4.0.10    Percona        percona/percona-server-mongodb:4.0.10                 20m
percona-4.2.7    4.2.7     Percona        percona/percona-server-mongodb:4.2.7-7                20m
percona-4.4.10   4.4.10    Percona        percona/percona-server-mongodb:4.4.10                 20m

As you can see, it even supports Percona Server for MongoDB.

  • Sharding. I like that the Community version comes with sharding support as it helps to experiment and understand the future fit of this solution for your environment. 

MongoDB Community Operator

Similar to KubeDB, MongoDB Corp follows the open core model. The Community version of the Operator is free, and there is an enterprise version available (MongoDB Enterprise Kubernetes Operator), which is more feature rich.

Deployment

A Helm chart is available for this Operator, but there is no chart to deploy the Custom Resource itself. This makes onboarding a bit more complicated.

Add the repository: 

$ helm repo add mongodb https://mongodb.github.io/helm-charts

Install the Operator:

$ helm install community-operator mongodb/community-operator

To deploy the database, you need to modify this example and set the password. Once done, apply it to deploy a replica set of three nodes:

$ kubectl apply -f mongodb.com_v1_mongodbcommunity_cr.yaml

Features

There is no expectation that anyone would run the MongoDB Community Operator in production, as it lacks basic features: backups and restores, sharding, upgrades, etc. But, as always, there are interesting ideas worth mentioning:

  • User provisioning. Described here. You can create a MongoDB database user to authenticate to your MongoDB replica set using SCRAM. You create the Secret object and list users and their corresponding roles.
  • Connection string in the secret. Once a cluster is created, you can get the connection string, without figuring out the service endpoints or fetching the user credentials.
  "connectionString.standard": "mongodb://my-user:mySupaPass@example-mongodb-0.example-mongodb-svc.default.svc.cluster.local:27017,example-mongodb-1.example-mongodb-svc.default.svc.cluster.local:27017,example-mongodb-2.example-mongodb-svc.default.svc.cluster.local:27017/admin?replicaSet=example-mongodb&ssl=false",
  "connectionString.standardSrv": "mongodb+srv://my-user:mySupaPass@example-mongodb-svc.default.svc.cluster.local/admin?replicaSet=example-mongodb&ssl=false",
  "password": "mySupaPass",
  "username": "my-user"

Percona Operator for MongoDB

Last but not least in our solution list is the fully open source Percona Operator for MongoDB. It was created in 2018 and went through various stages of improvement, reaching GA in 2020. The latest release came out in May 2022.

If you are a Percona customer, you will get 24/7 support for the Operator and clusters deployed with it. 

Deployment

There are various ways to deploy Percona Operators, and one of them is through Helm charts. One chart for Operator, another one for a database. 

Add repository:

$ helm repo add percona https://percona.github.io/percona-helm-charts/

Deploy the Operator:

$ helm install my-operator percona/psmdb-operator

Deploy the cluster:

$ helm install my-db percona/psmdb-db

Features

The Operator leverages Percona Distribution for MongoDB, and this unblocks various enterprise capabilities right away. The Operator itself is feature rich and provides users with various features that simplify the migration from a regular MongoDB deployment to Kubernetes: sharding, arbiters, backups and restores, and many more.

Some key features that I would like to mention:

  • Automated upgrades. You can set up the schedule of upgrades and the Operator will automatically upgrade your MongoDB cluster’s minor version with no downtime to operations.
  • Point-in-time recovery. Percona Backup for MongoDB is part of our Distribution and provides backup and restore functionality in our Operator, and this includes the ability to store oplogs on an object storage of your choice. Read more about it in our documentation.
  • Multi-cluster deployment. Percona Operator supports complex topologies with cross-cluster replication capability. The most common use cases are disaster recovery and migrations.

Conclusion

Choosing a solution to deploy and manage MongoDB is an important technical decision, which might impact various business metrics in the future. In this blog post, we summarized and highlighted the pros and cons of various open source solutions to run MongoDB in Kubernetes. 

An important thing to remember is that the solution you choose should not only provide a way to deploy the database but also enable your teams to execute various management and maintenance tasks without drowning in MongoDB complexity. 

Aug
05
2022
--

An Illustration of PostgreSQL Bloat

An Illustration of PostgreSQL Bloat

I have been working with Postgres now for many years, most recently as a consultant at Percona for companies with new implementations of Postgres after migrating from Oracle or some other legacy database engine. In some cases, these are Fortune 100 companies with many talented people working for them. However, not all databases work the same, and one of the most common observations I make when reviewing a Postgres environment for a client is the amount of table bloat and index bloat, along with a lack of understanding of its impact on performance and how to address it.

I wrote a blog about this topic a few years ago and never gave it much thought after its publication. However, with the large number of companies moving to Postgres for obvious reasons, and the lack of true Postgres database management skills needed to support fairly large databases, I thought I would rewrite this blog and bring it back to life with some clarity to help one understand bloat and why it happens.

What causes bloat?

In PostgreSQL, the culprit is Multi-Version Concurrency Control, commonly referred to as MVCC.

MVCC ensures that a transaction against a database will return only data that’s been committed, in a snapshot, even if other processes are trying to modify that data.

Imagine a database with millions of rows in a table. Any time you update or delete a row, Postgres has to keep track of that row based on a transaction ID. For example, you may be running a long query with transaction ID 100 while someone named John updates the same table using transaction ID 101. At this point, since you are still inside transaction 100, which is older than 101, the changes made by John in transaction 101 are not relevant or visible to your query. You are in your own personal bubble of data from before the data changed. Any new queries from you or anyone else with a transaction ID greater than 101 will see the changes made by John in transaction 101. Once no transactions are in play with IDs of 101 or lower, the version of the data you saw in transaction 100 will no longer be needed by the database and will be considered dead but not gone. Hence, bloat!

At a high level, vacuuming is used to free up dead rows in a table so they can be reused. It also helps you avoid transaction ID wraparound.

Let’s go through a few steps to illustrate how all this takes place.

In order for Postgres to know which transaction data should be in the result set of your query, the snapshot makes note of transaction information.

Essentially, if your transaction ID is 100, you will only see data from all transaction IDs leading up to 100. As stated above, you will not see data from transaction ID 101 or greater.
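The visibility rule just described can be sketched in a few lines of Python. This is a toy model for illustration, not PostgreSQL's actual implementation: xmin is the creating transaction ID, xmax is the deleting transaction ID, and 0 means the tuple was never deleted.

```python
def visible(t, snapshot_xid):
    # A tuple version is visible to a snapshot if it was created at or
    # before the snapshot's transaction ID and not deleted by a
    # transaction at or before it (xmax == 0 means "never deleted").
    return t["xmin"] <= snapshot_xid and (t["xmax"] == 0 or t["xmax"] > snapshot_xid)

heap = [
    {"xmin": 90,  "xmax": 101, "col1": 5},  # old version, deleted by John's txn 101
    {"xmin": 101, "xmax": 0,   "col1": 7},  # new version created by txn 101
]

# Your long-running query in transaction 100 still sees the old version:
assert [t["col1"] for t in heap if visible(t, 100)] == [5]
# A later query in transaction 102 sees John's change instead:
assert [t["col1"] for t in heap if visible(t, 102)] == [7]
```

Note that the old version stays in the heap even though transaction 102 can no longer see it; that invisible-but-present tuple is exactly the bloat we are discussing.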

Setting up an example

Let’s start by creating a simple table for our example, called percona:

percona=# CREATE TABLE percona ( col1 int );
CREATE TABLE


percona=# INSERT INTO percona values (1);
INSERT 0 1


percona=# INSERT INTO percona values (2);
INSERT 0 1


percona=# INSERT INTO percona values (3);
INSERT 0 1


percona=# INSERT INTO percona values (4);
INSERT 0 1


percona=# INSERT INTO percona values (5);
INSERT 0 1

You can wrap multiple inserts into a single transaction with BEGIN and COMMIT:

percona=# BEGIN;
BEGIN


percona=*# INSERT INTO percona SELECT generate_series(6,10);
INSERT 0 5

percona=*# COMMIT;
COMMIT

Here we can see the 10 rows we inserted into the table, along with some hidden system columns:

percona=# SELECT xmin, xmax, * FROM percona;
   xmin   | xmax | col1
----------+------+------
 69099597 |    0 |    1
 69099609 |    0 |    2
 69099627 |    0 |    3
 69099655 |    0 |    4
 69099662 |    0 |    5
 69099778 |    0 |    6
 69099778 |    0 |    7
 69099778 |    0 |    8
 69099778 |    0 |    9
 69099778 |    0 |   10
(10 rows)

As you can see, values one through five (in the col1 column) have unique transaction IDs (shown in the xmin column); they were the result of individual INSERT statements, made one after the other. The rows with values six through 10 share the same transaction ID of 69099778; they were all part of the single transaction we created with the BEGIN and COMMIT statements.

At this point, you may be asking yourself what this has to do with vacuum or autovacuum. We will get there. First, you need to understand the transaction ID logic, and seeing it laid out as above helps.

How does the table bloat?

In Postgres, the heap is a file containing a list of variable-sized records, in no particular order. The location of a row within a page (a Postgres page is 8kB in size) is identified by a pointer called the CTID.

To view the heap without having to read the raw data from the file, we need to create the following extension inside of our database:

CREATE EXTENSION pageinspect;

Now we can inspect the heap for our newly created table and rows:

percona=# SELECT * FROM heap_page_items(get_raw_page('percona',0));


 lp | lp_off | lp_flags | lp_len |  t_xmin  | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid |   t_data
----+--------+----------+--------+----------+--------+----------+--------+-------------+------------+--------+--------+-------+------------
  1 |   8160 |        1 |     28 | 69099597 |      0 |        0 | (0,1)  |           1 |       2304 |     24 |        |       | \x01000000
  2 |   8128 |        1 |     28 | 69099609 |      0 |        0 | (0,2)  |           1 |       2304 |     24 |        |       | \x02000000
  3 |   8096 |        1 |     28 | 69099627 |      0 |        0 | (0,3)  |           1 |       2304 |     24 |        |       | \x03000000
  4 |   8064 |        1 |     28 | 69099655 |      0 |        0 | (0,4)  |           1 |       2304 |     24 |        |       | \x04000000
  5 |   8032 |        1 |     28 | 69099662 |      0 |        0 | (0,5)  |           1 |       2304 |     24 |        |       | \x05000000
  6 |   8000 |        1 |     28 | 69099778 |      0 |        0 | (0,6)  |           1 |       2304 |     24 |        |       | \x06000000
  7 |   7968 |        1 |     28 | 69099778 |      0 |        0 | (0,7)  |           1 |       2304 |     24 |        |       | \x07000000
  8 |   7936 |        1 |     28 | 69099778 |      0 |        0 | (0,8)  |           1 |       2304 |     24 |        |       | \x08000000
  9 |   7904 |        1 |     28 | 69099778 |      0 |        0 | (0,9)  |           1 |       2304 |     24 |        |       | \x09000000
 10 |   7872 |        1 |     28 | 69099778 |      0 |        0 | (0,10) |           1 |       2304 |     24 |        |       | \x0a000000
(10 rows)

The table above shows 10 entries with a few columns:

  • lp is the ID of the row/tuple
  • t_xmin is the transaction ID
  • t_ctid is the pointer
  • t_data is the actual data

Currently, the pointer for each row is pointing to itself as determined by the form (page,tupleid). Pretty straightforward.

Now, let’s perform a few updates on a specific row. Let’s change the value of five to 20, then to 30, and finally back to five.

percona=# UPDATE percona SET col1 = 20 WHERE col1 = 5;
UPDATE 1


percona=# UPDATE percona SET col1 = 30 WHERE col1 = 20;
UPDATE 1


percona=# UPDATE percona SET col1 = 5 WHERE col1 = 30;
UPDATE 1

These three changes took place under three different transactions.

What does this mean?  We changed the values for a column three times but never added or deleted any rows. So we should still have 10 rows, right?

percona=# SELECT COUNT(*) FROM percona;
 count
-------
    10
(1 row)

Looks as expected. But wait! Let’s look at the heap now. The real data on disk.

percona=# SELECT * FROM heap_page_items(get_raw_page('percona',0));


 lp | lp_off | lp_flags | lp_len |  t_xmin  |  t_xmax  | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid |   t_data
----+--------+----------+--------+----------+----------+----------+--------+-------------+------------+--------+--------+-------+------------
  1 |   8160 |        1 |     28 | 69099597 |        0 |        0 | (0,1)  |           1 |       2304 |     24 |        |       | \x01000000
  2 |   8128 |        1 |     28 | 69099609 |        0 |        0 | (0,2)  |           1 |       2304 |     24 |        |       | \x02000000
  3 |   8096 |        1 |     28 | 69099627 |        0 |        0 | (0,3)  |           1 |       2304 |     24 |        |       | \x03000000
  4 |   8064 |        1 |     28 | 69099655 |        0 |        0 | (0,4)  |           1 |       2304 |     24 |        |       | \x04000000
  5 |   8032 |        1 |     28 | 69099662 | 69103876 |        0 | (0,11) |       16385 |       1280 |     24 |        |       | \x05000000
  6 |   8000 |        1 |     28 | 69099778 |        0 |        0 | (0,6)  |           1 |       2304 |     24 |        |       | \x06000000
  7 |   7968 |        1 |     28 | 69099778 |        0 |        0 | (0,7)  |           1 |       2304 |     24 |        |       | \x07000000
  8 |   7936 |        1 |     28 | 69099778 |        0 |        0 | (0,8)  |           1 |       2304 |     24 |        |       | \x08000000
  9 |   7904 |        1 |     28 | 69099778 |        0 |        0 | (0,9)  |           1 |       2304 |     24 |        |       | \x09000000
 10 |   7872 |        1 |     28 | 69099778 |        0 |        0 | (0,10) |           1 |       2304 |     24 |        |       | \x0a000000
 11 |   7840 |        1 |     28 | 69103876 | 69103916 |        0 | (0,12) |       49153 |       9472 |     24 |        |       | \x14000000
 12 |   7808 |        1 |     28 | 69103916 | 69103962 |        0 | (0,13) |       49153 |       9472 |     24 |        |       | \x1e000000
 13 |   7776 |        1 |     28 | 69103962 |        0 |        0 | (0,13) |       32769 |      10496 |     24 |        |       | \x05000000
(13 rows)

We have 13 rows, not 10. What the heck just happened?

Let’s examine our three separate update transactions (69103876, 69103916, 69103962) to see what’s happening with the heap:

t_xmin (69103876)

  • UPDATE percona SET col1 = 20 WHERE col1 = 5;
  • Logically DELETE tuple ID 5
  • Physically INSERT tuple ID 11
  • UPDATE tuple ID 5 pointer (t_ctid) to point to tuple ID 11

Tuple ID 5 becomes a dead row when its t_xmax is set to the ID of the new transaction, 69103876.

t_xmin (69103916)

  • UPDATE percona SET col1 = 30 WHERE col1 = 20;
  • Logically DELETE tuple ID 11
  • Physically INSERT tuple ID 12
  • UPDATE tuple ID 11 pointer (t_ctid) to point to tuple ID 12

Once again, tuple ID 11 becomes a dead row when its t_xmax is set to the ID of the new transaction, 69103916.

t_xmin (69103962)

  • UPDATE percona SET col1 = 5 WHERE col1 = 30;
  • Logically DELETE tuple ID 12
  • Physically INSERT tuple ID 13
  • UPDATE tuple ID 12 pointer (t_ctid) to point to tuple ID 13

Tuple ID 13 is live and visible to other transactions. It has no t_xmax and the t_ctid (0,13) points to itself.

The key takeaway is that we have not added or deleted any rows in our table. We still see 10 rows in the count, but our heap has grown to 13 tuples after three additional transactions were executed.

At a very high level, this is how PostgreSQL implements MVCC and why we have table bloat in our heap. In essence, every change to data results in a new row reflecting the latest state of the data; the old rows need to be cleaned up or reused for efficiency.
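The sequence above can be condensed into a toy model (illustration only, not PostgreSQL internals): each UPDATE marks the old tuple version dead by setting its xmax and physically inserts a new version, so the heap grows while the live row count stays the same.

```python
heap = []  # list of tuple versions; xmax == 0 means the version is live

def insert(xid, value):
    heap.append({"xmin": xid, "xmax": 0, "value": value})

def update(xid, old_value, new_value):
    for t in heap:
        if t["xmax"] == 0 and t["value"] == old_value:
            t["xmax"] = xid          # logically delete the old version
            insert(xid, new_value)   # physically insert the new version
            return

for i, v in enumerate(range(1, 11)):
    insert(100 + i, v)               # the 10 rows we inserted earlier

update(200, 5, 20)                   # the three updates from the example
update(201, 20, 30)
update(202, 30, 5)

live = [t for t in heap if t["xmax"] == 0]
assert len(live) == 10               # COUNT(*) still reports 10 rows
assert len(heap) == 13               # but the heap holds 13 versions, 3 dead
```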

Vacuuming the table

The way to deal with the table bloat is to vacuum the table:

percona=# vacuum percona;
VACUUM

Now, let’s examine the heap again:

percona=# SELECT * FROM heap_page_items(get_raw_page('percona',0));

lp | lp_off | lp_flags | lp_len |  t_xmin  | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid |   t_data
----+--------+----------+--------+----------+--------+----------+--------+-------------+------------+--------+--------+-------+------------
  1 |   8160 |        1 |     28 | 69099597 |      0 |        0 | (0,1)  |           1 |       2304 |     24 |        |       | \x01000000
  2 |   8128 |        1 |     28 | 69099609 |      0 |        0 | (0,2)  |           1 |       2304 |     24 |        |       | \x02000000
  3 |   8096 |        1 |     28 | 69099627 |      0 |        0 | (0,3)  |           1 |       2304 |     24 |        |       | \x03000000
  4 |   8064 |        1 |     28 | 69099655 |      0 |        0 | (0,4)  |           1 |       2304 |     24 |        |       | \x04000000
  5 |     13 |        2 |      0 |          |        |          |        |             |            |        |        |       |
  6 |   8032 |        1 |     28 | 69099778 |      0 |        0 | (0,6)  |           1 |       2304 |     24 |        |       | \x06000000
  7 |   8000 |        1 |     28 | 69099778 |      0 |        0 | (0,7)  |           1 |       2304 |     24 |        |       | \x07000000
  8 |   7968 |        1 |     28 | 69099778 |      0 |        0 | (0,8)  |           1 |       2304 |     24 |        |       | \x08000000
  9 |   7936 |        1 |     28 | 69099778 |      0 |        0 | (0,9)  |           1 |       2304 |     24 |        |       | \x09000000
 10 |   7904 |        1 |     28 | 69099778 |      0 |        0 | (0,10) |           1 |       2304 |     24 |        |       | \x0a000000
 11 |      0 |        0 |      0 |          |        |          |        |             |            |        |        |       |
 12 |      0 |        0 |      0 |          |        |          |        |             |            |        |        |       |
 13 |   7872 |        1 |     28 | 69103962 |      0 |        0 | (0,13) |       32769 |      10496 |     24 |        |       | \x05000000
(13 rows)

After vacuuming the table, slots 11 and 12 are now free to be reused, while slot 5 has become a redirect pointer to tuple 13 (note its lp_flags value of 2).

So let’s insert another row, with the value of 11 and see what happens:

percona=# INSERT INTO percona values (11);
INSERT 0 1

Let’s examine the heap once more:

percona=# SELECT * FROM heap_page_items(get_raw_page('percona',0));

 lp | lp_off | lp_flags | lp_len |  t_xmin  | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid |   t_data
----+--------+----------+--------+----------+--------+----------+--------+-------------+------------+--------+--------+-------+------------
  1 |   8160 |        1 |     28 | 69099597 |      0 |        0 | (0,1)  |           1 |       2304 |     24 |        |       | \x01000000
  2 |   8128 |        1 |     28 | 69099609 |      0 |        0 | (0,2)  |           1 |       2304 |     24 |        |       | \x02000000
  3 |   8096 |        1 |     28 | 69099627 |      0 |        0 | (0,3)  |           1 |       2304 |     24 |        |       | \x03000000
  4 |   8064 |        1 |     28 | 69099655 |      0 |        0 | (0,4)  |           1 |       2304 |     24 |        |       | \x04000000
  5 |     13 |        2 |      0 |          |        |          |        |             |            |        |        |       |
  6 |   8032 |        1 |     28 | 69099778 |      0 |        0 | (0,6)  |           1 |       2304 |     24 |        |       | \x06000000
  7 |   8000 |        1 |     28 | 69099778 |      0 |        0 | (0,7)  |           1 |       2304 |     24 |        |       | \x07000000
  8 |   7968 |        1 |     28 | 69099778 |      0 |        0 | (0,8)  |           1 |       2304 |     24 |        |       | \x08000000
  9 |   7936 |        1 |     28 | 69099778 |      0 |        0 | (0,9)  |           1 |       2304 |     24 |        |       | \x09000000
 10 |   7904 |        1 |     28 | 69099778 |      0 |        0 | (0,10) |           1 |       2304 |     24 |        |       | \x0a000000
 11 |   7840 |        1 |     28 | 69750201 |      0 |        0 | (0,11) |           1 |       2048 |     24 |        |       | \x0b000000
 12 |      0 |        0 |      0 |          |        |          |        |             |            |        |        |       |
 13 |   7872 |        1 |     28 | 69103962 |      0 |        0 | (0,13) |       32769 |      10496 |     24 |        |       | \x05000000
(13 rows)

Our new tuple (inserted by transaction 69750201) reused line pointer 11, and its t_ctid (0,11) now points to itself.

As you can see, the heap did not grow to accommodate the new row. It reused a slot that was freed when we vacuumed the table and removed the dead rows (rows no longer visible to any transaction).
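The reuse behavior can be sketched with a toy model (purely illustrative, this is not PostgreSQL's actual implementation): vacuum turns dead tuples' slots into free ones, and a later insert fills the first free slot instead of growing the page.

```python
# Toy model of heap slot reuse (illustrative only, not PostgreSQL internals).

def vacuum(page):
    """Vacuum: dead tuples' slots become free (None)."""
    return [None if t == "dead" else t for t in page]

def insert(page, value):
    """Reuse the first free slot; only grow the page if none is free."""
    for i, t in enumerate(page):
        if t is None:
            page[i] = value
            return i
    page.append(value)       # no free slot: the page grows
    return len(page) - 1

page = vacuum([1, 2, 3, 4, "dead", 6, 7, 8, 9, 10, "dead", "dead", 5])
slot = insert(page, 11)
print(slot, len(page))       # a freed slot was reused; still 13 slots
```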

And there you have it. A step-by-step illustration of how bloat occurs in PostgreSQL!

Aug
04
2022
--

Introducing Performance Improvement of Window Functions in PostgreSQL 15

When working with databases, there are always some projects oriented to performing analytics and reporting tasks over the information stored in the database. Usually, these tasks leverage window functions to do calculations “across a set of table rows that are somehow related to the current row,” as is described in the documentation.

There are several built-in window functions available in PostgreSQL. In the latest release, PostgreSQL 15, performance improvements were added for the rank(), row_number(), and count() functions. First, let's review what these functions can do.

The window functions

As mentioned above, window functions let us perform calculations on a set of table rows related to the current one. The “set of table rows” is usually identified as a “partition” defined by one or more columns. The documentation describes the named functions as follows:

rank () → bigint

Returns the rank of the current row, with gaps; that is, the row_number of the first row in its peer group.

row_number () → bigint

Returns the current row number within its partition, counting from 1.

count ( * ) → bigint

Computes the number of input rows.
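Before moving to SQL, the semantics of these three functions over a single ordered partition can be sketched in plain Python (an illustrative model, not how PostgreSQL evaluates them):

```python
# Semantics of row_number(), rank(), and count(*) over one ordered
# partition (illustrative sketch only).

def row_numbers(values):
    """row_number(): positions 1..N within the partition."""
    return list(range(1, len(values) + 1))

def ranks(values):
    """rank(): ties share a rank, and the next rank skips ('with gaps')."""
    out = []
    for i, v in enumerate(values):
        if i > 0 and v == values[i - 1]:
            out.append(out[-1])      # same peer group: same rank
        else:
            out.append(i + 1)        # row_number of the first peer
    return out

salaries = [7000, 7000, 4000, 2000]  # one partition, sorted DESC
print(row_numbers(salaries))         # → [1, 2, 3, 4]
print(ranks(salaries))               # → [1, 1, 3, 4]
print(len(salaries))                 # count(*) → 4
```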

Consider the following table and data:

                                          Table "public.employee"
    Column      |           Type           | Collation | Nullable |                 Default
-----------------+--------------------------+-----------+----------+------------------------------------------
emp_id          | integer                  |           | not null | nextval('employee_emp_id_seq'::regclass)
emp_name        | character varying(20)    |           | not null |
emp_country     | character varying(35)    |           | not null |
emp_salary      | integer                  |           | not null |
date_of_joining | timestamp with time zone |           | not null | now()
Indexes:
    "employee_pkey" PRIMARY KEY, btree (emp_id)

demo=# SELECT * FROM employee ;
emp_id  |   emp_name   | emp_country | emp_salary |    date_of_joining
--------+--------------+-------------+------------+------------------------
      1 | KELIO        | Japon       |       2000 | 2021-10-26 00:00:00+00
      2 | JEAN-VINCENT | Canada      |       6500 | 2021-01-22 00:00:00+00
      3 | JUNO         | Japon       |       4000 | 2021-02-27 00:00:00+00
      4 | GUY-EMMANUEL | Salvador    |       2000 | 2020-07-27 00:00:00+00
      5 | WALI         | Japon       |       7000 | 2021-01-31 00:00:00+00
      6 | HENRI-PAUL   | Canada      |       4500 | 2021-08-19 00:00:00+00
      7 | MUHAMED      | France      |       5000 | 2021-07-20 00:00:00+00
      8 | MUHITTIN     | Madagascar  |       2500 | 2021-12-31 00:00:00+00
      9 | DEVLIN       | Madagascar  |       7000 | 2022-04-03 00:00:00+00
     10 | JOSUE        | Salvador    |       5500 | 2020-09-25 00:00:00+00
(10 rows)

rank()

We can use the rank() function to get the rank of employees (id) per country based on their salary. Look at the next example.

demo=# SELECT 
         emp_id, 
         emp_salary, 
         emp_country, 
         rank() OVER (PARTITION BY emp_country ORDER BY emp_salary DESC) 
       FROM employee;

emp_id  | emp_salary | emp_country | rank
--------+------------+-------------+------
      2 |       6500 | Canada      |    1
      6 |       4500 | Canada      |    2
      7 |       5000 | France      |    1
      5 |       7000 | Japon       |    1
      3 |       4000 | Japon       |    2
      1 |       2000 | Japon       |    3
      9 |       7000 | Madagascar  |    1
      8 |       2500 | Madagascar  |    2
     10 |       5500 | Salvador    |    1
      4 |       2000 | Salvador    |    2
(10 rows)
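The PARTITION BY / ORDER BY logic behind this result can be mimicked in Python (a sketch over the same sample data, not PostgreSQL's execution):

```python
from itertools import groupby

# Mimic rank() OVER (PARTITION BY emp_country ORDER BY emp_salary DESC)
# for the sample data above (illustrative sketch only).
rows = [
    (2, 6500, "Canada"), (6, 4500, "Canada"),
    (7, 5000, "France"),
    (5, 7000, "Japon"), (3, 4000, "Japon"), (1, 2000, "Japon"),
    (9, 7000, "Madagascar"), (8, 2500, "Madagascar"),
    (10, 5500, "Salvador"), (4, 2000, "Salvador"),
]

result = {}
rows.sort(key=lambda r: (r[2], -r[1]))      # partition key, then ORDER BY
for country, group in groupby(rows, key=lambda r: r[2]):
    prev_salary = None
    rank = 0
    for pos, (emp_id, salary, _) in enumerate(group, start=1):
        if salary != prev_salary:           # new peer group: rank jumps to pos
            rank, prev_salary = pos, salary
        result[emp_id] = rank

print(result[5], result[3], result[1])      # Japon → 1 2 3
```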

row_number()

In the next example, the row_number() function gets a sorted list of employees’ names per country and their relative numeric position.

demo=# SELECT 
         emp_id, 
         emp_name, 
         emp_country, 
         row_number() OVER (PARTITION BY emp_country ORDER BY emp_name) 
       FROM employee;

emp_id  |   emp_name   | emp_country | row_number
--------+--------------+-------------+------------
      6 | HENRI-PAUL   | Canada      |          1
      2 | JEAN-VINCENT | Canada      |          2
      7 | MUHAMED      | France      |          1
      3 | JUNO         | Japon       |          1
      1 | KELIO        | Japon       |          2
      5 | WALI         | Japon       |          3
      9 | DEVLIN       | Madagascar  |          1
      8 | MUHITTIN     | Madagascar  |          2
      4 | GUY-EMMANUEL | Salvador    |          1
     10 | JOSUE        | Salvador    |          2
(10 rows)
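row_number() is simply an enumeration within each partition, which we can reproduce over the same names (again just an illustrative sketch):

```python
from itertools import groupby

# Mimic row_number() OVER (PARTITION BY emp_country ORDER BY emp_name)
# for the sample data above (sketch only).
rows = [
    ("HENRI-PAUL", "Canada"), ("JEAN-VINCENT", "Canada"),
    ("MUHAMED", "France"),
    ("JUNO", "Japon"), ("KELIO", "Japon"), ("WALI", "Japon"),
    ("DEVLIN", "Madagascar"), ("MUHITTIN", "Madagascar"),
    ("GUY-EMMANUEL", "Salvador"), ("JOSUE", "Salvador"),
]

rows.sort(key=lambda r: (r[1], r[0]))       # partition, then ORDER BY name
numbered = {
    name: n
    for _, grp in groupby(rows, key=lambda r: r[1])
    for n, (name, _) in enumerate(grp, start=1)
}
print(numbered["WALI"])   # third name alphabetically within Japon → 3
```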

count()

The count() function is an old friend of almost everyone with access to a SQL engine. It is part of the aggregate functions list, but it acts as a window function when the call is followed by an OVER clause. So we can use it to find how many employees share the same salary as a given employee. 

demo=# SELECT 
         emp_name, 
         emp_salary, 
         emp_country, 
         count(*) OVER (PARTITION BY emp_salary) 
       FROM employee;

  emp_name   | emp_salary | emp_country | count
--------------+------------+-------------+-------
KELIO        |       2000 | Japon       |     2
GUY-EMMANUEL |       2000 | Salvador    |     2
MUHITTIN     |       2500 | Madagascar  |     1
JUNO         |       4000 | Japon       |     1
HENRI-PAUL   |       4500 | Canada      |     1
MUHAMED      |       5000 | France      |     1
JOSUE        |       5500 | Salvador    |     1
JEAN-VINCENT |       6500 | Canada      |     1
WALI         |       7000 | Japon       |     2
DEVLIN       |       7000 | Madagascar  |     2
(10 rows)
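Because the partition here is the salary itself, each row's count is just the size of its salary group, which a quick sketch confirms (illustrative only):

```python
from collections import Counter

# Mimic count(*) OVER (PARTITION BY emp_salary): every row gets the size
# of its salary peer group (sketch only, same salaries as above).
salaries = [2000, 6500, 4000, 2000, 7000, 4500, 5000, 2500, 7000, 5500]
counts = Counter(salaries)
per_row = [counts[s] for s in salaries]
print(per_row[0], per_row[4])   # 2000 appears twice, 7000 appears twice
```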

Window functions in PostgreSQL 15

Now that we have refreshed what the window functions are, let’s consider what the PostgreSQL 15 release notes say:

Improve the performance of window functions that use row_number(), rank(), and count() (David Rowley)

Accordingly, if we are users of the window functions and move from a previous version to version 15, we should see an improvement in the performance of our workload. Let’s test it.

Laboratory case

To test the performance of the window functions, I created three instances of PostgreSQL, (a) version 13, (b) version 14, and (c) version 15. 

I used the same public.employee table from the previous examples, loaded with 10K rows. Then I executed the same window function queries we saw before, capturing the output of an EXPLAIN (ANALYZE) command, which executes the query and reports the timing of each plan node, including the WindowAgg node that evaluates the window function.

The query plan was the same in each version of PostgreSQL; only the timings differed.

rank() 

PG15

                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
WindowAgg  (cost=1288.49..1488.49 rows=10000 width=25) (actual time=11.946..18.732 rows=10000 loops=1)
  ->  Sort  (cost=1288.49..1313.49 rows=10000 width=17) (actual time=11.928..12.803 rows=10000 loops=1)
        Sort Key: emp_country, emp_salary DESC
        Sort Method: quicksort  Memory: 980kB
        ->  Seq Scan on employee  (cost=0.00..180.00 rows=10000 width=17) (actual time=0.008..2.402 rows=10000 loops=1)
Planning Time: 0.143 ms
Execution Time: 19.268 ms
(7 rows)

PG14

                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
WindowAgg  (cost=844.39..1044.39 rows=10000 width=25) (actual time=12.585..20.921 rows=10000 loops=1)
  ->  Sort  (cost=844.39..869.39 rows=10000 width=17) (actual time=12.560..13.545 rows=10000 loops=1)
        Sort Key: emp_country, emp_salary DESC
        Sort Method: quicksort  Memory: 1020kB
        ->  Seq Scan on employee  (cost=0.00..180.00 rows=10000 width=17) (actual time=0.011..1.741 rows=10000 loops=1)
Planning Time: 0.449 ms
Execution Time: 21.407 ms
(7 rows)

PG13

                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
WindowAgg  (cost=844.39..1044.39 rows=10000 width=25) (actual time=18.949..28.619 rows=10000 loops=1)
  ->  Sort  (cost=844.39..869.39 rows=10000 width=17) (actual time=18.896..19.998 rows=10000 loops=1)
        Sort Key: emp_country, emp_salary DESC
        Sort Method: quicksort  Memory: 1020kB
        ->  Seq Scan on employee  (cost=0.00..180.00 rows=10000 width=17) (actual time=0.011..1.228 rows=10000 loops=1)
Planning Time: 0.460 ms
Execution Time: 29.111 ms
(7 rows)

From the WindowAgg node we can easily see that the total time was smaller in PG15 than in the other two versions. The performance improvement is clear here.
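For a rough number, we can derive the relative improvement from the Execution Time values reported above:

```python
# Rough speedup computed from the Execution Time values shown above:
# 19.268 ms on PG15, 21.407 ms on PG14, 29.111 ms on PG13.
pg15, pg14, pg13 = 19.268, 21.407, 29.111

vs_pg13 = (pg13 - pg15) / pg13 * 100
vs_pg14 = (pg14 - pg15) / pg14 * 100
print(f"PG15 vs PG13: {vs_pg13:.1f}% faster")   # ≈ 33.8%
print(f"PG15 vs PG14: {vs_pg14:.1f}% faster")   # ≈ 10.0%
```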

To verify this is consistent, I got the Total Time for the WindowAgg node from 500 executions and plotted the next graph.

[Graph: WindowAgg total time for rank() over 500 executions, PG13 vs. PG14 vs. PG15]

We can see that the timings from PG15 are better than those from the other versions. I also added trend lines: PG13 performed the worst, and while PG14 trended better, PG15 was the best.

I did the same exercise for the row_number() and count() functions. 

row_number() 

[Graph: WindowAgg total time for row_number() over 500 executions, PG13 vs. PG14 vs. PG15]

count()

I also got results from 500 executions for these last two functions. In both cases, the difference was smaller than for rank(), but PG15 still showed better results, as seen in the trend lines.

The performance improvements for the rank(), row_number(), and count() window functions introduced in PostgreSQL 15 will let analytics and reporting projects process their data faster than in previous versions. As always, every environment has its own characteristics and challenges, but as these quick tests show, the new version delivers the improvement right out of the box.

Aug
03
2022
--

OS Platform End of Life (EOL) Announcement for Debian Linux 9

After having been in long-term support (LTS) for the past two years, Debian 9 (“Stretch”) reached End of Life (EOL) on 2022-06-30.

We generally align our OS platform end-of-life dates with the upstream platform vendor. We publish the platform end-of-life dates in advance on our website on the Percona Release Lifecycle Overview page.

When an OS platform has reached End of Life, we stop providing new packages, binary builds, hotfixes, or bug fixes for Percona software. Still, we will continue providing downloads of the existing packages.

Per our policies, we are announcing that this EOL has gone into effect for Percona software on Debian Linux 9 on 2022-07-01.

Each operating system vendor has different supported migration or upgrade paths to their next major release. Please contact us if you need assistance migrating your database to a different supported OS platform – we will be happy to assist you!

Aug
03
2022
--

Managing MySQL Configurations with the PXC Kubernetes Operator V1.10.0 Part 3: Conclusion

In part one and part two of this series, we introduced the different ways to manage MySQL configurations and their precedence when using the Percona XtraDB Cluster (PXC) object and ConfigMap. In this post, we will see the precedence when secrets are used for MySQL configurations in Percona Operator for MySQL based on Percona XtraDB Cluster.

CASE-4: Secret with name cluster1-pxc and ConfigMap with name cluster1-pxc but without configuration in PXC object

When the MySQL configuration is present in both the ConfigMap and the secret, but not in the PXC object, the state is as follows:

# kubectl get pxc cluster1 -ojson | jq .spec.pxc.configuration
 
# kubectl get secrets cluster1-pxc -ojson | jq -r '.data."secret.cnf"' |base64 -d
[mysqld]
wsrep_debug=CLIENT
wsrep_provider_options="gcache.size=768M; gcache.recover=yes"
# kubectl get cm cluster1-pxc -ojson | jq .data
{
 "init.cnf": "[mysqld]\nwsrep_debug=CLIENT\nwsrep_provider_options=\"gcache.size=64M; gcache.recover=yes\"\n"
}

Let’s query the database to see which value took effect:

# kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- bash -il
 
mysql> SHOW VARIABLES LIKE 'wsrep_provider_options'\G
*************************** 1. row ***************************
Variable_name: wsrep_provider_options
 </snip>       gcache.size = 768M;  … </snip>

As can be seen, the secret takes precedence over the ConfigMap.

CASE-5: Configuration present in PXC object, ConfigMap cluster1-pxc, secret cluster1-pxc

Current State:

# kubectl get pxc cluster1 -ojson | jq .spec.pxc.configuration
"[mysqld]\nwsrep_debug=CLIENT\nwsrep_provider_options=\"gcache.size=128M; gcache.recover=yes\"\n"
# kubectl get cm cluster1-pxc -ojson | jq .data
{
 "init.cnf": "[mysqld]\nwsrep_debug=CLIENT\nwsrep_provider_options=\"gcache.size=128M; gcache.recover=yes\"\n"
}

Let’s now create the secret cluster1-pxc and see the effect:

# cat secret.cnf
[mysqld]
wsrep_debug=CLIENT
wsrep_provider_options="gcache.size=768M; gcache.recover=yes"
# kubectl create secret generic cluster1-pxc --from-file secret.cnf
secret/cluster1-pxc created

As can be seen below, the ConfigMap and the PXC object did not change.

# for i in `seq 1 100`; do kubectl get pxc cluster1 -ojson | jq .spec.pxc.configuration ; sleep 2 ; done
"[mysqld]\nwsrep_debug=CLIENT\nwsrep_provider_options=\"gcache.size=128M; gcache.recover=yes\"\n"
"[mysqld]\nwsrep_debug=CLIENT\nwsrep_provider_options=\"gcache.size=128M; gcache.recover=yes\"\n"
 
# for i in `seq 1 100`; do kubectl get cm cluster1-pxc -ojson | jq .data; sleep 2 ;done
{
 "init.cnf": "[mysqld]\nwsrep_debug=CLIENT\nwsrep_provider_options=\"gcache.size=128M; gcache.recover=yes\"\n"
}
{
 "init.cnf": "[mysqld]\nwsrep_debug=CLIENT\nwsrep_provider_options=\"gcache.size=128M; gcache.recover=yes\"\n"
}
{
 "init.cnf": "[mysqld]\nwsrep_debug=CLIENT\nwsrep_provider_options=\"gcache.size=128M; gcache.recover=yes\"\n"
}

However, the database picked up the configuration from the secret:

mysql> SHOW VARIABLES LIKE 'wsrep_provider_options'\G
*************************** 1. row ***************************
Variable_name: wsrep_provider_options
 </snip>   gcache.size = 768M; …  </snip>

The secret takes precedence over both the PXC object and the ConfigMap.

Conclusion

  1. MySQL configurations supplied via the secret cluster1-pxc take precedence over the ConfigMap cluster1-pxc and the PXC object.
  2. If the secret cluster1-pxc is not present, the MySQL configuration in the PXC object takes precedence over the ConfigMap cluster1-pxc.
  3. The operator takes the configuration from the PXC object and overwrites the configuration in the ConfigMap cluster1-pxc during its reconciliation loop.
  4. If the configuration is present only in the ConfigMap or the secret, it is not written back to the PXC object in the reconciliation loop.
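Our reading of these rules (not the operator's actual code) can be summarized in a small resolution function:

```python
# Sketch of the precedence rules summarized above (our reading of the
# operator's behavior, not its implementation).

def effective_config(secret=None, pxc_spec=None, configmap=None):
    """Return the MySQL configuration the pods end up using."""
    if secret is not None:        # secret cluster1-pxc wins outright
        return secret
    if pxc_spec is not None:      # then spec.pxc.configuration on the PXC object
        return pxc_spec
    return configmap              # ConfigMap cluster1-pxc is the fallback

print(effective_config(secret="gcache.size=768M",
                       pxc_spec="gcache.size=128M",
                       configmap="gcache.size=64M"))   # → gcache.size=768M
print(effective_config(pxc_spec="gcache.size=128M",
                       configmap="gcache.size=64M"))   # → gcache.size=128M
```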

We would love to hear how you are managing MySQL configurations, so feel free to comment!

Aug
02
2022
--

Tales From the MongoDB Field: When stepDown() Goes Wrong

We get to see and troubleshoot a lot of different problems at Percona. Here’s the latest one that got me scratching my head for a while recently.

The scenario

We have a sharded cluster environment running MongoDB 4.0 that needs to be upgraded to MongoDB 4.2. Easy, right? The only particular thing about this environment is that MongoDB runs in a custom Docker environment on AWS.

We started with the usual approach of disabling the balancer and upgrading the config server’s replica set first. In this case, the config server’s replica set had three members running MongoDB 4.0. Instead of upgrading them in place, we chose to add three new members running MongoDB 4.2. So now we have a total of six nodes. The next step was to stepDown the primary to one of the new 4.2 nodes and finally decommission the old servers.

Testing the migration plan

We started by running through the plan in a non-production environment. At first, everything was ok; we got the config server's replica set to six members and set the priorities so that one of the new 4.2 servers was the only candidate to become primary after the current one stepped down. So we went ahead and ran the rs.stepDown() command as usual, and here is when things started to go wrong. Clients suddenly started reporting the following message:

2022-07-15T14:49:59.629+0000 W  NETWORK  [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set testrs-cfg

My first thought was something must be wrong at the network layer, but checking connectivity between all hosts revealed no problems. Next, we looked at Docker, but everything seemed ok there as well.

Digging deeper

We connected locally to the server that was configured to become the primary, and there we saw a strange situation. The server was not able to complete the step-up process to become the primary, and was halfway through it:

testrs-cfg:PRIMARY> db.isMaster()
{
	"hosts" : [
		"host1:27019",
		"host2:27019"
	],
	"passives" : [
		"host3:27019",
		"host4:27019",
		"host5:27019",
		"host6:27019"
	],
	"setName" : "testrs-cfg",
	"setVersion" : 77,
	"ismaster" : false,
	"secondary" : true,
	"primary" : "host6:27019",
	"me" : "host6:27019",
…

Looking at the last lines, we can see that host6 is supposed to be the primary, but it had not fully promoted itself. We also checked db.currentOp(): all sessions seemed to be waiting for a lock related to the replica set state transition, and the following operation appeared stuck creating an index on the config.chunks collection:

{
			"type" : "op",
			"host" : "host6:27019",
			"desc" : "rsSync-0",
			"active" : true,
			"currentOpTime" : "2022-07-18T12:07:22.381+0000",
			"effectiveUsers" : [
				{
					"user" : "__system",
					"db" : "local"
				}
			],
			"opid" : 2800,
			"secs_running" : NumberLong(5),
			"microsecs_running" : NumberLong(5494726),
			"op" : "command",
			"ns" : "config.$cmd",
			"command" : {
				"createIndexes" : "chunks",
				"indexes" : [
					{
						"name" : "ns_1_min_1",
						"key" : {
							"ns" : 1,
							"min" : 1
						},
						"unique" : true
					}
				],
				"$db" : "config"
			},
			"numYields" : 0,
			"waitingForLatch" : {
				"timestamp" : ISODate("2022-07-18T12:07:16.990Z"),
				"captureName" : "ReplicationCoordinatorImpl::_mutex"
			},
			"locks" : {
				"ReplicationStateTransition" : "W"
			},
			"waitingForLock" : false,

What was strange is that this collection contained only a few documents, so the operation should have been really fast (and the index already existed).

At this point, we suspected we might be hitting some bug and started looking at the many problems reported about deadlocks in the stepDown/stepUp process. Unfortunately, we came up empty-handed again.

The solution

Next, we checked the configuration of the config server replica set itself and noticed something unusual about it on the “settings” section of rs.conf():

"settings" : {
		"chainingAllowed" : true,
		"heartbeatIntervalMillis" : 2000,
		"heartbeatTimeoutSecs" : 10,
		"electionTimeoutMillis" : 10000,
		"catchUpTimeoutMillis" : -1,
		"catchUpTakeoverDelayMillis" : 30000,
		"getLastErrorModes" : {

		},
		"getLastErrorDefaults" : {
			"w" : "majority",
			"j" : true,
			"wtimeout" : 0
		},
		"replicaSetId" : ObjectId("62cd9b0d1bb173ee7be7f2ef")
	}
}

The getLastErrorDefaults setting is usually omitted when running rs.initiate(), as the write concern is typically controlled on a per-session basis. In this case, the config server replica set had been initialized with a getLastErrorDefaults of { w: "majority", j: true } instead of the default value of { w: 1 }.
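A quick way to spot such a non-default setting is to inspect the settings section of the rs.conf() output. Here is a sketch using a hypothetical helper over the config represented as a Python dict:

```python
# Hypothetical helper: flag a replica set config (as returned by rs.conf(),
# here modeled as a dict) that carries a non-default getLastErrorDefaults.

DEFAULT = {"w": 1, "wtimeout": 0}

def has_nondefault_write_concern(conf):
    gle = conf.get("settings", {}).get("getLastErrorDefaults", DEFAULT)
    return {k: gle.get(k) for k in DEFAULT} != DEFAULT

conf = {"settings": {"getLastErrorDefaults":
                     {"w": "majority", "j": True, "wtimeout": 0}}}
print(has_nondefault_write_concern(conf))   # → True
```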

We tried resetting the replica set configuration to the default value as follows:

cfg=rs.conf()
cfg.settings.getLastErrorDefaults= { "w" : 1, "wtimeout" : 0 }
rs.reconfig(cfg)

After doing this, the rs.stepDown() worked flawlessly and we were able to have our 4.2 primary in place.

Conclusion

The moral of the story is that we should check the default values for write concern at the replica set level when promoting a MongoDB 4.2 primary for the first time.

The non-default write concern hadn't been a problem for elections before, so something must have changed in MongoDB 4.2 to trigger this behavior. The issue was specific to the config servers; the same did not happen with regular shard replica sets.

Interestingly, starting in MongoDB 5.0 we can no longer specify a default write concern using the settings.getLastErrorDefaults for a replica set.

This also proves the value of having a test setup that mimics the real production environment, to be able to test and catch these issues before they bite us in production.

Aug
01
2022
--

Talking Drupal #358 – A Project Browser Update

Today we are talking about The Project Browser with Leslie Glynn & Tim Plunkett.

www.talkingDrupal.com/358

Topics

  • What is Project Browser
  • How are you involved?
  • What is the Drupal Acceleration Team and how is it involved with Project Browser
  • Will it be in Drupal 10 or 11?
  • How are you organizing module data on d.o?
  • How does PB showcase the community working on modules?
  • How does PB work with other initiatives like Automatic Updates
  • How has management changed over the project?
  • Security?
  • How is Chris Wells as an initiative lead?
  • What makes a good project lead?
  • Anything else?

Resources

Guests

Tim Plunkett – @timplunkett Leslie Glynn – @leslieglynn

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan John Picozzi – www.epam.com @johnpicozzi Ryan Price – ryanpricemedia.com @liberatr

MOTW

Search API Algolia This module provides integration with the Algolia service, through Drupal’s Search API. This module is intended to be used by developers, as it does not currently provide any implementation of an actual search interface. Only indexing is currently supported. As a result, enabling that module will not have any visible effect on your application. Search functionality may be implemented using the Algolia Javascript API.
