Jan
31
2022
--

Talking Drupal #332 – Permissions Management

Today we are talking about Permission Management with Benji Fisher.

www.talkingDrupal.com/332

Topics

  • John – Squid Game – MidCamp hat
  • Abby – Turkish Market and lentil soup
  • Benji – Fruition dedicating more time for open source
  • Nic – Config for Drupal 9.3
  • Overview of Permissions
  • Familiar examples
  • Biggest problem
    • UX nightmare
    • Phantom permissions
  • 9.3 significant improvements
  • Passion project
  • Process to get this in
  • Reviewer role
  • Next phases
  • How to help
  • Drupal puzzles

Resources

Guests

Benji Fisher – @benji17fisher

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Abby Bowman – www.linkedin.com/in/arbowman @abowmanr

MOTW

Flood Control – Provides an interface for hidden flood control variables (e.g., login attempt limiters) and makes it possible for site administrators to remove IP addresses and user IDs from the flood table.

Jan
31
2022
--

How PostgreSQL Pipeline Mode Works

I’d like to introduce you to a very cool feature introduced in PostgreSQL 14: Pipeline Mode.

So just what exactly is Pipeline Mode? Pipeline Mode allows applications to send a query without having to read the result of the previously sent query. In short, it provides a significant performance boost by allowing multiple queries and results to be sent and received in a single network transaction.

As with all good ideas, there is precedent: one can emulate such behavior with a little application-code wizardry. Also known as “Batch Mode”, running asynchronous communications between a client and its server has been around for some time, and a number of existing solutions batch multiple queries in an asynchronous fashion. For example, PgJDBC has supported batch mode for many years using the standard JDBC batch interface. And of course, there’s the old reliable standby, dblink.

What distinguishes Pipeline Mode is that it provides an out-of-the-box solution that greatly reduces the complexity of the application code handling the client-server session.

Traditional BATCH MODE Operations

[Figure: traditional batch-mode client/server operations]

Pipeline Mode

[Figure: Pipeline Mode in PostgreSQL]

Although introduced in PostgreSQL 14, pipeline mode works against any currently supported version of Postgres, as the enhancement lives in libpq, which is used by the client, and not in the server itself!

And now for the bad news, of a sort: leveraging Pipeline Mode requires using C, or an equivalent programming language capable of interfacing directly with libpq. Unfortunately, there’s not much out there yet in the way of ODBC development offering the requisite hooks to take advantage of this enhanced feature. Therefore, one is required to design and program the client-application session in such a language.

HINT: This is a great way for somebody to make a name for themselves by creating a convenient interface to the libpq Pipeline Mode.

How It Works

Now that I’ve issued the requisite caveat, let’s talk about how this mechanism works.

Keeping things simple

  1. The client first makes a connection to the postgres server.
  2. The client must then switch the connection to pipeline mode.
  3. Once in pipeline mode, SQL statements are sent to the server.
  4. Upon arrival at the server, the statements are immediately executed and results are sent back to the client, i.e. client/server acknowledgments are not required.
  5. Because each SQL statement is sent sequentially, the application logic can either use a state machine or take advantage of what is obviously a FIFO queue in order to process the results.
  6. Once all asynchronous statements have been executed and returned, the client application explicitly terminates the pipeline mode and returns the connection to its default setting (see the sketch after this list).
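
To make steps 1 through 6 concrete, here is a minimal C sketch against the libpq pipeline API (a sketch, not production code: the connection string and the queries are placeholder assumptions, and it needs PostgreSQL 14+ client headers):

/* pipeline_demo.c - build with: cc pipeline_demo.c -o pipeline_demo -lpq */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* 1. Connect to the server (the conninfo string is an assumption). */
    PGconn *conn = PQconnectdb("dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* 2. Switch the connection into pipeline mode. */
    if (PQenterPipelineMode(conn) != 1) {
        fprintf(stderr, "enter pipeline mode: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* 3. Queue statements without waiting for results; pipeline mode
     * requires the extended query protocol, hence PQsendQueryParams. */
    const char *queries[] = {"SELECT 1", "SELECT 2", "SELECT 3"};
    for (int i = 0; i < 3; i++)
        PQsendQueryParams(conn, queries[i], 0, NULL, NULL, NULL, NULL, 0);

    /* Mark a synchronization point and flush the queue to the server. */
    PQpipelineSync(conn);

    /* 4-5. Results come back in FIFO order: one PGresult per query,
     * each followed by a NULL separator. */
    for (int i = 0; i < 3; i++) {
        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL) {
            if (PQresultStatus(res) == PGRES_TUPLES_OK)
                printf("query %d -> %s\n", i + 1, PQgetvalue(res, 0, 0));
            else
                fprintf(stderr, "query %d: %s", i + 1, PQresultErrorMessage(res));
            PQclear(res);
        }
    }

    /* The sync point itself arrives as a PGRES_PIPELINE_SYNC result. */
    PQclear(PQgetResult(conn));

    /* 6. Terminate pipeline mode, returning the connection to its default state. */
    PQexitPipelineMode(conn);
    PQfinish(conn);
    return 0;
}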

Since each SQL statement is essentially independent, it is up to the client logic to make sense of the results. Sending SQL statements and pulling out results that have no relation to each other is one thing, but life gets more complicated when working with logical outcomes that have some level of interdependence.

It is possible to bundle asynchronous SQL statements as a single transaction. But as with all transactions, failure of any one of these asynchronously sent SQL statements will result in a rollback for all the SQL statements.

Of course, the API does provide error handling for pipeline failures. In the case of a FATAL condition, when the pipeline itself fails, the client connection is informed of the error and the remaining queued operations are flagged as lost. Thereafter, normal processing resumes as if the pipeline had been explicitly closed by the client, and the client connection remains active.

Getting Into The, UGH, Details …

For the C programmer at heart, here are a couple of references that I can share with you:

Caveat

  • Pipeline Mode is designed expressly for asynchronous operation; synchronous mode is not possible (and would defeat the purpose of pipelining anyway).
  • One can only send a single SQL command at a time i.e. multiple SQL commands are disallowed.
  • COPY is disallowed.
  • In the case of sending a transaction COMMIT: The client cannot assume the transaction is committed until it receives the corresponding result.
  • Leveraging Pipeline Mode requires programming in either C or a language that can access the libpq API.

Remember to check the PostgreSQL documentation, which has more to say on the subject.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

Jan
27
2022
--

Percona Distribution for MySQL Operator Based on Percona Server for MySQL – Alpha Release

Operators are a software framework that extends the Kubernetes API and enables application deployment and management through the control plane. For complex technologies such as databases, Operators play a crucial role by automating deployment and day-to-day operations. At Percona we have several production-ready and enterprise-grade Kubernetes Operators for databases.

Today we are glad to announce an alpha version of our new Operator for MySQL. In this blog post, we are going to answer some frequently asked questions.

Why the New Operator?

As mentioned above, our existing operator for MySQL is based on the Percona XtraDB Cluster (PXC). It is feature-rich and provides virtually-synchronous replication by utilizing Galera Write-Sets. Sync replication ensures data consistency and proved itself useful for critical applications, especially on Kubernetes.

But there are two things that we want to address:

  1. Our community and customers let us know that there are numerous use cases where asynchronous replication would be a more suitable solution for MySQL on Kubernetes.
  2. Support for Group Replication (GR), a native way to provide synchronous replication in MySQL without the need to use Galera.

We heard you! That is why our new Operator is going to run Percona Server for MySQL (PS) and provide both regular asynchronous (with semi-sync support) and virtually-synchronous replication based on GR.

What Is the Name of the New Operator?

We will have two Operators for MySQL and follow the same naming as we have for Distributions:

  • Percona Distribution for MySQL Operator – PXC (Existing Operator)
  • Percona Distribution for MySQL Operator – PS (Percona Server for MySQL)

Is It the Successor of the Existing Operator for MySQL?

Not in the short run. We want to provide our users with MySQL clusters on Kubernetes offering three replication capabilities:

  • Percona XtraDB Cluster
  • Group Replication
  • Regular asynchronous replication with semi-sync support

Will I Be Able to Switch From One Operator to Another?

We are going to provide instructions and tools for migration through replication or backup-and-restore. As the underlying implementations are totally different, there is no direct, automated switching path available.

Can I Use the New Operator Now?

Yes, but please remember that it is an alpha version and we do not recommend it for production workloads. 

Our Operator is licensed under Apache 2.0 and can be found in the percona-server-mysql-operator repository on GitHub.

To learn more about our operator please see the documentation.

Quick Deploy

Run these two commands to spin up a MySQL cluster with 3 nodes with asynchronous replication:

$ kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mysql-operator/main/deploy/bundle.yaml
$ kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mysql-operator/main/deploy/cr.yaml

In a couple of minutes, the cluster is going to be up and running. Verify:

$ kubectl get ps
NAME       MYSQL   ORCHESTRATOR   AGE
cluster1   ready   ready          6m16s

$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
cluster1-mysql-0                                    1/1     Running   0          6m27s
cluster1-mysql-1                                    1/1     Running   1          5m11s
cluster1-mysql-2                                    1/1     Running   1          3m36s
cluster1-orc-0                                      2/2     Running   0          6m27s
percona-server-for-mysql-operator-c8f8dbccb-q7lbr   1/1     Running   0          9m31s

Connect to the Cluster

First, you need to get the root user password, which was automatically generated by the Operator. By default, system users’ passwords are stored in the cluster1-secrets Secret resource:

$ kubectl get secrets cluster1-secrets -o yaml | grep root | awk '{print $2}' | base64 --decode

Start another container with a MySQL client in it:

$ kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- bash -il

Connect to a primary node of our MySQL cluster from this container:

$ mysql -h cluster1-mysql-primary -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 138
Server version: 8.0.25-15 Percona Server (GPL), Release 15, Revision a558ec2

Copyright (c) 2009-2021 Percona LLC and/or its affiliates
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

Consult the documentation to learn more about other operational capabilities and options.

What Is Currently Supported?

The following functionality is available in the Operator:

  • Deploy asynchronous and semi-sync replication MySQL clusters with Orchestrator on top
  • Expose clusters with regular Kubernetes Services
  • Monitor the cluster with Percona Monitoring and Management
  • Customize MySQL configuration

When Will It Be GA? What Is Going to Be Included?

Our goal is to release the GA version late in Q2 2022. We plan to include the following:

  • Support for both sync and async replication
  • Backups and restores, proxies integration
  • Certifications on various Kubernetes platforms and flavors
  • Deploy and manage MySQL Clusters with PMM DBaaS

Call for Action

Percona Distribution for MySQL Operator – PS just hatched and your feedback is highly appreciated. 

Open a bug or create a Pull Request for a chance to get awesome Percona Swag!

Jan
27
2022
--

Authenticating Your Clients to MongoDB on Kubernetes Using x509 Certificates

Managing database users and their passwords can be a hassle; sometimes passwords even end up hardcoded in various configuration files. Using certificates can help you avoid the toil of managing, rotating, and securing user passwords, so let’s see how to set up x509 certificate authentication with the Percona Server for MongoDB Operator and cert-manager.

cert-manager is our recommended way to manage TLS certificates on Kubernetes clusters. The operator is already integrated with it to generate certificates for TLS and cluster member authentication. We’re going to leverage cert-manager APIs to generate valid certificates for MongoDB clients.

There are rules to follow to have a valid certificate for user authentication:

  1. A single Certificate Authority (CA) MUST sign all certificates.
  2. The certificate’s subject MUST be unique.
  3. The certificate MUST NOT be expired.

For the complete requirements, check the MongoDB docs.

Creating Valid Certificates for Clients

Let’s check our current certificates:

$ kubectl get cert
NAME                      READY   SECRET                    AGE
cluster1-ssl              True    cluster1-ssl              17h
cluster1-ssl-internal     True    cluster1-ssl-internal     17h

The operator configures MongoDB nodes to use “cluster1-ssl-internal” as the certificate authority. We’re going to use it to sign the client certificates to conform to Rule 1.

First, we need to create an Issuer:

$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: cluster1-psmdb-x509-ca
spec:
 ca:
   secretName: cluster1-ssl-internal
EOF

Then, our certificate:

$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
 name: cluster1-psmdb-egegunes
spec:
 secretName: cluster1-psmdb-egegunes
 isCA: false
 commonName: egegunes
 subject:
   organizations:
     - percona
   organizationalUnits:
     - cloud
 usages:
   - digital signature
   - client auth
 issuerRef:
   name: cluster1-psmdb-x509-ca
   kind: Issuer
   group: cert-manager.io
EOF

The “usages” field is important; you shouldn’t touch its values. You can change the “subject” and “commonName” fields as you wish. Together they construct the Distinguished Name (DN), and the DN will be the username.

$ kubectl get secret cluster1-psmdb-egegunes -o yaml \
    | yq3 r - 'data."tls.crt"' \
    | base64 -d \
    | openssl x509 -subject -noout

subject=O = percona, OU = cloud, CN = egegunes

Let’s create the user:

rs0:PRIMARY> db.getSiblingDB("$external").runCommand(
 {
   createUser: "CN=egegunes,OU=cloud,O=percona",
   roles: [{ role: 'readWrite', db: 'test' }]
 }
)

{
       "ok" : 1,
       "$clusterTime" : {
               "clusterTime" : Timestamp(1643099623, 3),
               "signature" : {
                       "hash" : BinData(0,"EdPrmPJqfgRpMEZwGMeKNLdCe10="),
                       "keyId" : NumberLong("7056790236952526853")
               }
       },
       "operationTime" : Timestamp(1643099623, 3)
}

We’re creating the user in the “$external” database, which you need to use as your authentication source. Note that we’re reversing the order of the subject fields; this is important.
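
Before wiring this into an application, you can sanity-check the certificate from the mongo shell (a sketch; the file paths assume the secrets are mounted the same way as in the tester Pod shown below):

$ mongo --host cluster1-rs0.psmdb.svc.cluster.local --tls \
    --tlsCAFile /etc/mongodb-ssl/ca.crt \
    --tlsCertificateKeyFile /tmp/tls.pem \
    --authenticationDatabase '$external' \
    --authenticationMechanism MONGODB-X509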

Authenticating With the Certificate

I have created a simple Go application to show how you can use x509 certificates to authenticate. Here is a condensed, self-contained version:

package main

import (
	"context"
	"fmt"
	"log"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// ca.crt is mounted from secret/cluster1-ssl
	caFilePath := "/etc/mongodb-ssl/ca.crt"

	// tls.pem consists of tls.key and tls.crt, mounted from secret/cluster1-psmdb-egegunes
	certKeyFilePath := "/tmp/tls.pem"

	endpoint := "cluster1-rs0.psmdb.svc.cluster.local"

	// Pass the CA and the client certificate/key in the MongoDB URI.
	uri := fmt.Sprintf(
		"mongodb+srv://%s/?tlsCAFile=%s&tlsCertificateKeyFile=%s",
		endpoint,
		caFilePath,
		certKeyFilePath,
	)

	// MONGODB-X509 against the $external virtual database.
	credential := options.Credential{
		AuthMechanism: "MONGODB-X509",
		AuthSource:    "$external",
	}

	opts := options.Client().SetAuth(credential).ApplyURI(uri)

	ctx := context.Background()
	client, err := mongo.Connect(ctx, opts)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)
}

The important part is using “MONGODB-X509” as the authentication mechanism. We also need to pass the CA and client certificate in the MongoDB URI.

$ kubectl logs psmdb-x509-tester-688c989567-rmgxv
2022/01/25 07:50:09 Connecting to database
2022/01/25 07:50:09 URI: mongodb+srv://cluster1-rs0.psmdb.svc.cluster.local/?tlsCAFile=/etc/mongodb-ssl/ca.crt&tlsCertificateKeyFile=/tmp/tls.pem
2022/01/25 07:50:09 Username: O=percona,OU=cloud,CN=egegunes
2022/01/25 07:50:09 Connected to database
2022/01/25 07:50:09 Successful ping

You can see the complete example in this repository. If you have any questions, please add a comment or create a topic in the Percona Forums.

Jan
26
2022
--

Paw Patrol Toddler Bed – A Trendy Big Kid Bed For Your Little Paw Patrol Fan!

What kid doesn’t love Paw Patrol?! It is the #1 TV show for kids and has the highest ratings on Nickelodeon! My son is obsessed. All of his friends are obsessed, too. We keep hearing great things about Paw Patrol, and we just had to see what the fuss was all about: that’s why we have already compiled a list of Paw Patrol kids chairs!

The Paw Patrol TV series tells the story of a boy and his six canine best friends: Chase, Marshall, Skye, Rocky, Zuma, and Rubble. Their mission is to protect Adventure Bay from different threats using technology or superpowers. The pups have their own land, sea, and air transportation vehicles.

Its primary audience is 2-5 year olds, but I think older kids would enjoy this show as well. Five of the pups are boys and one, Skye, is a girl. They are all fun and cute, very lovable!

We invest in the toys and cartoons we buy our kids to help shape their development. They love to watch, read, and play it all (and ask for more!). So when you decide to look into getting your toddler their very first big boy/big girl bed, of course, you want to look for one that they would be excited about – and one that would continue their love for Paw Patrol!

Paw Patrol Toddler Bed

When looking for a kids’ bed, you want it to be safe, sturdy, and stable. But you also want it to have the cool factor – something that your kid will enjoy and love! That’s why we were so excited about this Paw Patrol toddler bed: we knew our son would go nuts over this! When in doubt, get them something with their favorite Paw Patrol character on it!

The Paw Patrol Toddler Bed is an excellent gift for your child. A new toddler bed with a Paw Patrol design will give them the feeling of being in their own room and of sleeping safely on their own.

Toddler beds are a great way to transition your toddler from the crib into a regular bed, and with this Paw Patrol Toddler Bed, they’ll feel like part of the action straight away!

We have done our best to provide you with the most popular models on the market – toddler beds that are durable, comfortable, and aesthetically pleasing. Have a look!

Delta Children Wood Toddler Bed

This Paw Patrol Toddler Bed from Delta Children is just perfect! It comes in a bright, fun red color. The bed’s frame looks like a firetruck, with all the Paw Patrol decals you’ll love. It is, no doubt, the perfect firetruck bed for toddlers!

This toddler bed has elevated bedsides to help children feel secure and cozy at night – which is very important for little ones transitioning away from a crib. Thanks to them, your little one is safe from falling out of bed or from climbing over the edge.

The Delta Children Wood Toddler Bed is a great transitional item for toddlers up through preschool-aged kids. Made from solid wood construction with non-toxic paint, this bed is sturdy and built to hold up against the roughest of pups.

The mattress is not included with this toddler bed, but it’s easy to buy one separately! Any standard toddler mattress will fit.

Delta Children Plastic Toddler Bed

This Paw Patrol toddler bed from Delta Children is an excellent option for those looking for something less pricey.

Made from plastic, it is lightweight and easy to move around the room, and it assembles easily. The Paw Patrol graphics are very cute – just right for toddlers!

The material used to create this bed is covered with non-toxic paint. On top of that, it’s very durable and lightweight – perfect for kids transitioning into their first big boy/girl bed! It includes an attached guard rail to keep your pup secure.

The bed matches regular-sized toddler mattresses. You can also get a matching Paw Patrol toddler bed set!

This Paw Patrol toddler bed is definitely one that you should consider buying. It has everything that your little fan will love – it’s fun, comfortable, and safe! Not just that, but the great design of this bed makes it very easy to fit in any kid’s room. Let your child feel like part of the team with this awesome toddler bed!

Delta Children 3D-Footboard Toddler Bed

If your child is a huge fan of Paw Patrol, then this bed is going to be his favorite for sure! It is fun, comfy, and very safe. The best part? It has a guard rail to prevent your toddler from falling off! Let’s face it – toddlers are not the most coordinated creatures; they are still exploring their sense of balance, so this Paw Patrol toddler bed provides maximum safety for them.

Also, the Paw Patrol 3D-Footboard Toddler Bed has a low height for ease of access; it’s perfect for toddlers transitioning from the cot to a bed. The bed itself is made of non-toxic PVC. It has the Paw Patrol design with Chase on top, and it’s straightforward to assemble and clean (which is great for parents).

This bed does not come with a mattress, but it accommodates a standard crib mattress size, so you won’t have any trouble finding the right fit for your toddler’s bed. Recommended for ages 15 months and up; holds up to 50 lbs.

This is a quality toddler bed at a very reasonable price. It’s sturdy and safe for your little Paw Patrol fan to use! We love it!

Conclusion

This was our selection of the best toddler beds for your little Paw Patrol fan. Just pick one, order it, and let your child feel like part of the team! They will surely fall in love with their new big kid bed!

We hope that through our review, you can make up your mind about which toddler bed is the best one for you. So hurry up and get yours! Your little pup will surely love it!

Jan
25
2022
--

PostgreSQL 14 and Recent SCRAM Authentication Changes – Should I Migrate to SCRAM?

Recently, a few PostgreSQL users reported that they got connection failures after switching to PostgreSQL 14.

“Why do I get the error ‘FATAL: password authentication failed for user’ on the new server?” has become one of the most intriguing questions.

At least in one case, it was a bit of a surprise that the application message was as follows:

FATAL: Connection to database failed: connection to server at “localhost” (::1), port 5432 failed: fe_sendauth: no password supplied

The reason for these errors is that the default password encryption method has changed to SCRAM authentication in new versions of PostgreSQL. Even though the last message appears to have nothing directly to do with SCRAM, it does: a post-installation script failed because it was looking for “md5”.

SCRAM authentication is not something new in PostgreSQL. It has been there from PostgreSQL 10 onwards, but it never affected DBA life in general because it was never the default; it was an opt-in feature enabled by explicitly changing the default settings. Those who opt in generally understand what they are doing, and it has never been known to cause problems. The PostgreSQL community was reluctant to make it the primary method for years because many client/application libraries were not ready for SCRAM authentication.

But that is changing in PostgreSQL 14. With PostgreSQL 9.6 going out of support, the landscape is shifting: we now expect all old client libraries to be upgraded, and SCRAM is becoming the primary password authentication method. Those who are completely unaware are going to be greeted with a surprise one day or another. The purpose of this post is to create quick awareness for those who are not yet aware, and to address some commonly asked questions.

What is SCRAM Authentication?

In simple words, the database client and the server prove and convince each other that they know the password without exchanging the password or the password hash. Yes, it is possible, using the Salted Challenge Response Authentication Mechanism (SCRAM-SHA-256) as specified by RFC 7677. This way of storing, communicating, and verifying passwords makes it very hard to break a password.

This method is more resistant to:

  • Dictionary attacks
  • Replay attacks
  • Stolen hashes

Overall it becomes very hard to break a password-based authentication.
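
You can see the difference in what the server stores (a psql sketch; “app_user” is a made-up role and the stored verifier is elided):

postgres=# SET password_encryption TO 'scram-sha-256';
SET
postgres=# CREATE USER app_user PASSWORD 'secret';
CREATE ROLE
postgres=# SELECT rolpassword FROM pg_authid WHERE rolname = 'app_user';
-- the verifier begins with SCRAM-SHA-256$4096: (salt and keys follow);
-- with md5 encryption you would instead see md5 followed by a hash of password+username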

What Has Changed Over Time?

Channel Binding

Authentication is only one part of secured communication. After authentication, a rogue server in the middle can potentially take over and fool the client connection. PostgreSQL 11 introduced SCRAM-SHA-256-PLUS, which supports channel binding. This makes sure there is no rogue server posing as the real server or mounting a man-in-the-middle attack.

From PostgreSQL 13 onwards, a client can request and even insist on channel binding.

For example:

psql -U postgres -h c76pri channel_binding=prefer
or
psql -U postgres -h c76pri channel_binding=require

Channel binding works over SSL/TLS, so SSL/TLS configuration is mandatory to make channel binding work.
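
For example, combining TLS with mandatory channel binding in a single conninfo string (illustrative; the host name follows the earlier examples):

psql "host=c76pri user=postgres dbname=postgres sslmode=require channel_binding=require"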

Setting Password Encryption

md5 was the only available option for password encryption before PostgreSQL 10, so the password_encryption setting merely indicated that “password encryption is required”, and it defaulted to md5.

-- Up to PG 13
postgres=# set password_encryption TO ON;
SET

For the same reason, the above statement was effectively the same as:

postgres=# set password_encryption TO MD5;
SET

We could even use “true”, “1”, or “yes” instead of “on” as an equivalent value.

But now we have multiple encryption methods, and “ON” doesn’t really convey what we want. So from PostgreSQL 14 onwards, the system expects us to specify the encryption method.

postgres=# set password_encryption TO 'scram-sha-256';
SET

postgres=# set password_encryption TO 'md5';
SET

Any attempt to use “on”, “true”, or “yes” will be rejected with an error.

-- From PG 14
postgres=# set password_encryption TO 'on';
ERROR:  invalid value for parameter "password_encryption": "on"
HINT:  Available values: md5, scram-sha-256.

So please check your scripts and make sure that they don’t have the old way of “enabling” encryption.

Some Frequently Asked Questions

  1. Does my logical backup and restore get affected?
    Logical backup and restore of PostgreSQL globals (pg_dumpall) won’t affect SCRAM authentication; the same password should work after the restore. In fact, it is interesting to recall that SCRAM authentication is more resilient to changes. For example, if we rename a USER, the old MD5 password no longer works, because PostgreSQL uses the username when generating the MD5 hash.

    postgres=# ALTER USER jobin1 RENAME TO jobin;
    NOTICE:  MD5 password cleared because of role rename
    ALTER ROLE

    As the NOTICE indicates, the password hash in pg_authid is cleared because the old one is no longer valid. This is not the case with SCRAM authentication: we can rename users without affecting the password.

    postgres=# ALTER USER jobin RENAME TO jobin1;
    ALTER ROLE
  2. The existing/old method of encryption (md5) was a big vulnerability. Was there a big risk?
    This worry mainly comes from the name “MD5”, a hash that is far too weak for modern hardware. But the way PostgreSQL uses md5 is different: it is not just a hash of the password, it takes the username into account as well. Additionally, what is sent over the wire is a hash prepared with a random salt provided by the server, so what is communicated differs from the stored password hash. It is therefore not as vulnerable as the name suggests, but it remains prone to dictionary attacks and leaked username/password-hash problems.
  3. Does the new SCRAM authentication add complexity? Is my connection request going to take more time?
    The SCRAM wire protocol is very efficient and is not known to cause any degradation in connection time. Moreover, compared to the other overheads of server-side connection management, the overhead created by SCRAM is negligible.
  4. Is it mandatory to use SCRAM authentication from PostgreSQL 14 and force all my user accounts to switch to it?
    Definitely not; only the defaults have changed. The old md5 method is still valid and works fine, and if access to the PostgreSQL environment is restricted by firewall/hba rules, there is already less risk in using md5.
  5. Why do I get the “FATAL: password authentication failed for user” error when I switched to PostgreSQL 14?
    The most probable reason is the pg_hba.conf entries. If we specify “md5” as the authentication method, PostgreSQL allows SCRAM authentication as well, but the reverse doesn’t work. When you created the PostgreSQL 14 environment, it most probably got “scram-sha-256” as the authentication method; in some PostgreSQL packages, the installation script does this for you automatically (see the pg_hba.conf sketch after this list). If authentication works from the PostgreSQL client tools but not from the application, check the driver version and the scope for an upgrade.
  6. Why do I get other types of authentication errors?
    The most probable reasons are post-installation scripts. It is regular practice in many organizations to use DevOps tools (Ansible/Chef) or even shell scripts for post-installation customization. Many of those do a range of things involving steps like set password_encryption TO ON; or modifications to pg_hba.conf using sed, which are expected to fail when they try to modify an entry that is no longer there.
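
For question 5 above, the pg_hba.conf side of the migration looks roughly like this (illustrative entries; adjust the addresses to your environment):

# TYPE  DATABASE  USER  ADDRESS      METHOD
# "md5" accepts BOTH md5 and scram-sha-256 responses - safe during migration:
host    all       all   10.0.0.0/8   md5
# "scram-sha-256" accepts only SCRAM - switch once every client library is upgraded:
#host   all       all   10.0.0.0/8   scram-sha-256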

Why Should I Care and What To Do

Anything from automation/deployment scripts, tools, and application connections to connection poolers could potentially break. One of the major arguments for delaying this change until PostgreSQL 14 was that the oldest supported version (9.6) is going out of support soon. So this is the right time to inspect your environments, see whether any of them still use old PostgreSQL client libraries (9.6 or older), and plan for the upgrade, as old-version PostgreSQL libraries cannot handle SCRAM negotiations.

In summary, having a good plan to migrate will help, even though it is not urgent.

    1. Inspect the environments and application drivers to see whether any of them are still using old versions of PostgreSQL client libraries and upgrade them wherever required.
      Please refer to: https://wiki.postgresql.org/wiki/List_of_drivers
      Encourage / Drive the upgrade of client libraries with a timeline
    2. If the existing environment is using md5, encourage users to switch to SCRAM authentication.
      Remember that the authentication method “md5” in pg_hba.conf will continue to work for both SCRAM and MD5 authentication in PostgreSQL 14.
    3. Take every opportunity to test and migrate automation, connection poolers, and other infrastructure to SCRAM authentication.

By changing the default authentication, the PostgreSQL community is showing a clear direction about the future.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

Jan
24
2022
--

Talking Drupal #331 – Migrating Paragraphs for The National Zoo

Today we are talking about Migrating Paragraphs for the National Zoo with Mohammed El-Khatib.

TalkingDrupal.com/331

Topics

  • Nic – Family flew home
  • Abby – Little free library – Hades game
  • Mohammed – Migrating D9 to Tailwind CSS and Alpine – Travel plans fell through
  • John – Listening to TD with kids
  • National Zoo
    • Favorite animal
  • How the National Zoo uses Drupal
  • Why the zoo needed to migrate paragraphs
  • Mapping migration strategy
  • Tool
    • Migrate Plus
    • Migrate Tools
    • Migrate Upgrade
  • Nested Paragraphs
  • Translation
  • Any strategies to migrate
  • Resources for help
  • Tips and Tricks
  • What is next for National Zoo
  • Anything to add?

Resources

Guests

Mo El-Khatib – mmelkhatib

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Abby Bowman – www.linkedin.com/in/arbowman @abowmanr

MOTW

  • Draggable Views – DraggableViews makes rows of a view “draggable”, which means they can be rearranged by drag-and-drop.

Jan
24
2022
--

Percona Distribution for MongoDB Operator with Local Storage and OpenEBS

Automating the deployment and management of MongoDB on Kubernetes is an easy journey with Percona Operator. By default, MongoDB is deployed using persistent volume claims (PVC). In the cases where you seek exceptional performance or you don’t have any external block storage, it is also possible to use local storage. Usually, it makes sense to use local NVMe SSD for better performance (for example Amazon’s i3 and i4i instance families come with local SSDs).

With PVCs, migrating the container from one Kubernetes node to another is straightforward and does not require any manual steps, whereas local storage comes with certain caveats. OpenEBS allows you to simplify local storage management on Kubernetes. In this blog post, we will show you how to deploy MongoDB with Percona Operator and leverage OpenEBS for local storage.

Set-Up

Install OpenEBS

We are going to deploy OpenEBS with a helm chart. Refer to OpenEBS documentation for more details.

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs --namespace openebs openebs/openebs --create-namespace

This is going to install OpenEBS along with the openebs-hostpath storageClass:

kubectl get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
…
openebs-hostpath     openebs.io/local        Delete          WaitForFirstConsumer   false                  71s

Deploy MongoDB Cluster

We will use a helm chart for it as well, following this document:

helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update

Install the Operator:

helm install my-op percona/psmdb-operator

Deploy the database using local storage. We will disable sharding for this demo for simplicity:

helm install mytest percona/psmdb-db --set sharding.enabled=false \ 
--set "replsets[0].volumeSpec.pvc.storageClassName=openebs-hostpath" \ 
--set  "replsets[0].volumeSpec.pvc.resources.requests.storage=3Gi" \ 
--set "replsets[0].name=rs0" --set "replsets[0].size=3"

As a result, we should have a replica set with three nodes using the openebs-hostpath storageClass:

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
my-op-psmdb-operator-58c74cbd44-stxqq   1/1     Running   0          5m56s
mytest-psmdb-db-rs0-0                   2/2     Running   0          3m58s
mytest-psmdb-db-rs0-1                   2/2     Running   0          3m32s
mytest-psmdb-db-rs0-2                   2/2     Running   0          3m1s

$ kubectl get pvc
NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
mongod-data-mytest-psmdb-db-rs0-0   Bound    pvc-63d3b722-4b31-42ab-b4c3-17d8734c92a3   3Gi        RWO            openebs-hostpath   4m2s
mongod-data-mytest-psmdb-db-rs0-1   Bound    pvc-2bf6d908-b3c0-424c-9ccd-c3be3295da3a   3Gi        RWO            openebs-hostpath   3m36s
mongod-data-mytest-psmdb-db-rs0-2   Bound    pvc-9fa3e21e-bfe2-48de-8bba-0dae83b6921f   3Gi        RWO            openebs-hostpath   3m5s

Local Storage Caveats

Local storage is the node storage. It means that if something happens with the node, it will also have an impact on the data. We will review various regular situations and how they impact Percona Server for MongoDB on Kubernetes with local storage.

Node Restart

Something happened with the Kubernetes node – server reboot, virtual machine crash, etc. So the node is not lost but just restarted. Let’s see what would happen in this case with our MongoDB cluster.

I will restart one of my Kubernetes nodes. As a result, the Pod will go into a Pending state:

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
my-op-psmdb-operator-58c74cbd44-stxqq   1/1     Running   0          58m
mytest-psmdb-db-rs0-0                   2/2     Running   0          56m
mytest-psmdb-db-rs0-1                   0/2     Pending   0          67s
mytest-psmdb-db-rs0-2                   2/2     Running   2          55m

In normal circumstances, the Pod would be rescheduled to another node, but that is not happening here. The reason is local storage and the affinity rules. If you run kubectl describe pod mytest-psmdb-db-rs0-1, you will see something like this:

Events:
  Type     Reason             Age                From                Message
  ----     ------             ----               ----                -------
  Warning  FailedScheduling   72s (x2 over 73s)  default-scheduler   0/3 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
  Normal   NotTriggerScaleUp  70s                cluster-autoscaler  pod didn't trigger scale-up: 1 node(s) had volume node affinity conflict

As you see, the cluster is not scaled up as this Pod needs a specific node that has its storage. We can see this annotation in PVC itself:

$ kubectl describe pvc mongod-data-mytest-psmdb-db-rs0-1
Name:          mongod-data-mytest-psmdb-db-rs0-1
Namespace:     default
StorageClass:  openebs-hostpath
…
Annotations:   pv.kubernetes.io/bind-completed: yes
…
               volume.kubernetes.io/selected-node: gke-sergey-235-default-pool-9f5f2e2b-4jv3
…
Used By:       mytest-psmdb-db-rs0-1

In other words, this Pod will wait for the node to come back. Until it does, your MongoDB cluster will be in a degraded state, running two nodes out of three. Keep this in mind when you perform maintenance or experience a Kubernetes node crash. With regular PVCs, this MongoDB Pod would be rescheduled to a new node right away.

Graceful Migration to Another Node

Let’s see the best way to migrate one MongoDB replica set Pod from one node to another when local storage is used. There can be multiple reasons: node maintenance, or migration to another rack, data center, or newer hardware. We want to perform such a migration with no downtime and minimal performance impact on the database.

Firstly, we will add more nodes to the replica set by scaling up the cluster. We will use helm again and change the size from three to five:

helm upgrade mytest percona/psmdb-db --set sharding.enabled=false \ 
--set "replsets[0].volumeSpec.pvc.storageClassName=openebs-hostpath" \ 
--set  "replsets[0].volumeSpec.pvc.resources.requests.storage=3Gi" \ 
--set "replsets[0].name=rs0" --set "replsets[0].size=5"

This will create two more Pods in the replica set, both of which also use openebs-hostpath storage. By default, our affinity rules require replica set nodes to run on different Kubernetes nodes, so either enable auto-scaling or ensure you have enough nodes in your cluster. We are adding nodes first to avoid a performance impact.

Once all five replica set nodes are up, we will drain the Kubernetes node we need. This will remove all the pods from it gracefully.

kubectl drain gke-sergey-235-default-pool-9f5f2e2b-rtcz --ignore-daemonsets

As with the node restart described in the previous chapter, the replica set Pod will be stuck in Pending status waiting for the local storage.

kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
…
mytest-psmdb-db-rs0-2                   0/2     Pending   0          65s

The storage will not come back on its own. To solve this, we need to remove the PVC and delete the Pod:

kubectl delete pvc mongod-data-mytest-psmdb-db-rs0-2
persistentvolumeclaim "mongod-data-mytest-psmdb-db-rs0-2" deleted


kubectl delete pod mytest-psmdb-db-rs0-2
pod "mytest-psmdb-db-rs0-2" deleted

This will trigger the creation of a new PVC and a Pod on another node:

NAME                                    READY   STATUS    RESTARTS   AGE
…
mytest-psmdb-db-rs0-2                   2/2     Running   2          1m

Again all five replica set pods are up and running. You can now perform the maintenance on your Kubernetes node.

What is left is to scale the replica set back down to three nodes:

helm upgrade mytest percona/psmdb-db --set sharding.enabled=false \ 
--set "replsets[0].volumeSpec.pvc.storageClassName=openebs-hostpath" \ 
--set  "replsets[0].volumeSpec.pvc.resources.requests.storage=3Gi" \ 
--set "replsets[0].name=rs0" --set "replsets[0].size=3"

Node Loss

When the Kubernetes node is dead and there is no chance for it to recover, we will face the same situation as with graceful migration: Pod will be stuck in Pending status waiting for the node to come back. The recovery path is the same:

  1. Delete the Persistent Volume Claim (PVC)
  2. Delete the Pod
  3. The Pod will start on another node and sync the data to a new local PVC (see the sketch below)
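
Mirroring the graceful-migration commands above, recovering, say, the rs0-1 Pod would look like this (the names follow the earlier examples):

kubectl delete pvc mongod-data-mytest-psmdb-db-rs0-1
kubectl delete pod mytest-psmdb-db-rs0-1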

 

Conclusion

Local storage can boost your database performance and remove the need for cloud block storage completely, which can also lower your public cloud provider bill. In this blog post, we saw that these benefits come with a higher maintenance cost, though that cost can also be automated.

We encourage you to try out Percona Distribution for MongoDB Operator with local storage and share your results on our community forum.

There is always room for improvement and a time to find a better way. Please let us know if you face any issues, and contribute your ideas to Percona products. You can do that on the Community Forum or JIRA. Read more in the contribution guidelines for Percona Distribution for MongoDB Operator in CONTRIBUTING.md.

Jan
24
2022
--

Updated Percona Distributions for MySQL and MongoDB: Release Roundup January 24, 2022

It’s time for the release roundup!

Percona is a leading provider of unbiased open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive.

Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. They offer highlights and critical information, as well as links to the full release notes and direct links to download the software or service itself.

Today’s post includes those releases and updates that have come out since January 3, 2022. Take a look!

 

Percona Distribution for MySQL (PXC-Based Variant) 8.0.26

Percona Distribution for MySQL (PXC-based variant) 8.0.26 was released on January 17, 2022. It is a single solution with the best and most critical enterprise components from the MySQL open source community, designed and tested to work together. This release is focused on the Percona XtraDB Cluster-based deployment variant, version 8.0.26-16.1.

The following are some of the notable fixes for MySQL 8.0.26, provided by Oracle, and included:

  • The TLSv1 and TLSv1.1 connection protocols are deprecated.

  • Identifiers with specific terms, such as “master” or “slave” are deprecated and replaced. See the Functionality Added or Changed section in the 8.0.26 Release Notes for a list of updated identifiers.

  • When using semisynchronous replication, either the old or the new version of the system variables and status variables is available; you cannot have both versions installed on an instance. Be aware: the new system variables are available when you use the new version, but the old ones are not, and vice versa.

Download Percona Distribution for MySQL (PXC-based variant) 8.0.26

 

Percona XtraDB Cluster 8.0.26-16.1

On January 17, 2022, Percona XtraDB Cluster 8.0.26-16.1 was released. It supports critical business applications in your public, private, or hybrid cloud environment. Our free, open source, enterprise-grade solution includes the high availability and security features your business requires to meet your customer expectations and business goals. While a number of notable fixes for MySQL 8.0.26, provided by Oracle, are included in this release, there are also known unfixed issues that you should be aware of. Please read the release notes for further information.

Download Percona XtraDB Cluster 8.0.26-16.1

 

Percona Distribution for MongoDB 4.2.18

On January 20, 2022, we released Percona Distribution for MongoDB 4.2.18. It is a collection of solutions to run and operate your MongoDB efficiently with the data being consistently backed up. Percona Distribution for MongoDB includes Percona Server for MongoDB, a fully compatible open source, drop-in replacement for MongoDB, and  Percona Backup for MongoDB. Release highlights include these bug fixes provided by MongoDB:

  • Added the SetAllowMigrationsCommand command that prevents the balancer from migrating chunks on shards.
  • Added a flag for the config server that prevents new migrations from starting and ongoing migrations from committing.
  • Improved the duplicate key handler behavior when the exact key already exists in the table. This fixes the availability loss during index builds that was caused by checking many false duplicates.

Download Percona Distribution for MongoDB 4.2.18

 

Percona Server for MongoDB 4.2.18-18

Percona Server for MongoDB 4.2.18-18 was released on January 20, 2022. It is an enhanced, source-available, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 4.2.18 Community Edition, supporting MongoDB 4.2.18 protocols and drivers. Bug fixes and improvements in this release, provided by MongoDB, are the following:

  • Added the SetAllowMigrationsCommand command that prevents the balancer from migrating chunks on shards.
  • Added a flag for the config server that prevents new migrations from starting and ongoing migrations from committing.
  • Skip running the duplicate key handler if the exact key already exists in the table. This fixes the availability loss during index builds that was caused by checking many false duplicates.
  • Added periodic clean-up of the logical sessions cache on arbiters.

Download Percona Server for MongoDB 4.2.18-18

 

That’s it for this roundup, and be sure to follow us on Twitter to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments.

Jan
20
2022
--

Percona XtraBackup Changing to Strict by Default

Backups are a key part of a disaster recovery strategy, making sure you can continue or restore your business in case of an unwanted event with your data.

We are always working to improve Percona XtraBackup’s reliability, always favoring consistency and attempting to make unwanted outcomes noticeable as early as possible in the process.

Enabling --strict by Default

As of the upcoming 8.0.27 release, XtraBackup will no longer accept invalid parameters. Since the beginning of time, validating parameters has been a difficult task for XtraBackup, as it mixes server-side and XtraBackup-only parameters.

Starting with Percona XtraBackup 8.0.7, we implemented PXB-1493, which added the --strict option defaulting to false.

This option issues a warning for each parameter/option that XtraBackup does not recognize. 

Having this as a warning is not good enough for a few reasons:

  1. It’s easy for it to go unnoticed, as we rarely read every line of the backup output, paying attention only to the end to validate that it completed OK.
  2. A warning doesn’t prevent us from shooting ourselves in the foot. As an example, when running --prepare on an incremental backup, if we mistype the --apply-log-only parameter on an incremental backup that is not the last one, we will execute the rollback phase, rolling back in-flight transactions that might have been completed in the next incremental backup.

And there are more.

With all of the above in mind, we have decided to enable --strict mode by default.

What Does it Mean For You?

From the 8.0.27 release onward, you might see XtraBackup failing if you use an invalid option, either from the [xtrabackup] group of your configuration file or passed as an argument on the command line.

For the next few releases, we will still allow you to switch back to the old behavior by passing the parameter --skip-strict, --strict=0, or --strict=OFF, as illustrated below.
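
As an illustration (hypothetical directory paths and a deliberately mistyped option; no real output shown):

# From 8.0.27 on, a typo like this aborts the run instead of merely warning:
xtrabackup --prepare --apply-log-onli --target-dir=/backups/inc1

# Temporary opt-out while scripts are being fixed (any one of these forms):
xtrabackup --skip-strict --prepare --apply-log-only --target-dir=/backups/inc1
xtrabackup --strict=0 --prepare --apply-log-only --target-dir=/backups/inc1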

In the future, --strict will always be in effect, and disabling it will be forbidden.
