Jun
22
2022
--

Missed Percona Live 2022? Watch It On Demand Now

Couldn’t make it to Austin this year for Percona Live? Don’t worry! We’ve got you covered.

This year’s show included 80+ sessions by leading open source database experts alongside big names like Shopify, HubSpot, Venmo, Amazon, and Facebook. And every session is available right now, on demand.

Here are just a few of the sessions you can access right now. 

Database Incident Management: When Everything Goes Wrong

Speaker: Joshua Varner, Shopify

When database systems go down, the consequences are usually severe. And who’s the last line of defense protecting a company’s data? Database professionals. During this talk, Joshua Varner of Shopify discusses incident management practices that can help any DBA/DRE work as a team through the toughest of issues – even when the pressure is high. Joshua shares important tips that span the various stages of an incident – detection, alerting, response, postmortem, and prevention – as well as the unique challenges database teams face as compared to other disciplines.

Migrating Facebook/Meta to MySQL 8.0

Speakers: Herman Lee and Pradeep Nayak, Facebook/Meta

MySQL powers some of Facebook/Meta’s most important workloads. Because the Facebook/Meta team actively develops new features in MySQL to support their evolving requirements, migrating workloads to major versions requires significant time and effort. Their upgrade to version 5.6 took more than a year, and moving to 5.7 would have been no faster, as they were in the middle of developing their MyRocks storage engine. By MySQL 8.0, they had finished MyRocks and were ready to take advantage of that version’s compelling new features, like atomic DDL support. Even so, the migration would come with some tough challenges. In this talk, Herman Lee and Pradeep Nayak share how Facebook/Meta tackled their 8.0 migration project and some of the surprises they discovered along the way.

Percona Monitoring and Management (PMM) For Novices

Speaker: Dave Stokes, Percona

The use of open source databases comes with huge benefits like greater innovation, lower costs, liberal licensing, data portability, and the ability to extend the code. But how can you effectively monitor and manage your databases in the face of increasing complexity? Percona Monitoring and Management (PMM) is an open source database observability, monitoring, and management tool for MySQL, PostgreSQL, and MongoDB. In this session, open source database expert Dave Stokes walks you through how to install PMM, how to register databases with it, and how to use it to simplify operations, optimize performance, and improve the security of your database services. Tune in to this session to see how Dave starts at zero to make you a hero at monitoring your databases.

Simplifying the Top Complexities While Migrating From Oracle to PostgreSQL

Speaker: Avinash Vallarapu, MigOps

Migrating from Oracle to PostgreSQL is complex and can seem like a risky and daunting task. It doesn’t have to be. In this talk, Avinash Vallarapu shows you how to simplify some of the major complexities that may occur when moving from Oracle to PostgreSQL. Specifically, he discusses and demos:

  • Converting sysdate from Oracle to PostgreSQL. Among a variety of options available in Postgres, which one is correct?
  • Converting Hierarchical Queries from Oracle to PostgreSQL. Is Recursive CTE a correct alternative?
  • Understanding Internals of the available solutions.
  • Implementing Associative Arrays in PostgreSQL.
  • Understanding the correct alternatives for Oracle Array Indexes and other complexities.

Big Data…Or Just Too Much Data?

Speaker: Marcos Albe, Percona

Has your multi-terabyte MySQL instance become unmanageable? Are reporting queries taking longer and longer? Do backup storage costs keep increasing? Do simple queries take seconds to run? In this talk, Marcos Albe discusses the practical limits of MySQL, how to tame your data, and what the alternatives to MySQL are.

The Future of Percona Products

Speaker: Donnie Berkholz, Percona

Percona has long been a leader in the global open source community. As we head into the future, we’re excited to unveil our plans to help even more companies harness the benefits of open source databases. In this session, Donnie Berkholz shares the Percona product roadmap for the next few years and makes a major announcement about Percona Platform. Whether you’re already using Percona software or services or considering it, you don’t want to miss this session.

MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)

Speaker: Jean-François Gagné, HubSpot

Parallel Replication is an effective feature for achieving faster replication and reducing lag. While MySQL implements Parallel Replication through LOGICAL_CLOCK, fully benefiting from this mechanism requires more than just enabling it. In this talk, Jean-François Gagné covers important things you need to know about MySQL 5.7 and 8.0 Parallel Replication. He explains in detail how LOGICAL_CLOCK works and provides expert advice on how to optimize Parallel Replication in MySQL. He also discusses changes made in MySQL 8.0, and back-ported in 5.7, that greatly improve the potential for parallel execution on replicas. 

Deploying MongoDB Sharded Clusters Easily With Terraform and Ansible

Speaker: Ivan Groenewold, Percona

Installing big clusters can be a time-consuming task. In this talk, you’ll learn how to develop a complete pipeline to deploy MongoDB sharded clusters. By combining Terraform for the hardware provisioning and Ansible for the software installation, we can save time and provide a standardized reusable solution.

Sign up to see all Percona Live sessions

Learn from the best and brightest in the open source database industry. Watch these and other Percona Live 2022 sessions now.

 

Jun
22
2022
--

Percona Monitoring and Management in Kubernetes is now in Tech Preview

Percona Monitoring and Management in Kubernetes

Over the years, we have seen growing interest in running databases and stateful workloads in Kubernetes. With Container Storage Interfaces (CSI) maturing and more and more Operators appearing, running stateful workloads on your favorite platform is not that scary anymore. Our Kubernetes story at Percona started with Operators for MySQL and MongoDB, adding PostgreSQL later on.

Percona Monitoring and Management (PMM) is an open source database monitoring, observability, and management tool. It can be deployed in a virtual appliance or a Docker container. For a long time, our customers have asked us for a way to deploy PMM in Kubernetes. We had an unofficial helm chart, created as a PoC by Percona teams and the community (GitHub).

We are introducing the Technical Preview of the helm chart, supported by Percona, to easily deploy PMM in Kubernetes. You can find it in our helm chart repository here.

Use cases

Single platform

If Kubernetes is your platform of choice, until now you needed a separate virtual machine to run Percona Monitoring and Management. With the introduction of the helm chart, that is no longer the case.

As you know, all Percona Operators integrate with PMM, which enables monitoring for databases deployed on Kubernetes. Operators configure and deploy the pmm-client sidecar container and register the nodes on a PMM server. Bringing PMM into Kubernetes simplifies this integration and the whole flow: now the network traffic between pmm-client and the PMM server does not leave the Kubernetes cluster at all.

All you have to do is set the correct endpoint in the pmm section of the Custom Resource manifest. For example, for Percona Operator for MongoDB, the pmm section will look like this:

pmm:
    enabled: true
    image: percona/pmm-client:2.28.0
    serverHost: monitoring-service

Where monitoring-service is the Service created by the helm chart to expose the PMM server.
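For context, here is a trimmed Custom Resource sketch. The apiVersion, kind, and cluster name are illustrative assumptions based on the Percona Operator for MongoDB; the pmm section is the part this post is about:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster        # hypothetical cluster name
spec:
  pmm:
    enabled: true
    image: percona/pmm-client:2.28.0
    serverHost: monitoring-service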

High availability

Percona Monitoring and Management has lots of moving parts: Victoria Metrics to store time-series data, ClickHouse for query analytics functionality, and PostgreSQL to keep the PMM configuration. Right now, all these components are part of a single container or virtual machine, with Grafana as a frontend. To provide zero-downtime deployment in any environment, we would need to decouple these components, which would substantially complicate the installation and management of PMM.

What we offer instead right now are ways to automatically recover PMM in case of failure within minutes (for example, by leveraging the EC2 self-healing mechanism).

Kubernetes is a control plane for container orchestration that automates manual tasks for engineers. When you run software in Kubernetes, it is best to rely on its primitives to handle all the heavy lifting. This is what PMM looks like in Kubernetes:

PMM in Kubernetes

  • StatefulSet controls the Pod creation
  • There is a single Pod with a single container with all the components in it
  • This Pod is exposed through a Service that is utilized by PMM Users and pmm-clients
  • All the data is stored on a persistent volume
  • ConfigMap has various environment variable settings that can help to fine-tune PMM

In case of a node or Pod failure, the StatefulSet recovers the PMM Pod automatically and remounts the Persistent Volume to it. The recovery time depends on the cluster load and node availability, but in normal operating environments the PMM Pod is up and running again within a minute.
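Once the chart is installed (see the Deploy section below), you can verify these objects with regular kubectl commands. The resource names here match the defaults used later in this post:

kubectl get statefulset pmm
kubectl get service monitoring-service
kubectl get pvc pmm-storage-pmm-0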

Deploy

Let’s see how PMM can be deployed in Kubernetes.

Add the helm chart:

helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update

Install PMM:

helm install pmm --set service.type="LoadBalancer" percona/pmm

You can now log in to PMM using the LoadBalancer IP address and the randomly generated password stored in the pmm-secret Secret object (the default user is admin).

The Service object created for PMM is called monitoring-service:

$ kubectl get services monitoring-service
NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)         AGE
monitoring-service   LoadBalancer   10.68.29.40   108.59.80.108   443:32591/TCP   3m34s

$ kubectl get secrets pmm-secret -o yaml
apiVersion: v1
data:
  PMM_ADMIN_PASSWORD: LE5lSTx3IytrUWBmWEhFTQ==
…

$ echo 'LE5lSTx3IytrUWBmWEhFTQ==' | base64 --decode && echo
,NeI<w#+kQ`fXHEM

Log in to PMM by connecting to https://<YOUR_PUBLIC_IP>.

Customization

A helm chart is a template engine for YAML manifests, and it allows users to customize the deployment. You can see the various parameters you can set to fine-tune your PMM installation in our README.

For example, to choose another storage class and set the desired storage size, pass the following flags:

helm install pmm percona/pmm \
--set storage.storageClassName="premium-rwo" \
--set storage.size="20Gi"

You can also change these parameters in values.yaml and use the -f flag:

# values.yaml contents
storage:
  storageClassName: "premium-rwo"
  size: 20Gi


helm install pmm percona/pmm -f values.yaml

Maintenance

For most maintenance tasks, regular Kubernetes techniques apply. Let’s review a couple of examples.

Compute scaling

It is possible to vertically scale PMM by adding or removing resources through the pmmResources variable in values.yaml:

pmmResources:
  requests:
    memory: "4Gi"
    cpu: "2"
  limits:
    memory: "8Gi"
    cpu: "4"

Once done, upgrade the deployment:

helm upgrade -f values.yaml pmm percona/pmm

This will restart the PMM Pod, so plan it carefully to avoid disrupting your team’s work.

Storage scaling

This depends a lot on your storage interface capabilities and the underlying implementation. In most clouds, Persistent Volumes can be expanded. You can check if your storage class supports it by describing it:

kubectl describe storageclass standard
…
AllowVolumeExpansion:  True

Unfortunately, just changing the size of the storage in values.yaml (storage.size) will not do the trick and you will see the following error:

helm upgrade -f values.yaml pmm percona/pmm
Error: UPGRADE FAILED: cannot patch "pmm" with kind StatefulSet: StatefulSet.apps "pmm" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

We use a StatefulSet object to deploy PMM, and StatefulSets are mostly immutable: only a handful of fields can be changed on the fly. There is a trick, though.

First, delete the StatefulSet, but keep the Pods and PVCs:

kubectl delete sts pmm --cascade=orphan

Recreate it with the new storage configuration:

helm upgrade -f values.yaml pmm percona/pmm

It will recreate the StatefulSet with the new storage size configuration. 

Next, edit the Persistent Volume Claim manually and change the storage size (the name of the PVC may be different for you). You need to change the storage size in the spec.resources.requests.storage section:

kubectl edit pvc pmm-storage-pmm-0
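Alternatively, assuming the 20Gi size used in values.yaml above, the same change can be applied non-interactively with kubectl patch:

kubectl patch pvc pmm-storage-pmm-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'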

The PVC is not resized yet and you can see the following message when you describe it:

kubectl describe pvc pmm-storage-pmm-0
…
Conditions:
  Type                      Status  LastProbeTime                     LastTransitionTime                Reason  Message
  ----                      ------  -----------------                 ------------------                ------  -------
  FileSystemResizePending   True    Mon, 01 Jan 0001 00:00:00 +0000   Thu, 16 Jun 2022 11:55:56 +0300           Waiting for user to (re-)start a pod to finish file system resize of volume on node.

The last step would be to restart the Pod:

kubectl delete pod pmm-0

Upgrade

Running helm upgrade is the recommended way to upgrade, either when a new helm chart is released or when you want to move to a newer version of PMM by replacing the image in the image section.
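For illustration, assuming the chart exposes the image tag under the image section mentioned above (check the chart’s README for the exact parameter name), an upgrade could look like this:

helm upgrade pmm percona/pmm --set image.tag="2.29.0"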

Backup and restore

PMM stores all the data on a Persistent Volume. As mentioned before, regular Kubernetes techniques can be applied here to back up and restore the data. There are numerous options:

  • Volume Snapshots – check if they are supported by your CSI and storage implementation (see the sketch after this list)
  • Third-party tools, like Velero, can handle the backups and restores of PVCs
  • Snapshots provided by your storage (ex. AWS EBS Snapshots) with manual mapping to PVC during restoration
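To sketch the first option, a minimal VolumeSnapshot manifest could look like the one below. The snapshot class name is an assumption and depends on your CSI driver (check kubectl get volumesnapshotclass); the PVC name matches the one used earlier in this post:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pmm-storage-snapshot
spec:
  volumeSnapshotClassName: csi-snapshot-class   # assumption: depends on your storage
  source:
    persistentVolumeClaimName: pmm-storage-pmm-0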

What is coming

To keep you excited, here are some of the things that we are working on or have planned to enable further Kubernetes integration.

OpenShift support

We are working on building a rootless container so that OpenShift users can run Percona Monitoring and Management there without having to grant elevated privileges. 

Microservices architecture

This is something that we have been discussing internally for some time now. As mentioned earlier, there are lots of components in PMM. To enable proper horizontal scaling, we need to break down our monolithic container and run these components as separate microservices.

Managing all these separate containers and Pods (in the Kubernetes case) would require separate maintenance strategies. This raises the question of creating a dedicated Operator just to manage PMM, but that is a significant effort. If you have an opinion here, please let us know on our community forum.

Automated k8s registration in DBaaS

As you know, Percona Monitoring and Management comes with a technical preview of Database as a Service (DBaaS). Right now, when you install PMM on a Kubernetes cluster, you still need to register the cluster to deploy databases. We want to automate this process so that after the installation you can start deploying and managing databases right away.

Conclusion

Percona Monitoring and Management enables database administrators and site reliability engineers to pinpoint issues in their open source database environments, whether it is a quick look through the dashboards or a detailed analysis with Query Analytics. Percona’s support for PMM on Kubernetes is a response to the needs of our customers who are transforming their infrastructure.

Some useful links that would help you to deploy PMM on Kubernetes:

Jun
21
2022
--

PostgreSQL for MySQL Database Administrators: Episodes 3 and 4

PostgreSQL for MySQL DBA

The videos for PostgreSQL for MySQL Database Administrators (DBA) episodes three and four are live here and here. Episode three covers a simple backup and restoration, while episode four covers some handy psql commands. For those of you who missed the first two videos in this series, you can find them here: Episode one and Episode two.

Many MySQL DBAs have heard about PostgreSQL, and this is a guided introductory series on setting up and using it. The notes for the two latest episodes are below. Each video in this series walks you through the steps and commands and then shows those commands being executed. If you are following along, these two episodes build on the previous ones.

Episode three

What we are going to do

  • Backup a database
  • Examine the backup
  • Create a new database
  • Load backup into new database

Creating a backup using pg_dump

$ pg_dump dvdrental > backup.sql

  • pg_dump is the name of the program
  • dvdrental is the name of the database to be backed up
  • The output is redirected to the file backup.sql

Create a new database

$ sudo su - postgres
$ psql
psql (14.3 (Ubuntu 2:14.3-3-focal))
Type "help" for help.
postgres=# CREATE DATABASE newdvd;
postgres=# \q
$ ^d

Restoration

$ psql -d newdvd -f backup.sql
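That restores the plain-text dump through psql. For larger databases, you may prefer pg_dump’s custom format, which is restored with pg_restore instead; a quick sketch using the same database names:

$ pg_dump -Fc dvdrental > backup.dump
$ pg_restore -d newdvd backup.dump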

Episode four

What we are going to do

  • Look at some PSQL commands
  • Warn you about some PSQL commands
  • Show you some PSQL commands

A quick summary

\c dbname Switch connection to a new database
\l List available databases
\dt List available tables
\d table_name Describe a table such as a column, type, modifiers of columns, etc.
\dn List all schemas of the currently connected database
\df List available functions in the current database
\dv List available views in the current database
\du List all users and their assigned roles
SELECT version(); Retrieve the current version of PostgreSQL server
\g Execute the last command again
\s Display command history
\s filename Save the command history to a file
\i filename Execute psql commands from a file
\? List all available psql commands
\h Get help, e.g., to get detailed information on the ALTER TABLE statement, use \h ALTER TABLE
\e Edit command in your own editor
\a Switch from aligned to non-aligned column output
\H Switch the output to HTML format
\q Exit psql shell

Using \g To Repeat Commands

Using \c to switch databases

Using \d to see the Contents of a Database and More

Toggling Output Formats with \a and \H

Be sure to check out episode three and episode four of PostgreSQL for MySQL DBAs!

Jun
20
2022
--

Talking Drupal #352 – D7 to D9 Migration

Today we are talking about D7 to D9 Migration with Mauricio Dinarte.

www.talkingDrupal.com/352

Topics

  • Why are you passionate about migration
  • First thing to think about when migrating
  • Timeline
    • Factors
  • Tips and tricks
  • Helpful tools and migrations
  • Tricky things to migrate
  • Data structure inconsistencies
  • Embedded media
  • Data management
  • Source sets
    • CSV
    • JSON
    • DB connection
  • Understanddrupal.com
  • Who is the audience
  • Any new content

Resources

Guests

Mauricio Dinarte – understanddrupal.com @dinarcon

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Donna Bungard – @dbungard

MOTW

Event Platform – The Event Platform is actually a set of modules, each of which provides functionality designed to satisfy the needs of anyone creating a site for a Drupal Camp or similar event.

Jun
15
2022
--

Moving MongoDB Cluster to a Different Environment with Percona Backup for MongoDB

Moving MongoDB Cluster to a Different Environment with Percona Backup for MongoDB

Percona Backup for MongoDB (PBM) is a distributed backup and restore tool for sharded and non-sharded clusters. In 1.8.0, we added the replset-remapping functionality that allows you to restore data on a new, compatible cluster topology.

The new environment can have different replset names and/or serve on different hosts and ports. PBM handles this hard work for you, making such a migration indistinguishable from a usual restore. In this blog post, I’ll show you practically how to migrate to a new cluster.

The Problem

Usually, changing a cluster topology requires lots of manual steps. PBM shortens the process.

Let’s have a look at a case with an initial cluster and a desired target one.

Initial cluster:

configsrv: "configsrv/conf:27017"
shards:
  - "rs0/rs0:27017,rs1:27017,rs2:27017"
  - "extra-shard/extra:27018"

The cluster consists of the configsrv configsvr replset with a single node and two shards: rs0 (3 nodes in the replset) and extra-shard (1 node in the replset). The names, hosts, and ports are not conventional across the cluster, but we will resolve this.

Target cluster:

configsrv: "cfg/cfg0:27019"
shards:
  - "rs0/rs00:27018,rs01:27018,rs02:27018"
  - "rs1/rs10:27018,rs11:27018,rs12:27018"
  - "rs2/rs20:27018,rs21:27018,rs22:27018"

Here we have the cfg configsvr replset with a single node and three shards, rs0 – rs2, where each shard is a 3-node replset.

Think about how you can do this.

With PBM, all that we need is a deployed cluster and a logical backup made with PBM 1.5.0 or later. The following simple command will do the rest:

pbm restore $BACKUP_NAME --replset-remapping "cfg=configsrv,rs1=extra-shard"

Migration in Action

Let me show you how it looks in practice. I’ll provide details at the end of the post. In the repo, you can find all configs, scripts, and output used here.

As mentioned above, we need a backup. For this, we will deploy a cluster, seed data, and then make the backup.

Deploying the initial cluster

$> initial/deploy >initial/deploy.out
$> docker compose -f "initial/compose.yaml" exec pbm-conf \
     pbm status -s cluster
 
Cluster:
========
configsvr:
  - configsvr/conf:27019: pbm-agent v1.8.0 OK
rs0:
  - rs0/rs00:27017: pbm-agent v1.8.0 OK
  - rs0/rs01:27017: pbm-agent v1.8.0 OK
  - rs0/rs02:27017: pbm-agent v1.8.0 OK
extra-shard:
  - extra-shard/extra:27018: pbm-agent v1.8.0 OK

links: initial/deploy, initial/deploy.out

The cluster is ready and we can add some data.

Seed data

We will insert the first 1000 numbers in a natural number sequence: 1 – 1000.

$> mongosh "mongo:27017/rsmap" --quiet --eval "
     for (let i = 1; i <= 1000; i++)
       db.coll.insertOne({ i })" >/dev/null

Getting the data state

These documents should be partitioned across all shards at insert time. Let’s see, in general, how. We will use the dbHash command on all shards to capture the collections’ state. It will be useful for verification later.

We will also do a quick check on shards and mongos.

$> initial/dbhash >initial/dbhash.out && cat initial/dbhash.out
 
# rs00:27017  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs01:27017  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs02:27017  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# extra:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs00:27017  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 520, false ]
# extra:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 480, false ]
# mongo:27017
[ 1000, true ]

links: initial/dbhash, initial/dbhash.out

All rs0 members have the same data, so the secondaries replicate from the primary correctly.

The quickcheck.js used in the initial/dbhash script describes our documents. It returns the number of documents and whether these documents form the natural number sequence.

We have data for the backup. Time to make the backup.

Making a backup

$> docker compose -f initial/compose.yaml exec pbm-conf bash
pbm-conf> pbm backup --wait
 
Starting backup '2022-06-15T08:18:44Z'....
Waiting for '2022-06-15T08:18:44Z' backup.......... done
 
pbm-conf> pbm status -s backups
 
Backups:
========
FS  /data/pbm
  Snapshots:
    2022-06-15T08:18:44Z 28.23KB <logical> [complete: 2022-06-15T08:18:49Z]

We have a backup. It’s enough for migration to the new cluster.

Let’s destroy the initial cluster and deploy the target environment. (Destroying the initial cluster is not a requirement. I just don’t want to waste resources on it.)

Deploying the target cluster

pbm-conf> exit
$> docker compose -f initial/compose.yaml down -v >/dev/null
$> target/deploy >target/deploy.out

links: target/deploy, target/deploy.out

Let’s check the PBM status.

PBM Status

$> docker compose -f target/compose.yaml exec pbm-cfg0 bash
pbm-cfg0> pbm config --force-resync  # ensure agents sync from storage
 
Storage resync started
 
pbm-cfg0> pbm status -s backups
 
Backups:
========
FS  /data/pbm
  Snapshots:
    2022-06-15T08:18:44Z 28.23KB <logical> [incompatible: Backup doesn't match current cluster topology - it has different replica set names. Extra shards in the backup will cause this, for a simple example. The extra/unknown replica set names found in the backup are: extra-shard, configsvr. Backup has no data for the config server or sole replicaset] [2022-06-15T08:18:49Z]

As expected, it is incompatible with the new deployment.

Let’s see how to make it work.

Resolving PBM Status

pbm-cfg0> export PBM_REPLSET_REMAPPING="cfg=configsvr,rs1=extra-shard"
pbm-cfg0> pbm status -s backups
 
Backups:
========
FS  /data/pbm
  Snapshots:
    2022-06-15T08:18:44Z 28.23KB <logical> [complete: 2022-06-15T08:18:49Z]

Nice. Now we can restore.

Restoring

pbm-cfg0> pbm restore '2022-06-15T08:18:44Z' --wait
 
Starting restore from '2022-06-15T08:18:44Z'....Started logical restore.
Waiting to finish.....Restore successfully finished!

The --wait flag blocks the shell session until the restore completes. You could also skip waiting and check the status later:

pbm-cfg0> pbm list --restore
 
Restores history:
  2022-06-15T08:18:44Z

Everything is going well so far. Almost done.

Let’s verify the data.

Data verification

pbm-cfg0> exit
$> target/dbhash >target/dbhash.out && cat target/dbhash.out
 
# rs00:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs01:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs02:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs10:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs11:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs12:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs20:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ }
# rs21:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ }
# rs22:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ }
# rs00:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 520, false ]
# rs10:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 480, false ]
# rs20:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ ]
# mongo:27017
[ 1000, true ]

links: target/dbhash, target/dbhash.out

As you can see, the rs2 shard is empty. The other two have dbHash and quickcheck results identical to those in the initial cluster. The balancer can tell us something about this:

Balancer status

$> mongosh "mongo:27017" --quiet --eval "sh.balancerCollectionStatus('rsmap.coll')"
 
{
  balancerCompliant: false,
  firstComplianceViolation: 'chunksImbalance',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1655281436, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1655281436, i: 1 })
}

We know what to do: start the balancer and check the status again.

$> mongosh "mongo:27017" --quiet --eval "sh.startBalancer().ok"

1
 
$> mongosh "mongo:27017" --quiet --eval "sh.balancerCollectionStatus('rsmap.coll')"
 
{
  balancerCompliant: true,
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1655281457, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1655281457, i: 1 })
}
 
$> target/dbhash >target/dbhash-2.out && cat target/dbhash-2.out

# rs00:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs01:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs02:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs10:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs11:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs12:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs20:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "6a54e10a5526e0efea0d58b5e2fbd7c5" }
# rs21:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "6a54e10a5526e0efea0d58b5e2fbd7c5" }
# rs22:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "6a54e10a5526e0efea0d58b5e2fbd7c5" }
# rs00:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 520, false ]
# rs10:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 480, false ]
# rs20:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 229, false ]
# mongo:27017
[ 1000, true ]

links: target/dbhash-2.out

Interesting. The rs2 shard now has some data. However, rs0 and rs1 haven’t changed. This is expected: mongos moves some chunks to rs2 and updates the router config, but the physical deletion of chunks on a shard is a separate step. That’s why querying data directly on a shard is inaccurate: the data could disappear at any time, and the cursor returns all documents available in a replset at the moment, regardless of the router config.

Anyway, we shouldn’t care about it anymore. It is mongos/mongod responsibility now to update router config, query right shards, and remove moved chunks from shards by demand. In the end, we have valid data through mongos.

That’s it.

But wait, we didn’t make a backup! Never forget to make another solid backup.

Making a new backup

It is better to change the storage so that backups for the new deployment live in a different place, and we no longer see errors about incompatible backups from the initial cluster.

$> pbm config --file "$NEW_PBM_CONFIG" >/dev/null
$> pbm config --force-resync >/dev/null
$> pbm backup -w >/dev/null
pbm-cfg0> pbm status -s backups
 
Backups:
========
FS  /data/pbm
  Snapshots:
    2022-06-15T08:25:44Z 165.34KB <logical> [complete: 2022-06-15T08:25:49Z]

Now we’re done. And can sleep better.

One More Thing: Possible Misconfiguration

Let’s review another imaginary case to explain the possible errors.

Initial cluster: cfg, rs0, rs1, rs2, rs3, rs4, rs5

Target cluster: cfg, rs0, rs1, rs2, rs3, rs4, rs6

If we apply the remapping rs0=rs0,rs1=rs2,rs2=rs1,rs3=rs4, we will get an error like “missed replsets: rs3, rs5”, and nothing about rs6.

The missed rs5 should be obvious: the backup topology has an rs5 replset, but it is missing on the target. And the target rs6 does not have data to restore from. Adding rs6=rs5 fixes this.

But the missed rs3 could be confusing. Let’s visualize:

init | curr
-----+-----
cfg     cfg  # unchanged
rs0 --> rs0  # mapped. unchanged
rs1 --> rs2
rs2 --> rs1
rs3 -->      # err: no shard
rs4 --> rs3
     -> rs4  # ok: no data
rs5 -->      # err: no shard
     -> rs6  # ok: no data

When we remap the backup from rs4 to rs3, the target rs3 is reserved. The rs3 in the backup does not have a target replset now. Just remapping the backup’s rs3 to the available rs4 fixes this too.

This reservation avoids data duplication. That’s why we use the quick check via mongos.
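Putting it together, a corrected full mapping for this imaginary case (using the same new=old syntax as in the earlier example) would be:

pbm restore $BACKUP_NAME --replset-remapping \
    "rs0=rs0,rs1=rs2,rs2=rs1,rs3=rs4,rs4=rs3,rs6=rs5"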

Details

Compatible topology

Simply speaking, a compatible topology means the target deployment has an equal or larger number of shards. In our example, we initially had two shards but restored to three shards. PBM restored data to two shards only; MongoDB can redistribute it across the remaining shards later, once the balancer is enabled (sh.startBalancer()). The number of replset members does not matter because PBM takes a backup from one member per replset and restores it to the primary only. Other data-bearing members replicate the data from the primary. So you could make a backup from a multi-member replset and then restore it to a single-member replset.

You cannot restore to a different replset type like from shardsvr to configsvr.

Preconfigured environment

The cluster should be deployed with all shards added. Users and permissions should be added and assigned in advance. PBM agents in the new cluster should be configured to use the same storage and be able to access it.

Note: PBM agents store backup metadata on storage and keep the cache in MongoDB. pbm config --force-resync lets you refresh the cache from the storage. Do it on a new cluster right after deployment to see backups/oplog chunks made from the initial cluster.

Understanding replset remapping

You can remap replset names with the --replset-remapping flag or the PBM_REPLSET_REMAPPING environment variable. If both are set, the flag takes precedence.

For a full restore, point-in-time recovery, and oplog replay, the PBM CLI sends the mapping as a parameter in the command. Each command gets a separate explicit mapping (or none). This can be done only via the CLI; agents do not use the environment variable, nor do they have the flag.

pbm status and pbm list use the flag/envvar to remap replsets in backups/oplog metadata and apply this mapping to the current deployment to show them properly. If backup and present replset names do not match, pbm list will not show these backups, and pbm status prints an error with missed replset names.
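For example, the remapping used earlier in this post can be expressed either way; as noted, the flag takes precedence if both are set:

export PBM_REPLSET_REMAPPING="cfg=configsvr,rs1=extra-shard"
pbm status -s backups

pbm list --replset-remapping "cfg=configsvr,rs1=extra-shard"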

Restoring with remapping works with logical backups only.

How does PBM do this?

During restore, PBM reviews current topology and assigns members’ snapshots and oplog chunks to each shard/replset by name, respectively. The remapping changes the default assignment.

After the restore is done, PBM agents sync the router config to make the restored data “native” to this cluster.

Behind the scene

The config.shards collection describes the current topology. PBM uses it to know where and what to restore. PBM does not modify this collection, but the restored data contains other router configurations referring to the initial topology.

We update two collections to replace old shard names with new ones in the restored data:

  • config.databases – primary shard for non-sharded databases
  • config.chunks – shards where chunks are

After this, MongoDB knows where databases, collections, and chunks are in the new cluster.
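You can inspect these collections yourself through mongos to see the rewritten topology (a read-only look; there is no need to edit them by hand):

use config
db.shards.find()
db.databases.find()
db.chunks.find().limit(5)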

Conclusion

Migration of a cluster requires much attention, knowledge, and calm. The replset-remapping functionality in Percona Backup for MongoDB reduces the complexity of migration between two different environments. I would say it is close to a routine job now.

Have a nice day!

Jun
13
2022
--

Talking Drupal #351 – Core Theming $h!t

Today we are talking about Core Theming with Cristina Chumillas.

www.talkingDrupal.com/351

Topics

  • What’s new in core theming?
  • Why is Claro in core important?
  • Why is Olivero in core important?
  • Why was it so long between new themes?
  • Continuous improvement?
  • What is the biggest improvement?
  • What happens to old themes?
  • Accessibility
  • CSS
    • Build tools
  • Drupal 10
    • IE
    • UC
  • Compound elements
  • Getting involved

Resources

Guests

Cristina Chumillas

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Mike Herchel – herchel.com @mikeherchel

MOTW

Quicklink – This module provides an implementation of Google Chrome Lab’s Quicklink library for Drupal. Quicklink is a lightweight (< 1kb compressed) JavaScript library that enables faster subsequent page-loads by prefetching in-viewport links during idle time.

Jun
13
2022
--

New Percona Distributions for PostgreSQL, Percona Server for MySQL 5.7.38-41: Release Roundup June 13, 2022

Percona Releases June 13 2022

It’s time for the release roundup!

Percona is a leading provider of unbiased open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive.

Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights and critical information, as well as links to the full release notes and direct links to the software or service itself to download.

Today’s post includes those releases and updates that have come out since May 31, 2022. Take a look!

Percona Distribution for PostgreSQL 14.3

On June 1, 2022, we released Percona Distribution for PostgreSQL 14.3. It is a collection of tools to assist you in managing PostgreSQL. Percona Distribution for PostgreSQL installs PostgreSQL and complements it with a selection of extensions that enable solving essential practical tasks efficiently. This release is based on PostgreSQL 14.3. The set of extensions supplied with Percona Distribution for PostgreSQL now includes HAProxy, a high-availability and load-balancing solution. A full list is available in the release notes.

Download Percona Distribution for PostgreSQL 14.3

 

Percona Distribution for PostgreSQL 13.7

June 2, 2022, saw the release of Percona Distribution for PostgreSQL 13.7. This release of Percona Distribution for PostgreSQL is based on PostgreSQL 13.7. Along with HAProxy, this release also includes the following packages:

  • llvm 12.0.1 packages for Red Hat Enterprise Linux 8 and derivatives. This fixes compatibility issues with LLVM from upstream.
  • supplemental ETCD packages that can be used for setting up Patroni clusters. These packages are available for the following operating systems:

Download Percona Distribution for PostgreSQL 13.7

 

Percona Distribution for PostgreSQL 12.11

Percona Distribution for PostgreSQL 12.11 was released on June 7, 2022. This release of Percona Distribution for PostgreSQL is based on PostgreSQL 12.11.

Download Percona Distribution for PostgreSQL 12.11

 

Percona Distribution for PostgreSQL 11.16

On June 7, 2022, we released Percona Distribution for PostgreSQL 11.16. Percona Distribution for PostgreSQL is also shipped with the libpq library. It contains “a set of library functions that allow client programs to pass queries to the PostgreSQL backend server and to receive the results of these queries.”

Download Percona Distribution for PostgreSQL 11.16

 

Percona Server for MySQL 5.7.38-41

Percona Server for MySQL 5.7.38-41 was released on June 2, 2022. It is a free, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior performance, scalability, and instrumentation. This release includes all the features and bug fixes available in MySQL 5.7.38 Community Edition in addition to enterprise-grade features developed by Percona. Improvements and bug fixes provided by Oracle for MySQL 5.7.38 and included in Percona Server for MySQL are the following:

  • If a statement cannot be parsed, for example, if the statement contains syntax errors, that statement is not written to the slow query log.
  • Loading an encrypted table failed if purge threads processed the undo records for that table.
  • There was a memory leak when mysqldump was used on more than one table with the --order-by-primary option. The memory allocated to sort each row in a table is now released after every table.

Download Percona Server for MySQL 5.7.38-41

 

Percona Operator for MySQL based on Percona XtraDB Cluster 1.11.0

Percona Operator for MySQL based on Percona XtraDB Cluster 1.11.0 was released on June 3, 2022. It contains everything you need to quickly and consistently deploy and scale Percona XtraDB Cluster instances in a Kubernetes-based environment on-premises or in the cloud.

Download Percona Operator for MySQL based on PXC 1.11.0

 

Percona Backup for MongoDB 1.8.0

On June 9, 2022, we released Percona Backup for MongoDB 1.8.0. It’s a distributed, low-impact solution for consistent backups of MongoDB sharded clusters and replica sets. This is a tool for creating consistent backups across a MongoDB sharded cluster (or a single replica set), and for restoring those backups to a specific point in time. Release highlights include:

Download Percona Backup for MongoDB 1.8.0

 

That’s it for this roundup, and be sure to follow us on Twitter to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments.

Jun
07
2022
--

Migration of MongoDB Enterprise/Community Edition to Percona Server for MongoDB

Migration of MongoDB to Percona Server for MongoDB

In this blog post, we will discuss how to migrate from the enterprise/community edition of MongoDB to Percona Server for MongoDB. But before we begin, let’s take a second to explain why you should migrate to Percona Server for MongoDB.

Percona Distribution for MongoDB is a single solution that combines the best and most important enterprise components from the open source community, designed and tested to work together. Percona customers benefit from no lock-in and lower total cost of ownership, along with the freedom to run their MongoDB environment wherever they want to – in a public or private cloud, on-premises, or hybrid environment.

Percona Server for MongoDB offers the same, or equivalent, security features as MongoDB Enterprise without the price tag, and Percona experts are always available to help, bringing in-depth operational knowledge of MongoDB and open source tools so you can optimize database performance. If you’d like to learn more, please click here.

Anyway, let’s get back to the purpose of the blog: migrating from the enterprise/community edition of MongoDB to Percona Server for MongoDB. 

Before starting the migration process it’s recommended that you perform a full backup (if you don’t have one already). See this post for MongoDB backup best practices.

The migration procedure:

  1. Back up the config files of the Mongo process.
  2. Stop the Mongo process. If it’s a replica set, do it in a rolling fashion.
  3. Remove the MongoDB community/enterprise edition package. For a replica set, do it in a rolling fashion.
  4. Install Percona Server for MongoDB (PSMDB). It can be downloaded from here. Do it in a rolling fashion for a replica set and start the Mongo service.

Detailed migration plan:

Migrate standalone MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL):

   1. Back up the mongo config and service file:

sudo cp /etc/mongod.conf /etc/mongod.conf_bkp

Debian:

sudo cp /lib/systemd/system/mongod.service /lib/systemd/system/mongod.service_bkp

RHEL/CentOS:

sudo cp /usr/lib/systemd/system/mongod.service /usr/lib/systemd/system/mongod.service_bkp

   2. Stop the mongo service first and then remove the MongoDB community/enterprise packages and repo:

To stop the mongo service, connect to the admin database and shut down the server as below:

>use admin

>db.shutdownServer()

Remove the package:

Debian:

sudo apt-get remove mongodb-org mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools
sudo rm /etc/apt/sources.list.d/mongodb-org-4.0.list

RHEL/CentOS:

sudo yum erase $(rpm -qa | grep mongodb-org)

If the deployment is managed by Ops Manager, then:

   a. Unmanage the project in the Ops Manager GUI.

   b. Make sure to uncheck enforce users in the Ops Manager GUI.

   c. Disable the automation agent with the command below:

sudo systemctl disable mongodb-mms-automation-agent

   d. Remove the automation agent with:

sudo apt remove mongodb-mms-automation-agent

   3. Configure percona repo and install Percona Server for MongoDB (PSMDB):

Debian:

sudo wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb

sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb

Enable the repo:

sudo percona-release enable psmdb-44 release

sudo apt-get update

Install the package:

sudo apt-get install percona-server-mongodb

RHEL/CentOS:

sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Enable the repo:

sudo percona-release setup pdmdb-44

Install the package:

sudo yum install percona-server-mongodb

   4. Copy back the mongod config and service file:

sudo cp /etc/mongod.conf_bkp /etc/mongod.conf

Debian:

sudo cp /lib/systemd/system/mongod.service_bkp /lib/systemd/system/mongod.service

RHEL/CentOS:

sudo cp /usr/lib/systemd/system/mongod.service_bkp /usr/lib/systemd/system/mongod.service

NOTE: Kindly check that the permissions and ownership of the data directory, keyfile, and log directory are properly updated for the mongod user. 
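For example, assuming RHEL-style default paths (adjust the dbPath, keyFile, and log locations to match your mongod.conf), ownership can be fixed like this:

sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
sudo chown mongod:mongod /path/to/keyfile    # use your keyFile path from mongod.conf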

Also, if the SELinux policy is enabled, then set the necessary SELinux policy for dbPath, keyFile, and logs as below:

sudo semanage fcontext -a -t mongod_var_lib_t '/dbPath/mongod.*'
sudo chcon -Rv -u system_u -t mongod_var_lib_t '/dbPath/mongod'
sudo restorecon -R -v '/dbPath/mongod'
sudo semanage fcontext -a -t mongod_log_t '/logPath/log.*'
sudo chcon -Rv -u system_u -t mongod_log_t '/logPath/log'
sudo restorecon -R -v '/logPath/log'

   5. Enable and start mongod service:

sudo systemctl daemon-reload
sudo systemctl enable mongod
sudo systemctl start mongod
sudo systemctl status mongod

Migrate Replica set MongoDB Enterprise/Community edition to Percona Server for MongoDB (Debian/RHEL):

This migration process involves stopping the Mongo process in the hidden/secondary node first, removing the MongoDB community/enterprise edition packages, installing Percona Server for MongoDB, and starting it with the same data files. Then, step down the current primary node and repeat the same process.

   a. Make sure to check the current Primary and Secondary/hidden nodes.

db.isMaster().primary

   b. Start with the hidden node first (if there is no hidden node, start with one of the secondary nodes with the lowest priority).

   c. Repeat steps from 1 to 5 from the section Migrate standalone MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL).

   d. Wait for each node to be synced with the Primary. Verify it with:

rs.printSecondaryReplicationInfo()

   e. Once completed for all secondary nodes, step down the current primary with:

rs.stepDown()

   f. Wait for the new node to be elected as a Primary node and repeat steps 1 to 5 from the section Migrate standalone MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL) and wait for the former primary node to be synced with the newly elected Primary.

Migrate Sharded cluster MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL):

   1. Stop the balancer first:

sh.stopBalancer()

   2. Back up the configuration and service files for shards, CSRS, and Query router.

Backup the mongo config and service file for shards and CSRS:

sudo cp /etc/mongod.conf /etc/mongod.conf_bkp

Debian:

sudo cp /lib/systemd/system/mongod.service /lib/systemd/system/mongod.service_bkp

For router:

sudo cp /etc/mongos.conf /etc/mongos.conf_bkp

sudo cp /lib/systemd/system/mongos.service /lib/systemd/system/mongos.service_bkp

RHEL/CentOS:

sudo cp /usr/lib/systemd/system/mongod.service /usr/lib/systemd/system/mongod.service_bkp

For router:

sudo cp /etc/mongos.conf /etc/mongos.conf_bkp
sudo cp /usr/lib/systemd/system/mongos.service /usr/lib/systemd/system/mongos.service_backup

   3. Start with the hidden node of the CSRS first (if there is no hidden node then start with one of the secondary nodes).

Repeat steps a to f from the section Migrate Replica set MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL).

Once the CSRS is migrated to Percona Server for MongoDB, move on to the shards. Repeat steps a to f from the section Migrate Replica set MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL).

   4. After the migration of the CSRS and shards, start migrating the mongos routers. Connect to one router at a time and execute the steps below, then repeat for the remaining routers.

   5. Stop the mongos service and then remove the MongoDB community/enterprise packages and repo:

sudo systemctl stop mongos

Debian:

sudo apt-get remove mongodb-org mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools
sudo rm /etc/apt/sources.list.d/mongodb-org-4.0.list

RHEL/CentOS:

sudo yum erase $(rpm -qa | grep mongodb-org)

   6. Configure percona repos and install Percona Server for MongoDB (PSMDB):

Debian:

sudo wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb

Enable the repo:

sudo percona-release enable psmdb-44 release
sudo apt-get update

Install the package:

sudo apt-get install percona-server-mongodb

RHEL/CentOS:

sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Enable the repo:

sudo percona-release setup pdmdb-44

Install the package:

sudo yum install percona-server-mongodb

Copy back the config and service file:

sudo cp /etc/mongos.conf_bkp /etc/mongos.conf

Debian:

sudo cp /lib/systemd/system/mongos.service_bkp /lib/systemd/system/mongos.service

RHEL/CentOS:

sudo cp /usr/lib/systemd/system/mongos.service_bkp /usr/lib/systemd/system/mongos.service

NOTE: Kindly check that the permissions and ownership of keyfile and log directory are properly updated for the mongod user.

   7. Enable and start mongos service:

sudo systemctl daemon-reload
sudo systemctl enable mongos
sudo systemctl start mongos
sudo systemctl status mongos

   8. Re-enable the balancer with the command below:

sh.startBalancer()

Conclusion

To learn more about the enterprise-grade features available in the license-free Percona Server for MongoDB, we recommend going through our blog MongoDB: Why Pay for Enterprise When Open Source Has You Covered? 

We also encourage you to try our products for MongoDB like Percona Server for MongoDB, Percona Backup for MongoDB, or Percona Operator for MongoDB.

Jun
06
2022
--

Talking Drupal #350 – Accessibility Scanning & Testing

Today we are talking about Accessibility Scanning & Testing with Mike Gifford & Daniel Mundra.

www.talkingDrupal.com/350

Topics

  • Accessibility Scanning and Testing
  • Goals
  • Popular tools
  • Drupal tools
  • Storybook
  • VPAT
  • OpenACR
  • How it replaces VPAT
  • OpenACR and Drupal
  • Tackling Accessibility
  • Tools to use
  • Automation
  • CI/CD
  • Issues that will not be caught

Resources

Guests

Mike Gifford – mgifford.medium.com @mgifford
Daniel Mundra – danielmundra.com

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Mike Herchel – herchel.com @mikeherchel

MOTW

Editoria11y – Editoria11y (“editorial accessibility”) is a user-friendly checker that addresses three critical needs for content authors:

  1. It runs automatically. Modern spellcheck works so well because it is always running; put spellcheck behind a button and few users remember to run it!
  2. It runs in context. Views, Layout Builder, Media and all the other modules Drupal uses to assemble a complex page mean that checkers that run on individual fields cannot “see” errors that appear on render.
  3. It focuses exclusively on content issues: things page editors can easily understand and easily fix. Editoria11y is meant to supplement, not replace, testing with comprehensive tools and real assistive devices.
Jun
06
2022
--

Percona Platform and Percona Account Benefits

Percona Platform and Percona Account

On the 15th of April, Percona introduced the general availability of Percona Platform, and with that step, our company started a new, interesting period.

Popular questions we have received after this date include: 

  • What is Percona Platform, and how does it differ from Percona Account?
  • What is the difference between Percona Platform and Percona Portal? 
  • Why should Percona users connect to Percona Platform?

In this blog post, I will answer these questions and provide a summary of the benefits of a Percona Account. 

What is Percona Platform, and how does it differ from Percona Account?

First, let’s make sure we understand the following concepts:

Percona Platform – A unified experience for developers and database administrators to monitor, manage, secure, and optimize database environments on any infrastructure. It includes open source software and tools, access to Percona Expert support, automated insights, and self-service knowledge base articles.

Percona Account – A single login provides users with seamless access across all Percona properties, as well as insights into your database environment, knowledge base, support tickets, and guidance from Percona Experts. By creating a Percona Account, you get access to Percona resources like Forums, Percona Portal, ServiceNow (for current customers), and, in the future, other Percona resources (www.percona.com, Jira, etc.).

Percona Portal – One location for your Percona relationship, with a high-level overview of your subscription and key resources like the knowledge base, support tickets, and the ability to access your account team from anywhere.

Next, let’s try to understand the complete flow users will go through when they decide to start using Percona Platform. A Percona Account is foundational to Percona Platform. Users can create their own account here on Percona Portal. There are two options for Percona Account creation: either via an email and password or by using a social login from Google or GitHub.

What is the difference between Percona Platform and Percona Portal? 

As described above, Percona Platform is the complete experience of open source software, services, tools, and resources that Percona offers to all users. Percona Portal is part of Percona Platform and can be thought of as its core component for discovering the value Percona offers to registered users.

Why should Percona users connect to Percona Platform?

Percona Platform brings together enterprise-ready distributions of MySQL, PostgreSQL, and MongoDB and a range of open source tools for database monitoring, backup, and management, making it easier to run complex database environments.

Percona Platform consists of several essential units: 

  • Percona Account
  • Percona Monitoring & Management, which provides alerting, advisor insights for your environments, backups, private DBaaS, etc.
  • Percona Portal
  • Support and services

Percona Monitoring and Management, DBaaS 

The heart of Percona Platform is Percona Monitoring and Management (PMM), which provides the observability required to understand database health while offering actionable insights to remediate database incidents or performance issues. 

PMM has already won the hearts of DBAs around the world with its Query Analytics (QAN) and Advisors, and with the Percona Platform release, we have made it even better!

The new technical preview feature that I’m talking about is Private DBaaS. 

Taking less than 20 minutes to configure, Percona Private DBaaS enables you to provide self-service databases to your internal teams in any environment.

Percona Private DBaaS enables developers to create and manage database clusters through a familiar UI and a well-documented API, providing completely open source, ready-to-use MySQL or MongoDB clusters that contain all the necessary Kubernetes settings. Each cluster is set up with high availability, load balancing, and essential monitoring and management.

The Private DBaaS feature of PMM is free from vendor lock-in and does not restrict you to any particular cloud or on-premises infrastructure.

Learn more about DBaaS functionality in our documentation.
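
To give a feel for the API-driven workflow, here is a minimal sketch of requesting a new cluster through PMM’s REST API from a script. The endpoint path, payload fields, server address, and credentials below are illustrative assumptions rather than the documented contract; consult the DBaaS API reference for the exact schema.

    # Minimal sketch: ask PMM's DBaaS API to create a 3-node MySQL cluster.
    # The endpoint path and payload fields are assumptions for illustration;
    # see the PMM DBaaS API reference for the real schema.
    import requests

    PMM_URL = "https://pmm.example.com"  # placeholder PMM Server address
    AUTH = ("admin", "admin")            # default credentials; change in production

    payload = {
        "kubernetes_cluster_name": "my-k8s",  # a cluster already registered in PMM
        "name": "demo-mysql",
        "params": {"cluster_size": 3},
    }
    resp = requests.post(
        f"{PMM_URL}/v1/management/DBaaS/PXCCluster/Create",  # assumed path
        json=payload,
        auth=AUTH,
        verify=False,  # PMM ships with a self-signed certificate by default
    )
    resp.raise_for_status()
    print("Cluster creation requested:", resp.json())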

Percona Portal 

Percona Portal is a simple UI to check and validate your relationship with Percona software and services. Stay tuned for updates. 

To start getting value from Percona Portal, just create an organization in the Portal and connect Percona Monitoring & Management using a Percona Account. Optionally, you can also invite other members to this group in order to share a common view of your organization, any PMM connection details, a list of current organization members, etc. 

When existing Percona customers create their accounts with a corporate email, Percona Portal already knows about their organization. They are automatically added as members and can see additional details about their account, like open support tickets, entitlements, and contact details.

Percona Account and SSO

Percona Platform is a combination of open source software and services. You can start discovering the software’s advantages by installing PMM and monitoring and managing your environments. So what role does the Platform connection play? Let’s briefly clarify this.

SSO is the first advantage you will see upon connecting your PMM instance to Percona Platform. All users in the organization are able to sign in to PMM using their Percona Account credentials. They will also be granted the same roles in PMM and Portal. For example, if a user is an admin on Percona Portal, they do not need to create a new user in PMM; they will instead be automatically added as an admin in PMM. Portal Technical users will be granted the Viewer role in PMM.

Let’s assume a Percona Platform user has already created an account, has their own organization with a couple of members in it, and has connected PMM to the Platform with a Percona Account. They are then able to use SSO for that organization. 

Percona Advisors

Advisors provide automated insights and recommendations within Percona Monitoring and Management, ensuring your database performs at its best. Constantly evolving with technology and user feedback, the Advisors check for:

  • Availability at risk,
  • Replication inconsistencies,
  • Durability at risk,
  • Passwordless users,
  • Insecure connections,
  • Unstable OS configuration,
  • Available performance improvements and more.

The Advisors workflow consists of the following:

  1. If a user has just started PMM and set up monitoring of some databases and environments, we provide them with a set of basic Advisors. To check the list, the user should go to the Advisors section in PMM.
  2. After the user connects PMM and Platform on Percona Portal, this list gets bigger and the set of advisors is updated. 
  3. The final step happens with a Percona Platform subscription. If a user subscribes to Percona Platform, we provide the most advanced list of Advisors, covering many edge cases across database environments.

See the full list of Advisors and their tiers in our documentation here.

As an open source company, we are always eager to engage the community and invite them to innovate with us. 

Here you can find the developer guides on how to create your own Advisors.
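
To give a flavor of what writing one involves: checks are described in YAML, and their logic is written in Starlark, a Python dialect. Below is a minimal sketch of the script portion of a hypothetical check; the entry-point name, input shape, and result fields follow the general pattern from the developer guides but should be treated as an approximation, with the guides as the authoritative reference.

    # Sketch of the Starlark script portion of a custom Advisor check.
    # Starlark is a Python dialect, so this reads like Python. The exact
    # entry-point signature and result fields are defined by the developer
    # guides; treat the names below as an approximation.
    def check(docs):
        # `docs` holds the rows returned by the query declared in the
        # check's YAML wrapper (for example, a SELECT against mysql.user).
        results = []
        for row in docs:
            if row.get("User") == "root" and row.get("authentication_string") == "":
                results.append({
                    "summary": "Passwordless root account",
                    "description": "root@%s has no password set." % row.get("Host", "?"),
                    "severity": "error",
                })
        return results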

How to register a Percona Account?

There are several options to create a Percona account. The first way is to specify your basic data – email, last name, first name, and password.

  1. Go to “Don’t have an account? Create one” on the landing page for Percona Portal. 
  2. Enter a minimal set of information (work email, password, first and last name), confirm that you agree with Percona’s Terms of Service and the Privacy Policy agreement, and hit Create.
  3. To complete your registration, check your email and follow the steps provided there. Click the confirmation link in order to activate your account. You may see messages coming from Okta; this is because Percona Platform uses Okta as its identity provider.
  4. After you have confirmed your account creation, you can log in to Percona Portal.

You can also register a Percona Account if you already have a Google or GitHub account.

When you create a Percona Account with Google or GitHub, Percona will store only your email and authenticate you using data from those identity providers.

  1. Make sure you have an account on Google/GitHub.
  2. On the login page, use the dedicated button; you will be asked to confirm that Percona may use your account information.
  3. Confirm the sign-in if 2FA is enabled for your Google account.
  4. Hurray! Now you have an active account on Percona Portal.

The next option for registering a Percona Account is “Continue with GitHub”. The process is similar to Google’s. 

  1. An important precondition must be met before registration: set your email address in GitHub to public.
  2. From Settings, go to the Emails section and uncheck the Keep my email address private option. These settings are saved automatically. If you prefer not to change this option in GitHub, you can use another account, such as Google, or create one with your email and password.
  3. Once your account is registered, you are ready to go with Percona Portal. Use your Google/GitHub credentials every time you log in.

Conclusions

Start your Percona Platform experience with Percona Monitoring & Management – an open source database monitoring, management, and observability solution for MySQL, PostgreSQL, and MongoDB.

Set up Percona Monitoring and Management (version 2.27.0 or later) following the PMM Documentation.
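
If you prefer to script the setup, here is a minimal sketch, assuming a Docker host for the server and the pmm2-client package on the monitored host; the hostname and credentials are placeholders, and the PMM Documentation remains the authoritative procedure.

    # Minimal sketch: launch PMM Server in Docker, then register a client host.
    # Hostnames and credentials are placeholders; follow the PMM Documentation
    # for the authoritative procedure.
    import subprocess

    # Start the PMM Server container (PMM 2.x image).
    subprocess.run([
        "docker", "run", "--detach", "--restart", "always",
        "--publish", "443:443",
        "--name", "pmm-server",
        "percona/pmm-server:2",
    ], check=True)

    # On each host you want to monitor (with pmm2-client installed),
    # point the agent at the server. PMM uses a self-signed TLS
    # certificate out of the box, hence --server-insecure-tls.
    subprocess.run([
        "pmm-admin", "config",
        "--server-insecure-tls",
        "--server-url", "https://admin:admin@pmm.example.com:443",
    ], check=True)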

The next step is to start exploring Percona Platform and get more value and guidance from Percona database experts by creating a Percona Account on Percona Portal. The key benefits of Percona Platform include: 

  • A Percona Account as a single authentication mechanism to use across all Percona resources,
  • Access to Percona content like blogs, forums, and the knowledge base,
  • Percona Advisors that help optimize, manage, and monitor database environments.

Once you create a Percona Account, you will get basic Advisors, open source software, and access to community support and forums.

When you connect Percona Monitoring and Management to your Percona Account, you will get a more advanced set of Advisors covering your systems’ security, configuration, performance, and data design.

A Percona Platform subscription offers even more advanced Advisors for Percona customers, as well as a fully supported software experience, private DBaaS, and an assigned Percona Expert to ensure your success with the Platform. 

Please also consider sharing your feedback and experience with Percona on our Forums.

Visit Percona Platform online help to view Platform-related documentation, and follow our updates on the Platform what’s new page.
