May 23, 2022
--

Talking Drupal #348 – A Website’s Carbon Footprint

Today we are talking about A Website’s Carbon Footprint with Gerry McGovern.

www.talkingDrupal.com/348

Topics

  • Earth day
  • What is a carbon footprint
  • How do websites contribute
  • How can you calculate your site’s impact
  • Cloud vs dedicated hosting
  • How do you determine a vendor’s impact
  • Small sites VS FAANG
  • How to improve your site

Resources

Guests

Gerry McGovern – gerrymcgovern.com @gerrymcgovern

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Chris Wells – redfinsolutions.com @chrisfromredfin

MOTW

Config Pages – At some point I was tired of creating custom pages using the menu and form API, writing tons of code just to have a page with an ugly form where a client can enter some settings; as soon as the client wants to add some interactions to the page (drag & drop, Ajax, etc.) things start to get hairy. The same story applied to creating a dedicated content type just to theme a single page (like the homepage) and explaining why you can only have one node of that type, or forcing it programmatically.

May 18, 2022
--

Super Potty Trainer – A Shark Tank Product For Toddlers Which Takes The Internet By Storm!

Super Potty Trainer is the newest and most innovative potty training product on the market. It was recently featured on Shark Tank and has taken the Internet by storm! This unique product makes potty training easier and more fun for both parents and toddlers.

The Super Potty Trainer features a patented design that allows toddlers to sit on the regular toilet seat, giving them extra back support and making it more comfortable. This simple idea has made potty training more efficient and less messy for parents, and less stressful and more fun for toddlers!

The history behind the Super Potty Trainer that makes it so special

Super Potty Trainer was created by a mom – Judy Abrahams – who was potty training her own toddler. She saw how difficult and stressful it was for both parents and toddlers: her child was afraid of falling in, and she constantly had to clean up accidents. To make things worse, Judy’s daughter was so stressed out that she would become constipated!

Judy realized that there had to be a better way to potty train and set out to find it. After months of research and development, she created the Super Potty Trainer: a potty training seat that gives toddlers the extra back support they need, making it more comfortable and less stressful for them.

The product was an instant success with parents and toddlers alike and quickly became a must-have for any family potty training their child. It has even been featured on Shark Tank, where it received rave reviews from the sharks!

The Internet is abuzz with Super Potty Trainer reviews, and it is quickly becoming the go-to product for potty training. If you are looking for a potty training solution that is effective and fun, look no further than Super Potty Trainer!

How does the Super Potty Trainer work?

The Super Potty Trainer is a potty training system that helps toddlers transition from diapers to the toilet. The way this innovative product works is simple: it attaches to the regular toilet and provides extra back support for toddlers. This back support makes it more comfortable for toddlers to sit on the toilet and helps prevent them from making a mess.

See how it works – it is easy as 1-2-3!

  1. Lift the toilet seat. Place the Super Potty Trainer on a clean, dry toilet rim.
  2. Lower the seat to keep it in place.
  3. Your toddler is ready to start potty training!

The Super Potty Trainer allows you to adjust the depth of your toilet seat so it is always a comfortable fit for your toddler: simply move Super Potty Trainer forward or backward on the toilet to suit your child’s preferences.

This product is also easy to clean and is made from durable, high-quality materials.

Will Super Potty Trainer fit my toilet?

The Super Potty Trainer is designed to fit most toilets. There is an easy way to check if your toilet is compatible with the product. All you need to do is check if there is a little gap between the toilet seat and the rim of your toilet. If there is a gap, your toilet is compatible with the Super Potty Trainer.

It is also possible to make your toilet compatible with Super Potty Trainer if there is no gap between the toilet seat and the rim. You can do this by replacing the toilet seat with one that has bumpers – these will create a small gap and make the toilet compatible with the product.

Loved by pediatricians and parents

Both pediatricians and parents love Super Potty Trainer. Pediatricians recommend Super Potty Trainer because it’s a simple and safe way to potty train toddlers. It allows a toddler to have a proper position on the toilet, which is essential for proper elimination and helps prevent constipation and urinary infections.

Parents love Super Potty Trainer because it makes potty training less stressful for both the parent and toddlers. The product is also very affordable and easy to use.

If you are looking for an easy, stress-free way to potty train your toddler, then Super Potty Trainer is your product!

The post Super Potty Trainer – A Shark Tank Product For Toddlers Which Takes The Internet By Storm! appeared first on Comfy Bummy.

May 16, 2022
--

Talking Drupal #347 – GitLab CI

Today we are talking about GitLab CI with Chris Wells.


www.talkingDrupal.com/347

Topics

  • CI
  • GitLab CI
  • What is Drupal transitioning from?
  • Benefits of CI
  • Key concepts and terminology
  • Commonly used CI tools
  • Community Benefits
  • GitLab CI with other tools
  • Coolest integration at Redfin

Resources

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Chris Wells – redfinsolutions.com @chrisfromredfin

May 16, 2022
--

Running Rocket.Chat with Percona Server for MongoDB on Kubernetes

Our goal is to have a Rocket.Chat deployment which uses highly available Percona Server for MongoDB cluster as the backend database and it all runs on Kubernetes. To get there, we will do the following:

  • Start a Google Kubernetes Engine (GKE) cluster across multiple availability zones. It can be any other Kubernetes flavor or service, but I rely on multi-AZ capability in this blog post.
  • Deploy Percona Operator for MongoDB and database cluster with it
  • Deploy Rocket.Chat with specific affinity rules
    • Rocket.Chat will be exposed via a load balancer


Percona Operator for MongoDB, compared to other solutions, is not only the most feature-rich but also comes with various management capabilities for your MongoDB clusters – backups, scaling (including sharding), zero-downtime upgrades, and many more. There are no hidden costs and it is truly open source.

This blog post is a walkthrough of running a production-grade deployment of Rocket.Chat with Percona Operator for MongoDB.

Rock’n’Roll

All YAML manifests that I use in this blog post can be found in this repository.

Deploy Kubernetes Cluster

The following command deploys a GKE cluster named percona-rocket in three availability zones:

gcloud container clusters create --zone us-central1-a --node-locations us-central1-a,us-central1-b,us-central1-c percona-rocket --cluster-version 1.21 --machine-type n1-standard-4 --preemptible --num-nodes=3

Read more about this in the documentation.
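Once the command returns, it is worth confirming the node pool really spans the three zones. A minimal sketch, assuming kubectl is already pointed at the new cluster:

```shell
# Print each node with its well-known zone label; with --num-nodes=3 and
# three node locations you should see nodes in us-central1-a, -b, and -c.
# Guarded so the snippet is harmless where no cluster is configured.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes -L topology.kubernetes.io/zone
fi
```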

Deploy MongoDB

I’m going to use helm to deploy the Operator and the cluster.

Add helm repository:

helm repo add percona https://percona.github.io/percona-helm-charts/

Install the Operator into the percona namespace:

helm install psmdb-operator percona/psmdb-operator --create-namespace --namespace percona

Deploy the cluster of Percona Server for MongoDB nodes:

helm install my-db percona/psmdb-db -f psmdb-values.yaml -n percona

Replica set nodes are going to be distributed across availability zones. To get there, I altered the affinity keys in the corresponding sections of psmdb-values.yaml:

antiAffinityTopologyKey: "topology.kubernetes.io/zone"

Prepare MongoDB

For Rocket.Chat to connect to our database cluster, we need to create the users. By default, clusters provisioned with our Operator have a userAdmin user; its password is set in psmdb-values.yaml:

MONGODB_USER_ADMIN_PASSWORD: userAdmin123456

For production-grade systems, do not forget to change this password or create dedicated secrets to provision those. Read more about user management in our documentation.
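If you lose track of the generated credentials, they can also be read back from the users Secret. A sketch, assuming the Secret name follows the my-db helm release used in this post:

```shell
# Decode the userAdmin password the Operator stored in the users Secret.
# Secret name my-db-psmdb-db-secrets is an assumption based on the release name.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get secret my-db-psmdb-db-secrets -n percona \
    -o jsonpath='{.data.MONGODB_USER_ADMIN_PASSWORD}' | base64 -d
  echo
fi
```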

Spin up a client Pod to connect to the database:

kubectl run -i --rm --tty percona-client1 --image=percona/percona-server-mongodb:4.4.10-11 --restart=Never -- bash -il

Connect to the database with userAdmin:

[mongodb@percona-client1 /]$ mongo "mongodb://userAdmin:userAdmin123456@my-db-psmdb-db-rs0-0.percona/admin?replicaSet=rs0"

We are going to create the following:

  • rocketchat database
  • rocketChat user to store data and connect to the database
  • oplogger user to provide access to the oplog for Rocket.Chat
    • Rocket.Chat uses Meteor Oplog tailing to improve performance. It is optional.
use rocketchat
db.createUser({
  user: "rocketChat",
  pwd: passwordPrompt(),
  roles: [
    { role: "readWrite", db: "rocketchat" }
  ]
})

use admin
db.createUser({
  user: "oplogger",
  pwd: passwordPrompt(),
  roles: [
    { role: "read", db: "local" }
  ]
})
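Rocket.Chat will later need connection strings for both of these users. A sketch of how they could be assembled; the host names follow the per-Pod Services used earlier, the password variables are placeholders for what you typed at the passwordPrompt() calls, and the exact rocket-values.yaml keys depend on the chart version:

```shell
# Placeholder passwords chosen at the passwordPrompt() calls above.
ROCKETCHAT_PW='changeme1'
OPLOGGER_PW='changeme2'

# Per-Pod Services of the rs0 replica set in the percona namespace
# (same naming pattern as the mongo URI used with userAdmin above).
RS_HOSTS="my-db-psmdb-db-rs0-0.percona:27017,my-db-psmdb-db-rs0-1.percona:27017,my-db-psmdb-db-rs0-2.percona:27017"

# Main database URL, and the oplog URL (oplogger authenticates against admin).
MONGO_URL="mongodb://rocketChat:${ROCKETCHAT_PW}@${RS_HOSTS}/rocketchat?replicaSet=rs0"
MONGO_OPLOG_URL="mongodb://oplogger:${OPLOGGER_PW}@${RS_HOSTS}/local?replicaSet=rs0&authSource=admin"

echo "$MONGO_URL"
echo "$MONGO_OPLOG_URL"
```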

Deploy Rocket.Chat

I will use helm here to maintain the same approach. 

helm install -f rocket-values.yaml my-rocketchat rocketchat/rocketchat --version 3.0.0

You can find rocket-values.yaml in the same repository. Please make sure you set the correct passwords in the corresponding YAML fields.

As you can see, I also do the following:

  • Line 11: expose Rocket.Chat through the LoadBalancer service type
  • Lines 13-14: set the number of Rocket.Chat Pod replicas. We want three – one per availability zone.
  • Lines 16-23: set affinity to distribute Pods across availability zones

Load Balancer will be created with a public IP address:

$ kubectl get service my-rocketchat-rocketchat
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
my-rocketchat-rocketchat   LoadBalancer   10.32.17.26   34.68.238.56   80:32548/TCP   12m

You should now be able to connect to 34.68.238.56 and enjoy your highly available Rocket.Chat installation.


Clean Up

Uninstall all helm charts to remove MongoDB cluster, the Operator, and Rocket.Chat:

helm uninstall my-rocketchat
helm uninstall my-db -n percona
helm uninstall psmdb-operator -n percona

Things to Consider

Ingress

Instead of exposing Rocket.Chat through a load balancer, you may also try ingress. By doing so, you can integrate it with cert-manager and have a valid TLS certificate for your chat server.

Mongos

It is also possible to run a sharded MongoDB cluster with Percona Operator. If you do so, Rocket.Chat will connect to mongos Service, instead of the replica set nodes. But you will still need to connect to the replica set directly to get oplogs.

Conclusion

We encourage you to try out Percona Operator for MongoDB with Rocket.Chat and let us know on our community forum your results.

There is always room for improvement and a time to find a better way. Please let us know if you face any issues with contributing your ideas to Percona products. You can do that on the Community Forum or JIRA. Read more about contribution guidelines for Percona Operator for MongoDB in CONTRIBUTING.md.

Percona Operator for MongoDB contains everything you need to quickly and consistently deploy and scale Percona Server for MongoDB instances into a Kubernetes cluster on-premises or in the cloud. The Operator enables you to improve time to market with the ability to quickly deploy standardized and repeatable database environments. Deploy your database with a consistent and idempotent result no matter where they are used.

May 16, 2022
--

KMIP in Percona Distribution for MongoDB, Preview Release of PMM 2.28.0: Release Roundup May 16, 2022

Percona Releases May 16 2022

It’s time for the release roundup!

Percona is a leading provider of unbiased open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive.

Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. It offers highlights and critical information, as well as links to the full release notes and direct links to the software or service itself to download.

Today’s post includes those releases and updates that have come out since May 2, 2022. Take a look!

Percona Distribution for MongoDB 5.0.8

On May 10, 2022, we released Percona Distribution for MongoDB 5.0.8. It is a freely available MongoDB database alternative, giving you a single solution that combines enterprise components from the open source community, designed and tested to work together. It includes Percona Server for MongoDB and Percona Backup for MongoDB. This release of Percona Distribution for MongoDB includes the support of master key rotation for data encrypted using the Keys Management Interoperability Protocol (KMIP) (tech preview feature). Thus, users can comply with regulatory standards for data security.

Download Percona Distribution for MongoDB 5.0.8

 

Percona Server for MongoDB 5.0.8-7

Percona Server for MongoDB 5.0.8-7 was released on May 10, 2022. It is an enhanced, source available, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 5.0.8 Community Edition. It supports MongoDB 5.0.8 protocols and drivers. It now supports master key rotation for data encrypted using the Keys Management Interoperability Protocol (KMIP) protocol (tech preview feature). This improvement allows users to comply with regulatory standards for data security.

Download Percona Server for MongoDB 5.0.8-7

 

Percona Operator for MongoDB 1.12.0

On May 5, 2022, Percona Operator for MongoDB 1.12.0 was released. It automates the creation, modification, or deletion of items in your Percona Server for MongoDB environment. The Operator contains the necessary Kubernetes settings to maintain a consistent Percona Server for MongoDB instance. Release highlights include the ability to use the Amazon Web Services feature of authenticating applications running on EC2 instances based on Identity and Access Management (IAM) roles assigned to the instance, support for Multi Cluster Services (MCS), and OpenAPI schema generation for the Operator CRD, which allows Kubernetes to perform Custom Resource validation and saves users from occasionally applying deploy/cr.yaml with syntax typos.

Download Percona Operator for MongoDB 1.12.0

 

Percona Monitoring and Management 2.28.0

Percona Monitoring and Management 2.28.0 was released on May 12, 2022. It is an open source database monitoring, management, and observability solution for MySQL, PostgreSQL, and MongoDB. Highlights in this release include Advisor checks enabled by default, Ubuntu 22.04 LTS support, and VictoriaMetrics upgraded to 1.76.1.

Download Percona Monitoring and Management 2.28.0

 

Updates to Percona Distribution for PostgreSQL 11.15, 12.10, 13.6, 14.2

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together. Patroni, pgBackRest, pg_repack, and pgaudit are amongst the innovative components we utilize.

Released on May 5, 2022, these updates include the GA version of pg_stat_monitor 1.0.0.

https://docs.percona.com/postgresql/14/release-notes-v14.2.upd2.html

https://docs.percona.com/postgresql/13/release-notes-v13.6.upd2.html

https://docs.percona.com/postgresql/12/release-notes-v12.10.upd2.html

https://docs.percona.com/postgresql/11/release-notes-v11.15.upd2.html

 

Percona Distribution for MySQL (PS-Based Variant) 8.0.28

On May 12, 2022, Percona Distribution for MySQL (PS-based variant) 8.0.28 was released. It is a single solution with the best and most critical enterprise components from the MySQL open-source community, designed and tested to work together. This release is focused on the Percona Server for MySQL-based deployment variant and is based on Percona Server for MySQL 8.0.28-19.

The following lists a number of the bug fixes for MySQL 8.0.28, provided by Oracle, and included in Percona Server for MySQL and Percona Distribution for MySQL:

  • The ASCII shortcut for CHARACTER SET latin1 and UNICODE shortcut for CHARACTER SET ucs2 are deprecated and raise a warning to use CHARACTER SET instead. The shortcuts will be removed in a future version.
  • A stored function and a loadable function with the same name can share the same namespace. Add the schema name when invoking a stored function in the shared namespace. The server generates a warning when function names collide.
  • InnoDB supports ALTER TABLE ... RENAME COLUMN operations when using ALGORITHM=INSTANT.
  • The limit for innodb_open_files now includes temporary tablespace files. The temporary tablespace files were not counted in the innodb_open_files in previous versions.

Download Percona Distribution for MySQL (PS-based variant) 8.0.28

 

Percona Server for MySQL 8.0.28-19

Percona Server for MySQL 8.0.28-19 was released on May 13, 2022. It is a free, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior performance, scalability, and instrumentation. Improvements in this release include support for setting MyRocks variables dynamically with the SET_VAR syntax; in addition, the ability to change log file locations dynamically is now restricted.

Note: Starting with Percona Server for MySQL 8.0.28-19, the TokuDB storage engine is no longer supported. We have removed the storage engine from the installation packages and the storage engine is disabled in our binary builds.

Download Percona Server for MySQL 8.0.28-19

 

Percona XtraBackup 2.4.26

On May 9, 2022, we released Percona XtraBackup 2.4.26. It enables MySQL backups without blocking user queries. Percona XtraBackup is ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups. Note: Percona XtraBackup 2.4 does not support making backups of databases created in MySQL 8.0, Percona Server for MySQL 8.0, or Percona XtraDB Cluster 8.0. Use Percona XtraBackup 8.0 to make backups for these versions.

Download Percona XtraBackup 2.4.26

That’s it for this roundup, and be sure to follow us on Twitter to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments.

May 12, 2022
--

Percona Operator for MongoDB and Kubernetes MCS: The Story of One Improvement

Percona Operator for MongoDB and Kubernetes MCS

Percona Operator for MongoDB has supported multi-cluster or cross-site replication deployments since version 1.10. This functionality is extremely useful if you want to have a disaster recovery deployment or perform a migration from or to a MongoDB cluster running in Kubernetes. In a nutshell, it allows you to use Operators deployed in different Kubernetes clusters to manage and expand replica sets.

For example, you have two Kubernetes clusters: one in Region A, another in Region B.

  • In Region A you deploy your MongoDB cluster with Percona Operator. 
  • In Region B you deploy unmanaged MongoDB nodes with another installation of Percona Operator.
  • You configure both Operators, so that nodes in Region B are added to the replica set in Region A.

In case of failure of Region A, you can switch your traffic to Region B.

Migrating MongoDB to Kubernetes describes the migration process using this functionality of the Operator.

This feature was released in tech preview, and we received lots of positive feedback from our users. But one of our customers raised an internal ticket pointing out that cross-site replication functionality does not work with Multi-Cluster Services. This started the investigation and the creation of this ticket – K8SPSMDB-625.

This blog post will go into the deep roots of this story and how it is solved in the latest release of Percona Operator for MongoDB version 1.12.

The Problem

Multi-Cluster Services, or MCS, allows you to expand network boundaries for the Kubernetes cluster and share Service objects across these boundaries. Some call it another take on Kubernetes Federation. This feature is already available on some managed Kubernetes offerings, such as Google Kubernetes Engine (GKE) and AWS Elastic Kubernetes Service (EKS). Submariner uses the same logic and primitives under the hood.

MCS Basics

To understand the problem, we need to understand how multi-cluster services work. Let’s take a look at the picture below:

multi-cluster services

  • We have two Pods in different Kubernetes clusters
  • We add these two clusters into our MCS domain
  • Each Pod has a service and IP-address which is unique to the Kubernetes cluster
  • MCS introduces new Custom Resources – ServiceImport and ServiceExport.
    • Once you create a ServiceExport object in one cluster, a ServiceImport object appears in all clusters in your MCS domain.
    • This ServiceImport object is in the svc.clusterset.local domain and, with the network magic introduced by MCS, can be accessed from any cluster in the MCS domain.

The above means that if I have an application in the Kubernetes cluster in Region A, I can connect to the Pod in the Kubernetes cluster in Region B through a domain name like my-pod.<namespace>.svc.clusterset.local. And it works from another cluster as well.
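A quick way to verify such a clusterset name actually resolves is to look it up from a throwaway Pod. A sketch; the Pod and namespace names are placeholders:

```shell
# Build the clusterset FQDN for an imported Service (placeholder names).
NAME=my-pod
NS=my-namespace
FQDN="${NAME}.${NS}.svc.clusterset.local"

# Resolve it from inside the cluster; guarded so the snippet is safe to source
# where no cluster is configured.
if command -v kubectl >/dev/null 2>&1; then
  kubectl run -i --rm --restart=Never mcs-dns-test --image=busybox -- \
    nslookup "$FQDN"
fi
```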

MCS and Replica Set

Here is how cross-site replication works with Percona Operator if you use load balancer:

MCS and Replica Set

All replica set nodes have a dedicated Service and a load balancer. A replica set in the MongoDB cluster is formed using these public IP addresses. The external node is added using a public IP address as well:

  replsets:
  - name: rs0
    size: 3
    externalNodes:
    - host: 123.45.67.89

All nodes can reach each other, which is required to form a healthy replica set.

Here is how it looks when you have clusters connected through multi-cluster service:

Instead of load balancers, replica set nodes are exposed through Cluster IPs. We have ServiceExport and ServiceImport resources. All looks good at the networking level and it should work, but it does not.

The problem is in the way the Operator builds the MongoDB replica set in Region A. To register an external node from Region B in the replica set, we will use the MCS domain name in the corresponding section:

  replsets:
  - name: rs0
    size: 3
    externalNodes:
    - host: rs0-4.mongo.svc.clusterset.local

Now our rs.status() will look like this:

"name" : "my-cluster-rs0-0.mongo.svc.cluster.local:27017"
"role" : "PRIMARY"
...
"name" : "my-cluster-rs0-1.mongo.svc.cluster.local:27017"
"role" : "SECONDARY"
...
"name" : "my-cluster-rs0-2.mongo.svc.cluster.local:27017"
"role" : "SECONDARY"
...
"name" : "rs0-4.mongo.svc.clusterset.local:27017"
"role" : "UNKNOWN"

As you can see, the Operator formed a replica set out of three nodes using the svc.cluster.local domain, as that is how it should be done when you expose nodes with the ClusterIP Service type. In this case, a node in Region B cannot reach any node in Region A, as it tries to connect to a domain that is local to the cluster in Region A.

In the picture below, you can easily see where the problem is:

The Solution

Luckily, we have a Special Interest Group (SIG), a Kubernetes Enhancement Proposal (KEP), and multiple implementations for enabling Multi-Cluster Services. Having a KEP is great since we can be sure the implementations from different providers (e.g., GCP, AWS) will more or less follow the same standard.

There are two fields in the Custom Resource that control MCS in the Operator:

spec:
  multiCluster:
    enabled: true
    DNSSuffix: svc.clusterset.local
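One hedged way to flip these fields on a running cluster is a merge patch against the Custom Resource. A sketch; the CR name cluster1 and namespace psmdb are assumptions:

```shell
# Merge patch that enables MCS on an existing PSMDB Custom Resource.
PATCH='{"spec":{"multiCluster":{"enabled":true,"DNSSuffix":"svc.clusterset.local"}}}'

# Guarded so the snippet is harmless where no cluster is configured.
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch psmdb cluster1 -n psmdb --type=merge -p "$PATCH"
fi
```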

Let’s see what is happening in the background with these flags set.

ServiceImport and ServiceExport Objects

Once you enable MCS by patching the CR with spec.multiCluster.enabled: true, the Operator creates a ServiceExport object for each Service. These ServiceExports will be detected by the MCS controller in the cluster, and eventually a ServiceImport for each ServiceExport will be created in the same namespace in each cluster that has MCS enabled.
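You can watch this pairing happen. A sketch; the namespace is a placeholder:

```shell
# List exported and imported Services; each ServiceExport should eventually
# be matched by a ServiceImport in every member cluster of the MCS domain.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get serviceexport,serviceimport -n psmdb
fi
```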

As you see, we made a decision and empowered the Operator to create ServiceExport objects. There are two main reasons for doing that:

  • If any infrastructure-as-a-code tool is used, it would require additional logic and level of complexity to automate the creation of required MCS objects. If Operator takes care of it, no additional work is needed. 
  • Our Operators take care of the infrastructure for the database, including Service objects. It just felt logical to expand the reach of this functionality to MCS.

Replica Set and Transport Encryption

The root cause of the problem that we are trying to solve here lies in the networking field, where external replica set nodes try to connect to the wrong domain names. Now, when you enable multi-cluster and set DNSSuffix (it defaults to svc.clusterset.local), the Operator does the following:

  • The replica set is formed using the MCS domain set in DNSSuffix
  • The Operator generates TLS certificates as usual, but adds the DNSSuffix domains into the picture
With this approach, the traffic between nodes flows as expected and is encrypted by default.


Things to Consider

MCS APIs

Please note that the operator won’t install MCS APIs and controllers to your Kubernetes cluster. You need to install them by following your provider’s instructions prior to enabling MCS for your PSMDB clusters. See our docs for links to different providers.

Operator detects if MCS is installed in the cluster by API resources. The detection happens before controllers are started in the operator. If you installed MCS APIs while the operator is running, you need to restart the operator. Otherwise, you’ll see an error like this:

{
  "level": "error",
  "ts": 1652083068.5910048,
  "logger": "controller.psmdb-controller",
  "msg": "Reconciler error",
  "name": "cluster1",
  "namespace": "psmdb",
  "error": "wrong psmdb options: MCS is not available on this cluster",
  "errorVerbose": "...",
  "stacktrace": "..."
}
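To check up front whether the MCS APIs are registered in the cluster, you can query the API group they live in. A sketch:

```shell
# ServiceExport/ServiceImport belong to the multicluster.x-k8s.io API group;
# an empty result means the MCS APIs are not installed in this cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl api-resources --api-group=multicluster.x-k8s.io
fi
```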

ServiceImport Provisioning Time

It might take some time for ServiceImport objects to be created in the Kubernetes cluster. You can see the following messages in the logs while creation is in progress:

{
  "level": "info",
  "ts": 1652083323.483056,
  "logger": "controller_psmdb",
  "msg": "waiting for service import",
  "replset": "rs0",
  "serviceExport": "cluster1-rs0"
}

During testing, we saw wait times of up to 10-15 minutes. If you see your cluster is stuck in the initializing state waiting for service imports, it’s a good idea to check the usage and quotas for your environment.

DNSSuffix

We also made a decision to automatically generate TLS certificates for the Percona Server for MongoDB cluster with the *.clusterset.local domain, even if MCS is not enabled. This approach simplifies the process of enabling MCS for a running MongoDB cluster. It does not make much sense to change the DNSSuffix field, unless you have hard requirements from your service provider, but we still allow such a change.

If you want to enable MCS with a cluster deployed with an operator version below 1.12, you need to update your TLS certificates to include *.clusterset.local SANs. See the docs for instructions.
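To confirm the certificates already carry the clusterset SANs, you can inspect the generated TLS Secret. A sketch; the Secret name cluster1-ssl and the namespace are assumptions based on common Operator naming:

```shell
# Dump the subjectAltName extension of the generated TLS certificate and
# look for *.clusterset.local entries among the DNS names.
if command -v kubectl >/dev/null 2>&1 && command -v openssl >/dev/null 2>&1; then
  kubectl get secret cluster1-ssl -n psmdb -o jsonpath='{.data.tls\.crt}' \
    | base64 -d | openssl x509 -noout -ext subjectAltName
fi
```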

Conclusion

Business relies on applications and infrastructure that serves them more than ever nowadays. Disaster Recovery protocols and various failsafe mechanisms are routine for reliability engineers, not an annoying task in the backlog. 

With multi-cluster deployment functionality in Percona Operator for MongoDB, we want to equip users to build highly available and secured database clusters with minimal effort.

Percona Operator for MongoDB is truly open source and provides users with a way to deploy and manage their enterprise-grade MongoDB clusters on Kubernetes. We encourage you to try this new Multi-Cluster Services integration and let us know your results on our community forum. You can find some scripts that would help you provision your first MCS clusters on GKE or EKS here.


May 10, 2022
--

Spring Cleaning: Discontinuing RHEL 6/CentOS 6 (glibc 2.12) and 32-bit Binary Builds of Percona Software

Discontinuing RHEL 6/CentOS 6

As you are probably aware, Red Hat Enterprise Linux 6 (RHEL 6 or EL 6 in short) officially reached “End of Life” (EOL) on 2020-11-30 and is now in the so-called Extended Life Phase, which basically means that Red Hat will no longer provide bug fixes or security fixes.

Even though EL 6 and its compatible derivatives like CentOS 6 had reached EOL some time ago already, we continued providing binary builds for selected MySQL-related products for this platform.

However, this became increasingly difficult, as the MySQL code base continued to evolve and now depends on tools and functionality that are no longer provided by the operating system out of the box. This meant we already had to perform several modifications in order to prepare binary builds for this platform, e.g. installing custom compiler versions or newer versions of various system libraries.

As of MySQL 8.0.26, Oracle announced that they deprecated the TLSv1 and TLSv1.1 connection protocols and plan to remove these in a future MySQL version in favor of the more secure TLSv1.2 and TLSv1.3 protocols. TLSv1.3 requires that both the MySQL server and the client application be compiled with OpenSSL 1.1.1 or higher. This version of OpenSSL is not available in binary package format on EL 6 anymore, and manually rebuilding it turned out to be a “yak shaving exercise” due to the countless dependencies.

Our build & release team was able to update the build environments on all of our supported platforms (EL 7, EL 8, supported Debian and Ubuntu versions) for this new requirement. However, we have not been successful in getting all the required components and their dependencies to build on EL 6, as it would have required rebuilding quite a significant amount of core OS packages and libraries to achieve this.

Moreover, switching to this new OpenSSL version would have also required us to include some additional shared libraries in our packages to satisfy the runtime dependencies, adding more complexity and potential security issues.

In general, we believe that running a production system on an OS that is no longer actively supported by a vendor is not a recommended best practice from a security perspective, and we do not want to encourage such practices.

For these reasons, and to simplify our build/release and QA processes, we decided to drop support for EL 6 for all products now. Percona Server for MySQL 8.0.27 was the last version for which we built binaries for EL 6 against the previous version of OpenSSL.

Going forward, the following products will no longer be built and released on this platform:

  • Percona Server for MySQL 5.7 and 8.0
  • Percona XtraDB Cluster 5.7
  • Percona XtraBackup 2.4 and 8.0
  • Percona Toolkit 3.2

This means we will stop both building RPM packages for EL 6 and providing binary tarballs linked against glibc 2.12.

Note that this OS platform was also the last one on which we still provided 32-bit binaries.

Most Enterprise Linux distributions stopped providing 32-bit versions of their operating systems quite some time ago. As an example, Red Hat Enterprise Linux 7 (released in June 2014) was the first release to no longer support installing directly on 32-bit Intel/AMD hardware (i686/x86). Back in 2018, we decided that we would no longer offer 32-bit binaries on new platforms or new major releases of our software.

Given today’s database workloads, we also think that 32-bit systems are simply not adequate anymore, and we already stopped building newer versions of our software for this architecture.
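If you are unsure which camp a machine falls into, a minimal check like the following (an illustration, not a Percona tool) reports whether the hardware and interpreter are 32-bit or 64-bit:

```python
# Determine whether the current machine and interpreter are 32-bit or
# 64-bit before deciding which binaries to download.
import platform
import sys

is_64bit = sys.maxsize > 2**32      # pointer width of the running interpreter
machine = platform.machine()        # e.g. "x86_64" or "i686"

print("64-bit interpreter:", is_64bit)
print("machine architecture:", machine)

if machine in ("i386", "i486", "i586", "i686"):
    print("32-bit x86 hardware: new Percona binaries are not provided")
```

The same information is available from `uname -m` on any Linux shell.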

The demand for 32-bit downloads has also been declining steadily. A recent analysis of our download statistics revealed that only 2.3% of our total binary downloads were i386 binaries, and these downloads originated from just 0.4% of the IP addresses observed.

This change affects the following products:

  • Percona Server for MySQL 5.7
  • Percona XtraDB Cluster 5.7
  • Percona XtraBackup 2.4
  • Percona Toolkit

We’ve updated the Percona Release Lifecycle Overview web page accordingly to reflect this change. Previously released binaries for these platforms and architectures will of course remain accessible from our repositories.

If you’re still running EL 6 or a 32-bit database or OS, we strongly recommend upgrading to a more modern platform. Our Percona Services team would be happy to help you with that!

May
09
2022
--

Talking Drupal #346 – Open Source Compensation

Today we are talking about Open Source Compensation with Tim Lehnen.

www.talkingDrupal.com/346

Topics

  • How was DrupalCon?
  • Suggestion from listener
  • Open Source like cURL and OpenSSL
  • Developer burnout and frustration
  • Question about boosting other contribution to C-Level
  • Great ways to compensate
  • What are you working on now?

Resources

Guests

Tim Lehnen – @timlehnen

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan John Picozzi – www.epam.com @johnpicozzi Chris Wells – redfinsolutions.com @chrisfromredfin

MOTW

Tome Tome is a static site generator, and a static storage system for content.

May
09
2022
--

MySQL 8.0.29 and Percona XtraBackup Incompatibilities

Earlier last week, Oracle released its Q2 release series. Unlike previous releases, this one breaks backward compatibility with previous versions of MySQL.

MySQL 8.0.29 extended the support for the online DDL algorithm INSTANT. Prior to 8.0.29, only adding columns to the end of the table was supported.

In 8.0.29, this functionality was extended to allow the INSTANT algorithm to add columns in any position of the table, as well as to drop columns. This new functionality required the redo log version to increase and new redo log types to be added, making it incompatible with older versions of the MySQL server and with older versions of Percona XtraBackup. Please note that an in-place minor version downgrade of the server is also not supported.
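To illustrate the kind of DDL affected (using a hypothetical `orders` table, not taken from the original post):

```sql
-- Before 8.0.29, only appending a column could use ALGORITHM=INSTANT:
ALTER TABLE orders ADD COLUMN note VARCHAR(64), ALGORITHM=INSTANT;

-- As of 8.0.29, INSTANT also covers arbitrary positions and column drops,
-- which is what required the new redo log version:
ALTER TABLE orders ADD COLUMN note VARCHAR(64) AFTER order_id, ALGORITHM=INSTANT;
ALTER TABLE orders DROP COLUMN note, ALGORITHM=INSTANT;
```

Running either of the last two statements on an 8.0.29 server produces redo log records that older servers and older XtraBackup versions cannot parse.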

We are currently working on making Percona XtraBackup compatible with these changes. Until then, users relying on Percona XtraBackup for backups are advised not to upgrade the MySQL server to 8.0.29.

May
09
2022
--

MyDumper 0.12.3-1 is Now Available

The new MyDumper 0.12.3-1 version, which includes many new features and bug fixes, is now available. You can download the code from here. MyDumper is open source and maintained by the community; it is not a Percona, MariaDB, or MySQL product.

In this new version we focused on:

  • Refactoring tasks: Splitting mydumper.c and myloader.c into multiple files will allow us to find bugs and implement features more easily.
  • Adding compression capabilities on stream: One of the more interesting features added to MyDumper was the ability to stream to another server; now the stream can also be compressed, making transmission much faster.
  • Support for the Percona Monitoring and Management (PMM) tool: Being able to monitor scheduled processes is always desirable, even more so if you can see what the process is doing internally; that is why we use the textfile extended metrics to show how many jobs have been done and the queue status.
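The textfile mechanism mentioned above works by writing metrics in the Prometheus text exposition format to a file that the collector scrapes. A minimal sketch of that format follows; the metric names and values are invented for illustration and are not MyDumper's actual metrics:

```python
# Sketch of the Prometheus textfile-collector format that PMM's external
# metrics mechanism consumes. Metric names/values here are hypothetical.
import os
import tempfile

metrics = {
    "mydumper_jobs_done_total": 42,
    "mydumper_queue_length": 3,
}

lines = []
for name, value in metrics.items():
    lines.append(f"# TYPE {name} gauge")
    lines.append(f"{name} {value}")
body = "\n".join(lines) + "\n"

# A scheduled job would write this file into the textfile collector's
# directory; a temp path is used here so the sketch runs anywhere.
path = os.path.join(tempfile.gettempdir(), "mydumper.prom")
with open(path, "w") as f:
    f.write(body)

print(body)
```

Each run of the tool simply rewrites the file, and the monitoring side picks up the latest values on its next scrape.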

Enhancement:

  • Adding PMM support for MyDumper #660 #587
  • Adding compression capabilities on stream #605
  • Dockerfile: use multi-stage builds #603
  • restore defaults-file override behavior #601

Fix:

  • Fixing when DEFINER is absent and stream/intermediate sync when finish #662 #659
  • Adding better message when schema-create files are not provided #655 #617
  • Re-adding --csv option #653
  • Fixing trigger absent when –no-schema is used #647 #541
  • Allow compilation on Clang #641 #563
  • Fixing version number #639 #637
  • Adding critical message when DDL fails #636 #634
  • Fix print time #633 #632
  • print sigint confirmation to stdout instead of to a log file #616 #610
  • Dockerfile: add missing libglib2.0 package #608
  • [BUG] mydumper zstd doesn’t run on CentOS due to missing libzstd package dependency #602

Help Wanted:

  • [BUG] myloader always uses socket to connect to db #650
  • [BUG] real_db_name isn’t found #613

Documentation:

  • Update README.md #614
  • Use 0.11.5-2 in installation examples #606
  • Question: What exactly is inconsistent about inconsistent backups? #644

Download MyDumper 0.12.3 Today!

MyDumper has its own merch now, too. By shopping for the merch you contribute to the project. Check it out! 
