May 24, 2022
--

Store and Manage Logs of Percona Operator Pods with PMM and Grafana Loki

While it is convenient to view the logs of MySQL or MongoDB pods with kubectl logs, sometimes the log is purged when the pod is deleted, which makes searching historical logs difficult. Grafana Loki, a log aggregation tool from Grafana Labs, can be installed in an existing Kubernetes environment to store historical logs and build a comprehensive logging stack.

What Is Grafana Loki

Grafana Loki is a set of tools that can be integrated to provide a comprehensive logging stack solution. Grafana, Loki, and Promtail are the major components of this solution. The Promtail agent collects and ships logs to the Loki datastore, where they are visualized with Grafana. While collecting logs, Promtail can label, convert, and filter them before sending. Next, Loki receives the logs and indexes their metadata. At this point, the logs are ready to be visualized in Grafana, and the administrator can use Loki's query language, LogQL, to explore them.
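
For instance, Promtail's pipeline stages can attach a label extracted from each log line before shipping it to Loki. The snippet below is a minimal sketch of such a scrape config; the job name, regex, and label are illustrative, not taken from the chart's defaults:

scrape_configs:
  - job_name: kubernetes-pods            # hypothetical job name
    kubernetes_sd_configs:
      - role: pod                        # discover pods running on the node
    pipeline_stages:
      - regex:
          expression: '(?P<level>INFO|WARN|ERROR)'  # pull a log level out of each line
      - labels:
          level:                         # promote the captured group to a Loki label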

 


Installing Grafana Loki in Kubernetes Environment

We will use the official helm chart to install Loki. The Loki stack helm chart supports the installation of various components like promtail, fluentd, Prometheus, and Grafana:

$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm search repo grafana

NAME                                CHART VERSION APP VERSION DESCRIPTION
grafana/grafana                     6.29.2       8.5.0      The leading tool for querying and visualizing t...
grafana/grafana-agent-operator      0.1.11       0.24.1     A Helm chart for Grafana Agent Operator
grafana/fluent-bit                  2.3.1        v2.1.0     Uses fluent-bit Loki go plugin for gathering lo...
grafana/loki                        2.11.1       v2.5.0     Loki: like Prometheus, but for logs.
grafana/loki-canary                 0.8.0        2.5.0      Helm chart for Grafana Loki Canary
grafana/loki-distributed            0.48.3       2.5.0      Helm chart for Grafana Loki in microservices mode
grafana/loki-simple-scalable        1.0.0        2.5.0      Helm chart for Grafana Loki in simple, scalable...
grafana/loki-stack                  2.6.4        v2.4.2     Loki: like Prometheus, but for logs.

For a quick introduction, we will install only loki-stack and promtail. We will use the Grafana pod that is deployed by Percona Monitoring and Management to visualize the logs.

$ helm install loki-stack grafana/loki-stack --create-namespace --namespace loki-stack --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=5Gi

Let’s see what has been installed:

$ kubectl get all -n loki-stack
NAME                            READY   STATUS    RESTARTS   AGE
pod/loki-stack-promtail-xqsnl   1/1     Running   0          85s
pod/loki-stack-promtail-lt7pd   1/1     Running   0          85s
pod/loki-stack-promtail-fch2x   1/1     Running   0          85s
pod/loki-stack-promtail-94rcp   1/1     Running   0          85s
pod/loki-stack-0                1/1     Running   0          85s

NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/loki-stack-headless   ClusterIP   None           <none>        3100/TCP   85s
service/loki-stack            ClusterIP   10.43.24.113   <none>        3100/TCP   85s

NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/loki-stack-promtail   4         4         4       4            4           <none>          85s

NAME                          READY   AGE
statefulset.apps/loki-stack   1/1     85s

Promtail and loki-stack pods have been created, together with a service that loki-stack will use to publish the logs to Grafana for visualization.

You can see the promtail pods are deployed by a daemonset, which spawns a promtail pod on every node. This ensures the logs from every pod on all the nodes are collected and shipped to the loki-stack pod for centralized storage and management.

$ kubectl get pods -n loki-stack -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP           NODE                 NOMINATED NODE   READINESS GATES
loki-stack-promtail-xqsnl   1/1     Running   0          2m7s   10.42.0.17   phong-dinh-default   <none>           <none>
loki-stack-promtail-lt7pd   1/1     Running   0          2m7s   10.42.1.12   phong-dinh-node1     <none>           <none>
loki-stack-promtail-fch2x   1/1     Running   0          2m7s   10.42.2.13   phong-dinh-node2     <none>           <none>
loki-stack-promtail-94rcp   1/1     Running   0          2m7s   10.42.3.11   phong-dinh-node3     <none>           <none>
loki-stack-0                1/1     Running   0          2m7s   10.42.0.19   phong-dinh-default   <none>           <none>

Integrating Loki With PMM Grafana

Next, we will add Loki as a data source of PMM Grafana, so we can use PMM Grafana to visualize the logs. You can do it from the GUI or with the kubectl CLI.

Below are the steps to add the data source from the PMM GUI:

Navigate to https://<PMM-IP-address>:9443/graph/datasources, then select Add data source.

Then select Loki.

Next, in the Settings, enter the DNS record of the loki-stack service in the HTTP URL box.

You can also use the command below to add the data source. Make sure to specify the name of the PMM pod; in this command, monitoring-0 is the PMM pod.

$ kubectl -n default exec -it monitoring-0 -- bash -c "curl 'http://admin:verysecretpassword@127.0.0.1:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{ \"orgId\": 1, \"name\": \"Loki\", \"type\": \"loki\", \"typeLogoUrl\": \"\", \"access\": \"proxy\", \"url\": \"http://loki-stack.loki-stack.svc.cluster.local:3100\", \"password\": \"\", \"user\": \"\", \"database\": \"\", \"basicAuth\": false, \"basicAuthUser\": \"\", \"basicAuthPassword\": \"\", \"withCredentials\": false, \"isDefault\": false, \"jsonData\": {}, \"secureJsonFields\": {}, \"version\": 1, \"readOnly\": false }'"

Exploring the Logs in PMM Grafana

Now you can explore the pod logs in the Grafana UI: navigate to Explore and select Loki in the dropdown list as the data source:


In the Log Browser, you can select the appropriate labels to form your first LogQL query. For example, I select the following attributes:

{app="percona-xtradb-cluster", component="pxc", container="logs", pod="cluster1-pxc-1"}

 

Click Show logs and you can see all the logs of the cluster1-pxc-1 pod.

You can perform a simple filter with |= "message-content". For example, filter all the messages related to State transfer with:

{app="percona-xtradb-cluster", component="pxc", container="logs",pod="cluster1-pxc-1"} |= "State transfer"
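
LogQL also supports regular expression filters with |~; assuming standard RE2 semantics, a case-insensitive variant of the same search would be:

{app="percona-xtradb-cluster", component="pxc", container="logs", pod="cluster1-pxc-1"} |~ "(?i)state transfer"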

Conclusion

Deploying Grafana Loki in the Kubernetes environment is feasible and straightforward. Grafana Loki can be integrated easily with Percona Monitoring and Management to provide both centralized logging and comprehensive monitoring when running Percona XtraDB Cluster and Percona Server for MongoDB in Kubernetes.

If you had any thoughts while reading this, please share your comments below.

May 24, 2022
--

A Quick Peek at the PostgreSQL 15 Beta 1


PostgreSQL 15 Beta 1 was announced on May 15th, and to keep with the spirit of past peeks at MySQL and MariaDB products, this is a cursory glance at the release notes to see what is coming. It is a preview of the features that will be in the Generally Available version when it is released this fall. Beta 1 is not for production use, but the developers encourage you to test this code to find bugs or other problems before the GA release. My comments are in italics, and I tend to highlight the stuff I find of note.

The TL;DR

If you are used to the once-a-quarter releases of MySQL or MariaDB, this is an overwhelming amount of new goodies being delivered. PostgreSQL releases less frequently but packs plenty of solid engineering into each release. The claim of being the most advanced open source relational database is well-founded. My one-word summary is 'wow'. It is very evident that the code is delivered when it is ready and not due to a deadline.

For Developers

SQL gets MERGE from the SQL standard for conditionally performing writes (INSERT, UPDATE, or DELETE) when synchronizing two tables. This could be emulated in previous releases by using INSERT ... ON CONFLICT, but MERGE is more batch-oriented. This is also known as an upsert. The 'how do I combine two tables based on the values' type of question on Reddit and Stack Overflow can now be pointed here.
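
As a sketch of the new syntax (the tables and columns here are invented for illustration), synchronizing a staging table into a target table could look like this:

MERGE INTO inventory AS t
USING staging AS s
  ON t.sku = s.sku
WHEN MATCHED THEN
  UPDATE SET qty = t.qty + s.qty              -- row exists: update it
WHEN NOT MATCHED THEN
  INSERT (sku, qty) VALUES (s.sku, s.qty);    -- row missing: insert it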

More JSON support, including a very impressive JSON_TABLE function to temporarily turn unstructured JSON data into a relational table. Very handy!
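
Going by the SQL/JSON standard syntax the beta implements, a hedged sketch (the document and columns are made up):

SELECT jt.*
FROM json_table(
  '[{"name": "loki", "port": 3100}, {"name": "grafana", "port": 3000}]'::jsonb,
  '$[*]'
  COLUMNS (
    name text PATH '$.name',   -- each array element becomes a row
    port int  PATH '$.port'    -- with its fields as typed columns
  )
) AS jt;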

And more regular expression functions – regexp_count, regexp_instr, regexp_like, and regexp_substr – plus range_agg.
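
A quick sketch of the regexp additions on a throwaway string (results follow the obvious semantics):

SELECT regexp_count('cat dog cat', 'cat'),      -- 2: number of matches
       regexp_instr('cat dog cat', 'dog'),      -- 5: position of the first match
       regexp_like('cat dog cat', '^cat'),      -- true: does the string match?
       regexp_substr('cat dog cat', 'dog.*');   -- 'dog cat': first matching substring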

In prior versions, NULL values were always indexed as distinct values, but this can now be changed by declaring a constraint with UNIQUE NULLS NOT DISTINCT.
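
In other words, a unique constraint can now treat NULLs as equal, so at most one NULL is allowed (hypothetical table):

CREATE TABLE accounts (
  id    bigint GENERATED ALWAYS AS IDENTITY,
  email text UNIQUE NULLS NOT DISTINCT   -- a second row with a NULL email is rejected
);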

Performance

Reportedly there are big gains for sorting large datasets, a parallelized SELECT DISTINCT, faster window functions (row_number(), rank(), and count()), and a \copy bulk loader built into psql. If you use the postgres_fdw foreign data wrapper, you can now have transactions committed in parallel.

Compression

LZ4 and Zstandard (zstd) compression have been added to many components. The WAL can now take advantage of those two compression tools, and pg_basebackup can take advantage of those two plus gzip.

Logical Replication

Row filtering and column filtering are now offered, and there is also the ability to publish all the tables in a schema instead of all the tables in the database. Yes, MySQL-ites, there is a difference.
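
Sketched out, the new filters read roughly like this (publication, table, column, and schema names are invented):

-- publish only two columns of the rows that satisfy a predicate
CREATE PUBLICATION active_users FOR TABLE users (id, email) WHERE (active);

-- publish every table in one schema
CREATE PUBLICATION app_tables FOR TABLES IN SCHEMA app;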

DBAs/SREs Get Goodies Too

Logging can now be done using the jsonlog format for the consumption of programs that aggregate and analyze logs. Slow checkpoints and autovacuums are automatically logged. You can now see configuration values by using \dconfig in psql, and by default it shows you the non-default values. This will be loved by anyone who has tried to figure out why two servers behave differently and gotten tired of using diff. All server-level statistics are now stored in shared memory, so adios to the statistics collector process. And there is now a pager option for \watch.
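
Enabling the structured format should be a one-line change in postgresql.conf – a sketch, assuming the setting names match the release notes:

# postgresql.conf
logging_collector = on        # jsonlog output goes through the log collector
log_destination = 'jsonlog'   # emit one JSON object per log line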

Views can now execute using the privileges of the user instead of the creator of the view. Anyone bitten by this type of problem is now sighing with relief.
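
The new per-view option behind this is, as I read the notes, security_invoker (hypothetical view):

CREATE VIEW my_orders WITH (security_invoker = true) AS
  SELECT * FROM orders;   -- permissions checked against the querying user, not the view owner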

There is now the ability to gather extended statistics for a parent and all its children, augmenting the existing ability to record parent and child statistics separately.

Generic periodic log messages telling you the cause of the delay are generated during slow server starts. Finally, the reason for 'why the #$@ is it not up yet?' can be told.

Conclusion

PostgreSQL 15, like all its predecessors, has an impressive amount of solid engineering in it. For someone coming from less-featured open source relational databases like me, the amount of work being delivered is staggering. My 10,000-foot view is that an awesome amount of things to make PostgreSQL more reliable, more useful, and maintain its reputation of being on the cutting edge has been handed over for us to kick the tires, rev the engine, and take it for a spin. I am sure a lot of backend database specialists will be muttering 'gee, I could really use that' as they pore over the items in the release notes.

May 23, 2022
--

Talking Drupal #348 – A Website’s Carbon Footprint

Today we are talking about A Website’s Carbon Footprint with Gerry McGovern.

www.talkingDrupal.com/348

Topics

  • Earth day
  • What is a carbon footprint
  • How do websites contribute
  • How can you calculate your site’s impact
  • Cloud vs dedicated hosting
  • How do you determine a vendor’s impact
  • Small sites VS FAANG
  • How to improve your site

Resources

Guests

Gerry McGovern – gerrymcgovern.com @gerrymcgovern

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Chris Wells – redfinsolutions.com @chrisfromredfin

MOTW

Config Pages – At some point I was tired of creating custom pages using the menu and form API, writing tons of code just to have a page with an ugly form where a client can enter some settings, and as soon as a client wants to add some interactions to the page (drag & drop, AJAX, etc.) things start to get hairy. The same story applied to creating a dedicated content type just to theme a single page (like the homepage) and explaining why you can only have one node of this type, or forcing it programmatically.

May 18, 2022
--

Super Potty Trainer – A Shark Tank Product For Toddlers Which Takes The Internet By Storm!

Super Potty Trainer is the newest and most innovative potty training product on the market. It was recently featured on Shark Tank and has taken the Internet by storm! This unique product makes potty training easier and more fun for both parents and toddlers.

The Super Potty Trainer features a patented design that allows toddlers to sit on the regular toilet seat, giving them extra back support and making it more comfortable. This simple idea has made potty training more efficient and less messy for parents, and less stressful and more fun for toddlers!

The history behind the Super Potty Trainer that makes it so special

Super Potty Trainer was created by a mom – Judy Abrahams – who was potty training her own toddler. She saw how difficult and stressful it was for both parents and toddlers: her child was afraid of falling in, and she constantly had to clean up accidents. To make things worse, Judy’s daughter was so stressed out that she would become constipated!

Judy realized that there had to be a better way to potty train and set out to find it. After months of research and development, she created the Super Potty Trainer: a potty training seat that gives toddlers the extra back support they need, making it more comfortable and less stressful for them.

The product was an instant success with parents and toddlers alike and quickly became a must-have for any family potty training their child. It has even been featured on Shark Tank, where it received rave reviews from the sharks!

The Internet is abuzz with Super Potty Trainer reviews, and it is quickly becoming the go-to product for potty training. If you are looking for a potty training solution that is effective and fun, look no further than Super Potty Trainer!

How does the Super Potty Trainer work?

The Super Potty Trainer is a potty training system that helps toddlers transition from diapers to the toilet. The way this innovative product works is simple; it attaches to the regular toilet and provides extra back support for toddlers. This back support makes it more comfortable for toddlers to sit on the toilet and helps prevent them from making a mess.

See how it works – it is easy as 1-2-3!

  1. Lift the toilet seat. Place the Super Potty Trainer on a clean, dry toilet rim.
  2. Lower the seat to keep it in place.
  3. Your toddler is ready to start potty training!

The Super Potty Trainer allows you to adjust the depth of your toilet seat so it is always a comfortable fit for your toddler: simply move Super Potty Trainer forward or backward on the toilet to suit your child’s preferences.

This product is also easy to clean and is made from durable, high-quality materials.

Will Super Potty Trainer fit my toilet?

The Super Potty Trainer is designed to fit most toilets. There is an easy way to check if your toilet is compatible with the product. All you need to do is check if there is a little gap between the toilet seat and the rim of your toilet. If there is a gap, your toilet is compatible with the Super Potty Trainer.

It is also possible to make your toilet compatible with the Super Potty Trainer if there is no gap between the toilet seat and the rim. You can do this by replacing the toilet seat with one that has bumpers – these will create a small gap and make the toilet compatible with the product.

Loved by pediatricians and parents

Both pediatricians and parents love Super Potty Trainer. Pediatricians recommend Super Potty Trainer because it's a simple and safe way to potty train toddlers. It allows a toddler to have a proper position on the toilet, which is essential for healthy elimination, and that prevents constipation and urinary infections.

Parents love Super Potty Trainer because it makes potty training less stressful for both the parent and toddlers. The product is also very affordable and easy to use.

If you are looking for an easy, stress-free way to potty train your toddler, then Super Potty Trainer is your product!

The post Super Potty Trainer – A Shark Tank Product For Toddlers Which Takes The Internet By Storm! appeared first on Comfy Bummy.

May 16, 2022
--

Talking Drupal #347 – GitLab CI

Today we are talking about GitLab CI with Chris Wells.

   

www.talkingDrupal.com/347

Topics

  • CI
  • GitLab CI
  • What is Drupal transitioning from?
  • Benefits of CI
  • Key concepts and terminology
  • Commonly used CI tools
  • Community Benefits
  • GitLab CI with other tools
  • Coolest integration at Redfin

Resources

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Chris Wells – redfinsolutions.com @chrisfromredfin

May 16, 2022
--

Running Rocket.Chat with Percona Server for MongoDB on Kubernetes


Our goal is to have a Rocket.Chat deployment that uses a highly available Percona Server for MongoDB cluster as the backend database, with everything running on Kubernetes. To get there, we will do the following:

  • Start a Google Kubernetes Engine (GKE) cluster across multiple availability zones. It can be any other Kubernetes flavor or service, but I rely on multi-AZ capability in this blog post.
  • Deploy Percona Operator for MongoDB and a database cluster with it
  • Deploy Rocket.Chat with specific affinity rules
    • Rocket.Chat will be exposed via a load balancer


Percona Operator for MongoDB, compared to other solutions, is not only the most feature-rich but also comes with various management capabilities for your MongoDB clusters – backups, scaling (including sharding), zero-downtime upgrades, and many more. There are no hidden costs and it is truly open source.

This blog post is a walkthrough of running a production-grade deployment of Rocket.Chat with Percona Operator for MongoDB.

Rock’n’Roll

All YAML manifests that I use in this blog post can be found in this repository.

Deploy Kubernetes Cluster

The following command deploys a GKE cluster named percona-rocket in 3 availability zones:

gcloud container clusters create --zone us-central1-a --node-locations us-central1-a,us-central1-b,us-central1-c percona-rocket --cluster-version 1.21 --machine-type n1-standard-4 --preemptible --num-nodes=3

Read more about this in the documentation.

Deploy MongoDB

I’m going to use helm to deploy the Operator and the cluster.

Add helm repository:

helm repo add percona https://percona.github.io/percona-helm-charts/

Install the Operator into the percona namespace:

helm install psmdb-operator percona/psmdb-operator --create-namespace --namespace percona

Deploy the cluster of Percona Server for MongoDB nodes:

helm install my-db percona/psmdb-db -f psmdb-values.yaml -n percona

Replica set nodes are going to be distributed across availability zones. To get there, I altered the affinity keys in the corresponding sections of psmdb-values.yaml:

antiAffinityTopologyKey: "topology.kubernetes.io/zone"
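
For context, here is a rough sketch of where that key sits in the values file (the exact layout can differ between chart versions):

replsets:
  - name: rs0
    size: 3
    affinity:
      # spread replica set members across zones instead of hosts
      antiAffinityTopologyKey: "topology.kubernetes.io/zone"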

Prepare MongoDB

For Rocket.Chat to connect to our database cluster, we need to create the users. By default, clusters provisioned with our Operator have a userAdmin user, whose password is set in psmdb-values.yaml:

MONGODB_USER_ADMIN_PASSWORD: userAdmin123456

For production-grade systems, do not forget to change this password or create dedicated secrets to provision those. Read more about user management in our documentation.

Spin up a client Pod to connect to the database:

kubectl run -i --rm --tty percona-client1 --image=percona/percona-server-mongodb:4.4.10-11 --restart=Never -- bash -il

Connect to the database with userAdmin:

[mongodb@percona-client1 /]$ mongo "mongodb://userAdmin:userAdmin123456@my-db-psmdb-db-rs0-0.percona/admin?replicaSet=rs0"

We are going to create the following:

  • The rocketchat database
  • The rocketChat user to store data and connect to the database
  • The oplogger user to provide access to the oplog for Rocket.Chat
    • Rocket.Chat uses Meteor oplog tailing to improve performance. It is optional.

use rocketchat
db.createUser({
  user: "rocketChat",
  pwd: passwordPrompt(),
  roles: [
    { role: "readWrite", db: "rocketchat" }
  ]
})

use admin
db.createUser({
  user: "oplogger",
  pwd: passwordPrompt(),
  roles: [
    { role: "read", db: "local" }
  ]
})

Deploy Rocket.Chat

I will use helm here to maintain the same approach. 

helm install -f rocket-values.yaml my-rocketchat rocketchat/rocketchat --version 3.0.0

You can find rocket-values.yaml in the same repository. Please make sure you set the correct passwords in the corresponding YAML fields.
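
As a hedged sketch of the relevant connection fields (the key names follow the rocketchat/rocketchat chart as I understand it; hosts and passwords are placeholders built from the users created above):

externalMongodbUrl: "mongodb://rocketChat:<password>@my-db-psmdb-db-rs0-0.percona:27017/rocketchat?replicaSet=rs0"
externalMongodbOplogUrl: "mongodb://oplogger:<password>@my-db-psmdb-db-rs0-0.percona:27017/local?replicaSet=rs0&authSource=admin"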

As you can see, I also do the following:

  • Line 11: expose Rocket.Chat through the LoadBalancer service type
  • Lines 13-14: set the number of replicas of Rocket.Chat Pods. We want three – one per availability zone.
  • Lines 16-23: set affinity to distribute Pods across availability zones (a sketch of this section follows the list)
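
That affinity section boils down to a preferred pod anti-affinity on the zone topology key – a minimal sketch (the label selector values are hypothetical):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: topology.kubernetes.io/zone   # at most one Pod per zone where possible
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: rocketchat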

Load Balancer will be created with a public IP address:

$ kubectl get service my-rocketchat-rocketchat
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
my-rocketchat-rocketchat   LoadBalancer   10.32.17.26   34.68.238.56   80:32548/TCP   12m

You should now be able to connect to 34.68.238.56 and enjoy your highly available Rocket.Chat installation.


Clean Up

Uninstall all helm charts to remove MongoDB cluster, the Operator, and Rocket.Chat:

helm uninstall my-rocketchat
helm uninstall my-db -n percona
helm uninstall psmdb-operator -n percona

Things to Consider

Ingress

Instead of exposing Rocket.Chat through a load balancer, you may also try ingress. By doing so, you can integrate it with cert-manager and have a valid TLS certificate for your chat server.

Mongos

It is also possible to run a sharded MongoDB cluster with Percona Operator. If you do so, Rocket.Chat will connect to the mongos Service instead of the replica set nodes. But you will still need to connect to the replica set directly to get the oplog.

Conclusion

We encourage you to try out Percona Operator for MongoDB with Rocket.Chat and let us know your results on our community forum.

There is always room for improvement and a time to find a better way. Please let us know if you face any issues with contributing your ideas to Percona products. You can do that on the Community Forum or JIRA. Read more about contribution guidelines for Percona Operator for MongoDB in CONTRIBUTING.md.

Percona Operator for MongoDB contains everything you need to quickly and consistently deploy and scale Percona Server for MongoDB instances into a Kubernetes cluster on-premises or in the cloud. The Operator enables you to improve time to market with the ability to quickly deploy standardized and repeatable database environments. Deploy your database with a consistent and idempotent result no matter where they are used.

May 16, 2022
--

KMIP in Percona Distribution for MongoDB, Preview Release of PMM 2.28.0: Release Roundup May 16, 2022


It’s time for the release roundup!

Percona is a leading provider of unbiased open source database solutions that allow organizations to easily, securely, and affordably maintain business agility, minimize risks, and stay competitive.

Our Release Roundups showcase the latest Percona software updates, tools, and features to help you manage and deploy our software. They offer highlights and critical information, as well as links to the full release notes and direct links to download the software or service itself.

Today’s post includes those releases and updates that have come out since May 2, 2022. Take a look!

Percona Distribution for MongoDB 5.0.8

On May 10, 2022, we released Percona Distribution for MongoDB 5.0.8. It is a freely available MongoDB database alternative, giving you a single solution that combines enterprise components from the open source community, designed and tested to work together. It includes Percona Server for MongoDB and Percona Backup for MongoDB. This release of Percona Distribution for MongoDB includes the support of master key rotation for data encrypted using the Keys Management Interoperability Protocol (KMIP) (tech preview feature). Thus, users can comply with regulatory standards for data security.

Download Percona Distribution for MongoDB 5.0.8

 

Percona Server for MongoDB 5.0.8-7

Percona Server for MongoDB 5.0.8-7 was released on May 10, 2022. It is an enhanced, source available, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 5.0.8 Community Edition. It supports MongoDB 5.0.8 protocols and drivers. It now supports master key rotation for data encrypted using the Keys Management Interoperability Protocol (KMIP) protocol (tech preview feature). This improvement allows users to comply with regulatory standards for data security.

Download Percona Server for MongoDB 5.0.8-7

 

Percona Operator for MongoDB 1.12.0

On May 5, 2022, Percona Operator for MongoDB 1.12.0 was released. It automates the creation, modification, or deletion of items in your Percona Server for MongoDB environment. The Operator contains the necessary Kubernetes settings to maintain a consistent Percona Server for MongoDB instance. Release highlights include the ability to use the Amazon Web Services feature of authenticating applications running on EC2 instances based on Identity and Access Management (IAM) roles assigned to the instance, support for Multi Cluster Services (MCS), and the OpenAPI schema now being generated for the Operator CRD, which allows Kubernetes to perform Custom Resource validation and saves users from occasionally applying deploy/cr.yaml with syntax typos.

Download Percona Operator for MongoDB 1.12.0

 

Percona Monitoring and Management 2.28.0

Percona Monitoring and Management 2.28.0 was released on May 12, 2022. It is an open source database monitoring, management, and observability solution for MySQL, PostgreSQL, and MongoDB. Highlights in this release include Advisor checks enabled by default, Ubuntu 22.04 LTS support, and VictoriaMetrics upgraded to 1.76.1.

Download Percona Monitoring and Management 2.28.0

 

Updates to Percona Distribution for PostgreSQL 11.15, 12.10, 13.6, 14.2

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together. Patroni, pgBackRest, pg_repack, and pgaudit are amongst the innovative components we utilize.

Released on May 5, 2022, these updates include the GA version of pg_stat_monitor 1.0.0.

https://docs.percona.com/postgresql/14/release-notes-v14.2.upd2.html

https://docs.percona.com/postgresql/13/release-notes-v13.6.upd2.html

https://docs.percona.com/postgresql/12/release-notes-v12.10.upd2.html

https://docs.percona.com/postgresql/11/release-notes-v11.15.upd2.html

 

Percona Distribution for MySQL (PS-Based Variant) 8.0.28

On May 12, 2022, Percona Distribution for MySQL (PS-based variant) 8.0.28 was released. It is a single solution with the best and most critical enterprise components from the MySQL open-source community, designed and tested to work together. This release is focused on the Percona Server for MySQL-based deployment variant and is based on Percona Server for MySQL 8.0.28-19.

The following lists a number of the bug fixes for MySQL 8.0.28, provided by Oracle, and included in Percona Server for MySQL and Percona Distribution for MySQL:

  • The ASCII shortcut for CHARACTER SET latin1 and UNICODE shortcut for CHARACTER SET ucs2 are deprecated and raise a warning to use CHARACTER SET instead. The shortcuts will be removed in a future version.
  • A stored function and a loadable function with the same name can share the same namespace. Add the schema name when invoking a stored function in the shared namespace. The server generates a warning when function names collide.
  • InnoDB supports ALTER TABLE ... RENAME COLUMN operations when using ALGORITHM=INSTANT.
  • The limit for innodb_open_files now includes temporary tablespace files. The temporary tablespace files were not counted in the innodb_open_files in previous versions.

Download Percona Distribution for MySQL (PS-based variant) 8.0.28

 

Percona Server for MySQL 8.0.28-19

Percona Server for MySQL 8.0.28-19 was released on May 13, 2022. It is a free, fully compatible, enhanced, and open source drop-in replacement for any MySQL database. It provides superior performance, scalability, and instrumentation. Improvements in this release include the ability to set MyRocks variables dynamically using the SET_VAR syntax, while the ability to change log file locations dynamically is now restricted.

Note: Starting with Percona Server for MySQL 8.0.28-19, the TokuDB storage engine is no longer supported. We have removed the storage engine from the installation packages and the storage engine is disabled in our binary builds.

Download Percona Server for MySQL 8.0.28-19

 

Percona XtraBackup 2.4.26

On May 9, 2022, we released Percona XtraBackup 2.4.26. It enables MySQL backups without blocking user queries. Percona XtraBackup is ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups. Note: Percona XtraBackup 2.4 does not support making backups of databases created in MySQL 8.0, Percona Server for MySQL 8.0, or Percona XtraDB Cluster 8.0. Use Percona XtraBackup 8.0 to make backups for these versions.

Download Percona XtraBackup 2.4.26

That’s it for this roundup, and be sure to follow us on Twitter to stay up-to-date on the most recent releases! Percona is a leader in providing best-of-breed enterprise-class support, consulting, managed services, training, and software for MySQL, MongoDB, PostgreSQL, MariaDB, and other open source databases in on-premises and cloud environments.

May 12, 2022
--

Percona Operator for MongoDB and Kubernetes MCS: The Story of One Improvement


Percona Operator for MongoDB has supported multi-cluster or cross-site replication deployments since version 1.10. This functionality is extremely useful if you want to have a disaster recovery deployment or perform a migration from or to a MongoDB cluster running in Kubernetes. In a nutshell, it allows you to use Operators deployed in different Kubernetes clusters to manage and expand replica sets.

For example, you have two Kubernetes clusters: one in Region A, another in Region B.

  • In Region A you deploy your MongoDB cluster with Percona Operator. 
  • In Region B you deploy unmanaged MongoDB nodes with another installation of Percona Operator.
  • You configure both Operators, so that nodes in Region B are added to the replica set in Region A.

In case of failure of Region A, you can switch your traffic to Region B.

Migrating MongoDB to Kubernetes describes the migration process using this functionality of the Operator.

This feature was released as a tech preview, and we received lots of positive feedback from our users. But one of our customers raised an internal ticket pointing out that the cross-site replication functionality does not work with Multi-Cluster Services. This started the investigation and the creation of this ticket – K8SPSMDB-625.

This blog post will go into the deep roots of this story and how it is solved in the latest release of Percona Operator for MongoDB version 1.12.

The Problem

Multi-Cluster Services, or MCS, allows you to expand network boundaries for the Kubernetes cluster and share Service objects across those boundaries. Some call it another take on Kubernetes Federation. This feature is already available on some managed Kubernetes offerings, such as Google Kubernetes Engine (GKE) and AWS Elastic Kubernetes Service (EKS). Submariner uses the same logic and primitives under the hood.

MCS Basics

To understand the problem, we need to understand how multi-cluster services work. Let’s take a look at the picture below:

multi-cluster services

  • We have two Pods in different Kubernetes clusters
  • We add these two clusters into our MCS domain
  • Each Pod has a service and an IP address which is unique to its Kubernetes cluster
  • MCS introduces new Custom Resources – ServiceImport and ServiceExport.
    • Once you create a ServiceExport object in one cluster, a ServiceImport object appears in all clusters in your MCS domain.
    • This ServiceImport object is in the svc.clusterset.local domain and, with the network magic introduced by MCS, can be accessed from any cluster in the MCS domain.

The above means that if I have an application in the Kubernetes cluster in Region A, I can connect to the Pod in the Kubernetes cluster in Region B through a domain name like my-pod.<namespace>.svc.clusterset.local. And it works from another cluster as well.
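
To make that concrete, exporting a Service requires only one small object; this sketch uses the MCS API group from the KEP (the name and namespace are hypothetical, and the name must match the Service being exported):

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-pod        # same name as the Service to export
  namespace: mongo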

MCS and Replica Set

Here is how cross-site replication works with Percona Operator if you use a load balancer:


All replica set nodes have a dedicated Service and a load balancer. A replica set in the MongoDB cluster is formed using these public IP addresses. The external node is added using a public IP address as well:

  replsets:
  - name: rs0
    size: 3
    externalNodes:
    - host: 123.45.67.89

All nodes can reach each other, which is required to form a healthy replica set.

Here is how it looks when you have clusters connected through multi-cluster service:

Instead of load balancers, replica set nodes are exposed through Cluster IPs. We have ServiceExport and ServiceImport resources. All looks good at the networking level, and it should work, but it does not.

The problem is in the way the Operator builds the MongoDB replica set in Region A. To register an external node from Region B in the replica set, we use the MCS domain name in the corresponding section:

  replsets:
  - name: rs0
    size: 3
    externalNodes:
    - host: rs0-4.mongo.svc.clusterset.local

Now our rs.status() will look like this:

"name" : "my-cluster-rs0-0.mongo.svc.cluster.local:27017"
"role" : "PRIMARY"
...
"name" : "my-cluster-rs0-1.mongo.svc.cluster.local:27017"
"role" : "SECONDARY"
...
"name" : "my-cluster-rs0-2.mongo.svc.cluster.local:27017"
"role" : "SECONDARY"
...
"name" : "rs0-4.mongo.svc.clusterset.local:27017"
"role" : "UNKNOWN"

As you can see, the Operator formed a replica set out of three nodes using the svc.cluster.local domain, as that is how it should be done when you expose nodes with the ClusterIP Service type. In this case, a node in Region B cannot reach any node in Region A, as it tries to connect to a domain that is local to the cluster in Region A.

In the picture below, you can easily see where the problem is:

The Solution

Luckily, we have a Special Interest Group (SIG), a Kubernetes Enhancement Proposal (KEP), and multiple implementations for enabling Multi-Cluster Services. Having a KEP is great since we can be sure the implementations from different providers (e.g., GCP, AWS) will follow more or less the same standard.

There are two fields in the Custom Resource that control MCS in the Operator:

spec:
  multiCluster:
    enabled: true
    DNSSuffix: svc.clusterset.local

Let’s see what is happening in the background with these flags set.

ServiceImport and ServiceExport Objects

Once you enable MCS by patching the CR with spec.multiCluster.enabled: true, the Operator creates a ServiceExport object for each Service. These ServiceExports will be detected by the MCS controller in the cluster, and eventually a ServiceImport for each ServiceExport will be created in the same namespace in every cluster that has MCS enabled.
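
Assuming an MCS implementation that follows the KEP, you can watch this happen with something like:

$ kubectl get serviceexport,serviceimport -n <namespace>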

As you see, we made a decision to empower the Operator to create ServiceExport objects. There are two main reasons for doing that:

  • If any infrastructure-as-code tool is used, it would require additional logic and complexity to automate the creation of the required MCS objects. If the Operator takes care of it, no additional work is needed.
  • Our Operators take care of the infrastructure for the database, including Service objects. It just felt logical to expand the reach of this functionality to MCS.

Replica Set and Transport Encryption

The root cause of the problem that we are trying to solve here lies in the networking field, where external replica set nodes try to connect to the wrong domain names. Now, when you enable multi-cluster and set DNSSuffix (it defaults to svc.clusterset.local), the Operator does the following:

  • The replica set is formed using the MCS domain set in DNSSuffix
  • The Operator generates TLS certificates as usual, but adds the DNSSuffix domains into the picture

With this approach, the traffic between nodes flows as expected and is encrypted by default.


Things to Consider

MCS APIs

Please note that the Operator won't install the MCS APIs and controllers into your Kubernetes cluster. You need to install them by following your provider's instructions before enabling MCS for your PSMDB clusters. See our docs for links to different providers.

The Operator detects whether MCS is installed in the cluster by checking the API resources. The detection happens before controllers are started in the Operator, so if you installed the MCS APIs while the Operator was running, you need to restart the Operator. Otherwise, you'll see an error like this:

{
  "level": "error",
  "ts": 1652083068.5910048,
  "logger": "controller.psmdb-controller",
  "msg": "Reconciler error",
  "name": "cluster1",
  "namespace": "psmdb",
  "error": "wrong psmdb options: MCS is not available on this cluster",
  "errorVerbose": "...",
  "stacktrace": "..."
}

ServiceImport Provisioning Time

It might take some time for ServiceImport objects to be created in the Kubernetes cluster. You can see the following messages in the logs while creation is in progress:

{
  "level": "info",
  "ts": 1652083323.483056,
  "logger": "controller_psmdb",
  "msg": "waiting for service import",
  "replset": "rs0",
  "serviceExport": "cluster1-rs0"
}

During testing, we saw wait times of up to 10-15 minutes. If your cluster is stuck in the initializing state waiting for service imports, it's a good idea to check the usage and quotas for your environment.

DNSSuffix

We also made a decision to automatically generate TLS certificates for the Percona Server for MongoDB cluster with the *.clusterset.local domain, even if MCS is not enabled. This approach simplifies the process of enabling MCS for a running MongoDB cluster. It does not make much sense to change the DNSSuffix field unless you have hard requirements from your service provider, but we still allow such a change.

If you want to enable MCS with a cluster deployed with an Operator version below 1.12, you need to update your TLS certificates to include *.clusterset.local SANs. See the docs for instructions.
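
One way to double-check the SANs, sketched with an assumed secret name (the Operator typically stores certificates in a <cluster-name>-ssl secret):

$ kubectl get secret my-cluster-name-ssl -o jsonpath='{.data.tls\.crt}' \
    | base64 -d | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'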

Conclusion

Businesses rely on applications and the infrastructure that serves them more than ever nowadays. Disaster recovery protocols and various failsafe mechanisms are routine for reliability engineers, not an annoying task in the backlog.

With multi-cluster deployment functionality in Percona Operator for MongoDB, we want to equip users to build highly available and secured database clusters with minimal effort.

Percona Operator for MongoDB is truly open source and provides users with a way to deploy and manage their enterprise-grade MongoDB clusters on Kubernetes. We encourage you to try this new Multi-Cluster Services integration and let us know your results on our community forum. You can find some scripts that would help you provision your first MCS clusters on GKE or EKS here.

There is always room for improvement and a time to find a better way. Please let us know if you face any issues with contributing your ideas to Percona products. You can do that on the Community Forum or JIRA. Read more about contribution guidelines for Percona Operator for MongoDB in CONTRIBUTING.md.

May 10, 2022
--

Spring Cleaning: Discontinuing RHEL 6/CentOS 6 (glibc 2.12) and 32-bit Binary Builds of Percona Software


As you are probably aware, Red Hat Enterprise Linux 6 (RHEL 6, or EL 6 for short) officially reached End of Life (EOL) on 2020-11-30 and is now in the so-called Extended Life Phase, which basically means that Red Hat will no longer provide bug fixes or security fixes.

Even though EL 6 and its compatible derivatives like CentOS 6 reached EOL some time ago, we continued providing binary builds for selected MySQL-related products for this platform.

However, this became increasingly difficult, as the MySQL code base continued to evolve and now depends on tools and functionality that are no longer provided by the operating system out of the box. This meant we already had to perform several modifications in order to prepare binary builds for this platform, e.g. installing custom compiler versions or newer versions of various system libraries.

As of MySQL 8.0.26, Oracle announced the deprecation of the TLSv1 and TLSv1.1 connection protocols, with plans to remove them in a future MySQL version in favor of the more secure TLSv1.2 and TLSv1.3 protocols. TLSv1.3 requires that both the MySQL server and the client application be compiled with OpenSSL 1.1.1 or higher. This version of OpenSSL is no longer available in binary package format on EL 6, and manually rebuilding it turned out to be a 'yak shaving exercise' due to the countless dependencies.

Our build & release team was able to update the build environments on all of our supported platforms (EL 7, EL 8, supported Debian and Ubuntu versions) for this new requirement. However, we have not been successful in getting all the required components and their dependencies to build on EL 6, as it would have required rebuilding quite a significant amount of core OS packages and libraries to achieve this.

Moreover, switching to this new OpenSSL version would have also required us to include some additional shared libraries in our packages to satisfy the runtime dependencies, adding more complexity and potential security issues.

In general, we believe that running a production system on an OS that is no longer actively supported by a vendor is not a recommended best practice from a security perspective, and we do not want to encourage such practices.

For these reasons, and to simplify our build/release and QA processes, we decided to drop support for EL 6 for all products now. Percona Server for MySQL 8.0.27 was the last version for which we built EL 6 binaries against the previous version of OpenSSL.

Going forward, the following products will no longer be built and released on this platform:

  • Percona Server for MySQL 5.7 and 8.0
  • Percona XtraDB Cluster 5.7
  • Percona XtraBackup 2.4 and 8.0
  • Percona Toolkit 3.2

This includes stopping both building RPM packages for EL 6 and providing binary tarballs that are linked against glibc 2.12.

Note that this OS platform was also the last one on which we still provided 32-bit binaries.

Most Enterprise Linux distributions stopped providing 32-bit versions of their operating systems quite some time ago. As an example, Red Hat Enterprise Linux 7 (released in June 2014) was the first release to no longer support installing directly on 32-bit Intel/AMD hardware (i686/x86). Back in 2018, we had already decided that we would no longer offer 32-bit binaries on new platforms or new major releases of our software.

Given today’s database workloads, we also think that 32-bit systems are simply not adequate anymore, and we already stopped building newer versions of our software for this architecture.

The demand for 32-bit downloads has also been declining steadily. A recent analysis of our download statistics revealed that only 2.3% of our total binary downloads refer to i386 binaries. Looking at IP addresses, these downloads originated from 0.4% of the total range of addresses.

This change affects the following products:

  • Percona Server for MySQL 5.7
  • Percona XtraDB Cluster 5.7
  • Percona XtraBackup 2.4
  • Percona Toolkit

We’ve updated the Percona Release Lifecycle Overview web page accordingly to reflect this change. Previously released binaries for these platforms and architectures will of course remain accessible from our repositories.

If you’re still running EL 6 or a 32-bit database or OS, we strongly recommend upgrading to a more modern platform. Our Percona Services team would be happy to help you with that!

May 09, 2022
--

Talking Drupal #346 – Open Source Compensation

Today we are talking about Open Source Compensation with Tim Lehnen.

www.talkingDrupal.com/346

Topics

  • How was DrupalCon?
  • Suggestion from listener
  • Open Source like cURL and OpenSSL
  • Developer burnout and frustration
  • Question about boosting other contribution to C-Level
  • Great ways to compensate
  • What are you working on now?

Resources

Guests

Tim Lehnen – @timlehnen

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Chris Wells – redfinsolutions.com @chrisfromredfin

MOTW

Tome – Tome is a static site generator and a static storage system for content.
