Feb
05
2021
--

Webinar February 24: What’s Old Is New, What’s Coming Is Here – Percona Offerings for MySQL 5.6 and DBaaS in PMM

Percona Webinar February 24

Join Brian Walters and Brandon Fleischer for an interactive discussion about new service and software offerings coming from Percona.

Oracle has decided that MySQL 5.6 will reach its end of life in February 2021. This means there will be no more updates and, more importantly, no more security fixes for discovered vulnerabilities. Yikes! Did you know you have options? Did you know Percona can solve this problem? Tune in to find out how.

Choosing a database-as-a-service (DBaaS) provider can be a daunting challenge. Not ready to move to DBaaS in the public cloud? Or are you constrained to on-prem deployment due to regulations or other market conditions? Tune in to learn how Percona is bringing the benefits of simplified DBaaS deployment to the on-prem, private cloud, and hybrid data center.

This interactive webinar will be conducted in a roundtable forum style. Bring your interests, bring your concerns, and Percona will share our thoughts on addressing these topics and more. See you there!

Please join Brian Walters, Director of Solution Engineering, Percona, and Brandon Fleischer, Sr. Outbound Product Manager, Percona, on Wednesday, February 24, 2021, at 11:30 AM EST for their webinar “What’s Old Is New, What’s Coming Is Here – Percona Offerings for MySQL 5.6 and DBaaS in PMM”.

Register for Webinar

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

Feb
03
2021
--

Q&A on Webinar “Using PMM to Identify and Troubleshoot Problematic MySQL Queries”

Problematic MySQL Queries

Hi, and thanks to all who attended my webinar on Tuesday, January 26th, titled Using PMM to Identify & Troubleshoot Problematic MySQL Queries!

As we do after all our webinars, we have compiled the questions that were answered verbally, along with those that were posed but went unanswered because we ran out of time during the broadcast. Before we get to the questions, I wanted to make sure to include a link to the RED Method for MySQL Queries dashboard by Peter Zaitsev, Percona’s CEO:

https://grafana.com/grafana/dashboards/12470

RED Method for MySQL Queries

Hi Michael, you suggested that table create and update times should be ignored. Surely these values come from information_schema.tables? Does that not reflect what I would see if I do ls -l in datadir?

Yes, I did make this suggestion, but after further research, I ought to qualify my response. TL;DR: you will only see useful information in the CREATE_TIME field.

As per the MySQL Manual for SHOW TABLE STATUS, which defines the fields CREATE_TIME, UPDATE_TIME, and CHECK_TIME, you will find that only CREATE_TIME for InnoDB tables provides accurate information about when the table was originally created. You will see either NULL or a recent-ish timestamp value for UPDATE_TIME, but this cannot be trusted: features such as InnoDB Change Buffering will skew this value, so the timestamp does not necessarily reflect when the SQL write happened, only when the delayed write to the ibd file occurred. Further, if you have your table stored in the system tablespace (like the example below), you will continue to see NULL for UPDATE_TIME.

mysql> select * from information_schema.tables where table_name = 't1'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: michael
TABLE_NAME: t1
TABLE_TYPE: BASE TABLE
ENGINE: InnoDB
VERSION: 10
ROW_FORMAT: Dynamic
TABLE_ROWS: 0
AVG_ROW_LENGTH: 0
DATA_LENGTH: 16384
MAX_DATA_LENGTH: 0
INDEX_LENGTH: 0
DATA_FREE: 0
AUTO_INCREMENT: NULL
CREATE_TIME: 2021-01-28 19:26:27
UPDATE_TIME: NULL
CHECK_TIME: NULL
TABLE_COLLATION: utf8mb4_0900_ai_ci
CHECKSUM: NULL
CREATE_OPTIONS:
TABLE_COMMENT:
1 row in set (0.00 sec)

To your point about ls -l on the datadir, or the stat command: you cannot rely on this information for any level of accuracy. Since ls -l reports the same value as the Modify field in the output of stat, we’ll use stat to show the behaviour once you create the table, and what it reports after you restart mysqld on your datadir. So let’s see this in action via an example.

Before restarting, you’ll notice that the Access time is equivalent to what Percona Server for MySQL reports for CREATE_TIME:

$ stat /var/lib/mysql/michael/t1.ibd
File: /var/lib/mysql/michael/t1.ibd
Size: 114688       Blocks: 160        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 30016418    Links: 1
Access: (0640/-rw-r-----)  Uid: ( 1001/   mysql)   Gid: ( 1001/   mysql)
Access: 2021-01-28 19:26:27.571903770 +0000
Modify: 2021-01-28 19:28:07.488597476 +0000
Change: 2021-01-28 19:28:07.488597476 +0000
Birth: -

However, after you restart mysqld, you will no longer be able to tell the create time, as MySQL will have updated the Access time on disk, and the values no longer have much material relevance to the access patterns on the table.

$ stat /var/lib/mysql/michael/t1.ibd
File: /var/lib/mysql/michael/t1.ibd
Size: 114688       Blocks: 160        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 30016418    Links: 1
Access: (0640/-rw-r-----)  Uid: ( 1001/   mysql)   Gid: ( 1001/   mysql)
Access: 2021-01-28 19:30:08.557438038 +0000
Modify: 2021-01-28 19:28:07.488597476 +0000
Change: 2021-01-28 19:28:07.488597476 +0000
Birth: -

Can I use Percona Monitoring and Management (PMM) with an external bare-metal server of Clickhouse?

PMM leverages an instance of Clickhouse inside the Docker container (or your AMI, or your OVF deployment) for storage of MySQL query data. At this time we ship PMM as an appliance, and therefore we don’t provide instructions on how to connect Query Analytics to an external instance of Clickhouse.

If the question is “can I monitor Clickhouse database metrics using PMM?” then the answer is yes, absolutely you can! In fact, PMM will work with any of the Prometheus exporters, and the way to enable this is via the feature we call External Services – take a look at our documentation for the correct syntax to use. External Services gets you the metrics, while Grafana (which is what PMM uses to provide the visuals) already contains a native Clickhouse datasource that you can use to run SQL queries against Clickhouse from within PMM. Simply define the datasource and you’re done!
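As a rough sketch of the External Services approach (the exact syntax depends on your PMM version, and the exporter port, service name, and group below are placeholder assumptions, so double-check the External Services documentation), registering an already-running ClickHouse exporter could look something like this:

# Assumes a Prometheus-compatible ClickHouse exporter already listening on port 9116 (placeholder).
pmm-admin add external \
  --listen-port=9116 \
  --service-name=clickhouse-metrics \
  --group=clickhouse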

All PMM2 features are compatible with MySQL 8?

The latest release, PMM 2.14 (January 28th, 2021), supports MySQL 8 and Percona Server for MySQL 8. PMM now supports not only traditional asynchronous replication but also MySQL InnoDB Group Replication and, of course, Percona’s own Percona XtraDB Cluster (PXC) write-set replication (aka wsrep via Galera). When using Query Analytics with Percona Server for MySQL or PXC, you’ll also benefit from the Extended Slow Log Format, which provides a very detailed view of activity at the InnoDB storage engine level:

PMM Query Analytics Detail screen

I added several DBs to PMM; however, QAN shows only a few of them and not all. What could be the issue? How do I approach Percona for support on such things?

There could be a few things going on here that you’ll want to review from Percona’s Documentation:

  1. Do you have a user provisioned with appropriate access permissions in MySQL?
  2. If sourcing from PERFORMANCE_SCHEMA, is P_S actually enabled & properly configured?
  3. Are long_query_time and other slow log settings properly configured to write events?

Slow Log Configuration

These are the recommended settings for the slow log on Percona Server for MySQL. I prefer the slow log over P_S because you get the InnoDB storage engine information along with other extended query properties (which are not available in upstream MySQL, in RDS, or via PERFORMANCE_SCHEMA):

log_output=file
slow_query_log=ON
long_query_time=0
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
userstat=1
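Most of these settings are dynamic in Percona Server for MySQL, so you can apply the core ones on a running instance and then persist them in my.cnf. A minimal sketch, assuming a session with sufficient privileges:

-- Runtime changes only; keep the matching lines in my.cnf so they survive a restart.
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0;
SET GLOBAL log_slow_rate_limit = 100;
SET GLOBAL log_slow_verbosity = 'full';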

User Permissions

You’ll want to use these permissions for the PMM user:

CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';

PERFORMANCE_SCHEMA

Using PERFORMANCE_SCHEMA is less detailed but comes with the benefit that you’re writing to and reading from an in-memory-only object, so you’re saving IOPS to disk. Further, if you’re in AWS or another DBaaS, you generally don’t get raw access to the on-disk slow log, so PERFORMANCE_SCHEMA can be your only option.

PERFORMANCE_SCHEMA turned on

The latest versions of Percona Server for MySQL and Community MySQL ship with PERFORMANCE_SCHEMA enabled by default, but sometimes users disable it. If you find it is disabled, a restart of mysqld is required in order to enable it.

You want to make sure your my.cnf includes:

performance_schema=ON

You can check via a running MySQL instance by executing:

mysql> show global variables like 'performance_schema';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| performance_schema | ON    |
+--------------------+-------+

1 row in set (0.04 sec)

PERFORMANCE_SCHEMA configuration

You’ll need to make sure the following consumers are enabled so that mysqld writes events to the relevant P_S tables (a sketch of how to enable any that are missing follows the output below):

select * from performance_schema.setup_consumers WHERE ENABLED='YES';
+----------------------------------+---------+
| NAME                             | ENABLED |
+----------------------------------+---------+
| events_statements_current        | YES     |
| events_statements_history        | YES     |
| global_instrumentation           | YES     |
| thread_instrumentation           | YES     |
| statements_digest                | YES     |
+----------------------------------+---------+

5 rows in set (0.00 sec)
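If any of these consumers are missing from that list, they can be enabled at runtime with an UPDATE against performance_schema.setup_consumers. A minimal sketch (runtime changes are lost on restart, so persist them with the corresponding performance-schema-consumer-* options in my.cnf if you need them permanently):

-- Enable the statement consumers Query Analytics relies on (runtime change only):
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME IN ('events_statements_current',
                'events_statements_history',
                'statements_digest');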

Are there any specific limitations when using PMM for monitoring AWS Aurora?

The most significant limitation is that you cannot access the slow log and thus must configure PERFORMANCE_SCHEMA as the query source. See the previous section on how to configure PERFORMANCE_SCHEMA as needed for PMM Query Analytics.
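For reference, here is a hedged sketch of registering an Aurora endpoint with PMM using PERFORMANCE_SCHEMA as the query source; the endpoint, service name, and credentials are placeholders, not real values:

# Placeholder Aurora endpoint, service name, and credentials - replace with your own.
pmm-admin add mysql \
  --query-source=perfschema \
  --username=pmm --password=pass \
  aurora-mysql-prod \
  mycluster.cluster-abc123.us-east-1.rds.amazonaws.com:3306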

One great feature of PMM is our native support for AWS Aurora. We have a specific dashboard for those Aurora-only features:

PMM MySQL Amazon Aurora Details dashboard

Thanks for attending!

If you attended (or watched the video), please share via comments any takeaways or further questions you may have! And let me know if you enjoyed my jokes!

Jan
29
2021
--

Webinar February 10: Kubernetes – The Path to Open Source DBaaS

Kubernetes The Path to Open Source DBaaS

Don’t miss out! Join Peter Zaitsev, Percona CEO, as he discusses DBaaS and Kubernetes.

DBaaS is the fastest-growing way to deploy databases. It is fast and convenient, yet it is typically done using proprietary software and tightly coupled to the cloud vendor. Percona believes Kubernetes finally enables a fully open source DBaaS solution capable of being deployed anywhere Kubernetes runs – in the public cloud or in your private data center.

In this presentation, he will describe the most important user requirements and typical problems you would encounter building a DBaaS Solution and explain how you can solve them using the Kubernetes Operator framework.

Please join Peter Zaitsev, Percona CEO, on Wednesday, February 10, 2021, at 11:00 AM EST for his webinar on DBaaS and Kubernetes.

Register for Webinar

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

Jan
14
2021
--

Webinar January 28: Tuning PostgreSQL for High Performance and Optimization

Tuning PostgreSQL webinar

PostgreSQL is one of the leading open source databases; however, out of the box, the default PostgreSQL configuration is not tuned for any particular workload. Thus, any system with even minimal resources can run it. On the other hand, PostgreSQL will not give optimum performance on a high-performance machine because it is not using all the available resources. PostgreSQL provides a system where you can tune your database according to your workload and machine specifications. In addition to PostgreSQL itself, we can also tune our Linux box so that the database workload performs optimally.

In this webinar on Tuning PostgreSQL for High Performance and Optimization, we will learn how to tune PostgreSQL and we’ll see the results of that tuning. We will also touch on tuning some Linux kernel parameters.

Please join Ibrar Ahmed, Percona Software Engineer, on January 28, 2021, at 1 pm EST as he presents his webinar “Tuning PostgreSQL for High Performance and Optimization.”

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Jan
12
2021
--

Webinar January 26: Using Percona Monitoring and Management to Identify and Troubleshoot Problematic MySQL Queries

Troubleshoot Problematic MySQL Queries webinar

Join us as Michael Coburn, Percona Product Manager, discusses two methods to identify and troubleshoot problematic MySQL queries using the RED Method and Percona Monitoring and Management (PMM) Query Analytics. He will also highlight specific Dashboards in PMM that visualize the rate, errors, and duration of MySQL events that may be impacting the stability and performance of your database instance.

Please join Michael Coburn, Product Manager, Percona, on Tuesday, January 26th, 2021 at 2:30 pm for his webinar “Using Percona Monitoring and Management to Identify and Troubleshoot Problematic MySQL Queries”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Jan
07
2021
--

Webinar January 19: Maximize the Benefits of Using Open Source MongoDB with Percona Distribution for MongoDB

Benefits of Using Open Source MongoDB with Percona Distribution for MongoDB

Many organizations require an Enterprise subscription of MongoDB for the support coverage and additional features that it provides. However, many are unaware that there is an open source alternative offering all the features and benefits of a MongoDB Enterprise subscription without the licensing fees and the vendor lock-in.

In this joint NessPRO webinar geared toward the Israeli marketplace, we will cover the Percona approach and the features that make Percona Server for MongoDB an enterprise-grade, license-free alternative to MongoDB Enterprise edition. In this discussion, we will address:

– Brief History of MongoDB

– MongoDB Enterprise versus Percona Server for MongoDB features, including:

  • Authentication and Authorization
  • Encryption
  • Governance
  • Audit logging
  • Backups
  • Kubernetes Operator
  • Monitoring & Alerting

Please join Michal Nosek, Senior Pre-Sales Solution Engineer, Percona, and Eli Alafi, Director, Data & Innovation at NessPRO on Tuesday, January 19th, 2021 at 10:00 am GMT+2 for the webinar “How to Maximize the Benefits of Using Open Source MongoDB with Percona Distribution for MongoDB”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Jan
06
2021
--

Google Cloud Platform: MySQL at Scale with Reliable HA Webinar Q&A

MySQL at Scale with Reliable HA

Earlier in November, we had a chance to present the webinar “Google Cloud Platform: MySQL at Scale with Reliable HA.” We discussed different approaches to hosting MySQL on Google Cloud Platform, along with each option’s pros and cons. This webinar was recorded and can be viewed here at any time. We had several great questions, which we would like to address and elaborate on beyond the answers given during the webinar.


Q: What is your view on Cloud SQL High Availability in Google Cloud?

A: Google Cloud SQL provides High Availability through regional instances. If your Cloud SQL database is regional, it means that there’s a standby instance in another zone within the same region. Both instances (primary and standby) are kept synced through synchronous replication on the persistent disk level. Thanks to this approach, in case of an unexpected failover, no data is lost. The biggest disadvantage of this approach is that you have to pay for standby resources even though you can’t use the standby instance for any traffic, which means you double your costs with no performance benefits. Failover typically takes more than 30 seconds.

To sum up, High Availability in Google Cloud SQL is reliable but can be expensive, and the failover time is not always fast enough for critical applications.
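For context, this regional High Availability is a provisioning-time choice. A hedged sketch, with a placeholder instance name and sizing, might look like the following; check the Cloud SQL documentation for the options that match your environment:

# Placeholder instance name and machine tier - creates a regional (HA) Cloud SQL instance.
gcloud sql instances create mysql-ha-demo \
  --database-version=MYSQL_5_7 \
  --tier=db-n1-standard-2 \
  --region=us-central1 \
  --availability-type=REGIONAL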

 

Q: How would one migrate from Google Cloud SQL to AWS RDS?

A: The easiest way to migrate, if you can afford downtime, is to stop the write workload on the Cloud SQL instance, take a logical backup (mysqldump or mydumper), restore it on AWS RDS, and then move the entire workload to AWS RDS. In most cases, however, that much downtime is not acceptable. The situation is more complex when you want to migrate with no (or minimal) downtime.

To avoid downtime, you need to establish replication between your Cloud SQL instance (source) and your RDS instance (replica). Cloud SQL can be used as a source instance for external replicas, as described in this documentation. You can take a logical backup from the running Cloud SQL instance (e.g., using mydumper), restore it to RDS, and establish replication between Cloud SQL and RDS; using an external source for RDS is described here, and a sketch of the RDS-side commands appears after the list below. It’s typically a good idea to use a VPN connection between both cloud regions to ensure your connection is secure and the database is not exposed to the public internet. Once replication is established, the steps are as follows:

  • Stop write traffic on Google Cloud SQL instance
  • Wait for the replication to catch up (sync all binlogs)
  • Make RDS instance writable and stop Cloud SQL -> RDS replication
  • Move write traffic to the RDS instance
  • Decommission your Cloud SQL instance

The AWS DMS service can also be used as an intermediary in this operation.
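For the RDS side of the external replication described above, a minimal sketch might look like this; the host, replication credentials, and binlog coordinates are placeholders you would take from your own environment and backup metadata:

-- Run on the RDS instance: point it at the Cloud SQL source using the binlog
-- coordinates recorded when the logical backup was taken (placeholder values).
CALL mysql.rds_set_external_master(
  'cloudsql-endpoint-or-vpn-ip', 3306,
  'repl_user', 'repl_password',
  'mysql-bin.000123', 4, 0);
CALL mysql.rds_start_replication;

-- At cutover time, stop replication before redirecting writes to RDS:
-- CALL mysql.rds_stop_replication;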

 

Q: Is replication possible cross-cloud, e.g., Google Cloud SQL to AWS RDS, AWS RDS to Google Cloud SQL? If GCP is down, will RDS act as a primary and vice versa?

A: In general, replication between clouds is possible (see the previous question). Both Google Cloud SQL and AWS RDS can act as source and replica, including with external instances as part of your replication topology. High-availability solutions, though, are in both cases very specific to the cloud provider’s implementation, and they can’t cooperate, so it’s not possible to automatically fail over from RDS to GCP or vice versa. For such setups, we would recommend a custom installation on Google Compute Engine and AWS EC2 instances, with Percona Managed Database Services if you don’t want to manage such a complex setup on your own.



Q: How did you calculate IOPS and throughput for the storage options?

A: We did not calculate the presented values in any way. Those are taken directly from Google Cloud Platform Documentation.


Q: How does GCP achieve synchronous replication?

A: Synchronous replication is possible only between the source and its respective standby instance; it’s not possible to have synchronous replication between the primary and your read replicas. Each instance has its own persistent disk, and those disks are kept in sync, so replication happens on the storage layer, not the database layer. No implementation details about how it works are publicly available.


Q: Could you explain how to keep the primary instance available and writable during the maintenance window?

A: It’s not possible to guarantee the primary instance’s availability. Remember that even if you choose a maintenance window during which you can accept downtime, it may or may not be honored (it’s just a preference). Maintenance events can happen at any point in time if they’re critical, and they may not finish within the assigned window. If that’s not acceptable for your application, we recommend designing a highly available solution instead, e.g., with Percona XtraDB Cluster on Google Compute Engine instances. Such a solution won’t suffer from these maintenance window problems.

Jan
06
2021
--

Webinar January 20: MariaDB Observability

MariaDB Observability

Don’t miss out! Join Peter Zaitsev, Percona CEO, as he discusses MariaDB observability!

A broken MariaDB means a broken application, so maintaining insight into MariaDB operational performance is critical. Thankfully, MariaDB offers a lot in terms of observability, letting you quickly resolve problems and gain great insight into optimization opportunities.

In this webinar, we will cover the most important observability improvements in MariaDB, ranging from Performance Schema and Information Schema to enhanced error logging and optimizer trace.

If you’re a Developer or DBA passionate about observability or want to be empowered to resolve MariaDB problems quickly and efficiently, you should attend this talk.

Please join Peter Zaitsev, Percona CEO, on Wednesday, January 20th at 11:00 AM EST for his webinar on MariaDB Observability.

Register for Webinar

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

Jan
04
2021
--

Converting MongoDB to Percona Server for MongoDB Webinar Q&A

Converting MongoDB to Percona Server for MongoDB Webinar Q&A

We had great attendance, questions, and feedback from our “Converting MongoDB to Percona Server for MongoDB” webinar, which was recorded and can be viewed here. You can view another Q&A from a previous webinar on “Converting MongoDB to Percona Server for MongoDB” here. Without further ado, here are your questions and their responses.

 

Q: If migrating from MongoDB Enterprise Edition to Percona Server for MongoDB, can implementations that use LDAP or Kerberos be migrated using the replica set takeover method to avoid excessive downtime?

A: The intended design when using these two features is that Percona Server for MongoDB (PSMDB) remains a true drop-in replacement. This means no configuration changes will be necessary. This should be tested before go-live to ensure success. 

The above is valid for the Audit Plugin as well, an Enterprise-edition feature that is included free in PSMDB.

 

Q: Does the replica set takeover method also work for sharded implementations of MongoDB that use configdb replica sets?

A: Yes, it does, although there are a few more steps to consider as you do not want chunks migrating while you are upgrading. 

To convert a sharded cluster to Percona Server for MongoDB, you would:

  1. Disable the balancer (sh.stopBalancer())
  2. Convert the config servers (replica set takeover method)
  3. Upgrade the shards (replica set takeover method)
  4. Upgrade the mongos instances
  5. Re-enable the balancer (sh.startBalancer())

 

Q: Is it necessary to make any change on the driver/application side?

A: No. The drivers have the same compatibility with PSMDB.

 

Q: Does Percona Monitoring and Management (PMM) support alerting?

A: Yes, PMM supports alerting. This blog post discusses the new and upcoming PMM native alerting, this documentation shows how to configure Prometheus AlertManager integration, and this documentation shows how to utilize Grafana Alerts. 

 

Q: When is best to migrate from MySQL to MongoDB, and in what scenario would MongoDB be the best replacement?

A: This is a complicated question without a simple answer. It depends on the application workload, internal business directives, internal technology directives, and the availability of in-house expertise. In general, we recommend choosing the right tool for the right job. In that sense, MySQL was built for structured, relational, and transactional workloads, while MongoDB was built for an unstructured, JSON document model without a lot of transactions linking collections. While you technically can cross-pollinate both models between MySQL and MongoDB, we do not recommend doing so unless there is a good reason. This is the perfect scenario in which to engage Percona Consulting for true expert input in the decision-making process.

Nov
30
2020
--

Webinar December 15: The Open Source Alternative to Paying for MongoDB

Open Source Alternative to Paying for MongoDB

Please join Barrett Chambers, Percona Solutions Engineer, for an engaging webinar on MongoDB vs. Open Source.

Many organizations require a MongoDB Enterprise subscription for the coverage and features it provides. However, many are unaware that there is an open source alternative offering all the features and benefits of a MongoDB Enterprise subscription without the licensing fees. In this talk, we will cover the features that make Percona Server for MongoDB an enterprise-grade, license-free alternative to MongoDB Enterprise edition.

In this discussion we will address:

– Brief History of MongoDB

– MongoDB Enterprise versus Percona Server for MongoDB features, including:

  • Authentication
  • Authorization
  • Encryption
  • Governance
  • Auditing
  • Storage Engines
  • Monitoring & Alerting

Please join Barrett Chambers, Percona Solutions Engineer, on Tuesday, December 15 at 11:00 AM EST for his webinar “The Open Source Alternative to Paying for MongoDB”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.
