Sep
09
2021
--

Q&A on Webinar “Using Open Source Software to Optimize and Troubleshoot Your MySQL Environment”

Optimize and Troubleshoot Your MySQL Environment

Thanks to everyone who attended last week’s webinar on Using Open Source Software to Optimize and Troubleshoot Your MySQL Environment; hopefully you’ve found the time we spent in Percona Monitoring and Management (PMM) useful.

We had a record-breaking number of questions during the talk and unfortunately weren’t able to answer them all live, so we decided to answer them separately. Also, there were several requests for best practices around installation and configuration. This is something we are considering for the next webinar in this series, so stay tuned!

If you weren’t able to attend, the recording is available for viewing. But now, without further ado, here are the questions that we didn’t have time to cover during the presentation.

 

Q: Can PMM also be used for a web hosting server (cPanel, DirectAdmin, etc.)?

PMM by default can monitor a node to provide vital statistics on the health of the host.  From there, you can use external exporters to monitor other applications and send the data to PMM to visualize and create alerts.

 

Q: Does it provide any query optimization suggestions if my query is bad? 

Not at present; that’s planned for a future query advisor feature.

 

Q: How soon will we be able to use the alerting manager in production?

We are looking at late September to early October. When it’s ready, you will hear about it!

 

Q: Capturing query data for performance checking can be costly, and some monitoring systems capture data every few seconds. At what level is data captured and analyzed here on live systems with lots of database traffic? What percentage or interval (all of it, every 2 seconds, 1 second, etc.)?

We adhere to ‘do no harm’, so the impact of PMM is typically 1-4%, even on the busiest systems. We offer custom resolutions to adjust the scrape frequency, balancing the need for information against the need for performance.

 

Q: Are long-running queries that potentially slow down the system over time captured and shown as a graph/alert? Also, could there be more than one instance of these types of queries run repeatedly by a user?

This is something we are going to include in our Alerting capabilities (coming soon, see above).

 

Q: Can more than one of the metrics be compared against each other to gain more insight into a problem in graphical form? Can you in effect play with these graphs?

Yes, you can; this is in fact how most of the dashboards are designed: we connect different metric series together to drive graphs that explain system performance. While you can edit the existing graphs, Percona recommends that you instead make a copy of the dashboard you’d like to modify and make your changes on the copy. If you modify a dashboard distributed by PMM, it will be overwritten on the next upgrade and you’ll lose your changes.

 

Q: Could you list what can be monitored using PMM? And explain what recommended plugins are available and what they are used for? 

Natively, any Linux system and pretty much all flavors of MySQL, MariaDB, MongoDB, and PostgreSQL. You can use external exporters to gather even more data than the defaults, and since PMM uses Grafana as its visualization layer, you can create custom dashboards and take advantage of a wealth of community plugins.

 

Q: Can you choose to monitor a particular set of users? Set of queries? Set of schema? 

You can filter the view down by username or by a particular schema, and then filter those results by particular query strings. PMM can monitor as much or as little of your database as the permissions of the user you define for data collection allow.

 

Q: How can we work on optimization when using cloud-based services like RDS where we have limited access?

PMM can monitor RDS instances and has simplified the connection and selection process through its remote monitoring capabilities. We can provide nearly the same data as for an on-prem database; however, we don’t have access to node-level statistics.

 

Q: For Oracle MySQL 5.7.29, if you have many tables/objects in the database, will the PMM query information_schema and load the DB?

We have a predefined limit of 1,000 tables above which polling of information_schema is disabled, but you can configure this to your liking both with the client and with remote monitoring. This CAN have a more significant impact on your system, though, especially with large table and row counts.
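As an illustrative sketch of adjusting that limit when registering a MySQL service with the PMM 2 client (the flag names are from PMM 2 and the credentials, service name, and address below are placeholders; verify against `pmm-admin add mysql --help` for your version):

```shell
# Raise the per-table statistics cutoff from the default of 1000 tables;
# a negative value disables the limit entirely. All values here are examples.
pmm-admin add mysql \
  --query-source=slowlog \
  --tablestats-group-table-limit=2000 \
  --username=pmm --password=pass \
  mysql-prod 127.0.0.1:3306
```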

 

Q: At what point do I know I’ve done enough optimization? 

HA! It’s a never-ending game of cat and mouse, considering the sheer volume of variables in play. This is where before-and-after monitoring data becomes vital.

 

Q: Can a database monitoring package be the source of database performance issues? In particular, mysqld_exporter is installed as a docker container, as I’m seeing “out of resources” on a trace on mysqld_exporter.

Of course; there are plenty of ways to generate database performance issues, and it’s possible monitoring can result in some overhead. For an extreme example, here’s one way to replicate some overhead: start the pmm-client on a MySQL database and restore a blank DB from mysqldump. A few million rows at a time should generate LOTS of chaos and load between QAN and the exporters. Our pmm-client runs the exporter natively, so there’s no need to use a container.

 

Q: Is the query analytics somehow slowing down the database server as well? Or is it safe to enable/use it without further impact?

The impact is minimal. Most of the Query Analytics processing is done on the PMM server; the only impact on the client is retrieving the queries from the slow log or performance schema. This can have a bigger impact on the most extremely active DBs, but it should still remain below a 5% CPU hit.

 

Q: Did I understand correctly that PMM is not for RDS users and that AWS tools are available?

PMM certainly is for RDS! Since RDS is managed by AWS, PMM cannot collect CPU/Disk/Memory metrics but all MySQL metrics are still available even in RDS.

 

Q: Do you have any instructions/steps to install PMM to monitor MySQL RDS? 

  • Gear icon → PMM Inventory → Add Instance
  • Choose AWS/RDS Add Remote Instance
  • Use your AWS credentials to view your available RDS & Aurora nodes
  • Ensure that performance_schema is enabled

 

Watch the Recording

Jul
23
2021
--

Free Webinar July 27: Why Fiserv Moved to Percona to Run MongoDB

Fiserv Percona MongoDB Webinar

Fiserv, a Percona customer, will discuss the business case and results they achieved working with Percona to run MongoDB.  Register for a Fireside Chat taking place on Tuesday, July 27, 1 PM EDT!

If you use MongoDB to build business applications that need to scale quickly, the chances are that your company is paying dearly for enterprise licenses. In fact, proprietary license fees are continuing to go up and up. In addition, MongoDB may be pushing you towards their cloud service, Atlas.

If this sounds familiar, you should join our Fireside chat with Kim Thomas, Database Architect at Fiserv, on July 27, 2021, at 1:00 PM EDT.

 

Kim will be sharing Fiserv’s journey from MongoDB to Percona Software, Services, and Support. During this event you’ll discover:

  • Why Fiserv used Percona to run MongoDB
  • How Fiserv and Percona are building innovative solutions together
  • How using open source databases gives Fiserv a competitive edge

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

 

Register for Webinar

About the Speaker 

Kim Thomas

Kim Thomas is a Database Architect at Fiserv, primarily responsible for delivering open source database solutions across the Enterprise. Knowledgeable in DBaaS, Relational, Columnar, Big Data, OLAP, OLTP, NoSQL, and DB Operator technologies, he has been working with various database technologies for the past 25 years.

Emily Lockhart, Percona’s Director of Customer Success, will lead the conversation.

 

Feb
03
2021
--

Q&A on Webinar “Using PMM to Identify and Troubleshoot Problematic MySQL Queries”

Problematic MySQL Queries

Hi, and thanks to all who attended my webinar on Tuesday, January 26th, titled Using PMM to Identify & Troubleshoot Problematic MySQL Queries!

Like we do after all our webinars, we compile the list of questions that were answered verbally and also those that were posed yet remained unanswered since we ran out of time during the broadcast.  Before we get to the questions, I wanted to make sure to include a link to the RED Method for MySQL Queries by Peter Zaitsev, Percona’s CEO:

https://grafana.com/grafana/dashboards/12470

RED Method for MySQL Queries

Hi Michael, you suggested that table create and update times should be ignored. Surely these values come from information_schema.tables? Does that not reflect what I would see if I do ls -l in datadir?

Yes, I did make this suggestion, but after further research, I ought to qualify my response. TL;DR: you will only see useful information in the CREATE_TIME field.

As per the MySQL Manual for SHOW TABLE STATUS which defines the fields CREATE_TIME, UPDATE_TIME, and CHECK_TIME, you will find that only CREATE_TIME for InnoDB tables provides accurate information for when the table was originally created.  You will see either NULL or a recent-ish timestamp value for UPDATE_TIME, but this cannot be trusted as features such as InnoDB Change Buffering will skew this value, and thus the timestamp will not necessarily reflect when the SQL write happened, but only when the delayed write to the ibd file occurred.  Further, if you have your table stored in the system tablespace (like the example below) you will continue to see NULL for the UPDATE_TIME.

mysql> select * from information_schema.tables where table_name = 't1'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: michael
TABLE_NAME: t1
TABLE_TYPE: BASE TABLE
ENGINE: InnoDB
VERSION: 10
ROW_FORMAT: Dynamic
TABLE_ROWS: 0
AVG_ROW_LENGTH: 0
DATA_LENGTH: 16384
MAX_DATA_LENGTH: 0
INDEX_LENGTH: 0
DATA_FREE: 0
AUTO_INCREMENT: NULL
CREATE_TIME: 2021-01-28 19:26:27
UPDATE_TIME: NULL
CHECK_TIME: NULL
TABLE_COLLATION: utf8mb4_0900_ai_ci
CHECKSUM: NULL
CREATE_OPTIONS:
TABLE_COMMENT:
1 row in set (0.00 sec)

To your point about ls -l on the datadir, or the stat command: you cannot rely on this information for any level of accuracy. Since ls -l reports the Modify field of stat’s output, we’ll use the stat command to show the behaviour once you create the table, and what it reports after you restart mysqld on your datadir. So let’s see this in action via an example.

Before restarting you’ll notice that Access time is equivalent to what Percona Server for MySQL reports for CREATE_TIME:

$ stat /var/lib/mysql/michael/t1.ibd
File: /var/lib/mysql/michael/t1.ibd
Size: 114688       Blocks: 160        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 30016418    Links: 1
Access: (0640/-rw-r-----)  Uid: ( 1001/   mysql)   Gid: ( 1001/   mysql)
Access: 2021-01-28 19:26:27.571903770 +0000
Modify: 2021-01-28 19:28:07.488597476 +0000
Change: 2021-01-28 19:28:07.488597476 +0000
Birth: -

However, after you restart mysqld, you will no longer be able to tell the create time, as MySQL will have updated the Access time on disk, and the values no longer bear much relevance to the actual access patterns on the table.

$ stat /var/lib/mysql/michael/t1.ibd
File: /var/lib/mysql/michael/t1.ibd
Size: 114688       Blocks: 160        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 30016418    Links: 1
Access: (0640/-rw-r-----)  Uid: ( 1001/   mysql)   Gid: ( 1001/   mysql)
Access: 2021-01-28 19:30:08.557438038 +0000
Modify: 2021-01-28 19:28:07.488597476 +0000
Change: 2021-01-28 19:28:07.488597476 +0000
Birth: -

Can I use Percona Monitoring and Management (PMM) with an external bare-metal ClickHouse server?

PMM leverages an instance of ClickHouse inside the Docker container (or your AMI, or your OVF destination) for storage of MySQL query data.  At this time we ship PMM as an appliance, and therefore we don’t provide instructions on how to connect Query Analytics to an external instance of ClickHouse.

If the question is “can I monitor ClickHouse database metrics using PMM”, the answer is: yes, absolutely you can!  In fact, PMM will work with any of the Prometheus exporters, and the way to enable this is via the feature we call External Services – take a look at our documentation for the correct syntax to use!  External Services will get you the metrics, and Grafana (which is what we use in PMM to provide the visuals) already contains a native ClickHouse data source which you can use to run SQL queries from within PMM against ClickHouse.  Simply define the data source and you’re done!
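As a sketch of what registering such an External Service can look like on a host that already runs a Prometheus-compatible ClickHouse exporter (the port and service name below are hypothetical; check `pmm-admin add external --help` for the exact flags in your PMM version):

```shell
# Assumes an exporter is already listening locally on the given port;
# PMM will scrape it and store the metrics alongside its native ones.
pmm-admin add external \
  --listen-port=9116 \
  --service-name=clickhouse-metrics
```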

Q: Are all PMM2 features compatible with MySQL 8?

The latest release of PMM 2.14 (January 28th, 2021) supports MySQL 8 and Percona Server for MySQL 8.  PMM now supports not only traditional asynchronous replication but also MySQL InnoDB Group Replication, and of course Percona’s own Percona XtraDB Cluster (PXC) write-set replication (aka wsrep via Galera).  When using Query Analytics with Percona Server for MySQL or PXC, you’ll also benefit from the Extended Slow Log Format, which provides for a very detailed view of activity at the InnoDB storage engine level:

PMM Query Analytics Detail screen

I added several DBs to PMM; however, QAN shows only a few of them, not all. What could be the issue? How do I approach you for Percona support on such things?

There could be a few things going on here that you’ll want to review from Percona’s Documentation:

  1. Do you have a user provisioned with appropriate access permissions in MySQL?
  2. If sourcing from PERFORMANCE_SCHEMA, is P_S actually enabled & properly configured?
  3. Are long_query_time and other slow log settings properly configured to write events?
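A quick way to sanity-check the three items above from the client host might look like this (the user name is a placeholder):

```shell
# 1) Does the monitoring user have the expected privileges?
mysql -u pmm -p -e "SHOW GRANTS FOR CURRENT_USER()"
# 2) Is PERFORMANCE_SCHEMA enabled?
mysql -u pmm -p -e "SHOW GLOBAL VARIABLES LIKE 'performance_schema'"
# 3) Are the slow log settings writing events?
mysql -u pmm -p -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('slow_query_log','long_query_time','log_output')"
```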

Slow Log Configuration

These are the recommended settings for the slow log on Percona Server for MySQL. I prefer the slow log over P_S because you get the InnoDB storage engine information along with other extended query properties (which are not available in upstream MySQL, in RDS, or via PERFORMANCE_SCHEMA):

log_output=file
slow_query_log=ON
long_query_time=0
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
userstat=1

User Permissions

You’ll want to use these permissions for the PMM user:

CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';

PERFORMANCE_SCHEMA

Using PERFORMANCE_SCHEMA is less detailed but comes with the benefit that you’re writing to and reading from an in-memory-only object, so you’re saving IOPS to disk. Further, if you’re in AWS or another DBaaS, you generally don’t get raw access to the on-disk slow log, so PERFORMANCE_SCHEMA can be your only option.

PERFORMANCE_SCHEMA turned on

The latest versions of Percona Server for MySQL and Community MySQL ship with PERFORMANCE_SCHEMA enabled by default, but sometimes users disable it.  If you find it is disabled, a restart of mysqld is required in order to enable it.

You want to make sure your my.cnf includes:

performance_schema=ON

You can check via a running MySQL instance by executing:

mysql> show global variables like 'performance_schema';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| performance_schema | ON    |
+--------------------+-------+

1 row in set (0.04 sec)

PERFORMANCE_SCHEMA configuration

You’ll need to make sure you enable the following consumers so that mysqld writes events to the relevant P_S tables:

select * from performance_schema.setup_consumers WHERE ENABLED='YES';
+----------------------------------+---------+
| NAME                             | ENABLED |
+----------------------------------+---------+
| events_statements_current        | YES     |
| events_statements_history        | YES     |
| global_instrumentation           | YES     |
| thread_instrumentation           | YES     |
| statements_digest                | YES     |
+----------------------------------+---------+

5 rows in set (0.00 sec)
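If any of those rows are missing, the consumers can be enabled at runtime; this sketch assumes a user with UPDATE privilege on performance_schema, and note that the change does not survive a restart (persist it with the performance-schema-consumer-* options in my.cnf):

```shell
# Enable the statement consumers that Query Analytics relies on.
mysql -e "UPDATE performance_schema.setup_consumers
          SET ENABLED='YES'
          WHERE NAME IN ('events_statements_current',
                         'events_statements_history',
                         'statements_digest')"
```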

Are there any specific limitations when using PMM for monitoring AWS Aurora?

The most significant limitation is that you cannot access the Slow Log and thus must configure for PERFORMANCE_SCHEMA as the query datasource.  See the previous section on how to configure PERFORMANCE_SCHEMA as needed for PMM Query Analytics.

One great feature of PMM is our native support for AWS Aurora. We have a specific dashboard for those Aurora-only features:

PMM MySQL Amazon Aurora Details dashboard

Thanks for attending!

If you attended (or watched the video), please share via comments any takeaways or further questions you may have! And let me know if you enjoyed my jokes!

Jan
06
2021
--

Google Cloud Platform: MySQL at Scale with Reliable HA Webinar Q&A

MySQL at Scale with Reliable HA

Earlier in November, we had a chance to present the “Google Cloud Platform: MySQL at Scale with Reliable HA.” We discussed different approaches to hosting MySQL in Google Cloud Platform with the available options’ pros and cons. This webinar was recorded and can be viewed here at any time. We had several great questions, which we would like to address and elaborate on the answers given during the webinar.


Q: What is your view on Cloud SQL High Availability in Google Cloud?

A: Google Cloud SQL provides High Availability through regional instances. If your Cloud SQL database is regional, it means that there’s a standby instance in another zone within the same region. Both instances (primary and standby) are kept synced through synchronous replication on the persistent disk level. Thanks to this approach, in case of an unexpected failover, no data is lost. The biggest disadvantage of this approach is that you have to pay for standby resources even though you can’t use the standby instance for any traffic, which means you double your costs with no performance benefits. Failover typically takes more than 30 seconds.

To sum up, High Availability in Google Cloud SQL is reliable, but it can be expensive, and failover time is not always fast enough for critical applications.

 

Q: How would one migrate from Google Cloud SQL to AWS RDS?

A: The easiest way to migrate, if you can afford downtime, is to stop the write workload to the Cloud SQL instance, take a logical backup (mysqldump or mydumper), restore it on AWS RDS, and then move the entire workload to AWS RDS. In most cases, though, that much downtime is not acceptable. The situation is more complex when you want to migrate with no (or minimal) downtime.

To avoid downtime, you need to establish replication between your Cloud SQL (source) and RDS instances (replica). Cloud SQL can be used as a source instance for external replicas, as described in this documentation. You can take a logical backup from running a Cloud SQL instance (e.g., using mydumper), restore it to RDS and establish the replication between Cloud SQL and RDS. Using an external source for RDS is described here. It’s typically a good idea to use a VPN connection between both cloud regions to ensure your connection is secure and the database is not exposed to the public internet. Once replication is established, the steps are as follows:

  • Stop write traffic on Google Cloud SQL instance
  • Wait for the replication to catch up (sync all binlogs)
  • Make RDS instance writable and stop Cloud SQL -> RDS replication
  • Move write traffic to the RDS instance
  • Decommission your Cloud SQL instance
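A hedged sketch of what the cutover commands can look like, assuming replication is already flowing and using the mysql.rds_* stored procedures that RDS provides (the host name is a placeholder):

```shell
# After stopping writes on Cloud SQL, wait for the RDS replica to catch up.
mysql -h my-rds-host -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master  # wait for 0
# Detach RDS from the external source and clear the replication config.
mysql -h my-rds-host -e "CALL mysql.rds_stop_replication"
mysql -h my-rds-host -e "CALL mysql.rds_reset_external_master"
```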

The AWS DMS service can also be used as an intermediary in this operation.

 

Q: Is replication possible cross-cloud, e.g., Google Cloud SQL to AWS RDS, AWS RDS to Google Cloud SQL? If GCP is down, will RDS act as a primary and vice versa?

A: In general, replication between clouds is possible (see the previous question). Both Google Cloud SQL and AWS RDS can act as source and replica, including external instances as a part of your replication topology. High-availability solutions, though, in both cases, are very specific for a cloud provider implementation, and they can’t cooperate. So it’s not possible to automatically failover from RDS to GCP and vice versa. For such setups, we would recommend custom installation on Google Compute Instance and AWS EC2 with Percona Managed Database Services – if you don’t want to manage such a complex setup on your own.



Q: How did you calculate IOPS and throughput for the storage options?

A: We did not calculate the presented values in any way. Those are taken directly from Google Cloud Platform Documentation.


Q: How does GCP achieve synchronous replication?

A: Synchronous replication is possible only between the source and respective standby instance; it’s impossible to have synchronous replication between the primary and your read replicas. Each instance has its own persistent disk. Those disks are kept in sync – so replication happens on the storage layer, not the database layer. There are no implementation details about how it works available.


Q: Could you explain how to keep the primary instance available and writable during the maintenance window?

A: It’s not possible to guarantee the primary instance’s availability. Remember that even if you choose a maintenance window when you can accept downtime, it may or may not be honored (it’s just a preference). Maintenance events can happen at any point in time if they’re critical, and they may not finish during the assigned window. If your application can’t accept that, we recommend designing a highly available solution instead, e.g., with Percona XtraDB Cluster on Google Compute Engine instances. Such a solution won’t have these maintenance window problems.

Jan
04
2021
--

Converting MongoDB to Percona Server for MongoDB Webinar Q&A

Converting MongoDB to Percona Server for MongoDB Webinar Q&A

We had great attendance, questions, and feedback from our “Converting MongoDB to Percona Server for MongoDB” webinar, which was recorded and can be viewed here. You can view another Q&A from a previous webinar on “Converting MongoDB to Percona Server for MongoDB” here. Without further ado, here are your questions and their responses.

 

Q: If migrating from MongoDB Enterprise Edition to Percona Server for MongoDB, can implementations that use LDAP or Kerberos be migrated using the replica set takeover method to avoid excessive downtime?

A: The intended design when using these two features is that Percona Server for MongoDB (PSMDB) remains a true drop-in replacement. This means no configuration changes will be necessary. This should be tested before go-live to ensure success. 

The above is valid for the Audit Plugin as well, a feature that upstream includes only in the Enterprise version but that is free in PSMDB.

 

Q: Does the replica set takeover method also work for sharded implementations of MongoDB that use configdb replica sets?

A: Yes, it does, although there are a few more steps to consider, as you do not want chunks migrating while you are upgrading.

To convert a sharded cluster to Percona Server for MongoDB, you would:

  1. Disable the balancer (sh.stopBalancer())
  2. Convert the config servers (replica set takeover method)
  3. Upgrade the shards (replica set takeover method)
  4. Upgrade the mongos instances
  5. Re-enable the balancer (sh.startBalancer())
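Sketched against a mongos (the host name is a placeholder), the balancer steps bracketing the takeover look like:

```shell
# Step 1: stop the balancer and confirm its state before any takeover work.
mongo --host my-mongos:27017 --eval 'sh.stopBalancer(); sh.getBalancerState()'
# ... steps 2-4: replica set takeover of config servers, shards, and mongos ...
# Step 5: re-enable the balancer once every component runs PSMDB.
mongo --host my-mongos:27017 --eval 'sh.startBalancer()'
```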

 

Q: Is it necessary to make any change on the driver/application side?

A: No. The drivers have the same compatibility for PSMDB.

 

Q: Does Percona Monitoring and Management (PMM) support alerting?

A: Yes, PMM supports alerting. This blog post discusses the new and upcoming PMM native alerting, this documentation shows how to configure Prometheus AlertManager integration, and this documentation shows how to utilize Grafana Alerts. 

 

Q: When is best to migrate from MySQL to MongoDB, and in what scenario would MongoDB be the best replacement?

A: This is a complicated question without a simple answer. It depends on the application workload, internal business directives, internal technology directives, and in-house expertise availability. In general, we recommend choosing the right tool for the right job. In that sense, MySQL was built for structured, relational, and transactional workloads, while MongoDB was built for an unstructured, JSON document model without a lot of transactions linking collections. While you technically can cross-pollinate both models between MySQL and MongoDB, we do not recommend it unless there is good reason. This is the perfect scenario to engage Percona consulting for true expert input in the decision-making process.

Aug
17
2020
--

Q & A on Webinar “Converting MongoDB to Percona Server for MongoDB”

Converting MongoDB to Percona Server for MongoDB

We had great attendance, questions, and feedback from our “Converting MongoDB to Percona Server for MongoDB” webinar, which was recorded and can be viewed here. Without further ado, here are your questions and their responses.

Q: What is the recommended path to go from MongoDB Community 3.4 (no Percona) to Percona for MongoDB 4.2?

We would recommend upgrading on the Community version from 3.4 -> 3.6 -> 4.0 -> 4.2, with testing and validation at each major version milestone. Once you are on Community 4.2, you can upgrade to Percona Server for MongoDB 4.2. Because PSMDB is a true drop-in replacement, you can really do this at any point, but I would do it separately from a major version upgrade.

Q: How does migration work when replica is configured to use “requireSSL”?

Migration works exactly the same as the steps described in the webinar. SSL connections are part of MongoDB Community, so they function identically in Percona Server for MongoDB.

Q: What about migrating encrypted instances to Percona? I guess we need to do initial sync from scratch.

Yes, that is correct. Because encryption affects the way data is stored, full syncs are required. Nodes can be added to an existing enterprise replica set for a close to zero downtime migration.

Q: Do you have any benchmark details with encryption version of Percona?

We have not published any benchmarks for this. However, by default Percona Server for MongoDB uses the AES256-CBC cipher mode. If you want to use the AES256-GCM cipher mode, use the encryptionCipherMode parameter to change it. In general, the CBC and GCM cipher modes work differently; compared to each other, CBC is faster and GCM is safer. I found some interesting discussion and a benchmark here: https://kazoo.ga/quick-benchmark-cbc-vs-gcm/.
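For context, a minimal sketch of starting PSMDB with data-at-rest encryption in GCM mode (the key file and dbpath below are placeholder paths; see the PSMDB encryption documentation for your version’s exact options):

```shell
# Start mongod with encryption enabled and the GCM cipher mode selected.
mongod --enableEncryption \
       --encryptionKeyFile /etc/mongod/encryption-keyfile \
       --encryptionCipherMode AES256-GCM \
       --dbpath /var/lib/mongodb
```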

Q: Is there any difference in storage of the data between MongoDB vs Percona?

No. They both utilize the same storage engines, with WiredTiger as the default. If you’re interested in exploring MMAPv1, I suggest the blog post MongoDB Engines: MMAPV1 Vs WiredTiger.

Q: Can you migrate from Enterprise Advanced to Percona?

Yes.

Q: Are MongoDB dumps compatible?

Yes. This is another migration option if the database can tolerate downtime. The methods in our webinar are designed to limit downtime as much as possible.

Q: How does PMM integrate with the mongo replica set? How do you configure backups? Is it in PMM or a separate process?

Yes, PMM has dashboards and metrics for MongoDB replica sets. Our backup solutions are managed outside of Percona Monitoring and Management at this time, so it would be considered a separate process. Our backups are designed to be run on the MongoDB nodes themselves, so no additional middleware is required for their functionality. PMM should be installed on a separate server as it can potentially be storing, ingesting, and displaying a significant amount of data depending upon the size of your MongoDB deployment. View a free demo of Percona Monitoring and Management.

Q: Will MongoDB drivers that work with MongoDB Community also work with Percona Server for MongoDB?

Yes, MongoDB drivers are compatible with Percona Server for MongoDB. Percona designs its software to be fully compatible with upstream. From an application or connectivity perspective, they are identical.

Q: What would be the situation for a major binary upgrade, 3.4 Percona binary to 4.2 Percona binary?

We would recommend upgrading from 3.4 -> 3.6 -> 4.0 -> 4.2, with testing and validation between each major version upgrade. Be sure to research compatibility changes, test, validate, and have tested rollback plans.

Q: Which tools can we use for backups in Percona? The same as the MongoDB Community Edition, or different ones?

Percona Server for MongoDB supports all backup methods supported by the Community edition – for example, mongodump. We also have our own backup tool, Percona Backup for MongoDB, which supports cluster-wide consistent backups of sharded clusters and replica sets.

Thanks to everyone who attended. If you have any feedback or ideas on future MongoDB workshops, please let us know.


Learn more about the history of Oracle, the growth of MongoDB, and what really qualifies software as open source. If you are a DBA, or an executive looking to adopt or renew with MongoDB, this is a must-read!

Download “Is MongoDB the New Oracle?”

Oct
14
2019
--

Webinar 10/16: What’s New in Percona Monitoring and Management 2?

Percona Monitoring and Management 2

How can you ensure you are properly managing and optimizing the performance of your database environment?

Join Percona’s Product Manager Michael Coburn as he presents “What’s New in Percona Monitoring and Management 2?” and walks you through a practical demonstration. This will take place on Wednesday, October 16, 2019, at 11:00 AM EDT.

Register Now

Percona Monitoring and Management (PMM) is a free, open source platform that supports MySQL, MariaDB, MongoDB, and PostgreSQL environments, providing detailed time-based analysis of your data. PMM allows you to embrace multiple database options and can be used on-premises and in the cloud.

Our recent major upgrade to PMM2 gives you far greater Query Analytics performance and usability and enables you to monitor much larger environments. Key features of PMM2 include:

•    New performance and usability query improvements.
•    New query analytics for PostgreSQL.
•    New ability to tag queries.
•    New administrative API.
•    New service-level dashboards.
•    Enhanced security protocols to ensure your data is safe.

Michael Coburn, Product Manager at Percona, will provide an overview of these new features and a working demonstration of PMM2. You will also have the opportunity to ask questions in the chat window.

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

Sep
23
2019
--

Webinar 9/24: Preparing for Disaster Recovery on MySQL and PostgreSQL

Webinar Preparing for Disaster recovery on MySQL and PostgreSQL

Please join Percona Senior Support Engineer Carlos Tuttle as he presents his talk “Preparing for Disaster recovery on MySQL and PostgreSQL” on Tuesday, September 24th, 2019 at 11:00 AM PDT (UTC-7).

Register Now

Think for a moment about what would happen if your company lost its current production data center; are you ready for that 3 AM call? Is everything in place for bringing your company back online? Is it automated? What would be your first step? Being ready is crucial, and making sure your readiness plan is in tip-top shape is even more important – and this session is all about that!

Join us to discover best practices on HA solutions, backups, failover, and how to apply them in production to be 100% confident your Disaster Recovery implementation will allow you to be up-and-running again in no time.

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

Sep
17
2019
--

Webinar 9/19: Introducing Java Profiling via Flame Graphs

Webinar Introducing Java Profiling via Flame Graphs

Please join Percona Senior Support Engineer Agustin Gallego as he presents his talk “Introducing Java Profiling via Flame Graphs” on Thursday, September 19th, 2019 at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Register Now

In this talk, you’ll be introduced to the basic concepts of profiling Java code using perf_events (aka perf). You’ll see how to collect stack traces and how to use Flame Graphs to provide a dynamic visual display for them. This, in turn, allows you to see if your code has room for improvement – and where it could be improved – in an easy and scalable way.

It doesn’t matter if you are coding a one-hundred or a one-hundred thousand-line application, the Flame Graph visualizer will make it easy to spot CPU hogs! After checking some basic how-to steps and recipes, I will take a deeper dive into using perf and Flame Graphs, to demonstrate how to make the most of these tools.

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

Sep
13
2019
--

Webinar 9/17: Percona XtraBackup vs Mariabackup vs MySQL Enterprise Backup

Webinar Percona XtraBackup vs Mariabackup vs MySQL Enterprise Backup

Please join Percona Senior Support Engineer Juan Pablo Arruti as he presents his talk “Percona XtraBackup vs Mariabackup vs MySQL Enterprise Backup” on Tuesday, September 17th, 2019 at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Register Now

There are a few ways to take a backup. Some of the most widely used tools are Percona XtraBackup, MariaBackup, and MySQL Enterprise Backup.

In this talk, the audience will get an in-depth overview of:

– Differences between the tools
– Comparison of features
– Which tool works on which MySQL/MariaDB flavor
– Supported Storage Engines
– Limitations

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.
