Jan 11, 2016

Percona Monitoring Plugins 1.1.6 release

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.6.

Changelog:

  • Added new RDS instance classes to RDS scripts.
  • Added boto profile support to RDS scripts.
  • Added AWS region support and ability to specify all regions to RDS scripts.
  • Added ability to set AWS region and boto profile on data source level in Cacti.
  • Added period, average time and debug options to pmp-check-aws-rds.py.
  • Added ability to override Nginx server status URL path on data source level in Cacti.
  • Made Memcached and Redis host configurable for Cacti script.
  • Added the ability to look up the master’s server_id when using pt-heartbeat with pmp-check-mysql-replication-delay.
  • Changed how memory stats are collected by the Cacti script and pmp-check-unix-memory.
    Now /proc/meminfo is parsed instead of running the free command. This also fixes pmp-check-unix-memory for EL7.

  • Set the default MySQL connect timeout to 5s for the Cacti script. This can be overridden in the config.
  • Fixed InnoDB transactions count on the Cacti graph for MySQL 5.6 and higher.
  • Fixed the --login-path option in Nagios scripts when used along with other credential options.

Thanks to contributors: David Andruczyk, Denis Baklikov, Mischa ter Smitten, Mitch Hagstrand.

The project is now fully hosted on GitHub, including issues; the Launchpad project is discontinued.

A new tarball is available from the downloads area, or in packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are monitoring and graphing components designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix.

Jun 21, 2015

Percona Monitoring Plugins 1.1.5 release

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.5.

Changelog:

  • Added more DB instance classes to pmp-check-aws-rds.py (issue 1398911)
  • Added configurable query period and average time to pmp-check-aws-rds.py (issue 1436943)
  • Added region support to pmp-check-aws-rds.py (issue 1442980)
  • Added an option to alert when the server is not configured as a replica to pmp-check-mysql-replication-delay (issue 1357017)
  • Added an option to specify master connection to monitor MariaDB multi-source replication
  • Improved usage of lock-free SHOW SLAVE STATUS query (issue 1380690)
  • Fixed reporting of slave lag in ss_get_mysql_stats.php (issue 1389769)

We have also moved the code to GitHub (https://github.com/percona/percona-monitoring-plugins), but the bug tracker is still on Launchpad (https://bugs.launchpad.net/percona-monitoring-plugins).

A new tarball is available from the downloads area, or in packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. In addition, installation and setup of these tools are included as part of Percona’s Remote DBA services. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are high-quality components to add enterprise-grade MySQL monitoring and graphing capabilities to your existing in-house, on-premises monitoring solutions. The components are designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix.


Dec 02, 2014

Tips from the trenches for over-extended MySQL DBAs

This post is a follow-up to my November 19 webinar, “Tips from the Trenches: A Guide to Preventing Downtime for the Over-Extended DBA,” during which I described some of the most common reasons DBAs experience avoidable downtime. The session was aimed at the “over-stretched DBA”: the MySQL DBA who is short of time, or the engineer from another discipline who lacks deep knowledge of MySQL. The over-stretched DBA may be prone to fundamental mistakes that cause downtime: poor response times, operations that block important data, or administrative mishaps stemming from a lack of best-practice monitoring and alerting. (You can download my slides and view the recorded webinar here.)

Monitor the things
One of the aids to keeping the system up and running is ensuring that your finger is on the pulse of the environment. Here on the Percona Managed Services team, we leverage Percona Monitoring Plugins (open source plugins for Nagios, Cacti and Zabbix) to ensure we have visibility into our clients’ operations. Having a handle on basics such as disk space, memory usage and MySQL operational metrics ensures that we avoid trivial downtime that would affect the client’s uptime or, worse, their bottom line.

Road Blocks
One of the most common reasons that an application is unable to serve data to its end users is that access to a table is blocked by another ongoing operation. This can be blamed on a variety of sources: backups, schema changes, poor configuration and long-running transactions can all lend themselves to costly blocking. Understanding the impact of actions on a MySQL server can be the difference between a happy end user and a frustrated one.

During the webinar I referenced some resources and techniques that can help the over-extended DBA avoid downtime. Here are some highlights.

Monitoring and Alerting
It’s important that you have some indication that something is reaching its capacity. It might be disk space, connections to MySQL or the auto_increment limit on a heavily used table. There is quite a landscape to cover, but here are a handful of helpful tools:
* Percona Monitoring Plugins
* Monyog
* New Relic

Query Tuning
Poorly performing SQL can indicate that the configuration is incorrect, that there’s a missing index, or that your development team needs a quick lesson on MySQL anti-patterns. Arm yourself with proof that the SQL statements are substandard using these resources, and work with the source to make things more efficient:
* Percona Cloud Tools
* pt-query-digest, explain, indexes

High Availability
If you need to ensure that your application survives hiccups such as hardware failure or network impairment, a well-deployed HA solution will give you the peace of mind that you can quickly mitigate bumps in the road.
* MHA
* Percona XtraDB Cluster, Galera
* Percona Replication Manager
* LinuxHA/Corosync/DRBD

Backups
A wise man once said, “A backup today saves you tomorrow.” Covering all bases can be the difference between recovering from a catastrophic failure and job hunting. Mixing logical, physical and incremental backups, while adding in some offsite copies, can provide you with a safety net in the event that a small mistake like a dropped table occurs or, worse, all working copies of data and backups are lost in a SAN failure. It happens, so be prepared.
* Percona XtraBackup
* mydumper
* mysqldump
* mysqlbinlog (5.6)
* mylvmbackup

We had some great questions from the attendees and regrettably were unable to answer them all, so here are some of them with my response.

Q: I use MySQL on Amazon RDS. Aren’t most of these operations automated, or do these tips still apply?
A: It’s not completely automated. There are still challenges to address and configuration opportunities, but understanding the limitations of RDS is key. For example, the location and size of the tmpdir is something you are unable to customise on RDS, whereas you would typically review this config in a production environment if your workload required it. Any costly queries that perform operations requiring tmp area to sort (think OLAP) might not be a good fit on RDS due to this limitation. Getting to know the limitations of hosted or DBaaS services is time well spent, and avoids having to explain what keeps taking the application down during peak hours.

Q: What other parts of Percona Toolkit do you recommend for MySQL operations?
A: Percona Toolkit is a well-evolved suite of tools that all MySQL DBAs should familiarize themselves with. In particular, many of these tools fit into my weekly workflow:

Operations

  • pt-online-schema-change
  • pt-table-checksum
  • pt-table-sync

Troubleshooting

  • pt-stalk
  • pt-pmp
  • pt-config-diff

Knowledge Gathering

  • pt-summary
  • pt-mysql-summary
  • pt-duplicate-key-checker

The key with Percona Toolkit is that many common tasks or problems that could cause you to reinvent the wheel are covered by mature, production-ready tools. As with any tool, you should always read the label (or in this case, the documentation) so you’re well aware of what the tools can do, the risks involved, and the features you can make use of.

Q: HA – are there any solutions that you would stay away from?
A: Choosing any particular HA solution is going to be another R&D exercise. You will need to understand the tradeoffs and configuration options and compare products; some might have a higher TCO or lack functionality. Once the chosen solution is implemented, it’s important that the engineers understand the technology well enough to troubleshoot it and to use its functionality when failover needs to be instigated. I prefer HA solutions that are fast to fail over to; some entail starting MySQL from cold.

Q: You mentioned having tested backups. How do you perform this?
A: Percona’s method is to use a dedicated host with access to the backup files. Then, with a combination of MySQL Sandbox and pt-table-checksum, we can discover whether we can trust the files we capture for disaster recovery. Many people underestimate the importance of this task.

Q: Percona Cloud Tools – how much does it cost?
A: Right now it’s a free service. Visit cloud.percona.com for more information, but in a nutshell, Percona Cloud Tools is a hosted service providing access to query performance insights for all MySQL users.

Q: Is there API access to Percona Cloud Tools for application integration?
A: There is currently no public API available. It is on the roadmap, though. We’d be interested to hear more about your use case, so please sign up for the service and try it out. After signing in, all pages include a Feedback link where you can share your thoughts and ideas, such as how you’d like to use a public API.

Q: Can you use MHA with Percona XtraDB Cluster?
A: MHA is not something that can be used with Percona XtraDB Cluster (PXC). It’s common to partner PXC with HAProxy to make sure your writes are going to the appropriate node.

Q: Can MHA make automatic failover? If MHA has automatic failover, what do you recommend? Configure it for automatic failover?
A: MHA can perform automatic failover. Personally, I prefer managed failover. When working with automated failover, it’s important that failback is manual to avoid “flapping.” “Split-brain” is an ailment you don’t want to suffer from either, and auto failover removes human judgment from the decision to relocate operations from a failed node onto a standby node. If you are going to opt for automatic failover, it is advisable to test all potential failure scenarios and to employ a STONITH method to really ensure that the unresponsive node is not serving read/write traffic.

Q: What is the best way to detect database blocking from DML statements? Is there a tool that will show blocking after the fact so you don’t have to catch it real-time?
A: Once again, Percona has a tool for this: pt-deadlock-logger can detect and log deadlocks. Blocking can be detected using “SHOW ENGINE INNODB STATUS” or the information_schema.innodb_locks and innodb_lock_waits tables. Some engineering might be required to get this logged after the fact, but the resources exist, as the sketch below shows.
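
For example, on MySQL 5.5/5.6 a quick way to see who is blocking whom is to join the InnoDB transaction and lock-wait tables. A minimal sketch (assuming the InnoDB information_schema tables are available, as they are by default on 5.5 and later):

mysql> SELECT r.trx_mysql_thread_id AS waiting_thread,
              r.trx_query           AS waiting_query,
              b.trx_mysql_thread_id AS blocking_thread,
              b.trx_query           AS blocking_query
       FROM information_schema.innodb_lock_waits w
       JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
       JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;

Run a query like this on a schedule and log the output, and you have an after-the-fact record of blocking without having to catch it in real time.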

Q: Since you mentioned tinkering with ELK I was wondering if you had any tips on good Kibana dashboards to build to monitor MySQL databases/clusters?
A: ELK is something that I’m looking to publish some information on soon, so watch this space!

Thanks again everyone for the great questions! And as a reminder, you can download my slides and view the recorded webinar here.


Nov 14, 2014

Optimizing MySQL for Zabbix

This blog post was inspired by my visit at the annual Zabbix Conference in Riga, Latvia this year, where I gave a couple of talks on MySQL and beyond.

It was a two-day, single-track event with some 200 participants, a number of interesting talks on Zabbix (and related technologies) and really well-organized evening activities. I was amazed at how well organized the event was and hope to be invited to speak there next year as well. ;) (Just in case you’re not sure what Zabbix is: it is an enterprise-class open source distributed monitoring solution for networks and applications.)

I must secretly confess, it was also the first conference where I honestly enjoyed being on stage and connecting with the audience – I was even looking forward to it rather than being scared as hell (which is what typically happens to me)! I guess it was all about the positive atmosphere, so big thanks to all the speakers and attendees for that. It meant a lot to me.

If I had to mention one negative vibe I heard from attendees, it would be that there was not enough deeply technical content. However, I think this is slightly biased: the people I talked to most were the ones who enjoyed my technical talks, so they were craving more.

And now, without further ado, let me get to the essence of this blog post.

Zabbix and MySQL

The very first thing I did when I arrived at the conference was to approach people who I knew use Zabbix on a large scale and try to figure out the biggest challenges they face. In all cases, it was MySQL, and more specifically, MySQL disk IO.

With that in mind, I would like to suggest a few optimizations that will help MySQL get the best out of your disks (and consequently will help Zabbix get the best out of MySQL) and the available hardware resources in general.

SSD is a game changer

“Will MySQL run better on SSDs?” I’ve been hearing this question over and over again, both publicly and privately.

I can tell you without a shadow of a doubt: if IO is currently your bottleneck, either because some queries take a long time to run and you see diskstats reporting 100-250 reads per second until the query completes (latency), or because you are overloading the disks with requests and wait time suffers (throughput), then SSDs will definitely help, and not just by a little, but by much!

Consider this: the fastest spinning disk (15k RPM) can do at most 250 random IO operations per second (at this point it is limited by physics), and a single query will only ever read from one disk at a time, even if you have a RAID10 made of 16 disks. So if you need to read 15,000 data points to display a graph, reading those data points from disk will take 60s.

An enterprise-class SSD, on the other hand, can do 15,000 or even more 16k random reads per second with a single thread (16k is the size of an InnoDB block). And as you increase concurrency, it only gets better! That means the query in the previous example would take 1s instead of 60s, which is a significant difference. Plus, you can run more requests on the same SSD at the same time and the total number of IO operations will only increase, while a single spinning disk would have to share its 250 IO operations per second between multiple requests.

The only area where SSDs don’t beat spinning disks (yet) is sequential operation, especially single-threaded sequential writes. If that is your typical workload (which might be the case if you’re mostly collecting data and rarely if ever reading it), then you may want to consider other strategies.

MySQL configuration

Besides improving your disk IO subsystem, there are ways to reduce the pressure on IO, and I’m going to cover a few my.cnf variables that will help you with that (and with other things, such as internal contention).

Note that most of these tunables are common for any typical high-performance MySQL setup, though some are specifically suited to Zabbix, because you can relax a few parameters for great effect at the price of, in the worst case, losing up to 1s worth of collected data, which, from discussions during the conference, didn’t seem like a big deal to anyone.

– innodb_buffer_pool_size – if you have a dedicated MySQL server, set it as high as you can (the ceiling would be 75% of total available memory). Otherwise, balance it against other processes on the server, but if the server only runs Zabbix, I would still leave it very high, close to 75% of total RAM.

– innodb_buffer_pool_instances – on MySQL 5.5, set it to 4; on MySQL 5.6, set it to 8 or even 16.

– innodb_flush_log_at_trx_commit = 0 – this is where you compromise durability for significantly improved write throughput, especially if you don’t own a disk subsystem with a non-volatile cache. Basically, the loss you may incur is up to 1s worth of writes during a MySQL or server crash. A lot of websites actually run with that (a lot of websites still run on MyISAM!), and I’m quite sure it’s not an issue for a Zabbix setup.

– innodb_flush_method = O_DIRECT – if you are running Linux, just leave it set to that.

– innodb_log_file_size – you want these transaction logs (there are two of them by default) to hold 1 to 2 hours worth of writes. To determine that, you can have a look at the Zabbix graphs for your MySQL server, or you can run the following from the mysql command line:

mysql> pager grep seq; show engine innodb status\G select sleep(3600); show engine innodb status\G
PAGER set to 'grep seq'
Log sequence number 8373513970951
...
Log sequence number 8373683996767

The difference between the two numbers is how many bytes InnoDB has written during the last hour: here it is 170,025,816 bytes, or roughly 162MB. So on the server above, I would set innodb_log_file_size=128M and end up with 256M of log file space, allowing me to store more than 1h worth of writes in the transaction logs. (See this post on changing the log file size if you run MySQL 5.5 or earlier.)
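
If you prefer to let the server do the arithmetic, you can subtract the two log sequence numbers right in the mysql client (a trivial example using the numbers from the output above):

mysql> SELECT (8373683996767 - 8373513970951) / 1024 / 1024 AS mb_per_hour;

This returns roughly 162, i.e. about 162MB of redo generated per hour, which fits comfortably within 256M of log space.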

– innodb_read_io_threads, innodb_write_io_threads – don’t overthink these; they are not as important as they may seem, especially if you are using asynchronous IO (you can check that by running “show global variables like ‘innodb_use_native_aio’” in the mysql CLI). On MySQL 5.5 and 5.6 you generally want to be using async IO (AIO), so if you are not, check the MySQL error log to understand why. That said, if you are not using AIO and are not going to, just set these values to 8 and leave them there.

– innodb_old_blocks_time = 1000 – this will help you prevent buffer pool pollution due to occasional scans. This is now default in MySQL 5.6 (On 5.5, it needs to be set explicitly).

– innodb_io_capacity – set this to as many write IOPS as your disk subsystem can handle. For SSDs this should be at least a few thousand (2000 could be a good start), while for rotating disks somewhat lower values, 500-800 depending on the number of underlying disks, will do. It is best to benchmark the disks or do the math for actual numbers, but the default of 200 is definitely too low for most systems nowadays.

– sync_binlog=0 – this is the default setting, but just in case it’s above 0, turn it off, unless you run something else besides Zabbix. The price of not synchronising binary logs is that, in the case of a master crash, replicas can get out of sync. But if you are constantly hitting an IO bottleneck due to binary log synchronisation just because you want to avoid the hassle of re-synchronising the slave once every five years when the master crashes, you should reconsider this option.

– query_cache_size=0, query_cache_type=0 – this disables the query cache. Most of the time you don’t want the query cache, and if it isn’t fully disabled by these settings, queries (especially small ones) will likely suffer from query cache mutex contention.

– sort_buffer_size, join_buffer_size, read_rnd_buffer_size – if you have ever configured these variables, cancel those changes (just remove them or comment them out). I find these are the top three mistuned variables on most customer servers, while in many cases it’s best not to touch them at all. Just leave them at their defaults and you’re set.

– tmpdir – sometimes it’s a good idea to point tmpdir to /dev/shm so that on-disk temporary tables are actually written to memory, but there’s one important caveat starting with MySQL 5.5: if you do that, it disables AIO across the board, because tmpfs does not support AIO. So I would monitor the activity on the current tmpdir (usually /tmp) and only switch it to /dev/shm if I saw it being an issue. All of these settings are pulled together in the sketch below.
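
To pull the above together, here is a consolidated my.cnf sketch of the settings discussed in this section. The values are illustrative only (they assume a dedicated server with 32GB of RAM and SSD storage), so adjust the buffer pool size and IO capacity to your own hardware:

[mysqld]
# ~75% of RAM on a dedicated MySQL server
innodb_buffer_pool_size        = 24G
# 4 on MySQL 5.5; 8 or even 16 on MySQL 5.6
innodb_buffer_pool_instances   = 8
# trade durability for write throughput: up to 1s of writes lost on crash
innodb_flush_log_at_trx_commit = 0
innodb_flush_method            = O_DIRECT
# two log files by default, so this gives 256M, i.e. 1-2h of redo on this server
innodb_log_file_size           = 128M
# prevent buffer pool pollution from occasional scans (default on 5.6)
innodb_old_blocks_time         = 1000
# SSD; use something like 500-800 for spinning disks
innodb_io_capacity             = 2000
sync_binlog                    = 0
# disable the query cache
query_cache_size               = 0
query_cache_type               = 0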

MySQL Partitioning

I know that Zabbix now supports partitions for the purpose of easier data pruning; however, I think there are some extra benefits you could get from partitions. Well, actually subpartitions, if you are already partitioning by date.

The Zabbix KPI that you hear about over and over again is the “new values per second” number that you can find in the Zabbix status. Basically, the higher the value (given you have enough values to monitor), the better the throughput of your Zabbix installation. And this is where a lot of people hit Zabbix’s limits: MySQL can’t insert enough new values per second.

Besides the optimizations I have already mentioned above (they should greatly increase your write throughput!), I would encourage you to try out partitions (if you’re not using partitions already) or subpartitions (if you are) BY HASH, as we found that hash partitioning can in some cases increase the throughput of InnoDB.

I did not test it with Zabbix specifically, and as it’s not supported by Zabbix out of the box you would have to hack it to make it work. But if you’ve made all the changes above and you still can’t get enough new values per second (AND it is not the hardware that is limiting you), try partitioning or subpartitioning the key tables by hash.
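
To make this concrete, here is a rough sketch of hash subpartitioning on a date-partitioned Zabbix history table. The table and column names follow the stock Zabbix schema, but the partition layout is purely illustrative; treat it as a starting point for experimentation, not a supported recipe:

ALTER TABLE history
    PARTITION BY RANGE (clock)
    SUBPARTITION BY HASH (itemid)
    SUBPARTITIONS 8 (
        PARTITION p201411 VALUES LESS THAN (UNIX_TIMESTAMP('2014-12-01 00:00:00')),
        PARTITION p201412 VALUES LESS THAN (UNIX_TIMESTAMP('2015-01-01 00:00:00')),
        PARTITION pmax    VALUES LESS THAN (MAXVALUE)
    );

The intuition is that hashing on itemid spreads concurrent inserts across several index trees instead of one, which can reduce internal contention under an insert-heavy workload like Zabbix’s.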

If this sounds interesting but you’re not sure where to start, feel free to contact us and we’ll work with you to make it work.

On MySQL High Availability

There are options to make MySQL highly available, even though many believe that’s not the case. We’ve written a lot about this on our blog, so I will not paraphrase or repeat; instead I would like to point you to a few valuable resources on that topic:

 

Percona Server, Percona XtraDB Cluster, Percona Toolkit – it’s all FREE!

I’m not really sure why, but many people I talked to at the conference thought that Percona software either needs to be bought, or that it has some enterprise features that are not available unless you buy a license.

The truth is that neither is the case. All Percona software is completely free of charge. Feel free to download it from our website or through our repositories, and use it as you please.

See you at the Zabbix conference next year!



Jul 21, 2014

Percona Monitoring Plugins 1.1.4 release

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.4.

Changelog:

* Added login-path support to Nagios plugins with MySQL client 5.6 (bug 1338549)
* Added a new threshold option for delayed slaves to pmp-check-mysql-replication-delay (bug 1318280)
* Added delayed slave support to pmp-check-mysql-replication-running (bug 1332082)
* Updated Nagios plugins and Cacti script to leverage lock-free SHOW SLAVE STATUS in Percona Server (bug 1297442)
* Fixed pmp-check-mysql-replication-delay integer-float issue with MariaDB and MySQL 5.6 (bugs 1245934, 1295795)
* Fixed ss_get_rds_stats.py not being installed with 755 permissions from the package (bug 1316943)
* Fixed Cacti MySQL template item “handler_savepoint_rollback” being of GAUGE type instead of DERIVE (bug 1334173)
* Fixed Zabbix running-slave check issue on some Debian systems (bug 1310723)
* Fixed paths in percona-cacti-templates package (bug 1349564)

A new tarball is available from the downloads area, or RPM and DEB packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. In addition, installation and setup of these tools are included as part of Percona’s Remote DBA services. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are high-quality components to add enterprise-grade MySQL monitoring and graphing capabilities to your existing in-house, on-premises monitoring solutions. The components are designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix and are delivered in the form of templates, plugins, and scripts which make it easy to monitor MySQL performance.


Mar 21, 2014

Percona Monitoring Plugins 1.1.3. Addressed CVE-2014-2569.

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.3.

Changelog:

* Introduced more secure location of PHP script configs to harden a Cacti setup
* Addressed CVE-2014-2569

We have introduced a more secure location, /etc/cacti/, for PHP script configs. Earlier, the only way was to keep .php.cnf configs inside the scripts/ folder, which falls under the web directory of the Cacti setup and thus presents a potential security vulnerability. We strongly recommend moving all .php.cnf files from /usr/share/cacti/scripts/ to /etc/cacti/, and also hardening your Cacti setup.

A new tarball is available from the downloads area, or RPM and DEB packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. In addition, installation and setup of these tools are included as part of Percona’s Remote DBA services. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are high-quality components to add enterprise-grade MySQL monitoring and graphing capabilities to your existing in-house, on-premises monitoring solutions. The components are designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix and are delivered in the form of templates, plugins, and scripts which make it easy to monitor MySQL performance.


Mar 14, 2014

Percona Monitoring Plugins 1.1.2, now with Amazon RDS support

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.2.

Changelog:

* Added Nagios plugin and Cacti template for Amazon RDS
* Added Nagios config template to the documentation
* Added an option to pmp-check-pt-table-checksum to check MAX(ts) of latest checksum
* Added generic Nagios plugin for PT tables
* Extended pmp-check-mysql-processlist with max user connections check
* Zabbix MySQL.running-slave item failed with MySQL 5.6 (bug 1272358)
* ss_get_mysql_stats and MariaDB does not use have_response_time_distribution (bug 1285888)
* Cacti Monitoring plugins and SNMP via TCP (bug 1268552)

Shortly about the new features.

Amazon RDS support in Nagios and Cacti.
We have created a Nagios plugin and a Cacti template so you can add Amazon RDS instances to your monitoring system. The plugins retrieve various RDS metrics from Amazon CloudWatch. With Nagios you can have the following checks per instance: RDS Status, RDS Load Average, RDS Free Storage, RDS Free Memory. Also check out the examples of Cacti RDS graphs.

Nagios config template. We never had a good Nagios config template in the documentation, with examples of how to put the plugins together, so we have updated the Configuration Best Practices section with one.

Freshness check for checksumming. A new option of the pmp-check-pt-table-checksum plugin allows you to specify an interval in days over which to ensure pt-table-checksum was run. It is useful in cases when the cron job doing the checksumming suddenly stops working.
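
Under the hood, the freshness check boils down to looking at the timestamp of the newest checksum row. Conceptually (assuming pt-table-checksum writes to its default percona.checksums table), it is equivalent to:

mysql> SELECT MAX(ts) FROM percona.checksums;

with an alert raised when that timestamp is older than the specified number of days.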

New generic Nagios plugin for PT tables. pmp-check-mysql-ts-count looks at a table, counts the number of rows since the last interval, and alerts if this exceeds the threshold. This could be a table referenced by tools from Percona Toolkit, such as pt-deadlock-logger, pt-kill, pt-fk-error-logger, or a custom table you supply.
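
Conceptually, this check is just a row count over the most recent interval. For example, against a pt-deadlock-logger destination table (the schema and table name here are only an example):

mysql> SELECT COUNT(*) FROM percona.deadlocks WHERE ts > NOW() - INTERVAL 1 HOUR;

The plugin alerts when that count exceeds the configured threshold.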

MySQL max user connections check. The pmp-check-mysql-processlist plugin has a new option to alert when @@max_user_connections is configured on MySQL and any user reaches this limit.

A new tarball is available from the downloads area, or RPM and DEB packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. In addition, installation and setup of these tools are included as part of Percona’s Remote DBA services. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are high-quality components to add enterprise-grade MySQL monitoring and graphing capabilities to your existing in-house, on-premises monitoring solutions. The components are designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix and are delivered in the form of templates, plugins, and scripts which make it easy to monitor MySQL performance.


Dec 18, 2013

Percona Monitoring Plugins 1.1, now with Zabbix support

Percona is glad to announce the release of Percona Monitoring Plugins 1.1. The components are designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix, and are delivered in the form of templates, plugins, and scripts.

In this release we have added a MySQL template for Zabbix 2.0.x, adapted from the existing Cacti one.


Changelog:

* Added MySQL template for Zabbix 2.0.x (first release)
* Added FreeBSD support to Nagios plugins, partially rewritten pmp-check-unix-memory (bugs 1249575, 1244081)
* Added new options to ss_get_mysql_stats.php to better support pt-heartbeat (bugs 1253125, 1253130)
* ss_get_mysql_stats.php script was opening multiple connections to the server (bug 1255371)
* SQL query for the idle_blocker_duration check in pmp-check-mysql-innodb did not conform to the ONLY_FULL_GROUP_BY SQL mode (bug 1240417)

A new tarball is available from the downloads area, or RPM and DEB packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. In addition, installation and setup of these tools are included as part of Percona’s Remote DBA services. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins

Percona Monitoring Plugins are high-quality components to add enterprise-grade MySQL monitoring and graphing capabilities to your existing in-house, on-premises monitoring solutions. The components are designed to integrate seamlessly with widely deployed solutions such as Nagios and Cacti and are delivered in the form of templates, plugins, and scripts which make it easy to monitor MySQL performance.

