Oct
08
2020
--

MySQL 101: Troubleshooting MySQL with Percona Monitoring and Management


Percona Monitoring and Management (PMM) is a free and open source platform for managing and monitoring MySQL, MongoDB, and PostgreSQL databases. In this blog, we will look at troubleshooting MySQL issues and performance bottlenecks with the help of PMM.

We can start troubleshooting MySQL after selecting the appropriate server or cluster from the PMM dashboard. There are many aspects to check which should get priority depending on the issue you are facing.

Things to look at:

OS:

CPU Usage:

Check whether there is a spike or a gradual increase in CPU usage on the database server beyond the normal pattern. If so, note the timeframe when the spike or increasing load started, and review the database connection/thread details from the MySQL dashboard for that time interval.

MySQL CPU usage

CPU Saturation Metrics and Max Core Usage:

This is an important metric as it shows the saturation level of the CPU with normalized CPU load. Normalized CPU Load is very helpful for understanding when the CPU is overloaded. An overloaded CPU causes response times to increase and performance to degrade.

CPU Saturation Metrics and Max Core Usage

Disk latency/ Disk IO utilization:

Check whether any latency is observed for the disk. If Disk IO utilization reaches 100%, queries will suffer latency because the disk cannot keep up with the reads/writes, causing a gradual pile-up of queries and hence the spike. The issue might be with the underlying disk or hardware.

Disk latency/ Disk IO utilization

Memory Utilization:

Any sudden change in memory consumption could indicate a process hogging memory, for example, the MySQL process when many concurrent queries or long-running queries are in progress. An increase can also be seen when a backup job or a scheduled batch job is running on the server.

Network Details:

Check the inbound and outbound network traffic for the duration of the issue for any sudden dip, which would point to network problems.

MySQL:

MySQL Client Thread Activity / MySQL Connections:

If the issue at hand is a large number of running threads or a connection spike, check the MySQL Connections and MySQL Client Thread Activity graphs and note the timeframe when these connections started increasing. More details about the threads (or queries running) can be seen on the InnoDB dashboards. As mentioned previously, Disk IO utilization reaching 100% can also cause connections to pile up, so it is important to check all aspects before coming to any conclusion.

MySQL Client Thread Activity / MySQL Connections
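To cross-check what the graphs show directly on the server, the same counters are available from the mysql client. A minimal, read-only check (interpretation depends on your workload):

mysql> SHOW GLOBAL STATUS LIKE 'Threads_%';
mysql> SHOW GLOBAL STATUS LIKE 'Max_used_connections';
mysql> SHOW PROCESSLIST;

Threads_running is usually the number to watch, while Threads_connected compared with max_connections tells you how much headroom is left.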

MySQL Slow Queries:

If queries were performing slowly, this would be reported in the MySQL Slow Queries graph. The cause could be existing queries slowing down because of multiple concurrent queries or underlying disk load, or newly introduced queries that need analysis and optimization. Look at the timeframe involved, then check the slow logs and QAN to get the exact queries.

MySQL Slow Queries
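If you need to confirm that the slow query log is actually capturing these statements, you can check and adjust it on the fly; the one-second threshold below is only an example, so pick a value that fits your workload:

mysql> SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
mysql> SHOW GLOBAL VARIABLES LIKE 'long_query_time';
mysql> SET GLOBAL slow_query_log = ON;
mysql> SET GLOBAL long_query_time = 1;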

MySQL Aborted Connections:

If a large number of users or threads were unable to establish a connection to the server, this would be reported as a spike in aborted connections.
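The underlying counters can also be read directly; a steadily growing Aborted_connects usually points to authentication, network, or max_connections problems, while Aborted_clients indicates clients dropping established connections:

mysql> SHOW GLOBAL STATUS LIKE 'Aborted_clients';
mysql> SHOW GLOBAL STATUS LIKE 'Aborted_connects';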

InnoDB Metrics:

InnoDB Transactions:

This metric shows the History List Length (HLL) on the server, which is basically the amount of undo log kept to provide a consistent view of the data for any particular connection. An increase in HLL over time is caused by long-running transactions on the server. If you see a gradual increase in HLL, go to the server, check SHOW ENGINE INNODB STATUS\G, look for the culprit query/transaction, and kill it if it is not truly needed. While not an immediate issue, a growing HLL can hamper the performance of a server once the value reaches the millions and keeps increasing.

InnoDB Transactions
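Besides SHOW ENGINE INNODB STATUS, the currently open transactions can be listed from information_schema to find the oldest one. A rough sketch:

mysql> SELECT trx_id, trx_started, trx_mysql_thread_id, trx_rows_modified,
    ->        LEFT(trx_query, 80) AS query
    ->   FROM information_schema.innodb_trx
    ->  ORDER BY trx_started
    ->  LIMIT 5;

The oldest trx_started is usually the transaction holding back purge; if it is truly not needed, you can KILL its trx_mysql_thread_id.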

InnoDB Row Operations:

When you see a spike in thread activity, check here next for more details about the running threads. Was the spike in reads, inserts, or deletes? This panel shows the count of each row operation, which helps you understand what kind of queries were running on the server and find the particular job responsible.

InnoDB Row Operations
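The same row operation counters are exposed as status variables, so you can sample them directly to see which operation is spiking:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_rows_%';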

InnoDB Locking > InnoDB Row Lock Time:

Row Lock Waits indicates how many times per second a transaction waited on a row lock. Row Lock Wait Load is a rolling five-minute average of Row Lock Waits. Average Row Lock Wait Time is the row lock wait time divided by the number of row locks.

InnoDB Locking > InnoDB Row Lock Time
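On the server itself, the corresponding counters can be read as shown below; Innodb_row_lock_time is in milliseconds, and dividing it by Innodb_row_lock_waits gives the average wait described above:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';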

InnoDB Logging > InnoDB Log file usage Hourly:

If there is an increase in writes on the server, we can see an increase in hourly log file usage; the graph shows, in GB, how much data was written to the ib_logfiles (the redo logs) each hour before the changes are flushed to the data files.
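The raw counter behind this graph is Innodb_os_log_written (the total number of bytes written to the redo log), which you can sample over an interval to estimate the hourly write rate the panel charts:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';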

Performance Schema Details:

PS File IO (Events):

This dashboard will provide details on the wait IO for various events as shown in the image:

PS File IO (Events)

PS File IO (Load):

Similar to events, this will display the load corresponding to the event.
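The same data can be pulled straight from performance_schema. For example, the events with the most accumulated IO wait (SUM_TIMER_WAIT is in picoseconds, hence the division):

mysql> SELECT event_name, count_read, count_write,
    ->        ROUND(sum_timer_wait / 1e12, 2) AS total_wait_seconds
    ->   FROM performance_schema.file_summary_by_event_name
    ->  ORDER BY sum_timer_wait DESC
    ->  LIMIT 5;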

QAN:

PMM Query Analytics:

The Query Analytics dashboard shows how queries are executed and where they spend their time. To get more information out of QAN you should have QAN prerequisites enabled. Select the timeframe and check for the slow queries that caused most of the load. Most probably these are the queries that you need to optimize to avoid any further issues.

PMM Query Analytics

Daniel’s blog on How to find query slowdowns using PMM will help you troubleshoot better with QAN.

For more details on MySQL Troubleshooting and Performance Optimizations, you can check our CEO Peter Zaitsev’s webinar on MySQL Troubleshooting and Performance Optimization with PMM.

Sep
30
2020
--

MySQL 101: Tuning MySQL After Upgrading Memory


In this post, we will discuss what to do when you add more memory to your instance. Adding memory to a server where MySQL is running is common practice when scaling resources.

First, Some Context

Scaling resources just means adding more resources to your environment, and this can be split into two main ways: vertical scaling and horizontal scaling.

Vertical scaling is increasing hardware capacity for a given instance, thus having a more powerful server, while horizontal scaling is adding more servers, a pretty standard approach for load balancing and sharding.

As traffic grows, working datasets get bigger, and we start to suffer because data that doesn't fit into memory has to be retrieved from disk. This is a costly operation, even with modern NVMe drives, so at some point we will need to deal with one of the scaling solutions mentioned above.

In this case, we will discuss adding more RAM, which is usually the fastest and easiest way to scale hardware vertically; more memory is also probably the resource MySQL benefits from most.

How to Calculate Memory Utilization

First of all, we need to be clear about which variables allocate memory during MySQL operations; we will cover only the common ones, as there are a bunch of them. We also need to know that some variables allocate memory globally, while others are allocated per thread.

For the sake of simplicity, we will cover this topic considering the usage of the standard storage engine: InnoDB.

We have globally allocated variables:

key_buffer_size: a MyISAM setting that should be set to 8-16M; anything above that is just wrong, because we shouldn't use MyISAM tables unless there is a particular reason to. A typical scenario is MyISAM being used only by the system tables, which are small (this is valid for versions up to 5.7; in MySQL 8 the system tables were migrated to the InnoDB engine). So the impact of this variable is negligible.

query_cache_size: 0 is default and removed in 8.0, so we won’t consider it.

innodb_buffer_pool_size: the cache where InnoDB places pages to perform operations. The bigger, the better.

Of course, there are others, but their impact is minimal when running with defaults.

Also, there are other variables that are allocated per thread (or open connection):
read_buffer_size, read_rnd_buffer_size, sort_buffer_size, join_buffer_size and tmp_table_size, and a few others. All of them, by default, work very well, as the allocation is small and efficient. The main potential issue arises when many connections hold these buffers for some time and add extra memory pressure. The ideal situation is to control how many connections are being opened (and used) and try to reduce that number to the lowest that doesn't hurt the application.

But let's not lose focus: we have more memory, and we need to know how to tune it properly to make the best use of it.

The most memory-impacting setting we need to focus on is innodb_buffer_pool_size, as this is where almost all the magic happens and it is usually the most significant memory consumer. There is an old rule of thumb that says the size of this setting should be around 75% of available memory, and some cloud vendors set this value to total_memory*0.75.

I said “old” because that rule was good when running instances with 8G or 16G of RAM was common, so allocating roughly 6G out of 8G or 13G out of 16G used to be logical.

But what if we run an instance with 100G or even 200G? It's not uncommon to see this type of hardware nowadays, so should we use 80G out of 100G, or 160G out of 200G? In other words, should we leave 20G to 40G of memory unallocated just for filesystem cache operations? While these filesystem operations are not useless, I don't see the OS needing more than 4G-8G for this purpose on a dedicated DB server. Also, it is recommended to use the O_DIRECT flushing method for InnoDB to bypass the filesystem cache.
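You can verify which flush method an instance is currently using; note that innodb_flush_method is not dynamic, so changing it requires an edit to my.cnf and a restart:

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';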

Example

Now that we understand the primary variables that allocate memory, let's check a good use case I'm currently working on. Assume this system:

$ free -m
      total      used     free    shared    buff/cache    available
Mem: 385625    307295    40921         4         37408        74865

So roughly 380G of RAM, a nice amount of memory. Now let's check the maximum potential allocation considering the maximum used connections.

A little disclaimer here: this query is not entirely accurate and can diverge from real results, but it gives us a sense of what could potentially be allocated. We can also take advantage of the performance_schema database, though that may require enabling some instruments that are disabled by default.

mysql > show global status like 'max_used_connections';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections |    67 |
+----------------------+-------+
1 row in set (0.00 sec)

So with a maximum of 67 connections used, we can get:

mysql > SELECT ( @@key_buffer_size
-> + @@innodb_buffer_pool_size
-> + 67 * (@@read_buffer_size
-> + @@read_rnd_buffer_size
-> + @@sort_buffer_size
-> + @@join_buffer_size
-> + @@tmp_table_size )) / (1024*1024*1024) AS MAX_MEMORY_GB;
+---------------+
| MAX_MEMORY_GB |
+---------------+
| 316.4434      |
+---------------+
1 row in set (0.00 sec)

So far, so good; we are within memory range. Now let's see how big innodb_buffer_pool_size is and whether it is well sized:

mysql > SELECT (@@innodb_buffer_pool_size) / (1024*1024*1024) AS BUFFER_POOL_SIZE;
+------------------+
| BUFFER_POOL_SIZE |
+------------------+
| 310.0000         |
+------------------+
1 row in set (0.01 sec)

So the buffer pool is 310G, roughly 82% of total memory, and the total potential usage so far is around 84%, which leaves around 60G of memory not being used by MySQL. Well, it is being used by the filesystem cache, which, in the end, is not used by InnoDB.

OK, now let's get to the point: how to properly configure memory so it is used effectively by MySQL. From pt-mysql-summary we know that the buffer pool is fully filled:

Buffer Pool Size | 310.0G
Buffer Pool Fill | 100%

Does this mean we need more memory? Maybe. So let's check how many disk operations we have on this instance, which we know has a working dataset that doesn't fit in memory (the very reason we increased the memory size), using this command:

mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
| Innodb_buffer_pool_read_requests | 99857480858|
| Innodb_buffer_pool_reads         | 598600690  |
| Innodb_buffer_pool_read_requests | 274985     |
| Innodb_buffer_pool_reads         | 1602       |
| Innodb_buffer_pool_read_requests | 267139     |
| Innodb_buffer_pool_reads         | 1562       |
| Innodb_buffer_pool_read_requests | 270779     |
| Innodb_buffer_pool_reads         | 1731       |
| Innodb_buffer_pool_read_requests | 287594     |
| Innodb_buffer_pool_reads         | 1567       |
| Innodb_buffer_pool_read_requests | 282786     |
| Innodb_buffer_pool_reads         | 1754       |

Innodb_buffer_pool_read_requests: page reads satisfied from memory (good)
Innodb_buffer_pool_reads: page reads from disk (bad)
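If you prefer a single number, the two counters can be combined into a hit percentage. A sketch using performance_schema (available in 5.7 and later); the closer to 100%, the better:

mysql> SELECT ROUND(100 - 100 *
    ->        (SELECT variable_value FROM performance_schema.global_status
    ->          WHERE variable_name = 'Innodb_buffer_pool_reads') /
    ->        (SELECT variable_value FROM performance_schema.global_status
    ->          WHERE variable_name = 'Innodb_buffer_pool_read_requests'), 4) AS buffer_pool_hit_pct;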

As you may notice, we still get some reads from the disk, and we want to avoid them, so let’s increase the buffer pool size to 340G (90% of total memory) and check again:

mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
| Innodb_buffer_pool_read_requests | 99937722883 |
| Innodb_buffer_pool_reads         | 599056712   |
| Innodb_buffer_pool_read_requests | 293642      |
| Innodb_buffer_pool_reads         | 1           |
| Innodb_buffer_pool_read_requests | 296248      |
| Innodb_buffer_pool_reads         | 0           |
| Innodb_buffer_pool_read_requests | 294409      |
| Innodb_buffer_pool_reads         | 0           |
| Innodb_buffer_pool_read_requests | 296394      |
| Innodb_buffer_pool_reads         | 6           |
| Innodb_buffer_pool_read_requests | 303379      |
| Innodb_buffer_pool_reads         | 0           |

Now we are barely going to disk, and IO pressure was released; this makes us happy, right?
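For reference, since MySQL 5.7 the buffer pool can be resized online, so a change like the one above does not require a restart. A sketch (the value is rounded up to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances, and the resize runs in the background):

mysql> SET GLOBAL innodb_buffer_pool_size = 340 * 1024 * 1024 * 1024;
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_resize_status';

Remember to also persist the new value in my.cnf so it survives the next restart.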

Summary

If you increase the memory size of a server, you mostly need to focus on innodb_buffer_pool_size, as this is the most critical variable to tune. Allocating 90% to 95% of the total available memory on big systems is not bad at all, as the OS requires only a few GB to run correctly, plus a few more for swap.

Also, check the maximum number of connections required (and used), as this is a common cause of memory issues. If you need to run with 1000 connections open, then allocating 90% of memory to the buffer pool may not be possible, and some additional actions may be required (i.e., adding a proxy layer or a connection pool).

From MySQL 8, we have a new variable called innodb_dedicated_server, which auto-calculates the memory allocation. While this variable is really useful as an initial approach, it may under-allocate memory on systems with more than 4G of RAM, as it sets the buffer pool size to detected server memory * 0.75, so on a 200G server we get only 150G for the buffer pool.

Conclusion

Vertical scaling is the easiest and fastest way to improve performance, and it is also cheaper – but not magical. Tuning variables properly requires analysis and understanding of how memory is being used. This post focused on the essential variables to consider when tuning memory allocation, specifically innodb_buffer_pool_size and max_connections. Don’t over-tune when it’s not necessary and be cautious of how these two affect your systems.

Jun
26
2018
--

Webinar 6/27: MySQL Troubleshooting Best Practices: Monitoring the Production Database Without Killing Performance


Please join Percona's Principal Support Escalation Specialist Sveta Smirnova as she presents Troubleshooting Best Practices: Monitoring the Production Database Without Killing Performance on Wednesday, June 27th at 11:00 AM PDT (UTC-7) / 2:00 PM EDT (UTC-4).

 

During the MySQL Troubleshooting webinar series, I covered many monitoring and logging tools such as:

  • General, slow, audit, binary, error log files
  • Performance Schema
  • Information Schema
  • System variables
  • Linux utilities
  • InnoDB monitors
  • PMM

However, I did not spend much time on the impact these instruments have on overall MySQL performance. And they do have an impact.

And this is the conflict many people face. MySQL Server users try exploring these monitoring instruments, see that they slow down their installations, and turn them off. This is unfortunate. If the instrument that can help you resolve a problem is OFF, you won't have the information needed to understand when, how, and why the issue occurred. In the best case, you'll re-enable instrumentation and wait for the next occurrence of the disaster. In the worst case, you'll try various fixes without any real knowledge of whether they solve the problem.

This is why it is important to understand the impact monitoring tools have on your database, and therefore how to minimize it.

Understanding and controlling the impact of MySQL monitoring tools

In this webinar, I cover why certain monitoring tools affect performance, and how to minimize the impact without turning the instrument off. You will learn how to monitor safely and effectively.

Register Now

 

Sveta Smirnova

Principal Support Escalation Specialist

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can quickly solve typical issues, and teaching others how to deal with MySQL issues, bugs, and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB, Sun, and Oracle. She is the author of the book "MySQL Troubleshooting" and of the JSON UDF functions for MySQL.

The post Webinar 6/27: MySQL Troubleshooting Best Practices: Monitoring the Production Database Without Killing Performance appeared first on Percona Database Performance Blog.

Jun
20
2018
--

Webinar Thu 6/21: How to Analyze and Tune MySQL Queries for Better Performance


Please join Percona's MySQL Database Administrator, Brad Mickel, as he presents How to Analyze and Tune MySQL Queries for Better Performance on Thursday, June 21st, 2018, at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

 

Query performance is essential in making any application successful. In order to fine-tune your queries, you first need to understand how MySQL executes them and what tools are available to help identify problems.

In this session you will learn:

  1. The common tools for researching problem queries
  2. What an Index is, and why you should use one
  3. Index limitations
  4. When to rewrite the query instead of just adding a new index
Register Now

 

Brad Mickel

MySQL DBA

Bradley began working with MySQL in 2013 as part of his duties in healthcare billing. After three years in healthcare billing, he joined Percona through the bootcamp process. Since the bootcamp, he has served as a remote database administrator on the Atlas team for Percona Managed Services.

The post Webinar Thu 6/21: How to Analyze and Tune MySQL Queries for Better Performance appeared first on Percona Database Performance Blog.

Mar
26
2018
--

New MySQL 8.0 innodb_dedicated_server Variable Optimizes InnoDB from the Get-Go


In this post, we'll look at the MySQL 8.0 innodb_dedicated_server variable.

MySQL 8.0 introduces a new variable called innodb_dedicated_server. When enabled, it auto tunes innodb_buffer_pool_size, innodb_log_file_size and innodb_flush_method at startup (if these variables are not explicitly defined in my.cnf).

The new MySQL 8.0 variable automatically sizes the following variables based on the RAM size of the system:

innodb_buffer_pool_size:

    • <1G: 128M(default value if innodb_dedicated_server is OFF)
    • <=4G: Detected Physical RAM * 0.5
    • >4G: Detected Physical RAM * 0.75

innodb_log_file_size:

    • <1G: 48M(default value if innodb_dedicated_server is OFF)
    • <=4G: 128M
    • <=8G: 512M
    • <=16G: 1024M
    • >16G: 2G

The variable also sets the following:

innodb_flush_method: 

    • Set to O_DIRECT_NO_FSYNC if the setting is available on the system. If not, set it to the default InnoDB flush method

These new default values are very reasonable, and the changes to these three variables show considerable performance improvements from the get-go compared to the old default values. As stated in the worklog for this feature, the current MySQL version (5.7) only uses around 512M of RAM with the default settings. With the new feature, these variables adapt to the amount of RAM allocated to the server, for the convenience of the system/database administrator.
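Once the option is enabled on a MySQL 8.0 instance (it is not dynamic, so it has to be set in my.cnf before startup), a quick check confirms what it picked:

mysql> SELECT @@innodb_dedicated_server,
    ->        @@innodb_buffer_pool_size,
    ->        @@innodb_log_file_size,
    ->        @@innodb_flush_method;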

With that said, you can achieve the best setting for these three variables by tuning it to your workload and hardware.

For the InnoDB buffer pool size (based on this article), consider allocating 80% of physical RAM for starters. You can increase it as much as needed and possible, as long as the system doesn't swap under the production workload.

For InnoDB log file size, it should be able to handle one hour of writes to allow InnoDB to optimize writing the redo log to disk. You can calculate an estimate by following the steps here, which samples one minute worth of writes to the redo log. You could also get a better estimate from hourly log file usage with Percona Monitoring and Management (PMM) graphs.
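As a rough sketch of that one-minute sampling idea (the linked post has the full procedure, and the result is only an extrapolation), you can measure how many bytes are written to the redo log and project it to an hour, assuming 5.7 or later where the counter is available in performance_schema.global_status:

mysql> SELECT variable_value INTO @log_bytes_1
    ->   FROM performance_schema.global_status
    ->  WHERE variable_name = 'Innodb_os_log_written';
mysql> SELECT SLEEP(60);
mysql> SELECT variable_value INTO @log_bytes_2
    ->   FROM performance_schema.global_status
    ->  WHERE variable_name = 'Innodb_os_log_written';
mysql> SELECT ROUND((@log_bytes_2 - @log_bytes_1) * 60 / 1024 / 1024) AS redo_mb_per_hour;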

Finally, for innodb_flush_method, O_DIRECT_NO_FSYNC prevents double buffering between the OS cache and disk, and works well with low-latency IO devices such as RAID subsystem with write cache. On the other hand, in high-latency IO devices, commonly found on deployments where MySQL is stored in SAN drives, having an OS cache with the default flush method fsync is more beneficial.

All in all, the MySQL 8.0 innodb_dedicated_server variable provides a fairly well-tuned InnoDB configuration at startup. But if it’s not enough, you can still tune these variables based on your workload and hardware. While MySQL 8.0 isn’t released yet, you can take a look at this article that helps you tune the current version (MySQL 5.7) right after installation.

The post New MySQL 8.0 innodb_dedicated_server Variable Optimizes InnoDB from the Get-Go appeared first on Percona Database Performance Blog.

Jul
18
2017
--

Backups and Disaster Recovery


In this post, we'll look at strategies for backups and disaster recovery.

Note: I am giving a talk on Backups and Disaster Recovery Best Practices on July 27th.

When discussing disaster recovery, it’s important to take your business’ continuity plan into consideration. Backup and recovery processes are a critical part of any application infrastructure.

A well-tested backup and recovery system can be the difference between a minor outage and the end of your business.

You will want to take three things into consideration when planning your disaster recovery strategy: recovery time objective, recovery point objective and risk mitigation.

Recovery time objective (RTO) is how long it takes to restore your backups. Recovery point objective (RPO) is what point in time you want to recover (in other words, how much data you can afford to lose after recovery). Finally, you need to understand what risks you are trying to mitigate. Risks to your data include (but are not limited to) bad actors, data corruption, user error, host failure and data center failure.

Recommended Backup Strategies

We recommend that you use both physical (Percona XtraBackup, RDS/LVM Snapshots, MySQL Enterprise Backup) and logical backups (mysqldump, mydumper, mysqlpump). Logical backups protect against the loss of single data points, while physical backups protect against total data loss or host failure.

The best practice is running Percona XtraBackup nightly, followed by mysqldump (or in 5.7+, mysqlpump). Percona XtraBackup enables you to quickly restore a server, and mysqldump enables you to quickly restore data points. These address recovery time objectives.

For point-in-time recovery, it is recommended that you download binlogs on a regular basis (once an hour, for example).
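Before scheduling those downloads, it is worth confirming that binary logging is enabled and seeing which files currently exist on the server:

mysql> SHOW VARIABLES LIKE 'log_bin';
mysql> SHOW BINARY LOGS;
mysql> SHOW MASTER STATUS;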

Another option is binlog streaming. You can find more information on binlog streaming in our blog: Backing up binary log files with mysqlbinlog.

There is also a whitepaper that is the basis of my webinar here: MySQL Backup and Recovery Best Practices.

Delayed Slave

One way to save on operational overhead is to create a 24-hour delayed slave. This takes the place of the logical backup (mysqldump) as well as the binlog streaming. You want to ensure that you stop the delayed slave immediately following any issues. This ensures that the data does not get corrupted on the backup as well.

A delayed slave is created in 5.6 and above with:

CHANGE MASTER TO MASTER_DELAY = N;

After a disaster, you would issue:

STOP SLAVE;

Then, in order to get a point-in-time, you can use:

START SLAVE UNTIL MASTER_LOG_FILE = 'log_name', MASTER_LOG_POS = log_pos;

Restore

It is a good idea to test your backups at least once a quarter. Backups do not exist unless you know you can restore them. There are some recent high-profile cases where developers dropped tables or schemas, or data was corrupted in production, and in one case none of the five different backup types was viable for the restore.

The best case scenario is an automated restore test that runs after your backup, and gives you information on how long it takes to restore (RTO) and how much data you can restore (RPO).

For more details on backups and disaster recovery, come to my webinar.

May
16
2017
--

Percona Live Open Source Database Conference 2017 Slides and Videos Available


The slides and videos from the Percona Live Open Source Database Conference 2017 are available for viewing and download. The videos and slides cover the keynotes, breakout sessions and MySQL and MongoDB 101 sessions.

To view slides, go to the Percona Live agenda, select the talk you want slides for from the schedule, and click through to the talk web page. The slides are available below the talk description. There is also a page with all the slides that is searchable by topic, talk title, speaker, company or keywords.

To view videos, go to the Percona Live 2017 video page. The available videos are searchable by topic, talk title, speaker, company or keywords.

There are a few slides and videos outstanding due to unforeseen circumstances. However, we will upload those as they become available.

Some examples of videos and slide decks from the Percona Live conference:

MongoDB 101: Efficient CRUD Queries in MongoDB
Adamo Tonete, Senior Technical Engineer, Percona
Video: https://www.percona.com/live/17/content/efficient-crud-queries-mongodb
Slides: https://www.percona.com/live/17/sessions/efficient-crud-queries-mongodb

MySQL 101: Choosing a MySQL High Availability Solution
Marcos Albe, Principal Technical Services Engineer, Percona
Video: https://www.percona.com/live/17/content/choosing-mysql-high-availability-solution
Slides: https://www.percona.com/live/17/sessions/choosing-mysql-high-availability-solution

Breakout Session: Using the MySQL Document Store
Mike Zinner, Sr. Software Development Director and Alfredo Kojima, Sr. Software Development Manager, Oracle
Video: https://www.percona.com/live/17/content/using-mysql-document-store
Slides: https://www.percona.com/live/17/sessions/using-mysql-document-store

Keynote: Continuent is Back! But What Does Continuent Do Anyway?
Eero Teerikorpi, Founder and CEO and MC Brown, VP Products, Continuent
Video: https://www.percona.com/live/17/content/continuent-back-what-does-continuent-do-anyway
Slides: https://www.percona.com/live/17/sessions/continuent-back-what-does-continuent-do-anyway

Please let us know if you have any issues. Enjoy the videos!

Percona Live Europe 2017
Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

We look forward to seeing you there!

Mar
06
2017
--

MySQL, –i-am-a-dummy!

In this blog post, we'll look at how "operator error" can cause serious problems (like the one we saw last week with AWS), and how to avoid them in MySQL using --i-am-a-dummy.

Recently, AWS had some serious downtime in their East region, which they explained as the consequence of a bad deployment. It seems like most of the Internet was affected in one way or another. Some on Twitter dubbed it “S3 Dependency Awareness Day.”

Since the outage, many companies (especially Amazon!) are reviewing their production access and deployment procedures. It would be a lie if I claimed I’ve never made a mistake in production. In fact, I would be afraid of working with someone who claims to have never made a mistake in a production environment.

Making a mistake or two is how you learn to have a full sense of fear when you start typing:

UPDATE t1 SET c1='x' ...

I think many of us have experienced forehead sweats and hand shaking in these cases – they save us from major mistakes!

The good news is that MySQL can help you with this. All you have to do is admit that you are human, and use the following command (you can also set this in your user directory .my.cnf):

mysql --i-am-a-dummy

Using this command (also known as safe-updates) sets the following session settings when logging into the server:

SET sql_safe_updates=1, sql_select_limit=1000, max_join_size=1000000;

The safe-updates and i-am-a-dummy flags were introduced together in MySQL 3.23.11, and according to some sites from around the time of release, they are "for users that once may have done a DELETE FROM table_name but forgot the WHERE clause."

What this does is ensure you can’t perform an UPDATE or DELETE without a WHERE clause. This is great because it forces you to think through what you are doing. If you still want to update the whole table, you need to do something like WHERE ID > 0. Interestingly, safe-updates also blocks the use of WHERE 1, which means “where true” (or basically everything).
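As a quick illustration of what this looks like in practice (t1 is just a hypothetical table with an indexed id column, and the exact error text varies slightly between versions):

mysql> SET SQL_SAFE_UPDATES = 1;
mysql> UPDATE t1 SET c1 = 'x';
ERROR 1175 (HY000): You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column.
mysql> UPDATE t1 SET c1 = 'x' WHERE id > 0;   -- allowed: the WHERE uses a key column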

The other safety measure you get with this option is that SELECT is automatically limited to 1000 rows, and a JOIN is limited to examining 1 million rows. You can override these limits with extra flags, such as:

--select_limit=500 --max_join_size=10000

I have added this to the .my.cnf on my own servers, and definitely use this with my clients.

Dec
14
2015
--

used_columns: EXPLAIN FORMAT=JSON tells when you should use covered indexes


In the “MySQL Query tuning 101” video, Alexander Rubin provides an excellent example of when to use a covered index. On slide 25, he takes the query

select name from City where CountryCode = 'USA' and District = 'Alaska' and population > 10000

and adds the index cov1(CountryCode, District, population, name) on table City. With Alex's query tuning experience, making the right index decision is simple, but what about us mere mortals? If a query is more complicated, or simply uses more than one table, how do we know what to do? Maintaining another index can slow down INSERT statements, so you need to be very careful when choosing one. Examining the array "used_columns" could help out.

Let’s assume a more complicated version of the query was used in “MySQL Query tuning 101”:

select City.name as city, Country.name as country, group_concat(Language)
from City join CountryLanguage using(CountryCode)
join Country
where City.CountryCode=Country.Code and Continent = 'North America' and District='St George'
group by City.name, Country.Name;

Can we use a covered index here?

A traditional text-based EXPLAIN already shows that it is a pretty good plan:

mysql> explain select City.name as city, Country.name as country, group_concat(Language) from City join CountryLanguage using(CountryCode) join Country where City.CountryCode=Country.Code and Continent = 'North America' and District='St George' group by City.name, Country.Name\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: Country
   partitions: NULL
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 239
     filtered: 14.29
        Extra: Using where; Using temporary; Using filesort
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: City
   partitions: NULL
         type: ref
possible_keys: CountryCode
          key: CountryCode
      key_len: 3
          ref: world.Country.Code
         rows: 18
     filtered: 10.00
        Extra: Using where
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: CountryLanguage
   partitions: NULL
         type: ref
possible_keys: PRIMARY,CountryCode
          key: CountryCode
      key_len: 3
          ref: world.Country.Code
         rows: 4
     filtered: 100.00
        Extra: Using index
3 rows in set, 1 warning (0.00 sec)
Note (Code 1003): /* select#1 */ select `world`.`City`.`Name` AS `city`,`world`.`Country`.`Name` AS `country`,group_concat(`world`.`CountryLanguage`.`Language` separator ',') AS `group_concat(Language)` from `world`.`City` join `world`.`CountryLanguage` join `world`.`Country` where ((`world`.`City`.`District` = 'St George') and (`world`.`Country`.`Continent` = 'North America') and (`world`.`City`.`CountryCode` = `world`.`Country`.`Code`) and (`world`.`CountryLanguage`.`CountryCode` = `world`.`Country`.`Code`)) group by `world`.`City`.`Name`,`world`.`Country`.`Name`

Can we make it better? Since our topic is covered indexes, let's consider this possibility. EXPLAIN FORMAT=JSON will tell us which columns a covered index should include:

mysql> explain format=json select City.name as city, Country.name as country, group_concat(Language) from City join CountryLanguage using(CountryCode) join Country where City.CountryCode=Country.Code and Continent = 'North America' and District='St George' group by City.name, Country.Name\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "927.92"
    },
<I skipped the output for the grouping operation and other tables here>
        {
          "table": {
            "table_name": "City",
            "access_type": "ref",
            "possible_keys": [
              "CountryCode"
            ],
            "key": "CountryCode",
            "used_key_parts": [
              "CountryCode"
            ],
            "key_length": "3",
            "ref": [
              "world.Country.Code"
            ],
            "rows_examined_per_scan": 18,
            "rows_produced_per_join": 63,
            "filtered": "10.00",
            "cost_info": {
              "read_cost": "630.74",
              "eval_cost": "12.61",
              "prefix_cost": "810.68",
              "data_read_per_join": "4K"
            },
            "used_columns": [
              "ID",
              "Name",
              "CountryCode",
              "District"
            ],
            "attached_condition": "(`world`.`City`.`District` = 'St George')"
          }
        },

The answer is in the "used_columns" array. It lists the ID (primary key) and all the columns used in the query:

"used_columns": [
      "ID",
      "Name",
      "CountryCode",
      "District"
    ],

Now we can try adding a covered index:

mysql> alter table City add index cov(CountryCode, District, Name);
Query OK, 0 rows affected (2.74 sec)
Records: 0  Duplicates: 0  Warnings: 0

EXPLAIN confirms that index access ("using_index": true) is used:

mysql> explain format=json select City.name as city, Country.name as country, group_concat(Language) from City join CountryLanguage using(CountryCode) join Country where City.CountryCode=Country.Code and Continent = 'North America' and District='St George' group by City.name, Country.Name\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "296.28"
    },
   
<I skipped the output for the grouping operation and other tables here>
        {
          "table": {
            "table_name": "City",
            "access_type": "ref",
            "possible_keys": [
              "CountryCode",
              "cov"
            ],
            "key": "cov",
            "used_key_parts": [
              "CountryCode",
              "District"
            ],
            "key_length": "23",
            "ref": [
              "world.Country.Code",
              "const"
            ],
            "rows_examined_per_scan": 2,
            "rows_produced_per_join": 100,
            "filtered": "100.00",
            "using_index": true,
            "cost_info": {
              "read_cost": "34.65",
              "eval_cost": "20.19",
              "prefix_cost": "108.64",
              "data_read_per_join": "7K"
            },
            "used_columns": [
              "ID",
              "Name",
              "CountryCode",
              "District"
            ]
          }
        },

It also provides metrics such as:

  • query_cost – 296.28 for the indexed table against 927.92 (smaller is better)
  • rows_examined_per_scan – 2 versus 18 (smaller is better)
  • filtered – 100 versus 10 (bigger is better)
  • cost_info – read_cost and prefix_cost for the indexed table are smaller than when not indexed, which is better. However, eval_cost and data_read_per_join are bigger. But since we read nine times fewer rows overall, the cost is still better.

Conclusion: if the number of columns in the used_columns array is reasonably small, you can use it as a guide for creating a covered index.

 

The post used_columns: EXPLAIN FORMAT=JSON tells when you should use covered indexes appeared first on MySQL Performance Blog.

Oct
16
2015
--

MySQL 101 Videos – A Must Have Starter Kit for a MySQL DBA

The MySQL 101 track was introduced at the 2015 Percona Live MySQL Conference & Expo in Santa Clara, California. This track was tailored specifically for software developers and other specialists with technical backgrounds wanting to acquire the basic skills needed for a MySQL DBA role.

The Percona Live Conference Committee saw an increase in demand for people with MySQL DBA skills who could help with the day-to-day management of MySQL databases. In response to this demand, a MySQL crash course for developers, systems administrators and other technical resources was added to the official Percona Live conference program. This two-day MySQL 101 track, led by Percona MySQL technical experts, took attendees through the fundamentals of MySQL DBA tools and techniques.

The goal of this educational course was to send attendees back ready to quickly and correctly take care of the day-to-day and long-term management of their MySQL environment.

“You send us developers and admins, and we’ll send you back MySQL DBAs!”

Completion of this course enabled the attendees to combine their daily non-DBA work with practical performance optimization techniques to resolve problems in environments not managed by a dedicated MySQL DBA.

Due to the overall success of the MySQL 101 track and the continued demand for this type of education, Percona has packaged these courses together in the official “Percona Live MySQL 101 Series,” now available for purchase.

You can buy all 24 sessions together at a bundled rate or purchase each video individually based on interest and needs. Don’t miss your opportunity to learn from some of the top MySQL experts.
Complete set of 24 Videos = $599.99 USD
Individual Videos = $29.99 USD

The post MySQL 101 Videos – A Must Have Starter Kit for a MySQL DBA appeared first on MySQL Performance Blog.
