May
11
2017
--

MyRocks and LOCK IN SHARE MODE

In this blog post, we’ll look at MyRocks and the `LOCK IN SHARE MODE` clause.

Those who attended the March 30th webinar “MyRocks Troubleshooting” might remember our discussion with Yoshinori on `LOCK IN SHARE MODE`.

I did more tests, and I can confirm that his words are true: `LOCK IN SHARE MODE` works in MyRocks.

This quick example demonstrates this. The initial setup:

CREATE TABLE t (
id int(11) NOT NULL,
f varchar(100) DEFAULT NULL,
PRIMARY KEY (id)
) ENGINE=ROCKSDB;
insert into t values(12345, 'value1'), (54321, 'value2');

In session 1:

session 1> begin;
Query OK, 0 rows affected (0.00 sec)
session 1> select * from t where id=12345 lock in share mode;
+-------+--------+
| id    | f      |
+-------+--------+
| 12345 | value1 |
+-------+--------+
1 row in set (0.01 sec)

In session 2:

session 2> begin;
Query OK, 0 rows affected (0.00 sec)
session 2> update t set f='value3' where id=12345;
ERROR HY000: Lock wait timeout exceeded; try restarting transaction

However, in the webinar I wanted to remind everyone about the differences between `LOCK IN SHARE MODE` and `FOR UPDATE`. To do so, I added the former to my “session 2” test for the webinar. Once I did, it ignored the lock set in “session 1”: I could update a row and commit:

session 2> select * from t where id=12345 lock in share mode;
+-------+--------+
| id    | f      |
+-------+--------+
| 12345 | value1 |
+-------+--------+
1 row in set (0.00 sec)
session 2> update t set f='value3' where id=12345;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
session 2> commit;
Query OK, 0 rows affected (0.02 sec)

I reported this behavior here, and also in the Percona Jira bug database: MYR-107. In the Facebook MySQL branch, this bug is already fixed.

This test clearly demonstrates that it is fixed in the Facebook branch. In “session 1”:

session1> CREATE TABLE `t` (
    -> `id` int(11) NOT NULL,
    -> `f` varchar(100) DEFAULT NULL,
    -> PRIMARY KEY (`id`)
    -> ) ENGINE=ROCKSDB;
Query OK, 0 rows affected (0.00 sec)
session1> insert into t values(12345, 'value1'), (54321, 'value2');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
session1> begin;
Query OK, 0 rows affected (0.00 sec)
session1>  select * from t where id=12345 lock in share mode;
+-------+--------+
| id    | f      |
+-------+--------+
| 12345 | value1 |
+-------+--------+
1 row in set (0.00 sec)

And now in another session:

session2> begin;
Query OK, 0 rows affected (0.00 sec)
session2> select * from t where id=12345 lock in share mode;
+-------+--------+
| id    | f      |
+-------+--------+
| 12345 | value1 |
+-------+--------+
1 row in set (0.00 sec)
session2> update t set f='value3' where id=12345;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction: Timeout on index: test.t.PRIMARY

If you want to test the fix with the Facebook MySQL build, you need to update the submodules to download the patch:

git submodule update

Apr
13
2017
--

TokuDB Troubleshooting: Q & A

In this blog, we will provide answers to the Q & A for the TokuDB Troubleshooting webinar.

First, we want to thank everybody for attending the March 22, 2017 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that we were unable to answer during the webinar:

Q: Is it possible to load specific tables or data from the backup?

A: Do you mean the backup created by TokuBackup? No, this is not possible. You have to restore the full backup to a temporary instance, then perform a logical dump/reload of the specific tables you want to restore.

Q: Since it is not recoverable when corruption happens, we have a risk of losing data. Is incremental backup the only option?

A: TokuBackup currently does not support incremental backup options. You can only create a kind of incremental backup if you copy and store the binary logs on a separate server, and then apply them on top of a previous full backup.
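The binary-log approach can be sketched as follows; the backup location, log file names, and start position below are all hypothetical placeholders:

```shell
# 1. Restore the latest full TokuBackup to a temporary instance.
# 2. Replay the binary logs recorded after that backup on top of it
#    (file names and the start position here are placeholders):
mysqlbinlog --start-position=107 binlog.000042 binlog.000043 | mysql -u root -p
```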

May
20
2016
--

Introduction to Troubleshooting Performance – Troubleshooting Slow Queries webinar: Q & A

In this blog, I will provide answers to the Q & A for the Troubleshooting Slow Queries webinar.

First, I want to thank you for attending the April 28 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: I’ve heard that it is a bad idea to use `SELECT *`; what do you recommend?

A: When I used `SELECT *` in my slides, I wanted to underline the idea that sometimes you need to select all columns from the table. There is nothing bad about it if you need them all. `SELECT *` is bad when you need only a few columns from the table: in that case, you retrieve more data than needed, which affects performance. Another issue `SELECT *` can cause: if you hard-code the statement into your application and then change the table definition, the application could start retrieving columns in the wrong order and output the wrong data (e.g., email instead of billing address). Or, even worse, it will try to access a non-existent index in the result set array. The best practice is to explicitly enumerate all columns that your application needs.

Q: I heard that using `index_field` length will affect the indexing principle during query execution (e.g., one field is varchar and its length is not fixed; some values have short text, some have long text). If we use this field for indexing, what happens?

A: I assume you are asking about the ability to create an index with a length smaller than the column length? It works as follows:

Assume you have a `TEXT` field which contains these user questions:

  1. I’ve heard that is a bad idea to use select * what do you recommend?
  2. I heard that using index_field length will affect the indexing principle during query execution (e.g., one field is varchar and its length is not fixed, some values have short text, some values have long text. at this time). If we use this field as indexing, what happens?
  3. ….

Since this is a `TEXT` field, you cannot create an index on it without specifying its length, so you want the index to be as short as possible while still uniquely identifying questions. If you create an index with length 10 it will contain:

  1. I’ve heard
  2. I heard th

You will index only those parts of the questions that are not very distinct from each other, and that do not contain useful information about what each question is. Alternatively, you can create an index of length 255:

  1. I’ve heard that is a bad idea to use select * what do you recommend?
  2. I heard that using index_field length will affect the indexing principle during query execution (e.g., one field is varchar and its length is not fixed, some values have short text, some values have long text. at this time). If we use this field as index

In this case, the index includes the whole first question and almost all of the second question. This makes the index too large: it requires more disk space (which causes more IO), and it probably stores more of the second question than we need.

If we make an index of length 75, we will have:

  1. I’ve heard that is a bad idea to use select * what do you recommend?
  2. I heard that using index_field length will affect the indexing principle du

This is more than enough for the first question and gives a good idea of what is in the second question. It also potentially has enough unique entries to make its cardinality look more like the cardinality of the real data distribution.

To conclude: choosing the correct index length requires practice and analysis of your actual data. Try to make indexes as short as possible, but long enough that the number of unique entries in the index is similar to the number of unique entries in the table.
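Before creating the index, you can estimate a good prefix length directly from your data. A sketch in SQL, with a hypothetical `questions` table and `question` column:

```sql
-- Count distinct entries at each candidate prefix length
SELECT COUNT(DISTINCT LEFT(question, 10)) AS len_10,
       COUNT(DISTINCT LEFT(question, 75)) AS len_75,
       COUNT(DISTINCT question)           AS full_length
FROM questions;

-- If len_75 is close to full_length, a 75-character prefix is a good choice
ALTER TABLE questions ADD INDEX idx_question (question(75));
```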

Q: Which view can we query to see stats?

A: Do you mean index statistics? `SHOW INDEX FROM table_name` will do it.

Q: We have an InnoDB table with 47 fields (mostly text); some are ft-indexed. I tried to do an alter table, and it ran for 24 hours. What is the best way to run an alter table to add one extra field? The table has 1.9 M rows and 47 columns with many indexes.

A: Adding a column requires a table copy. Therefore, the speed of this operation depends on the table size and the speed of your disk. If you are using version 5.6 or later, adding a column does not block parallel queries (and therefore is not a big deal). If you are using an older version, you can always use the pt-online-schema-change utility from Percona Toolkit; however, it will run even more slowly than a regular `ALTER TABLE`. Unfortunately, you cannot speed up the execution of `ALTER TABLE` much. The only thing you can do is use a faster disk (with options tuned to exploit the speed of the disk).

However, if you do not want this increased IO to affect the production server, you can alter the table on a separate instance, copy the tablespace back to production, and then apply all changes to the original table from the binary logs. The steps will be something like:

  1. Ensure you use the option innodb_file_per_table and the big table has an individual tablespace
  2. Ensure that the binary log is enabled
  3. Start a new server (you can also use an existing standby slave)
  4. Disable writes to the table
  5. Record the binary log position
  6. Copy the tablespace to the new server as described here
  7. Enable writes on the production server
  8. Run ALTER TABLE on the new server you created in step 3 (it will still take 24 hours)
  9. Stop writes to the table on the production server
  10. Copy back the tablespace, altered in step 8
  11. Apply all writes to this table that appear in the binary logs after the position recorded in step 5
  12. Enable writes to the table

This scenario will take even more time overall, but will have minimal impact on the production server.
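The tablespace copies in steps 6 and 10 can use InnoDB’s transportable tablespaces (MySQL 5.6 and newer). A minimal sketch, with a hypothetical table `t`:

```sql
-- On the source server: quiesce the table and copy its files out
FLUSH TABLES t FOR EXPORT;
-- (now copy t.ibd and t.cfg from the datadir to the target server)
UNLOCK TABLES;

-- On the target server: swap the copied tablespace in
ALTER TABLE t DISCARD TABLESPACE;
-- (place the copied t.ibd and t.cfg into the target datadir, then)
ALTER TABLE t IMPORT TABLESPACE;
```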

Q: If there is a compound index like index1(emp_id,date), will the following query be able to use index? “select * from table1 where emp_id = 10”

A: Yes. At least it should.
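You can confirm it with EXPLAIN; a sketch, assuming the index definition from the question:

```sql
ALTER TABLE table1 ADD INDEX index1 (emp_id, `date`);

-- The condition covers the leftmost column of the compound index,
-- so EXPLAIN is expected to report key: index1
EXPLAIN SELECT * FROM table1 WHERE emp_id = 10;
```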

Q: Are `filesort` and `temporary` in the extended info for EXPLAIN not good?

A: Regarding `filesort`: it depends. For example, you will always see `filesort` for tables that perform an `ORDER BY` and cannot use an index for the `ORDER BY`. This is not always bad. For example, in this query:

mysql> explain select emp_no, first_name from employees where emp_no <20000 order by first_name\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: employees
   partitions: NULL
         type: range
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 18722
     filtered: 100.00
        Extra: Using where; Using filesort
1 row in set, 1 warning (0,01 sec)

the primary key was used to resolve rows, and `filesort` was necessary and not avoidable. You can read about the different `filesort` algorithms here.

Regarding `Using temporary`: this means that a temporary table will be created during query execution. This can be bad, especially if the temporary table is large and cannot fit into memory: in that case, it is written to disk and slows down operations. But, again, sometimes creating a temporary table is not avoidable, for example, if you have both `GROUP BY` and `ORDER BY` clauses that list columns differently, as stated in the user manual.
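As a sketch with the employees sample database used elsewhere in this post: grouping by one expression while ordering by another typically forces a temporary table:

```sql
-- GROUP BY and ORDER BY list different expressions, so the Extra column
-- of EXPLAIN is expected to show "Using temporary; Using filesort"
EXPLAIN SELECT dept_no, COUNT(*) AS cnt
FROM dept_emp
GROUP BY dept_no
ORDER BY cnt DESC;
```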

Q: Is a longer `key_len` a good thing for query execution?

A: The `key_len` field is not `NULL` for any query that uses an index; it simply shows the length of the key part used. It is neither good nor bad: it is just information. You can use it, for example, to identify which part of a combined index is used to resolve the query.
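For example, in the employees sample database the `dept_emp` table has a primary key on `(emp_no, dept_no)`. A sketch of reading `key_len` from EXPLAIN:

```sql
-- emp_no is a 4-byte INT; if EXPLAIN reports key: PRIMARY and key_len: 4,
-- only the emp_no part of the combined primary key was used
EXPLAIN SELECT * FROM dept_emp WHERE emp_no = 10001;
```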

Q: Does an ALTER query go through the optimizer?

A: No. You can check this either by enabling the optimizer trace, running `ALTER`, and finding that the trace is empty; or by enabling the debug option and searching the resulting trace for `optimize`.

Q: A query involves four columns that are all individually covered by an index. The optimizer didn’t merge the indexes because of cost, and didn’t even choose the composite index I created.

A: This depends on the table definition and query you used. I cannot provide a more detailed answer based only on this information.

Q cont.: Finally, only certain composite indexes were suitable, the column order in the complex index mattered a lot. Why couldn’t the optimizer merge the four individual single column indexes, and why did the order of the columns in the composite index matter?

A: Regarding why the optimizer could not merge the four indexes, I would need to see how the table is defined and what data is in those indexed columns. Regarding why the order of columns in the composite index matters: it depends on the query. The optimizer can use an index on `(col1, col2)` for both of the conditions `col1=X AND col2=Y` and `col2=Y AND col1=X`, but for the case when you use `OR`, the order is important. For example, for the condition `col1=X OR col2=Y`, the part `col1=X` is always executed and the part `col2=Y` is executed only when `col1=X` is false. The same logic applies to queries like `SELECT col1 WHERE col2=Y ORDER BY col3`. See the user manual for details.

Q: When I try to obtain the optimizer trace on the console, the result is cut off, even if I redirect the output to a file. How can I overcome that?

A: Which version of MySQL Server do you use? The `TRACE` column is defined as `longtext NOT NULL` and should not cause such issues. If it does with a recent version, report a bug at http://bugs.mysql.com/.

Q: Are there any free graphical visualizers for either EXPLAIN or the optimizer TRACE output?

A: There is a graphical visualizer for `EXPLAIN` in MySQL Workbench. But it works with online data only: you cannot run it on `EXPLAIN` output saved to a file. I don’t know of any visualizer for the optimizer `TRACE` output. However, since it is `JSON`, you can simply save it to a file and open it in a web browser. That gives a better view than opening it in a plain text editor.

Q: When do you use the `FORCE INDEX` hint instead of `USE INDEX`?

A: According to the user manual, the “`USE INDEX (index_list)` hint tells MySQL to use only one of the named indexes to find rows in the table” and the “`FORCE INDEX` hint acts like `USE INDEX (index_list)`, with the addition that a table scan is assumed to be very expensive . . . a table scan is used only if there is no way to use one of the named indexes to find rows in the table.” This means that with `USE INDEX` you are hinting that the optimizer should prefer a particular index over others, but not enforcing index usage if the optimizer prefers a table scan, while `FORCE INDEX` requires using the index. I myself use only the `FORCE` and `IGNORE` index hints.
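A short sketch of the three hints side by side; the table and index names are hypothetical:

```sql
-- Hint: prefer idx_last_name, but a table scan is still allowed
-- if the optimizer considers it cheaper
SELECT * FROM employees USE INDEX (idx_last_name) WHERE last_name LIKE 'S%';

-- Require idx_last_name: a table scan happens only if the index
-- cannot be used at all
SELECT * FROM employees FORCE INDEX (idx_last_name) WHERE last_name = 'Smith';

-- Never consider idx_last_name for this query
SELECT * FROM employees IGNORE INDEX (idx_last_name) WHERE last_name = 'Smith';
```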

Q: Very informative session. I missed the beginning part. Are you going to distribute the recorded session later?

A: Yes. As usual, the slides and recording are available here.

Mar
10
2016
--

Introduction to MySQL Troubleshooting Webinar: Q & A for How to Create a Test Setup

Thank you for attending my March 2 MySQL troubleshooting webinar “Introduction to Troubleshooting: How to Create a Test Setup,” which is the first in the “MySQL Troubleshooting Webinar” series. The recording and slides for the webinar are available here. Here is the list of your questions that I wasn’t able to answer during the webinar, with responses.

Q: “Copy option file from production to test server” — which option file are we talking about?

A: The MySQL configuration file, usually `/etc/my.cnf`. See also this manual.

Q: For future seminars, what environment do we need to set up?

A: There is no special environment, but I will use the standard employees sample database for many examples. If you want to run the examples, you can install it in advance. It is also better to have MySQL version 5.7, because I will discuss features that don’t exist in older versions.

Q: Hello, may I have a copy of the presentation as a future reference?

A: Yes, you can find it here

Q: What is the difference between mysqldump and mysqldbcopy and mysql sandbox ? Which one is better for this?

A: MySQL Sandbox is a set of utilities that helps you quickly install one or more MySQL servers on the same host, either standalone or in groups. It extracts the *.tar.gz archive, creates default directories, initializes the system database, and creates default configuration files and aliases for the utilities, but it does not dump or load data for you. mysqldump is a backup program that performs a logical backup of your databases. mysqldbcopy copies database objects between two servers, or between two databases on the same server, without creating an intermediary dump file. These are completely different tools for different purposes: MySQL Sandbox to quickly install from a *.tar.gz package into a custom directory, mysqldump to make a dump of your data, and mysqldbcopy to quickly clone a database.

Q: Will MySQL ever behave like a Java program, where garbage collection temporarily hangs execution?

A: This is not normal behavior, but can happen in a high-load environment. For example, when InnoDB starts purging the history list aggressively and a bunch of new queries comes in. You can read about issues with InnoDB flushing and purge in these posts: 1, 2.

The InnoDB development team works on making these stalls smaller from version to version, though.

Q: What is the effect of disk speed? Why is MySQL software affected by disk speed ? I thought that the faster a disk is, the faster the database is…?

A: I believe you are asking about the innodb_io_capacity option, and why it is not set dynamically inside InnoDB. This option sets an upper limit on IO activity for background tasks that happen inside InnoDB (when it flushes data to disk, for example). It is basically “how many pages per second to flush?” Faster disks can perform more operations per second than slower disks, so bigger values suit faster disks and smaller values suit slower ones. Read this blog post; it describes in greater detail how InnoDB flushing works.

Regarding why this number is not calculated by InnoDB itself, you should, of course, ask the InnoDB developers, because I can only speculate. But I think this is because the desired disk speed is not something “set in stone” by the manufacturer; it can depend on many factors. For example, if you run MySQL on a non-dedicated server, you probably want to leave some resources for other software. Another case is when you have to temporarily save backups on the same filesystem where the data directory is located.

Q: For innodb_io_capacity, what specifically does that limit? IOPS that can be initiated?

A: It specifies how many pages per second InnoDB can flush or merge from the change buffer in the background. Actually, the documentation for this option is very good, and I recommend you use it.
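A minimal configuration sketch; the values here are purely illustrative and must be tuned to your storage:

```
[mysqld]
# Upper limit, in pages per second, for background flushing and
# change-buffer merging
innodb_io_capacity     = 2000
# Ceiling used when flushing falls behind (MySQL 5.6 and newer)
innodb_io_capacity_max = 4000
```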

Q: You are focusing on InnoDB. But what’s about the others? MyISAM for example? Is it possible that my applications would work faster on MyISAM instead of InnoDB?

A: In MySQL 5.7, only in one case: point-select workloads, meaning single selects that read a row by its primary key. You will find benchmarks comparing MyISAM vs. InnoDB in this post by Dimitri Kravtchuk; follow the links at the beginning of the post to find benchmarks for earlier versions. I will address MyISAM in future webinars. I am also thinking about making a webinar about storage engine specifics, and will probably do a separate webinar for MyISAM.

Q: mysqldbcopy? Is it available with Linux Ubuntu version?

A: Yes. MySQL Utilities is a set of Python programs, so they work everywhere Python is installed. There is a separate package for Ubuntu.

Q: What do you use to keep data synced between the master/slaves/dev environments?

A: If you want to keep data synced, you simply need to set up a master-slave environment. If you want to sync tables that already exist, but one of them has slightly outdated data, you can use pt-table-sync.

Q: For the binary backup, do we have to do both of those things, or just the cp?

A: Any one of the commands from the slide works: either cp or XtraBackup.

Q: How can we identify slow queries?

A: I will run a separate webinar called “Introduction to Troubleshooting Performance” where I will describe this in detail, but usually you can find such queries in the slow query log.

Q: Do you have more specific recommendations for all-flash storage and/or Amazon?

A: Regarding test setup: no. But if you run MySQL server on Amazon and want to test it on a local machine or just a cheaper instance, pay attention to slides about how to avoid limitations.

Q: No offense but so far too much time wasted on slides that convey things we already know…let’s get right to the main reason for the webinar!

A: Well, this was an introductory webinar. I am planning about 16 in total. The next two will also be introductory: “Introduction to MySQL Troubleshooting: Basic Techniques” and “Intro to Troubleshooting Performance: What Affects Query Execution” (exact date TBD). If you are an experienced user, you can re-join starting from the fourth webinar, where I will cover specifics like storage engines, replication and options.

Q: Is there a way to simulate the production load to the test server for better troubleshooting with MySQL?  I got that we restore the data. However I am keen to simulate the load with all the queries that were running on production to test.

A: This is a good question, and I believe many people who do QA or troubleshooting want such a tool. The tool exists: it is QueryPlayback (formerly known as Percona Playback), designed by Percona. But the issue with all such tools is that they aren’t perfect. For example, QueryPlayback is hampered by these pretty serious bugs: 1, 2. Also, replaying queries that modify data doesn’t always work if you did not take a backup prior to the query run in production. This means you still need to adjust your tests manually. You can also use scriptable benchmark tools, such as SysBench, to emulate a load similar to the one you have in production.

Q: Does innodb_thread_concurrency depend on the CPU cores (like quad core, hex core etc.) or just number of CPUs on the host?

A: Yes, it depends on the number of CPU cores.

Q: After converting a table from MyISAM to InnoDB, I’m finding the same queries to be significantly slower (all else same). Is there a common/obvious reason for this to happen?

A: The main differences between MyISAM and InnoDB are:

  • InnoDB is transactional and MyISAM is not. This means that every query you run on InnoDB is part of a transaction. If it is an update query, or if you use a version earlier than 5.6, InnoDB has to make a snapshot of all rows the query accesses, to be able to restore them if the query fails.
  • InnoDB and MyISAM use different locking models.
  • InnoDB does not support options such as Concurrent Inserts, Low Priority Updates or Delayed Inserts.
  • InnoDB has its own performance-tuning options, which are set quite low by default.

Q: How do I identify how many cores MySQL is currently using?

A: You should use OS-level tools, such as

`top -H -p PID_OF_MYSQL`

Q: What is the performance impact of reducing the buffer pool when the chunk size is the default (128MB) in MySQL 5.7?

A: innodb_buffer_pool_chunk_size controls the size of the chunks in which a resizing operation happens. Its performance effect is similar to copying many smaller files versus one bigger file in your environment: copying a bigger file requires more memory at once, while copying many smaller files requires more operations.
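As a sketch: in MySQL 5.7 the buffer pool is resized in multiples of `innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances`, and a requested size is rounded up to the nearest multiple:

```sql
SHOW VARIABLES LIKE 'innodb_buffer_pool_chunk_size';

-- Request 4GB; the server rounds the value up to a chunk multiple if needed
SET GLOBAL innodb_buffer_pool_size = 4294967296;

-- Track the progress of the online resize operation
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';
```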

Q: Why do queries, especially desc table_name type of queries, sometimes get stuck at open tables thread state? Even increasing the table cache and open files will not release hung queries until we restart MySQL.

A: This depends on many factors: whether the OS allows mysqld enough file descriptors to handle that many tables open in parallel (check the output of `ulimit -n` for the user you run mysqld as), whether other resources are overtaxed, or a bug in MySQL.
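The OS-side check can be sketched like this; run it as the user mysqld runs as:

```shell
# Print the per-process open-file-descriptor limit for the current user.
# mysqld's open_files_limit cannot usefully exceed what the OS allows here.
ulimit -n
```

Compare the printed number with the server’s `open_files_limit` and `table_open_cache` settings.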

Q: Is it possible to extract one table from the binary backup (taken with xtrabackup or something similar) ?

A: Yes, this is possible for tables created with the option `innodb-file-per-table=1`. Instructions for XtraBackup are here. XtraBackup supports extracting a single table from any type of backup. MySQL Enterprise Backup, instead, requires the option `--use-tts` (with MySQL server 5.6 or newer) to perform a partial restore. Instructions are here.

Q: Is there any direct command to measure disk speed, in RPM or some other way, on Linux?

A: I usually use ioping.

Q: So it’s OK to use mysqldump when the dataset is a few hundred GB in size?

A: Yes.

 
