May
07
2018
--

Webinar Wednesday, May 9, 2018: MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM)

Please join Percona’s CEO, Peter Zaitsev, as he presents MySQL Troubleshooting and Performance Optimization with PMM on Wednesday, May 9, 2018, at 11:00 AM PDT (UTC-7) / 2:00 PM EDT (UTC-4).

Optimizing MySQL performance and troubleshooting MySQL problems are two of the most critical and challenging tasks for MySQL DBAs. The databases powering your applications must handle heavy traffic loads while remaining responsive and stable so that you can deliver an excellent user experience. Further, DBAs’ bosses expect solutions that are cost-efficient.

In this webinar, Peter discusses how you can optimize and troubleshoot MySQL performance, and demonstrates how Percona Monitoring and Management (PMM) enables you to solve these challenges using free and open source software. We will look at specific, common MySQL problems and review the essential components in PMM that allow you to diagnose and resolve them.

Register for the webinar now.

Peter Zaitsev, CEO

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. The Inc. 5000 recognized Percona in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High-Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High-Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. He was also tapped as a contributor to Fortune and DZone, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.


Mar
14
2018
--

Basic Internal Troubleshooting Tools for MySQL Server Webinar: Q & A

In this blog, I will provide answers to the Q & A for the Basic Internal Troubleshooting Tools for MySQL Server webinar.

First, I want to thank everybody for attending my February 15, 2018, webinar on troubleshooting tools for MySQL. The recording and slides for the webinar are available here. Below is the list of your questions that I was unable to answer fully during the webinar.

Q: How do we prevent the schema prefix from appearing in SHOW CREATE VIEW? This is causing issues with restores on another server with a different DB. See the issue and a reproducible test case here: https://gist.github.com/anonymous/30cb138b598fec46be762789397796b6

A: I shortened the example in order to fit it in this blog:

mysql> create table t1(f1 int);
Query OK, 0 rows affected (3.47 sec)
mysql> create view v1 as select * from t1;
Query OK, 0 rows affected (0.21 sec)
mysql> show create view v1\G
*************************** 1. row ***************************
                View: v1
         Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `v1` AS select `t1`.`f1` AS `f1` from `t1`
character_set_client: utf8
collation_connection: utf8_general_ci
1 row in set (0.00 sec)
mysql> select * from information_schema.views where table_schema='test'\G
*************************** 1. row ***************************
       TABLE_CATALOG: def
        TABLE_SCHEMA: test
          TABLE_NAME: v1
     VIEW_DEFINITION: select `test`.`t1`.`f1` AS `f1` from `test`.`t1`
        CHECK_OPTION: NONE
        IS_UPDATABLE: YES
             DEFINER: root@localhost
       SECURITY_TYPE: DEFINER
CHARACTER_SET_CLIENT: utf8
COLLATION_CONNECTION: utf8_general_ci
1 row in set (0.00 sec)

The issue you experienced happened because even if you create a view as SELECT foo FROM table1;, it is stored as SELECT foo FROM your_schema.table1;. You can see this if you query the view's .frm file:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ cat var/mysqld.1/data/test/v1.frm
TYPE=VIEW
query=select `test`.`t1`.`f1` AS `f1` from `test`.`t1`
md5=5840f59d1287385629fcb9b948e53d96
updatable=1
algorithm=0
definer_user=root
definer_host=localhost
suid=2
with_check_option=0
timestamp=2018-02-24 10:27:45
create-version=1
source=select * from t1
client_cs_name=utf8
connection_cl_name=utf8_general_ci
view_body_utf8=select `test`.`t1`.`f1` AS `f1` from `test`.`t1`

You cannot prevent the schema prefix from being stored. If you restore the view on a different server with a different database name, you should edit the view definition manually. If you already restored a view that points to a non-existent schema, just recreate it. A VIEW is metadata only and does not hold any data, so this operation is non-blocking and runs momentarily.
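For example, a minimal sketch of that fix at restore time (the schema and object names here are hypothetical):

mysql> drop view if exists v1;
Query OK, 0 rows affected (0.01 sec)
mysql> create algorithm=undefined definer=`root`@`localhost` sql security definer view v1 as select `f1` from `new_schema`.`t1`;
Query OK, 0 rows affected (0.02 sec)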

Q: What is thread/sql/compress_gtid_table in performance_schema.threads?

A: thread/sql/compress_gtid_table is the name of the instrument. You can read this and other instrument names as follows:

  • thread/ is a group of instruments. In this case, it is the instruments that are visible in the THREADS table.
  • thread/sql/ is the group of instruments that are part of the server kernel code. If you are not familiar with the MySQL source tree, download the source code tarball and check its content. The main components are:
    • sql – server kernel
    • storage – where the storage engine code is located (storage/innobase is InnoDB code, storage/myisam is MyISAM code, and so on)
    • vio – input-output functions
    • mysys – code shared between all parts of the server
    • client – client library and utilities
    • strings – functions for working with strings
    This is not a full list; for more information, consult the MySQL Internals Manual.
  • thread/sql/compress_gtid_table is the name of the particular instrument.

Unfortunately, there is no link to the source code for instrumented threads in the THREADS table, but we can easily find them in the sql directory. The function compress_gtid_table is defined in sql/rpl_gtid_persist.cc, and we can check the comments to find out what it is doing:

/**
  The main function of the compression thread.
  - compress the gtid_executed table when get a compression signal.

  @param  p_thd  Thread requesting to compress the table

  @return
    @retval 0    OK. always, the compression thread will swallow any error
                 for going to wait for next compression signal until
                 it is terminated.
*/
extern "C" {
static void *compress_gtid_table(void *p_thd)
{
You can also find the description of mysql.gtid_executed compression in the User Reference Manual.

You can follow the same actions to find out what other MySQL threads are doing.
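For example, a quick way to locate such a thread and see what it is doing right now (the output will vary by version and load):

mysql> select THREAD_ID, NAME, TYPE, PROCESSLIST_STATE from performance_schema.threads where NAME like '%compress_gtid%'\G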

Q: How does a novice MySQL user learn the core basics? The documentation is very vast and surpasses my understanding right now. Are there any good intro books you can recommend for a system admin?

A: I learned MySQL a long time ago, and the book I can recommend was written for version 5.0: “MySQL 5.0 Certification Study Guide” by Paul DuBois, Stefan Hinz and Carsten Pedersen. The book is in two parts: one is devoted to SQL developers and explains how to run and tune queries; the second part is for DBAs and describes how to tune the MySQL server. I asked my colleagues to suggest more modern books for you, and this one is still on the list for many. It is in any case an awesome book for beginners; just note that MySQL has changed a lot since 5.0, and you will need to deepen your knowledge after you finish reading it.

Another book that was recommended is “MySQL” by Paul DuBois. It is written for beginners and has plenty of content. Paul DuBois has been working on (and continues to work on) the official MySQL documentation for many years, and knows MySQL in great detail.

Another book is “Murach’s MySQL” by Joel Murach, which is used as a course book in many colleges for “Introduction into Databases” type classes.

For System Administrators, you can read “Systems Performance: Enterprise and the Cloud” by Brendan Gregg. This book talks about how to tune operating systems for performance. This is one of the consistent tasks we have to do when administering MySQL. I also recommend that you study Brendan Gregg’s website, which is a great source of information for everyone who is interested in operating system performance tuning.

After you finish the books for novices, you can check out “High Performance MySQL, 3rd Edition” by Peter Zaitsev, Vadim Tkachenko, Baron Schwartz and “MySQL Troubleshooting” by Sveta Smirnova (yours truly =) ). These two books require at least basic MySQL knowledge, however.

Q: Does database migration go the same way? Do these tools work for migration as well?

A: The tools I discussed in this webinar are available for any version of MySQL/Percona/MariaDB server, and you may use them for migration. For example, it is always useful to compare configuration (SHOW GLOBAL VARIABLES) on both the “old” and “new” servers: it helps if you observe performance drops on the “new” server. Or you can check table definitions before and after migration. There are many more uses for these tools during the migration process.
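As a sketch, one way to make such a comparison (the host names here are hypothetical):

$ mysql -h old-server -e "show global variables" > old_vars.txt
$ mysql -h new-server -e "show global variables" > new_vars.txt
$ diff old_vars.txt new_vars.txt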

Q: How can we take a backup of a single schema from a MySQL AWS instance without affecting the performance of applications? An AWS RDS instance, to be more clear. We cannot use mysqldump in an RDS instance in the current scenario.

A: You can connect to your RDS instance with mysqldump from your local machine, exactly like your MySQL clients connect to it. Then you can collect a dump of a single database or table, or even specify the --where option to limit the resulting set to only a portion of a table. Note that by default mysqldump is blocking, but if you back up solely transactional tables (InnoDB, TokuDB, MyRocks) you can run mysqldump with the --single-transaction option, which starts a transaction at the beginning of the backup job.
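A minimal sketch of such a single-schema dump from RDS (the endpoint, user and schema names here are hypothetical):

$ mysqldump -h myinstance.abc123.us-east-1.rds.amazonaws.com -u admin -p \
    --single-transaction mydb > mydb.sql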

Alternatively, you can use AWS Database Migration Service, which allows you to replicate your databases. Then you can take a backup of a single schema using whatever method you like.

Q: Why do some sites suggest to turn off information and performance schema? Is it important to keep it on or turn it off?

A: You cannot turn off Information Schema. It is always available.

Performance Schema in earlier versions (before 5.6.14) was resource-consuming even when enabled but idle. These limitations were fixed a long time ago, and you don’t need to keep it off, at least unless you hit some new bug.
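You can quickly check whether Performance Schema is enabled on your server:

mysql> show global variables like 'performance_schema';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| performance_schema | ON    |
+--------------------+-------+
1 row in set (0.00 sec)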

Q: How do we handle storage-level thresholds if a data file grows and reaches the maximum threshold unnoticed? Can you please help with this question?

A: Do you mean what happens if the data file grows until the filesystem has no space? In this case, clients receive the error "OS error code 28: No space left on device" until space is freed and mysqld can start functioning normally again. If mysqld can still write to its error log file (for example, if the log is located on a different disk), you will see messages about error 28 there too.
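To catch this before clients start failing, it helps to watch free space on the filesystem that holds the data directory (the path below is a common default; yours may differ):

$ df -h /var/lib/mysql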

Q: What are the performance bottlenecks when enabling performance_schema? Are there any benchmarks we can look at?

A: Just enabling Performance Schema in version 5.6 and up does not cause any performance issue. In version 5.7, it can also start with almost zero allocated memory, so it won’t affect your other buffers. Performance Schema causes impact when you enable particular instruments, most notably those whose names start with events_waits_*. I performed benchmarks on the effects of particular Performance Schema instruments and published them in this post.
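You can see which of these instruments are enabled, and disable an expensive group at runtime; for example (the instrument patterns below are just an illustration):

mysql> select NAME, ENABLED, TIMED from performance_schema.setup_instruments where NAME like 'wait/io/file/%' limit 3;
mysql> update performance_schema.setup_instruments set ENABLED='NO' where NAME like 'wait/synch/%';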

Q: Can you suggest some tips for creating real-time dashboards, since we have a replication environment? It would be great if you could help us build business-level dashboards.

A: This is a topic for yet another webinar or, better still, a tutorial. For starters, I recommend you check out the “MySQL Replication” dashboard in PMM and extend it using the metrics that you need.

Thanks for attending the webinar on internal troubleshooting tools for MySQL.

Feb
20
2018
--

xMatters snares $40 million Series D led by Goldman Sachs Private Capital Investing

When a crisis happens and your system is down, it’s easy for panic and chaos to ensue. Large companies use a range of tools to monitor system health, and finding the source of the problem and ensuring the proper personnel are involved is not always easy. That’s where xMatters comes in. It acts as an uber-monitoring tool, allowing you to understand the source of your problem and… Read More

Jan
17
2018
--

Troubleshooting Percona Monitoring and Management (PMM) Metrics

In this blog post, I’ll look at some helpful tips on troubleshooting Percona Monitoring and Management metrics.

With any luck, Percona Monitoring and Management (PMM) works for you out of the box. Sometimes, however, things go awry and you see empty or broken graphs instead of dashboards full of insights.

Troubleshooting Percona Monitoring and Management Metrics 1

Before we go through troubleshooting steps, let’s talk about how data makes it to the Grafana dashboards in the first place. The PMM Architecture documentation page helps explain it:

Troubleshooting Percona Monitoring and Management Metrics 2

If we focus just on the “Metrics” path, we see the following requirements:

  • The appropriate “exporters” (Part of PMM Client) are running on the hosts you’re monitoring
  • The database is configured to expose all the metrics you’re looking for
  • The hosts are correctly configured in the repository on PMM Server side (stored in Consul)
  • Prometheus on the PMM Server side can scrape them successfully – meaning it can reach them successfully, does not encounter any timeouts and has enough resources to ingest all the provided data
  • The exporters can retrieve metrics that they requested (i.e., there are no permissions problems)
  • Grafana can retrieve the metrics stored in Prometheus Server and display them

Now that we understand the basic requirements let’s look at troubleshooting the tool.

PMM Client

First, you need to check if the services are actually configured properly and running:

root@rocky:/mnt/data# pmm-admin list
pmm-admin 1.5.2
PMM Server      | 10.11.13.140
Client Name     | rocky
Client Address  | 10.11.13.141
Service Manager | linux-systemd
-------------- ------ ----------- -------- ------------------------------------------- ------------------------------------------
SERVICE TYPE   NAME   LOCAL PORT  RUNNING  DATA SOURCE                                 OPTIONS
-------------- ------ ----------- -------- ------------------------------------------- ------------------------------------------
mysql:queries  rocky  -           YES      root:***@unix(/var/run/mysqld/mysqld.sock)  query_source=slowlog, query_examples=true
linux:metrics  rocky  42000       YES      -
mysql:metrics  rocky  42002       YES      root:***@unix(/var/run/mysqld/mysqld.sock)

Second, you can also instruct the PMM client to perform basic network checks. These can spot connectivity problems, time drift and other issues:

root@rocky:/mnt/data# pmm-admin check-network
PMM Network Status
Server Address | 10.11.13.140
Client Address | 10.11.13.141
* System Time
NTP Server (0.pool.ntp.org)         | 2018-01-06 09:10:33 -0500 EST
PMM Server                          | 2018-01-06 14:10:33 +0000 GMT
PMM Client                          | 2018-01-06 09:10:33 -0500 EST
PMM Server Time Drift               | OK
PMM Client Time Drift               | OK
PMM Client to PMM Server Time Drift | OK
* Connection: Client --> Server
-------------------- -------
SERVER SERVICE       STATUS
-------------------- -------
Consul API           OK
Prometheus API       OK
Query Analytics API  OK
Connection duration | 355.085µs
Request duration    | 938.121µs
Full round trip     | 1.293206ms
* Connection: Client <-- Server
-------------- ------ ------------------- ------- ---------- ---------
SERVICE TYPE   NAME   REMOTE ENDPOINT     STATUS  HTTPS/TLS  PASSWORD
-------------- ------ ------------------- ------- ---------- ---------
linux:metrics  rocky  10.11.13.141:42000  OK      YES        -
mysql:metrics  rocky  10.11.13.141:42002  OK      YES        -

If everything is working, next we can check if exporters are providing the expected data directly.

Checking Prometheus Exporters

Looking at the output from pmm-admin check-network, we can see the “REMOTE ENDPOINT”. This shows the exporter address, which you can use to access it directly in your browser:

Troubleshooting Percona Monitoring and Management Metrics 3

You can see MySQL Exporter has different sets of metrics for high, medium and low resolution, and you can click on them to see the provided metrics:

Troubleshooting Percona Monitoring and Management Metrics 4

There are a few possible problems you may encounter at this stage:

  • You do not see the metrics you expect to see. This could be a configuration issue on the database side (docs for MySQL and MongoDB), permission errors, or the exporter not being correctly configured to expose the needed metrics.
  • The page takes too long to load. This could mean the data capture is too expensive for your configuration. For example, if you have a million tables, you probably can’t afford to capture per-table data.

mysql_exporter_collector_duration_seconds is a great metric that allows you to see which collectors are enabled for different resolutions, and how much time a given collector takes to execute. This way you can find, and potentially disable, collectors that are too expensive for your environment.
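One way to inspect this metric is to query the exporter endpoint directly; a sketch, assuming the endpoint from the earlier pmm-admin output and the HTTP credentials PMM configured for the exporter:

$ curl -sk --user pmm:<password> https://10.11.13.141:42002/metrics-mr | grep collector_duration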

Let’s look at some more advanced ways to troubleshoot exporters.  

Looking at ProcessList

root@rocky:/mnt/data# ps aux | grep mysqld_exporter
root      1697  0.0  0.0   4508   848 ?        Ss    2017   0:00 /bin/sh -c
/usr/local/percona/pmm-client/mysqld_exporter -collect.auto_increment.columns=true
-collect.binlog_size=true -collect.global_status=true -collect.global_variables=true
-collect.info_schema.innodb_metrics=true -collect.info_schema.processlist=true
-collect.info_schema.query_response_time=true -collect.info_schema.tables=true
-collect.info_schema.tablestats=true -collect.info_schema.userstats=true
-collect.perf_schema.eventswaits=true -collect.perf_schema.file_events=true
-collect.perf_schema.indexiowaits=true -collect.perf_schema.tableiowaits=true
-collect.perf_schema.tablelocks=true -collect.slave_status=true
-web.listen-address=10.11.13.141:42002 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml
-web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt
-web.ssl-key-file=/usr/local/percona/pmm-client/server.key >>
/var/log/pmm-mysql-metrics-42002.log 2>&1

This shows us that the exporter is running, as well as specific command line options that were used to start it (which collectors were enabled, for example).

Checking out Log File

root@rocky:/mnt/data# tail /var/log/pmm-mysql-metrics-42002.log
time="2018-01-05T18:19:10-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:11-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:12-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:492"
time="2018-01-05T18:19:12-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:12-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:616"
time="2018-01-05T18:19:13-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:14-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:15-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:16-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
2018/01/06 09:10:33 http: TLS handshake error from 10.11.13.141:56154: tls: first record does not look like a TLS handshake

If you have problems such as authentication or permission errors, you will see them in the log file. In the example above, we can see the exporter reporting many connection errors (the MySQL Server was down).

Prometheus Server

Next, we can take a look at the Prometheus Server. It is exposed in PMM Server at the /prometheus path. We can go to Status -> Targets to see which targets are configured and whether they are scraped correctly:

Troubleshooting Percona Monitoring and Management Metrics 5

In this example, some hosts are scraped successfully while others are not. As you can see, I have some hosts that are down, so scraping fails with “no route to host”. You might also see problems caused by firewall configurations and other reasons.

The next area to check, especially if you have gaps in your graphs, is whether your Prometheus server has enough resources to ingest all the data reported in your environment. Percona Monitoring and Management ships with the Prometheus dashboard to help answer this question (see demo).

There is a lot of information in this dashboard, but one of the most important areas you should check is if there is enough CPU available for Prometheus:

Troubleshooting Percona Monitoring and Management Metrics 6

The most typical problem to have with Prometheus is getting into “Rushed Mode” and dropping some of the metrics data:

Troubleshooting Percona Monitoring and Management Metrics 7

Not using enough memory to buffer metrics is another issue, which is shown as “Configured Target Storage Heap Size” on the graph:

Troubleshooting Percona Monitoring and Management Metrics 8

Values of around 40% of total memory size often make sense. The PMM FAQ details how to tune this setting.
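For example, when PMM Server runs in Docker, the FAQ describes passing the METRICS_MEMORY environment variable (in kilobytes) at container creation; a sketch for roughly 4GB (adjust to your host):

$ docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server -e METRICS_MEMORY=4194304 percona/pmm-server:latest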

If the amount of memory is already configured correctly, you can explore upgrading to a more powerful instance size or reducing the number of metrics Prometheus ingests. This can be done either by adjusting the metrics resolution (as explained in the FAQ) or by disabling some of the collectors (manual), as sketched below.
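For the second approach, on a host with a huge number of tables you might re-add the MySQL metrics service with per-table collectors disabled (verify the flags available in your pmm-admin version):

$ pmm-admin rm mysql:metrics
$ pmm-admin add mysql:metrics --disable-tablestats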

You might wonder which collectors generate the most data. This information is available on the same Prometheus dashboard:

Troubleshooting Percona Monitoring and Management Metrics 9

While these aren’t exact values, they correlate very well with the load the collectors generate. In this case, for example, we can see that the Performance Schema is responsible for a large share of the time series data. As such, disabling its collectors can reduce the Prometheus load substantially.

Hopefully, these troubleshooting steps were helpful to you in diagnosing PMM’s metrics capture. In a later blog post, I will write about how to diagnose problems with Query Analytics (Demo).

Jan
16
2018
--

Webinar January 18, 2018: MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM) Part 2

Join Percona’s Product Manager Michael Coburn as he presents MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM) Part 2 on Thursday, January 18, 2018, at 11:00 am PST / 2:00 pm EST (UTC-8).

Tags: Percona Monitoring and Management, PMM, Monitoring, MySQL, Performance, Optimization, DBA, SysAdmin, DevOps
Experience Level: Expert

Optimizing MySQL performance and troubleshooting MySQL problems are two of the most critical and challenging tasks for MySQL DBAs. The databases powering your applications need to handle heavy traffic loads while remaining responsive and stable, so that you can deliver an excellent user experience. Furthermore, DBAs are also expected to find cost-efficient means of solving these issues.

In this webinar — the second part of a two-part series — Michael discusses how you can optimize and troubleshoot MySQL performance, and demonstrates how Percona Monitoring and Management (PMM) enables you to solve these challenges using free and open source software. We will look at specific, common MySQL problems and review the essential components in PMM that allow you to diagnose and resolve them.

By the end of this webinar, you will have a better understanding of how you can troubleshoot MySQL problems in your database.

Register for the webinar now.

Michael Coburn, Product Manager

Michael joined Percona as a Consultant in 2012 and progressed through various roles, including Managing Consultant, Principal Architect, Technical Account Manager, and Technical Support Engineer. He is now the Product Manager of Percona Monitoring and Management.

Apr
13
2017
--

TokuDB Troubleshooting: Q & A

In this blog, we will provide answers to the Q & A for the TokuDB Troubleshooting webinar.

First, we want to thank everybody for attending the March 22, 2017 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that we were unable to answer during the webinar:

Q: Is it possible to load specific tables or data from the backup?

A: Do you mean the backup created by TokuBackup? No, this is not possible. You have to restore the full backup to a temporary instance, then perform a logical dump/reload of the specific tables you want to restore.
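A sketch of that dump/reload step, once the full backup is restored to a temporary instance (the host and object names here are hypothetical):

$ mysqldump -h temp-restore-host --single-transaction mydb mytable > mytable.sql
$ mysql -h production-host mydb < mytable.sql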

Q: Since it is not recoverable when corruption happens, we have a risk of losing data. Is incremental backup the only option?

A: TokuBackup currently does not support incremental backups. You can only approximate an incremental backup if you copy and store the binary logs on a separate server, and then apply them on top of a previous full backup.
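A sketch of that approach: restore the last full backup, then replay the saved binary logs on top of it (the file names and start position here are hypothetical):

$ mysqlbinlog --start-position=4 mysql-bin.000042 mysql-bin.000043 | mysql -h restored-host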

Mar
29
2017
--

Webinar Thursday 3/30: MyRocks Troubleshooting

Please join Percona’s Principal Technical Services Engineer Sveta Smirnova and Senior Software Engineer George Lorch, MariaDB’s Query Optimizer Developer Sergei Petrunia, and Facebook’s Database Engineer Yoshinori Matsunobu as they present MyRocks Troubleshooting on March 30, 2017, at 11:00 am PDT / 2:00 pm EDT (UTC-7).

MyRocks is an alternative storage engine designed for flash storage. It provides great write workload performance and space efficiency. Like any other powerful engine, it has its own specific configuration scenarios that require special troubleshooting solutions.

This webinar will discuss how to deal with:

  • Data corruption issues
  • Inconsistent data
  • Locks
  • Slow performance

We will use well-known instruments and tools, as well as MyRocks-specific tools, and demonstrate how they work with the MyRocks storage engine.

Register for this webinar here.

Mar
28
2017
--

Troubleshooting Issues with MySQL Character Sets Q & A

In this blog, I will provide answers to the Q & A for the Troubleshooting Issues with MySQL Character Sets webinar.

First, I want to thank everybody for attending the March 9 MySQL character sets troubleshooting webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: We’ve had some issues converting tables from utf8 to utf8mb4. Our issue was that the collation we wanted to use, utf8mb4_unicode_520_ci, did not distinguish between regular spaces and ideographic (Japanese) spaces, so we were getting unique constraint violations for the varchar fields when two entries had the same text with different kinds of spaces. Have you seen this problem, and is there a workaround? We were wondering if this was related to the mother-child character bug with this collation.

A: Unfortunately this issue exists for many languages. For example, in Russian you cannot distinguish “е” and “ё” if you use utf8 or utf8mb4. However, there is hope for Japanese: Oracle announced that they will implement new language-specific utf8mb4 collations in MySQL 8.0. I already see 21 new collations in my 8.0.0 installation.

mysql> show collation like '%0900%';
+----------------------------+---------+-----+---------+----------+---------+
| Collation                  | Charset | Id  | Default | Compiled | Sortlen |
+----------------------------+---------+-----+---------+----------+---------+
| utf8mb4_0900_ai_ci         | utf8mb4 | 255 |         | Yes      |       8 |
| utf8mb4_cs_0900_ai_ci      | utf8mb4 | 266 |         | Yes      |       8 |
| utf8mb4_da_0900_ai_ci      | utf8mb4 | 267 |         | Yes      |       8 |
| utf8mb4_de_pb_0900_ai_ci   | utf8mb4 | 256 |         | Yes      |       8 |
| utf8mb4_eo_0900_ai_ci      | utf8mb4 | 273 |         | Yes      |       8 |
| utf8mb4_es_0900_ai_ci      | utf8mb4 | 263 |         | Yes      |       8 |
| utf8mb4_es_trad_0900_ai_ci | utf8mb4 | 270 |         | Yes      |       8 |
| utf8mb4_et_0900_ai_ci      | utf8mb4 | 262 |         | Yes      |       8 |
| utf8mb4_hr_0900_ai_ci      | utf8mb4 | 275 |         | Yes      |       8 |
| utf8mb4_hu_0900_ai_ci      | utf8mb4 | 274 |         | Yes      |       8 |
| utf8mb4_is_0900_ai_ci      | utf8mb4 | 257 |         | Yes      |       8 |
| utf8mb4_la_0900_ai_ci      | utf8mb4 | 271 |         | Yes      |       8 |
| utf8mb4_lt_0900_ai_ci      | utf8mb4 | 268 |         | Yes      |       8 |
| utf8mb4_lv_0900_ai_ci      | utf8mb4 | 258 |         | Yes      |       8 |
| utf8mb4_pl_0900_ai_ci      | utf8mb4 | 261 |         | Yes      |       8 |
| utf8mb4_ro_0900_ai_ci      | utf8mb4 | 259 |         | Yes      |       8 |
| utf8mb4_sk_0900_ai_ci      | utf8mb4 | 269 |         | Yes      |       8 |
| utf8mb4_sl_0900_ai_ci      | utf8mb4 | 260 |         | Yes      |       8 |
| utf8mb4_sv_0900_ai_ci      | utf8mb4 | 264 |         | Yes      |       8 |
| utf8mb4_tr_0900_ai_ci      | utf8mb4 | 265 |         | Yes      |       8 |
| utf8mb4_vi_0900_ai_ci      | utf8mb4 | 277 |         | Yes      |       8 |
+----------------------------+---------+-----+---------+----------+---------+
21 rows in set (0,03 sec)

In 8.0.1 they promised new case-sensitive and Japanese collations. Please see this blog post for details; the note about the planned Japanese support is at the end.

Meanwhile, I can only suggest that you implement your own collation as described here. You may use the utf8_russian_ci collation from Bug #51976 as an example.

Although the user manual does not list utf8mb4 as a character set for which it’s possible to create new collations, you can actually do it. What you need to do is add a record about the character set utf8mb4 and the new collation into Index.xml, then restart the server:

<charset name="utf8mb4">
  <collation name="utf8mb4_russian_ci" id="1033">
    <rules>
      <reset>\u0415</reset><p>\u0451</p><t>\u0401</t>
    </rules>
  </collation>
</charset>
mysql> show collation like 'utf8mb4_russian_ci';
+--------------------+---------+------+---------+----------+---------+
| Collation          | Charset | Id   | Default | Compiled | Sortlen |
+--------------------+---------+------+---------+----------+---------+
| utf8mb4_russian_ci | utf8mb4 | 1033 |         |          |       8 |
+--------------------+---------+------+---------+----------+---------+
1 row in set (0,03 sec)
mysql> create table test_yo(gen varchar(100) CHARACTER SET utf8mb4, yo varchar(100) CHARACTER SET utf8mb4 collate utf8mb4_russian_ci) engine=innodb default character set=utf8mb4;
Query OK, 0 rows affected (0,25 sec)
mysql> set names utf8mb4;
Query OK, 0 rows affected (0,02 sec)
mysql> insert into test_yo values('??', '??'), ('???', '???'), ('????', '????');
Query OK, 3 rows affected (0,05 sec)
Records: 3  Duplicates: 0  Warnings: 0
mysql> insert into test_yo values('??', '??'), ('???', '???'), ('????', '????');
Query OK, 3 rows affected (0,06 sec)
Records: 3  Duplicates: 0  Warnings: 0
mysql> select * from test_yo order by gen;
+----------+----------+
| gen      | yo       |
+----------+----------+
| ??       | ??       |
| ??       | ??       |
| ????     | ????     |
| ????     | ????     |
| ???      | ???      |
| ???      | ???      |
+----------+----------+
6 rows in set (0,00 sec)
mysql> select * from test_yo order by yo;
+----------+----------+
| gen      | yo       |
+----------+----------+
| ??       | ??       |
| ??       | ??       |
| ???      | ???      |
| ???      | ???      |
| ????     | ????     |
| ????     | ????     |
+----------+----------+
6 rows in set (0,00 sec)

Q: If receiving utf8 on a latin1 charset, it will be corrupted. Just want to confirm that you can reformat as utf8 and un-corrupt the data? Also, is there a time limit on how quickly this needs to be done?

A: It will be corrupted only if you store utf8 data in a latin1 column. For example, if you have a table defined as:

create table latin1(
  f1 varchar(100)
) engine=innodb default charset=latin1;

And then insert a word in utf8 format into it that contains characters that are not in the latin1 character set:

mysql> set names utf8;
Query OK, 0 rows affected (0,00 sec)
mysql> set sql_mode='';
Query OK, 0 rows affected, 1 warning (0,00 sec)
mysql> insert into latin1 values('Sveta'), ('Света');
Query OK, 2 rows affected, 1 warning (0,04 sec)
Records: 2  Duplicates: 0  Warnings: 1

The data in utf8 will be corrupted and can never be recovered:

mysql> select * from latin1;
+-------+
| f1    |
+-------+
| Sveta |
| ????? |
+-------+
2 rows in set (0,00 sec)
mysql> select f1, hex(f1) from latin1;
+-------+------------+
| f1    | hex(f1)    |
+-------+------------+
| Sveta | 5376657461 |
| ????? | 3F3F3F3F3F |
+-------+------------+
2 rows in set (0,01 sec)

However, if your data is stored in a utf8 column and you use latin1 for the connection, you will only get a corrupted result set. The data itself will be left untouched:

mysql> create table utf8(f1 varchar(100)) engine=innodb character set utf8;
Query OK, 0 rows affected (0,18 sec)
mysql> insert into utf8 values('Sveta'), ('Света');
Query OK, 2 rows affected (0,15 sec)
Records: 2  Duplicates: 0  Warnings: 0
mysql> set names latin1;
Query OK, 0 rows affected (0,00 sec)
mysql> select f1, hex(f1) from utf8;
+-------+----------------------+
| f1    | hex(f1)              |
+-------+----------------------+
| Sveta | 5376657461           |
| ????? | D0A1D0B2D0B5D182D0B0 |
+-------+----------------------+
2 rows in set (0,00 sec)
mysql> set names utf8;
Query OK, 0 rows affected (0,00 sec)
mysql> select f1, hex(f1) from utf8;
+------------+----------------------+
| f1         | hex(f1)              |
+------------+----------------------+
| Sveta      | 5376657461           |
| Света      | D0A1D0B2D0B5D182D0B0 |
+------------+----------------------+
2 rows in set (0,00 sec)

Q: Can you discuss how charsets affect mysqldump? Specifically, how do we dump a database containing tables with different default charsets?

A: Yes, you can. MySQL can successfully convert data that uses different character sets, so your only job is to specify the option --default-character-set for mysqldump. In this case, strings in any character set you use can be converted to the character set specified. For example, if you use cp1251 and latin1, you may set the option --default-character-set to cp1251, utf8 or utf8mb4. However, you cannot set it to latin1, because Cyrillic characters exist in the cp1251 character set but do not exist in latin1.

The default value for mysqldump is utf8. You only need to change this default if you use values that are outside of the range supported by utf8 (for example, the smileys in utf8mb4).

Q: But if you use the --single-transaction option for mysqldump, you can only specify one character set in the default?

A: Yes, and this is OK: all data will be converted into this character set. Then, when you restore the dump, it will be converted back to the character set specified in the column definitions.
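For example, a dump that funnels everything through utf8mb4 regardless of the column character sets (the database name is hypothetical):

$ mysqldump --single-transaction --default-character-set=utf8mb4 mydb > mydb.sql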

Q: I noticed that MySQL doesn’t support case-sensitive UTF-8 character sets. What do you recommend for implementing case-sensitive UTF-8, if it’s at all possible?

A: In the link I provided earlier, Oracle promises to implement case-sensitive collations for utf8mb4 in version 8.0.1. Before that happens, I recommend you implement your own case-sensitive collation.
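A simpler stopgap, if binary ordering is acceptable, is a binary collation such as utf8mb4_bin, which compares case-sensitively (at the cost of linguistic sorting); a quick check:

mysql> set names utf8mb4;
mysql> select 'Abc' = 'abc' as ci, 'Abc' = ('abc' collate utf8mb4_bin) as cs;
+----+----+
| ci | cs |
+----+----+
|  1 |  0 |
+----+----+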

Q: How are tools like pt-table-checksum affected by charsets? Is it safe to use a 4-byte charset (like utf8mb4) as the default charset for all comparisons? Assuming our tables are a mix of latin1, utf8 and utf8mb4.

A: With this combination, you won’t have any issues: pt-table-checksum uses a complicated set of functions that joins columns and calculates a crc32 checksum on them. In your case, all data will be converted to utf8mb4 and no conflicts will happen.

However, if you use incompatible character sets in a single table, you may get the error "Illegal mix of collations for operation 'concat_ws'":

mysql> create table cp1251(f1 varchar(100) character set latin1, f2 varchar(100) character set cp1251) engine=innodb;
Query OK, 0 rows affected (0,32 sec)
mysql> set names utf8;
Query OK, 0 rows affected (0,00 sec)
mysql> insert into cp1251 values('Sveta', 'Света');
Query OK, 1 row affected (0,07 sec)
sveta@Thinkie:~/build/mysql-8.0/mysql-test$ ~/build/percona-toolkit/bin/pt-table-checksum h=127.0.0.1,P=13000,u=root,D=test
Diffs cannot be detected because no slaves were found.  Please read the --recursion-method documentation for information.
03-18T03:51:58 Error executing EXPLAIN SELECT COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS('#', `f1`, `f2`, CONCAT(ISNULL(`f1`), ISNULL(`f2`)))) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `db1`.`cp1251` /*explain checksum table*/: DBD::mysql::st execute failed: Illegal mix of collations for operation 'concat_ws' [for Statement "EXPLAIN SELECT COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS('#', `f1`, `f2`, CONCAT(ISNULL(`f1`), ISNULL(`f2`)))) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `db1`.`cp1251` /*explain checksum table*/"] at /home/sveta/build/percona-toolkit/bin/pt-table-checksum line 11351.
03-18T03:51:58 Error checksumming table db1.cp1251: Error executing checksum query: DBD::mysql::st execute failed: Illegal mix of collations for operation 'concat_ws' [for Statement "REPLACE INTO `percona`.`checksums` (db, tbl, chunk, chunk_index, lower_boundary, upper_boundary, this_cnt, this_crc) SELECT ?, ?, ?, ?, ?, ?, COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS('#', `f1`, `f2`, CONCAT(ISNULL(`f1`), ISNULL(`f2`)))) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `db1`.`cp1251` /*checksum table*/" with ParamValues: 0='db1', 1='cp1251', 2=1, 3=undef, 4=undef, 5=undef] at /home/sveta/build/percona-toolkit/bin/pt-table-checksum line 10741.
TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
03-18T03:51:58      2      0        0       1       0   0.003 db1.cp1251
03-18T03:51:58      0      0        2       1       0   0.167 db1.latin1
03-18T03:51:58      0      0        6       1       0   0.198 db1.test_yo
...

The tool continues working, and will process the rest of your tables. I reported this behavior as Bug #1674266.

Thanks for attending the Troubleshooting Issues with MySQL Character Sets webinar.

Mar
21
2017
--

Webinar Wednesday March 22, 2017: TokuDB Troubleshooting

Please join Percona’s Principal Technical Services Engineer Sveta Smirnova, Senior Software Engineer George Lorch, and Software Engineer Vlad Lesin as they present TokuDB Troubleshooting on March 22, 2017, at 11:00 am PDT / 2:00 pm EDT (UTC-7).

TokuDB is an alternative storage engine designed for big data applications. It provides great write workload scalability. While its feature set is similar to InnoDB’s, this engine has its own specific configuration settings and troubleshooting instruments. This webinar will look at how to use them for optimal performance.

We will discuss how to deal with:

  • Data corruption issues
  • Inconsistent data
  • Locks
  • Slow performance

We will use well-known instruments and tools, and demonstrate how they work with the TokuDB storage engine.

Register for the webinar here.

Vladislav Lesin, Software Engineer

Vladislav Lesin is a software engineer at Percona, having joined in April 2012. Before coming to Percona, he worked on improving the performance and reliability of high-load projects with LAMP architectures. His work consisted of developing fast servers and modules in C and C++, monitoring project state, finding bottlenecks, and patching open source projects including nginx, memcache, sphinx, php and ejabberd. He took part in developing not only server-side applications, but desktop and mobile ones too. He also has experience in project/product management, hiring, and partner negotiations.

Before that, he worked in several IT companies, where he developed desktop applications in C++ for areas such as industrial automation, parallel computing and media production. He holds a Master’s Degree in Technique and Technology from Tula State University. He now lives in Tula with his wife and daughter.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can solve typical issues quicker, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB, then Sun and Oracle.

She is the author of the book “MySQL Troubleshooting” and JSON UDF functions for MySQL.

George Lorch, Software Engineer

George joined the Percona development team in April 2012. George has over 20 years of experience in software support, development, architecture and project management. Prior to joining Percona, George was focused on Windows-based enterprise application server development and network protocol classification and optimization, with heavy doses of database schema design, architecture and tuning.

Mar
10
2017
--

Troubleshooting MySQL access privileges issues: Q & A

In this blog, I will provide answers to the Q & A for the Troubleshooting MySQL Access Privileges Issues webinar.

First, I want to thank everybody for attending the February 23 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: Should the root@localhost user be given the ALL privilege or the SUPER privilege? Does ALL include SUPER as well?

A: Yes, you should have a user with all privileges, and it is better if this user has access from localhost only. ALL includes SUPER.

Q: We have users who connect via a laptop that gets dynamic IP addresses, so granting access with a server name is an easier way to manage these users. Can I grant access to a MySQL database with a hostname as opposed to an IP address? For example, “myname@mymachine.mydomain.com” as opposed to “myname@10.10.10.10”? Is the host cache/performance_schema required for this?

A: Yes, you can.

But it looks like I was not clear about the host cache. The host cache is an internal structure that is always available and contains answers from the DNS server. You cannot enable or disable it. Until version 5.6, you also could not control it: for example, if the cache got corrupted, the only thing you could do was restart the server. Version 5.6 introduced the HOST_CACHE table in Performance Schema. With this table, you can examine the content of the host cache and truncate it if needed.

Q: If there are multiple entries in the user table that match the connecting user (e.g., with wildcards, hostname, and IP), what rules does MySQL use to select which is used for authentication?  Does it try multiple ones until it gets a password match?

A: No, mysqld does not try to hack your passwords. Instead, it sorts the user table by name and host in descending order, as I showed on slide #37 (page 110), and takes the first matching row. So if you created users foo@somehost, foo@some% and foo@1.2.3.4, and you connect as foo from somehost, mysqld first checks the user name and then chooses the first matching row, foo@somehost. If you instead connect as foo from someotherhost, mysqld chooses foo@some%. An IP-based host is chosen if either mysqld was started with the option skip-networking or 1.2.3.4 points to a host whose name does not start with “some”.

Mixing IP-based and name-based hosts is dangerous in situations when the same host can be resolved as somehost or 1.2.3.4. In this case, if something goes wrong with the host cache or the DNS server, the wrong entry from the user table can be chosen. For example, suppose you initially had three hosts: uniquehost (which resolves as 1.2.3.4), somehost (which resolves as 4.3.2.1) and someothershost (which resolves as 4.3.2.2). Now you decide to relocate uniquehost to a machine with IP 1.2.3.5 and use IP 1.2.3.4 for a host named someyetanotherhost. In this case, the clients from the machine with IP 1.2.3.4 will be treated as foo@some%, which isn’t what you want.

To demonstrate this issue, I created two users and granted two different privileges to them:

mysql> create user sveta@Thinkie;
Query OK, 0 rows affected (0,01 sec)
mysql> create user sveta@'192.168.0.4';
Query OK, 0 rows affected (0,00 sec)
mysql> grant all on *.* to 'sveta'@'Thinkie';
Query OK, 0 rows affected (0,00 sec)
mysql> grant all on db1.* to 'sveta'@'192.168.0.4';
Query OK, 0 rows affected (0,00 sec)

Now I modified my /etc/hosts file and pointed the address 192.168.0.4 to the name Thinkie:

127.0.0.1   localhost
# 127.0.1.1   Thinkie
192.168.0.4 Thinkie

Now, if I connect as sveta, both Thinkie and 192.168.0.4 resolve to the same host:

sveta@Thinkie:$ mysql -hThinkie -usveta
...
mysql> select user(), current_user();
+---------------+----------------+
| user()        | current_user() |
+---------------+----------------+
| sveta@Thinkie | sveta@thinkie  |
+---------------+----------------+
1 row in set (0,00 sec)
mysql> show grants;
+--------------------------------------------------+
| Grants for sveta@thinkie                         |
+--------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'sveta'@'thinkie' |
+--------------------------------------------------+
1 row in set (0,00 sec)
mysql> \q
Bye
sveta@Thinkie:$ mysql -h192.168.0.4 -usveta
...
mysql> show grants;
+--------------------------------------------------+
| Grants for sveta@thinkie                         |
+--------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'sveta'@'thinkie' |
+--------------------------------------------------+
1 row in set (0,00 sec)
mysql> select user(), current_user();
+---------------+----------------+
| user()        | current_user() |
+---------------+----------------+
| sveta@Thinkie | sveta@thinkie  |
+---------------+----------------+
1 row in set (0,00 sec)
mysql> \q
Bye

Now I modified the /etc/hosts file and pointed Thinkie back to 127.0.0.1 (localhost):

127.0.0.1   localhost
127.0.1.1   Thinkie
# 192.168.0.4 Thinkie

But the host 192.168.0.4 still resolves to Thinkie:

sveta@Thinkie:$ mysql -h192.168.0.4 -usveta
...
mysql> select user(), current_user();
+---------------+----------------+
| user()        | current_user() |
+---------------+----------------+
| sveta@Thinkie | sveta@thinkie  |
+---------------+----------------+
1 row in set (0,00 sec)
mysql> show grants;
+--------------------------------------------------+
| Grants for sveta@thinkie                         |
+--------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'sveta'@'thinkie' |
+--------------------------------------------------+
1 row in set (0,00 sec)
mysql> \q
Bye

The reason for this is a stale host cache, which can easily be observed with Performance Schema:

sveta@Thinkie:$ mysql -uroot
...
mysql> select * from performance_schema.host_cache\G
*************************** 1. row ***************************
                                        IP: 192.168.0.4
                                      HOST: Thinkie
                            HOST_VALIDATED: YES
                        SUM_CONNECT_ERRORS: 0
                 COUNT_HOST_BLOCKED_ERRORS: 0
           COUNT_NAMEINFO_TRANSIENT_ERRORS: 0
           COUNT_NAMEINFO_PERMANENT_ERRORS: 0
                       COUNT_FORMAT_ERRORS: 0
           COUNT_ADDRINFO_TRANSIENT_ERRORS: 0
           COUNT_ADDRINFO_PERMANENT_ERRORS: 0
                       COUNT_FCRDNS_ERRORS: 0
                     COUNT_HOST_ACL_ERRORS: 0
               COUNT_NO_AUTH_PLUGIN_ERRORS: 0
                  COUNT_AUTH_PLUGIN_ERRORS: 0
                    COUNT_HANDSHAKE_ERRORS: 0
                   COUNT_PROXY_USER_ERRORS: 0
               COUNT_PROXY_USER_ACL_ERRORS: 0
               COUNT_AUTHENTICATION_ERRORS: 0
                          COUNT_SSL_ERRORS: 0
         COUNT_MAX_USER_CONNECTIONS_ERRORS: 0
COUNT_MAX_USER_CONNECTIONS_PER_HOUR_ERRORS: 0
             COUNT_DEFAULT_DATABASE_ERRORS: 0
                 COUNT_INIT_CONNECT_ERRORS: 0
                        COUNT_LOCAL_ERRORS: 0
                      COUNT_UNKNOWN_ERRORS: 0
                                FIRST_SEEN: 2017-03-02 23:19:32
                                 LAST_SEEN: 2017-03-02 23:20:31
                          FIRST_ERROR_SEEN: NULL
                           LAST_ERROR_SEEN: NULL
1 row in set (0,00 sec)
mysql> truncate performance_schema.host_cache;
Query OK, 0 rows affected (0,00 sec)
mysql> \q
Bye

After I truncated the host_cache table, the numeric host resolves as I expect:

sveta@Thinkie:$ mysql -h192.168.0.4 -usveta
...
mysql> show grants;
+----------------------------------------------------------+
| Grants for sveta@192.168.0.4                             |
+----------------------------------------------------------+
| GRANT USAGE ON *.* TO 'sveta'@'192.168.0.4'              |
| GRANT ALL PRIVILEGES ON `db1`.* TO 'sveta'@'192.168.0.4' |
+----------------------------------------------------------+
2 rows in set (0,00 sec)
mysql> select user(), current_user();
+-------------------+-------------------+
| user()            | current_user()    |
+-------------------+-------------------+
| sveta@192.168.0.4 | sveta@192.168.0.4 |
+-------------------+-------------------+
1 row in set (0,00 sec)
mysql> q
Bye

Q: What privileges are required for a non-root or non-super user to use mysqldump to dump the database and then restore it on a different server?

A: Generally, you should have the SELECT privilege on all objects you are going to dump. If you are dumping views, you also should have the SHOW VIEW privilege in order to run SHOW CREATE TABLE. If you want to dump stored routines/events, you need access to them as well. And if you use the option --lock-tables or --lock-all-tables, you should have the LOCK TABLES privilege.
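As a sketch, a minimal account for such dumps might look like this (the account and schema names are hypothetical; add EVENT and TRIGGER if you dump those object types):

mysql> create user 'backup'@'localhost' identified by '...';
mysql> grant select, show view, lock tables, event, trigger on mydb.* to 'backup'@'localhost';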

Q: If the max_connections value is reached in MySQL, can root@localhost with the ALL privilege still log in, or only a user with the SUPER privilege?

A: ALL includes SUPER, so a user with the ALL privilege can log in. Just note there can be only one such connection, so do not grant the SUPER or ALL privilege to the application user.

Q: Is it possible to remove a privilege at a lower level? In other words, grant SELECT and DELETE at the database level, but remove DELETE for a specific table? Or can privileges only be added?

A: No, MySQL will reject such a statement:

mysql> show grants for sveta@'192.168.0.4';
+----------------------------------------------------------+
| Grants for sveta@192.168.0.4                             |
+----------------------------------------------------------+
| GRANT USAGE ON *.* TO 'sveta'@'192.168.0.4'              |
| GRANT ALL PRIVILEGES ON `db1`.* TO 'sveta'@'192.168.0.4' |
+----------------------------------------------------------+
2 rows in set (0,00 sec)
mysql> revoke update on db1.t1 from sveta@'192.168.0.4';
ERROR 1147 (42000): There is no such grant defined for user 'sveta' on host '192.168.0.4' on table 't1'

Q: How can we have DB user roles… like a group of grants for a particular role?

A: You have several options.

  1. Use MariaDB 10.0.5 or newer. You can read about roles support in MariaDB here.
  2. Use MySQL 8.0. You can read about roles in MySQL 8.0 here.
  3. With MySQL 5.7: imitate roles as I showed on slide 19 (pages 53 – 60).
  4. With MySQL 5.5 and 5.6: use the same method as shown on the slides, but with a custom authentication plugin that supports proxy users.
  5. Always an option: create a template with privileges, and assign privileges to each user manually.

Q: How would you migrate role simulation with proxy users to actual roles in MySQL 8.x?

A: I would drop the proxied user and create a role with the same privileges instead, then grant the proxy user the newly created role instead of PROXY.
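A sketch of the target state in MySQL 8.0 (the role, user and schema names are hypothetical):

mysql> create role 'app_role';
mysql> grant select, insert, update on db1.* to 'app_role';
mysql> grant 'app_role' to 'app_user'@'%';
mysql> set default role 'app_role' to 'app_user'@'%';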

Q: Is there a plugin to integrate Active Directory and MySQL in order to use Active Directory groups?

A: There is a commercial Windows Authentication Plugin, available in versions 5.5 and newer. You can also use the open source Percona PAM authentication plugin and connect it to Active Directory the same way as can be done for LDAP. There is a blog post describing how to do so, but I’ve never used this method myself.

Q: Can we use central auth with MySQL?

A: Yes, with the help of the PAM plugin. There are tutorials for LDAP and Active Directory. You may use similar methods to set up other kinds of authentication, such as Kerberos.
