Mar 08, 2023
--

Dealing With “Too Many Connections” Error in MySQL 8

Over my years as a DBA, I have had to deal with all kinds of database problems, and one of the most common is the well-known error “ERROR 1040 (08004): Too many connections”. A lot has been written about this error, yet users keep falling into this trap, whether because of a poorly configured database, a change in the application components, or simply a sudden increase in application connections. At some point we all face this issue in our careers, not once but many times. The main objective of this blog post is to point out the administrative connections introduced in MySQL 8, as they can save us from restarting the instance when this happens.

Default behavior

We know that the number of connections allowed in the database is defined by the parameter “max_connections.” The default value for this parameter is 151, and it can be changed dynamically, that is, without a database restart. If the connections in the database are maxed out, we hit the dreadful message “ERROR 1040 (08004): Too many connections”. It is important to remember that, out of the box, MySQL allows one extra connection; this connection is reserved for users with the SUPER privilege (deprecated as of MySQL 8.0) or the CONNECTION_ADMIN privilege.
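
For instance, raising the limit on a live instance takes one statement (a minimal sketch; 500 is just an example value):

-- check the current limit
mysql> SHOW VARIABLES LIKE 'max_connections';
-- raise it online, no restart needed
mysql> SET GLOBAL max_connections = 500;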

I’ll show an example of this behavior. For this example, I have an instance with “max_connections=20” and three users: user “monitor1” has only the PROCESS privilege, user “admin1” has the PROCESS and CONNECTION_ADMIN privileges, and user “admin2” has the (deprecated) SUPER privilege. Let’s see how MySQL treats these connections when the instance is maxed out on user connections:

-- execute all 20 concurrent connections
sysbench oltp_read_write --table-size=1000000 --db-driver=mysql --mysql-host=localhost --mysql-db=sbtest --mysql-user=root --mysql-password="***" --num-threads=20 --time=0 --report-interval=1 run
-- test with user monitor1 
[root@rocky-test1 ~]# mysql -u monitor1 -p
Enter password:
ERROR 1040 (08004): Too many connections

-- test with user admin1
[root@rocky-test1 ~]# mysql -u admin1 -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 144
Server version: 8.0.29-21 Percona Server (GPL), Release 21, Revision c59f87d2854

Copyright (c) 2009-2022 Percona LLC and/or its affiliates
Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show grants;
+-----------------------------------------------+
| Grants for admin1@%                           |
+-----------------------------------------------+
| GRANT PROCESS ON *.* TO `admin1`@`%`          |
| GRANT CONNECTION_ADMIN ON *.* TO `admin1`@`%` |
+-----------------------------------------------+
2 rows in set (0.00 sec)

mysql> select count(1) from information_schema.processlist;
+----------+
| count(1) |
+----------+
|       22 |
+----------+
1 row in set (0.00 sec)


-- test with user admin2 
[root@rocky-test1 ~]# mysql -u admin2 -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 145
Server version: 8.0.29-21 Percona Server (GPL), Release 21, Revision c59f87d2854

Copyright (c) 2009-2022 Percona LLC and/or its affiliates
Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show grants;
+------------------------------------+
| Grants for admin2@%                |
+------------------------------------+
| GRANT SUPER ON *.* TO `admin2`@`%` |
+------------------------------------+
1 row in set (0.00 sec)

mysql> select count(1) from information_schema.processlist;
+----------+
| count(1) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

As you can see, a single connection from a user with the CONNECTION_ADMIN or SUPER privilege is allowed; however, when user “monitor1” tried to connect, it was rejected because it holds neither of those privileges. Once we gain access to the database, we can easily increase the connections by changing the variable “max_connections” online and then investigate the origin of the problem. It is important to remember that only one of these reserved connections is allowed, so please don’t grant these privileges to every user, or you could still be locked out of your database.

-- trying a second connection with user admin1

[root@rocky-test1 ~]# mysql -u admin1 -p
Enter password:
ERROR 1040 (HY000): Too many connections
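
Once inside through the reserved connection, a quick way to see where the connections come from, before raising the limit or killing anything, is to aggregate the processlist (a sketch; the session id for KILL comes from that output):

-- group connections by user and client host
mysql> SELECT user, SUBSTRING_INDEX(host, ':', 1) AS client, COUNT(*) AS conns
    -> FROM information_schema.processlist
    -> GROUP BY user, client
    -> ORDER BY conns DESC;
-- terminate a runaway session by its id (example id)
mysql> KILL 1234;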

Usually, when this problem occurs and we cannot gain access to MySQL, the immediate solution is to restart the database and deal with all the consequences that brings, but hey… that is better than rejecting connections for several minutes during the business’s normal operating hours. There is another alternative for gaining access to the database, using GDB, but it is not always possible; “Too many connections? No problem!” is an article we wrote about this technique in the past. It is a bit old but still valid.

Side note for Percona Server for MySQL and MariaDB

Percona Server for MySQL, in versions before 8.0.14, had another way to access the database instance, similar to the feature introduced in version 8.0.14. It relied on the variables “extra_port” and “extra_max_connections”; their usage is out of the scope of this blog post, but their objective was to allow connections to the database even when the maximum number of connections had been reached. Remember that those variables were removed in version 8.0.14, and if they are found in the config file, the instance will not start and an error will be shown. Like Percona Server for MySQL, MariaDB has a similar implementation of the same variables, which is described in the MariaDB documentation.

New feature

Starting with MySQL 8.0.14, a new “Administrative Connections” (or “Administrative Network Interface”) feature was introduced. It allows connections to the database through a dedicated administrative port, and there is no limit on the number of administrative connections. The difference from the single reserved connection shown in the previous example is that this is a separate port and it is not limited to a single connection. This lets us access the database when the user connections are maxed out and, from there, increase the limit or kill some of the application connections.

The easiest way to enable administrative connections is to define the “admin_address” variable; this is the IP address the administrative interface will listen on. For example, if you only want to allow local connections, set it to “127.0.0.1”; if you want to connect over the network, set it to the server’s IP address. This variable is not dynamic, which means changing it requires a database restart. By default it is empty, meaning the administrative interface is disabled. Another related variable is “admin_port”, which defines the port MySQL will listen on for administrative connections; its default value is 33062. Once you define both variables and restart the database, you will see a message in the error log indicating the admin interface is ready for connections:

2023-02-28T14:42:44.383663Z 0 [System] [MY-013292] [Server] Admin interface ready for connections, address: '127.0.0.1'  port: 33062
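
For reference, the corresponding my.cnf entries are minimal (a sketch; use your server’s IP instead of 127.0.0.1 if you need remote access):

[mysqld]
# administrative interface address; empty (the default) keeps it disabled
admin_address=127.0.0.1
# optional, 33062 is already the default
admin_port=33062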

Now that the admin interface is configured, you need to define the users that can access this administrative connection. These users require the “SERVICE_CONNECTION_ADMIN” privilege; otherwise, they won’t be able to connect to it. Following our initial example, I have granted “SERVICE_CONNECTION_ADMIN” to user “admin1” but not to user “admin2”.
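
The grant itself is a single statement (a sketch; adjust the account’s host part to your environment):

mysql> GRANT SERVICE_CONNECTION_ADMIN ON *.* TO 'admin1'@'%';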

mysql> show grants for admin1;
+------------------------------------------------------------------------+
| Grants for admin1@%                                                    |
+------------------------------------------------------------------------+
| GRANT PROCESS ON *.* TO `admin1`@`%`                                   |
| GRANT CONNECTION_ADMIN,SERVICE_CONNECTION_ADMIN ON *.* TO `admin1`@`%` |
+------------------------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> show grants for admin2;
+------------------------------------+
| Grants for admin2@%                |
+------------------------------------+
| GRANT SUPER ON *.* TO `admin2`@`%` |
+------------------------------------+
1 row in set (0.00 sec)

Testing connections to the admin interface, we see that only user “admin1” is allowed, while user “admin2” is rejected for lacking the “SERVICE_CONNECTION_ADMIN” privilege. We can also confirm that user “admin1” is connected to port 33062, the port used by the admin interface.

-- testing user admin1

[root@rocky-test1 ~]# mysql -h 127.0.0.1 -P 33062 -u admin1 -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 23
Server version: 8.0.29-21 Percona Server (GPL), Release 21, Revision c59f87d2854

Copyright (c) 2009-2022 Percona LLC and/or its affiliates
Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> \s
--------------
mysql  Ver 8.0.29-21 for Linux on x86_64 (Percona Server (GPL), Release 21, Revision c59f87d2854)

Connection id:		23
Current database:
Current user:		admin1@localhost
SSL:			Cipher in use is TLS_AES_256_GCM_SHA384
Current pager:		stdout
Using outfile:		''
Using delimiter:	;
Server version:		8.0.29-21 Percona Server (GPL), Release 21, Revision c59f87d2854
Protocol version:	10
Connection:		127.0.0.1 via TCP/IP
Server characterset:	utf8mb4
Db     characterset:	utf8mb4
Client characterset:	utf8mb4
Conn.  characterset:	utf8mb4
TCP port:		33062
Binary data as:		Hexadecimal
Uptime:			50 min 27 sec

Threads: 3  Questions: 188  Slow queries: 0  Opens: 335  Flush tables: 3  Open tables: 269  Queries per second avg: 0.062
--------------

-- testing user admin2

[root@rocky-test1 ~]# mysql -h 127.0.0.1 -P 33062 -u admin2 -p
Enter password:
ERROR 1227 (42000): Access denied; you need (at least one of) the SERVICE_CONNECTION_ADMIN privilege(s) for this operation

Conclusion

If you are using MySQL 8.0.14 or higher, you should enable the admin interface. As we have seen, enabling it is easy, and it gives DBAs a way into the database in the event of “ERROR 1040 (08004): Too many connections”. The feature does not affect normal database performance and brings great power to DBAs. Please grant the “SERVICE_CONNECTION_ADMIN” privilege only to administrative users, not application users; the idea is not to abuse this feature. If you are still using an older version of Percona Server for MySQL, remember you can configure the variables “extra_port” and “extra_max_connections” to access your database if you face a max connections issue.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open-source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

Try Percona Distribution for MySQL today!

Feb 25, 2019
--

MySQL Challenge: 100k Connections

In this post, I want to explore a way to establish 100,000 connections to MySQL. Not just idle connections, but executing queries.

100,000 connections. Is that really needed for MySQL, you may ask? Although it may seem excessive, I have seen a lot of different setups in customer deployments. Some deploy an application connection pool, with 100 application servers and 1,000 connections in each pool. Some applications use a “re-connect and repeat if the query is too slow” technique, which is a terrible practice. It can lead to a snowball effect, and could establish thousands of connections to MySQL in a matter of seconds.

So now I want to set an overachieving goal and see if we can achieve it.

Setup

For this I will use the following hardware:

  • Bare metal server provided by packet.net, instance size: c2.medium.x86
  • CPU: 1 x AMD EPYC 7401P (24 physical cores @ 2.2 GHz)
  • Memory: 64 GB of ECC RAM
  • Storage: INTEL® SSD DC S4500, 480 GB

This is a server grade SATA SSD.

I will use five of these boxes, for the reason explained below. One box for the MySQL server and four boxes for client connections.

For the server, I will use Percona Server for MySQL 8.0.13-4 with the thread pool feature, which is required to support thousands of connections.

Initial server setup

Network settings (Ansible format):

- { name: 'net.core.somaxconn', value: 32768 }
- { name: 'net.core.rmem_max', value: 134217728 }
- { name: 'net.core.wmem_max', value: 134217728 }
- { name: 'net.ipv4.tcp_rmem', value: '4096 87380 134217728' }
- { name: 'net.ipv4.tcp_wmem', value: '4096 87380 134217728' }
- { name: 'net.core.netdev_max_backlog', value: 300000 }
- { name: 'net.ipv4.tcp_moderate_rcvbuf', value: 1 }
- { name: 'net.ipv4.tcp_no_metrics_save', value: 1 }
- { name: 'net.ipv4.tcp_congestion_control', value: 'htcp' }
- { name: 'net.ipv4.tcp_mtu_probing', value: 1 }
- { name: 'net.ipv4.tcp_timestamps', value: 0 }
- { name: 'net.ipv4.tcp_sack', value: 0 }
- { name: 'net.ipv4.tcp_syncookies', value: 1 }
- { name: 'net.ipv4.tcp_max_syn_backlog', value: 4096 }
- { name: 'net.ipv4.tcp_mem', value: '50576   64768 98152' }
- { name: 'net.ipv4.ip_local_port_range', value: '4000 65000' }
- { name: 'net.ipv4.netdev_max_backlog', value: 2500 }
- { name: 'net.ipv4.tcp_tw_reuse', value: 1 }
- { name: 'net.ipv4.tcp_fin_timeout', value: 5 }

These are the typical settings recommended for 10Gb networks and high concurrent workloads.
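
If you are not managing the host with Ansible, the same values can be applied with plain sysctl (a sketch; persist them in a file under /etc/sysctl.d/ so they survive reboots):

# apply a single setting immediately
sysctl -w net.core.somaxconn=32768
# or load all of them from a file
sysctl -p /etc/sysctl.d/99-mysql-net.conf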

Limits settings for systemd:

[Service]
LimitNOFILE=1000000
LimitNPROC=500000
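
These lines belong in a systemd drop-in for the MySQL unit rather than in the unit file itself (a sketch; the unit name mysqld.service may differ on your distribution):

mkdir -p /etc/systemd/system/mysqld.service.d
printf '[Service]\nLimitNOFILE=1000000\nLimitNPROC=500000\n' > /etc/systemd/system/mysqld.service.d/limits.conf
systemctl daemon-reload && systemctl restart mysqld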

And the relevant setting for MySQL in my.cnf:

back_log=3500
max_connections=110000

For the client I will use sysbench version 0.5 and not 1.0.x, for the reasons explained below.

The workload is:

sysbench --test=sysbench/tests/db/select.lua --mysql-host=139.178.82.47 --mysql-user=sbtest --mysql-password=sbtest --oltp-tables-count=10 --report-interval=1 --num-threads=10000 --max-time=300 --max-requests=0 --oltp-table-size=10000000 --rand-type=uniform --rand-init=on run

Step 1. 10,000 connections

This one is very easy, as there is not much to do to achieve it; a single client can handle it. But you may face the following error on the client side:

FATAL: error 2004: Can't create TCP/IP socket (24)

This is caused by the open file limit, which also limits the number of TCP/IP sockets. It can be fixed by raising the limit on the client:

ulimit -n 100000

The performance we observe:

[  26s] threads: 10000, tps: 0.00, reads: 33367.48, writes: 0.00, response time: 3681.42ms (95%), errors: 0.00, reconnects:  0.00
[  27s] threads: 10000, tps: 0.00, reads: 33289.74, writes: 0.00, response time: 3690.25ms (95%), errors: 0.00, reconnects:  0.00

Step 2. 25,000 connections

With 25,000 connections, we hit an error on MySQL side:

Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug

If you try to look up information on this error, you might find the following article: https://www.percona.com/blog/2013/02/04/cant_create_thread_errno_11/

But it does not help in our case, as we have all limits set high enough:

cat /proc/`pidof mysqld`/limits
Limit                     Soft Limit Hard Limit           Units
Max cpu time              unlimited  unlimited            seconds
Max file size             unlimited  unlimited            bytes
Max data size             unlimited  unlimited            bytes
Max stack size            8388608    unlimited            bytes
Max core file size        0          unlimited            bytes
Max resident set          unlimited  unlimited            bytes
Max processes             500000     500000               processes
Max open files            1000000    1000000              files
Max locked memory         16777216   16777216             bytes
Max address space         unlimited  unlimited            bytes
Max file locks            unlimited  unlimited            locks
Max pending signals       255051     255051               signals
Max msgqueue size         819200     819200               bytes
Max nice priority         0          0
Max realtime priority     0          0
Max realtime timeout      unlimited unlimited            us

This is where we start using the thread pool feature:  https://www.percona.com/doc/percona-server/8.0/performance/threadpool.html

Add:

thread_handling=pool-of-threads

to my.cnf and restart Percona Server.

The results:

[   7s] threads: 25000, tps: 0.00, reads: 33332.57, writes: 0.00, response time: 974.56ms (95%), errors: 0.00, reconnects:  0.00
[   8s] threads: 25000, tps: 0.00, reads: 33187.01, writes: 0.00, response time: 979.24ms (95%), errors: 0.00, reconnects:  0.00

We have the same throughput, but actually the 95% response time has improved (thanks to the thread pool) from 3690 ms to 979 ms.

Step 3. 50,000 connections

This is where we encountered the biggest challenge. At first, trying to reach 50,000 connections with sysbench, we hit the following error:

FATAL: error 2003: Can't connect to MySQL server on '139.178.82.47' (99)

Error (99) is cryptic and it means: Cannot assign requested address.

It comes from the limit on the number of local ports an application can open. By default on my system it is:

cat /proc/sys/net/ipv4/ip_local_port_range
32768   60999

This says there are only 28,231 available ports — 60999 minus 32768 — or the limit of TCP connections you can establish from or to the given IP address.

You can extend this using a wider range, on both the client and the server:

echo 4000 65000 > /proc/sys/net/ipv4/ip_local_port_range

This will give us 61,000 connections, but this is very close to the limit for one IP address (the maximal port is 65535). The key takeaway is that if we want more connections, we need to allocate more IP addresses for the MySQL server. In order to achieve 100,000 connections, I will use two IP addresses on the server running MySQL.
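
Adding a secondary address is a one-liner (a sketch with a hypothetical address and interface; make it persistent in your network configuration so it survives reboots):

# hypothetical address/interface -- adjust to your network
ip addr add 192.0.2.10/24 dev eth0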

After sorting out the port ranges, we hit the following problem with sysbench:

sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 50000
FATAL: pthread_create() for thread #32352 failed. errno = 12 (Cannot allocate memory)

In this case, it’s a problem with sysbench’s memory allocation (namely, the Lua subsystem). Sysbench can allocate memory for only 32,351 connections. This problem is even more severe in sysbench 1.0.x.

Sysbench 1.0.x limitation

Sysbench 1.0.x uses a different Lua JIT, which hits memory problems even with 4,000 connections, so it is impossible to go over 4,000 connections in sysbench 1.0.x.

So it seems we hit a limit with sysbench sooner than with Percona Server. In order to use more connections, we need to use multiple sysbench clients, and if 32,351 connections is the limit for sysbench, we have to use at least four sysbench clients to get up to 100,000 connections.

For 50,000 connections, I will use two client servers, each running a separate sysbench with 25,000 threads.

The results for each sysbench look like:

[  29s] threads: 25000, tps: 0.00, reads: 16794.09, writes: 0.00, response time: 1799.63ms (95%), errors: 0.00, reconnects:  0.00
[  30s] threads: 25000, tps: 0.00, reads: 16491.03, writes: 0.00, response time: 1800.70ms (95%), errors: 0.00, reconnects:  0.00

So we have about the same throughput (16794*2 = 33588 tps in total); however, the 95% response time doubled. This is to be expected, as we are using twice as many connections as in the 25,000-connection benchmark.

Step 4. 75,000 connections

To achieve 75,000 connections we will use three servers with sysbench, each running 25,000 threads.

The results for each sysbench:

[ 157s] threads: 25000, tps: 0.00, reads: 11633.87, writes: 0.00, response time: 2651.76ms (95%), errors: 0.00, reconnects:  0.00
[ 158s] threads: 25000, tps: 0.00, reads: 10783.09, writes: 0.00, response time: 2601.44ms (95%), errors: 0.00, reconnects:  0.00

Step 5. 100,000 connections

There is nothing eventful about reaching 75k and 100k connections: we just spin up additional servers and start sysbench. For 100,000 connections, we need four sysbench servers, each of which shows:

[ 101s] threads: 25000, tps: 0.00, reads: 8033.83, writes: 0.00, response time: 3320.21ms (95%), errors: 0.00, reconnects:  0.00
[ 102s] threads: 25000, tps: 0.00, reads: 8065.02, writes: 0.00, response time: 3405.77ms (95%), errors: 0.00, reconnects:  0.00

So we have the same throughput (8065*4=32260 tps in total) with 3405ms 95% response time.

A very important takeaway from this: with 100k connections and using a thread pool, the 95% response time is even better than for 10k connections without a thread pool. The thread pool allows Percona Server to manage resources more efficiently and provides better response times.

Conclusions

100k connections is quite achievable for MySQL, and I am sure we could go even further. There are three components to achieve this:

  • Thread pool in Percona Server
  • Proper tuning of network limits
  • Using multiple IP addresses on the server box (one IP address per approximately 60k connections)

Appendix: full my.cnf

[mysqld]
datadir {{ mysqldir }}
ssl=0
skip-log-bin
log-error=error.log
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
character_set_server=latin1
collation_server=latin1_swedish_ci
skip-character-set-client-handshake
innodb_undo_log_truncate=off
# general
table_open_cache = 200000
table_open_cache_instances=64
back_log=3500
max_connections=110000
# files
innodb_file_per_table
innodb_log_file_size=15G
innodb_log_files_in_group=2
innodb_open_files=4000
# buffers
innodb_buffer_pool_size= 40G
innodb_buffer_pool_instances=8
innodb_log_buffer_size=64M
# tune
innodb_doublewrite= 1
innodb_thread_concurrency=0
innodb_flush_log_at_trx_commit= 0
innodb_flush_method=O_DIRECT_NO_FSYNC
innodb_max_dirty_pages_pct=90
innodb_max_dirty_pages_pct_lwm=10
innodb_lru_scan_depth=2048
innodb_page_cleaners=4
join_buffer_size=256K
sort_buffer_size=256K
innodb_use_native_aio=1
innodb_stats_persistent = 1
#innodb_spin_wait_delay=96
innodb_adaptive_flushing = 1
innodb_flush_neighbors = 0
innodb_read_io_threads = 16
innodb_write_io_threads = 16
innodb_io_capacity=1500
innodb_io_capacity_max=2500
innodb_purge_threads=4
innodb_adaptive_hash_index=0
max_prepared_stmt_count=1000000
innodb_monitor_enable = '%'
performance_schema = ON

Jun 27, 2018
--

Scaling PostgreSQL with PgBouncer: You May Need a Connection Pooler Sooner Than You Expect

As PostgreSQL-based applications scale, the need to implement connection pooling can become apparent sooner than you might expect. Since PostgreSQL to date has no built-in connection pool handler, in this post I’ll explore some of the options for implementing connection pooling. In doing so, we’ll take a look at some of the implications for application performance.

PostgreSQL implements connection handling by “forking” its main OS process into a child process for each new connection. An interesting practical consequence of this is that we get a full view of resource utilization per connection at the OS level (the output below is from top):

PID USER      PR NI VIRT RES  SHR S %CPU %MEM TIME+  COMMAND             
24379 postgres  20 0 346m 148m 122m R 61.7  7.4 0:46.36 postgres: sysbench sysbench ::1(40120)
24381 postgres  20 0 346m 143m 119m R 62.7  7.1 0:46.14 postgres: sysbench sysbench ::1(40124)
24380 postgres  20 0 338m 137m 121m R 57.7  6.8 0:46.04 postgres: sysbench sysbench ::1(40122)
24382 postgres  20 0 338m 129m 115m R 57.4  6.5 0:46.09 postgres: sysbench sysbench ::1(40126)

This extra visibility comes at some additional cost, though: it is more expensive—in terms of time and memory, mostly—to fork an OS process than it would be, for example, to spawn a new thread for an existing process. This might be irrelevant if the rate at which connections are opened and closed is low, but it becomes increasingly important to consider as this reality changes. That is possibly one of the reasons why the need for a connection pooling mechanism often manifests itself early in the scaling life of a PostgreSQL-based application.

Scaling connections

When an application server sends a connection request to a PostgreSQL database it will be received by the Postmaster process. Postmaster, observing the limit set by max_connections, will then fork itself, creating a new backend process to handle this new connection. This backend process will live until the connection is closed by the client or terminated by PostgreSQL itself.

If the application was conceived with the database in mind, it will make effective use of connections, reusing existing ones whenever possible and keeping idle connections from lying around for too long. Unfortunately, that is not always the case. Plus, there are legitimate situations where an application’s popularity or usage increases, which naturally increases the rate at which new connection requests arrive. There is a practical limit to the number of connections a server can manage at a given time. Beyond that limit, we start seeing contention in different areas, which in turn affects the server’s capacity to process requests and can lead to bigger problems.

Putting a cap on the number of connections that may exist at any given time helps somewhat. However, considering the time it takes to properly establish a new connection, if the rate at which new connection requests arrive continues to rise we may soon reach a scaling problem.

Connection pooling for PostgreSQL applications

What if we could instead recycle existing connections to serve new client requests, gaining time (by avoiding the creation and removal of yet another backend process each time), ultimately increasing transaction throughput, while also making better use of the available resources? The strategy that revolves around a cache (or pool) of connections that are kept open on the database server and re-used by different client requests is known as connection pooling.

Given there is no built-in handler for PostgreSQL, there are typically two ways of implementing such a mechanism:

  1. On the application side. Some frameworks like Ruby on Rails include their own, built-in connection pool mechanism. There are also libraries that extend database driver functionality to include connection pooling support, such as c3p0.
  2. As an external service, sitting between the application and the database server. The application will then connect to this external service instead, which will relay each request from the application to the database server through one of the connections it maintains in its pool.

An application connection pooler might provide better integration with the application, for example, when it comes to the use of prepared statements and the re-utilization of the cached result set. However, it may sometimes fall short in understanding the PostgreSQL protocol, which might result in things like failing to properly clear pre-cached memory. An external service, on the other hand, won’t provide such tight integration with a particular application but usually allows for greater customization and better maintenance of the pool. This often includes the ability to dynamically increase and decrease the number of connections it keeps cached according to the observed demand, while also respecting pre-set threshold marks. Probably the most popular connection pooler used with PostgreSQL is PgBouncer.

A simple test

In order to illustrate the impact a connection pooler might have on the performance of a PostgreSQL server, I took advantage of the recent tests we did with sysbench-tpcc on PostgreSQL and repeated them partially by making use of PgBouncer as a connection pooler.

When we first ran the tests our goal was to optimize PostgreSQL for sysbench-tpcc workload running with 56 concurrent clients (threads), with the server having the same amount of CPUs available. The goal this time was to vary the number of concurrent clients (56, 150, 300 and 600) to see how the server would cope with the scaling of connections.

Instead of running each round for 10 hours, however, I ran them for 30 minutes only. This might mask, or at least change, the effects of checkpointing and caching observed in our initial tests.

PgBouncer

I compiled the latest version of PgBouncer (1.8.1) following the instruction on GitHub and installed it in our test box, alongside PostgreSQL.

PgBouncer can be configured with three different types of pooling:

  • Session pooling: once the client gets one of the connections in the pool assigned it will keep it until it disconnects (or a timeout is reached).
  • Transaction pooling: once the client gets a connection from the pool, it keeps it to run a single transaction only.  After that, the connection is returned to the pool. If the client wants to run other transactions it has to wait until it gets another connection assigned to it.
  • Statement pooling: in this mode, PgBouncer will return a connection to its pool as soon as the first query is processed, which means multi-statement transactions would break in this mode.

I went with transaction pooling for my tests as the workload of sysbench-tpcc is composed of several short and single-statement transactions. Here’s the configuration file I used in full, which I named pgbouncer.ini:

[databases]
sbtest = host=127.0.0.1 port=5432 dbname=sbtest
[pgbouncer]
listen_port = 6543
listen_addr = 127.0.0.1
auth_type = md5
auth_file = users.txt
logfile = pgbouncer.log
pidfile = pgbouncer.pid
admin_users = postgres
pool_mode = transaction
default_pool_size=56
max_client_conn=600

Apart from pool_mode, the other variables that matter the most are (definitions below came from PgBouncer’s manual page):

  • default_pool_size: how many server connections to allow per user/database pair.
  • max_client_conn: maximum number of client connections allowed

The users.txt file specified by auth_file contains only a single line with the user and password used to connect to PostgreSQL; more elaborate authentication methods are also supported.
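
The format of that file is simply the quoted user name followed by the quoted password, one pair per line (a sketch with placeholder credentials):

"postgres" "******"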

Running the test

I started PgBouncer as a daemon with the following command:

$ pgbouncer -d pgbouncer.ini

Apart from running the benchmark for only 30 minutes and varying the number of concurrent threads each time, I employed the exact same options for sysbench-tpcc we used in our previous tests. The example below is from the first run with threads=56:

$ ./tpcc.lua --pgsql-user=postgres --pgsql-db=sbtest --time=1800 --threads=56 --report-interval=1 --tables=10 --scale=100 --use_fk=0  --trx_level=RC --pgsql-password=****** --db-driver=pgsql run > /var/lib/postgresql/Nando/56t.txt

For the tests using the connection pooler I adapted the connection options so as to connect with PgBouncer instead of PostgreSQL directly. Note it remains a local connection:

./tpcc.lua --pgsql-user=postgres --pgsql-db=sbtest --time=1800 --threads=56 --report-interval=1 --tables=10 --scale=100 --use_fk=0  --trx_level=RC --pgsql-password=****** --pgsql-port=6543 --db-driver=pgsql run > /var/lib/postgresql/Nando/P056t.txt

After each sysbench-tpcc execution I cleared the OS cache with the following command:

$ sudo sh -c 'echo 3 >/proc/sys/vm/drop_caches'

It is not like this will have made much of a difference, though. As discussed in our previous post, shared_buffers was set to 75% of RAM, enough to fit all of the “hot data” in memory.

Results

Without further ado here are the results I obtained:

TPS from sysbench-tpcc: comparing direct connection to PostgreSQL and the use of PgBouncer as a connection pooler

When running sysbench-tpcc with only 56 concurrent clients the use of direct connections to PostgreSQL provided a throughput (TPS stands for transactions per second) 2.5 times higher than that obtained when using PgBouncer. The use of a connection pooler in this case was extremely detrimental to performance. At such small scale there were no gains obtained from a pool of connections, only overhead.

When running the benchmark with 150 concurrent clients, however, we start seeing the benefits of employing a connection pooler. In fact, such benefits most probably materialize much earlier; in hindsight, going from 56 concurrent clients to 150 was too big an initial jump.

It was somewhat surprising to me to realize at first that such throughput could be sustained with PgBouncer even when doubling and then quadrupling the number of concurrent clients. What happens, in this case, is that instead of flooding PostgreSQL with that many requests at once, they all stop at PgBouncer’s door. PgBouncer only allows the next request to proceed to PostgreSQL once one of the connections in its pool is freed. Remember, I configured it in transaction pooling mode. The process is transparent to PostgreSQL. It has no idea how many requests are waiting at PgBouncer’s door to be processed (and thus it is spared the trouble and doesn’t freak out!). It’s effectively like outsourcing connection management, beyond the ones used by the pool itself, to a contractor. PgBouncer does this hard work so PostgreSQL doesn’t need to.

This strategy seems to work great for sysbench-tpcc. With other workloads, the balance point might lie elsewhere.

Experimenting with bigger and smaller connection pools

For the tests above I set default_pool_size on PgBouncer equal to the number of CPU cores available on this server (56). To explore the tuning of this parameter, I repeated these tests using bigger connection pools (150, 300, 600) as well as a smaller one (14). The following chart summarizes the results obtained:

How the use of PgBouncer impacts throughput on sysbench-tpcc: comparing the use of different pool sizes

Using a much smaller connection pool (14), sized to ¼ of the number of available CPUs, still yielded a result almost as good. That says a lot about how much simply delegating connection handling to PgBouncer already helps. Maybe some of PgBouncer’s “gatekeeper” qualities could be incorporated into PostgreSQL itself?

Doubling the number of connections in the pool didn’t make any practical difference. But as soon as we extrapolate that number to 600 (the maximum number of concurrent threads I tested), throughput becomes comparable to not using a connection pooler at all once the number of concurrent threads is greater than the number of available CPUs. That’s true even when running as many concurrent threads as there are connections available in the pool (600). There’s a practical limit to it, which clearly is on the PostgreSQL side. That’s expected; otherwise, the need for a connection pool wouldn’t be as important.

As a starting point, setting the connection pool size equal to the number of CPUs available in the server looks like a good idea. There may be a hard limit for the pool around 150 connections or so, after which further benefits aren’t realized. However, that’s only speculation. Further tests using both different hardware and workloads would be needed to investigate this properly.

Do I need a connection pooler?

The need to couple PostgreSQL with a connection pooler depends on a number of factors including:

  • The number of connections typically established with the server. Take into account that the number usually varies significantly during the day. Think about the average number per hour, during different hours in the day, and particularly when compared with peak time.
  • The effective throughput (TPS) produced by these connections
  • The number of CPUs available in view of the effective throughput
  • The nature of your workload:
    • Whether your application opens a new connection for each new request or tends to leave connections open for longer
    • Whether it is composed of various single-statement transactions (AUTOCOMMIT=ON style) or big and long transactions.

Unfortunately, I don’t have a formula for this. But now that you understand how a connection pooler such as PgBouncer works and where it tends to benefit PostgreSQL’s performance (even with sysbench-tpcc’s workload as the sole example here), it’s a matter of investigating the points above and experimenting. Experimenting is the key! Though possibly using reads only on a standby replica at first, to stay on the safe side…

In a future post, we will cover how the architecture of a database solution changes with the use of connection poolers: where to place them, how to cope with single point of failure issues and how they can be used alongside load balancers.

Sep 19, 2017
--

ProxySQL Improves MySQL SSL Connections

In this blog post, we’ll look at how ProxySQL improves MySQL SSL connection performance.

When deploying MySQL with SSL, the main concern is that the initial handshake causes significant overhead if you are not using connection pools (i.e., mysqlnd-mux with PHP, mysql.connector.pooling in Python, etc.). Closing and making new connections over and over can greatly impact your total query response time. A customer and colleague recently educated me that although you can improve SSL encryption/decryption performance with the AES-NI hardware extension on modern Intel processors, the real overhead when creating SSL connections comes from the handshake, when multiple round trips between the server and client are needed.

With ProxySQL’s support for SSL on its backend connections and connection pooling, we can have it sit in front of any application, on the same server.

With this setup, ProxySQL runs on the same server as the application, which connects to it through a local socket. MySQL data does not need to go through the TCP stream unsecured.
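
Enabling SSL towards a backend is done from ProxySQL’s admin interface by flipping the use_ssl flag on the server entry (a sketch; the hostname and port here match the test server used below):

UPDATE mysql_servers SET use_ssl=1 WHERE hostname='192.168.56.110' AND port=3306;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;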

To quickly verify how this performs, I used a PHP script that simply creates 10k connections in a single thread, as fast as it can:

<?php
$i = 10000;
$user = 'percona';
$pass = 'percona';
while($i>=0) {
	$mysqli = mysqli_init();
	// Use SSL
	//$link = mysqli_real_connect($mysqli, "192.168.56.110", $user, $pass, "", 3306, "", MYSQL_CLIENT_SSL)
	// No SSL
	//$link = mysqli_real_connect($mysqli, "192.168.56.110", $user, $pass, "", 3306 )
	// OpenVPN
	//$link = mysqli_real_connect($mysqli, "10.8.99.1",      $user, $pass, "", 3306 )
	// ProxySQL
	$link = mysqli_real_connect($mysqli, "localhost",      $user, $pass, "", 6033, "/tmp/proxysql.sock")
		or die(mysqli_connect_error());
	$info = mysqli_get_host_info($mysqli);
	$i--;
	mysqli_close($mysqli);
	unset($mysqli);
}
?>

Direct connection to MySQL, no SSL:

[root@ad ~]# time php php-test.php
real 0m20.417s
user 0m0.201s
sys 0m3.396s

Direct connection to MySQL with SSL:

[root@ad ~]# time php php-test.php
real	1m19.922s
user	0m29.933s
sys	0m9.550s

Direct connection to MySQL, no SSL, with OpenVPN tunnel:

[root@ad ~]# time php php-test.php
real 0m15.161s
user 0m0.493s
sys 0m0.803s

Now, using ProxySQL via the local socket file:

[root@ad ~]# time php php-test.php
real	0m2.791s
user	0m0.402s
sys	0m0.436s

As these numbers show, the performance overhead of SSL versus no SSL is about 400% – pretty bad for some workloads.

Connections through OpenVPN even beat direct MySQL connections without SSL. While this is interesting, the OpenVPN server needs to be deployed on another server, separate from the MySQL server and application. This approach allows the application servers and MySQL servers (including replica/cluster nodes) to communicate on the same secured network, but creates a single point of failure. Alternatively, you can deploy OpenVPN on the MySQL server itself, but if you have an additional high availability layer in place, it gets quite complicated when a new master is promoted. In short, OpenVPN adds many additional moving parts.

The beauty of ProxySQL is that you can run it on all application servers, and it works fine if you simply point it to a VIP that directs it to the correct MySQL server (master), or use the replication hostgroup feature to identify the authoritative master.

Lastly, it is important to note that these tests were done on CentOS 7.3 with OpenSSL 1.0.1e, Percona Server for MySQL 5.7.19, ProxySQL 1.4.1, PHP 5.4 and OpenVPN 2.4.3.

Happy ProxySQLing!

Jun 27, 2017
--

SSL Connections in MySQL 5.7

This blog post looks at SSL connections and how they work in MySQL 5.7.

Recently I was working on an SSL implementation with MySQL 5.7, and I made some interesting discoveries. I realized I could connect to the MySQL server without specifying the SSL keys on the client side, and the connection was still secured by SSL. I was confused; I did not understand what was happening.

In this blog post, I am going to show you why SSL works this way in MySQL 5.7 and how it worked previously in MySQL 5.6.

Let’s start with an introduction to how SSL worked in 5.6.

SSL in MySQL 5.6

The documentation for SSL in MySQL 5.6 is quite detailed, and it explains how SSL works. But first let’s make one thing clear: MySQL supports secure (encrypted) connections between clients and the server using the TLS (Transport Layer Security) protocol. TLS is sometimes referred to as SSL (Secure Sockets Layer), but MySQL doesn’t actually use the SSL protocol for secure connections because it provides weak encryption.

So when we/someone says MySQL is using SSL, it really means that it is using TLS. You can check which protocol you use:

show status like 'Ssl_version';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Ssl_version   | TLSv1.2 |
+---------------+---------+

So how does it work? Let me quote a few lines from the MySQL documentation:

TLS uses encryption algorithms to ensure that data received over a public network can be trusted. It has mechanisms to detect data change, loss, or replay. TLS also incorporates algorithms that provide identity verification using the X509 standard. X509 makes it possible to identify someone on the Internet. In basic terms, there should be some entity called a “Certificate Authority” (or CA) that assigns electronic certificates to anyone who needs them. Certificates rely on asymmetric encryption algorithms that have two encryption keys (a public key and a secret key). A certificate owner can present the certificate to another party as proof of identity. A certificate consists of its owner’s public key. Any data encrypted using this public key can be decrypted only using the corresponding secret key, which is held by the owner of the certificate.

It works with key pairs (private and public): the server has the private keys and the client has the public keys. The MySQL documentation shows how to generate these keys.

MySQL 5.6 Client

When the server is configured with SSL, the client has to have the client certificates. Once it gets those, it can connect to the server using SSL:

mysql -utestuser -p -h192.168.0.1 -P3306 --ssl-key=/root/mysql-ssl/client-key.pem --ssl-cert=/root/mysql-ssl/client-cert.pem

We have to specify the key and the certificate. Otherwise, we cannot connect to the server with SSL. So far so good, everything works as the documentation says. But how does it work in MySQL 5.7?

SSL in MySQL 5.7

The documentation was very confusing to me and did not help much. It described how to create the SSL keys the same way (and even the same server and client configuration) as in MySQL 5.6. So if everything is the same, why could I connect without specifying the client keys? I did not have an answer for that. One of my colleagues (Alexey Poritskiy) found it after digging through the manuals for a while; the following passage finally describes this behavior clearly:

As of MySQL 5.7.3, a client need specify only the --ssl option to obtain an encrypted connection. The connection attempt fails if an encrypted connection cannot be established. Before MySQL 5.7.3, the client must specify either the --ssl-ca option, or all three of the --ssl-ca, --ssl-key, and --ssl-cert options.

These lines are in the “CREATE USER” syntax manual.

After this, I re-read the SSL documentation and found the following:

5.7.3: On the client side, an explicit --ssl option is no longer advisory but prescriptive. Given a server enabled to support secure connections, a client program can require a secure connection by specifying only the --ssl option. The connection attempt fails if a secure connection cannot be established. Other --ssl-xxx options on the client side mean that a secure connection is advisory (the connection attempt falls back to an unencrypted connection if a secure connection cannot be established).

I still think the SSL manual could be more expressive and detailed.

Before MySQL 5.7.3, it used key pairs. After that, it works a bit like websites do with HTTPS: the client does not need the keys, and the connection can still be secure. (I would still like to see more detailed documentation on how it really works.)
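
In practice, a 5.7 client connection needs nothing beyond requesting SSL itself (a sketch; --ssl-mode=REQUIRED replaces the deprecated --ssl option in clients 5.7.11 and later):

mysql -utestuser -p -h192.168.0.1 -P3306 --ssl-mode=REQUIRED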

Is This Good for Us?

In my opinion, this is a really good feature, but it didn’t get a lot of publicity. Prior to 5.7.3, if we wanted to use SSL we had to create the keys and use the client keys in every client application. It’s doable, but it is just a pain — especially if we are using different keys for every server with many users.

With this feature we can have a different key for every server, but on the client side we only have to enable the SSL connection. Most of the clients use it as the default if SSL is available.

I am pretty sure if people knew about this feature they would use it more often.

Limitations

I tested it with many client applications, and all worked without specifying the client keys. I only had issues with the MySQL 5.6 client, even though the server was 5.7. In that case, I had to specify the client keys. Without them, I couldn’t connect to the server.

It is possible some older applications won’t support this feature.

Conclusion

This is a great feature: easy to use, and it makes SSL implementations much easier. But I think the documentation should be clearer and more precise in describing how this new feature actually works. I have already submitted a request to update the documentation.
