Oct 30, 2017

Percona XtraDB Cluster 5.7.19-29.22-3 is now available


Percona announces the release of Percona XtraDB Cluster 5.7.19-29.22-3 on October 27, 2017. Binaries are available from the downloads section or our software repositories.

NOTE: You can also run Docker containers from the images in the Docker Hub repository.

Percona XtraDB Cluster 5.7.19-29.22-3 is now the current release.

All Percona software is open-source and free.

Fixed Bugs

  • Added access checks for DDL commands to make sure they do not get replicated if they fail due to missing permissions. Previously, when a user tried to perform certain DDL actions that failed locally due to lack of privileges, the command could still be replicated to other nodes, because access checks were performed after replication. This vulnerability is identified as CVE-2017-15365.
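
As an illustration of the failure mode (the user, table and error text below are a hypothetical reconstruction, not output from an affected build), a DDL statement denied locally could previously still take effect on the rest of the cluster:

    -- On node1, as a user without the DROP privilege on test.t1:
    mysql> DROP TABLE test.t1;
    ERROR 1142 (42000): DROP command denied to user 'app'@'localhost' for table 't1'
    -- Before the fix, the statement could nevertheless replicate and drop test.t1
    -- on the other nodes; with the fix, a denied statement is never replicated.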

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Oct 30, 2017

Percona XtraDB Cluster 5.6.37-26.21-3 is Now Available


Percona announces the release of Percona XtraDB Cluster 5.6.37-26.21-3 on October 27, 2017. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.37-26.21-3 is now the current release.

All Percona software is open-source and free.

Fixed Bugs

  • Added access checks for DDL commands to make sure they do not get replicated if they fail due to missing permissions. Previously, when a user tried to perform certain DDL actions that failed locally due to lack of privileges, the command could still be replicated to other nodes, because access checks were performed after replication. This vulnerability is identified as CVE-2017-15365.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Sep 22, 2017

Percona XtraDB Cluster 5.7.19-29.22 is now available


Percona announces the release of Percona XtraDB Cluster 5.7.19-29.22 on September 22, 2017. Binaries are available from the downloads section or our software repositories.

NOTE: You can also run Docker containers from the images in the Docker Hub repository.

Percona XtraDB Cluster 5.7.19-29.22 is now the current release.

All Percona software is open-source and free.

Upgrade Instructions

After you upgrade each node to Percona XtraDB Cluster 5.7.19-29.22, run the following command on one of the nodes:

$ mysql -uroot -p < /usr/share/mysql/pxc_cluster_view.sql

Then restart all nodes, one at a time:

$ sudo service mysql restart

New Features

  • Introduced the pxc_cluster_view table to get a unified view of the cluster. This table is exposed through the performance schema.

    mysql> select * from pxc_cluster_view;
    +-----------+--------------------------------------+--------+-------------+---------+
    | HOST_NAME | UUID                                 | STATUS | LOCAL_INDEX | SEGMENT |
    +-----------+--------------------------------------+--------+-------------+---------+
    | n1        | b25bfd59-93ad-11e7-99c7-7b26c63037a2 | DONOR  | 0           | 0       |
    | n2        | be7eae92-93ad-11e7-88d8-92f8234d6ce2 | JOINER | 1           | 0       |
    +-----------+--------------------------------------+--------+-------------+---------+
    2 rows in set (0.01 sec)
  • PXC-803: Added support for new features in Percona XtraBackup 2.4.7 (a configuration sketch follows this list):

    • wsrep_debug enables debug logging
    • encrypt_threads specifies the number of threads that XtraBackup should use for encrypting data (when encrypt=1). This value is passed using the --encrypt-threads option in XtraBackup.
    • backup_threads specifies the number of threads that XtraBackup should use to create backups. See the --parallel option in XtraBackup.
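
A minimal sketch of how these options might appear in my.cnf (the values are illustrative; the SST-side options follow the [sst] section convention that wsrep_sst_xtrabackup-v2 reads):

    [mysqld]
    wsrep_debug=1        # debug logging, also honored by the SST script

    [sst]
    encrypt_threads=4    # passed to XtraBackup as --encrypt-threads (with encrypt=1)
    backup_threads=4     # passed to XtraBackup as --parallel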

Improvements

  • PXC-835: Limited wsrep_node_name to 64 bytes.
  • PXC-846: Improved logging to report the reason for IST failure.
  • PXC-851: Added version compatibility check during SST with XtraBackup:
    • If a donor is 5.6 and a joiner is 5.7: A warning is printed to perform mysql_upgrade.
    • If a donor is 5.7 and a joiner is 5.6: An error is printed and SST is rejected.

Fixed Bugs

  • PXC-825: Fixed script for SST with XtraBackup (wsrep_sst_xtrabackup-v2) to include the --defaults-group-suffix when logging to syslog. For more information, see #1559498.
  • PXC-826: Fixed multi-source replication to PXC node slave. For more information, see #1676464.
  • PXC-827: Fixed handling of different binlog names between donor and joiner nodes when GTID is enabled. For more information, see #1690398.
  • PXC-830: Rejected the RESET MASTER operation when wsrep provider is enabled and gtid_mode is set to ON. For more information, see #1249284.
  • PXC-833: Fixed connection failure handling during SST by making the donor retry connection to joiner every second for a maximum of 30 retries. For more information, see #1696273.
  • PXC-839: Fixed GTID inconsistency when setting gtid_next.
  • PXC-840: Fixed typo in alias for systemd configuration.
  • PXC-841: Added check to avoid replication of DDL if sql_log_bin is disabled. For more information, see #1706820.
  • PXC-842: Fixed deadlocks during Load Data Infile (LDI) with log-bin disabled by ensuring that a new transaction (of 10,000 rows) starts only after the previous one is committed by both wsrep and InnoDB. For more information, see #1706514.
  • PXC-843: Fixed situation where the joiner hangs after SST has failed by dropping all transactions in the receive queue. For more information, see #1707633.
  • PXC-853: Fixed cluster recovery by enabling wsrep_ready whenever nodes become PRIMARY.
  • PXC-862: Fixed script for SST with XtraBackup (wsrep_sst_xtrabackup-v2) to use the ssl-dhparams value from the configuration file.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Sep 20, 2017

Percona XtraDB Cluster 5.6.37-26.21 is Now Available


Percona announces the release of Percona XtraDB Cluster 5.6.37-26.21 on September 20, 2017. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.37-26.21 is now the current release.

All Percona software is open-source and free.

Improvements

  • PXC-851: Added version compatibility check during SST with XtraBackup:
    • If donor is 5.6 and joiner is 5.7: A warning is printed to perform mysql_upgrade.
    • If donor is 5.7 and joiner is 5.6: An error is printed and SST is rejected.

Fixed Bugs

  • PXC-825: Fixed script for SST with XtraBackup (wsrep_sst_xtrabackup-v2) to include the --defaults-group-suffix when logging to syslog. For more information, see #1559498.
  • PXC-827: Fixed handling of different binlog names between donor and joiner nodes when GTID is enabled. For more information, see #1690398.
  • PXC-830: Rejected the RESET MASTER operation when wsrep provider is enabled and gtid_mode is set to ON. For more information, see #1249284.
  • PXC-833: Fixed connection failure handling during SST by making the donor retry connection to joiner every second for a maximum of 30 retries. For more information, see #1696273.
  • PXC-841: Added check to avoid replication of DDL if sql_log_bin is disabled. For more information, see #1706820.
  • PXC-853: Fixed cluster recovery by enabling wsrep_ready whenever nodes become PRIMARY.
  • PXC-862: Fixed script for SST with XtraBackup (wsrep_sst_xtrabackup-v2) to use the ssl-dhparams value from the configuration file.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Sep 19, 2017

ProxySQL Improves MySQL SSL Connections

In this blog post, we’ll look at how ProxySQL improves MySQL SSL connection performance.

When deploying MySQL with SSL, the main concern is that the initial handshake causes significant overhead if you are not using connection pools (i.e., mysqlnd-mux with PHP, mysql.connector.pooling in Python, etc.). Closing and making new connections over and over can greatly impact your total query response time. A customer and colleague recently educated me that although you can improve SSL encryption/decryption performance with the AES-NI hardware extension on modern Intel processors, the real overhead when creating SSL connections comes from the handshake, which requires multiple round trips between the server and client.

With ProxySQL’s support for SSL on its backend connections and connection pooling, we can have it sit in front of any application, on the same server (illustrated below):

[Image: ProxySQL]

With this setup, ProxySQL runs on the same server as the application, and the application connects to it through a local socket. MySQL data does not need to go over the TCP stream unsecured.

To quickly verify how this performs, I used a PHP script that simply creates 10k connections in a single thread, as fast as it can:

<?php
// Open and close 10,000 connections, one after another.
$i = 10000;
$user = 'percona';
$pass = 'percona';
while ($i > 0) {
	$mysqli = mysqli_init();
	// Use SSL
	//$link = mysqli_real_connect($mysqli, "192.168.56.110", $user, $pass, "", 3306, "", MYSQLI_CLIENT_SSL)
	// No SSL
	//$link = mysqli_real_connect($mysqli, "192.168.56.110", $user, $pass, "", 3306)
	// OpenVPN
	//$link = mysqli_real_connect($mysqli, "10.8.99.1",      $user, $pass, "", 3306)
	// ProxySQL via local socket
	$link = mysqli_real_connect($mysqli, "localhost",      $user, $pass, "", 6033, "/tmp/proxysql.sock")
		or die(mysqli_connect_error());
	$info = mysqli_get_host_info($mysqli);
	$i--;
	mysqli_close($mysqli);
	unset($mysqli);
}
?>

Direct connection to MySQL, no SSL:

[root@ad ~]# time php php-test.php
real 0m20.417s
user 0m0.201s
sys 0m3.396s

Direct connection to MySQL with SSL:

[root@ad ~]# time php php-test.php
real	1m19.922s
user	0m29.933s
sys	0m9.550s

Direct connection to MySQL, no SSL, with OpenVPN tunnel:

[root@ad ~]# time php php-test.php
real 0m15.161s
user 0m0.493s
sys 0m0.803s

Now, using ProxySQL via the local socket file:

[root@ad ~]# time php php-test.php
real	0m2.791s
user	0m0.402s
sys	0m0.436s

Below is a graph of these numbers:

[Image: ProxySQL]

As you can see, the performance overhead of SSL versus no SSL is about 400% (79.9s versus 20.4s) – pretty bad for some workloads.

Connections through OpenVPN are even faster than direct MySQL connections without SSL. While this is interesting, the OpenVPN server needs to be deployed on another server, separate from the MySQL server and application. This approach allows the application servers and MySQL servers (including replica/cluster nodes) to communicate on the same secured network, but creates a single point of failure. Alternatively, you could deploy OpenVPN on the MySQL server itself, but if you have an additional high availability layer in place, things get quite complicated when a new master is promoted. In short, OpenVPN adds many additional moving parts.

The beauty of ProxySQL is that you can just run it from all application servers, and it works fine if you simply point it to a VIP that directs it to the correct MySQL server (master), or use the replication hostgroup feature to identify the authoritative master.

Lastly, it is important to note that these tests were done on CentOS 7.3 with OpenSSL 1.0.1e, Percona Server for MySQL 5.7.19, ProxySQL 1.4.1, PHP 5.4 and OpenVPN 2.4.3.

Happy ProxySQLing!

Sep 15, 2017

The MySQL High Availability Landscape in 2017 (the Babies)


This post is the third of a series focusing on the MySQL high availability solutions available in 2017.

The first post looked at the elders, the technologies that have been around for more than ten years. The second post talked about the adults, the more recent and mature technologies. In this post, we will look at the emerging MySQL high availability solutions. The “baby” MySQL high availability solutions I chose for the blog are group replication, proxies and distributed storage.

Group replication

Group replication is the Oracle response to Galera. The term “InnoDB cluster” means a cluster using group replication. The goal is to offer similar functionality, especially the almost-synchronous replication.

At first glance, the group replication implementation appears to be rather elegant. The basis is the GTID replication mode. The nodes of an InnoDB cluster share a single UUID sequence. To control the replication lag, Oracle added a flow control layer. While Galera requires unanimity, group replication only requires a majority. The majority protocol in use is derived from Paxos. A majority protocol makes the cluster more resilient to a slow node.

As with Galera, adding flow control means adding queues. Group replication has two: one queue for the certification process and one queue for the appliers. What is interesting in the Oracle approach is the presence of a throttling mechanism. When a node requests flow control, instead of halting the processing of new transactions as Galera does, group replication throttles the transaction rate. That can help to meet strict timing SLAs.
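
For reference, the thresholds on those two queues, and the flow control mode itself, are exposed as system variables in MySQL 5.7:

    mysql> SHOW VARIABLES LIKE 'group_replication_flow_control%';
    -- group_replication_flow_control_applier_threshold   -> applier queue
    -- group_replication_flow_control_certifier_threshold -> certification queue
    -- group_replication_flow_control_mode                -> QUOTA or DISABLED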

Because the group replication logic is fairly similar to Galera’s, it suffers from the same limitations: large transactions, latency and hot rows. Group replication is recent: the first GA version is 5.7.17, from December 2016. It is natural, then, that it has a number of sharp edges. I won’t expand too much here, but there are several good write-ups on the topic if you are interested. I am confident that over time group replication will get more polished. Some automation, like the Galera SST process, would also be welcome.

Given how recent the technology is, I know of no Percona customers using group replication in production.

Proxies

Intelligent proxies can be viewed as another type of upcoming MySQL high availability solution. It is not strictly a MySQL technology; in fact, this solution is more of a complement to the other solutions.

The principle is simple: you connect to a proxy, and the proxy directs you to a valid MySQL server. The proxy has to monitor the states of the back-end servers, and maybe even perform actions on them. Of course, the proxy layer must not become a single point of failure. There should be more than one proxy host for basic HA. If more than one proxy is used at the same time, they’ll have to agree on the state of the back-end servers. For example, on a cluster using MySQL async replication, if the proxies are not sending the write traffic to the same host, things will quickly become messy.

There are a few ways of achieving this. The simplest solution is an active-passive setup where only one proxy is active at a given time. You’ll need some kind of logic to determine if the proxy host is available or not. Typical choices use tools like keepalived or Pacemaker.
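
As a minimal sketch of the keepalived variant (the interface name, VIP and priority are hypothetical; the passive node would run the same configuration with a lower priority), the VIP simply follows whichever host still has a live proxysql process:

    vrrp_script chk_proxysql {
        script "pidof proxysql"    # healthy while a proxysql process exists
        interval 2
    }

    vrrp_instance VI_PROXYSQL {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100               # the higher priority wins the VIP
        virtual_ipaddress {
            192.168.56.100/24      # applications connect to this VIP
        }
        track_script {
            chk_proxysql
        }
    }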

A second option is to have the proxies agree on a deterministic way of identifying the writer node. For example, with a Galera-based cluster, the healthy back-end node with the lowest wsrep_local_index could be the writer node.
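
Each Galera node reports that index as a status variable, so every proxy can evaluate the same rule independently:

    mysql> SHOW STATUS LIKE 'wsrep_local_index';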

Finally, the proxies could talk to each other and coordinate. Such an approach is promising. It could allow a single proxy to perform the monitoring and inform its peers of the results. It would also allow coordinated actions on the cluster when a failure is detected.

Currently, there are a few options in terms of proxies:

  • ProxySQL: An open-source proxy that understands the MySQL protocol and can do R/W splitting, query caching, sharding, SQL firewalling, etc. A new alpha-level feature, mirroring, targets the inter-proxy communication need.
  • MaxScale: No longer fully open-source (BSL), but understands the MySQL protocol. Can do R/W splitting, sharding, binlog serving, SQL firewalling, etc.
  • MySQL Router: MySQL Router is an open-source proxy developed by Oracle for InnoDB Cluster (Group replication). It understands the MySQL protocol and also supports the new X protocol. It can do R/W splitting.
  • HAProxy: HAProxy is a popular open-source TCP-level proxy. It doesn’t understand the MySQL protocol. It needs helper scripts, responding to HTTP-type requests, to determine the nodes’ health (see the sketch after this list).
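
A common pattern for the HAProxy case (the addresses are hypothetical) is the clustercheck script, served over HTTP on port 9200 by xinetd on each database node, so that HAProxy can run an HTTP health check against a TCP back end:

    listen mysql-cluster
        bind 0.0.0.0:3306
        mode tcp
        option httpchk                                    # health = HTTP 200 from clustercheck
        server node1 192.168.56.101:3306 check port 9200
        server node2 192.168.56.102:3306 check port 9200 backup
        server node3 192.168.56.103:3306 check port 9200 backup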

In addition to these open-source proxies, there are two well-known commercial proxy-like solutions, Tungsten and ScaleArc. Both of these technologies are mature and are not “babies” in terms of age and traction. On top of these, there are also numerous hardware-based load balancer solutions.

The importance of proxies in MySQL high availability has led Percona to include ProxySQL in the latest releases of Percona XtraDB Cluster. In collaboration with the ProxySQL maintainer, René Cannaò, features have been added to make ProxySQL aware of the Percona XtraDB Cluster state.

Proxies are already often deployed in MySQL high availability solutions, though often only for load balancing type work. We are starting to see deployments that use proxies for more advanced things, like read/write splitting and sharding.

Distributed storage

[Image: Replication setup using distributed storage]

This MySQL high availability solution is a project I am interested in. It is fair to say it is more a “fetus” than a real “baby,” since I know nobody using it in production. You can see this solution as a shared storage approach on steroids.

The simplest solution requires a three-node Ceph cluster. The nodes also run MySQL and the datadir is a Ceph RBD block device. Data in Ceph is automatically replicated to multiple hosts. This built-in data replication is an important component of the solution. Also, Ceph RBD supports snapshots and clones. A clone is a copy of the whole data set that consumes only the data that changed (delta) in terms of storage. Our three MySQL servers will thus not use three full copies of the dataset, but only one full copy and two deltas. As time passes, the deltas grow. When they are too large, we can simply generate new snapshots and clones and be back to day one. The generation of a new snapshot and clone takes a few seconds, and doesn’t require stopping MySQL.
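
Sketching the mechanics with the stock rbd CLI (the pool and image names are hypothetical):

    # Point-in-time snapshot of the master's data image
    rbd snap create mysql/data@base
    # Clones require the snapshot to be protected first
    rbd snap protect mysql/data@base
    # Copy-on-write clones for the two other nodes: each stores only its delta
    rbd clone mysql/data@base mysql/node2-data
    rbd clone mysql/data@base mysql/node3-data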

The obvious use case for the distributed storage approach is a read-intensive workload on a very large dataset. The setup can handle a lot of writes. The higher the write load, the more frequently there will be a snapshot refresh. Keep in mind that refreshing a snapshot of a 10 TB data set takes barely more time than for a 1 GB data set.

For that purpose, I wrote an SST script for Percona XtraDB Cluster that works with Ceph. I blogged about it here. I also wrote a Ceph snapshot/clone backup script that can provision a slave from a master snapshot. I’ll blog about how to use this Ceph backup script in the near future.

Going further with distributed storage, multiple MySQL instances could use the same data pages. Ceph would be used as a distributed object store for InnoDB pages. This would make it possible to build an open-source, Aurora-like database. Coupled with Galera or group replication, you could have a highly available MySQL cluster sharing a single copy of the dataset.

I started to modify MySQL (actually, Percona Server for MySQL 5.7) to add support for Ceph/Rados. Rados is the object store protocol of Ceph. There is still a lot of effort needed to make it work. My primary job is not development, so progress is slow. My work can be found here. The source compiles well, but MySQL doesn’t fully start. I need to debug where things are going wrong.

Adding a feature to MySQL like that is an awesome way to learn the internals of MySQL. I would really appreciate any help if you are interested in this project.

Conclusion

Over the three articles in this series, we have covered the 2017 landscape of MySQL high availability solutions. The first focused on the old timers, “the elders”: replication, shared storage and NDB. The second article dealt with the solutions that are more recent and have good traction: Galera and RDS Aurora. The conclusion of the series is the current article, which looked at what may be coming in terms of MySQL high availability solutions.

The main goal of this series is to help with planning MySQL deployments in a highly available way. I hope it provides hints and pointers toward better and more efficient solutions.

Sep 1, 2017

How Life360 Used ProxySQL to Lower Its Database Load


In this blog post, we’ll look at how ProxySQL can lower database load by handling PINGs.

I’ve blogged before about one of our regular clients, Life360. One of the issues they recently had was the PING command taking about 30%-40% of total queries per second across their database infrastructure. This is a non-trivial amount and was easily tens of thousands of pings per second. This added a significant amount of latency to real queries.

The large number of pings was due to the use of PHP PDO with persistent connections. Persistence, or pooling, is necessary to reduce time spent on connecting, disconnecting and reconnecting.

Unfortunately, in PHP (and other) implementations, the driver checks whether the database is still alive by sending a PING before the actual command. Logic dictates that you could use the actual command as the ping: if it fails, it could return the same error the failed ping would have. Baron Schwartz has a lot to say about how unwise the use of a PING is within the drivers.
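
For reference, persistence is a one-line option in PDO (the DSN below is illustrative); it is the reuse of these pooled handles that triggers the liveness PING described above:

    <?php
    // Reuses a pooled handle when one is available; on reuse, the driver
    // first verifies that the handle is still alive (the PING in question).
    $pdo = new PDO('mysql:host=127.0.0.1;dbname=app', $user, $pass,
                   array(PDO::ATTR_PERSISTENT => true));
    ?>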

Barring rewriting PHP PDO, we thought up another solution: ProxySQL.

Before testing ProxySQL, we didn’t know how much gets forwarded to the actual hosts (including these COM_ADMIN commands). We wanted to run a quick PoC to discover the actual behavior of ProxySQL with respect to these commands. We had two hypotheses that we wanted to check:

  1. ProxySQL forwards everything including com_commands
  2. ProxySQL responds to the com_commands itself

In case ProxySQL forwarded everything, we set up a “decoy” MySQL instance to respond to the pings using ProxySQL query filtering. As it turned out, we found that ProxySQL quickly and silently replies to PINGs itself and doesn’t forward them to the underlying database backend. This is the case for other admin commands as well, as ProxySQL isn’t strictly a forwarding proxy (it is more of a reverse proxy).
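
A quick way to verify this yourself: as the post notes, pings show up as admin commands, counted under the backend’s Com_admin_commands status counter, so you can watch that counter before and after pointing the application at ProxySQL:

    mysql> SHOW GLOBAL STATUS LIKE 'Com_admin_commands';

If ProxySQL is absorbing the pings, the backend counter stops climbing even while the application keeps reconnecting.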

By placing ProxySQL on the application servers, Life360 was able to reduce QPS significantly. The other advantage of introducing ProxySQL is that it does connection pooling and multiplexing for you.

Here is the graph of com_ping on the day of the deployment:

[Image: ProxySQL]

Overall, we went from hundreds of millions of pings per day down to zero.

We can see that the vanilla install of ProxySQL also reduced active threads significantly:

[Image: ProxySQL]

This change has enabled Life360 to put off some of their scaling plans and provided other operational gains.

In conclusion, you can use ProxySQL as a simple (or advanced) firewall between your application and database. It consumes very few resources, but provides an immense performance gain.

While ProxySQL can be used as a more advanced firewall, these features are beyond the scope of this post. There are very specific ways to configure it as an advanced firewall. We plan to blog more on this soon.

Aug 31, 2017

Percona Live Europe Featured Talks: Orchestrating ProxySQL with Orchestrator and Consul with Avraham Apelbaum

Colin Charles

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Avraham Apelbaum, DBA and DevOps at Wix.com. His talk is titled Orchestrating ProxySQL with Orchestrator and Consul. The combination of ProxySQL and Orchestrator solves many problems, but still requires some manual labor when the configuration changes, such as when there is a network split (among other scenarios). In our conversation, we discussed using Consul to solve some of these issues:

Percona: How did you get into database technology? What do you love about it?

Avraham: On my first day as a soldier in a technology unit of the IDF, I received a HUGE Oracle 8 book and a very low-level design of a DB-based system. “You have one month,” they told me. I finished it all within ten days. Before that, I didn’t even know what a DB was. Today, I’m at Wix managing hundreds of databases that support 100M users!

Percona: You’re presenting a session called “Orchestrating ProxySQL with Orchestrator and Consul”. How do these technologies work together to help provide a high availability solution?

Avraham: ProxySQL is supposed to help you out with high availability (HA) and disaster recovery (DR) for MySQL servers, but it still requires some manual labor when the configuration changes – as a result of a network split, for example. Somehow all ProxySQL servers need to get the new MySQL cluster topology. So to automate all that, I added two more parts: a Consul KV store and a Consul template, which are responsible for updating ProxySQL on every architecture change in the MySQL cluster.
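
One way this glue can look (the paths, credentials and template here are hypothetical, not Wix’s actual configuration) is a consul-template stanza that rewrites the ProxySQL server table whenever the topology data in the KV store changes:

    template {
      source      = "/etc/consul-template/proxysql.sql.ctmpl"
      destination = "/etc/proxysql/topology.sql"
      # push the regenerated topology into ProxySQL's admin interface
      command     = "mysql -h127.0.0.1 -P6032 -uadmin -padmin < /etc/proxysql/topology.sql"
    }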

Percona: What is special about this combination of products that works better than other solutions? Is it right all the time, or does it depend on the workload?

Avraham: As DevOps, I prefer not to do anything manually. What’s more, no one wants to wake up in the middle of the night because any one of our DB servers can fail. Most everyone, I guess, will have more than one ProxySQL server in their system at some point, so this solution can help them use ProxySQL and Orchestrator.

Percona: What do you want attendees to take away from your session? Why should they attend?

Avraham: I am hoping to help people automate their HA and DR solutions. If, as a result of my talk, someone shaves even one minute off their downtime, I’ll be happy.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Avraham: In the DevOps and open source world, it’s all about sharing ideas. It was actually when I attended the talks by ProxySQL’s and Orchestrator’s creators that I thought of assembling it all to solve our own problem. So I am looking forward to sharing my idea with others, and getting input from the audience so that everyone can benefit.

Want to find out more about Avraham and orchestrating ProxySQL? Register for Percona Live Europe 2017, and see his talk Orchestrating ProxySQL with Orchestrator and Consul. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

Aug 25, 2017

This Week in Data with Colin Charles #3: More Percona Live Europe!

Colin Charles

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

We are five weeks out from the conference! The tutorials and the sessions have been released, and there’s an added bonus – you can now look at all this in a grid view: tutorials, day one and day two. Now that you can visualize what’s being offered, don’t forget to register.

If you want a discount code, feel free to email me at colin.charles@percona.com.

We have some exciting keynotes as well. Some highlights:

  1. MySQL as a Layered Service: How to Use ProxySQL to Control Traffic and Scale Out, given by René Cannaò, the creator of ProxySQL
  2. Why Open Sourcing Our Database Tooling was the Smart Decision, given by Shlomi Noach, creator of Orchestrator, many other tools, and developer at GitHub (so expect some talk about gh-ost)
  3. MyRocks at Facebook and a Roadmap, given by Yoshinori Matsunobu, shepherd of the MyRocks project at Facebook
  4. Real Time DNS Analytics at CloudFlare with ClickHouse, given by Tom Arnfeld
  5. Prometheus for Monitoring Metrics, given by Brian Brazil, core developer of Prometheus
  6. A Q&A session with Charity Majors and Laine Campbell on Database Reliability Engineering, their new upcoming book!

Let’s not forget the usual State of the Dolphin, an update from Oracle’s MySQL team (representative: Geir Høydalsvik), as well as a keynote by Peter Zaitsev (CEO, Percona) and Continuent. There will also be a couple of Percona customers keynoting, so expect information-packed fun mornings! You can see more details about the keynotes here: day one and day two.

Releases

  • Tarantool 1.7.5 stable. This is the first release in the 1.7 series to be marked stable, and it comes with its own Log Structured Merge Tree (LSM) engine called Vinyl, which they wrote after finding RocksDB insufficient for their needs. Slides: Vinyl: why we wrote our own write-optimized storage engine rather than chose RocksDB (and check out the video).
  • MariaDB Server 10.2.8. As per my previous column, this build merges TokuDB from Percona Server 5.6.36-82.1 (fixing some bugs). There is also a new InnoDB from MySQL 5.7.19 (the current GA release). Have you tried MariaDB Backup yet? There are some GIS compatibility fixes (i.e., to make it behave like MySQL 5.7). One thing that piqued my interest is that the CONNECT storage engine (typically used for ETL operations) now has beta support for the MONGO table type. No surprises, it’s meant to read MongoDB tables via the MongoDB C Driver API. Definitely something to try!

Link List

Upcoming Appearances

Percona’s website keeps track of community events, so check out where to listen to a Perconian speak. My upcoming appearances are:

  1. db tech show case Tokyo 2017 – 5-7 September 2017, Tokyo, Japan
  2. Open Source Summit North America – 11-14 September 2017, Los Angeles, CA, USA
  3. Percona Live Europe Dublin – 25-27 September 2017, Dublin, Ireland
  4. Velocity Europe – 17-20 October 2017, London, UK
  5. Open Source Summit Europe – 23-26 October 2017, Prague, Czech Republic

Feedback

Bill Bogasky (MariaDB Corporation) says that if you’re looking for commercial support for Riak now that Basho has gone under, you could get it from Erlang Solutions or TI Tokyo. See their announcement: Riak commercial support now available post-Basho. Thanks, Bill!

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Jul 20, 2017

Where Do I Put ProxySQL?


In this blog post, we’ll look at how to deploy ProxySQL.

ProxySQL is a high-performance proxy, currently for MySQL and its forks (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. It was created for DBAs by René Cannaò, as a means of solving complex replication topology issues. When bringing up ProxySQL with my clients, I always get questions about where it fits into the architecture. This post should clarify that.

Before continuing, you might want to know why you should use this software. The features that are of interest include:

  • MySQL firewall
  • Connection pooling
  • Shard lookup and automated routing
  • Ability to read/write split
  • Automatically switch to another master in case of active master failure
  • Query cache
  • Performance metrics
  • Other neat features!

Initial Configuration

In general, you install it on nodes that do not have a running MySQL database. You manage it via the MySQL command line on another port, usually 6032. Once it is started, the configuration in /etc is not used, and you do everything within the CLI. The backend database is actually SQLite, and the db file is stored in /var/lib/proxysql.
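
For example, on a fresh install (the stock admin credentials are admin/admin until you change them, and the admin user only accepts local connections):

    $ mysql -uadmin -padmin -h 127.0.0.1 -P 6032
    mysql> SELECT * FROM mysql_servers;   -- the running configuration lives in these tables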

There are many guides out there on initializing and installing it, so I won’t cover those details here. It can be as simple as:

apt-get install proxysql

ProxySQL Architecture

While most first think to install ProxySQL on a standalone node between the application and database, this has the potential to affect query performance due to the additional latency from network hops.

[Image: ProxySQL]

To have minimal impact on performance (and avoid the additional network hop), many recommend installing ProxySQL on the application servers. The application then connects to ProxySQL (acting as a MySQL server) on localhost, using a Unix domain socket and avoiding extra latency. ProxySQL then uses its routing rules to reach out and talk to the actual MySQL servers with its own connection pooling. The application doesn’t have any idea what happens beyond its connection to ProxySQL. A sketch of that wiring follows the image below.

[Image: ProxySQL]
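
As a minimal sketch of that wiring (the hostgroup and address are hypothetical), registering a backend and activating it from the admin interface looks like this:

    mysql> INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '192.168.56.110', 3306);
    mysql> LOAD MYSQL SERVERS TO RUNTIME;  -- apply immediately, no restart needed
    mysql> SAVE MYSQL SERVERS TO DISK;     -- persist to the SQLite file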

Reducing Your Network Attack Surface

Another consideration is reducing your network attack surface. This means attempting to control all of the possible vulnerabilities in your network’s hardware and software that are accessible to unauthenticated users.

Percona generally suggests that you put a ProxySQL instance on each application host, like in the second image above. This suggestion is certainly valid for reducing latency in your database environment (by limiting network jumps). But while this is good for performance, it can be bad for security.

Every instance must be able to talk to:

  • Every master
  • Every slave

As you can imagine, this is a security nightmare. With every instance, you have x many more connections spanning your network. That’s x many more connections an attacker might exploit.

Instead, it can be better to have one or more ProxySQL instances between your application and MySQL servers (like in the first image above). This provides a reasonable DMZ-type setup that prevents opening too many connections across the network.

That said, both architectures are valid production configurations – depending on your requirements.
