Mar 30, 2017

Performance Evaluation of SST Data Transfer: With Encryption (Part 2)


In this blog post, we’ll look at the performance of SST data transfer using encryption.

In my previous post, we reviewed SST data transfer in an unsecured environment. Now let’s take a closer look at a setup with encrypted network connections between the donor and joiner nodes.

The base setup is the same as the previous time:

  • Database server: Percona XtraDB Cluster 5.7 on the donor node
  • Database: sysbench database – 100 tables, 4M rows each (total ~122GB)
  • Network: donor/joiner hosts are connected with dedicated 10Gbit LAN
  • Hardware: donor/joiner hosts – boxes with 28 Cores+HT/RAM 256GB/Samsung SSD 850/Ubuntu 16.04

The setup details for the encryption aspects in our testing:

  • Cryptography libraries: openssl-1.0.2, openssl-1.1.0, libgcrypt-1.6.5 (for xbstream encryption)
  • CPU hardware acceleration for AES – AES-NI: enabled/disabled
  • Cipher suites: aes (default), aes128, aes256, chacha20 (openssl-1.1.0)

Several notes regarding the above aspects:

  • Cryptography libraries. Almost every current Linux distribution ships openssl-1.0.2, the previous stable version of the OpenSSL library. The latest stable version (1.1.0) has various performance/scalability fixes and also supports new ciphers that may notably improve throughput. However, it’s problematic to upgrade from 1.0.2 to 1.1.0, or even just to find openssl-1.1.0 packages for existing distributions, because replacing OpenSSL triggers an update/upgrade of a significant number of packages. So in order to use openssl-1.1.0, you will most likely need to build it from source. The same applies to socat – it will require some effort to build socat with openssl-1.1.0.
  • AES-NI. The Advanced Encryption Standard Instruction Set (AES-NI) is an extension to the x86 instruction set from Intel and AMD. Its purpose is to improve the performance of encryption and decryption operations using the Advanced Encryption Standard (AES), such as the AES128/AES256 ciphers. If your CPU supports AES-NI, there should be a BIOS option that allows you to enable/disable that feature. In Linux, you can check /proc/cpuinfo for the presence of the “aes” flag. If it’s present, then AES-NI is available and exposed to the OS. There is a way to check what acceleration ratio you can expect from it:
    # AES_NI disabled with OPENSSL_ia32cap
    OPENSSL_ia32cap="~0x200000200000000" openssl speed -elapsed -evp aes-128-gcm
    ...
    The 'numbers' are in 1000s of bytes per second processed.
    type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
    aes-128-gcm      57535.13k    65924.18k   164094.81k   175759.36k   178757.63k
    # AES_NI enabled
    openssl speed -elapsed -evp aes-128-gcm
    The 'numbers' are in 1000s of bytes per second processed.
    type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
    aes-128-gcm     254276.67k   620945.00k   826301.78k   906044.07k   923740.84k

    Our interest is in the very last column: ~178MB/s (without AES-NI) vs. ~923MB/s (with AES-NI).
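
    A quick way to perform the /proc/cpuinfo check mentioned above (a minimal sketch):

    # prints "aes" if the CPU exposes AES-NI to the OS
    grep -m1 -o -w aes /proc/cpuinfo || echo "AES-NI not available"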

  • Ciphers. In our testing for network encryption with socat+openssl 1.0.2/1.1.0, we used the following cipher suites:
    DEFAULT – used if you don’t specify a cipher/cipher string for the OpenSSL connection
    AES128 – suite with aes128 ciphers only
    AES256 – suite with aes256 ciphers only
    Additionally, for openssl-1.1.0, there is an extra cipher suite:
    CHACHA20 – cipher suites using the ChaCha20 algorithm
    In the case of xtrabackup, where internal encryption is based on libgcrypt, we use the AES128/AES256 ciphers from that library.
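
    To see which concrete suites a given cipher string selects with your local OpenSSL build, you can list them (a sketch; the output depends on the build):

    openssl ciphers -v 'AES128'
    openssl ciphers -v 'CHACHA20'    # openssl-1.1.0 only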
  • SST methods. Streaming database files from the donor to the joiner with the rsync protocol over an OpenSSL-encrypted connection:
    (donor) rsync | socat+ssl       socat+ssl | rsync (daemon mode) (joiner)

    The current approach of wsrep_sst_rsync.sh doesn’t allow you to use the rsync SST method with SSL. However, there is a project that tries to address the lack of SSL support for rsync method. The idea is to create a secure connection with socat and then use that connection as a tunnel to connect rsync between the joiner and donor hosts. In my testing, I used a similar approach.

    Also note that in the chart below there are results for two variants of rsync: “rsync” (the current approach) and “rsync_improved” (the improved one). I’ve explained the difference between them in my previous post.
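
    A minimal sketch of such a tunnel (host names, ports, certificate paths and the rsync daemon module name are assumptions, not the actual values used by the SST scripts):

    #joiner_host> accept the SSL connection and forward it to the local rsync daemon
    socat OPENSSL-LISTEN:4444,reuseaddr,fork,cert=joiner.pem,cafile=donor.crt TCP:localhost:873

    #donor_host> expose a local port that tunnels to the joiner over SSL, then rsync through it
    socat TCP-LISTEN:8873,reuseaddr,fork OPENSSL:joiner:4444,cert=donor.pem,cafile=joiner.crt &
    rsync -a /var/lib/mysql/ rsync://localhost:8873/datadir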

  • Backup data on the donor side and stream it to the joiner in xbstream format over an OpenSSL-encrypted connection

    (donor) xtrabackup | socat+ssl       socat+ssl | xbstream (joiner)

    In my testing for streaming over encrypted connections, I used the --parallel=4 option for xtrabackup. In my previous post, I showed that this is an important factor in getting the best time. There is also a way to pass the name of the cipher that socat will use for the OpenSSL connection to the wsrep_sst_xtrabackup-v2.sh script, with the sockopt option. For instance:

    [sst]
    inno-backup-opts="--parallel=4"
    sockopt=",cipher=AES128"
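
    With that configuration, the donor/joiner pair effectively runs something like the following (a sketch; host, port, certificate paths and the datadir location are illustrative):

    #joiner_host> accept the SSL stream and unpack it into the datadir
    socat OPENSSL-LISTEN:4444,reuseaddr,cert=joiner.pem,cafile=donor.crt - | xbstream -x -C /var/lib/mysql

    #donor_host> back up in parallel and stream over the encrypted connection
    xtrabackup --backup --stream=xbstream --parallel=4 --target-dir=./ | \
      socat - OPENSSL:joiner:4444,cert=donor.pem,cafile=joiner.crt,cipher=AES128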
  • Backup data on the donor side, encrypt it internally (with libgcrypt), stream the data to the joiner in xbstream format, and afterwards decrypt the files on the joiner

    (donor) xtrabackup | socat   socat | xbstream ; xtrabackup decrypt (joiner)

    The xtrabackup tool has a feature to encrypt data while performing a backup. The encryption is based on the libgcrypt library, and it’s possible to use the AES128 or AES256 ciphers. For encryption, it’s necessary to generate a key and then provide it to xtrabackup, which performs the encryption on the fly. You can also specify the number of threads that will encrypt data, along with the chunk size, to tune the encryption process.
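
    For illustration, the donor side might look like this (a sketch; the key handling, chunk size and host/port are illustrative):

    #donor_host> generate a key (32 base64 characters, suitable for AES256) and stream an encrypted backup
    KEY=$(openssl rand -base64 24)
    xtrabackup --backup --stream=xbstream --parallel=4 \
      --encrypt=AES256 --encrypt-key="$KEY" \
      --encrypt-threads=4 --encrypt-chunk-size=256K \
      --target-dir=./ | socat - TCP:joiner:4444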

    The current version of xtrabackup supports an efficient way to read, compress and encrypt data in parallel, and then write/stream it. On the receiving side, however, we can’t decompress/decrypt the stream on the fly. The stream first has to be received and written to disk with the xbstream tool, and only after that can you use xtrabackup with the --decrypt/--decompress modes to unpack the data. The inability to process data on the fly, and the need to save the stream to disk for later processing, has a notable impact on the stream time from the donor to the joiner. We plan to fix this issue, so that encryption+compression+streaming of data with xtrabackup happens without the need to write the stream to disk on the receiver side.
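
    The joiner side of this flow, as described above (a sketch; paths are illustrative, and $KEY must hold the same key that was used on the donor):

    #joiner_host> first write the encrypted stream to disk...
    socat TCP-LISTEN:4444,reuseaddr - | xbstream -x -C /var/lib/mysql

    #joiner_host> ...then decrypt it in an extra read/write pass
    xtrabackup --decrypt=AES256 --encrypt-key="$KEY" --target-dir=/var/lib/mysql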

    For my testing, in the case of xtrabackup with internal encryption, I didn’t use SSL encryption for socat.

Results:

[Chart: SST data transfer times for each method and cipher, with and without AES-NI]

Observations:

  • Transferring data with rsync is very inefficient, and the improved version is 2-2.5 times faster. You may also note that in the “no-aes-ni” case, the rsync_improved method shows the best time for the default/aes128/aes256 ciphers. The reason is that we perform both the data transfer in parallel (we spawn an rsync process for each file) and the encryption/decryption in parallel (socat forks extra processes for each stream). This approach allows us to compensate for the absence of hardware acceleration by using several CPU cores. In all other cases, we use only one CPU core for streaming data and encryption/decryption.
  • xtrabackup (with hardware-optimized crc32) shows the best time in all cases, except for the default/aes128/aes256 ciphers in “no-aes-ni” mode (where rsync_improved showed the best time). However, I would like to remind you that SST with rsync is a blocking operation: during the data transfer, the donor node becomes READ-ONLY. xtrabackup, on the other hand, uses backup locks and allows any operations on the donor node during SST.
  • On boxes without hardware acceleration (“no-aes-ni” mode), the chacha20 cipher allows you to perform the data transfer 2-3 times faster. It’s a very good replacement for the “aes” ciphers on such boxes. The problem is that chacha20 is available only in openssl-1.1.0, so in order to use it you will need custom builds of OpenSSL and socat on many distros.
  • Regarding xtrabackup with internal encryption (xtrabackup_enc): reading/encrypting and streaming the data is quite fast, especially with the latest libgcrypt library (1.7.x). The problem is decryption. As I explained above, right now we have to receive the stream and save the encrypted data to storage first, and then perform the extra step of reading/decrypting and saving the data back. That extra step consumes 2/3 of the total time. Improving the xbstream tool to perform stream decryption/decompression on the fly would yield very good results.

Testing Details

For the purposes of this testing, I’ve created a script, sst-bench.sh, that covers all the methods used in this post. You can use it to measure all of the above SST methods in your environment. In order to run the script, you have to adjust several environment variables at the beginning of the script: joiner ip, the datadir locations on the joiner and donor hosts, etc. After that, put the script on the “donor” and “joiner” hosts and run it as follows:

#joiner_host>
sst_bench.sh --mode=joiner --sst-mode=<tar|xbackup|rsync> --cipher=<DEFAULT|AES128|AES256|CHACHA20> --ssl=<0|1> --aesni=<0|1>
#donor_host>
sst_bench.sh --mode=donor --sst-mode=<tar|xbackup|rsync|rsync_improved> --cipher=<DEFAULT|AES128|AES256|CHACHA20> --ssl=<0|1> --aesni=<0|1>

Mar 30, 2017

The enterprise strikes back

You might have missed it amidst Snap’s noisy consumer debut, but enterprise-facing IPOs put points on the board during the first quarter of 2017, even more than their consumer-focused siblings. In fact, the group may be set to dominate the year’s offerings. For startups working to sell to large corporations, their investors and their tens of thousands of employees, it’s… Read More

Mar 30, 2017

Looker catches the fancy of CapitalG, Goldman and Geodesic with $81.5M Series D

 Business intelligence platform Looker is announcing an $81.5 million Series D today led by CapitalG. Rather than compete in segmented markets against visualization and data preparation startups, Looker wants to own the vertical of business intelligence. The company supports the adoption of enterprise machine learning by providing a source of clean and reliable data. Read More

Mar 30, 2017

HelloSign moves into digital workflow with new HelloWorks product

 HelloSign, founded in 2011, has been best known to this point as an e-signature company, but today it announced something a bit more substantial, a new product called HelloWorks, which allows the company to move into workflow, digitizing processes that involve complex forms. For many years now, HelloSign CEO Joseph Walla said, the way we moved paper processes into the digital realm was… Read More

Mar 29, 2017

Performance Evaluation of SST Data Transfer: Without Encryption (Part 1)


In this blog, we’ll look at evaluating the performance of an SST data transfer without encryption.

A State Snapshot Transfer (SST) operation is an important part of Percona XtraDB Cluster. It’s used to provision the joining node with all the necessary data. There are three methods of SST operation available: mysqldump, rsync, xtrabackup. The most advanced one – xtrabackup – is the default method for SST in Percona XtraDB Cluster.

We decided to evaluate the current state of xtrabackup, focusing on the process of transferring data between the donor and joiner nodes, to find out if there is any room for improvements or optimizations.

Taking into account that the security of the network connections used for Percona XtraDB Cluster deployment is one of the most important factors that affects SST performance, we will evaluate SST operations in two setups: without network encryption, and in a secure environment.

In this post, we will take a look at the setup without network encryption.

Setup:

  • database server: Percona XtraDB Cluster 5.7 on the donor node
  • database: sysbench database – 100 tables, 4M rows each (total ~122GB)
  • network: donor/joiner hosts are connected with dedicated 10Gbit LAN
  • hardware: donor/joiner hosts – boxes with 28 Cores+HT/RAM 256GB/Samsung SSD 850/Ubuntu 16.04

In our test, we will measure the amount of time it takes to stream all the necessary data from the donor to the joiner with the help of one of the SST methods.

Before testing, I measured read/write bandwidth limits of the attached SSD drives (with the help of sysbench/fileio): they are ~530-540MB/sec. That means that the best theoretical time to transfer all of our database files (122GB) is ~230sec.
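
This measurement can be reproduced with something like the following (a sketch using sysbench 1.0 syntax; file size and duration are illustrative):

# prepare test files, measure sequential read throughput, clean up
sysbench fileio --file-total-size=16G prepare
sysbench fileio --file-total-size=16G --file-test-mode=seqrd --file-block-size=1M --time=60 run
sysbench fileio --file-total-size=16G cleanup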

Schematic view of SST methods:

  • Streaming DB files from the donor to the joiner with tar
    (donor) tar | socat                         socat | tar (joiner)

    • tar is not really an SST method. It’s used here just to get some baseline numbers to understand how long it takes to transfer data without extra overhead.
  • Streaming DB files from the donor to the joiner with the rsync protocol
    (donor) rsync                               rsync(daemon mode) (joiner)

    • While working on the testing of the rsync SST method, I found that the current way of data streaming is quite inefficient: rsync parallelization is directory-based, not file-based. So if you have three directories – for instance sbtest (100 files/100GB), mysql (75 files/10MB), performance_schema (88 files/1MB) – the rsync SST script will start three rsync processes, where each process handles its own directory. As a result, instead of a parallel transfer we end up with one stream that effectively only streams the largest directory (sbtest). Replacing that approach with one that iterates over all files in the datadir and queues them to rsync workers allows us to speed up the data transfer 2-3 times (see the sketch after this list). On the charts, “rsync” is the current approach and “rsync_improved” is the improved one.
  • Backup data on the donor side and stream it to the joiner in xbstream format
    (donor) xtrabackup | socat  socat | xbstream (joiner)
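
For reference, here is a sketch of the tar baseline and of the file-based queueing behind “rsync_improved” (host, port, datadir path and the rsync daemon module name are assumptions, not the SST scripts’ actual command lines):

#joiner_host> listen and unpack the incoming tar stream
socat TCP-LISTEN:4444,reuseaddr - | tar xf - -C /var/lib/mysql

#donor_host> pack the datadir and stream it to the joiner
tar cf - -C /var/lib/mysql . | socat - TCP:joiner:4444

#donor_host> file-based parallel rsync: one rsync per file, 8 transfers in flight at a time
cd /var/lib/mysql
find . -type f -print0 | xargs -0 -P 8 -I{} rsync -a --relative {} rsync://joiner:873/datadir/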

At the end of this post, you will find the command lines used for testing each SST method.

[Chart: SST data transfer times for tar, rsync, rsync_improved and xtrabackup]

Streaming of our database files with tar took a minimal amount of time, and it’s very close to the best possible time (~230sec). xtrabackup is slower (~2x), as is rsync (~3x).

From profiling xtrabackup, we can clearly see two things:

  1. IO utilization is quite low
  2. A notable amount of time was spent in crc32 computation

Issue 1
xtrabackup can process data in parallel; however, by default it does so with a single thread only. Our tests showed that increasing the number of parallel threads to 2/4 with the --parallel option improves IO utilization and reduces streaming time. One can pass this option to xtrabackup by adding the following to the [sst] section of my.cnf:

[sst]
inno-backup-opts="--parallel=4"

Issue 2
By default, xtrabackup uses the software-based crc32 function from the libz library. Replacing it with a hardware-optimized implementation notably reduces CPU usage and speeds up data transfer. This fix will be included in the next release of xtrabackup.

[Chart: xtrabackup SST times with --parallel and hardware-optimized crc32]

We ran more tests for xtrabackup with the parallel option and hardware optimized crc32, and got results that confirm our analysis. Streaming time for xtrabackup is now very close to baseline and storage limits.

Testing details

For the purposes of testing, I’ve created a script, sst-bench.sh, that covers all the methods used in this post. You can use it to measure all of the above SST methods in your environment. In order to run the script, you have to adjust several environment variables at the beginning: joiner ip, the datadir locations on the joiner and donor hosts, etc. After that, put the script on the “donor” and “joiner” hosts and run it as follows:

#joiner_host> sst_bench.sh --mode=joiner --sst-mode=<tar|xbackup|rsync>
#donor_host>  sst_bench.sh --mode=donor --sst-mode=<tar|xbackup|rsync|rsync_improved>

Mar 29, 2017

Webinar Thursday 3/30: MyRocks Troubleshooting


Please join Percona’s Principal Technical Services Engineer Sveta Smirnova, Senior Software Engineer George Lorch, MariaDB’s Query Optimizer Developer Sergei Petrunia and Facebook’s Database Engineer Yoshinori Matsunobu as they present MyRocks Troubleshooting on March 30, 2017 at 11:00 am PDT / 2:00 pm EDT (UTC-7).

MyRocks is an alternative storage engine designed for flash storage. It provides great write workload performance and space efficiency. Like any other powerful engine, it has its own specific configuration scenarios that require special troubleshooting solutions.

This webinar will discuss how to deal with:

  • Data corruption issues
  • Inconsistent data
  • Locks
  • Slow performance

We will use well-known instruments and tools, as well as MyRocks-specific tools, and demonstrate how they work with the MyRocks storage engine.

Register for this webinar here.

Mar 29, 2017

Aiden closes $750,000 seed round in its quest to amplify marketers

 Aiden, a London-based startup building a machine learning-powered personal assistant to save mobile marketers time and money, closed a $750,000 seed round today from Kima Ventures and a number of angels, including Nicolas Pinto, Pierre Valade and Jonathan Wolf. The team first demoed the capabilities of its service on the stage of TechCrunch Disrupt as a Battlefield finalist. Read More

Mar 29, 2017

Talking Drupal #141 – Working with MySQL

www.talkingdrupal.com/141


In episode #141, we talk about getting comfortable with MySQL.

Show Topics


  • Get familiar with the Drupal database
  • Why a SQL view is helpful
  • The command line
  • Modules that will help
  • Tools
  • Drush import/export

drush sql-dump > ~/db.sql

drush sql-drop

drush sql-cli < ~/db.sql

Modules

Backup and Migrate

Resources

MySQL Workbench

Sequel Pro

phpMyAdmin

Module of the Week

Force Password Change

This module allows administrators to force users, by role, individual user, or newly created user, to change their password on their next page load or login, and/or expire their passwords after a period of time.

Hosts

Stephen Cross - www.ParallaxInfoTech.com @stephencross

John Picozzi - www.oomphinc.com @johnpicozzi

Nic Laflin - www.nLightened.net @nicxvan

 

Mar 29, 2017

GE invests $2 million in Alchemist Accelerator to back industrial IoT startups

 Alchemist Accelerator has raised $2 million from GE Digital to start a new program for industrial IoT startups. Stanford lecturer Timothy Chou, formerly President of Oracle On Demand, will chair Alchemist’s new IIoT accelerator along with GE Digital’s West Coast group. In the past, enterprise hardware and software startups were seen as capital intensive, with the challenge of… Read More
