Aug
23
2017
--

Migrating Data from an Encrypted Amazon MySQL RDS Instance to an Encrypted Amazon Aurora Instance

In this blog post, we’ll discuss migrating data from encrypted Amazon MySQL RDS to encrypted Amazon Aurora.

One of my customers wanted to migrate from an encrypted MySQL RDS instance to an encrypted Aurora instance. They have a pretty large database, so using mysqldump or a similar tool was not suitable for them. They also wanted to set up replication between the old MySQL RDS instance and the new Aurora instance.

Spoiler: this is possible without any logical dump.

At first, I checked Amazon’s documentation on encryption and found nothing about this type of migration. What’s more, if the documentation is to be trusted, it looks like replication or migration between encrypted MySQL RDS and encrypted Aurora isn’t supported at all. All instructions are for either “MySQL RDS to MySQL RDS” or “Aurora to Aurora” setups. For example, the documentation says here:

You can create Read Replicas of both encrypted and unencrypted DB clusters. The Read Replica must be encrypted if the source DB cluster is encrypted.

When I tried to create an Aurora read replica of my encrypted MySQL RDS instance, however, the “Enable Encryption” select control was grayed out and I could not change “No” to “Yes”.

I had to find a workaround.

Another idea was to create an encrypted MySQL RDS replica and migrate it to Aurora. While creating an encrypted MySQL replica is certainly possible (in fact, all replicas of encrypted instances must be encrypted), it was not possible to migrate it to any other instance using the standard “Migrate Latest Snapshot” option.

However, the documentation specified that Aurora and MySQL RDS use the same AWS KMS key. As a result, both kinds of encryption should be compatible (if not practically the same). Amazon also has the “AWS Database Migration Service”, which has this promising section in its FAQ:

Q. Can I replicate data from encrypted data sources?

Yes, AWS Database Migration Service can read and write from and to encrypted databases. AWS Database Migration Service connects to your database endpoints on the SQL interface layer. If you use the Transparent Data Encryption features of Oracle or SQL Server, AWS Database Migration Service will be able to extract decrypted data from such sources and replicate it to the target. The same applies to storage-level encryption. As long as AWS Database Migration Service has the correct credentials to the database source, it will be able to connect to the source and propagate data (in decrypted form) to the target. We recommend using encryption-at-rest on the target to maintain the confidentiality of your information. If you use application-level encryption, the data will be transmitted through AWS Database Migration Service as is, in encrypted format, and then inserted into the target database.

I decided to give it a try. And it worked!

The next step was to make this newly migrated, encrypted Aurora instance a read replica of the original MySQL RDS instance. This is made easier in part by a great how-to on migration by Adrian Cantrill. As suggested there, you only need to find the master’s binary log file and current position, and supply them to the stored procedure mysql.rds_set_external_master. Then start replication using the stored procedure mysql.rds_start_replication.
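
For illustration, here is a minimal sketch of those two calls as run on the new Aurora instance. The endpoint, credentials, binary log file name and position below are placeholders, not values from the original setup:

mysql> CALL mysql.rds_set_external_master ('source-rds.example.us-east-1.rds.amazonaws.com', 3306, 'repl_user', 'repl_password', 'mysql-bin-changelog.000123', 120, 0);
mysql> CALL mysql.rds_start_replication;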

Conclusion: While AWS Database Migration Service has limitations for both source and target databases, this solution allows you to migrate encrypted instances easily and securely.


Jul
16
2017
--

IBM dangles carrot of full encryption to lure buyers to new z14 mainframe

IBM is doing its damnedest to keep the mainframe relevant in a modern context, and believe it or not, there are plenty of monster corporations throughout the world who still use those relics from the earliest days of computing. Today, the company unveiled the z14, its latest z-Series mainframe, which comes with the considerable draw of full encryption. Is that enough for even corporate giants… Read More

Jul
10
2017
--

Webinar Tuesday July 11, 2017: Securing Your MySQL/MariaDB Data

Join Percona’s Chief Evangelist, Colin Charles, as he presents Securing Your MySQL/MariaDB Data on Tuesday, July 11, 2017 at 7:00 am PDT / 10:00 am EDT (UTC-7).

This webinar will discuss the features of MySQL/MariaDB that, when enabled and used correctly, improve the security of a default MySQL installation. Many cloud-based applications fail to:

  • Use appropriate filesystem permissions
  • Employ TLS/SSL for connections
  • Require TLS/SSL with MySQL replication
  • Use external authentication plugins (LDAP, PAM, Kerberos)
  • Encrypt all your data at rest
  • Monitor your database with the audit plugin
  • Review and reject SQL injections
  • Design application access using traditional firewall technology
  • Employ other MySQL/MariaDB security features

This webinar will demonstrate and advise on how to correctly implement the features above. We will end the presentation with some simple steps on how to hack a MySQL installation.
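
As a small taste of the list above, requiring TLS for a replication account takes only a couple of statements; the user name, host range and password here are hypothetical:

mysql> CREATE USER 'repl'@'192.168.0.%' IDENTIFIED BY 'choose-a-strong-password' REQUIRE SSL;
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.0.%';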

You can register for the webinar here.

Colin Charles, Percona Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, worked at MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within open source communities in APAC, and has spoken at many conferences.

Jul
05
2017
--

Webinar Thursday July 6, 2017: Security and Encryption in the MySQL World

Join Percona’s Solutions Engineer, Dimitri Vanoverbeke, as he presents Security and Encryption in the MySQL World on Thursday, July 6, 2017, at 7:00 am PDT / 10:00 am EDT (UTC-7).

 

MySQL and MariaDB Server provide many new features that help with security and encryption, both of which are extremely important in today’s world. Learn how to use these features, from roles to at-rest-encryption, to increase security. At the end of the webinar, you should understand how to have a securely configured MySQL instance!

Register for the webinar here.

Dimitri Vanoverbeke, Solutions Engineer

At the age of 7, Dimitri received his first computer. Since then, he’s been addicted to anything with a digital pulse. Dimitri has been active in IT professionally since 2003, when he took on various roles, from internal system engineering to consulting. Before joining Percona, Dimitri worked as an open source consultant for a leading open source software consulting firm in Belgium. During his career, Dimitri became familiar with a broad range of open source solutions and with the DevOps philosophy. When not glued to his computer screen, he enjoys traveling, cultural activities, basketball and the great outdoors.

Jun
28
2017
--

MySQL Encryption at Rest – Part 2 (InnoDB)

Welcome to Part 2 in a series of blog posts on MySQL encryption at rest. This post covers InnoDB tablespace encryption.

At Percona, we work with a number of clients that require strong security measures for PCI, HIPAA and PHI compliance, where data managed by MySQL needs to be encrypted “at rest.” As with all things open source, there are several options for meeting the MySQL encryption at rest requirement. In this three-part series, we cover several popular options for encrypting data and present the pros and cons of each solution. You may want to evaluate which parts of these tutorials work best for your situation before using them in production.

Part one of this series covered implementing disk-level encryption using crypt+LUKS.

In this post, we discuss InnoDB tablespace encryption (TE). You can choose to use just LUKS, just InnoDB TE, or you can use both. It depends on your compliance needs and your configuration. For example, say you put your binary logs and InnoDB redo logs on partition X, and your InnoDB tablespaces (i.e., your .ibd files) on partition Y. InnoDB TE only handles encrypting the tablespaces. You would need to use LUKS on partition X to encrypt those files.

InnoDB Tablespace Encryption Tutorial

The manual describes InnoDB encryption as follows:

InnoDB tablespace encryption uses a two-tier encryption key architecture, consisting of a master encryption key and tablespace keys. When an InnoDB table is encrypted, a tablespace key is encrypted and stored in the tablespace header. When an application or authenticated user wants to access encrypted tablespace data, InnoDB uses a master encryption key to decrypt the tablespace key. The decrypted version of a tablespace key never changes, but you can change the master encryption key as required. This action is referred to as master key rotation.

As a real-world example, picture a high-school janitor’s key-chain. Each key is different; each key works for only one door (or in MySQL’s case, table). The keys themselves never change, but the label on each key can change based on how the janitor wants to note which key works on which door. At any time, the janitor can re-label the keys with special names, thwarting any hooligans from opening doors they shouldn’t be opening.

Configuration Changes

As noted above, InnoDB uses a keyring to manage encryption keys on a per-table basis. But you must load the keyring plugin before InnoDB. Add the following lines to your my.cnf under the [mysqld] section. You may substitute a different path for the keyring file, but be sure that mysqld has permissions to this path:

early-plugin-load = keyring_file.so
keyring_file_data = /var/lib/mysql-keyring/keyring

You do not need to restart MySQL at this time, but you can if you wish. These settings are for when you restart MySQL in the future.

Install the Keyring Plugin in MySQL

If you don’t wish to restart MySQL at this time, you can still continue. Run the following SQL statements within MySQL to load the plugin and configure the keyring file:

mysql> INSTALL PLUGIN keyring_file SONAME 'keyring_file.so';
mysql> SET GLOBAL keyring_file_data = '/var/lib/mysql-keyring/keyring';
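
To confirm that the keyring plugin is loaded and active, you can query INFORMATION_SCHEMA.PLUGINS; the output should look something like this:

mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'keyring%';
+--------------+---------------+
| PLUGIN_NAME  | PLUGIN_STATUS |
+--------------+---------------+
| keyring_file | ACTIVE        |
+--------------+---------------+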

You can now CREATE encrypted tables or ALTER existing tables to encrypt them with a simple command:

mysql> CREATE TABLE t1 (c1 INT) ENCRYPTION='Y';
Query OK, 0 rows affected (0.04 sec)
mysql> ALTER TABLE users ENCRYPTION='Y';
Query OK, 1 row affected (0.04 sec)

Be careful with the above ALTER: It will most likely block all other transactions during this conversion.
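
To see which tables are already encrypted, you can check each table’s CREATE_OPTIONS in INFORMATION_SCHEMA; the table names below match the examples above, and the test schema is assumed:

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';
+--------------+------------+----------------+
| TABLE_SCHEMA | TABLE_NAME | CREATE_OPTIONS |
+--------------+------------+----------------+
| test         | t1         | ENCRYPTION="Y" |
| test         | users      | ENCRYPTION="Y" |
+--------------+------------+----------------+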

Encrypting Tables Using pt-online-schema-change

Using pt-online-schema-change is fairly straightforward. While it has many options, the most basic usage to encrypt a single table is the following:

pt-online-schema-change --execute --alter "engine=innodb encryption='Y'" D=mydb,t=mytab

pt-online-schema-change creates an encrypted shadow copy of the table mytab. Once the copy is populated with the original data, it drop-swaps the tables, replacing the original table with the new one.

It is always recommended to run pt-online-schema-change with the --dry-run flag first, in order to validate the command and the change.
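
For example, a dry run of the same change used above would look like this:

pt-online-schema-change --dry-run --alter "engine=innodb encryption='Y'" D=mydb,t=mytab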

Encrypting All Tables in a Database

To encrypt all the tables in a database, we can simply take the above command and wrap it in a shell loop. The loop is fed a list of tables in the database from MySQL. You just need to change the dbname variable:

# dbname=mydb
# mysql -B -N -e "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='$dbname' and TABLE_TYPE='BASE TABLE'" |
while read line; do pt-online-schema-change --execute --alter "engine=innodb encryption='Y'" D=$dbname,t=$line; done

Be careful with the above command, as it will take off and blast through all the changes!

Rotating InnoDB Encryption Keys

Your compliance policy dictates how often you need to rotate the encryption keys within InnoDB:

mysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;

This command creates a new key, decrypts all encrypted tablespace headers (where you store the per-table key) using the previous key, and then re-encrypts the headers with the new key. Notice that the table itself is not re-encrypted. This would be an extremely intrusive and potentially dangerous operation. Recall above that each table’s encryption key never changes (unless the entire table gets decrypted). What can change is the key used to encrypt that key within each table’s header.

The keyring UDFs in MySQL

The keyring plugin can also be used to store sensitive data within MySQL for later retrieval. This functionality is beyond the scope of this blog as it doesn’t deal directly with InnoDB TE. You can read more about it in the MySQL manual.

Conclusion

InnoDB Tablespace Encryption makes it easy to use MySQL encryption at rest. Keep in mind the downsides of using only TE: InnoDB tablespace encryption doesn’t cover undo logs, redo logs or the main ibdata1 tablespace. Additionally, InnoDB encryption doesn’t cover binary-logs and slow-query-logs.

Jun
27
2017
--

SSL Connections in MySQL 5.7

This blog post looks at SSL connections and how they work in MySQL 5.7.

Recently I was working on an SSL implementation with MySQL 5.7, and I made some interesting discoveries. I realized I could connect to the MySQL server without specifying the SSL keys on the client side, and the connection was still secured by SSL. I was confused, and I did not understand what was happening.

In this blog post, I am going to show you why SSL works this way in MySQL 5.7, and how it worked previously in MySQL 5.6.

Let’s start with an introduction of how SSL worked in 5.6.

SSL in MySQL 5.6

The documentation for SSL in MySQL 5.6 is quite detailed, and it explains how SSL works. But first let’s make one thing clear: MySQL supports secure (encrypted) connections between clients and the server using the TLS (Transport Layer Security) protocol. TLS is sometimes referred to as SSL (Secure Sockets Layer), but MySQL doesn’t actually use the SSL protocol for secure connections because it provides weak encryption.

So when someone says MySQL is using SSL, it really means that it is using TLS. You can check which protocol you are using:

show status like 'Ssl_version';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Ssl_version   | TLSv1.2 |
+---------------+---------+

So how does it work? Let me quote a few lines from the MySQL documentation:

TLS uses encryption algorithms to ensure that data received over a public network can be trusted. It has mechanisms to detect data change, loss, or replay. TLS also incorporates algorithms that provide identity verification using the X509 standard. X509 makes it possible to identify someone on the Internet. In basic terms, there should be some entity called a “Certificate Authority” (or CA) that assigns electronic certificates to anyone who needs them. Certificates rely on asymmetric encryption algorithms that have two encryption keys (a public key and a secret key). A certificate owner can present the certificate to another party as proof of identity. A certificate consists of its owner’s public key. Any data encrypted using this public key can be decrypted only using the corresponding secret key, which is held by the owner of the certificate.

It works with key pairs (private and public): the server has the private keys and the client has the public keys. Here is a link showing how we can generate these keys.
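
As a rough sketch of what that key generation looks like with OpenSSL (the file names are arbitrary, and the certificates are self-signed, suitable for testing only):

# Create the CA key and certificate
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem
# Create the server key and certificate request, strip the passphrase, and sign it with the CA
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem
openssl rsa -in server-key.pem -out server-key.pem
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
# Create the client key and certificate the same way
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem
openssl rsa -in client-key.pem -out client-key.pem
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem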

MySQL 5.6 Client

When the server is configured with SSL, the client has to have the client certificates. Once it gets those, it can connect to the server using SSL:

mysql -utestuser -p -h192.168.0.1 -P3306 --ssl-key=/root/mysql-ssl/client-key.pem --ssl-cert=/root/mysql-ssl/client-cert.pem

We have to specify the key and the certificate. Otherwise, we cannot connect to the server with SSL. So far so good, everything works as the documentation says. But how does it work in MySQL 5.7?

SSL in MySQL 5.7

The documentation was very confusing to me, and did not help much. It described how to create the SSL keys the same way (and even the same server and client configuration) as in MySQL 5.6. So if everything is the same, why can I connect without specifying the client keys? I did not have an answer for that. One of my colleagues (Alexey Poritskiy) found the answer after digging through the manuals for a while; the following passage finally describes this behavior clearly:

As of MySQL 5.7.3, a client need specify only the --ssl option to obtain an encrypted connection. The connection attempt fails if an encrypted connection cannot be established. Before MySQL 5.7.3, the client must specify either the --ssl-ca option, or all three of the --ssl-ca, --ssl-key, and --ssl-cert options.

These lines are in the “CREATE USER” syntax manual.

After this, I re-read the SSL documentation and found the following:

5.7.3: On the client side, an explicit --ssl option is no longer advisory but prescriptive. Given a server enabled to support secure connections, a client program can require a secure connection by specifying only the --ssl option. The connection attempt fails if a secure connection cannot be established. Other --ssl-xxx options on the client side mean that a secure connection is advisory (the connection attempt falls back to an unencrypted connection if a secure connection cannot be established).

I still think the SSL manual could be more expressive and detailed.

Before MySQL 5.7.3, it used key pairs. After that, it works a bit like websites using HTTPS: the client does not need the keys, and the connection can still be secure. (I would still like to see more detailed documentation on how it really works.)
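
As an illustration, with a 5.7 server and client you can request an encrypted connection without any client certificates and check which cipher is in use; the user, host and port are the same hypothetical values as in the 5.6 example above:

mysql -utestuser -p -h192.168.0.1 -P3306 --ssl -e "SHOW STATUS LIKE 'Ssl_cipher';"

If Ssl_cipher comes back with a non-empty value, the connection is encrypted.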

Is This Good for Us?

In my opinion, this is a really good feature, but it didn’t get a lot of publicity. Prior to 5.7.3, if we wanted to use SSL we had to create the keys and use the client keys in every client application. It’s doable, but it is just a pain — especially if we are using different keys for every server with many users.

With this feature we can have a different key for every server, but on the client side we only have to enable the SSL connection. Most of the clients use it as the default if SSL is available.

I am pretty sure if people knew about this feature they would use it more often.

Limitations

I tested it with many client applications, and all worked without specifying the client keys. I only had issues with the MySQL 5.6 client, even though the server was 5.7. In that case, I had to specify the client keys. Without them, I couldn’t connect to the server.

It is possible some older applications won’t support this feature.

Conclusion

This is a great feature: it is easy to use, and it makes SSL implementations much easier. But I think the documentation should be clearer and more precise in describing how this new feature actually works. I already submitted a request to update the documentation.

Jun
06
2017
--

Matt Mitchell of CryptoHarlem is building an open source tool to help organizations prepare for data breaches

This morning on the stage of TC Sessions: Justice, Matt Mitchell of CryptoHarlem discussed his views on the link between surveillance and minority oppression and the importance of taking a preventative approach to security and privacy. Mitchell, a specialist in digital safety and encryption, is dedicating time to creating Protect Your Org, a free, open source tool for all organizations… Read More

Jun
06
2017
--

MySQL Encryption at Rest – Part 1 (LUKS)

In this first of a series of blog posts, we’ll look at MySQL encryption at rest.

At Percona, we work with a number of clients that require strong security measures for PCI, HIPAA and PHI compliance, where data managed by MySQL needs to be encrypted “at rest.” As with all things open source, there are several options for meeting the MySQL encryption at rest requirement. In this three-part series, we cover several popular options for encrypting data and present the pros and cons of each solution. You may want to evaluate which parts of these tutorials work best for your situation before using them in production.

Part one of this series is implementing disk-level encryption using crypt+LUKS.

In MySQL 5.7, InnoDB has built-in encryption features. This solution has some cons, however. Specifically, InnoDB tablespace encryption doesn’t cover undo logs, redo logs or the main ibdata1 tablespace. Additionally, binary-logs and slow-query-logs are not covered under InnoDB encryption.

Using crypt+LUKS, we can encrypt everything (data + logs) under one umbrella – provided that all files reside on the same disk. If you separate the various logs on to different partitions, you will have to repeat the tutorial below for each partition.

LUKS Tutorial

The Linux Unified Key Setup (LUKS) is the current standard for disk encryption. In the examples below, the block device /dev/sda4 on CentOS 7 is encrypted using a generated key, and then mounted as the default MySQL data directory at /var/lib/mysql.

WARNING! Loss of the key means complete loss of data! Be sure to have a backup of the key.

Install the necessary utilities:

# yum install cryptsetup

Creating, Formatting and Mounting an Encrypted Disk

The cryptsetup command initializes the volume and sets an initial key/passphrase. Please note that the key is not recoverable, so do not forget it. Take the time now to decide where you will securely store a copy of this key. LastPass Secure Notes are a good option, as they allow file attachments. This enhances our backup later on.

Create a passphrase for encryption. Choose something with high entropy (i.e., lots of randomness). Here are two options (pick one):

# openssl rand -base64 32
# date | md5 | rev | head -c 24 | md5 | tail -c 32

Next, we need to initialize and format our partition for use with LUKS. Any mount points using this block device must be unmounted beforehand.

WARNING! This command will delete ALL DATA ON THE DEVICE! BE SURE TO COMPLETE ANY BACKUPS BEFORE YOU RUN THIS!

# cryptsetup -c aes-xts-plain -v luksFormat /dev/sda4

You will be prompted for a passphrase. Provide the phrase you generated above. After you provide a passphrase, you now need to “open” the encrypted disk and provide a device mapper name (i.e., an alias). It can be anything, but for our purposes, we will call it “mysqldata”:

# cryptsetup luksOpen /dev/sda4 mysqldata

You will be prompted for the passphrase you used above. On success, you should see the device show up:

# ls -l /dev/mapper/
lrwxrwxrwx  1 root root      7 Jun  2 11:50 mysqldata -> ../dm-0

You can now format this encrypted block device and create a filesystem:

# mkfs.ext4 /dev/mapper/mysqldata

Now you can mount the encrypted block device you just formatted:

# mount /dev/mapper/mysqldata /var/lib/mysql

Unfortunately, you cannot add this to /etc/fstab to automount on a server reboot, since the key is needed to “open” the device. Please keep in mind that if your server ever reboots, MySQL will not start, since the data directory is unavailable until the device is opened and mounted (we will look at how to make this work using scripts in Part Two of this series).
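
Until then, a minimal sketch of the manual steps after a reboot (assuming the MySQL service on this CentOS 7 box is named mysqld) looks like this:

# cryptsetup luksOpen /dev/sda4 mysqldata    # prompts for the passphrase
# mount /dev/mapper/mysqldata /var/lib/mysql
# systemctl start mysqld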

Creating a Backup of Encryption Information

The header of a LUKS block device contains information regarding the current encryption key(s). Should this ever get damaged, or if you need to recover because you forgot the new passphrase, you can restore this header information:

# cryptsetup luksHeaderBackup --header-backup-file ${HOSTNAME}_`date +%Y%m%d`_header.dat /dev/sda4

Go ahead and make a SHA1 of this file now, so you can later verify that it hasn’t been corrupted in storage:

# sha1sum ${HOSTNAME}_`date +%Y%m%d`_header.dat

GZip the header file. Store the SHA1 and the .gz file in a secure location (for example, attach it to the secure note created above). Now you have a backup of the key you used and a backup of the header which uses that key.
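
Later, you can verify the stored copy without unpacking it; for example, with a hypothetical file name:

# zcat myhost_20170606_header.dat.gz | sha1sum    # compare against the SHA1 recorded earlier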

Unmounting and Closing a Disk

If you know you will be storing a disk, or just want to make sure the contents are not visible (i.e., mounted), you can unmount and “close” the encrypted device:

# umount /var/lib/mysql/
# cryptsetup luksClose mysqldata

In order to mount this device again, you must “open” it and provide one of the keys.

Rotating Keys (Adding / Removing Keys)

Various compliance and enforcement rules dictate how often you need to rotate keys. You cannot rotate or change a key directly. LUKS supports up to eight keys per device. You must first add a new key to any slot (other than the slot currently occupied by the key you are trying to remove), and then remove the older key.

Take a look at the existing header information:

# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4
Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 4096
MK bits:        256
MK digest:      81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt:        ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
                47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations:  44500
UUID:           92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6
Key Slot 0: ENABLED
        Iterations:             181059
        Salt:                   9c a9 f6 12 d2 a4 2a 3d a4 08 b2 32 b0 b4 20 3b
                                69 13 8d 36 99 47 42 9c d5 41 35 8c b3 d0 ff 0e
        Key material offset:    8
        AF stripes:             4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Here we can see a key is currently occupying “Key Slot 0”. We can add a key to any DISABLED key slot. Let’s use slot #1:

# cryptsetup luksAddKey --key-slot 1 -v /dev/sda4
Enter any passphrase:
Key slot 0 unlocked.
Enter new passphrase for key slot:
Verify passphrase:
Command successful.

LUKS asks for “any” passphrase to authenticate us. Had there been keys in other slots, we could have used any one of them. As only one is currently saved, we have to use it. We can then add a new passphrase for slot 1.

Now that we have saved the new key in slot 1, we can remove the key in slot 0.

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
No key available with this passphrase.

In the example above, the existing passphrase stored in slot 0 was used. This is not allowed. You cannot provide the passphrase for the same slot you are attempting to remove.

Repeat this command and provide the passphrase for slot 1, which was added above. We are now able to remove the passphrase stored in slot 0:

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4
Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 4096
MK bits:        256
MK digest:      81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt:        ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
                47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations:  44500
UUID:           92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6
Key Slot 0: DISABLED
Key Slot 1: ENABLED
        Iterations:             229712
        Salt:                   5d 71 b2 3a 58 d7 f8 6a 36 4f 32 d1 23 1a df df
                                cd 2b 68 ee 18 f7 90 cf 58 32 37 b9 02 e1 42 d6
        Key material offset:    264
        AF stripes:             4000
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

After you change the passphrase, it’s a good idea to repeat the header dump steps we performed above and store the new passphrase in your vault.

Conclusion

Congratulations, you have now learned how to encrypt and mount a partition using LUKS! You can now use this mounted device just like any other. You can also restore a backup and start MySQL.

In Part Two, we will cover using InnoDB tablespace encryption.

May
15
2017
--

NuCypher is using proxy re-encryption to lift more enterprise big data into the cloud

 After spending time at a London fintech accelerator last year, enterprise database startup ZeroDB scrapped its first business plan and mapped out a new one. By January this year it had a new name: NuCypher. It now will try to persuade enterprises to switch to their specialized encryption layer to enhance their ability to perform big data analytics by tapping into the cloud. Read More

Mar
30
2017
--

Performance Evaluation of SST Data Transfer: With Encryption (Part 2)

In this blog post, we’ll look at the performance of SST data transfer using encryption.

In my previous post, we reviewed SST data transfer in an unsecured environment. Now let’s take a closer look at a setup with encrypted network connections between the donor and joiner nodes.

The base setup is the same as the previous time:

  • Database server: Percona XtraDB Cluster 5.7 on donor node
  • Database: sysbench database – 100 tables 4M rows each (total ~122GB)
  • Network: donor/joiner hosts are connected with dedicated 10Gbit LAN
  • Hardware: donor/joiner hosts – boxes with 28 Cores+HT/RAM 256GB/Samsung SSD 850/Ubuntu 16.04

The setup details for the encryption aspects in our testing:

  • Cryptography libraries: openssl-1.0.2, openssl-1.1.0, libgcrypt-1.6.5(for xbstream encryption)
  • CPU hardware acceleration for AES – AES-NI: enabled/disabled
  • Ciphers suites: aes(default), aes128, aes256, chacha20(openssl-1.1.0)

Several notes regarding the above aspects:

  • Cryptography libraries. Almost every Linux distribution is currently based on openssl-1.0.2, the previous stable version of the OpenSSL library. The latest stable version (1.1.0) has various performance/scalability fixes and also supports new ciphers that may notably improve throughput. However, it’s problematic to upgrade from 1.0.2 to 1.1.0, or even just to find packages for openssl-1.1.0 for existing distributions, because replacing OpenSSL triggers an update/upgrade of a significant number of packages. So in order to use openssl-1.1.0, most likely you will need to build it from sources. The same applies to socat – it will require some effort to build socat with openssl-1.1.0.
  • AES-NI. The Advanced Encryption Standard Instruction Set (AES-NI) is an extension to the x86 CPUs from Intel and AMD. The purpose of AES-NI is to improve the performance of encryption and decryption operations using the Advanced Encryption Standard (AES), such as the AES128/AES256 ciphers. If your CPU supports AES-NI, there should be an option in the BIOS that allows you to enable/disable that feature. In Linux, you can check /proc/cpuinfo for the existence of an “aes” flag. If it’s present, then AES-NI is available and exposed to the OS. There is a way to check what acceleration ratio you can expect from it:
    # AES_NI disabled with OPENSSL_ia32cap
    OPENSSL_ia32cap="~0x200000200000000" openssl speed -elapsed -evp aes-128-gcm
    ...
    The 'numbers' are in 1000s of bytes per second processed.
    type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
    aes-128-gcm      57535.13k    65924.18k   164094.81k   175759.36k   178757.63k
    # AES_NI enabled
    openssl speed -elapsed -evp aes-128-gcm
    The 'numbers' are in 1000s of bytes per second processed.
    type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
    aes-128-gcm     254276.67k   620945.00k   826301.78k   906044.07k   923740.84k

    Our interest is the very last column: 178MB/s (without AES-NI) vs. 923MB/s (with AES-NI).

  • Ciphers. In our testing for network encryption with socat+openssl 1.0.2/1.1.0, we used the following cipher suites:
    DEFAULT – if you don’t specify a cipher/cipher string for the OpenSSL connection, this suite will be used
    AES128 – suite with aes128 ciphers only
    AES256 – suite with aes256 ciphers only
    Additionally, for openssl-1.1.0, there is an extra cipher suite:
    CHACHA20 – cipher suites using the ChaCha20 algorithm
    In the case of xtrabackup, where internal encryption is based on libgcrypt, we use the AES128/AES256 ciphers from this library.
  • SST methods. Streaming database files from the donor to the joiner with the rsync protocol over an OpenSSL-encrypted connection:
    (donor) rsync | socat+ssl       socat+ssl| rsync(daemon mode) (joiner)

    The current approach of wsrep_sst_rsync.sh doesn’t allow you to use the rsync SST method with SSL. However, there is a project that tries to address the lack of SSL support for the rsync method. The idea is to create a secure connection with socat and then use that connection as a tunnel to connect rsync between the joiner and donor hosts. In my testing, I used a similar approach (see the sketch after this list).

    Also note that in the chart below there are results for two variants of rsync: “rsync” (the current approach) and “rsync_improved” (the improved one). I’ve explained the difference between them in my previous post.

  • Backup data on the donor side and stream it to the joiner in xbstream format over an OpenSSL encrypted connection

    (donor) xtrabackup| socat+ssl  socat+ssl | xbstream (joiner)

    In my testing for streaming over encrypted connections, I used the --parallel=4 option for xtrabackup. In my previous post, I showed that this is an important factor in getting the best time. There is also a way to pass the name of the cipher that socat will use for the OpenSSL connection in the wsrep_sst_xtrabackup-v2.sh script, using the sockopt option. For instance:

    [sst]
    inno-backup-opts="--parallel=4"
    sockopt=",cipher=AES128"
  • Backup data on the donor side, encrypt it internally (with libgcrypt), and stream the data to the joiner in xbstream format; afterwards, decrypt the files on the joiner

    (donor) xtrabackup | socat   socat | xbstream ; xtrabackup decrypt (joiner)

    The xtrabackup tool has a feature to encrypt data when performing a backup. That encryption is based on the libgcrypt library, and it’s possible to use the AES128 or AES256 ciphers. For encryption, it’s necessary to generate a key and then provide it to xtrabackup to perform encryption on the fly. There is a way to specify the number of threads that will encrypt data, along with the chunk size, to tune the encryption process.

    The current version of xtrabackup supports an efficient way to read, compress and encrypt data in parallel, and then write/stream it. On the receiving side, however, we can’t decompress/decrypt the stream on the fly. First, the stream has to be received/written to disk with the xbstream tool, and only after that can you use xtrabackup with the --decrypt/--decompress modes to unpack the data. The inability to process the data on the fly, and the need to save the stream to disk for later processing, has a notable impact on the streaming time from the donor to the joiner. We have a plan to fix that issue, so that encryption+compression+streaming of data with xtrabackup happens without the need to write the stream to disk on the receiver side.

    For my testing, in the case of xtrabackup with internal encryption, I didn’t use SSL encryption for socat.
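
To illustrate the rsync-over-socat tunnel idea mentioned in the rsync item above, here is a rough, hypothetical sketch; the ports, certificate file names and rsync module name are made up, and the actual SST scripts wire this up differently:

# joiner: terminate SSL and forward the decrypted stream to the local rsync daemon
socat OPENSSL-LISTEN:4444,reuseaddr,fork,cert=joiner.pem,cafile=ca.pem TCP:127.0.0.1:873
# donor: expose a local plain port that tunnels to the joiner over SSL
socat TCP-LISTEN:9999,bind=127.0.0.1,reuseaddr,fork OPENSSL:joiner-host:4444,cert=donor.pem,cafile=ca.pem
# donor: point rsync at the local end of the tunnel instead of the joiner directly
rsync -av /var/lib/mysql/ rsync://127.0.0.1:9999/mysqldata/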

Results (click on the image for an enlarged view):

SST Data Transfer

Observations:

  • Transferring data with rsync is very inefficient, and the improved version is 2-2.5 times faster. Also, you may note that in the case of “no-aes-ni”, the rsync_improved method has the best time for the default/aes128/aes256 ciphers. The reason is that we perform both the data transfer in parallel (we spawn an rsync process for each file) and the encryption/decryption in parallel (socat forks extra processes for each stream). This approach allows us to compensate for the absence of hardware acceleration by using several CPU cores. In all other cases, we only use one CPU for streaming of data and encryption/decryption.
  • xtrabackup (with hardware optimized crc32) shows the best time in all cases, except for the default/aes128/aes256 ciphers in “no-aes-ni” mode (where rsync_improved showed the best time). However, I would like to remind you that SST with rsync is a blocking operation, and during the data transfer the donor node becomes READ-ONLY. xtrabackup, on the other hand, uses backup locks and allows any operations on the donor node during SST.
  • On the boxes without hardware acceleration (no-aes-ni mode), the chacha20 cipher allows you to perform data transfer 2-3 times faster. It’s a very good replacement for “aes” ciphers on such boxes. However, the problem with that cipher is that it is available only in openssl-1.1.0. In order to use it, you will need a custom build of OpenSSL and socat for many distros.
  • Regarding xtrabackup with internal encryption (xtrabackup_enc): reading/encrypting and streaming data is quite fast, especially with the latest libgcrypt library (1.7.x). The problem is decryption. As I’ve explained above, right now we need to get the stream and save the encrypted data to storage first, and then perform the extra step of reading/decrypting and saving the data back. That extra part consumes 2/3 of the total time. Improving the xbstream tool to perform stream decryption/decompression on the fly would allow you to get very good results.

Testing Details

For purposes of the testing, I’ve created a script, sst_bench.sh, that covers all the methods used in this post. You can use it to measure all the above SST methods in your environment. In order to run the script, you have to adjust several environment variables at the beginning of the script: the joiner IP, the datadir locations on the joiner and donor hosts, etc. After that, put the script on the “donor” and “joiner” hosts and run it as follows:

#joiner_host>
sst_bench.sh --mode=joiner --sst-mode=<tar|xbackup|rsync> --cipher=<DEFAULT|AES128|AES256|CHACHA20> --ssl=<0|1> --aesni=<0|1>
#donor_host>
sst_bench.sh --mode=donor --sst-mode=<tar|xbackup|rsync|rsync_improved> --cipher=<DEFAULT|AES128|AES256|CHACHA20> --ssl=<0|1> --aesni=<0|1>
