Apr 23, 2017
--

Percona XtraDB Cluster: “dh key too small” error during an SST using SSL

If you’ve tried to use SSL in Percona XtraDB Cluster and have seen an error in the logs like SSL3_CHECK_CERT_AND_ALGORITHM:dh key too small, we’ve implemented changes in Percona XtraDB Cluster 5.6.34 and 5.7.16 that get rid of these errors.

Some background

The “dh key too small” error means that the Diffie-Hellman parameters used by the SSL code are shorter than recommended.

Due to the Logjam vulnerability (https://weakdh.org/), the required key-lengths for the Diffie-Hellman parameters were changed from 512 bits to 2048 bits. Unfortunately, older versions of OpenSSL/socat still use 512 bits (and thus caused the error to appear).

Changes made to Percona XtraDB Cluster

Since versions of socat greater than 1.7.3 now use 2048 bits for the Diffie-Hellman parameters, we only do extra work for the older versions of socat (less than 1.7.3). The SST code now:

  1. Looks for a file with the DH params
    1. Uses the “ssl_dhparams” option in the [sst] section if it exists
    2. Looks for a “dhparams.pem” file in the datadir
  2. If the file is specified and exists, uses that file as a source for the DH parameters
  3. If the file does not exist, creates a dhparams.pem file in the datadir

Generating the dhparams yourself

Unfortunately, creating the dhparams file can take several minutes. We recommend that dhparams.pem be created prior to starting the SST.

openssl dhparam -out path/to/datadir/dhparams.pem 2048
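If you generate the file somewhere other than the datadir, you can point the SST at it with the ssl_dhparams option mentioned above. A minimal sketch (the path is only an example):

[sst]
ssl_dhparams=/etc/mysql/certs/dhparams.pem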

Apr 21, 2017
--

Simplified Percona XtraDB Cluster SSL Configuration

In this blog post, we’ll look at a feature recently added to Percona XtraDB Cluster 5.7.16 that makes it easier to configure Percona XtraDB Cluster SSL for all related communications. It uses mode “encrypt=4” and configures SSL for both IST/Galera communications and SST communications using the same SSL files. “encrypt=4” is a new encryption mode added in Percona XtraDB Cluster 5.7.16 (we’ll cover it in a later blog post).

If this option is used, it will override all other Galera/SST SSL-related file options, ensuring that a consistent configuration is applied.
Using this option also means that the Galera/SST communications use the same keys as client connections.

Example

This example shows how to start up a cluster using this option. We will use the default SSL files created by the bootstrap node. Basically, there are two steps:

  1. Set
    pxc_encrypt_cluster_traffic=ON

     on all nodes

  2. Ensure that all nodes share the same SSL files

Step 1: Configuration (on all nodes)

We enable the

pxc_encrypt_cluster_traffic

 option in the configuration files on all nodes. The default value of this option is “OFF”, so we enable it here.

[mysqld]
 pxc_encrypt_cluster_traffic=ON

Step 2: Start up the bootstrap node

After initializing and starting up the bootstrap node, the datadir will contain the necessary data files. Here is some SSL-related log output:

[Note] Auto generated SSL certificates are placed in data directory.
 [Warning] CA certificate ca.pem is self signed.
 [Note] Auto generated RSA key files are placed in data directory.

The required files are ca.pem, server-cert.pem and server-key.pem, which are the Certificate Authority (CA) file, the server certificate and the server private key, respectively.

Step 3: Copy the SSL files to all other nodes

Galera views the cluster as a set of homogeneous nodes, so the same configuration is expected on all nodes. Therefore, we have to copy the CA file, the server’s certificate and the server’s private key. By default, MySQL names these: ca.pem, server-cert.pem, and server-key.pem, respectively.
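For example, copying the files from the bootstrap node to a second node might look like this (a sketch; the node name and datadir path are placeholders, with ownership and permissions tightened afterwards):

# run from the bootstrap node; node2 and /var/lib/mysql are examples
$ scp /var/lib/mysql/ca.pem /var/lib/mysql/server-cert.pem /var/lib/mysql/server-key.pem node2:/var/lib/mysql/
$ ssh node2 "chown mysql:mysql /var/lib/mysql/{ca,server-cert,server-key}.pem && chmod 600 /var/lib/mysql/server-key.pem"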

Step 4: Start up the other nodes

This is some log output showing that the SSL certificate files have been found. The other nodes should be using the files that were created on the bootstrap node.

[Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
[Note] Skipping generation of SSL certificates as certificate files are present in data directory.
[Warning] CA certificate ca.pem is self signed.
[Note] Skipping generation of RSA key pair as key files are present in data directory.

This is some log output (with

log_error_verbosity=3

), showing the SST reporting on the configuration used.

WSREP_SST: [DEBUG] pxc_encrypt_cluster_traffic is enabled, using PXC auto-ssl configuration
WSREP_SST: [DEBUG] with encrypt=4 ssl_ca=/my/data//ca.pem ssl_cert=/my/data//server-cert.pem ssl_key=/my/data//server-key.pem

Customization

The “ssl-ca”, “ssl-cert”, and “ssl-key” options in the “[mysqld]” section can be used to specify the location of the SSL files. If these are not specified, then the datadir is searched (using the default names of “ca.pem”, “server-cert.pem” and “server-key.pem”).

[mysqld]
 pxc_encrypt_cluster_traffic=ON
 ssl-ca=/path/to/ca.pem
 ssl-cert=/path/to/server-cert.pem
 ssl-key=/path/to/server-key.pem

If you want to implement this yourself, the equivalent configuration file options are:

[mysqld]
wsrep_provider_options="socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem"
[sst]
encrypt=4
ssl-ca=ca.pem
ssl-cert=server-cert.pem
ssl-key=server-key.pem

How it works

  1. Determine the location of the SSL files
    1. Uses the values if explicitly specified (via the “ssl-ca”, “ssl-cert” and “ssl-key” options in the “[mysqld]” section)
    2. If the SSL file options are not specified, we look in the data directory for files named “ca.pem”, “server-cert.pem” and “server-key.pem” for the CA file, the server certificate, and the server key, respectively.
  2. Modify the configuration
    1. Overrides the values for socket.ssl_ca, socket.ssl_cert, and socket.ssl_key in
      wsrep_provider_options

       in the “[mysqld]” section.

    2. Sets “encrypt=4” in the “[sst]” section.
    3. Overrides the values for ssl-ca, ssl-cert and ssl-key in the “[sst]” section.

This is not a dynamic setting, and is only available on startup.
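Once a node is back up, a quick sanity check that the option took effect (a sketch):

$ mysql -e "SHOW GLOBAL VARIABLES LIKE 'pxc_encrypt_cluster_traffic'"
# should report ON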

Apr 21, 2017
--

Enabling Percona XtraDB Cluster SST Traffic Encryption

In this blog post, we’ll look at enabling Percona XtraDB Cluster SST traffic encryption, and some of the changes to the SSL-based encryption of SST traffic in Percona XtraDB Cluster 5.7.16.

Some background

Percona XtraDB Cluster versions prior to 5.7.16 support encryption methods 0, 1, 2 and 3:

  • encrypt = 0 : (default) No encryption
  • encrypt = 1 : Symmetric encryption using AES-128, user-supplied key
  • encrypt = 2 : SSL-based encryption with a CA and cert files (via socat)
  • encrypt = 3 : SSL-based encryption with cert and key files (via socat)

We are deprecating modes encrypt=1,2,3 in favor of the new mode, encrypt=4. “encrypt=3” is not recommended, since it does not verify the cert being used (it cannot verify since no Certificate Authority (CA) file is provided). “encrypt=2” and “encrypt=3” use a slightly different way of building the SSL files than MySQL does. In order to remove confusion, we’ve deprecated these modes in favor of “encrypt=4”, which can use the MySQL generated SSL files.

New feature: encrypt=4

The previous SSL methods (encrypt=2 and encrypt=3), are based on socat usage, http://www.dest-unreach.org/socat/doc/socat-openssltunnel.html. The certs are not built the same way as the certs created by MySQL (for encryption of client communication with MySQL). To simplify SSL configuration and usage, we added a new encryption method (encrypt=4) so that the SSL files generated by MySQL can now be used for SSL encryption of SST traffic.

For instructions on how to create these files, see https://www.percona.com/doc/percona-xtradb-cluster/LATEST/howtos/encrypt-traffic.html.
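If you don’t already have a MySQL-style set of SSL files, one option is the mysql_ssl_rsa_setup utility that ships with MySQL/Percona Server 5.7; a sketch, where the datadir path and --uid value are placeholders:

$ mysql_ssl_rsa_setup --datadir=/var/lib/mysql --uid=mysql
$ ls /var/lib/mysql/*.pem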

Configuration

In general, Galera views the cluster as homogeneous, so it expects that all nodes are identically configured. This extends to the SSL configuration, so the preferred configuration is that all machines share the same certs/keys. The security implication is that possession of these certs/keys allows a machine to join the cluster and receive all of the data. So proper care must be taken to secure the SSL files.

The mode “encrypt=4” uses the same option names as MySQL, so it reads the SSL information from “ssl-ca”, “ssl-cert”, and “ssl-key” in the “[sst]” section of the configuration file.

Example my.cnf:

[sst]
 encrypt=4
 ssl-ca=/path/to/ca.pem
 ssl-cert=/path/to/server-cert.pem
 ssl-key=/path/to/server-key.pem

All three options (ssl-ca, ssl-cert, and ssl-key) must be specified; otherwise, the SST will return an error.

ssl-ca

This is the location of the Certificate Authority (CA) file. Only servers that have certificates generated from this CA file will be allowed to connect if SSL is enabled.

ssl-cert

This is the location of the Certificate file. This is the digital certificate that will be sent to the other side of the SSL connection. The remote server will then verify that this certificate was generated from the Certificate Authority file that it is using.

ssl-key

This is the location of the private key for the certificate specified in ssl-cert.
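Before starting an SST it can be worth confirming that the three files actually belong together; a quick sketch using standard openssl commands (paths are placeholders):

# the certificate should verify against the CA
$ openssl verify -CAfile /path/to/ca.pem /path/to/server-cert.pem
# the key and certificate moduli should match
$ openssl x509 -noout -modulus -in /path/to/server-cert.pem | openssl md5
$ openssl rsa -noout -modulus -in /path/to/server-key.pem | openssl md5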

Nov 14, 2016
--

Using Vault with MySQL

Encrypt your secrets and use Vault with MySQL

In my previous post I discussed using GPG to secure your database credentials. This relies on a local copy of your MySQL client config, but what if you want to keep the credentials stored safely along with other super secret information? Sure, GPG could still be used, but there must be an easier way to do this.

This post will look at a way to use Vault to store your credentials in a central location and use them to access your database. For those of you that have not yet come across Vault, it is a great way to manage your secrets – securing, storing and tightly controlling access. It has the added benefits of being able to handle leasing, key revocation, key rolling and auditing.

During this blog post we’ll accomplish the following tasks:

  1. Download the necessary software
  2. Get a free SAN certificate to use for Vault’s API and automate certificate renewal
  3. Configure Vault to run under a restricted user and secure access to its files and the API
  4. Create a policy for Vault to provide access control
  5. Enable TLS authentication for Vault and create a self-signed client certificate using OpenSSL to use with our client
  6. Add a new secret to Vault and gain access from a client using TLS authentication
  7. Enable automated, expiring MySQL grants

Before continuing onwards, I should drop in a quick note to say that the following is a quick example to show you how you can get Vault up and running and use it with MySQL; it is not a guide to production setup and does not cover High Availability (HA) implementations, etc.


Download time

In addition to Vault, we will be using a few other tools: Let’s Encrypt, OpenSSL and json_pp (a command line utility using JSON::PP). For this post we’ll be using Ubuntu 16.04 LTS, and we’ll presume that these aren’t yet installed.

$ sudo apt-get install letsencrypt openssl libjson-pp-perl

If you haven’t already heard of Let’s Encrypt then it is a free, automated, and open Certificate Authority (CA) enabling you to secure your website or other services without paying for an SSL certificate; you can even create Subject Alternative Name (SAN) certificates to make your life even easier, allowing one certificate to be used for a number of different domains. The Electronic Frontier Foundation (EFF) provides Certbot, the recommended tool to manage your certificates, which is the new name for the letsencrypt software. If you don’t have letsencrypt/certbot in your package manager then you should be able to use the quick install method. We’ll be using json_pp to prettify the JSON output from the Vault API and openssl to create a client certificate.

We also need to download Vault, choosing the binary relevant for your Operating System and architecture. At the time of writing this, the latest version of Vault is 0.6.2, so the following steps may need adjusting if you use a different version.

# Download Vault (Linux x86_64), SHA256SUMS and signature
$ wget https://releases.hashicorp.com/vault/0.6.2/vault_0.6.2_linux_amd64.zip \
    https://releases.hashicorp.com/vault/0.6.2/vault_0.6.2_SHA256SUMS.sig \
    https://releases.hashicorp.com/vault/0.6.2/vault_0.6.2_SHA256SUMS
# Import the GPG key
$ gpg --keyserver pgp.mit.edu --recv-keys 51852D87348FFC4C
# Verify the checksums
$ gpg --verify vault_0.6.2_SHA256SUMS.sig
gpg: assuming signed data in `vault_0.6.2_SHA256SUMS'
gpg: Signature made Thu 06 Oct 2016 02:08:16 BST using RSA key ID 348FFC4C
gpg: Good signature from "HashiCorp Security <security@hashicorp.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE  F189 5185 2D87 348F FC4C
# Verify the download
$ sha256sum --check <(fgrep vault_0.6.2_linux_amd64.zip vault_0.6.2_SHA256SUMS)
vault_0.6.2_linux_amd64.zip: OK
# Extract the binary
$ sudo unzip -j vault_0.6.2_linux_amd64.zip -d /usr/local/bin
Archive:  vault_0.6.2_linux_amd64.zip
  inflating: /usr/local/bin/vault

Let’s Encrypt… why not?

We want to be able to access Vault from wherever we are, and we can put additional security in place to prevent unauthorised access, but first we need to get ourselves encrypted. The following example shows the setup on a public server, allowing the CA to authenticate your request. More information on different methods can be found in the Certbot documentation.

$ sudo letsencrypt --webroot -w /home/www/vhosts/default/public -d myfirstdomain.com -d myseconddomain.com
#IMPORTANT NOTES:
# - Congratulations! Your certificate and chain have been saved at
#   /etc/letsencrypt/live/myfirstdomain.com/fullchain.pem. Your cert will
#   expire on 2017-01-29. To obtain a new or tweaked version of this
#   certificate in the future, simply run certbot again. To
#   non-interactively renew *all* of your certificates, run "certbot
#   renew"
# - If you like Certbot, please consider supporting our work by:
#
#   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
#   Donating to EFF:                    https://eff.org/donate-le
#

That’s all it takes to get a SAN SSL certificate! The server on which this was executed has a public webserver serving the domains that the certificates were requested for. During the request process, a file is placed in the specified webroot and is used to authenticate the domain(s) for the request. Essentially, the command said:

myfirstdomain.com and myseconddomain.com use /home/www/vhosts/default/public for the document root, so place your files there

Let’s Encrypt CA issues short-lived certificates (90 days), so you need to keep renewing them, but don’t worry as that is as easy as it was to create them in the first place! You can test that renewal works OK as follows (which will renew all certificates that you have without --dry-run):

$ sudo letsencrypt renew --dry-run
#
#-------------------------------------------------------------------------------
#Processing /etc/letsencrypt/renewal/myfirstdomain.com.conf
#-------------------------------------------------------------------------------
#** DRY RUN: simulating 'letsencrypt renew' close to cert expiry
#**          (The test certificates below have not been saved.)
#
#Congratulations, all renewals succeeded. The following certs have been renewed:
#  /etc/letsencrypt/live/myfirstdomain.com/fullchain.pem (success)
#** DRY RUN: simulating 'letsencrypt renew' close to cert expiry
#**          (The test certificates above have not been saved.)
#
#IMPORTANT NOTES:
# - Your account credentials have been saved in your Certbot
#   configuration directory at /etc/letsencrypt. You should make a
#   secure backup of this folder now. This configuration directory will
#   also contain certificates and private keys obtained by Certbot so
#   making regular backups of this folder is ideal.

Automating renewal

The test run for renewal worked fine, so we can now go and schedule this to take place automatically. I’m using systemd, so the following example uses timers, but cron or similar could be used too. Here’s how to make systemd run the scheduled renewal for you at 06:00 – the renewal process will automatically proceed for any previously-obtained certificates that expire in less than 30 days.

$ cat <<EOF | sudo tee /etc/systemd/system/cert-renewal.service
[Unit]
Description=SSL renewal
[Service]
Type=simple
ExecStart=/usr/bin/letsencrypt renew --quiet
User=root
Group=root
EOF
$ cat <<EOF | sudo tee /etc/systemd/system/cert-renewal.timer
[Unit]
Description=Automatic SSL renewal
[Timer]
OnCalendar=*-*-* 06:00:00
Persistent=true
[Install]
WantedBy=timers.target
EOF
$ sudo systemctl enable cert-renewal.timer
Created symlink from /etc/systemd/system/timers.target.wants/cert-renewal.timer to /etc/systemd/system/cert-renewal.timer.
$ sudo systemctl start cert-renewal.timer
$ sudo systemctl list-timers
NEXT                         LEFT     LAST                         PASSED UNIT                         ACTIVATES
Tue 2016-11-01 06:00:00 UTC  6h left  n/a                          n/a    cert-renewal.timer           cert-renewal.service

Getting started with Vault

Firstly, a quick reminder that this is not an in-depth review, how-to or necessarily best-practice Vault installation as that is beyond the scope of this post. It is just to get you going to test things out, so please read up on the Vault documentation if you want to use it more seriously.

Whilst there is a development server that you can fire up with the command vault server -dev to get yourself testing a little quicker, we’re going to take a little extra time and configure it ourselves and make the data persistent. Vault supports a number of backends for data storage, including Zookeeper, Amazon S3 and MySQL, however the 3 maintained by HashiCorp are consul, file and inmem. The memory storage backend does not provide persistent data, so whilst there could possibly be uses for this it is really only useful for development and testing – it is the storage backend used with the -dev option to the server command. Rather than tackle the installation and configuration of Consul during this post, we’ll use file storage instead.

Before starting the server we’ll create a config, which can be written in one of 2 formats – HCL (HashiCorp Configuration Language) or JSON (JavaScript Object Notation). We’ll use HCL as it is a little cleaner and saves us a little extra typing!

# Create a system user
$ sudo useradd -r -g daemon -d /usr/local/vault -m -s /sbin/nologin -c "Vault user" vault
$ id vault
uid=998(vault) gid=1(daemon) groups=1(daemon)
# Create config and SSL directories and remove global access
$ sudo mkdir /etc/vault /etc/ssl/vault
$ sudo chown vault.root /etc/vault /etc/ssl/vault
$ sudo chmod 750 /etc/vault /etc/ssl/vault
$ sudo chmod 700 /usr/local/vault
# Copy the certificates and key
$ sudo cp -v /etc/letsencrypt/live/myfirstdomain.com/*pem /etc/ssl/vault
/etc/letsencrypt/live/myfirstdomain.com/cert.pem -> /etc/ssl/vault/cert.pem
/etc/letsencrypt/live/myfirstdomain.com/chain.pem -> /etc/ssl/vault/chain.pem
/etc/letsencrypt/live/myfirstdomain.com/fullchain.pem -> /etc/ssl/vault/fullchain.pem
/etc/letsencrypt/live/myfirstdomain.com/privkey.pem -> /etc/ssl/vault/privkey.pem
# Create a combined PEM certificate
$ sudo cat /etc/ssl/vault/{cert,fullchain}.pem | sudo tee /etc/ssl/vault/fullcert.pem
# Write the config to file
$ cat <<EOF | sudo tee /etc/vault/demo.hcl
listener "tcp" {
  address = "10.0.1.10:8200"
  tls_disable = 0
  tls_cert_file = "/etc/ssl/vault/fullcert.pem"
  tls_key_file = "/etc/ssl/vault/privkey.pem"
}
backend "file" {
  path = "/usr/local/vault/data"
}
disable_mlock = true
EOF

So, we’ve now set up a user and some directories to store the config, SSL certificate and key, and also the data, restricting access to the vault user. The config that we wrote specifies that we will use the file backend, storing data in /usr/local/vault/data, and the listener that will be providing TLS encryption using our certificate from Let’s Encrypt. The final setting, disable_mlock is not recommended for production and is being used to avoid some extra configuration during this post. More details about the other options available for configuration can be found in the Server Configuration section of the online documentation.

Please note that the Vault datadir should be kept secured as it contains all of the keys and secrets. In the example, we have done this by placing it in the vault user’s home directory and only allowing the vault user access. You can take this further by restricting local access (via logins) and access control lists.

Starting Vault

Time to start the server and see if everything is looking good!

$ sudo -su vault vault server -config=/etc/vault/demo.hcl >/tmp/vault-debug.log 2>&1 &
$ jobs
[1]  + running    sudo -su vault vault server -config=/etc/vault/demo.hcl > /tmp/vault-debug.lo
$ VAULT_ADDR=https://myfirstdomain.com:8200 vault status
Error checking seal status: Error making API request.
URL: GET https://myfirstdomain.com:8200/v1/sys/seal-status
Code: 400. Errors:
* server is not yet initialized

Whilst it looks like something is wrong (we need to initialize the server), it does mean that everything is otherwise working as expected. So, we’ll initialize Vault, which is a pretty simple task, but you do need to make a note of/store some of the information that you will be given by the server during initialization – the unseal keys and the initial root token. You should distribute these to somewhere safe, but for now we’ll store them with the config.

# Change to vault user
$ sudo su -l vault -s /bin/bash
(vault)$ export VAULT_ADDR=https://myfirstdomain.com:8200 VAULT_SSL=/etc/ssl/vault
# Initialize Vault and save the token and keys
(vault)$ vault init 2>&1 | egrep '^Unseal Key|Initial Root Token' >/etc/vault/keys.txt
(vault)$ chmod 600 /etc/vault/keys.txt
# Unseal Vault
(vault)$ egrep -m3 '^Unseal Key' /etc/vault/keys.txt | cut -f2- -d: | tr -d ' ' |
while read key
do
  vault unseal \
    -ca-cert=${VAULT_SSL}/fullchain.pem \
    -client-cert=${VAULT_SSL}/client.pem \
    -client-key=${VAULT_SSL}/privkey.pem ${key}
done
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 1
Sealed: true
Key Shares: 5
Key Threshold: 3
Unseal Progress: 2
Sealed: false
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
# Check Vault status
(vault)$ vault status
Sealed: false
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
Version: 0.6.2
Cluster Name: vault-cluster-ebbd5ec7
Cluster ID: 61ae8f54-f420-09c1-90bb-60c9fbfa18a2
High-Availability Enabled: false

There we go, the vault is initialized and the status command now returns details and confirmation that it is up and running. It is worth noting here that each time you start Vault it will be sealed, which means that it cannot be accessed until 3 unseal keys have been used with vault unseal – for additional security here you would ensure that a single person cannot know any 3 keys, so that it always requires more than one person to (re)start the service.


Setting up a policy

Policies allow you to set access control restrictions to determine the data that authenticated users have access to. Once again the documents used to write policies are in either the HCL or JSON format. They are easy to write and apply, the only catch being that the policies associated with a token cannot be changed (added/removed) once the token has been issued; you need to revoke the token and apply the new policies. However, if you want to change the policy rules themselves, this can be done on-the-fly, as modifications apply on the next call to Vault.
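So, if a token needs a different set of policies, you revoke it and authenticate again to get a new one; a sketch with the Vault 0.6.x CLI (the token value is a placeholder):

# revoke the old token; the next login picks up the current policy set
(vault)$ vault token-revoke <client_token>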

When we initialized the server we were given the initial root key and we now need to use that in order to start configuring the server.

(vault)$ export VAULT_TOKEN=$(egrep '^Initial Root Token:' /etc/vault/keys.txt | cut -f2- -d: | tr -d ' ')

We will create a simple policy that allows us to read the MySQL secrets, but prevents access to the system information and commands.

(vault)$ cat <<EOF > /etc/vault/demo-policy.hcl
path "sys/*" {
  policy = "deny"
}
path "secret/mysql/*" {
  policy = "read"
  capabilities = ["list", "sudo"]
}
EOF
(vault)$ vault policy-write demo /etc/vault/demo-policy.hcl
Policy 'demo' written.

We have only added one policy here, but you should really create as many policies as you need to suitably control access amongst the variety of humans and applications that may be using the service. As with any kind of data storage, planning how to store your data is important, as it will help you write more compact policies with the level of granularity that you require. Writing everything in /secret at the top level will most likely bring you headaches, or long policy definitions!


TLS authentication for MySQL secrets

We’re getting close to adding our first secret to Vault, but first of all we need a way to authenticate our access. Vault provides an API for access to your stored secrets, along with a wealth of commands via direct use of the vault binary, as we are doing at the moment. We will now enable the cert authentication backend, which allows authentication using SSL/TLS client certificates.

(vault)$ vault auth-enable cert
Successfully enabled 'cert' at 'cert'!

Generate a client certificate using OpenSSL

The TLS authentication backend accepts certificates that are either signed by a CA or self-signed, so let’s quickly create ourselves a self-signed SSL certificate using openssl to use for authentication.

# Create a working directory for SSL management and copy in the config
$ mkdir ~/.ssl && cd $_
$ cp /usr/lib/ssl/openssl.cnf .
# Create a 4096-bit CA
$ openssl genrsa -des3 -out ca.key 4096
Generating RSA private key, 4096 bit long modulus
...........++
..........................................................................++
e is 65537 (0x10001)
Enter pass phrase for ca.key:
Verifying - Enter pass phrase for ca.key:
$ openssl req -config ./openssl.cnf -new -x509 -days 365 -key ca.key -out ca.crt
Enter pass phrase for ca.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) [Some-Place]:
Organization Name (eg, company) [Percona]:
Organizational Unit Name (eg, section) [Demo]:
Comon Name (e.g. server FQDN or YOUR name) [ceri]:
Email Address [thisisnotme@myfirstdomain.com]:
# Create a 4096-bit Client Key and CSR
$ openssl genrsa -des3 -out client.key 4096
Generating RSA private key, 4096 bit long modulus
......................++
..................................++
e is 65537 (0x10001)
Enter pass phrase for client.key:
Verifying - Enter pass phrase for client.key:
$ openssl req -config ./openssl.cnf -new -key client.key -out client.csr
Enter pass phrase for client.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) [Some-Place]:
Organization Name (eg, company) [Percona]:
Organizational Unit Name (eg, section) [Demo]:
Comon Name (e.g. server FQDN or YOUR name) [ceri]:
Email Address [thisisnotme@myfirstdomain.com]:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
# Self-sign
$ openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
Signature ok
subject=/C=GB/ST=Some-State/L=Some-Place/O=Percona/OU=Demo/CN=ceri/emailAddress=thisisnotme@myfirstdomain.com
Getting CA Private Key
Enter pass phrase for ca.key:
# Create an unencrypted copy of the client key
$ openssl rsa -in client.key -out privkey.pem
Enter pass phrase for client.key:
writing RSA key
# Copy the certificate for Vault access
$ sudo cp client.crt /etc/ssl/vault/user.pem

OK, there was quite a lot of information there. You can edit openssl.cnf to set reasonable defaults for yourself and save time. In brief, we have created our own CA, created a certificate signed by it, and made an unencrypted copy of the client key (this avoids specifying the password to use it – you may wish to leave the password in place to add more security, assuming that your client application can request the password).

Adding an authorisation certificate to Vault

Now that we have created a certificate and a policy, we need to allow authentication to occur using the certificate. We will give the token a 1-hour expiration and allow access to the MySQL secrets via the demo policy that we created in the previous step.

(vault)$ vault write auth/cert/certs/demo \
    display_name=demo \
    policies=demo \
    certificate=@${VAULT_SSL}/user.pem \
    ttl=3600
Success! Data written to: auth/cert/certs/demo
$ curl --cert user.pem --key privkey.pem ${VAULT_ADDR}/v1/auth/cert/login -X POST
{"request_id":"d5715ce1-2c6c-20c8-83ef-ce6259ad9110","lease_id":"","renewable":false,"lease_duration":0,"data":null,"wrap_info":null,"warnings":null,"auth":{"client_token":"e3b98fac-2676-9f44-fdc2-41114360d2fd","accessor":"4c5b4eb5-4faf-0b01-b732-39d309afd216","policies":["default","demo"],"metadata":{"authority_key_id":"","cert_name":"demo","common_name":"thisisnotme@myfirstdomain.com","subject_key_id":""},"lease_duration":600,"renewable":true}}

Awesome! We requested our first client token using an SSL client certificate; we are logged in, and we were given our access token (client_token) in the response, which provides us with a 1-hour lease (lease_duration) to go ahead and make requests as a client without reauthenticating. But there is nothing in the vault right now.


Ssshh!! It’s secret!

“The time has come,” the Vault master said, “to encrypt many things: our keys and passwords and top-secret notes, our MySQL DSNs and strings.”

Perhaps the easiest way to use Vault with your application is to store information there as you would do in a configuration file and read it when the application first requires it. An example of such information is the Data Source Name (DSN) for a MySQL connection, or perhaps the information needed to dynamically generate a .my.cnf. As this is about using Vault with MySQL we will do exactly that and store the user, password and connection method as our first secret, reading it back using the command line tool to check that it looks as expected.

(vault)$ vault write secret/mysql/test password="mysupersecretpassword" user="percona" socket="/var/run/mysqld/mysqld.sock"
Success! Data written to: secret/mysql/test
(vault)$ vault read secret/mysql/test
Key                     Value
---                     -----
refresh_interval        768h0m0s
password                mysupersecretpassword
socket                  /var/run/mysqld/mysqld.sock
user                    percona
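To feed these values to MySQL tooling, you can pull individual fields back out; a hedged sketch, assuming your Vault CLI supports the -field flag (file paths are examples only):

# build a throw-away client config from the stored secret
(vault)$ cat <<EOF > /tmp/vault-client.cnf
[client]
user=$(vault read -field=user secret/mysql/test)
password=$(vault read -field=password secret/mysql/test)
socket=$(vault read -field=socket secret/mysql/test)
EOF
(vault)$ chmod 600 /tmp/vault-client.cnf
(vault)$ mysql --defaults-extra-file=/tmp/vault-client.cnf -e "SELECT CURRENT_USER()"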

A little while back (hopefully less than 1 hour ago!) we authenticated using cURL and gained a token, so now that we have something secret to read we can try it out. Fanfares and trumpets at the ready…

$ curl --cert user.pem --key privkey.pem -H 'Content-type: application/json' -H 'X-Vault-Token: 2f1fb630-cbe9-a8c9-5931-515a12d79291' ${VAULT_ADDR}/v1/secret/mysql/test -X GET 2>/dev/null | json_pp
{
   "wrap_info" : null,
   "lease_id" : "",
   "request_id" : "c79033b1-f8f7-be89-4208-44d721a55804",
   "auth" : null,
   "data" : {
      "password" : "mysupersecretpassword",
      "socket" : "/var/run/mysqld/mysqld.sock",
      "user" : "percona"
   },
   "lease_duration" : 2764800,
   "renewable" : false,
   "warnings" : null
}

We did it! Now there is no longer any need to store passwords in your code or config files; you can just go and get them from Vault when you need them – for example, when your application starts (holding them in memory), or on demand if your application can tolerate the additional latency. You would need to take further steps to make sure that your application is tolerant of Vault going down, as well as providing an HA setup of Vault to minimise the risk of the secrets being unavailable.

It doesn’t stop here though…


On-demand MySQL grants

Vault acts like a virtual filesystem and uses the generic storage backend by default, mounted as /secret, but due to powerful abstraction it is possible to use many other backends as mountpoints such as an SQL database, AWS IAM, HSMs and much more. We have kept things simple and been using the generic backend so far. You can view the available (mounted) backends using the mounts command:

(vault)$ vault mounts
Path        Type       Default TTL  Max TTL  Description
secret/     generic    system       system   generic secret storage
sys/        system     n/a          n/a      system endpoints used for control, policy and debugging

We are now going to enable the MySQL backend, add the management connection (which will use the auth_socket plugin) and then request a new MySQL user that will auto-expire!

# Create a dedicated MySQL user account
$ mysql -Bsse "CREATE USER vault@localhost IDENTIFIED WITH auth_socket; GRANT CREATE USER, SELECT, INSERT, UPDATE ON *.* TO vault@localhost WITH GRANT OPTION;"
# Enable the MySQL backend and set the connection details
(vault)$ vault mount mysql
(vault)$ vault write mysql/config/connection connection_url="vault:vault@unix(/var/run/mysqld/mysqld.sock)/"
Read access to this endpoint should be controlled via ACLs as it will return the connection URL as it is, including passwords, if any.
# Write the template for the readonly role
(vault)$ vault write mysql/roles/readonly \
 sql="CREATE USER '{{name}}'@'%' IDENTIFIED WITH mysql_native_password BY '{{password}}' PASSWORD EXPIRE INTERVAL 1 DAY; GRANT SELECT ON *.* TO '{{name}}'@'%';"
Success! Data written to: mysql/roles/readonly
# Set the lease on MySQL grants
(vault)$ vault write mysql/config/lease lease=1h lease_max=12h
Success! Data written to: mysql/config/lease

Here you can see that a template is created so that you can customise the grants per role. We created a readonly role, so it just has SELECT access. We have also set an expiration on the account so that MySQL will automatically mark the password as expired and prevent access. This is not strictly necessary, since Vault removes the user accounts that it created as it expires the tokens, but the extra level in MySQL allows you to set the Vault lease (which appears to be global) a little longer than required and vary the effective lifetime per role using MySQL password expiration. You could also use it as a way of tracking which Vault-generated MySQL accounts are going to expire soon. The important part is to ensure that the application is tolerant of reauthentication, whether it hands off work whilst doing so, accepts added latency, or terminates and respawns.
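If a generated credential needs to be cut short (after an incident, for example), the lease that comes back with the credentials can be revoked early; a sketch with the 0.6.x CLI (the lease ID is a placeholder):

# revoking the lease makes Vault drop the MySQL user it created
(vault)$ vault revoke mysql/creds/readonly/<lease_id>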

Now we will authenticate and request our user to connect to the database with.

$ curl --cert user.pem --key privkey.pem -H 'Content-type: application/json' ${VAULT_ADDR}/v1/auth/cert/login -X POST 2>/dev/null | json_pp
{
   "auth" : {
      "policies" : [
         "default",
         "demo"
      ],
      "accessor" : "2e6d4b95-3bf5-f459-cd27-f9e35b9bed16",
      "renewable" : true,
      "lease_duration" : 3600,
      "metadata" : {
         "common_name" : "thisisnotme@myfirstdomain.com",
         "cert_name" : "demo",
         "authority_key_id" : "",
         "subject_key_id" : ""
      },
      "client_token" : "018e6feb-65c4-49f2-ae30-e4fbba81e687"
   },
   "lease_id" : "",
   "wrap_info" : null,
   "renewable" : false,
   "data" : null,
   "request_id" : "f00fe669-4382-3f33-23ae-73cec0d02f39",
   "warnings" : null,
   "lease_duration" : 0
}
$ curl --cert user.pem --key privkey.pem -H 'Content-type: application/json' -H 'X-Vault-Token: 018e6feb-65c4-49f2-ae30-e4fbba81e687' ${VAULT_ADDR}/v1/mysql/creds/readonly -X GET 2>/dev/null | json_pp
{
   "errors" : [
      "permission denied"
   ]
}

Oh, what happened? Well, remember the policy that we created earlier? We hadn’t allowed access to the MySQL role generator, so we need to update and apply the policy.

(vault)$ cat <<EOF | vault policy-write demo /dev/stdin
path "sys/*" {
  policy = "deny"
}
path "secret/mysql/*" {
  policy = "read"
  capabilities = ["list", "sudo"]
}
path "mysql/creds/readonly" {
  policy = "read"
  capabilities = ["list", "sudo"]
}
EOF
Policy 'demo' written.

Now that we have updated the policy to allow access to the readonly role (requests go via mysql/creds when requesting access) we can check that the policy has applied and whether we get a user account for MySQL.

# Request a user account
$ curl --cert user.pem --key privkey.pem -H 'Content-type: application/json' -H 'X-Vault-Token: 018e6feb-65c4-49f2-ae30-e4fbba81e687' ${VAULT_ADDR}/v1/mysql/creds/readonly -X GET 2>/dev/null | json_pp
{
   "request_id" : "7b45c9a1-bc46-f410-7af2-18c8e91f43de",
   "lease_id" : "mysql/creds/readonly/c661426c-c739-5bdb-cb7a-f51f74e16634",
   "warnings" : null,
   "lease_duration" : 3600,
   "data" : {
      "password" : "099c8f2e-588d-80be-1e4c-3c2e20756ab4",
      "username" : "read-cert-401f2c"
   },
   "wrap_info" : null,
   "renewable" : true,
   "auth" : null
}
# Test MySQL access
$ mysql -h localhost -u read-cert-401f2c -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 17
Server version: 5.7.14-8-log Percona Server (GPL), Release '8', Revision '1f84ccd'
Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show grants;
+-----------------------------------------------+
| Grants for read-cert-401f2c@%                 |
+-----------------------------------------------+
| GRANT SELECT ON *.* TO 'read-cert-401f2c'@'%' |
+-----------------------------------------------+
1 row in set (0.00 sec)
# Display the full account information
$ pt-show-grants --only='read-cert-401f2c'@'%'
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.7.14-8-log at 2016-11-08 23:28:37
-- Grants for 'read-cert-401f2c'@'%'
CREATE USER IF NOT EXISTS 'read-cert-401f2c'@'%';
ALTER USER 'read-cert-401f2c'@'%' IDENTIFIED WITH 'mysql_native_password' AS '*FF157E33408E1FBE707B5FF89C87A2D14E8430C2' REQUIRE NONE PASSWORD EXPIRE INTERVAL 1 DAY ACCOUNT UNLOCK;
GRANT SELECT ON *.* TO 'read-cert-401f2c'@'%';

Hurrah! Now we don’t even need to go and create a user; the application can get one when it needs one. We’ve made the account auto-expire so that the credentials are only valid for 1 day, regardless of Vault expiration, and we’ve also reduced the amount of time that the token is valid, so we’ve done a pretty good job of limiting the window of opportunity for any rogue activity.


We’ve covered quite a lot in this post, some detail for which has been left out to keep us on track. The online documentation for OpenSSL, Let’s Encrypt and Vault is pretty good, so you should be able to take a deeper dive should you wish to. Hopefully, this post has given a good enough introduction to Vault to get you interested and looking to test it out, as well as bringing the great Let’s Encrypt service to your attention so that there’s very little reason not to provide a secure online experience for your readers, customers and services.

Feb 23, 2016
--

MySQL connection using SSL… or not?

In this blog post, we’ll discuss how we can determine if a MySQL connection is using SSL.

Since MySQL 5.7.5 the server generates SSL certificates (see auto_generate_certs) by default if compiled with SSL, or uses mysql_ssl_rsa_setup if compiled with YaSSL.

But how can we check to see if our MySQL client connection uses SSL?

When using an interactive client, it’s easy! You have two options:

1. Check the status (\s):

mysql> \s
--------------
mysql  Ver 14.14 Distrib 5.7.11, for Linux (x86_64) using  EditLine wrapper
Connection id:		7
Current database:
Current user:		root@localhost
SSL:			Cipher in use is DHE-RSA-AES256-SHA
Current pager:		stdout
Using outfile:		''
Using delimiter:	;
Server version:		5.7.11-log MySQL Community Server (GPL)
Protocol version:	10
Connection:		Localhost via UNIX socket
Server characterset:	latin1
Db     characterset:	latin1
Client characterset:	utf8
Conn.  characterset:	utf8
UNIX socket:		/var/lib/mysql/mysql.sock
Uptime:			36 min 33 sec

As you can see, for that connection, we are indeed using SSL:

SSL: Cipher in use is DHE-RSA-AES256-SHA

2. Use the status variables

Ssl_version

  and

Ssl_cipher

:

mysql> show  status like 'Ssl_version';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Ssl_version   | TLSv1.1 |
+---------------+---------+
mysql> show  status like 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+

But is there a way to verify this on all the connections? For example, if we have some applications connected to our database server, how do we verify which connections are using SSL and which are not?

Yes, there is a solution: Performance_Schema!

This is how:

mysql> SELECT sbt.variable_value AS tls_version,  t2.variable_value AS cipher,
              processlist_user AS user, processlist_host AS host
       FROM performance_schema.status_by_thread  AS sbt
       JOIN performance_schema.threads AS t ON t.thread_id = sbt.thread_id
       JOIN performance_schema.status_by_thread AS t2 ON t2.thread_id = t.thread_id
      WHERE sbt.variable_name = 'Ssl_version' and t2.variable_name = 'Ssl_cipher' ORDER BY tls_version;
+-------------+--------------------+------+-----------+
| tls_version | cipher             | user | host      |
+-------------+--------------------+------+-----------+
|             |                    | root | localhost |
| TLSv1       | DHE-RSA-AES256-SHA | root | localhost |
| TLSv1.1     | DHE-RSA-AES256-SHA | root | localhost |
+-------------+--------------------+------+-----------+

That’s it. Isn’t that easy?

May 18, 2015
--

Percona security update: oCERT and SSL improvements

We have recently become a member of oCERT to aid in allowing responsible disclosure for Percona products and services as can be seen on their members page.

We are presently working on the verbiage for the responsible disclosure program, and we are also investigating establishing a bug bounty program. In the meantime, you can refer to our security contact page, which will be updated as more information becomes available.

Secondly, as you have quite possibly noticed, www.percona.com now enforces SSL, and requests are redirected to https://www.percona.com should an HTTP request be made.

This is but one small part of the continuing security initiative here at Percona, and one I am happy to finally announce the completion of, as it had been on the “list” for some time.

The current SSL configuration follows best practices, such as those laid out in the Mozilla Security Server Side TLS wiki entry, and as such gains an A+ rating from Qualys’ SSLLabs.com.

There are of course still improvements to be made, and we are incrementally deploying those as they are completed and pass QA which sometimes leads to unavoidable delays. I would like to thank isvsecwatch for their report (which came in near the end of the overhaul process) and their patience in the extended time it took to get it into production.


Mar 05, 2015
--

How to test if CVE-2015-0204 FREAK SSL security flaw affects you

The CVE-2015-0204 FREAK SSL vulnerability abuses intentionally weak “EXPORT” ciphers, which could be used to perform a transparent Man In The Middle attack. (We seem to be continually bombarded not only with SSL vulnerabilities but also with the need to give vulnerabilities increasingly odd names.)

Is your server vulnerable?

This can be tested using the following GIST

If the result is 0, the server is not offering the EXPORT cipher and, as such, is not vulnerable.
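The gist itself isn’t reproduced here, but a minimal check along the same lines can be done with openssl (the host name is a placeholder); if the server refuses the EXPORT handshake, no certificate comes back and the count is 0:

$ echo | openssl s_client -connect example.com:443 -cipher EXPORT 2>/dev/null | grep -c "BEGIN CERTIFICATE"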

Is your client vulnerable?

Point your client to https://oneiroi.co.uk:4443/test. If this returns “Vulnerable” then the client is vulnerable; if you find a connection error, your client should not be vulnerable. For example:

root@host:/tmp$ openssl version
OpenSSL 1.0.1e 11 Feb 2013
root@host:/tmp$ curl https://oneiroi.co.uk:4443/test -k
Vulnerable

root@host:/tmp$ openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1

Certificate chain
0 s:/C=XX/L=Default City/O=Default Company Ltd
i:/C=XX/L=Default City/O=Default Company Ltd

Server certificate
-----BEGIN CERTIFICATE-----
MIIDVzCCAj+gAwIBAgIJANvTn7jl

[root@3654e4df1cc2 bin]# curl https://oneiroi.co.uk:4443/test -k
curl: (35) Cannot communicate securely with peer: no common encryption algorithm(s).
[root@3654e4df1cc2 bin]# openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
139942442694560:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:744:

In short, a vulnerable client will complete the connection, and a non-vulnerable client should present an SSL handshake failure error.

DIY

You can recreate this setup yourself:


openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mycert.pem -out mycert.pem;
openssl s_server -cipher EXPORT -accept 4443 -cert mycert.pem -HTTP;

Is MySQL affected ?

Some of the code per the POODLE Blog post can be re-purposed here.


mysql -Bse "SHOW STATUS LIKE 'Ssl_cipher_list'" | sed 's/:/\n/g' | grep EXP | wc -l

A result of 0 means the MySQL instance does not support any of the EXPORT ciphers, and thus should not be vulnerable to this attack.

How about other clients?

Most clients link to another library for SSL purposes; however, there are examples where this is not the case – take, for example, golang (http://golang.org/pkg/crypto/tls/), which partially implements the TLS 1.2 RFC.

The following test code, however, shows that golang does not appear to be affected.


package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
)

func main() {
    tr := &http.Transport{
        TLSClientConfig:    &tls.Config{},
        DisableCompression: true,
    }
    client := &http.Client{Transport: tr}
    resp, err := client.Get("https://oneiroi.co.uk:4443/test")
    fmt.Println(err)
    fmt.Println(resp)
}

Get https://oneiroi.co.uk:4443/test: remote error: handshake failure

SSLLabs

Qualys’ SSLLabs now has a test available here: https://dev.ssllabs.com/ssltest/viewMyClient.html


Apr 09, 2014
--

Heartbleed: Separating FAQ From FUD

If you’ve been following this blog (my colleague, David Busby, posted about it yesterday) or any tech news outlet in the past few days, you’ve probably seen some mention of the “Heartbleed” vulnerability in certain versions of the OpenSSL library.

So what is ‘Heartbleed’, really?

In short, Heartbleed is an information-leak issue. An attacker can exploit this bug to retrieve the contents of a server’s memory without any need for local access. According to the researchers that discovered it, this can be done without leaving any trace of compromise on the system. In other words, if you’re vulnerable, they can steal your keys and you won’t even notice that they’ve gone missing. I use the word “keys” literally here; by being able to access the contents of the impacted service’s memory, the attacker is able to retrieve, among other things, private encryption keys for SSL/TLS-based services, which means that the attacker could be able to decrypt communications, impersonate other users (see here, for example, for a session hijacking attack based on this bug), and generally gain access to data which is otherwise believed to be secure. This is a big deal. It isn’t often that bugs have their own dedicated websites and domain names, but this one does: http://www.heartbleed.com

Why is it such a big deal?

One, because it has apparently been in existence since at least 2012. Two, because SSL encryption is widespread across the Internet. And three, because there’s no way to know if your keys have been compromised. The best detection that currently exists for this are some Snort rules, but if you’re not using Snort or some other IDS, then you’re basically in the dark.

What kinds of services are impacted?

Any software that uses the SSL/TLS features of a vulnerable version of OpenSSL. This means Apache, NGINX, Percona Server, MariaDB, the commercial version of MySQL 5.6.6+, Dovecot, Postfix, SSL/TLS VPN software (OpenVPN, for example), instant-messaging clients, and many more. Also, software packages that bundle their own vulnerable version of SSL rather than relying on the system’s version, which might be patched. In other words, it’s probably easier to explain what isn’t affected.

What’s NOT impacted?

SSH does not use SSL/TLS, so you’re OK there. If you downloaded a binary installation of MySQL community from Oracle, you’re also OK, because the community builds use yaSSL, which is not known to be vulnerable to this bug. In fact, anything that’s been built against yaSSL (or some other SSL library that isn’t OpenSSL) should be fine. Obviously, any service which doesn’t use SSL/TLS is not going to be vulnerable, either, since the salient code paths aren’t going to be executed. So, for example, if you don’t use SSL for your MySQL connections, then this bug isn’t going to affect your database server, although it probably still impacts you in other ways (e.g., your web servers).

What about Amazon cloud services?

According to Amazon’s security bulletin on the issue, all vulnerable services have been patched, but they still recommend that you rotate your SSL certificates.

Do I need to upgrade Percona Server, MySQL, NGINX, Apache, or other server software?

No, not unless you built any of these and statically-linked them with a vulnerable version of OpenSSL. This is not common. 99% of affected users can fix this issue by upgrading their OpenSSL libraries and cycling their keys/certificates.

What about client-level tools, like Percona Toolkit or XtraBackup?

Again, no. The client sides of Percona Toolkit, Percona XtraBackup, and Percona Cloud Tools (PCT) are not impacted. Moreover, the PCT website has already been patched. Encrypted backups are still secure.

There are some conflicting reports out there about exactly how much information leakage this bug allows. What’s the deal?

Some of the news articles and blogs claim that with this bug, any piece of the server’s memory can be accessed. Others have stated that the disclosure is limited to memory space owned by processes using a vulnerable version of OpenSSL. As far as we are aware, and as reported in CERT’s Vulnerability Notes Database, the impact of the bug is the latter; i.e., it is NOT possible for an attacker to use this exploit to retrieve arbitrary bits of your server’s memory, only bits of memory from your vulnerable services. That said, your vulnerable services could still leak information that attackers could exploit in other ways.

How do I know if I’m affected?

You can test your web server at http://filippo.io/Heartbleed/ – enter your site’s URL and see what it says. If you have Go installed, you can also get a command-line tool from Github and test from the privacy of your own workstation. There’s also a Python implementation. You can also check the version of OpenSSL that’s installed on your servers. If you’re running OpenSSL 1.0.1 through 1.0.1f or 1.0.2-beta, you’re vulnerable. (Side note here: some distributions, such as RHEL/CentOS, have patched the issue without actually updating the OpenSSL version number; on RPM-based systems you can view the changelog to see if it’s been fixed, for example):

rpm -q --changelog openssl | head -2
* Mon Apr 07 2014 Tomáš Mráz <tmraz@redhat.com> 1.0.1e-16.7
- fix CVE-2014-0160 - information disclosure in TLS heartbeat extension

Also, note that versions of OpenSSL prior to 1.0.1 are not known to be vulnerable. If you’re still unsure, we would be happy to assist you in determining the extent of the issue in your environment and taking any required corrective action. Just give us a call.

How can I know if I’ve been compromised?

If you’ve already been compromised, you have no way of knowing. However, if you use Snort as an IDS, you can use some rules developed by Fox-IT to detect and defer new attacks; other IDS/IPS vendors may have similar rule updates available. Without some sort of IDS in place, however, attacks can and will go unnoticed.

Are there any exploits for this currently out there?

Currently, yes, there are some proofs-of-concept floating around out there, although before this week, that was uncertain. But given that this is likely a 2-year-old bug, I would be quite surprised if someone, somewhere (you came to my talk at Percona Live last week, didn’t you? Remember what I said about assuming that you’re already owned? There are No Secrets Anymore.) didn’t have a solid, viable exploit.

So, then, what should I do?

Ubuntu, RedHat/CentOS, Amazon, and Fedora have already released patched versions of the OpenSSL library. Upgrade now. Do it now, as in right now. If you’ve compiled your own version of OpenSSL from source, upgrade to 1.0.1g or rebuild your existing source with the -DOPENSSL_NO_HEARTBEATS flag.

Once that’s been done, stop any certificate-using services, regenerate the private keys for any services that use SSL (Apache, MySQL, whatever), and then obtain a new SSL certificate from whatever certificate authority you’re using. Once the new certificates are installed, restart your services. You can also, of course, do the private key regeneration and certificate cycling on a separate machine, and only bring the service down long enough to install the new keys and certificates. Yes, you will need to restart MySQL. Or Apache. Or whatever. Full stop, full start, no SIGHUP (or equivalent).
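For MySQL’s own SSL files specifically, regenerating the server key and re-issuing the certificate can be done with the standard openssl steps; a hedged sketch, assuming your CA key is still trusted and available (filenames follow the usual MySQL conventions):

# new private key and signing request for the server
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -out server-req.pem -subj "/CN=mysql-server"
# sign with the CA, then point ssl-key/ssl-cert at the new files and restart MySQL
openssl x509 -req -days 365 -set_serial 02 -in server-req.pem -CA ca.pem -CAkey ca-key.pem -out server-cert.pem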

Unfortunately, that’s still not all. Keeping in mind the nature of this bug, you should also change / reset any user passwords and/or user sessions that were in effect prior to patching your system and recycling your keys. See, for example, the session hijacking exploit referenced above. Also note that Google services were, prior to their patching of the bug, vulnerable to this issue, which means that it’s entirely possible that your Gmail login (or any other Google-related login) could have been compromised.

Can I get away with just upgrading OpenSSL?

NO. At a bare minimum, you will need to restart your services, but in order to really be sure you’ve fixed the problem, you also need to cycle your keys and certificates (and revoke your old ones, if possible). This is the time-consuming part, but since you have no way of knowing whether or not someone has previously compromised your private keys (and you can bet that now that the bug is public, there will be a lot of would-be miscreants out there looking for servers to abuse), the only safe thing to do is cycle them. You wouldn’t leave your locks unchanged after being burgled, would you?

Also note that once you do upgrade OpenSSL, you can get a quick list of the services that need to be restarted by running the following:

sudo lsof | grep ssl | grep DEL

Where can I get help and/or more information?

In addition to the assorted links already mentioned, you can read up on the nuts and bolts of this bug, or various news articles such as this or this. There are a lot of articles out there right now, most of which are characterizing this as a major issue. It is. Test your vulnerability, upgrade your OpenSSL and rotate your private keys and certificates, and then change your passwords.

The post Heartbleed: Separating FAQ From FUD appeared first on MySQL Performance Blog.

Nov
18
2013
--

MySQL encryption performance, revisited

This is part two of a two-part series on the performance implications of in-flight data encryption with MySQL. In the first part, I focused specifically on the impact of using MySQL’s built-in SSL support with some rather surprising results. Certainly it was expected that query throughput would be lower with SSL than without, but I was rather surprised by the magnitude of the performance hit incurred at connection setup time. These results naturally lent themselves to some further investigation; in particular, I wanted to compare performance differences between MySQL’s built-in SSL encryption facilities and external encryption technologies, such as SSH tunneling. I’ll also be using this post to address a couple of questions posed in the comments on my original article. So, without further ado….

Test Environment

The various tests discussed in this post involved a total of four different machines:

  • Machine A: m1.xlarge EC2 instance (4 vCPU/15GB RAM/Amazon Linux) in US-West-2/Oregon
  • Machine B: m1.xlarge EC2 instance (4 vCPU/15GB RAM/Amazon Linux) in EU-West/Ireland
  • Machine C: Intel Core i7-2600K 3.4GHz (8 HT cores/32GB RAM/CentOS 6.4)
  • Machine D: Intel Core i3-550 3.2GHz (4 HT cores/16GB RAM/CentOS 6.4)

Some tests were conducted with MySQL 5.6.13-community, and other tests were conducted with Percona Server 5.6.13.

External Encryption Technology

For this test, I looked at one of the most common ways of establishing a site-to-site link in the absence of a “true” VPN – the good old-fashioned SSH tunnel. I don’t have ready access to sufficient gear to set up a hardware-accelerated VPN, but this is enough to illustrate the point. Recall that the default SSL cipher suite used by MySQL/SSL is DHE-RSA-AES256-SHA; if we unpack this a bit, it tells us that we’re using Diffie-Hellman key exchange with SHA1 as our hashing function, RSA for authentication, and encryption with 256-bit AES (in CBC mode, according to the OpenSSL docs). Although perhaps not immediately obvious, this same cipher suite is pretty easy to emulate with OpenSSH. The SSH version 2 protocol uses DHE/RSA/SHA1 by default, so then all we need to do is explicitly specify the AES256-CBC cipher when we’re setting up our tunnel, and we should be, for all intents and purposes, comparing encrypted apples to encrypted apples. For the sake of curiosity, we also try our SSH tunnel with AES256 in CTR mode, which should theoretically be a bit faster due to its ability to encrypt blocks in parallel, but as it turns out, at least for this test, the difference is marginal.
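For reference, the tunnel can be brought up along these lines; the host name and forwarded port are made-up placeholders rather than the exact commands used in the test:

# forward local port 3307 to MySQL on the remote host, forcing 256-bit AES in CBC mode
ssh -f -N -c aes256-cbc -L 3307:127.0.0.1:3306 user@db-server
# for the CTR-mode comparison, substitute -c aes256-ctr

The client then connects to 127.0.0.1:3307 instead of the remote server directly, and all MySQL traffic rides inside the encrypted SSH session.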

The machines used for this test were machines C (server) and D (client), which sit on the same gigabit Ethernet VLAN, and the test script in use was similar to that from part 1, wherein we attempt to create 100 connections as fast as possible. Each test configuration was run 10 times, with the averages and standard deviations for each test listed in the table below, and the numbers are given in connections per second. Also, note that for this particular test, all keys used are 4096 bits and tests were run against Percona Server 5.6.13 only.

 

Configuration             Connections/sec, avg (std. dev.)
No encryption             1001.33 (59.26)
MySQL + SSL               22.23 (0.1392)
SSH tunnel (AES256-CBC)   476.52 (11.87)
SSH tunnel (AES256-CTR)   482.02 (13.42)


Or, for those of you who like graphics, I think the chart speaks for itself.
[Chart: connection-throughput2 (connections per second by configuration)]

 

Obviously, not using encryption is still the fastest game in town, but connection throughput over an SSH tunnel doesn’t obliterate performance to anywhere near the same level as using MySQL’s native SSL capability. Going from 1000 cps to 22 cps is potentially unusable, but I’d venture a guess that 470-480 cps in a single thread would still provide serviceable results for most people.

Connection Performance over High-Latency Links

The data for this test came out of a question left in the comments on my original post; specifically, how the SSL connection penalty is impacted by increasing network latency. We can see from the results above that on a low-latency link, using SSL dramatically impacts performance, but what if we’re doing that connection over a WAN link? It might be the case that if we’re already paying a latency penalty due to simple round-trip time, maybe throwing encryption into the mix won’t hurt that much, even if we do it with MySQL’s built-in SSL support. Thus, for this test, I spun up two different Amazon EC2 instances (machines A and B, described above). Machine C, located in Northern California, was used as the client, and this test was conducted with both Community MySQL and Percona Server, and with key sizes ranging from zero to 4096 bits. The SSL cipher suite used was the default, and the test script the same as before – run 10 trials, create 100 connections as fast as we can, and report results in connections per second. However, for this test, the raw numbers are somewhat secondary; we’re really just looking to see the impact of network latency on SSL connection performance.

First, from C to B (Northern California to Ireland):
--- ec2-54-220-212-210.eu-west-1.compute.amazonaws.com ping statistics ---
50 packets transmitted, 50 received, 0% packet loss, time 49228ms
rtt min/avg/max/mdev = 167.851/170.126/177.433/2.289 ms

[Chart: us_to_ireland_throughput (connection throughput, Northern California to Ireland)]

And next, from C to A (Northern California to Oregon):
--- ec2-54-212-134-221.us-west-2.compute.amazonaws.com ping statistics ---
50 packets transmitted, 50 received, 0% packet loss, time 49108ms
rtt min/avg/max/mdev = 42.543/44.648/59.994/3.194 ms

[Chart: us_to_us_throughput (connection throughput, Northern California to Oregon)]

As expected, the raw numbers are much lower for the transcontinental test than when connecting to a server that, at least geographically, is only a few hundred miles away. As it turns out, though, the percentage decrease in performance doesn’t vary all that much, with the exception of how Community MySQL behaves, which I’ll talk about in a minute. The table below compares connections from C to B with connections from C to A.

Server and route            1024-bit   2048-bit   4096-bit
MySQL 5.6.13 US->EU         34.39%     37.04%     51.85%
MySQL 5.6.13 US->US         36.13%     45.07%     71.66%
PS 5.6.13 US->EU            34.59%     33.91%     37.06%
PS 5.6.13 US->US            35.23%     38.35%     43.17%
PS 5.6.13-static US->EU     33.44%     34.30%     37.64%
PS 5.6.13-static US->US     36.31%     35.40%     41.66%

 

A few comments on the above. First, at 1024 bits of SSL encryption, it doesn’t make that much difference if you’re 40ms away or 170ms away. Second, as latency decreases, the connection-throughput performance lost due to SSL encryption overhead increases. That makes sense, particularly when we consider that in the base cases (two servers on the same LAN, or connections over a TCP socket to the same server), connection throughput performance is dominated by the use (or not) of SSL. However, the one thing in all of this that doesn’t make sense to me is just how badly Community MySQL fares with 4096-bit encryption compared to Percona Server. What’s so special about 4096-bit encryption that tanks the performance of Community MySQL but doesn’t seem to impact Percona Server nearly as much? I don’t have a good answer for this nor even a good hypothesis; if it hadn’t happened on both tests I’d probably have chalked it up to PEBKAC, but now I’m just not sure. So, if anyone else tries this test, I’d be curious to know if you get the same results.

Parting Thoughts

Regardless of the anomaly with MySQL 5.6.13 and 4096-bit SSL, I think the primary takeaway from both this post and its predecessor is pretty clear: if you need end-to-end encryption of your MySQL traffic and you’re using replication or a connection-pooling sort of workload, MySQL’s built-in SSL support will probably serve you just fine, but if your application is of the type that requires setting up and tearing down large numbers of connections frequently, you might want to look at offloading that encryption work elsewhere, even if “elsewhere” just means running it through an SSH tunnel.

The post MySQL encryption performance, revisited appeared first on MySQL Performance Blog.

Oct
10
2013
--

SSL Performance Overhead in MySQL

NOTE: This is part 1 of what will be a two-part series on the performance implications of using in-flight data encryption.

Some of you may recall my security webinar from back in mid-August; one of the follow-up questions that I was asked was about the performance impact of enabling SSL connections. My answer was 25%, based on some 2011 data that I had seen over on yaSSL’s website, but I included the caveat that it is workload-dependent, because the most expensive part of using SSL is establishing the connection. Not long thereafter, I received a request to conduct some more specific benchmarks surrounding SSL usage in MySQL, and today I’m going to show the results.

First, the testing environment. All tests were performed on an Intel Core i7-2600K 3.4GHz CPU (8 cores, HT included) with 32GB of RAM and CentOS 6.4. The disk subsystem is a 2-disk RAID-0 of Samsung 830 SSDs, although since we’re only concerned with measuring the overhead added by using SSL connections, we’ll only be conducting read-only tests with a dataset that fits completely in the buffer pool. The version of MySQL used for this experiment is Community Edition 5.6.13, and the testing tools are sysbench 0.5 and Perl. We conduct two tests, each one designed to simulate one of the most common MySQL usage patterns. The first simulates connection pooling, often seen in the Java world, where a small set of connections is established by, for example, the servlet container and then handed out to the application as needed; the second simulates one-request-per-connection usage, typical in the LAMP world, where the script that displays a given page might connect to the database, run a couple of queries, and then disconnect.

Test 1: Connection Pool

For the first test, I ran sysbench in read-only mode at concurrency levels of 1, 2, 4, 8, 16, and 32 threads, first with no encryption and then with SSL enabled and key lengths of 1024, 2048, and 4096 bits. 8 sysbench tables were prepared, each containing 100,000 rows, resulting in a total data size of approximately 256MB. The size of my InnoDB buffer pool was 4GB, and before conducting each official measurement run, I ran a warm-up run to prime the buffer pool. Each official test run lasted 10 minutes; this might seem short, but unlike, say, a PCIe flash storage device, I would not expect the variable under observation to really change that much over time or need time to stabilize. The basic sysbench syntax used is shown below.

#!/bin/bash
# Run the read-only workload with and without SSL at each concurrency level.
# Note that $SSL is also appended to the user name (msandboxon / msandboxoff).
for SSL in on off ;
do
  for threads in 1 2 4 8 16 32 ;
  do
    sysbench --test=/usr/share/sysbench/oltp.lua --mysql-user=msandbox$SSL --mysql-password=msandbox \
       --mysql-host=127.0.0.1 --mysql-port=5613 --mysql-db=sbtest --mysql-ssl=$SSL \
       --oltp-tables-count=8 --num-threads=$threads --oltp-dist-type=uniform --oltp-read-only=on \
       --report-interval=10 --max-time=600 --max-requests=0 run > sb-ssl_${SSL}-threads-${threads}.out
  done
done

If you’re not familiar with sysbench, the important thing to know about it for our purposes is that it does not connect and disconnect after each query or after each transaction. It establishes N connections to the database (where N is the number of threads) and runs queries through them until the test is over. This behavior provides our connection-pool simulation. The assumption, given what we know about where SSL is the slowest, is that the performance penalty here should be the lowest. First, let’s look at raw throughput, measured in queries per second:

[Chart: sysbench-throughput (queries per second by thread count and SSL key size)]

The average throughput and standard deviation (both measured in queries per second) for each test configuration is shown below in tabular format:

 

 

SSL key size   1 thread            2 threads           4 threads           8 threads            16 threads          32 threads
SSL OFF        9250.18 (1005.82)   18297.61 (689.22)   33910.31 (446.02)   50077.60 (1525.37)   49844.49 (934.86)   49651.09 (498.68)
1024-bit       2406.53 (288.53)    4650.56 (558.58)    9183.33 (1565.41)   26007.11 (345.79)    25959.61 (343.55)   25913.69 (192.90)
2048-bit       2448.43 (290.02)    4641.61 (510.91)    8951.67 (1043.99)   26143.25 (360.84)    25872.10 (324.48)   25764.48 (370.33)
4096-bit       2427.95 (289.00)    4641.32 (547.57)    8991.37 (1005.89)   26058.09 (432.86)    25990.13 (439.53)   26041.27 (780.71)

So, given that this is an 8-core machine and IO isn’t a factor, we would expect throughput to max out at 8 threads, and that is exactly where performance levels off. We also see that the key length used doesn’t seem to make much difference, which is largely expected. However, I definitely didn’t think the encryption overhead would be so high.

The next graph here is 95th-percentile latency from the same test:

[Chart: sysbench-response-time (95th-percentile latency by thread count and SSL key size)]

And in tabular format, the raw numbers (average and standard deviation):

 

 

 

SSL key size   1 thread        2 threads       4 threads       8 threads       16 threads       32 threads
SSL OFF        1.882 (0.522)   1.728 (0.167)   1.764 (0.145)   2.459 (0.523)   6.616 (0.251)    27.307 (0.817)
1024-bit       6.151 (0.241)   6.442 (0.180)   6.677 (0.289)   4.535 (0.507)   11.481 (1.403)   37.152 (0.393)
2048-bit       6.083 (0.277)   6.510 (0.081)   6.693 (0.043)   4.498 (0.503)   11.222 (1.502)   37.387 (0.393)
4096-bit       6.120 (0.268)   6.454 (0.119)   6.690 (0.043)   4.571 (0.727)   11.194 (1.395)   37.26 (0.307)

With the exception of 8 and 32 threads, the latency introduced by the use of SSL is constant at right around 5ms, regardless of the key length or the number of threads. I’m not surprised that there’s a large jump in latency at 32 threads, but I don’t have an immediate explanation for the improvement in the SSL latency numbers at 8 threads.

Test 2: Connection Time

For the second test, I wrote a simple Perl script to just connect and disconnect from the database as fast as possible. We know that it’s the connection setup which is the slowest part of SSL, and the previous test already shows us roughly what we can expect for SSL encryption overhead for sending data once the connection has been established, so let’s see just how much overhead SSL adds to connection time. The basic script to do this is quite simple (non-SSL version shown):

#!/usr/bin/perl
# Open and close 100 connections as fast as possible, then print the elapsed time in seconds.
use strict;
use warnings;
use DBI;
use Time::HiRes qw(time);
my $start = time;
for (my $i=0; $i<100; $i++) {
  my $dbh = DBI->connect("dbi:mysql:host=127.0.0.1;port=5613",
                         "msandbox","msandbox",undef);
  $dbh->disconnect;
  undef $dbh;
}
printf "%.6f\n", time - $start;

As with test #1, I ran test #2 with no encryption and SSL encryption of 1024, 2048, and 4096 bits, and I conducted 10 trials of each configuration. Then I took the elapsed time for each test and converted it to connections per second. The graph below shows the results from each run:
[Chart: connection-throughput (connections per second by SSL key size)]

Here are the averages and standard deviations:

 

 

Encryption   Average connections per second   Standard deviation
None         2701.75                          165.54
1024-bit     77.04                            6.14
2048-bit     28.183                           1.713
4096-bit     5.45                             0.015


Yes, that’s right: 4096-bit SSL connections are roughly 500 times (pushing three orders of magnitude) slower to establish than unencrypted connections. Really, the connection overhead for any level of SSL usage is quite high when compared to the unencrypted test, and it’s certainly much higher than my original quoted number of 25%.

 

 

Analysis and Parting Thoughts

So, what do we take away from this? The first thing, of course, is that SSL overhead is a lot higher than 25%, particularly if your application uses anything close to the one-connection-per-request pattern. For a system which establishes and maintains long-running connections, the initial connection overhead becomes a non-factor, regardless of the encryption strength, but there’s still a rather large performance penalty compared to the unencrypted connection.

This leads directly into the second point, which is that connection pooling is by far a more efficient method of using SSL if your application can support it.

But what if connection pooling isn’t an option, MySQL’s SSL performance is insufficient, and you still need full encryption of data in-flight? Run the encryption component of your system at a lower layer – a VPN with hardware crypto would be the fastest approach, but even something as simple as an SSH tunnel or OpenVPN *might* be faster than SSL within MySQL. I’ll be exploring some of these solutions in a follow-up post.
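As a minimal sketch of the SSH-tunnel variant (host name, port, and user are hypothetical placeholders):

# one-time setup on the application host
ssh -f -N -L 3307:127.0.0.1:3306 tunneluser@db-server
# the application or client then connects through the tunnel,
# with MySQL's own SSL left off since the tunnel already encrypts the traffic
mysql --host=127.0.0.1 --port=3307 --user=appuser -p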

And finally… when in doubt, run your own benchmarks. I don’t have an explanation for why the yaSSL numbers are so different from these (maybe yaSSL is a faster SSL library than OpenSSL, or maybe they used a different cipher – if you’re curious, the original 25% number came from slides 56-58 of this presentation), but in any event, this does illustrate why it’s important to run tests on your own hardware and with your own workload when you’re interested in finding out how well something will perform rather than taking someone else’s word for it.

The post SSL Performance Overhead in MySQL appeared first on MySQL Performance Blog.
