Over the course of the last year, Google has launched a number of services that bring to other companies the same BeyondCorp model it uses internally, which manages access to a company’s apps and data without a VPN. Google’s flagship product for this is Cloud Identity, which is essentially Google’s BeyondCorp, but packaged for other businesses.
Today, at its Cloud Next event in London, it’s expanding this portfolio of Cloud Identity services with three new products and features that enable developers to adopt this way of thinking about identity and access for their own apps and that make it easier for enterprises to adopt Cloud Identity and make it work with their existing solutions.
The highlight of today’s announcements, though, is Cloud Identity for Customers and Partners, which is now in beta. While Cloud Identity is very much meant for the employees of a larger company, this new product allows developers to build the same kind of identity and access management services into their own applications.
“Cloud Identity is how we protect our employees and you protect your workforce,” Karthik Lakshminarayanan, Google’s product management director for Cloud Identity, said in a press briefing ahead of the announcement. “But what we’re increasingly finding is that developers are building applications and are also having to deal with identity and access management. So if you’re building an application, you might be thinking about accepting usernames and passwords, or you might be thinking about accepting social media as an authentication mechanism.”
This new service allows developers to build in multiple ways of authenticating the user, including through email and password, Twitter, Facebook, their phones, SAML, OIDC and others. Google then handles all of that authentication work. Google will offer both client-side (web, iOS and Android) and server-side SDKs (with support for Node.js, Java, Python and other languages).
“They no longer have to worry about getting hacked and their passwords and their user credentials getting compromised,” added Lakshminarayanan. “They can now leave that to Google, and the exact same scale that we have, the security that we have, the reliability that we have — that we are using to protect employees in the cloud — can now be used to protect that developer’s applications.”
In addition to Cloud Identity for Customers and Partners, Google is also launching a new feature for the existing Cloud Identity service, which brings support for traditional LDAP-based applications and IT services like VPNs to Cloud Identity. This feature is, in many ways, an acknowledgment that most enterprises can’t simply turn on a new security paradigm like BeyondCorp/Cloud Identity. With support for secure LDAP, these companies can still make it easy for their employees to connect to these legacy applications while still using Cloud Identity.
“As much as Google loves the cloud, a mantra that Google has is ‘let’s meet customers where they are.’ We know that customers are embracing the cloud, but we also know that they have a massive, massive footprint of traditional applications,” Lakshminarayanan explained. He noted that most enterprises today run two solutions: one that provides access to their on-premise applications and another that provides the same services for their cloud applications. Cloud Identity now natively supports access to many of these legacy applications, including Aruba Networks (HPE), Itopia, JAMF, Jenkins (Cloudbees), OpenVPN, Papercut, pfSense (Netgate), Puppet, Sophos and Splunk. Indeed, as Google notes, virtually any application that supports LDAP over SSL can work with this new service.
Finally, the third new feature Google is launching today is context-aware access for those enterprises that already use its Cloud Identity-Aware Proxy (yes, those names are all a mouthful). The idea here is to help enterprises provide access to cloud resources based on the identity of the user and the context of the request — all without using a VPN. That’s pretty much the promise of BeyondCorp in a nutshell, and this implementation, which is now in beta, allows businesses to manage access based on a user’s identity, a device’s location and its security status. Using this new service, IT managers could, for example, restrict access to one of their apps to users in a specific country.
Please join Percona’s Chief Evangelist Colin Charles on Thursday, June 28th, 2018, as he presents Securing Database Servers From External Attacks at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).
A critical piece of your infrastructure is the database tier, yet people don’t pay enough attention to it, judging by how many are bitten via poorly chosen defaults or just a lack of understanding of how to run a secure database tier. In this talk, I’ll focus on MySQL/MariaDB, PostgreSQL, and MongoDB, and cover external authentication, auditing, encryption, SSL, firewalls, replication, and more gems from over a decade of consulting in this space for Percona’s 4,000+ customers.
Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, had worked at MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within open source communities in APAC and has spoken at many conferences, delivering talks and tutorials to technical audiences as well as business stakeholders. His specialties include MariaDB, MySQL, Linux, and open source community work.
In this blog post, we’ll look at how ProxySQL improves MySQL SSL connection performance.
When deploying MySQL with SSL, the main concern is that the initial handshake causes significant overhead if you are not using connection pools (i.e., mysqlnd-mux with PHP, mysql.connector.pooling in Python, etc.). Closing and making new connections over and over can greatly impact your total query response time. A customer and colleague recently educated me that although you can improve SSL encryption/decryption performance with the AES-NI hardware extension on modern Intel processors, the real overhead when creating SSL connections comes from the handshake, which requires multiple round trips between the server and client.
With ProxySQL’s support for SSL on its backend connections and connection pooling, we can have it sit in front of any application, on the same server (illustrated below):
With this setup, ProxySQL runs on the same server as the application, which connects to it through a local socket, while ProxySQL maintains SSL-encrypted, pooled connections to MySQL. MySQL data never crosses the network unencrypted.
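As a sketch, the relevant pieces of a ProxySQL configuration for this setup might look like the following /etc/proxysql.cnf fragment. The backend address and socket path are placeholders, and note that proxysql.cnf is typically only read on ProxySQL’s first startup; later changes are made through the admin interface.

```
mysql_servers =
(
    # backend MySQL server; use_ssl=1 encrypts the ProxySQL -> MySQL connections
    { address="10.0.0.5", port=3306, hostgroup=0, use_ssl=1 }
)
mysql_variables =
{
    # listen on a local socket for the application, in addition to TCP
    interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
}
```

The application then connects to /tmp/proxysql.sock instead of connecting to MySQL directly.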
To quickly verify how this performs, I used a PHP script that simply creates 10,000 connections in a single thread, as fast as it can. First, a direct connection to MySQL without SSL:
[root@ad ~]# time php php-test.php
real 0m20.417s
user 0m0.201s
sys 0m3.396s
Direct connection to MySQL with SSL:
[root@ad ~]# time php php-test.php
real 1m19.922s
user 0m29.933s
sys 0m9.550s
Direct connection to MySQL, no SSL, with OpenVPN tunnel:
[root@ad ~]# time php php-test.php
real 0m15.161s
user 0m0.493s
sys 0m0.803s
Now, using ProxySQL via the local socket file:
[root@ad ~]# time php php-test.php
real 0m2.791s
user 0m0.402s
sys 0m0.436s
Below is a graph of these numbers:
As you can see, the SSL overhead is substantial: the SSL run takes roughly four times (about 400%) as long as the run without SSL, which is pretty bad for some workloads.
Interestingly, connections through the OpenVPN tunnel are even faster than direct connections to MySQL without SSL. However, the OpenVPN server needs to be deployed on another server, separate from the MySQL server and the application. This approach allows the application servers and MySQL servers (including replica/cluster nodes) to communicate on the same secured network, but it creates a single point of failure. Alternatively, you could deploy OpenVPN on the MySQL server itself, but if you have an additional high-availability layer in place, things get quite complicated when a new master is promoted. In short, OpenVPN adds many additional moving parts.
The beauty of ProxySQL is that you can just run it on all application servers, and it works fine if you simply point it to a VIP that directs it to the correct MySQL server (master), or use the replication group feature to identify the authoritative master.
Lastly, it is important to note that these tests were done on CentOS 7.3 with OpenSSL 1.0.1e, Percona Server for MySQL 5.7.19, ProxySQL 1.4.1, PHP 5.4 and OpenVPN 2.4.3.
In this blog post, we’ll look at how to set up Percona PAM with Active Directory for external authentication.
In my previous article on Percona PAM, I demonstrated how to use Samba as a domain, and how easy it is to create domain users and groups via the samba-tool. Then we configured nss-pam-ldapd and nscd to enumerate user and group information via LDAP calls, and authenticate users from this source.
This time around, I will demonstrate two other ways of using Active Directory for external authentication by joining the domain via SSSD or Winbind. System Security Services Daemon (SSSD) allows you to configure access to several authentication hosts such as LDAP, Kerberos, Samba and Active Directory and have your system use this service for all types of lookups. Winbind, on the other hand, pulls data from Samba or Active Directory only. If you’re mulling over using SSSD or Winbind, take a look at this article on what SSSD or Winbind support.
For both methods, we’ll use realmd. That makes it easy to join a domain and enumerate users from it.
Note: Please follow the steps in the last article for setting up the Samba PDC environment.
Percona Server 5.7 with LDAP authentication via SSS or WinBind
OS: CentOS 7
IP Address: 172.16.0.21
Hostname: ps-ldap-21.example.com
DNS: 172.16.0.10 (Samba PDC)
Installing realmd and Its Dependencies
First, we need to make sure that the time is in sync (since this is a requirement for joining domains). Install NTP and make sure that it starts up at boot time:
[root@ps-ldap-21 ~]# yum -y install ntp
* * *
Installed:
ntp.x86_64 0:4.2.6p5-25.el7.centos.2
* * *
[root@ps-ldap-21 ~]# ntpdate 0.centos.pool.ntp.org
 3 Jul 03:48:35 ntpdate[3708]: step time server 202.90.132.242 offset 1.024550 sec
[root@ps-ldap-21 ~]# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@ps-ldap-21 ~]# systemctl start ntpd.service
Install realmd and its dependencies for SSSD or Winbind.
For SSSD:
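The install command was not reproduced here; based on the required files that realm lists later in this post (oddjobd, mkhomedir, sssd and net), the package set on CentOS 7 would be roughly the following (this list is an assumption, so your mileage may vary):

```
[root@ps-ldap-21 ~]# yum -y install realmd oddjob oddjob-mkhomedir sssd adcli samba-common-tools
```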
Our Samba PDC is detected as an Active Directory Controller, and the packages required have been installed previously.
The next step is to join the domain by running realm join domain. If you want to get more information, add the --verbose option. You could also add the -U user option if you want to use a different administrator account.
[root@ps-ldap-21 ~]# realm join example.com --verbose
* Resolving: _ldap._tcp.example.com
* Performing LDAP DSE lookup on: 172.16.0.10
* Successfully discovered: example.com
Password for Administrator:
* Required files: /usr/sbin/oddjobd, /usr/libexec/oddjob/mkhomedir, /usr/sbin/sssd, /usr/bin/net
* LANG=C LOGNAME=root /usr/bin/net -s /var/cache/realmd/realmd-smb-conf.DM6W2Y -U Administrator ads join example.com
Enter Administrator's password:
Using short domain name -- EXAMPLE
Joined 'PS-LDAP-21' to dns domain 'example.com'
* LANG=C LOGNAME=root /usr/bin/net -s /var/cache/realmd/realmd-smb-conf.DM6W2Y -U Administrator ads keytab create
Enter Administrator's password:
* /usr/bin/systemctl enable sssd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/sssd.service to /usr/lib/systemd/system/sssd.service.
* /usr/bin/systemctl restart sssd.service
* /usr/bin/sh -c /usr/sbin/authconfig --update --enablesssd --enablesssdauth --enablemkhomedir --nostart && /usr/bin/systemctl enable oddjobd.service && /usr/bin/systemctl start oddjobd.service
* Successfully enrolled machine in realm
As you can see from the command above, the realm command simplifies SSSD configuration and uses existing tools such as net and authconfig to join the domain and use it as an identity provider.
Let’s test whether we can enumerate existing accounts by using the id command:
[root@ps-ldap-21 ~]# id jervin
id: jervin: no such user
[root@ps-ldap-21 ~]# id jervin@example.com
uid=343401115(jervin@example.com) gid=343400513(domain users@example.com) groups=343400513(domain users@example.com),343401103(support@example.com)
As you can see, the user can be queried if the domain is specified. So if you want to log in as ‘jervin@example.com’, in Percona Server for MySQL you’ll need to create the user as ‘jervin@example.com’ and not ‘jervin’. For example:
# Creating user 'jervin@example.com'
CREATE USER 'jervin@example.com'@'%' IDENTIFIED WITH auth_pam;
# Logging in as 'jervin@example.com'
mysql -u 'jervin@example.com'
If you want to omit the domain name when logging in, you’ll need to change “use_fully_qualified_names = True” to “use_fully_qualified_names = False” in /etc/sssd/sssd.conf, and then restart the SSSD service. Once you do, the user can be found without providing the domain:
[root@ps-ldap-21 ~]# id jervin
uid=343401115(jervin) gid=343400513(domain users) groups=343400513(domain users),343401103(support)
[root@ps-ldap-21 ~]# id jervin@example.com
uid=343401115(jervin) gid=343400513(domain users) groups=343400513(domain users),343401103(support)
When you create the MySQL user, you don’t need to include the domain anymore:
# Creating user 'jervin'
CREATE USER 'jervin'@'%' IDENTIFIED WITH auth_pam;
# Logging in as 'jervin'
mysql -u jervin
Optionally, you can specify which users and groups can log in by adding access-control settings to SSSD. Under the “[domain/example.com]” section of /etc/sssd/sssd.conf, you can add a domain access filter specifying that, for example, only members of the support and dba groups are allowed to use SSSD.
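A sketch of what such a filter could look like, using SSSD’s ad_access_filter option. The group DNs below are assumptions based on Samba’s default layout (groups under CN=Users), so adjust them to your directory structure, and restart SSSD after editing:

```
[domain/example.com]
access_provider = ad
ad_access_filter = (|(memberOf=CN=support,CN=Users,DC=example,DC=com)(memberOf=CN=dba,CN=Users,DC=example,DC=com))
```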
Done. All you need to do now is install Percona Server for MySQL, enable the auth_pam and auth_pam_compat plugins, and add PAM users. You can then check /var/log/secure for authentication errors when troubleshooting. You can also get verbose logs by adding debug_level=[1-9] to the [nss], [pam], or [domain] sections and then restarting SSSD; the logs are written under /var/log/sssd.
Joining the Domain via Winbind and Preparing it for Percona PAM
The realm command assumes that SSSD is used. To change the client software, use the --client-software=winbind option.
Since the required packages have already been installed, we can now attempt to join this host to the domain:
[root@ps-ldap-21 ~]# realm --verbose --client-software=winbind join example.com
* Resolving: _ldap._tcp.example.com
* Performing LDAP DSE lookup on: 172.16.0.10
* Successfully discovered: example.com
Password for Administrator:
* Required files: /usr/libexec/oddjob/mkhomedir, /usr/sbin/oddjobd, /usr/bin/wbinfo, /usr/sbin/winbindd, /usr/bin/net
* LANG=C LOGNAME=root /usr/bin/net -s /var/cache/realmd/realmd-smb-conf.9YEO2Y -U Administrator ads join example.com
Enter Administrator's password:
Using short domain name -- EXAMPLE
Joined 'PS-LDAP-21' to dns domain 'example.com'
* LANG=C LOGNAME=root /usr/bin/net -s /var/cache/realmd/realmd-smb-conf.9YEO2Y -U Administrator ads keytab create
Enter Administrator's password:
* /usr/bin/systemctl enable winbind.service
Created symlink from /etc/systemd/system/multi-user.target.wants/winbind.service to /usr/lib/systemd/system/winbind.service.
* /usr/bin/systemctl restart winbind.service
* /usr/bin/sh -c /usr/sbin/authconfig --update --enablewinbind --enablewinbindauth --enablemkhomedir --nostart && /usr/bin/systemctl enable oddjobd.service && /usr/bin/systemctl start oddjobd.service
* Successfully enrolled machine in realm
Let’s test whether we can enumerate existing accounts by using the id command:
[root@ps-ldap-21 ~]# id jervin
id: jervin: no such user
[root@ps-ldap-21 ~]# id jervin@example.com
uid=10000(EXAMPLEjervin) gid=10000(EXAMPLEdomain users) groups=10000(EXAMPLEdomain users),10001(EXAMPLEsupport)
Unfortunately for Winbind, users identified with their domain cannot log in to Percona Server for MySQL. We need to change this behavior in the Samba config (performed in the next step).
Edit /etc/samba/smb.conf, and change “winbind use default domain = no” to “winbind use default domain = yes”. Restart the Winbind service. For example:
vi /etc/samba/smb.conf
#Look for:
"winbind use default domain = no"
#Change to:
"winbind use default domain = yes"
systemctl restart winbind.service
Try running id again:
[root@ps-ldap-21 ~]# id jervin
uid=10000(jervin) gid=10000(domain users) groups=10000(domain users),10001(support)
[root@ps-ldap-21 ~]# id jervin@example.com
id: jervin@example.com: no such user
When you create the MySQL user, do not include the domain name. For example:
# Creating user 'jervin'
CREATE USER 'jervin'@'%' IDENTIFIED WITH auth_pam;
# Logging in as 'jervin'
mysql -u jervin
Finally, configure Percona Server for MySQL to authenticate to Winbind by creating /etc/pam.d/mysqld with this content:
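The file contents were not reproduced above; based on the pam_winbind module referenced in this post, a minimal sketch of /etc/pam.d/mysqld would be:

```
auth    required    pam_winbind.so
account required    pam_winbind.so
```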
You can debug authentication attempts by reviewing the logs at /var/log/secure. You may also change “auth required pam_winbind.so” to “auth required pam_winbind.so debug” in /etc/pam.d/mysqld to get verbose logging in the same file.
As for filtering who can authenticate with Winbind, you can add require_membership_of=group_name under the [global] section of /etc/security/pam_winbind.conf. You’ll need to restart the Winbind daemon to apply the changes.
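For example (the group name here is illustrative):

```
[global]
require_membership_of=dba
```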
Conclusion
Thanks to realmd, it’s easier to set up Active Directory as an identity provider. With minimal configuration tweaks, you can use the identity provider to authenticate MySQL users.
Join Percona’s Chief Evangelist, Colin Charles as he presents Securing Your MySQL/MariaDB Data on Tuesday, July 11, 2017 at 7:00 am PDT / 10:00 am EDT (UTC-7).
This webinar will discuss the features of MySQL/MariaDB that, when enabled and used correctly, improve the security of a default MySQL installation. Many cloud-based applications fail to:
Use appropriate filesystem permissions
Employ TLS/SSL for connections
Require TLS/SSL with MySQL replication
Use external authentication plugins (LDAP, PAM, Kerberos)
Encrypt all your data at rest
Monitor your database with the audit plugin
Review and reject SQL injections
Design application access using traditional firewall technology
Employ other MySQL/MariaDB security features
This webinar will demonstrate and advise on how to correctly implement the features above. We will end the presentation with some simple steps on how to hack a MySQL installation.
Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, had worked at MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within open source communities in APAC, and has spoken at many conferences.
This blog post looks at SSL connections and how they work in MySQL 5.7.
Recently I was working on an SSL implementation with MySQL 5.7, and I made some interesting discoveries. I realized I could connect to the MySQL server without specifying the SSL keys on the client side, yet the connection was still secured by SSL. I was confused; I did not understand what was happening.
In this blog post, I am going to show you how SSL works in MySQL 5.7, and how it worked previously in MySQL 5.6.
Let’s start with an introduction of how SSL worked in 5.6.
SSL in MySQL 5.6
The documentation for SSL in MySQL 5.6 is quite detailed, and it explains how SSL works. But first let’s make one thing clear: MySQL supports secure (encrypted) connections between clients and the server using the TLS (Transport Layer Security) protocol. TLS is sometimes referred to as SSL (Secure Sockets Layer), but MySQL doesn’t actually use the SSL protocol for secure connections because it provides weak encryption.
So when someone says MySQL is using SSL, it really means that it is using TLS. You can check which protocol you are using:
show status like 'Ssl_version';
+---------------+---------+
| Variable_name | Value |
+---------------+---------+
| Ssl_version | TLSv1.2 |
+---------------+---------+
So how does it work? Let me quote a few lines from the MySQL documentation:
TLS uses encryption algorithms to ensure that data received over a public network can be trusted. It has mechanisms to detect data change, loss, or replay. TLS also incorporates algorithms that provide identity verification using the X509 standard. X509 makes it possible to identify someone on the Internet. In basic terms, there should be some entity called a “Certificate Authority” (or CA) that assigns electronic certificates to anyone who needs them. Certificates rely on asymmetric encryption algorithms that have two encryption keys (a public key and a secret key). A certificate owner can present the certificate to another party as proof of identity. A certificate consists of its owner’s public key. Any data encrypted using this public key can be decrypted only using the corresponding secret key, which is held by the owner of the certificate.
It works with key pairs (private and public): the server has the private keys and the client has the public keys. Here is a link showing how we can generate these keys.
MySQL 5.6 Client
When the server is configured with SSL, the client has to have the client certificates. Once it gets those, it can connect to the server using SSL:
mysql -utestuser -p -h192.168.0.1 -P3306 --ssl-key=/root/mysql-ssl/client-key.pem --ssl-cert=/root/mysql-ssl/client-cert.pem
We have to specify the key and the certificate. Otherwise, we cannot connect to the server with SSL. So far so good, everything works as the documentation says. But how does it work in MySQL 5.7?
SSL in MySQL 5.7
The documentation was very confusing to me, and did not help much. It described how to create the SSL keys the same way (and even the same server and client configuration) as in MySQL 5.6. So if everything is the same, why can I connect without specifying the client keys? I did not have an answer. One of my colleagues (Alexey Poritskiy) found it after digging through the manuals for a while; the following passage finally describes this behavior clearly:
As of MySQL 5.7.3, a client need specify only the --ssl option to obtain an encrypted connection. The connection attempt fails if an encrypted connection cannot be established. Before MySQL 5.7.3, the client must specify either the --ssl-ca option, or all three of the --ssl-ca, --ssl-key, and --ssl-cert options.
After this, I re-read the SSL documentation and found the following:
5.7.3: On the client side, an explicit --ssl option is no longer advisory but prescriptive. Given a server enabled to support secure connections, a client program can require a secure connection by specifying only the --ssl option. The connection attempt fails if a secure connection cannot be established. Other --ssl-xxx options on the client side mean that a secure connection is advisory (the connection attempt falls back to an unencrypted connection if a secure connection cannot be established).
I still think the SSL manual could be more expressive and detailed.
Before MySQL 5.7.3, it used key pairs. After that, it works a bit like websites over HTTPS: the client does not need the keys, and the connection can still be secure. (I would still like to see more detailed documentation on how it really works.)
Is This Good for Us?
In my opinion, this is a really good feature, but it didn’t get a lot of publicity. Prior to 5.7.3, if we wanted to use SSL we had to create the keys and use the client keys in every client application. It’s doable, but it is just a pain — especially if we are using different keys for every server with many users.
With this feature we can have a different key for every server, but on the client side we only have to enable the SSL connection. Most of the clients use it as the default if SSL is available.
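For example, with a 5.7.11+ client you can insist on encryption and confirm it without any client-side keys. The host, user and password here are placeholders, and note that --ssl-mode replaced the deprecated --ssl flag as of MySQL 5.7.11:

```
mysql -utestuser -p -h192.168.0.1 --ssl-mode=REQUIRED
mysql> show status like 'Ssl_cipher';
```

A non-empty Ssl_cipher value confirms that the session is encrypted.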
I am pretty sure if people knew about this feature they would use it more often.
Limitations
I tested it with many client applications, and all worked without specifying the client keys. I only had issues with the MySQL 5.6 client, even though the server was 5.7; in that case, I had to specify the client keys, and without them I couldn’t connect to the server.
It is possible some older applications won’t support this feature.
Conclusion
This is a great feature: easy to use, and it makes SSL implementations much easier. But I think the documentation should be clearer and more precise in describing how this new feature actually works. I have already submitted a request to update the documentation.
If you’ve tried to use SSL in Percona XtraDB Cluster and saw an error in the logs like SSL3_CHECK_CERT_AND_ALGORITHM:dh key too small, we’ve implemented some changes in Percona XtraDB Cluster 5.6.34 and 5.7.16 that get rid of these errors.
Some background
dh key too small refers to the Diffie-Hellman parameters used by the SSL code that are shorter than recommended.
Due to the Logjam vulnerability (https://weakdh.org/), the required key-lengths for the Diffie-Hellman parameters were changed from 512 bits to 2048 bits. Unfortunately, older versions of OpenSSL/socat still use 512 bits (and thus caused the error to appear).
Changes made to Percona XtraDB Cluster
Since versions of socat greater than 1.7.3 now use 2048 bits for the Diffie-Hellman parameters, we only do extra work for the older versions of socat (less than 1.7.3). The SST code now:
Looks for a file with the DH params
Uses the “ssl_dhparams” option in the [sst] section if it exists
Looks for a “dhparams.pem” file in the datadir
If the file is specified and exists, uses that file as a source for the DH parameters
If the file does not exist, creates a dhparams.pem file in the datadir
Generating the dhparams yourself
Unfortunately, it can take several minutes to create the dhparams file. We recommend creating the dhparams.pem file prior to starting the SST.
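One way to pre-generate the file is with OpenSSL; this sketch writes dhparams.pem to the current directory, after which you would move it into the datadir (or point the “ssl_dhparams” option in the [sst] section at it):

```shell
# Generate 2048-bit Diffie-Hellman parameters; this can take a while
openssl dhparam -out dhparams.pem 2048
```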
In this blog post, we’ll look at a feature recently added to Percona XtraDB Cluster 5.7.16 that makes it easier to configure SSL for all of Percona XtraDB Cluster’s related communications. It uses mode “encrypt=4”, and configures SSL for both IST/Galera communications and SST communications using the same SSL files. “encrypt=4” is a new encryption mode added in Percona XtraDB Cluster 5.7.16 (we’ll cover it in a later blog post).
If this option is used, this will override all other Galera/SST SSL-related file options. This is to ensure that a consistent configuration is applied.
Using this option also means that the Galera/SST communications are using the same keys as client connections.
Example
This example shows how to start up a cluster using this option. We will use the default SSL files created by the bootstrap node. Basically, there are two steps:
1. Set pxc_encrypt_cluster_traffic=ON on all nodes
2. Ensure that all nodes share the same SSL files
Step 1: Configuration (on all nodes)
We enable the pxc_encrypt_cluster_traffic option in the configuration files on all nodes. The default value of this option is “OFF”, so we enable it here:
[mysqld]
pxc_encrypt_cluster_traffic=ON
Step 2: Start up the bootstrap node
After initializing and starting up the bootstrap node, the datadir will contain the necessary data files. Here is some SSL-related log output:
[Note] Auto generated SSL certificates are placed in data directory.
[Warning] CA certificate ca.pem is self signed.
[Note] Auto generated RSA key files are placed in data directory.
The required files are ca.pem, server-cert.pem and server-key.pem, which are the Certificate Authority (CA) file, the server certificate and the server private key, respectively.
Step 3: Copy the SSL files to all other nodes
Galera views the cluster as a set of homogeneous nodes, so the same configuration is expected on all nodes. Therefore, we have to copy the CA file, the server’s certificate and the server’s private key. By default, MySQL names these: ca.pem, server-cert.pem, and server-key.pem, respectively.
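For example, copying them from the bootstrap node to another node might look like this (the hostnames and the datadir path are assumptions for this sketch):

```
[root@pxc-node1 ~]# scp /var/lib/mysql/ca.pem /var/lib/mysql/server-cert.pem /var/lib/mysql/server-key.pem root@pxc-node2:/var/lib/mysql/
```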
Step 4: Start up the other nodes
This is some log output showing that the SSL certificate files have been found. The other nodes should be using the files that were created on the bootstrap node.
[Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
[Note] Skipping generation of SSL certificates as certificate files are present in data directory.
[Warning] CA certificate ca.pem is self signed.
[Note] Skipping generation of RSA key pair as key files are present in data directory.
This is some log output (with log_error_verbosity=3) showing the SST reporting on the configuration used:
WSREP_SST: [DEBUG] pxc_encrypt_cluster_traffic is enabled, using PXC auto-ssl configuration
WSREP_SST: [DEBUG] with encrypt=4 ssl_ca=/my/data//ca.pem ssl_cert=/my/data//server-cert.pem ssl_key=/my/data//server-key.pem
Customization
The “ssl-ca”, “ssl-cert”, and “ssl-key” options in the “[mysqld]” section can be used to specify the location of the SSL files. When pxc_encrypt_cluster_traffic=ON, the server determines the SSL files as follows:
Uses the values if explicitly specified (via the “ssl-ca”, “ssl-cert” and “ssl-key” options in the “[mysqld]” section)
If the SSL file options are not specified, looks in the data directory for files named “ca.pem”, “server-cert.pem” and “server-key.pem” for the CA file, the server certificate, and the server key, respectively
It then modifies the configuration:
Overrides the values for socket.ssl_ca, socket.ssl_cert, and socket.ssl_key in wsrep_provider_options in the “[mysqld]” section
Sets “encrypt=4” in the “[sst]” section
Overrides the values for ssl-ca, ssl-cert and ssl-key in the “[sst]” section
This is not a dynamic setting, and is only available on startup.
In this blog post, we’ll look at enabling Percona XtraDB Cluster SST Traffic Encryption, and some of the changes to the SSL-based encryption of SST traffic in Percona XtraDB Cluster 5.7.16.
Some background
Percona XtraDB Cluster versions prior to 5.7 support encryption methods 0, 1, 2 and 3:
encrypt = 0 : (default) No encryption
encrypt = 1 : Symmetric encryption using AES-128, user-supplied key
encrypt = 2 : SSL-based encryption with a CA and cert files (via socat)
encrypt = 3 : SSL-based encryption with cert and key files (via socat)
We are deprecating modes encrypt=1,2,3 in favor of the new mode, encrypt=4. “encrypt=3” is not recommended, since it does not verify the cert being used (it cannot verify since no Certificate Authority (CA) file is provided). “encrypt=2” and “encrypt=3” use a slightly different way of building the SSL files than MySQL does. In order to remove confusion, we’ve deprecated these modes in favor of “encrypt=4”, which can use the MySQL generated SSL files.
New feature: encrypt=4
The previous SSL methods (encrypt=2 and encrypt=3) are based on socat usage, http://www.dest-unreach.org/socat/doc/socat-openssltunnel.html. The certs are not built the same way as the certs created by MySQL (for encryption of client communication with MySQL). To simplify SSL configuration and usage, we added a new encryption method (encrypt=4) so that the SSL files generated by MySQL can now be used for SSL encryption of SST traffic.
In general, Galera views the cluster as homogeneous, so it expects that all nodes are identically configured. This extends to the SSL configuration, so the preferred configuration is that all machines share the same certs/keys. The security implication is that possession of these certs/keys allows a machine to join the cluster and receive all of the data. So proper care must be taken to secure the SSL files.
The mode “encrypt=4” uses the same option names as MySQL, so it reads the SSL information from “ssl-ca”, “ssl-cert”, and “ssl-key” in the “[sst]” section of the configuration file.
All three options (ssl-ca, ssl-cert, and ssl-key) must be specified, otherwise the SST will return an error.
ssl-ca
This is the location of the Certificate Authority (CA) file. Only servers that have certificates generated from this CA file will be allowed to connect if SSL is enabled.
ssl-cert
This is the location of the Certificate file. This is the digital certificate that will be sent to the other side of the SSL connection. The remote server will then verify that this certificate was generated from the Certificate Authority file that it is using.
ssl-key
This is the location of the private key for the certificate specified in ssl-cert.
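Putting it all together, a minimal “[sst]” section might look like the following sketch (the file paths are illustrative; you can point these at the same CA, certificate and key files that MySQL itself uses for client SSL):

```ini
[sst]
encrypt=4
ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem
```

Remember that, since Galera expects a homogeneous cluster, the same files should be present on every node.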
In my previous post I discussed using GPG to secure your database credentials. This relies on a local copy of your MySQL client config, but what if you want to keep the credentials stored safely along with other super secret information? Sure, GPG could still be used, but there must be an easier way to do this.
This post will look at a way to use Vault to store your credentials in a central location and use them to access your database. For those of you that have not yet come across Vault, it is a great way to manage your secrets – securing, storing and tightly controlling access. It has the added benefits of being able to handle leasing, key revocation, key rolling and auditing.
During this blog post we’ll accomplish the following tasks:
1. download and verify the tools that we need
2. obtain an SSL certificate from Let’s Encrypt and automate its renewal
3. configure, start, initialize and unseal Vault
4. set up a policy and TLS certificate authentication
5. store MySQL credentials as a secret and read them back
6. enable the MySQL secret backend for on-demand, auto-expiring MySQL accounts
Before continuing onwards, I should drop in a quick note to say that the following is a quick example to show you how you can get Vault up and running and use it with MySQL; it is not a guide to production setup and does not cover High Availability (HA) implementations, etc.
Download time
In addition to Vault, we will be using some other tools: Let’s Encrypt, OpenSSL and json_pp (a command-line utility using JSON::PP). For this post we’ll be using Ubuntu 16.04 LTS and we’ll presume that these aren’t yet installed.
If you haven’t already heard of Let’s Encrypt, it is a free, automated, and open Certificate Authority (CA) enabling you to secure your website or other services without paying for an SSL certificate; you can even create Subject Alternative Name (SAN) certificates to make your life even easier, allowing one certificate to be used for a number of different domains. The Electronic Frontier Foundation (EFF) provide Certbot (the new name for the letsencrypt software), the recommended tool to manage your certificates. If you don’t have letsencrypt/certbot in your package manager then you should be able to use the quick install method. We’ll be using json_pp to prettify the JSON output from the Vault API and openssl to create a client certificate.
We also need to download Vault, choosing the binary relevant for your Operating System and architecture. At the time of writing this, the latest version of Vault is 0.6.2, so the following steps may need adjusting if you use a different version.
# Download Vault (Linux x86_64), SHA256SUMS and signature
$ wget https://releases.hashicorp.com/vault/0.6.2/vault_0.6.2_linux_amd64.zip \
    https://releases.hashicorp.com/vault/0.6.2/vault_0.6.2_SHA256SUMS.sig \
    https://releases.hashicorp.com/vault/0.6.2/vault_0.6.2_SHA256SUMS
# Import the GPG key
$ gpg --keyserver pgp.mit.edu --recv-keys 51852D87348FFC4C
# Verify the checksums
$ gpg --verify vault_0.6.2_SHA256SUMS.sig
gpg: assuming signed data in `vault_0.6.2_SHA256SUMS'
gpg: Signature made Thu 06 Oct 2016 02:08:16 BST using RSA key ID 348FFC4C
gpg: Good signature from "HashiCorp Security <security@hashicorp.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE F189 5185 2D87 348F FC4C
# Verify the download
$ sha256sum --check <(fgrep vault_0.6.2_linux_amd64.zip vault_0.6.2_SHA256SUMS)
vault_0.6.2_linux_amd64.zip: OK
# Extract the binary
$ sudo unzip -j vault_0.6.2_linux_amd64.zip -d /usr/local/bin
Archive: vault_0.6.2_linux_amd64.zip
inflating: /usr/local/bin/vault
Let’s Encrypt… why not?
We want to be able to access Vault from wherever we are (we can put additional security in place to prevent unauthorised access), so we need to get ourselves encrypted. The following example shows the setup on a public server, allowing the CA to authenticate your request. More information on different methods can be found in the Certbot documentation.
$ sudo letsencrypt --webroot -w /home/www/vhosts/default/public -d myfirstdomain.com -d myseconddomain.com
#IMPORTANT NOTES:
# - Congratulations! Your certificate and chain have been saved at
# /etc/letsencrypt/live/myfirstdomain.com/fullchain.pem. Your cert will
# expire on 2017-01-29. To obtain a new or tweaked version of this
# certificate in the future, simply run certbot again. To
# non-interactively renew *all* of your certificates, run "certbot
# renew"
# - If you like Certbot, please consider supporting our work by:
#
# Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
# Donating to EFF: https://eff.org/donate-le
#
That’s all it takes to get a SAN SSL certificate! The server that this was executed on has a public webserver serving the domains that the certificates were requested for. During the request process a file is placed in the specified webroot and used to authenticate the domain(s) for the request. Essentially, the command said:
myfirstdomain.com and myseconddomain.com use /home/www/vhosts/default/public for the document root, so place your files there
The Let’s Encrypt CA issues short-lived certificates (90 days), so you need to keep renewing them, but don’t worry as that is as easy as it was to create them in the first place! You can test that renewal works OK as follows (without --dry-run, this will renew all certificates that you have):
$ sudo letsencrypt renew --dry-run
#
#-------------------------------------------------------------------------------
#Processing /etc/letsencrypt/renewal/myfirstdomain.com.conf
#-------------------------------------------------------------------------------
#** DRY RUN: simulating 'letsencrypt renew' close to cert expiry
#** (The test certificates below have not been saved.)
#
#Congratulations, all renewals succeeded. The following certs have been renewed:
# /etc/letsencrypt/live/myfirstdomain.com/fullchain.pem (success)
#** DRY RUN: simulating 'letsencrypt renew' close to cert expiry
#** (The test certificates above have not been saved.)
#
#IMPORTANT NOTES:
# - Your account credentials have been saved in your Certbot
# configuration directory at /etc/letsencrypt. You should make a
# secure backup of this folder now. This configuration directory will
# also contain certificates and private keys obtained by Certbot so
# making regular backups of this folder is ideal.
Automating renewal
The test run for renewal worked fine, so we can now schedule this to take place automatically. I’m using systemd, so the following example uses timers, but cron or similar could be used too. Here’s how to make systemd run the scheduled renew for you at 06:00 – the renew process will automatically proceed for any previously-obtained certificates that expire in less than 30 days.
$ sudo cat <<EOF > /etc/systemd/system/cert-renewal.service
[Unit]
Description=SSL renewal
[Service]
Type=simple
ExecStart=/usr/bin/letsencrypt renew --quiet
User=root
Group=root
EOF
$ sudo cat <<EOF > /etc/systemd/system/cert-renewal.timer
[Unit]
Description=Automatic SSL renewal
[Timer]
OnCalendar=*-*-* 06:00:00
Persistent=true
[Install]
WantedBy=timers.target
EOF
$ sudo systemctl enable cert-renewal.timer
Created symlink from /etc/systemd/system/timers.target.wants/cert-renewal.timer to /etc/systemd/system/cert-renewal.timer.
$ sudo systemctl start cert-renewal.timer
$ sudo systemctl list-timers
NEXT                         LEFT     LAST  PASSED  UNIT                ACTIVATES
Tue 2016-11-01 06:00:00 UTC  6h left  n/a   n/a     cert-renewal.timer  cert-renewal.service
Getting started with Vault
Firstly, a quick reminder that this is not an in-depth review, how-to or necessarily best-practice Vault installation as that is beyond the scope of this post. It is just to get you going to test things out, so please read up on the Vault documentation if you want to use it more seriously.
Whilst there is a development server that you can fire up with the command vault server -dev to get yourself testing a little quicker, we’re going to take a little extra time and configure it ourselves and make the data persistent. Vault supports a number of backends for data storage, including Zookeeper, Amazon S3 and MySQL, however the 3 maintained by HashiCorp are consul, file and inmem. The memory storage backend does not provide persistent data, so whilst there could possibly be uses for this it is really only useful for development and testing – it is the storage backend used with the -dev option to the server command. Rather than tackle the installation and configuration of Consul during this post, we’ll use file storage instead.
Before starting the server we’ll create a config, which can be written in one of 2 formats – HCL (HashiCorp Configuration Language) or JSON (JavaScript Object Notation). We’ll use HCL as it is a little cleaner and saves us a little extra typing!
So, we’ve now set up a user and some directories to store the config, SSL certificate and key, and also the data, restricting access to the vault user. The config that we wrote specifies that we will use the file backend, storing data in /usr/local/vault/data, and the listener that will be providing TLS encryption using our certificate from Let’s Encrypt. The final setting, disable_mlock is not recommended for production and is being used to avoid some extra configuration during this post. More details about the other options available for configuration can be found in the Server Configuration section of the online documentation.
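Since the config file itself isn’t reproduced here, a sketch of what /etc/vault/demo.hcl could look like, matching the description above, is shown below (the certificate and key paths are assumptions – point them at copies of the fullchain.pem and privkey.pem issued by Let’s Encrypt, readable by the vault user):

```hcl
backend "file" {
  path = "/usr/local/vault/data"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/ssl/vault/fullchain.pem"
  tls_key_file  = "/etc/ssl/vault/privkey.pem"
}

# Not recommended for production; avoids extra mlock configuration for this demo
disable_mlock = true
```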
Please note that the Vault datadir should be kept secured as it contains all of the keys and secrets. In the example, we have done this by placing it in the vault user’s home directory and only allowing the vault user access. You can take this further by restricting local access (via logins) and access control lists
Starting Vault
Time to start the server and see if everything is looking good!
$ sudo -su vault vault server -config=/etc/vault/demo.hcl >/tmp/vault-debug.log 2>&1 &
$ jobs
[1]  + running    sudo -su vault vault server -config=/etc/vault/demo.hcl > /tmp/vault-debug.log 2>&1
$ VAULT_ADDR=https://myfirstdomain.com:8200 vault status
Error checking seal status: Error making API request.
URL: GET https://myfirstdomain.com:8200/v1/sys/seal-status
Code: 400. Errors:
* server is not yet initialized
Whilst it looks like something is wrong (we need to initialize the server), it does mean that everything is otherwise working as expected. So, we’ll initialize Vault, which is a pretty simple task, but you do need to make note/store some of the information that you will be given by the server during initialization – the unseal tokens and initial root key. You should distribute these to somewhere safe, but for now we’ll store them with the config.
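With a Vault of this vintage (0.6.x), initialization and unsealing look something along these lines – by default you are given 5 unseal keys, any 3 of which are required:

```shell
# Initialize the vault (run once); record the unseal keys and initial root token
$ VAULT_ADDR=https://myfirstdomain.com:8200 vault init

# Unseal, supplying a different unseal key each time (3 of the 5 are needed)
$ vault unseal
$ vault unseal
$ vault unseal

# Check that the server now reports itself as unsealed
$ vault status
```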
There we go, the vault is initialized and the status command now returns details and confirmation that it is up and running. It is worth noting here that each time you start Vault it will be sealed, which means that it cannot be accessed until 3 unseal keys have been used with vault unseal – for additional security here you would ensure that a single person cannot know any 3 keys, so that it always requires more than one person to (re)start the service.
Setting up a policy
Policies allow you to set access control restrictions that determine the data that authenticated users have access to. Once again, the documents used to write policies are in either the HCL or JSON format. They are easy to write and apply, the only catch being that the policies associated with a token cannot be changed (added/removed) once the token has been issued; you need to revoke the token and apply the new policies. However, if you want to change the policy rules then this can be done on-the-fly, as modifications apply on the next call to Vault.
When we initialized the server we were given the initial root key and we now need to use that in order to start configuring the server.
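As a sketch, the demo policy used in the rest of this post could be as simple as the following (the path reflects where we will store the MySQL secrets shortly; the policy name and file name are our own choices):

```hcl
# demo.hcl - allow read access to the MySQL secrets
path "secret/mysql/*" {
  policy = "read"
}
```

With VAULT_TOKEN set to the initial root token, it would be applied with something like `vault policy-write demo demo.hcl`.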
We have only added one policy here, but you should really create as many policies as you need to suitably control access amongst the variety of humans and applications that may be using the service. As with any kind of data storage, planning how to store your data is important, as it will help you write more compact policies with the level of granularity that you require. Writing everything at the top level of /secret will most likely bring you headaches, or long policy definitions!
TLS authentication for MySQL secrets
We’re getting close to adding our first secret to Vault, but first of all we need a way to authenticate our access. Vault provides an API for access to your stored secrets, along with a wealth of commands via direct use of the vault binary, as we are doing at the moment. We will now enable the cert authentication backend, which allows authentication using SSL/TLS client certificates.
(vault)$ vault auth-enable cert
Successfully enabled 'cert' at 'cert'!
Generate a client certificate using OpenSSL
The TLS authentication backend accepts certificates that are either signed by a CA or self-signed, so let’s quickly create ourselves a self-signed SSL certificate using openssl to use for authentication.
# Create working directory for SSL management and copy in the config
$ mkdir ~/.ssl && cd $_
$ cp /usr/lib/ssl/openssl.cnf .
# Create a 4096-bit CA
$ openssl genrsa -des3 -out ca.key 4096
Generating RSA private key, 4096 bit long modulus
...........++
..........................................................................++
e is 65537 (0x10001)
Enter pass phrase for ca.key:
Verifying - Enter pass phrase for ca.key:
$ openssl req -config ./openssl.cnf -new -x509 -days 365 -key ca.key -out ca.crt
Enter pass phrase for ca.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) [Some-Place]:
Organization Name (eg, company) [Percona]:
Organizational Unit Name (eg, section) [Demo]:
Common Name (e.g. server FQDN or YOUR name) [ceri]:
Email Address [thisisnotme@myfirstdomain.com]:
# Create a 4096-bit Client Key and CSR
$ openssl genrsa -des3 -out client.key 4096
Generating RSA private key, 4096 bit long modulus
......................++
..................................++
e is 65537 (0x10001)
Enter pass phrase for client.key:
Verifying - Enter pass phrase for client.key:
$ openssl req -config ./openssl.cnf -new -key client.key -out client.csr
Enter pass phrase for client.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) [Some-Place]:
Organization Name (eg, company) [Percona]:
Organizational Unit Name (eg, section) [Demo]:
Common Name (e.g. server FQDN or YOUR name) [ceri]:
Email Address [thisisnotme@myfirstdomain.com]:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
# Self-sign
$ openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
Signature ok
subject=/C=GB/ST=Some-State/L=Some-Place/O=Percona/OU=Demo/CN=ceri/emailAddress=thisisnotme@myfirstdomain.com
Getting CA Private Key
Enter pass phrase for ca.key:
# Create an unencrypted copy of the client key
$ openssl rsa -in client.key -out privkey.pem
Enter pass phrase for client.key:
writing RSA key
# Copy the certificate for Vault access
$ sudo cp client.crt /etc/ssl/vault/user.pem
OK, there was quite a lot of information there. You can edit openssl.cnf to set reasonable defaults for yourself and save time. In brief, we have created our own CA, created a self-signed certificate and then created a single PEM certificate with a decrypted key (this avoids specifying the password in order to use it – you may wish to leave the password in place for added security, assuming that your client application can request the password).
Adding an authorisation certificate to Vault
Now that we have created a certificate and a policy we now need to allow authentication to occur using the certificate. We will give the token a 1-hour expiration and allow access to the MySQL secrets via the demo policy that we created in the previous step.
(vault)$ vault write auth/cert/certs/demo \
    display_name=demo \
    policies=demo \
    certificate=@${VAULT_SSL}/user.pem \
    ttl=3600
Success! Data written to: auth/cert/certs/demo
$ curl --cert user.pem --key privkey.pem ${VAULT_ADDR}/v1/auth/cert/login -X POST
{"request_id":"d5715ce1-2c6c-20c8-83ef-ce6259ad9110","lease_id":"","renewable":false,"lease_duration":0,"data":null,"wrap_info":null,"warnings":null,"auth":{"client_token":"e3b98fac-2676-9f44-fdc2-41114360d2fd","accessor":"4c5b4eb5-4faf-0b01-b732-39d309afd216","policies":["default","demo"],"metadata":{"authority_key_id":"","cert_name":"demo","common_name":"thisisnotme@myfirstdomain.com","subject_key_id":""},"lease_duration":600,"renewable":true}}
Awesome! We requested our first client token using an SSL client certificate, we are logged in, and we were given our access token (client_token) in the response, which provides us with a lease (lease_duration) to go ahead and make requests as a client without reauthentication – but there is nothing in the vault right now.
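The pieces a client actually needs live inside the auth object of that JSON. A minimal Python sketch of pulling out the token from a response shaped like the one above (abridged here to the relevant fields):

```python
import json

# An abridged version of the JSON returned by the login call above
response = """
{"auth": {"client_token": "e3b98fac-2676-9f44-fdc2-41114360d2fd",
          "accessor": "4c5b4eb5-4faf-0b01-b732-39d309afd216",
          "policies": ["default", "demo"],
          "lease_duration": 600,
          "renewable": true}}
"""

auth = json.loads(response)["auth"]
token = auth["client_token"]  # send this as the X-Vault-Token header on later requests
print(token)
```

When the lease runs out, the client simply repeats the certificate login to obtain a fresh token.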
Ssshh!! It’s secret!
“The time has come,” the Vault master said, “to encrypt many things: our keys and passwords and top-secret notes, our MySQL DSNs and strings.”
Perhaps the easiest way to use Vault with your application is to store information there as you would do in a configuration file and read it when the application first requires it. An example of such information is the Data Source Name (DSN) for a MySQL connection, or perhaps the information needed to dynamically generate a .my.cnf. As this is about using Vault with MySQL we will do exactly that and store the user, password and connection method as our first secret, reading it back using the command line tool to check that it looks as expected.
(vault)$ vault write secret/mysql/test password="mysupersecretpassword" user="percona" socket="/var/run/mysqld/mysqld.sock"
Success! Data written to: secret/mysql/test
(vault)$ vault read secret/mysql/test
Key                 Value
---                 -----
refresh_interval    768h0m0s
password            mysupersecretpassword
socket              /var/run/mysqld/mysqld.sock
user                percona
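The values read back map naturally onto a .my.cnf, as mentioned earlier. A hypothetical sketch of rendering one from the secret’s key/value data (the helper name and the fixed key order are our own choices):

```python
def render_my_cnf(secret):
    """Render a minimal .my.cnf [client] section from a secret's key/value data."""
    lines = ["[client]"]
    for key in ("user", "password", "socket"):
        if key in secret:
            lines.append(f"{key} = {secret[key]}")
    return "\n".join(lines) + "\n"

# The key/value pairs stored in secret/mysql/test above
secret = {
    "user": "percona",
    "password": "mysupersecretpassword",
    "socket": "/var/run/mysqld/mysqld.sock",
}
print(render_my_cnf(secret))
```

An application could write this to a temporary file at startup, or better still keep the values in memory only.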
A little while back (hopefully less than 1 hour ago!) we authenticated using cURL and gained a token, so now that we have something secret to read we can try it out. Fanfares and trumpets at the ready…
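The read itself is a request along these lines, reusing the client certificate and the client_token from the login response earlier:

```shell
$ curl --cert user.pem --key privkey.pem \
    -H "X-Vault-Token: e3b98fac-2676-9f44-fdc2-41114360d2fd" \
    ${VAULT_ADDR}/v1/secret/mysql/test 2>/dev/null | json_pp
```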
We did it! Now there is no longer the need to store passwords in your code or config files; you can just go and get them from Vault when you need them, such as when your application starts (holding them in memory), or on-demand if your application can tolerate the additional latency, etc. You would need to take further steps to make sure that your application is tolerant of Vault going down, as well as providing an HA setup of Vault to minimise the risk of the secrets being unavailable.
It doesn’t stop here though…
On-demand MySQL grants
Vault acts like a virtual filesystem and uses the generic storage backend by default, mounted as /secret, but thanks to its powerful abstraction it is possible to mount many other backends such as an SQL database, AWS IAM, HSMs and much more. We have kept things simple and have been using the generic backend so far. You can view the available (mounted) backends using the mounts command:
(vault)$ vault mounts
Path     Type     Default TTL  Max TTL  Description
secret/  generic  system       system   generic secret storage
sys/     system   n/a          n/a      system endpoints used for control, policy and debugging
We are now going to enable the MySQL backend, add the management connection (which will use the auth_socket plugin) and then request a new MySQL user that will auto-expire!
# Create a dedicated MySQL user account
$ mysql -Bsse "CREATE USER vault@localhost IDENTIFIED WITH auth_socket; GRANT CREATE USER, SELECT, INSERT, UPDATE ON *.* TO vault@localhost WITH GRANT OPTION;"
# Enable the MySQL backend and set the connection details
(vault)$ vault mount mysql
(vault)$ vault write mysql/config/connection connection_url="vault:vault@unix(/var/run/mysqld/mysqld.sock)/"
Read access to this endpoint should be controlled via ACLs as it will return the connection URL as it is, including passwords, if any.
# Write the template for the readonly role
(vault)$ vault write mysql/roles/readonly \
    sql="CREATE USER '{{name}}'@'%' IDENTIFIED WITH mysql_native_password BY '{{password}}' PASSWORD EXPIRE INTERVAL 1 DAY; GRANT SELECT ON *.* TO '{{name}}'@'%';"
Success! Data written to: mysql/roles/readonly
# Set the lease on MySQL grants
(vault)$ vault write mysql/config/lease lease=1h lease_max=12h
Success! Data written to: mysql/config/lease
Here you can see that a template is created so that you can customise the grants per role. We created a readonly role, so it just has SELECT access. We have set an expiration on the account so that MySQL will automatically mark the password as expired and prevent access. This is not strictly necessary, since Vault will remove the user accounts that it created as it expires the tokens. However, adding the extra level in MySQL allows you to set the Vault lease (which seems to be global) a little longer than required and vary the effective lifetime per role using MySQL password expiration. You could also use it as a way of tracking which Vault-generated MySQL accounts are going to expire soon. The important part is to ensure that the application is tolerant of reauthentication, whether it hands off work whilst doing so, accepts added latency, or perhaps the process terminates and respawns.
Now we will authenticate and request our user to connect to the database with.
Oh, what happened? Well, remember the policy that we created earlier? We hadn’t allowed access to the MySQL role generator, so we need to update and apply the policy.
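The updated policy simply adds read access to the credential-generation endpoint alongside the existing rule – something like this sketch (re-applied with the same `vault policy-write demo` command as before; policy rule changes take effect on the next call to Vault):

```hcl
path "secret/mysql/*" {
  policy = "read"
}

# Allow MySQL credential generation via the readonly role
path "mysql/creds/readonly" {
  policy = "read"
}
```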
Now that we have updated the policy to allow access to the readonly role (requests go via mysql/creds when requesting access) we can check that the policy has applied and whether we get a user account for MySQL.
# Request a user account
$ curl --cert user.pem --key privkey.pem -H 'Content-type: application/json' -H 'X-Vault-Token: 018e6feb-65c4-49f2-ae30-e4fbba81e687' ${VAULT_ADDR}/v1/mysql/creds/readonly -X GET 2>/dev/null | json_pp
{
"request_id" : "7b45c9a1-bc46-f410-7af2-18c8e91f43de",
"lease_id" : "mysql/creds/readonly/c661426c-c739-5bdb-cb7a-f51f74e16634",
"warnings" : null,
"lease_duration" : 3600,
"data" : {
"password" : "099c8f2e-588d-80be-1e4c-3c2e20756ab4",
"username" : "read-cert-401f2c"
},
"wrap_info" : null,
"renewable" : true,
"auth" : null
}
# Test MySQL access
$ mysql -h localhost -u read-cert-401f2c -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 17
Server version: 5.7.14-8-log Percona Server (GPL), Release '8', Revision '1f84ccd'
Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show grants;
+-----------------------------------------------+
| Grants for read-cert-401f2c@% |
+-----------------------------------------------+
| GRANT SELECT ON *.* TO 'read-cert-401f2c'@'%' |
+-----------------------------------------------+
1 row in set (0.00 sec)
# Display the full account information
$ pt-show-grants --only='read-cert-401f2c'@'%'
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.7.14-8-log at 2016-11-08 23:28:37
-- Grants for 'read-cert-401f2c'@'%'
CREATE USER IF NOT EXISTS 'read-cert-401f2c'@'%';
ALTER USER 'read-cert-401f2c'@'%' IDENTIFIED WITH 'mysql_native_password' AS '*FF157E33408E1FBE707B5FF89C87A2D14E8430C2' REQUIRE NONE PASSWORD EXPIRE INTERVAL 1 DAY ACCOUNT UNLOCK;
GRANT SELECT ON *.* TO 'read-cert-401f2c'@'%';
Hurrah! Now we don’t even need to go and create a user; the application can get one when it needs one. We’ve made the account auto-expire so that the credentials are only valid for 1 day, regardless of Vault expiration, and we’ve also reduced the amount of time that the token is valid, so we’ve done a pretty good job of limiting the window of opportunity for any rogue activity.
We’ve covered quite a lot in this post, some detail for which has been left out to keep us on track. The online documentation for OpenSSL, Let’s Encrypt and Vault is pretty good, so you should be able to take a deeper dive should you wish to. Hopefully, this post has given a good enough introduction to Vault to get you interested and looking to test it out, as well as bringing the great Let’s Encrypt service to your attention, so that there’s very little reason not to provide a secure online experience for your readers, customers and services.