ActiveFence comes out of the shadows with $100M in funding and tech that detects online harm, now valued at $500M+

Online abuse, disinformation, fraud and other malicious content are growing and getting more complex to track. Today, a startup called ActiveFence is coming out of the shadows to announce significant funding on the back of a surge of large organizations using its services. ActiveFence has quietly built a tech platform to suss out threats as they are being formed and planned, to make it easier for trust and safety teams to combat them on platforms.

The startup, co-headquartered in New York and Tel Aviv, has raised $100 million, funding that it will use to continue developing its tools and to continue expanding its customer base. To date, ActiveFence says that its customers include companies in social media, audio and video streaming, file sharing, gaming, marketplaces and other technologies — it has yet to disclose any specific names but says that its tools collectively cover “billions” of users. Governments and brands are two other categories that it is targeting as it continues to expand. It has been around since 2018 and is growing at around 100% annually.

The $100 million being announced today actually covers two rounds: Its most recent Series B led by CRV and Highland Europe, as well as a Series A it never announced led by Grove Ventures and Norwest Venture Partners. Vintage Investment Partners, Resolute Ventures and other unnamed backers also participated. It’s not disclosing valuation but I understand it’s over $500 million.

“We are very honored to be ActiveFence partners from the very earliest days of the company, and to be part of this important journey to make the internet a safer place and see their unprecedented success with the world’s leading internet platforms,” said Lotan Levkowitz, general partner at Grove Ventures, in a statement.

The increased presence of social media and online chatter on other platforms has put a strong spotlight on how those forums are used by bad actors to spread malicious content. ActiveFence’s particular approach is a set of algorithms that tap innovations in AI, such as natural language processing, to map relationships between conversations. It crawls the obvious as well as the less obvious and harder-to-reach parts of the internet, some 3 million sources in all, to pick up on the chatter where a lot of malicious content and campaigns are born, before they become higher-profile issues. It’s built both on the concept of big data analytics and on the understanding that the long tail of content online has value if it can be tapped effectively.

“We take a fundamentally different approach to trust, safety and content moderation,” Noam Schwartz, the co-founder and CEO, said in an interview. “We are proactively searching the darkest corners of the web and looking for bad actors in order to understand the sources of malicious content. Our customers then know what’s coming. They don’t need to wait for the damage, or for internal research teams to identify the next scam or disinformation campaign. We work with some of the most important companies in the world, but even tiny, super niche platforms have risks.”

The insights that ActiveFence gathers are then packaged up in an API that its customers can then feed into whatever other systems they use to track or mitigate traffic on their own platforms.

ActiveFence is not the only company building technology to help platform operators, governments and brands have a better picture of what is going on in the wider online world. Factmata has built algorithms to better understand and track sentiments online; Primer (which also recently raised a big round) also uses NLP to help its customers track online information, with its customers including government organizations that used its technology to track misinformation during election campaigns; Bolster (formerly called RedMarlin) is another.

Some of the bigger platforms have also gotten more proactive in bringing tracking technology and talent in-house: Facebook acquired Bloomsbury AI several years ago for this purpose; Twitter has acquired Fabula (and is working on bigger efforts like Birdwatch to build better tools); and earlier this year Discord picked up Sentropy, another online abuse tracker. In some cases, companies that more regularly compete against each other for eyeballs and dollars are even teaming up to collaborate on efforts.

Indeed, it may well be that ultimately there will exist multiple efforts and multiple companies doing good work in this area, not unlike other corners of the world of security, which might need more than one hammer thrown at problems to crack them. In this particular case, the growth of the startup to date, and its effectiveness in identifying early warning signs, is one reason investors have been interested in ActiveFence.

“We are pleased to support ActiveFence in this important mission,” commented Izhar Armony, a general partner at CRV, in a statement. “We believe they are ready for the next phase of growth and that they can maintain leadership in the dynamic and fast-growing trust and safety market.”

“ActiveFence has emerged as a clear leader in the developing online trust and safety category. This round will help the company to accelerate the growth momentum we witnessed in the past few years,” said Dror Nahumi, general partner at Norwest Venture Partners, in a statement.


Quantexa raises $153M to build out AI-based big data tools to track risk and run investigations

As financial crime has become significantly more sophisticated, so too have the tools that are used to combat it. Now, Quantexa — one of the more interesting startups that has been building AI-based solutions to help detect and stop money laundering, fraud and other illicit activity — has raised a growth round of $153 million, both to continue expanding that business in financial services and to bring its tools into a wider context, so to speak: linking up the dots around all customer and other data.

“We’ve diversified outside of financial services and [are] working with government, healthcare, telcos and insurance,” Vishal Marria, its founder and CEO, said in an interview. “That has been substantial. Given the whole journey that the market’s gone through in contextual decision intelligence as part of bigger digital transformation, [this] was inevitable.”

The Series D values the London-based startup between $800 million and $900 million on the heels of Quantexa growing its subscriptions revenues 108% in the last year.

Warburg Pincus led the round, with existing backers Dawn Capital, AlbionVC, Evolution Equity Partners (a specialist cybersecurity VC), HSBC, ABN AMRO Ventures and British Patient Capital also participating. The valuation is a significant hike up for Quantexa, which was valued between $200 million and $300 million in its Series C last July. It has now raised over $240 million to date.

Quantexa got its start out of a gap in the market that Marria identified when he was working as a director at Ernst & Young tasked with helping its clients with money laundering and other fraudulent activity. As he saw it, there were no truly useful systems in the market that efficiently tapped the world of data available to companies — matching up and parsing both their internal information as well as external, publicly available data — to get more meaningful insights into potential fraud, money laundering and other illegal activities quickly and accurately.

Quantexa’s machine learning system approaches that challenge as a classic big data problem — too much data for a human to parse on their own, but small work for AI algorithms processing huge amounts of that data for specific ends.

Its so-called “Contextual Decision Intelligence” models (the name Quantexa is meant to evoke “quantum” and “context”) were built initially specifically to address this for financial services, with AI tools for assessing risk and compliance and identifying financial criminal activity, leveraging relationships that Quantexa has with partners like Accenture, Deloitte, Microsoft and Google to help fill in more data gaps.

The company says its software — and this, not the data, is what is sold to companies to use over their own data sets — has handled up to 60 billion records in a single engagement. It then presents insights in the form of easily digestible graphs and other formats so that users can better understand the relationships between different entities and so on.

Today, financial services companies still make up about 60% of the company’s business, Marria said, with seven of the top 10 U.K. and Australian banks and six of the top 14 financial institutions in North America among its customers. (The list includes its strategic backer HSBC, as well as Standard Chartered Bank and Danske Bank.)

But alongside those — spurred by a huge shift in the market toward relying significantly more on wider data sets, by businesses updating their systems in recent years, and by the fact that, in the last year, online activity has in many cases become the “only” activity — Quantexa has expanded more significantly into other sectors.

“The Financial crisis [of 2007] was a tipping point in terms of how financial services companies became more proactive, and I’d say that the pandemic has been a turning point around other sectors like healthcare in how to become more proactive,” Marria said. “To do that you need more data and insights.”

So in the last year in particular, Quantexa has expanded to include other verticals facing financial crime, such as healthcare, insurance, government (for example in tax compliance) and telecoms/communications, but in addition to that, it has continued to diversify what it does to cover more use cases, such as building more complete customer profiles that can be used for KYC (know your customer) compliance or to serve them with more tailored products. Working with government, it’s also seeing its software getting applied to other areas of illicit activity, such as tracking and identifying human trafficking.

In all, Quantexa has “thousands” of customers in 70 markets. Quantexa cites figures from IDC that estimate the market for such services — both financial crime and more general KYC services — is worth about $114 billion annually, so there is still a lot more to play for.

“Quantexa’s proprietary technology enables clients to create single views of individuals and entities, visualized through graph network analytics and scaled with the most advanced AI technology,” said Adarsh Sarma, MD and co-head of Europe at Warburg Pincus, in a statement. “This capability has already revolutionized the way KYC, AML and fraud processes are run by some of the world’s largest financial institutions and governments, addressing a significant gap in an increasingly important part of the industry. The company’s impressive growth to date is a reflection of its invaluable value proposition in a massive total available market, as well as its continued expansion across new sectors and geographies.”

Interestingly, Marria admitted to me that the company has been approached by big tech companies and others that work with them as an acquisition target — no real surprises there — but longer term, he would like Quantexa to consider how it continues to grow on its own, with an independent future very much in his distant sights.

“Sure, an acquisition to the likes of a big tech company absolutely could happen, but I am gearing this up for an IPO,” he said.


Cloud security platform Netskope boosts valuation to $7.5B following $300M raise

Netskope, focused on Secure Access Service Edge architecture, announced Friday a $300 million investment round on a post-money valuation of $7.5 billion.

The oversubscribed insider investment was led by ICONIQ Growth, which was joined by other existing investors, including Lightspeed Venture Partners, Accel, Sequoia Capital Global Equities, Base Partners, Sapphire Ventures and Geodesic Capital.

Netskope co-founder and CEO Sanjay Beri told TechCrunch that since its founding in 2012, the company’s mission has been to guide companies through their digital transformation by finding what is most valuable to them — sensitive data — and protecting it.

“What we had before in the market didn’t work for that world,” he said. “The theory is that digital transformation is inevitable, so our vision is to transform that market so people could do that, and that is what we are building nearly a decade later.”

With this new round, Netskope continues to rack up large rounds: it raised $340 million last February, which gave it a valuation of nearly $3 billion. Prior to that, it was a $168.7 million round at the end of 2018.

As with its other rounds, the company was not actively seeking new capital; rather, it was “an inside round with people who know everything about us,” Beri said.

“The reality is we could have raised $1 billion, but we don’t need more capital,” he added. “However, having a continued strong balance sheet isn’t a bad thing. We are fortunate to be in that situation, and our destination is to be the most impactful cybersecurity company in the world.”

Beri said the company just completed a “three-year journey building the largest cloud network that is 15 milliseconds from anyone in the world,” and intends to invest the new funds into continued R&D, expanding its platform and Netskope’s go-to-market strategy to meet demand in a market he estimated will be worth $30 billion by 2024.

Even pre-pandemic the company was in hypergrowth, and over the past year it surpassed the market’s average annual growth of 50%, he added.

Today’s investment brings the total raised by Santa Clara-based Netskope to just over $1 billion, according to Crunchbase data.

With the company racking up that kind of capital, the next natural step would be to become a public company. Beri admits that Netskope could be public now, though it doesn’t have to do it for the traditional reasons of raising capital or marketing.

“Going public is one day on our path, but you probably won’t see us raise another private round,” Beri said.



Authenticate Percona Server for MongoDB Users via Native LDAP

Percona Server for MongoDB supports two different ways of authenticating against an LDAP service:

  • operating system libraries (aka Native LDAP)
  • saslauthd (aka LDAP proxy)

We’ve talked about the LDAP proxy option many times already. In this post, I am going to discuss the Native LDAP approach.

Note: for the purposes of the examples, I am considering a RHEL-based distribution.


First of all, the following packages are needed at the operating system level:

yum install cyrus-sasl-devel cyrus-sasl-md5 cyrus-sasl-plain cyrus-sasl-gssapi cyrus-sasl-lib

If any of these are missing, you most likely will encounter some cryptic errors. For example, something like the following could appear in your mongod.log:

2021-07-22T14:29:14.905-0500 E QUERY [js] Error: SASL(-4): no mechanism available: No worthy mechs found :

By default, MongoDB creates a TLS connection when binding to the LDAP server. The next step is to make the certificate for the company’s internal Certificate Authority (CA) available to our MongoDB server. We can do this by placing the certificate file in the /etc/openldap/certs/ directory:

cp my_CA.crt /etc/openldap/certs/

Next, we need to point our server to the CA certificate we copied, by adding the following line to /etc/openldap/ldap.conf:

tee -a /etc/openldap/ldap.conf <<EOF
TLS_CACERT /etc/openldap/certs/my_CA.crt
EOF

MongoDB Configuration

Once the prerequisites are fulfilled, we need to adjust our mongod.conf to authenticate against LDAP. We need:

  • a read-only user that allows MongoDB to query LDAP
  • an LDAP queryTemplate to authorize users based on LDAP group membership

If you don’t know what this query string will be, you should work together with the LDAP server administrators to figure it out. The following example is for an Active Directory deployment:

security:
  ldap:
    servers: "ldap.example.com"
    authz:
      queryTemplate: "DC=example,DC=com??sub?(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={USER}))"
    userToDNMapping: '[{ match: "(.+)", ldapQuery: "DC=example,DC=com??sub?(userPrincipalName={0})" }]'
    bind:
      queryUser: "ldapreadonly@example"
      queryPassword: "mypwd"
setParameter:
  authenticationMechanisms: "PLAIN,SCRAM-SHA-1,SCRAM-SHA-256"

We can also use transformation expressions in order to avoid specifying the complete DN of the authenticating users. In the example above, the {0} is replaced with the first token of the user as specified. If you are logging in as myuser@example.com that would be the string “myuser”, so the query becomes:

ldapQuery: "DC=example,DC=com??sub?(userPrincipalName=myuser)"

This query returns the corresponding user entry from the LDAP directory.


The queryTemplate specified in the config file is the standard AD-specific way to query a user’s groups recursively. The {USER} above is replaced with the transformed username and becomes:

queryTemplate: "DC=example,DC=com??sub?(&(objectClass=group)(member:1.2.840.113556.1.4.1941:=cn=myuser,dc=example,dc=com))"

The authenticationMechanisms value as specified allows MongoDB to authenticate both LDAP and built-in users. The PLAIN keyword might raise some eyebrows, but remember that the connection is still encrypted unless you set transportSecurity: none.
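To make these substitutions concrete, here is a toy Python sketch of how the {0} and {USER} placeholders get filled in. This is purely illustrative, not how mongod actually implements it; in particular, the match expression that strips the @example.com suffix is an assumption made so that the example produces the “myuser” transformation described above.

```python
import re

def map_user_to_query(username, match_pattern, ldap_query):
    """Mimic userToDNMapping: substitute the first regex capture
    group of `match_pattern` into the {0} placeholder of `ldap_query`."""
    m = re.match(match_pattern, username)
    if not m:
        return None
    return ldap_query.replace("{0}", m.group(1))

def build_group_query(query_template, transformed_user):
    """Mimic the authz queryTemplate: substitute the transformed
    username into the {USER} placeholder."""
    return query_template.replace("{USER}", transformed_user)

# Assuming a match expression that strips the domain suffix:
query = map_user_to_query(
    "myuser@example.com",
    r"(.+)@example\.com",
    "DC=example,DC=com??sub?(userPrincipalName={0})",
)
print(query)  # DC=example,DC=com??sub?(userPrincipalName=myuser)
```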

Creating Roles for LDAP Groups

We need to create roles in the MongoDB admin database for each of the LDAP groups we are going to be using.

For example, we can create groups for users that require read-only or read-write privileges respectively:

db.getSiblingDB("admin").createRole({
  role: "CN=myapp_rw,CN=Users,DC=example,DC=com",
  privileges: [],
  roles: [
    { role: "readWrite", db: "myapp" }
  ]
})

db.getSiblingDB("admin").createRole({
  role: "CN=myapp_ro,CN=Users,DC=example,DC=com",
  privileges: [],
  roles: [
    { role: "read", db: "myapp" }
  ]
})
In this case, any authenticating users that are members of the myapp_ro group in LDAP will automatically get read-only permissions against the myapp database.

Testing Access

To authenticate using LDAP, the following form can be used:

mongo --username myuser@example.com --password mypwd --authenticationMechanism=PLAIN --authenticationDatabase='$external' --host

Since we left the SCRAM-SHA options in the config file, we are still able to authenticate using MongoDB built-in users as well:

mongo --username root --password mypwd --authenticationDatabase admin --authenticationMechanism=SCRAM-SHA-1 --host

Final Words

We’ve seen how to use the native method to configure LDAP integration. The main benefit of this method is that it requires fewer moving parts than the proxy-based approach.

Keep in mind that Percona Server for MongoDB offers LDAP authentication (and authorization) free of charge in all versions. These features are not available in the MongoDB Community Edition.

You might also want to check the official documentation on this topic, as there are some additional options to deal with things like LDAP-referrals, connection pool sizes, etc.


Opaque raises $9.5M seed to secure sensitive data in the cloud

Opaque, a new startup born out of Berkeley’s RISELab, announced a $9.5 million seed round today to build a solution to access and work with sensitive data in the cloud in a secure way, even with multiple organizations involved. Intel Capital led today’s investment with participation by Race Capital, The House Fund and FactoryHQ.

The company helps customers work with secure data in the cloud while making sure the data they are working on is not being exposed to cloud providers, other research participants or anyone else, says company president Raluca Ada Popa.

“What we do is we use this very exciting hardware mechanism called Enclave, which [operates] deep down in the processor — it’s a physical black box — and only gets decrypted there. […] So even if somebody has administrative privileges in the cloud, they can only see encrypted data,” she explained.

Company co-founder Ion Stoica, who was a co-founder at Databricks, says the startup’s solution helps resolve two conflicting trends. On one hand, businesses increasingly want to make use of data, but at the same time are seeing a growing trend toward privacy. Opaque is designed to resolve this by giving customers access to their data in a safe and fully encrypted way.

The company describes the solution as “a novel combination of two key technologies layered on top of state-of-the-art cloud security—secure hardware enclaves and cryptographic fortification.” This enables customers to work with data — for example to build machine learning models — without exposing the data to others, yet while generating meaningful results.

Popa says this could be helpful for hospitals working together on cancer research, who want to find better treatment options without exposing a given hospital’s patient data to other hospitals, or banks looking for money laundering without exposing customer data to other banks, as a couple of examples.

Investors were likely attracted to the pedigree of Popa, a computer security and applied cryptography professor at UC Berkeley, and Stoica, who is also a Berkeley professor and co-founded Databricks. Both helped found RISELab at Berkeley, where they developed the solution and spun it out as a company.

Mark Rostick, vice president and senior managing director at lead investor Intel Capital says his firm has been working with the founders since the startup’s earliest days, recognizing the potential of this solution to help companies find complex solutions even when there are multiple organizations involved sharing sensitive data.

“Enterprises struggle to find value in data across silos due to confidentiality and other concerns. Confidential computing unlocks the full potential of data by allowing organizations to extract insights from sensitive data while also seamlessly moving data to the cloud without compromising security or privacy,” Rostick said in a statement.

He added, “Opaque bridges the gap between data security and cloud scale and economics, thus enabling inter-organizational and intra-organizational collaboration.”



To guard against data loss and misuse, the cybersecurity conversation must evolve

Data breaches have become a part of life. They impact hospitals, universities, government agencies, charitable organizations and commercial enterprises. In healthcare alone, 2020 saw 640 breaches, exposing 30 million personal records, a 25% increase over 2019 that equates to roughly two breaches per day, according to the U.S. Department of Health and Human Services. On a global basis, 2.3 billion records were breached in February 2021.

It’s painfully clear that existing data loss prevention (DLP) tools are struggling to deal with the data sprawl, ubiquitous cloud services, device diversity and human behaviors that constitute our virtual world.

Conventional DLP solutions are built on a castle-and-moat framework in which data centers and cloud platforms are the castles holding sensitive data. They’re surrounded by networks, endpoint devices and human beings that serve as moats, defining the defensive security perimeters of every organization. Conventional solutions assign sensitivity ratings to individual data assets and monitor these perimeters to detect the unauthorized movement of sensitive data.

Unfortunately, these historical security boundaries are becoming increasingly ambiguous and somewhat irrelevant as bots, APIs and collaboration tools become the primary conduits for sharing and exchanging data.

In reality, data loss is only half the problem confronting a modern enterprise. Corporations are routinely exposed to financial, legal and ethical risks associated with the mishandling or misuse of sensitive information within the corporation itself. The risks associated with the misuse of personally identifiable information have been widely publicized.

However, risks of similar or greater severity can result from the mishandling of intellectual property, material nonpublic information, or any type of data that was obtained through a formal agreement that placed explicit restrictions on its use.

Conventional DLP frameworks are incapable of addressing these challenges. We believe they need to be replaced by a new data misuse protection (DMP) framework that safeguards data from unauthorized or inappropriate use within a corporate environment in addition to its outright theft or inadvertent loss. DMP solutions will provide data assets with more sophisticated self-defense mechanisms instead of relying on the surveillance of traditional security perimeters.


Sources: SentinelOne expects to raise over $1B in NYSE IPO tomorrow, listing with a $10B market cap

After launching its IPO last week with an expected listing price range of $26 to $29 per share, cybersecurity company SentinelOne is going public tomorrow with some momentum behind it. Sources close to the deal tell us that the company, which will be trading under the ticker “S” on the New York Stock Exchange, is expecting to raise over $1 billion in its IPO, putting its valuation at around $10 billion.

Last week, when the company first announced the IPO, it was projected that it would raise $928 million at the top end of its range, giving SentinelOne a valuation of around $7 billion. Coming in at a $10 billion market capitalization would make SentinelOne the most valuable cybersecurity IPO to date.

A source said that the road show has been stronger than anticipated, in part because of the strength of one of its competitors, CrowdStrike, which is publicly traded and currently sitting at a market cap of $58 billion.

The other reason for the response is a slightly grimmer one: Cybersecurity continues to be a major issue for businesses of all sizes, public organizations, governments and individuals. “No one wants to see another SolarWinds, and there is no reason that there shouldn’t be more than one or two strong players,” a source said.

As is the bigger trend in cybersecurity, Israel-hatched, Mountain View-based SentinelOne’s approach to combat that is artificial intelligence — and in its case specifically, a machine-learning-based solution that it sells under the brand Singularity that focuses on endpoint security, working across the entire edge of the network to monitor and secure laptops, phones, containerised applications and the many other devices and services connected to a network.

Last year, endpoint security solutions were estimated to be around an $8 billion market, and analysts project that it could be worth as much as $18.4 billion by 2024 — another reason why SentinelOne may have moved up the timetable on its IPO (last year the company’s CEO Tomer Weingarten had told me he thought the company had one or two years left as a private company before considering an IPO, a timeline it clearly decided was worth speeding up).

SentinelOne raised $267 million on a $3.1 billion valuation led by Tiger Global as recently as last November, but it has been expanding rapidly. Growth last quarter was 116% compared to the same period a year before, and it now has more than 4,700 customers and annual recurring revenue of $161 million, according to its S-1 filing. It is also still not profitable, posting a net loss of $64 million in the last quarter.


DevOps platform JFrog acquires AI-based IoT and connected device security specialist Vdoo for $300M

JFrog, the company best known for a platform that helps developers continuously manage software delivery and updates, is making a deal to help it expand its presence and expertise in an area that has become increasingly connected to DevOps: security. The company is acquiring Vdoo, which has built an AI-based platform that can be used to detect and fix vulnerabilities in the software systems that work with and sit on IoT and connected devices. The deal — in a mix of cash and stock — is valued at approximately $300 million, JFrog confirmed to me.

Sunnyvale-based, Israeli-founded JFrog is publicly traded on Nasdaq, where it went public last September, and currently it has a market cap of $4.65 billion. Vdoo, meanwhile, had raised about $70 million from investors that include NTT, Dell, GGV and Verizon (disclaimer: Verizon owns TechCrunch), and when we covered its most recent funding round, we estimated that the valuation was somewhere between $100 million and $200 million, making this a decent return.

Shlomi Ben Haim, JFrog’s co-founder and CEO, said that his company’s turn to focusing deeper on security, and making this acquisition in particular to fill out that strategy, are a natural progression in its aim to build out an end-to-end platform for the DevOps team.

“When we started JFrog, the main challenge was to educate the market on what we saw as most important priorities when it comes to building, testing and deploying software,” he said. Then, sometime around 2015-2016, he said they started to realize there was a “crack” in the system, “a crack called security.” InfoSec engineers and developers sometimes work at cross purposes: as “developers became too fast,” the work they were doing inadvertently led to a lot of security vulnerabilities.

JFrog has been building a number of tools since then to address that and to bring the collective priorities together, such as its X-ray product. And indeed, Vdoo is not JFrog’s first foray into security, but it represents a significant step deeper into the hardware and systems that are being run on software. “It’s a very important leap forward,” Ben Haim said.

For its part, Vdoo was born out of a realization as well as a challenging mission: IoT and other connected devices — a universe of some 50 billion pieces of hardware as of last year — represents a massive security headache, and not just because of the volume of devices: Each object uses and interacts with software in the cloud and so each instance represents a potential vulnerability, with zero-day vulnerabilities, CVEs, configuration and hardening issues, and standard non-compliance among some of the most common.

While connected-device security up to now has typically focused on monitoring activity on the hardware, how data is moving in and out of it, Vdoo’s approach has been to build a platform that monitors the behavior of the devices themselves on top of that, using AI to compare that behavior to identify when something is not working as it should. Interestingly, this mirrors the kind of binary analysis that JFrog provides in its DevOps platform, making the two complementary to each other.

But what’s notable is that this will give JFrog a bigger play at the edge, since part of Vdoo’s platform works on devices themselves, “micro agents” as the company has described them to me previously, to detect and repair vulnerabilities on endpoints.

While JFrog has built a lot of its own business from the ground up, it has made a number of acquisitions to bolt on technology (one example: Shippable, which it used to bring continuous integration and delivery into its DevOps platform). In this case, Netanel Davidi, the co-founder and CEO of Vdoo (who previously co-founded and sold another security startup, Cyvera, to Palo Alto Networks), said that this was a good fit because the two companies are fundamentally taking the same approaches in their work (another synergy, and a justification for DevOps and InfoSec being more closely knit together, I might add).

“In terms of the fit between the companies, it’s about our approach to binaries,” Davidi said in an interview, noting that the two being on the same page with this approach was fundamental to the deal. “That’s the only way to cover the entire pipeline from the very beginning, when you develop something, all the way to the device or to the server or to the application or to the mobile phone. That’s the only way to truly understand the context and contextual risk.”

He also made a note not just of the tech but of the talent that is coming on with the acquisition: 100 people joining JFrog’s 800.

“If JFrog chose to build something like this themselves, they could have done it,” he said. “But the uniqueness here is that we have built the best security team, the best security researchers, the best vulnerability researchers, the best reverse engineers, which focus not only on embedded systems, and IoT, which is considered to be the hardest thing to learn and to analyze, but also in software artifacts. We are bringing this knowledge along with us.”

JFrog said that Vdoo will continue to operate as a standalone SaaS product for the time being. Updates that are made will be in aid of supporting the JFrog platform and the two aim to have a fully integrated, “holistic” product by 2022.

Along with the deal, JFrog reiterated financial guidance for the next quarter that will end June 30, 2021. It expects revenues of $47.6 million to $48.6 million, with non-GAAP operating income of $0.5 million to $1.5 million and non-GAAP EPS of $0.00 to $0.01, assuming approximately 104 million weighted average diluted shares outstanding. For Full Year 2021, revenues are expected to be $198 million to $204 million, with non-GAAP operating income between $5 million and $7 million and an approximately 3% increase in weighted average diluted shares. JFrog anticipates consolidated operating expenses to increase by approximately $9-10 million for the remainder of 2021, subject to the acquisition closing.


Enabling SSL/TLS Sessions In PgBouncer


PgBouncer is a great piece of technology! Over the years I’ve put it to good use in any number of situations requiring a particular type of control over application processes connecting to a postgres data cluster. However, sometimes it’s been a bit of a challenge when it comes to configuration.

Today, I want to demonstrate one way of conducting a connection session over SSL/TLS.

For our purposes we’re going to make the following assumptions:

  • We are using a typical installation found on CentOS 7.
  • PostgreSQL version 13 is used, but essentially any currently supported version of postgres will work.

Here are the steps enabling SSL connection sessions:

  1. Setup postgres
    • install RPM packages
    • setup remote access
    • create a ROLE with remote login privileges
  2. Setup pgbouncer
    • install RPM packages
    • setup the minimal configuration permitting remote login without SSL
  3. Generate SSL/TLS private keys and certificates
    • TLS certificate for postgres
    • TLS certificate for pgbouncer
    • Create a Certificate Authority (CA) capable of signing the aforementioned certificates
  4. Configure for SSL encrypted sessions
    1. postgres
    2. pgbouncer

Step 1: Setup Postgres

Setting up your postgres server is straightforward:

  • Add the appropriate repository for postgres version 13.
    yum install openssl
    yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
    yum update -y
    yum install -y postgresql13-server
  • The data cluster is initialized.
    /usr/pgsql-13/bin/postgresql-13-setup initdb
  • The data cluster configuration files “pg_hba.conf” and “postgresql.auto.conf” are edited. Note that both IPv4 and IPv6 protocols have been configured.
    echo "
    # \"local\" is for Unix domain socket connections only
    local all all trust
    # IPv4 local connections:
    host all all 127.0.0.1/32 md5
    host all all 0.0.0.0/0 md5
    # IPv6 local connections:
    host all all ::1/128 md5
    host all all ::0/0 md5
    # Allow replication connections from localhost, by a user with the
    # replication privilege.
    local replication all trust
    host replication all 127.0.0.1/32 md5
    host replication all ::1/128 md5
    ##############################################################" > /var/lib/pgsql/13/data/pg_hba.conf

    # update runtime variable "listen_addresses"
    echo "listen_addresses='*' " >> /var/lib/pgsql/13/data/postgresql.auto.conf

    # as root: server start
    systemctl start postgresql-13

Step 2: Setup PgBouncer

# Install the postgres community package connection pooler
yum install -y pgbouncer
# Configure pgbouncer for non-SSL access
mv /etc/pgbouncer/pgbouncer.ini /etc/pgbouncer/pgbouncer.ini_backup

There’s not much to this first iteration configuring pgbouncer. All that is required is to validate that a connection can be made before updating the SSL configuration.

# edit pgbouncer.ini
echo "
[databases]
* = host=localhost

[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 6432
;;any, trust, plain, md5, cert, hba, pam
auth_type = plain
auth_file = /etc/pgbouncer/userlist.txt
admin_users = postgres " > /etc/pgbouncer/pgbouncer.ini

NOTE: best practice recommends hashing the passwords when editing the file userlist.txt. But for our purposes, to keep things simple, we’ll leave the passwords in the clear.
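As a sketch of that best practice: a PgBouncer md5 entry in userlist.txt is the literal string "md5" followed by the MD5 digest of the password concatenated with the username. Using the usr1 user from above:

```shell
# Build a hashed userlist.txt entry instead of a cleartext one.
# PgBouncer's md5 format is: "md5" + md5(password || username)
user="usr1"
pass="usr1"
hash="md5$(printf '%s%s' "$pass" "$user" | md5sum | cut -d' ' -f1)"

# emit the userlist.txt line, e.g.: "usr1" "md5<32 hex chars>"
echo "\"$user\" \"$hash\""
```

The resulting line is a drop-in replacement for the cleartext entry, and the same format is accepted by pg_hba.conf's md5 method on the postgres side.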

# edit userlist.txt 
echo " 
\"usr1\" \"usr1\" 
\"postgres\" \"postgres\" 
" > /etc/pgbouncer/userlist.txt

# as root: server start 
systemctl start pgbouncer

# test connectivity to the postgres server
psql 'host=localhost dbname=postgres user=postgres password=postgres' -c 'select now()' 
psql 'host=localhost dbname=postgres user=usr1 password=usr1' -c 'select now()' 

# test connectivity to pgbouncer 
psql 'host=localhost dbname=postgres user=postgres password=postgres port=6432' -c 'select now()' 
psql 'host=localhost dbname=postgres user=usr1 password=usr1 port=6432' -c 'select now()'

Step 3: Setup SSL/TLS Certificates

Create a root certificate fit for signing certificate requests:

set -e

# assumptions: adjust these for your environment
ROOT=root                               # base name for the CA files (root.key, root.crt, root.pem)
HOST=$(hostname)                        # host name used in the CN
OPENSSL_CNF=/etc/pki/tls/openssl.cnf    # openssl config shipped with CentOS 7

openssl req -new -nodes -text -out $ROOT.pem -keyout $ROOT.key -subj "/CN=$ROOT.$HOST"

openssl x509 -req -in $ROOT.pem -text -days 3650 -extfile $OPENSSL_CNF -extensions v3_ca -signkey $ROOT.key -out $ROOT.crt

chmod 600 $ROOT.key
chmod 664 $ROOT.crt $ROOT.pem

Create two sets of keys and certificate requests, one each for pgbouncer and postgres. The certificate requests are then signed with the newly created root certificate:

# usage
# ./02.mkcert.sh <key name>
set -e

NAME=$1                  # e.g. "pgbouncer" or "server"
ROOT=root                # base name of the CA created by the previous script
HOST=$(hostname)
KEY=$NAME.key
REQ=$NAME.pem
CRT=$NAME.crt

SUBJ="/C=US/ST=Washington/L=Seattle/O=Percona/OU=Professional Services/CN=$HOST/emailAddress=robert.bernier@percona.com"

openssl genrsa -out $KEY 2048

openssl req -new -sha256 -key $KEY -out $REQ -subj "$SUBJ"

# sign the request with the root certificate,
# which was generated by script "mkcert_root.sh"
openssl x509 -req -in $REQ -text -days 365 -CA $ROOT.crt -CAkey $ROOT.key -CAcreateserial -out $CRT

chmod 600 $KEY
chmod 664 $REQ
chmod 664 $CRT

Validate the signed certificates:

set -e

# check: private key
for u in *.key; do
  echo -e "\n==== PRIVATE KEY: $u ====\n"
  openssl rsa -in $u -check
done

# check: certificate request
for u in *.pem; do
  echo -e "\n==== CERTIFICATE REQUEST: $u ====\n"
  openssl req -text -noout -verify -in $u
done

# check: signed certificate
for u in *.crt; do
  echo -e "\n==== SIGNED CERTIFICATE: $u ====\n"
  openssl x509 -text -noout -in $u
done
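One further check worth doing is confirming that each signed certificate actually chains back to the root CA; against the files above that would be `openssl verify -CAfile root.crt pgbouncer.crt` and `openssl verify -CAfile root.crt server.crt`. Here is a self-contained sketch with throwaway names (demo-ca, demo-leaf are placeholders, not the tutorial's files):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# throwaway CA (stands in for root.crt/root.key)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt -days 1 -subj "/CN=demo-ca"

# throwaway leaf key + request, signed by the CA
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr -subj "/CN=demo-leaf"
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out leaf.crt -days 1

# the chain check itself: prints "leaf.crt: OK" on success
openssl verify -CAfile ca.crt leaf.crt
```

If verification fails here, pgbouncer's verify-ca/verify-full modes and postgres's clientcert checks will fail too, so it is a cheap test to run before touching either server.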

Step 4: Install Certificates and Configure Servers For SSL Connectivity

Update ownership for keys and certificates:

set -e
chown pgbouncer:pgbouncer pgbouncer.*
chown postgres:postgres server.*

Move keys and certificates into their respective locations:

set -e
# pgbouncer
mv pgbouncer.* /etc/pgbouncer
cp root.crt /etc/pgbouncer

# postgres
mv server.* /var/lib/pgsql/13/data
cp root.crt /var/lib/pgsql/13/data

Update pgbouncer.ini:

echo "
;;; TLS settings for connecting to backend databases
;server_tls_sslmode = prefer | require | verify-ca | verify-full
server_tls_sslmode = require
server_tls_ca_file = /etc/pgbouncer/root.crt
server_tls_key_file = /etc/pgbouncer/pgbouncer.key
server_tls_cert_file = /etc/pgbouncer/pgbouncer.crt

;;; TLS settings for accepting client connections
;client_tls_sslmode = prefer | require | verify-ca | verify-full
client_tls_sslmode = require
client_tls_ca_file = /etc/pgbouncer/root.crt
client_tls_key_file = /etc/pgbouncer/pgbouncer.key
client_tls_cert_file = /etc/pgbouncer/pgbouncer.crt
" >> /etc/pgbouncer/pgbouncer.ini

Update postgresql.auto.conf: 

echo "
ssl = 'on'
ssl_ca_file = 'root.crt'
" >> /var/lib/pgsql/13/data/postgresql.auto.conf

# update runtime parameters by restarting the postgres server
systemctl restart postgresql-13

# restarting connection pooler
systemctl restart pgbouncer

And validate SSL connectivity:

# validate ssl connectivity, note the use of "sslmode"

# connect to pgbouncer
psql 'host=blog dbname=postgres user=postgres password=postgres port=6432 sslmode=require' \
    <<<"select 'hello world' as greetings"
/*
  greetings
-------------
 hello world
*/

# connect to postgres server
psql 'host=blog dbname=postgres user=usr1 password=usr1 port=5432 sslmode=require' \
    <<<"select datname, usename, ssl, client_addr
        from pg_stat_ssl
        join pg_stat_activity on pg_stat_ssl.pid = pg_stat_activity.pid
        where datname is not null and usename is not null
        order by 2;"
-- host name resolution is via IPv6
-- 1st row is a server connection from pgbouncer established by the previous query
-- 2nd row is connection generating the results of this query

 datname  | usename  | ssl |       client_addr
----------+----------+-----+--------------------------
 postgres | postgres | t   | ::1
 postgres | postgres | t   | fe80::216:3eff:fec4:7769

CAVEAT: A Few Words About Those Certificates

Using certificates signed by a Certificate Authority lets you go further than simply enabling SSL sessions. For example, although not covered here, you can dispense with passwords and instead rely on the certificate’s identity as the main authentication mechanism.

Remember: you can still conduct SSL sessions using self-signed certificates; you just can’t leverage the other cool validation methods in postgres.
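As a hedged sketch of that certificate-based approach (not part of this tutorial's steps): with ssl_ca_file already pointing at root.crt, a hostssl rule in pg_hba.conf can authenticate by client certificate instead of password, requiring the certificate's CN to match the connecting database user.

```
# pg_hba.conf: authenticate by client certificate instead of password;
# the "cert" method requires a certificate signed by ssl_ca_file (root.crt here)
# whose CN matches the connecting user name
hostssl all all 0.0.0.0/0 cert
```

The connecting client would then supply sslcert and sslkey parameters rather than a password.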

# #######################################################
# Only try an SSL connection. If a root CA file is present,
# verify the certificate in the same way as if verify-ca was specified
client_tls_sslmode = require
server_tls_sslmode = require
# Only try an SSL connection, and verify that the server certificate
# is issued by a trusted certificate authority (CA)
client_tls_sslmode = verify-ca
server_tls_sslmode = verify-ca
# Only try an SSL connection, verify that the server certificate
# is issued by a trusted CA and
# that the requested server host name
# matches that in the certificate
client_tls_sslmode = verify-full
server_tls_sslmode = verify-full

And finally, don’t forget to save the root certificate’s private key, root.key, in a safe place!


ProxySQL-Admin 2.x: Encryption of Credential Information


Starting with the release of proxysql-admin 2.0.15, the proxysql-admin 2.x series can now encrypt the credentials needed to access proxysql and cluster nodes. This only applies to the proxysql-admin configuration; it does not change the ProxySQL config, so those credentials remain unencrypted.

The credentials file is the unencrypted file containing the usernames, passwords, hostnames, and ports needed to connect to ProxySQL and PXC (Percona XtraDB Cluster).

The proxysql-login-file tool is used to encrypt the credentials file. This encrypted file is known as a login-file. This login-file can then be used by the proxysql-admin and proxysql-status scripts.

Note: This feature requires OpenSSL v1.1.1 and above (with the exception of Ubuntu 16.04). Please see the supported platforms topic below.

Configuration Precedence

  1. command-line options
  2. the encrypted login-file options (if the login-file is used)
  3. the unencrypted proxysql-admin configuration file values
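The lookup order above can be sketched as a tiny resolver (a hypothetical helper for illustration only, not part of the proxysql-admin scripts):

```shell
# Hypothetical sketch of the precedence proxysql-admin applies to each option:
# command line first, then the encrypted login-file, then the plain config file.
resolve_option() {
    cli=$1; login_file=$2; config=$3
    if [ -n "$cli" ]; then
        echo "$cli"            # 1. command-line option wins
    elif [ -n "$login_file" ]; then
        echo "$login_file"     # 2. then the login-file value
    else
        echo "$config"         # 3. fall back to the configuration file
    fi
}

# e.g. no command-line value, login-file says "monitor":
resolve_option "" "monitor" "default_user"   # -> monitor
```

In other words, a flag passed on the command line always beats a value decrypted from the login-file, which in turn beats the unencrypted configuration file.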

Example Usage

# create the credentials file
$ echo "monitor.user=monitor" > credentials.cnf
$ echo "monitor.password=password" >> credentials.cnf

# Choose a password
$ passwd="secret"

# Method (1) : Encrypt this data with --password
$ proxysql-login-file --in credentials.cnf --out login-file.cnf --password=${passwd}

# Method (2a) : Encrypt the data with --password-file
# Sending the password via the command-line is insecure,
# it's better to use --password-file so that the
# password doesn't show up in the command-line
$ proxysql-login-file --in credentials.cnf --out login-file.cnf \
--password-file=<(echo "${passwd}")

# Method (2b) : Running the command using sudo will not work with
# bash's process substitution. In this case, sending the
# password via stdin is another option.
$ echo "${passwd}" | sudo proxysql-login-file --in credentials.cnf --out login-file.cnf

# Method (3) : The script will prompt for the password
# if no password is provided via the command-line options.
$ proxysql-login-file --in credentials.cnf --out login-file.cnf
Enter the password:

# Remove the unencrypted credentials file
$ rm credentials.cnf

# Call the proxysql-admin script with the login-file
$ proxysql-admin --enable --login-file=login-file.cnf \
--login-password-file=<(echo "${passwd}")

This script will assist with configuring ProxySQL for use with
Percona XtraDB Cluster (currently only PXC in combination
with ProxySQL is supported)


# Call proxysql-status with the login-file
$ proxysql-status --login-file=login-file.cnf \
--login-password-file=<(echo "${passwd}")

............ DUMPING MAIN DATABASE ............
***** DUMPING global_variables *****
| variable_name                                                | variable_value              |
| mysql-default_charset                                        | utf8                        |

Credentials File Format

# --------------------------------
# This file is constructed as a set of "name=value" pairs.
# Notes:
# (1) Comment lines start with '#' and must be on separate lines
# (2) the name part
# - The only acceptable names are shown below in this example.
# Other values will be ignored.
# (3) The value part:
# - This does NOT use quotes, so any quote character will be part of the value
# - The entire line will be used (be careful with spaces)
# If a value is not specified here, then the default value from the
# configuration file will be used.
# --------------------------------

# --------------------------------
# proxysql admin interface credentials.
# --------------------------------

# --------------------------------
# PXC admin credentials for connecting to a PXC node.
# --------------------------------

# --------------------------------
# proxysql monitoring user. proxysql admin script will create
# this user in PXC to monitor a PXC node.
# --------------------------------

# --------------------------------
# Application user to connect to a PXC node through proxysql
# --------------------------------


Requirements and Supported Platforms

OpenSSL 1.1.1 (and higher) is an installation requirement, with the exception of Ubuntu 16.04 (xenial); see the comment below.

  • CentOS 7

The OpenSSL 1.1.1+ package must be installed. This can be installed with

yum install openssl11

This command will install OpenSSL 1.1 alongside the system installation and the script will use the openssl11 binary.

  • CentOS 8

The default version of OpenSSL is v1.1.1

  • Ubuntu 16.04 (xenial)

For Ubuntu xenial (16.04), installation of OpenSSL v1.1.1+ is not required; a purpose-built binary used for the encryption/decryption (proxysql-admin-openssl) is installed alongside the proxysql-admin scripts.

  • Ubuntu 18.04 (bionic)

The default version of OpenSSL is v1.1.1
