Sep
23
2022
--

Keep Your Data Safe with Percona

September has been an extremely fruitful month (especially for black-hat hackers) when it comes to news about data leaks and breaches:

  1. Uber suffers computer system breach, alerts authorities
  2. GTA 6 source code and videos leaked after Rockstar Games hack
  3. Revolut breach: personal and banking data exposed

In this blog post, we want to remind you how to keep your data safe when running your favorite open source databases.

Network exposure

Search engines like Shodan make it easy to find publicly reachable databases: over 3.6 million MySQL servers have been found exposed on the Internet.

The best practice here is to run database servers in an isolated private network, separated even from the rest of your corporate network. That way, the risk of exposure stays low even if a server is misconfigured.

If for some reason you have to run your database on a server in a public network, you can still avoid network exposure:

  • Bind your server to localhost or to the server's private IP address

For example, for MySQL, use the bind-address option in your my.cnf:

bind-address = 192.168.0.123

  • Configure the operating system firewall to block access through the public network interface (see the example below)
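
For example, on a Linux host that uses UFW, a minimal sketch could look like the following; the subnet is simply the private range used elsewhere in this post, so adjust it to your environment:

# allow MySQL (port 3306) only from the private subnet
sudo ufw allow from 192.168.0.0/16 to any port 3306 proto tcp
# reject MySQL traffic arriving from anywhere else, including the public interface
sudo ufw deny 3306/tcp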

Users and passwords

To complement the network exposure story, ensure that your users cannot connect from just any IP address. Taking MySQL as an example, the following GRANT command allows connections from a single private network only:

GRANT ALL ON db1.* TO 'perconaAdmin'@'192.168.0.0/255.255.0.0';

MySQL also has an auth_socket plugin that authenticates connections made through the local Unix socket against the operating system user. Read more in this blog post: Use MySQL Without a Password (and Still be Secure).
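
As a minimal sketch (the percona account name below is just an example), enabling socket authentication for a local account could look like this:

INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';
/* the MySQL account name must match the OS user connecting via the local socket */
CREATE USER 'percona'@'localhost' IDENTIFIED WITH auth_socket;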

Minimize the risk and do not use default usernames and passwords. SecLists is a good catalog of bad password choices: it includes lists for MySQL, PostgreSQL, and a miscellaneous list of common defaults. Percona Platform provides users with Advisors (read more below) that preemptively check for misconfigured grants, weak passwords, and more.

So now we agree that a strong password is a must. Did you know that you can enforce it? This Percona post on Improving MySQL Password Security with the Validation Plugin shows how such enforcement works.
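
As a rough illustration for MySQL 8.0, which ships this functionality as the validate_password component (older releases use the plugin described in that post), enforcement could look like this; the thresholds are arbitrary examples:

INSTALL COMPONENT 'file://component_validate_password';
SET GLOBAL validate_password.policy = 'STRONG';  /* LOW, MEDIUM, or STRONG */
SET GLOBAL validate_password.length = 14;        /* example minimum length */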

A strong password is set, great! To make your system even more resilient to security risks, it is recommended to have a password rotation policy. The policy can be executed manually, but it can also be automated through various integrations, such as LDAP, KMIP, HashiCorp Vault, and many more. For example, this document describes how Percona Server for MongoDB can work with LDAP.
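
MySQL can also enforce rotation on the server side. A minimal sketch, reusing the account from the GRANT example above and an arbitrary 90-day interval:

/* expire this account's password every 90 days */
ALTER USER 'perconaAdmin'@'192.168.0.0/255.255.0.0' PASSWORD EXPIRE INTERVAL 90 DAY;
/* or set a server-wide default password lifetime */
SET GLOBAL default_password_lifetime = 90;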

Encryption

There are two types of encryption to consider for databases, and ideally you will use both of them:

  1. Transport encryption – secures the traffic between clients and servers and between cluster nodes
  2. Data-at-rest encryption (or Transparent Data Encryption – TDE) – encrypts the data on disk to prevent unauthorized access

Transport

With an unencrypted connection between the client and the server, anyone with access to the network could watch all your traffic and steal credentials and sensitive data. We recommend enabling network encryption by default; earlier Percona blog posts cover the configuration details.
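
As a minimal sketch for MySQL 8.0 (reusing the account from the GRANT example above), you can require TLS for all connections or only for specific accounts:

SET PERSIST require_secure_transport = ON;  /* reject unencrypted client connections */
ALTER USER 'perconaAdmin'@'192.168.0.0/255.255.0.0' REQUIRE SSL;  /* or enforce TLS per account */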

Data-at-rest

Someone who gains access to a physical disk or to network block storage can read the data directly. To mitigate this risk, encrypt the data on disk. This can be done at the file system level, at the block storage level, or in the database storage engine itself.

Tools like fscrypt or the built-in encryption in ZFS can help at the file system level. Public clouds provide built-in encryption for their network storage offerings (for example, AWS EBS and GCP Persistent Disk). Private storage solutions, such as Ceph, also support data-at-rest encryption at the block level.

Percona takes security seriously, which is why we recommend enabling data-at-rest encryption by default, especially for production workloads. Percona Server for MySQL and Percona Server for MongoDB provide a wide variety of options to perform TDE at the database level.
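
As a brief sketch for Percona Server for MySQL 8.0 (the file-based keyring shown here is fine for testing, while production deployments should use a KMIP server or HashiCorp Vault, and the path is just an example), you load a keyring plugin early in my.cnf:

[mysqld]
early-plugin-load = keyring_file.so
keyring_file_data = /var/lib/mysql-keyring/keyring

and then mark tables as encrypted:

CREATE TABLE secrets (id INT PRIMARY KEY, payload BLOB) ENCRYPTION='Y';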

Preventive measures

Mistakes and misconfigurations happen, and it would be great to have a mechanism that alerts you about issues before it is too late. Guess what – we have one!

Percona Monitoring and Management (PMM) comes with Advisors, which are checks that identify potential security threats, vulnerabilities, data loss or data corruption, and other issues. Advisors are the software representation of years of Percona's expertise in database security and performance.

By connecting PMM to Percona Platform, users get more sophisticated Advisors for free, while our paid customers get even deeper database checks that uncover misconfigurations and compliance gaps.

Learn more about Percona Platform with PMM on our website and check right away whether your databases are secure and well tuned.

If you still believe you need more help, please let us know through our Community Forums or contact the Percona team directly.

Jun
30
2022
--

Digital Signatures: Another Layer of Data Protection in Percona Server for MySQL

Imagine you need to design an online system for storing documents on a per-user basis where nobody, including database administrators, would be able to change the content of those documents without being noticed by the document owners.

In Percona Server for MySQL 8.0.28-20, we added a new component called Encryption UDFs – an open source alternative to MySQL Enterprise Encryption that gives users access to a number of low-level OpenSSL encryption primitives directly from MySQL. This includes calculating digests (with a great variety of hash functions), asymmetric key generation (RSA, DSA), asymmetric encryption/decryption for RSA, calculating/verifying digital signatures (RSA, DSA), and primitives for working with the Diffie-Hellman (DH) key exchange algorithm.

Prerequisites

In contrast to MySQL Enterprise Encryption, to make the functions available, users of Percona Server for MySQL do not have to register them manually with

CREATE FUNCTION … SONAME …

for each individual function. All they have to do is invoke

INSTALL COMPONENT 'file://component_encryption_udf'
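
Once the component is installed (a one-time operation that persists across server restarts), you can confirm that it is registered and run a quick sanity check; the digest call below simply exercises the create_digest() function that we will use later in this post:

SELECT * FROM mysql.component;
SELECT HEX(create_digest('SHA256', 'hello world'));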

Schema definition

Now, let us define a simple data schema in which we will store all the info required for our online document storage service.

It will have two tables: user and document.

CREATE TABLE user(
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  login VARCHAR(128) NOT NULL,
  key_type VARCHAR(16) NOT NULL,
  public_key_pem TEXT NOT NULL,
  PRIMARY KEY(id),
  UNIQUE(login)
);

Here,
id – unique user identifier (numerical, for internal usage).
login – unique user identifier (lexical, for public usage).
key_type – type of the asymmetric key generated by the user (currently, either RSA or DSA).
public_key_pem – public component of the asymmetric key generated by the user, in PEM format (starting with the "-----BEGIN PUBLIC KEY-----" header).

CREATE TABLE document(
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_ref INT UNSIGNED NOT NULL,
  name VARCHAR(128) NOT NULL,
  content BLOB NOT NULL,
  digest_type VARCHAR(16) NOT NULL,
  signature VARBINARY(2048) NOT NULL,
  PRIMARY KEY(id),
  FOREIGN KEY (user_ref) REFERENCES user(id),
  UNIQUE (user_ref, name)
);

Here,
id – unique document identifier (numerical, internal).
user_ref – the ID of the user who owns the document.
name – a name under which the document is stored for the user_ref user.
content – binary object that holds document content.
digest_type – the name of the hash function used to calculate the digital signature on content.
signature – digital signature calculated by signing the digest value (calculated with digest_type hash function on content) with a private key that belongs to user_ref.

User registration

So, the first step in registering a new user is to generate an asymmetric key pair. Although in this example we will be using the RSA algorithm, our system is flexible enough to allow DSA keys as well.

Important notice: for our use case, private keys must always be generated on the client machine and never leave local secure storage. Only public keys are allowed to be transferred via the network.

To generate an RSA private key, the user can run the standard openssl utility.

openssl genrsa -out private_key.pem 4096

The content of the generated 4096-bit RSA private key in PEM format will be written to the private_key.pem file.
Next, we need to extract the public component from the generated key.

openssl rsa -in private_key.pem -pubout -out public_key.pem

The extracted 4096-bit RSA public key in PEM format will be written to the public_key.pem file. Its content can then be passed to the server, for example via a session variable:

SET @public_key = '<public_key_pem_content>';

Here, <public_key_pem_content> is the content of the public_key.pem file (a public key in PEM format with the "-----BEGIN PUBLIC KEY-----" header).

Purely for the simplicity of this blog post (once again, never use this approach in a real production system), RSA/DSA keys can also be generated by Percona Server itself:

SET @algorithm = 'RSA';
SET @private_key = create_asymmetric_priv_key(@algorithm, 4096); /* never do this in production */
SET @public_key = create_asymmetric_pub_key(@algorithm, @private_key); /* never do this in production */

Now that we have a public key, user registration is straightforward.

INSERT INTO user VALUES(DEFAULT, 'alice', 'RSA', @public_key);

Again, in production, @public_key must be set to the content of the public_key.pem file generated locally (with the openssl utility, for instance) rather than generated on the server.

Uploading documents

When a user wants to upload a new file to our online document storage, the first step would be to calculate its digital signature.

For instance, if we are going to upload a local file called secure_data.doc and we want to use, say, SHA256 as the hash function to calculate the digest before signing with the previously generated 4096-bit RSA private key, we execute the following:

openssl dgst -sha256 -sign private_key.pem -out secure_data.binsig secure_data.doc

The signature in binary format will be written to the secure_data.binsig file.

In order to simplify copying the content of this file into SQL statements, let us also convert the signature to HEX format. We will use the xxd utility for this (note that on some Linux distributions this utility is part of the vim-common package).

xxd -p -u -c0 secure_data.binsig secure_data.hexsig

The signature in HEX format will be written to the secure_data.hexsig file. After that, the user calls the upload_document() stored procedure.

CALL upload_document('alice', 'secure_data.doc', <file_content>,
    'SHA256',  UNHEX(<file_signature_hex>), @upload_status);

Here,
alice – name of the document owner.
secure_data.doc – a name under which the document will be stored.
<file_content> – the content of the local secure_data.doc passed as binary data.
SHA256 – the name of the hash function used to calculate the file digest.
<file_signature_hex> – the file signature in HEX format (the content of the secure_data.hexsig file).
On the server, the upload_document() stored routine should do the following.
First, it needs to look up user_id, user_key_type, and user_public_key_pem in the user table for the provided owner's login (alice).

SELECT id, key_type, public_key_pem
    INTO user_id, user_key_type, user_public_key_pem
    FROM user
    WHERE login = user_login;

Second, it needs to calculate the message digest using the provided hash function name (SHA256) over the provided file data (file_content).

SET digest = create_digest(digest_type, file_content);

Then, the server code needs to verify the file signature provided by the user (file_signature) with the public key associated with the file owner (alice).

SET verification_result = asymmetric_verify(
    user_key_type, digest, file_signature,
    user_public_key_pem, digest_type);

After that, only if verification_result is equal to 1 do we confirm the identity of the document owner and insert a new record into the document table.

INSERT INTO document VALUES(DEFAULT, user_id, 'secure_data.doc',
    file_content, 'SHA256', file_signature);

Here is what the complete upload_document() procedure might look like:

CREATE PROCEDURE upload_document(
  user_login VARCHAR(128), file_name VARCHAR(128),
  file_content BLOB, digest_type VARCHAR(16),
  file_signature VARBINARY(2048), OUT status INT)
l_return:
BEGIN
  DECLARE success INT DEFAULT 0;
  DECLARE error_login_not_found INT DEFAULT 1;
  DECLARE error_verification_failed INT DEFAULT 2;
  
  DECLARE user_id INT UNSIGNED DEFAULT 0;
  DECLARE user_key_type VARCHAR(16) DEFAULT NULL;
  DECLARE user_public_key_pem TEXT DEFAULT NULL;
  DECLARE verification_result INT DEFAULT 0;

  DECLARE digest VARBINARY(64) DEFAULT NULL;
  SELECT id, key_type, public_key_pem
    INTO user_id, user_key_type, user_public_key_pem
    FROM user
    WHERE login = user_login;

  IF user_id = 0 THEN
    SET status = error_login_not_found;
    LEAVE l_return;
  END IF;

  SET digest = create_digest(digest_type, file_content);
  SET verification_result = asymmetric_verify(
    user_key_type, digest, file_signature,
    user_public_key_pem, digest_type);
 
  IF verification_result = 0 THEN
    SET status = error_verification_failed;
    LEAVE l_return;
  END IF;

  INSERT INTO document VALUES(
    DEFAULT, user_id, file_name, file_content,
    digest_type, file_signature);

  SET status = success;
END

Downloading documents and verifying their integrity

In order to download a file from our online document storage, the first step will be getting its content along with digital signature metadata.

CALL download_document('alice', 'secure_data.doc', @downloaded_content,
    @downloaded_digest_type, @downloaded_signature, @download_status);

Here,
alice – name of the document owner.
secure_data.doc – the name of the file we want to download.
@downloaded_content – the content of the downloaded file will be put in this variable.
@downloaded_digest_type – the name of the hash function used to calculate file digest for this file will be put in this variable.
@downloaded_signature – the digital signature of the downloaded file will be put in this variable.
On the server, the download_document() stored routine should do the following.
First, it needs to look up user_id, user_key_type, and user_public_key_pem in the user table for the provided owner's login (alice).

SELECT id, key_type, public_key_pem
    INTO user_id, user_key_type, user_public_key_pem
    FROM user
    WHERE login = user_login;

Second, it needs to get the file content and digital signature metadata.

SELECT id, content, digest_type, signature
  INTO file_id, file_content, file_digest_type, file_signature
  FROM document
  WHERE user_ref = user_id AND name = file_name;

After that, we calculate the digest of the file_content using the file_digest_type hash function.

SET digest = create_digest(file_digest_type, file_content);

And finally, we verify the integrity of the document:

SET verification_result = asymmetric_verify(
  user_key_type, digest, file_signature,
  user_public_key_pem, file_digest_type);

Only if verification_result is equal to 1 do we confirm the integrity of the document and return a successful status.
Here is what the complete download_document() procedure might look like:

CREATE PROCEDURE download_document(
  user_login VARCHAR(128), file_name VARCHAR(128),
  OUT file_content BLOB, OUT file_digest_type VARCHAR(16),
  OUT file_signature VARBINARY(2048), OUT status INT)
l_return:
BEGIN
  DECLARE success INT DEFAULT 0;
  DECLARE error_login_not_found INT DEFAULT 1;
  DECLARE error_file_not_found INT DEFAULT 2;
  DECLARE error_verification_failed INT DEFAULT 3;

  DECLARE user_id INT UNSIGNED DEFAULT 0;
  DECLARE user_key_type VARCHAR(16) DEFAULT NULL;
  DECLARE user_public_key_pem TEXT DEFAULT NULL;
  DECLARE verification_result INT DEFAULT 0;

  DECLARE file_id INT UNSIGNED DEFAULT 0;

  DECLARE digest VARBINARY(64) DEFAULT NULL;

  SELECT id, key_type, public_key_pem
    INTO user_id, user_key_type, user_public_key_pem
    FROM user
    WHERE login = user_login;

  IF user_id = 0 THEN
    SET status = error_login_not_found;
    LEAVE l_return;
  END IF;

  SELECT id, content, digest_type, signature
    INTO file_id, file_content, file_digest_type, file_signature
    FROM document
    WHERE user_ref = user_id AND name = file_name;

  IF file_id = 0 THEN
    SET status = error_file_not_found;
    LEAVE l_return;
  END IF;

  SET digest = create_digest(file_digest_type, file_content);
  SET verification_result = asymmetric_verify(
    user_key_type, digest, file_signature, user_public_key_pem, file_digest_type);

  IF verification_result = 0 THEN
    SET status = error_verification_failed;
    LEAVE l_return;
  END IF;

  SET status = success;
END

Although we included digital signature verification code in the download_document() routine, it does not guarantee that the end user (the caller of download_document()) will receive an unmodified document. This code was added only as an additional step to detect integrity violations earlier. The real digital signature verification must be performed on the client side, not inside Percona Server.

Basically, after calling download_document(), we need to save the content of the @downloaded_content output variable to a local file (say, downloaded.doc). In addition, the content of the @downloaded_signature in HEX format (HEX(@downloaded_signature)) must be saved into a local file as well (say, downloaded.hexsig).
After that, we can convert the signature in HEX format into binary form.

xxd -r -p -c0 downloaded.hexsig downloaded.binsig

A digital signature in binary form will be written to the downloaded.binsig file. Now, all we have to do is verify the digital signature:

openssl dgst -sha256 -verify public_key.pem -signature downloaded.binsig downloaded.doc

Only if we see the desired Verified OK status line can we be sure that the document we just downloaded has not been modified.

Conclusion

To begin with, I would like to highlight that a real production-ready online document storage system is far more complicated than the one we just described, if only because it would not use the OpenSSL command-line utility to perform cryptographic operations on the client side. Moreover, it would take into consideration a number of other security aspects that are outside the scope of this blog post.

Nevertheless, I still hope that the digital signature example shown here has helped convince you that asymmetric cryptography is not rocket science, and that with the help of the Encryption UDFs component for Percona Server for MySQL it can indeed be easy and straightforward. Check out the full documentation.

Sep
17
2021
--

Ketch raises another $20M as demand grows for its privacy data control platform

Six months after securing a $23 million Series A round, Ketch, a startup providing online privacy regulation and data compliance, brought in an additional $20 million in A1 funding, this time led by Acrew Capital.

Returning with Acrew for the second round are CRV, super{set} (the startup studio founded by Ketch’s co-founders CEO Tom Chavez and CTO Vivek Vaidya), Ridge Ventures and Silicon Valley Bank. The new investment gives Ketch a total of $43 million raised since the company came out of stealth earlier this year.

In 2020, Ketch introduced its data control platform for programmatic privacy, governance and security. The platform automates data control and consent management so that consumers’ privacy preferences are honored and implemented.

Enterprises are looking for a way to meet consumer needs and accommodate their rights and consents. At the same time, companies want data to fuel their growth and gain the trust of consumers, Chavez told TechCrunch.

There is also a matter of security, with much effort going into ransomware and malware, but Chavez feels a big opportunity is to bring security to the data wherever it lies. Once the infrastructure is in place for data control it needs to be at the level of individual cells and rows, he said.

“If someone wants to be deleted, there is a challenge in finding your specific row of data,” he added. “That is an exercise in data control.”

Ketch’s customer base grew by more than 300% since its March Series A announcement, and the new funding will go toward expanding its sales and go-to-market teams, Chavez said.

This year, the company launched Ketch OTC, a free-to-use privacy tool that streamlines all aspects of privacy so that enterprise compliance programs build trust and reduce friction. Customer growth through OTC increased five times in six months. More recently, Qonsent, which is developing a consent user experience, is using Ketch's APIs and infrastructure, Chavez said.

When looking for strategic partners, Chavez and Vaidya wanted to have people around the table who have a deep context on what they were doing and could provide advice as they built out their products. They found that in Acrew founding partner Theresia Gouw, whom Chavez referred to as “the OG of privacy and security.”

Gouw has been investing in security and privacy for over 20 years and says Ketch is flipping the data privacy and security model on its head by putting it in the hands of developers. When she saw more people working from home and more data breaches, she saw an opportunity to increase and double down on Acrew’s initial investment.

She explained that Ketch is differentiating itself from competitors by taking data privacy and security and tying it to the data itself to empower software developers. With the OTC tool, similar to putting locks and cameras on a home, developers can download the API and attach rules to all of a user’s data.

“The magic of Ketch is that you can take the security and governance rules and embed them with the software and the piece of data,” Gouw added.

Aug
25
2021
--

Cribl raises $200M to help enterprises do more with their data

At a time when remote work, cybersecurity attacks and increased privacy and compliance requirements threaten a company’s data, more companies are collecting and storing their observability data, but are being locked in with vendors or have difficulty accessing the data.

Enter Cribl. The San Francisco-based company is developing an “open ecosystem of data” for enterprises that utilizes unified data pipelines, called “observability pipelines,” to parse and route any type of data that flows through a corporate IT system. Users can then choose their own analytics tools and storage destinations like Splunk, Datadog and Exabeam, but without becoming dependent on a vendor.

The company announced Wednesday a $200 million round of Series C funding to value Cribl at $1.5 billion, according to a source close to the company. Greylock and Redpoint Ventures co-led the round and were joined by new investor IVP, existing investors Sequoia and CRV and strategic investment from Citi Ventures and CrowdStrike. The new capital infusion gives Cribl a total of $254 million in funding since the company was started in 2017, Cribl co-founder and CEO Clint Sharp told TechCrunch.

Sharp did not discuss the valuation; however, he believes that the round is “validation that the observability pipeline category is legit.” Data is growing at a compound annual growth rate of 25%, and organizations are collecting five times more data today than they did 10 years ago, he explained.

“Ultimately, they want to ask and answer questions, especially for IT and security people,” Sharp added. “When Zoom sends data on who started a phone call, that might be data I need to know so I know who is on the call from a security perspective and who they are communicating with. Also, who is sending files to whom and what machines are communicating together in case there is a malicious actor. We can also find out who is having a bad experience with the system and what resources they can access to try and troubleshoot the problem.”

Cribl also enables users to choose how they want to store their data, which is different from competitors that often lock companies into using only their products. Instead, customers can buy the best products from different categories and they will all talk to each other through Cribl, Sharp said.

Though Cribl is developing a pipeline for data, Sharp sees it more as an “observability lake,” as more companies have differing data storage needs. He explains that the lake is where all of the data will go that doesn’t need to go into an existing storage solution. The pipelines will send the data to specific tools and then collect the data, and what doesn’t fit will go back into the lake so companies have it to go back to later. Companies can keep the data for longer and more cost effectively.

Cribl said it is seven times more efficient at processing event data and boasts a customer list that includes Whole Foods, Vodafone, FINRA, Fannie Mae and Cox Automotive.

Sharp went after additional funding after seeing huge traction in its existing customer base, saying that “when you see that kind of traction, you want to keep doubling down.” His aim is to have a presence in every North American city and in Europe, to continue launching new products and growing the engineering team.

Up next, the company is focusing on go-to-market and engineering growth. Its headcount is 150 currently, and Sharp expects to grow that to 250 by the end of the year.

Over the last fiscal year, Cribl grew its revenue 293%, and Sharp expects that same trajectory for this year. The company is now at a growth stage, and with the new investment, he believes Cribl is the “future leader in observability.”

“This is a great investment for us, and every dollar, we believe, is going to create an outsized return as we are the only commercial company in this space,” he added.

Scott Raney, managing director at Redpoint Ventures, said his firm is a big enterprise investor in software, particularly in companies that help organizations leverage data to protect themselves, a sweet spot that Cribl falls into.

He feels Sharp is leading a team, having come from Splunk, that has accomplished a lot, has a vision and a handle on the business and knows the market well. Where Splunk is capturing the machine data and using its systems to extract the data, Cribl is doing something similar in directing the data where it needs to go, while also enabling companies to utilize multiple vendors and build apps to sit on top of its infrastructure.

“Cribl is adding opportunity by enriching the data flowing through, and the benefits are going to be meaningful in cost reduction,” Raney said. “The attitude out there is to put data in cheaper places, and afford more flexibility to extract data. Step one is to make that transition, and step two is how to drive the data sitting there. Cribl is doing something that will go from being a big business to a legacy company 30 years from now.”

Jul
07
2021
--

Opaque raises $9.5M seed to secure sensitive data in the cloud

Opaque, a new startup born out of Berkeley’s RISELab, announced a $9.5 million seed round today to build a solution to access and work with sensitive data in the cloud in a secure way, even with multiple organizations involved. Intel Capital led today’s investment with participation by Race Capital, The House Fund and FactoryHQ.

The company helps customers work with secure data in the cloud while making sure the data they are working on is not being exposed to cloud providers, other research participants or anyone else, says company president Raluca Ada Popa.

“What we do is we use this very exciting hardware mechanism called Enclave, which [operates] deep down in the processor — it’s a physical black box — and only gets decrypted there. […] So even if somebody has administrative privileges in the cloud, they can only see encrypted data,” she explained.

Company co-founder Ion Stoica, who was a co-founder at Databricks, says the startup’s solution helps resolve two conflicting trends. On one hand, businesses increasingly want to make use of data, but at the same time are seeing a growing trend toward privacy. Opaque is designed to resolve this by giving customers access to their data in a safe and fully encrypted way.

The company describes the solution as “a novel combination of two key technologies layered on top of state-of-the-art cloud security—secure hardware enclaves and cryptographic fortification.” This enables customers to work with data — for example to build machine learning models — without exposing the data to others, yet while generating meaningful results.

Popa says this could be helpful for hospitals working together on cancer research, who want to find better treatment options without exposing a given hospital’s patient data to other hospitals, or banks looking for money laundering without exposing customer data to other banks, as a couple of examples.

Investors were likely attracted to the pedigree of Popa, a computer security and applied crypto professor at UC Berkeley and Stoica, who is also a Berkeley professor and co-founded Databricks. Both helped found RISELabs at Berkeley where they developed the solution and spun it out as a company.

Mark Rostick, vice president and senior managing director at lead investor Intel Capital says his firm has been working with the founders since the startup’s earliest days, recognizing the potential of this solution to help companies find complex solutions even when there are multiple organizations involved sharing sensitive data.

“Enterprises struggle to find value in data across silos due to confidentiality and other concerns. Confidential computing unlocks the full potential of data by allowing organizations to extract insights from sensitive data while also seamlessly moving data to the cloud without compromising security or privacy,” Rostick said in a statement.

He added, “Opaque bridges the gap between data security and cloud scale and economics, thus enabling inter-organizational and intra-organizational collaboration.”

 

Jun
25
2021
--

Edge Delta raises $15M Series A to take on Splunk

Seattle-based Edge Delta, a startup that is building a modern distributed monitoring stack that is competing directly with industry heavyweights like Splunk, New Relic and Datadog, today announced that it has raised a $15 million Series A funding round led by Menlo Ventures and Tim Tully, the former CTO of Splunk. Previous investors MaC Venture Capital and Amity Ventures also participated in this round, which brings the company’s total funding to date to $18 million.

“Our thesis is that there’s no way that enterprises today can continue to analyze all their data in real time,” said Edge Delta co-founder and CEO Ozan Unlu, who has worked in the observability space for about 15 years already (including at Microsoft and Sumo Logic). “The way that it was traditionally done with these primitive, centralized models — there’s just too much data. It worked 10 years ago, but gigabytes turned into terabytes and now terabytes are turning into petabytes. That whole model is breaking down.”

He acknowledges that traditional big data warehousing works quite well for business intelligence and analytics use cases. But that’s not real-time and also involves moving a lot of data from where it’s generated to a centralized warehouse. The promise of Edge Delta is that it can offer all of the capabilities of this centralized model by allowing enterprises to start to analyze their logs, metrics, traces and other telemetry right at the source. This, in turn, also allows them to get visibility into all of the data that’s generated there, instead of many of today’s systems, which only provide insights into a small slice of this information.

While competing services tend to have agents that run on a customer's machine but typically only compress the data, encrypt it and then send it on to its final destination, Edge Delta's agent starts analyzing the data right at the local level. With that, if you want to, for example, graph error rates from your Kubernetes cluster, you wouldn't have to gather all of this data and send it off to your data warehouse where it has to be indexed before it can be analyzed and graphed.

With Edge Delta, you could instead have every single node draw its own graph, which Edge Delta can then combine later on. With this, Edge Delta argues, its agent is able to offer significant performance benefits, often by orders of magnitude. This allows businesses to run their machine learning models at the edge as well.

“What I saw before I was leaving Splunk was that people were sort of being choosy about where they put workloads for a variety of reasons, including cost control,” said Menlo Ventures’ Tim Tully, who joined the firm only a couple of months ago. “So this idea that you can move some of the compute down to the edge and lower latency and do machine learning at the edge in a distributed way was incredibly fascinating to me.”

Edge Delta is able to offer a significantly cheaper service, in large part because it doesn’t have to run a lot of compute and manage huge storage pools itself since a lot of that is handled at the edge. And while the customers obviously still incur some overhead to provision this compute power, it’s still significantly less than what they would be paying for a comparable service. The company argues that it typically sees about a 90 percent improvement in total cost of ownership compared to traditional centralized services.

Edge Delta charges based on volume, and it is not shy about comparing its prices with Splunk's, doing so right on its pricing calculator. Indeed, in talking to Tully and Unlu, Splunk was clearly on everybody's mind.

“There’s kind of this concept of unbundling of Splunk,” Unlu said. “You have Snowflake and the data warehouse solutions coming in from one side, and they’re saying, ‘hey, if you don’t care about real time, go use us.’ And then we’re the other half of the equation, which is: actually there’s a lot of real-time operational use cases and this model is actually better for those massive stream processing datasets that you required to analyze in real time.”

But despite this competition, Edge Delta can still integrate with Splunk and similar services. Users can still take their data, ingest it through Edge Delta and then pass it on to the likes of Sumo Logic, Splunk, AWS’s S3 and other solutions.

“If you follow the trajectory of Splunk, we had this whole idea of building this business around IoT and Splunk at the Edge — and we never really quite got there,” Tully said. “I think what we’re winding up seeing collectively is the edge actually means something a little bit different. […] The advances in distributed computing and sophistication of hardware at the edge allows these types of problems to be solved at a lower cost and lower latency.”

The Edge Delta team plans to use the new funding to expand its team and support all of the new customers that have shown interest in the product. For that, it is building out its go-to-market and marketing teams, as well as its customer success and support teams.

 

Feb
11
2021
--

Base Operations raises $2.2 million to modernize physical enterprise security

Typically when we talk about tech and security, the mind naturally jumps to cybersecurity. But equally important, especially for global companies with large, multinational organizations, is physical security — a key function at most medium-to-large enterprises, and yet one that to date, hasn’t really done much to take advantage of recent advances in technology. Enter Base Operations, a startup founded by risk management professional Cory Siskind in 2018. Base Operations just closed their $2.2 million seed funding round and will use the money to capitalize on its recent launch of a street-level threat mapping platform for use in supporting enterprise security operations.

The funding, led by Good Growth Capital and including investors like Magma Partners, First In Capital, Gaingels and First Round Capital founder Howard Morgan, will be used primarily for hiring, as Base Operations looks to continue its team growth after doubling its employee base this past month. It'll also be put to use extending and improving the company's product and growing the startup's global footprint. I talked to Siskind about her company's plans on the heels of this round, as well as the wider opportunity and how her company is serving the market in a novel way.

“What we do at Base Operations is help companies keep their people and operations secure with ‘Micro Intelligence,’ which is street-level threat assessments that facilitate a variety of routine security tasks in the travel security, real estate and supply chain security buckets,” Siskind explained. “Anything that the chief security officer would be in charge of, but not cyber — so anything that intersects with the physical world.”

Siskind has firsthand experience about the complexity and challenges that enter into enterprise security since she began her career working for global strategic risk consultancy firm Control Risks in Mexico City. Because of her time in the industry, she’s keenly aware of just how far physical and political security operations lag behind their cybersecurity counterparts. It’s an often overlooked aspect of corporate risk management, particularly since in the past it’s been something that most employees at North American companies only ever encounter periodically when their roles involve frequent travel. The events of the past couple of years have changed that, however.

“This was the last bastion of a company that hadn’t been optimized by a SaaS platform, basically, so there was some resistance and some allegiance to legacy players,” Siskind told me. “However, the events of 2020 sort of turned everything on its head, and companies realized that the security department, and what happens in the physical world, is not just about compliance — it’s actually a strategic advantage to invest in those sort of services, because it helps you maintain business continuity.”

The COVID-19 pandemic, increased frequency and severity of natural disasters, and global political unrest all had significant impact on businesses worldwide in 2020, and Siskind says that this has proven a watershed moment in how enterprises consider physical security in their overall risk profile and strategic planning cycles.

“[Companies] have just realized that if you don’t invest [in] how to keep your operations running smoothly in the face of rising catastrophic events, you’re never going to achieve the profits that you need, because it’s too choppy, and you have all sorts of problems,” she said.

Base Operations addresses this problem by taking available data from a range of sources and pulling it together to inform threat profiles. Their technology is all about making sense of the myriad stream of information we encounter daily — taking the wash of news that we sometimes associate with “doom-scrolling” on social media, for instance, and combining it with other sources using machine learning to extrapolate actionable insights.

Those sources of information include “government statistics, social media, local news, data from partnerships, like NGOs and universities,” Siskind said. That data set powers their Micro Intelligence platform, and while the startup’s focus today is on helping enterprises keep people safe, while maintaining their operations, you can easily see how the same information could power everything from planning future geographical expansion, to tailoring product development to address specific markets.

Siskind saw there was a need for this kind of approach to an aspect of business that's essential, but that has been relatively slow to adopt new technologies. From her vantage point two years ago, however, she couldn't have anticipated just how urgent the need for better, more scalable enterprise security solutions would become, and Base Operations now seems perfectly positioned to help with that need.

Dec
14
2020
--

5 questions every IT team should be able to answer

Now more than ever, IT teams play a vital role in keeping their businesses running smoothly and securely. With all of the assets and data that are now broadly distributed, a CEO depends on their IT team to ensure employees remain connected and productive and that sensitive data remains protected.

CEOs often visualize and measure things in terms of dollars and cents, and in the face of continuing uncertainty, IT — along with most other parts of the business — is facing intense scrutiny and tightening of budgets. So, it is more important than ever for IT teams to be able to demonstrate that they've made sound technology investments and have the agility needed to operate successfully in the face of continued uncertainty.

For a CEO to properly understand risk exposure and make the right investments, IT departments have to be able to confidently communicate what types of data are on any given device at any given time.

Here are five questions that IT teams should be ready to answer when their CEO comes calling:

What have we spent our money on?

Or, more specifically, exactly how many assets do we have? And, do we know where they are? While these seem like basic questions, they can be shockingly difficult to answer … much more difficult than people realize. The last several months in the wake of the COVID-19 outbreak have been the proof point.

With the mass exodus of machines leaving the building and disconnecting from the corporate network, many IT leaders found themselves guessing just how many devices had been released into the wild and gone home with employees.

One CIO we spoke to estimated they had “somewhere between 30,000 and 50,000 devices” that went home with employees, meaning there could have been up to 20,000 that were completely unaccounted for. The complexity was further compounded as old devices were pulled out of desk drawers and storage closets to get something into the hands of employees who were not equipped to work remotely. Companies had endpoints connecting to the corporate network and systems that they hadn't seen for years — meaning they were out of date from a security perspective as well.

This level of uncertainty is obviously unsustainable and introduces a tremendous amount of security risk. Every endpoint that goes unaccounted for not only means wasted spend but also increased vulnerability, greater potential for breach or compliance violation, and more. In order to mitigate these risks, there needs to be a permanent connection to every device that can tell you exactly how many assets you have deployed at any given time — whether they are in the building or out in the wild.

Are our devices and data protected?

Device and data security go hand in hand; without the ability to see every device that is deployed across an organization, it becomes next to impossible to know what data is living on those devices. When employees know they are leaving the building and going to be off network, they tend to engage in “data hoarding.”

Oct
28
2020
--

Enso Security raises $6M for its application security posture management platform

Enso Security, a Tel Aviv-based startup that is building a new application security posture management platform, today announced that it has raised a $6 million seed funding round led by YL Ventures, with participation from Jump Capital. Angel investors in this round include HackerOne co-founder and CTO Alex Rice; Sounil Yu, the former chief security scientist at Bank of America; Omkhar Arasaratnam, the former head of Data Protection Technology at JPMorgan Chase; and toDay Ventures.

The company was founded by Roy Erlich (CEO), Chen Gour Arie (CPO) and Barak Tawily (CTO). As is so often the case with Israeli security startups, the founding team includes former members of the Israeli Intelligence Corps, but also a lot of hands-on commercial experience. Erlich, for example, was previously the head of application security at Wix, while Gour Arie worked as an application security consultant for numerous companies across Europe and Tawily has a background in pentesting and led a security team at Wix, too.

“It’s no secret that, today, the diversity of R&D allows [companies] to rapidly introduce new applications and push changes to existing ones,” Erlich explained. “But this great complexity for application security teams results in significant AppSec management challenges. These challenges include the difficulty of tracking applications across environments, measuring risks, prioritizing tasks and enforcing uniform Application Security strategies across all applications.”

But as companies push out code faster than ever, application security teams aren't able to keep up — and may not even know about every application being developed internally. The team argues that application security today is often a manual effort to identify owners and measure risk, for example — and the resources for application security teams are often limited, especially when compared to the size of the overall development team in most companies. Indeed, the Enso team argues that most AppSec teams today spend most of their time creating relationships with developers and performing operational and product-related tasks — and not on application security.

“It’s a losing fight from the application security side because you have no chance to cover everything,” Erlich noted. “Having said that, […] it’s all about managing the risk. You need to make sure that you take data-driven decisions and that you have all the data that you need in one place.”

Enso Security then wants to give these teams a platform that gives them a single pane of glass to discover applications, identify owners, detect changes and capture their security posture. From there, teams can then prioritize and track their tasks and get real-time feedback on what is happening across their tools. The company’s tools currently pull in data from a wide variety of tools, including the likes of JIRA, Jenkins, GitLab, GitHub, Splunk, ServiceNow and the Envoy edge and service proxy. But as the team argues, even getting data from just a few sources already provides benefits for Enso’s users.

Looking ahead, the team plans to continue improving its product and staff up from its small group of seven employees to about 20 in the next year.

“Roy, Chen and Barak have come up with a very elegant solution to a notoriously complex problem space,” said Ofer Schreiber, partner at YL Ventures. “Because they cut straight to visibility — the true heart of this issue — cybersecurity professionals can finally see and manage all of the applications in their environments. This will have an extraordinary impact on the rate of application rollout and enterprise productivity.”

Oct
20
2020
--

Splunk acquires Plumbr and Rigor to build out its observability platform

Data platform Splunk today announced that it has acquired two startups, Plumbr and Rigor, to build out its new Observability Suite, which is also launching today. Plumbr is an application performance monitoring service, while Rigor focuses on digital experience monitoring, using synthetic monitoring and optimization tools to help businesses optimize their end-user experiences. Both of these acquisitions complement the technology and expertise Splunk acquired when it bought SignalFx for over $1 billion last year.

Splunk did not disclose the price of these acquisitions, but Estonia-based Plumbr had raised about $1.8 million, while Atlanta-based Rigor raised a debt round earlier this year.

When Splunk acquired SignalFx, it said it did so in order to become a leader in observability and APM. As Splunk CTO Tim Tully told me, the idea here now is to accelerate this process.

“Because a lot of our users and our customers are moving to the cloud really, really quickly, the way that they monitor [their] applications changed because they’ve gone to serverless and microservices a ton,” he said. “So we entered that space with those acquisitions, we quickly folded them together with these next two acquisitions. What Plumbr and Rigor do is really fill out more of the portfolio.”

He noted that Splunk was especially interested in Plumbr’s bytecode implementation and its real-user monitoring capabilities, and Rigor’s synthetics capabilities around digital experience monitoring (DEM). “By filling in those two pieces of the portfolio, it gives us a really amazing set of solutions because DEM was the missing piece for our APM strategy,” Tully explained.

With the launch of its Observability Suite, Splunk is now pulling together a lot of these capabilities into a single product — which also features a new design that makes it stand apart from the rest of Splunk’s tools. It combines logs, metrics, traces, digital experience, user monitoring, synthetics and more.

“At Yelp, our engineers are responsible for hundreds of different microservices, all aimed at helping people find and connect with great local businesses,” said Chris Gordon, Technical Lead at Yelp, where his team has been testing the new suite. “Our Production Observability team collaborates with Engineering to improve visibility into the performance of key services and infrastructure. Splunk gives us the tools to empower engineers to monitor their own services as they rapidly ship code, while also providing the observability team centralized control and visibility over usage to ensure we’re using our monitoring resources as efficiently as possible.”
