Sep 23, 2022
--

Keep Your Data Safe with Percona

September was, and still is, an extremely fruitful month (especially for black-hat hackers) for news about data leaks and breaches:

  1. Uber suffers computer system breach, alerts authorities
  2. GTA 6 source code and videos leaked after Rockstar Games hack
  3. Revolut breach: personal and banking data exposed

In this blog post, we want to remind you how to keep your data safe when running your favorite open source databases.

Network exposure

Search engines like Shodan make it easy to find publicly available databases: over 3.6 million MySQL servers have been found exposed on the Internet.

The best practice here is to run database servers in an isolated private network, separated even from the rest of your corporate network. That way, the risk of exposure stays low even if a server is misconfigured.

If for some reason you must run your database on a server in a public network, you can still avoid network exposure:

  • Bind your server to localhost or to a private IP address of the server

For example, for MySQL, use the bind-address option in your my.cnf:

bind-address = 192.168.0.123

  • Configure the operating system firewall to block access through the public network interface

Users and passwords

To complement the network exposure story, ensure that your users cannot connect from just any IP address. Taking MySQL as an example, the following GRANT command allows connections only from a specific private network:

GRANT ALL ON db1.* TO 'perconaAdmin'@'192.168.0.0/255.255.0.0';

MySQL also has an auth_socket plugin, which controls connections to the database through Unix sockets. Read more in this blog post: Use MySQL Without a Password (and Still be Secure).

Minimize the risk: do not use default usernames and passwords. The SecLists project is a good catalog of bad password choices, with lists for MySQL, PostgreSQL, and miscellaneous services. Percona Platform provides users with Advisors (read more below) that preemptively check for misconfigured grants, weak passwords, and more.
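As an illustration of the kind of check an advisor might perform, here is a minimal sketch; the bad-password list and account data below are hypothetical, and real tools compare against full wordlists such as SecLists:

```python
# Sketch of a weak-credential check: flag accounts whose password is
# empty, equals the username, or appears on a known-bad list.
# (Hypothetical data; real advisors use large wordlists.)
COMMON_BAD_PASSWORDS = {"root", "admin", "password", "123456", "mysql"}

def weak_accounts(accounts):
    """Return usernames with empty, username-equal, or known-bad passwords."""
    flagged = []
    for user, password in accounts:
        if not password or password == user or password.lower() in COMMON_BAD_PASSWORDS:
            flagged.append(user)
    return flagged

accounts = [
    ("app", "S7r0ng&Unique!"),
    ("root", "root"),          # password equals username
    ("reporting", "123456"),   # on the known-bad list
]
print(weak_accounts(accounts))  # ['root', 'reporting']
```

A real deployment would pull the account list from the server's privilege tables rather than hard-coding it.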

So now we agree that a strong password is a must. Did you know that you can enforce it? This Percona post on Improving MySQL Password Security with the Validation Plugin shows how.

A strong password is set, great! To make your system even more resilient to security risks, it is recommended to have a password rotation policy. Rotation can be performed manually, but it can also be automated through various integrations, such as LDAP, KMIP, HashiCorp Vault, and many more. For example, this document describes how Percona Server for MongoDB can work with LDAP.

Encryption

There are two types of encryption to consider with databases, and ideally you will use both of them:

  1. Transport encryption – secures the traffic between client and server and between cluster nodes
  2. Data-at-rest encryption (or Transparent Data Encryption – TDE) – encrypts the data on disk to prevent unauthorized access

Transport

With an unencrypted connection between the client and the server, anyone with access to the network could watch all your traffic and steal credentials and sensitive data. We recommend enabling network encryption by default; several Percona blog posts cover the details.

Data-at-rest

Someone who gains access to a physical disk or network block storage can read the data from it. To mitigate this risk, encrypt the data on disk. This can be done at the file system level, at the block storage level, or within the database storage engine itself.

Tools like fscrypt or the built-in encryption in ZFS can help at the file system level. Public clouds provide built-in encryption for their network storage solutions (e.g., AWS EBS, GCP). Private storage solutions, like Ceph, also support data-at-rest encryption at the block level.

Percona takes security seriously, which is why we recommend enabling data-at-rest encryption by default, especially for production workloads. Percona Server for MySQL and Percona Server for MongoDB provide a wide variety of options for performing TDE at the database level.

Preventive measures

Mistakes and misconfigurations happen, and it would be great if there were a mechanism to alert you about issues before it is too late. Guess what: we have it!

Percona Monitoring and Management (PMM) comes with Advisors: checks that identify potential security threats, vulnerabilities, data loss or corruption, and other issues. Advisors are the software representation of years of Percona's expertise in database security and performance.

By connecting PMM to Percona Platform, users get more sophisticated Advisors for free, while our paid customers get even deeper database checks that uncover various misconfiguration and non-compliance gems.

Learn more about Percona Platform with PMM on our website and check if your databases are secured and fine-tuned right away.

If you still believe you need more help, please let us know through our Community Forums or contact the Percona team directly.

Sep 20, 2022
--

Column-Level Encryption in MySQL

In a post written earlier this year – Percona Server for MySQL Encryption Options and Choices – I discussed some of the options around encryption in MySQL.  Being such a complex topic, that post was meant to clarify and highlight various aspects of “encryption” at different levels.  I recently had this topic come up again, specifically around column-level encryption and the various options, so I wanted to cover it in more detail.

As of the current release of Percona Server for MySQL, there is no built-in way to define a single column as encrypted.  Ideally, some metadata (say, an ENCRYPTED attribute on the column definition) could be passed in a CREATE statement and the encryption would just happen automatically.

Unfortunately, this option isn’t available and we need to do some data manipulation at or prior to read/write time.

Built-in MySQL encryption functions

One of the most common approaches is to use the built-in MySQL encryption functions described here: https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html.  The standard flow is to call AES_ENCRYPT() with the plaintext and key when writing, and AES_DECRYPT() when reading.
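The flow can be sketched without a live server by printing the statements a client would send; the table and column names below are hypothetical, and note how both the key and the plaintext end up in the statement text:

```python
# Illustrative only: build the INSERT/SELECT statements for the
# built-in AES_ENCRYPT()/AES_DECRYPT() flow. Table/column names are
# hypothetical. Embedding the key and plaintext in the statement is
# exactly the log-leakage concern discussed in the post.
KEY = "my_secret_key"

def build_insert(ssn: str) -> str:
    return (
        "INSERT INTO customers (ssn_enc) "
        f"VALUES (AES_ENCRYPT('{ssn}', '{KEY}'));"
    )

def build_select() -> str:
    return (
        "SELECT CAST(AES_DECRYPT(ssn_enc, "
        f"'{KEY}') AS CHAR) AS ssn FROM customers;"
    )

print(build_insert("123-45-6789"))
print(build_select())
```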

This works perfectly fine and the data will be stored encrypted for that column.  If an unauthorized person were to gain access to the table data, they would be unable to read that column without the key.  The biggest concern with this approach is that the plaintext key and plaintext data are BOTH specified in the query.  This leads to potential leaks in log files (slow query log, binary log, etc.) as well as potential sniffing over non-secure networks (i.e., connections not using SSL).

Also, key storage can become more cumbersome.  If you plan to share the same key for the entire table, key rotation and management can become non-trivial.  General best practices recommend that you rotate keys at least once a year.  On large tables, this could be a massive undertaking.  Let’s look at the envelope encryption pattern as an alternative.

Envelope encryption

In contrast to using built-in encryption, envelope encryption uses the concept of individual data keys.  While there are many ways to approach this, my personal experience is with the AWS KMS service.  In KMS, you can create what is called a “Customer Master Key” (CMK).  This works well for encrypting small strings like passwords, but it is limited to encrypting payloads of up to 4KB.

A more flexible approach is to use individual data keys, encrypt the data locally in the application, and then store the encrypted data along with the encrypted data key.  This can be done at various levels of granularity – from the per-row level, to the table level, to the database level.  The general process is: generate a data key, encrypt the data locally with it, encrypt the data key under the master key, and store both ciphertexts together.

When you need to decrypt the data, you first decrypt the data key and then use that to decrypt the data.
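Here is a dependency-free sketch of the pattern; the XOR keystream below is a toy stand-in for a real cipher (or for KMS GenerateDataKey/Decrypt calls) and must not be used for real encryption:

```python
# Toy sketch of envelope encryption. toy_cipher() is a keyed XOR
# keystream standing in for a real algorithm; applying it twice
# round-trips the data. Do NOT use this for actual encryption.
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

master_key = os.urandom(32)      # held by the KMS, never stored with the data

# Encrypt: fresh data key per row, encrypt the data, then wrap the key.
data_key = os.urandom(32)
ciphertext = toy_cipher(data_key, b"123-45-6789")
wrapped_key = toy_cipher(master_key, data_key)
row = (ciphertext, wrapped_key)  # both ciphertexts are safe to store together

# Decrypt: unwrap the data key first, then decrypt the data.
recovered_key = toy_cipher(master_key, row[1])
plaintext = toy_cipher(recovered_key, row[0])
print(plaintext)  # b'123-45-6789'
```

The structure mirrors the real thing: only the wrapping/unwrapping of `data_key` touches the master key, so disabling the master key revokes every row at once.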

Because both the data key and the data are encrypted, both are safe to store together.  This is one of the main benefits of this method.  You can have hundreds to millions of data keys independent of one another but protected by a single master key.  This allows you to deactivate all of the individual keys by disabling one single master key.  

It also simplifies key rotation – you can simply rotate the master key and start using the new key to generate new data keys. Note that this doesn’t re-encrypt the data, but it does allow you to follow the best practice of periodically rotating keys.  As long as you don’t delete any of the old keys, KMS can determine which key to use for decryption from the key itself and automatically decrypt your data (i.e. you don’t have to specify which key encrypted the data).  

I’ve included a link to a sample script in Python that shows this process in more detail.  While there isn’t actually an active database in the demo, the code prints out the INSERT and SELECT statements that could be run against a live server: https://github.com/mbenshoof/kms-envelope-demo

Challenges with column-level encryption

Searching

It should go without saying that introducing column-level encryption isn’t a trivial task.  One of the biggest challenges is reviewing how encrypted data is retrieved.  For example, if I store social security numbers encrypted individually, then how do I search for the user with an SSN of 123-45-6789?  If I use a shared key for the entire table/schema, then it is possible with proper indexing: I just have to pass the encrypted value in the WHERE clause and, if it exists, it will be found.

However, on a per-row model where each row uses a unique key, this is no longer possible.  As I don’t know the key the value was encrypted with, I can’t search for the encrypted value in the table.  In cases like this, you might consider a one-way hash field that could be searched against.  For example, you could store the SHA256 hash of an SSN as an additional column for searching but then decrypt any other sensitive information.
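A minimal sketch of that idea, assuming a hypothetical two-column layout of (hash, ciphertext); a production design would also salt or HMAC the hash so it can't be brute-forced from the small SSN keyspace:

```python
# Sketch of a one-way hash ("blind index") column: store SHA-256 of
# the SSN next to the per-row-encrypted payload and search on the hash.
# Layout is hypothetical; real designs salt/HMAC the hash.
import hashlib

def blind_index(ssn: str) -> str:
    return hashlib.sha256(ssn.encode()).hexdigest()

# rows of (ssn_hash, encrypted_payload) -- payloads elided here
table = [
    (blind_index("111-22-3333"), b"<ciphertext-1>"),
    (blind_index("123-45-6789"), b"<ciphertext-2>"),
]

needle = blind_index("123-45-6789")
matches = [row for row in table if row[0] == needle]
print(len(matches))  # 1
```

In MySQL terms the equivalent would be an indexed hash column queried with equality, with the sensitive column decrypted only after the row is found.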

CPU and space overhead

The other challenge is the additional write/read overhead needed to handle encryption and decryption.  While this may or may not be an issue depending on the use case, the extra CPU needed on either the application side or the MySQL side can come into play.  You will need to consider the extra processing required and factor that in when planning the size of your resources.

Additionally, depending on the encryption libraries used, there can be additional space needed to store encrypted values.  In some cases, the encrypted value (when stored in base64 specifically) may end up requiring a higher storage footprint.  The space could be compounded if using an index on an additional hash column.  For small values (like SSN), the hash may be much larger than the actual data.  This can result in a much higher storage footprint when applied to millions of records. 
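A quick back-of-the-envelope check of those sizes; the 16-byte IV plus one padded 16-byte block is an assumption, not a measurement of any particular library:

```python
# Rough illustration of the space overhead for a small value like an
# SSN: a base64-encoded ciphertext and a 64-char hex SHA-256 hash both
# dwarf the 11-character plaintext. Block/IV sizes are assumptions.
import base64
import hashlib

ssn = "123-45-6789"
plain_size = len(ssn)                                      # 11 bytes

# Assume a 16-byte IV plus one padded 16-byte cipher block, base64'd:
approx_ciphertext = b"\x00" * (16 + 16)
b64_size = len(base64.b64encode(approx_ciphertext))        # 44 bytes

hash_size = len(hashlib.sha256(ssn.encode()).hexdigest())  # 64 bytes

print(plain_size, b64_size, hash_size)  # 11 44 64
```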

Wrapping up

Encryption and security are very important and complicated topics.  When considering column-level encryption in MySQL, you definitely have some options.  The easiest way would be to just leverage the built-in encryption functions in MySQL.  However, you can take things a step further and handle all encryption and decryption in your application.  

As is always the case with complicated topics like this, the choice and approach depend entirely on your requirements and use case.  Just know that with the flexibility of MySQL, there is most likely a design and approach that works for you!

Jul 07, 2021
--

Opaque raises $9.5M seed to secure sensitive data in the cloud

Opaque, a new startup born out of Berkeley’s RISELab, announced a $9.5 million seed round today to build a solution to access and work with sensitive data in the cloud in a secure way, even with multiple organizations involved. Intel Capital led today’s investment with participation by Race Capital, The House Fund and FactoryHQ.

The company helps customers work with secure data in the cloud while making sure the data they are working on is not being exposed to cloud providers, other research participants or anyone else, says company president Raluca Ada Popa.

“What we do is we use this very exciting hardware mechanism called Enclave, which [operates] deep down in the processor — it’s a physical black box — and only gets decrypted there. […] So even if somebody has administrative privileges in the cloud, they can only see encrypted data,” she explained.

Company co-founder Ion Stoica, who was a co-founder at Databricks, says the startup’s solution helps resolve two conflicting trends. On one hand, businesses increasingly want to make use of data, but at the same time are seeing a growing trend toward privacy. Opaque is designed to resolve this by giving customers access to their data in a safe and fully encrypted way.

The company describes the solution as “a novel combination of two key technologies layered on top of state-of-the-art cloud security—secure hardware enclaves and cryptographic fortification.” This enables customers to work with data — for example to build machine learning models — without exposing the data to others, yet while generating meaningful results.

Popa says this could be helpful for hospitals working together on cancer research, who want to find better treatment options without exposing a given hospital’s patient data to other hospitals, or banks looking for money laundering without exposing customer data to other banks, as a couple of examples.

Investors were likely attracted to the pedigree of Popa, a computer security and applied crypto professor at UC Berkeley and Stoica, who is also a Berkeley professor and co-founded Databricks. Both helped found RISELabs at Berkeley where they developed the solution and spun it out as a company.

Mark Rostick, vice president and senior managing director at lead investor Intel Capital says his firm has been working with the founders since the startup’s earliest days, recognizing the potential of this solution to help companies find complex solutions even when there are multiple organizations involved sharing sensitive data.

“Enterprises struggle to find value in data across silos due to confidentiality and other concerns. Confidential computing unlocks the full potential of data by allowing organizations to extract insights from sensitive data while also seamlessly moving data to the cloud without compromising security or privacy,” Rostick said in a statement.

He added, “Opaque bridges the gap between data security and cloud scale and economics, thus enabling inter-organizational and intra-organizational collaboration.”


Apr 20, 2021
--

Cape Privacy announces $20M Series A to help companies securely share data

Cape Privacy, the early-stage startup that wants to make it easier for companies to share sensitive data in a secure and encrypted way, announced a $20 million Series A today.

Evolution Equity Partners led the round with participation from new investors Tiger Global Management, Ridgeline Partners and Downing Lane. Existing investors Boldstart Ventures, Version One Ventures, Haystack, Radical Ventures and a slew of individual investors also participated. The company has now raised approximately $25 million, including a $5 million seed investment we covered last June.

Cape Privacy CEO Ché Wijesinghe says that the product has evolved quite a bit since we last spoke. “We have really focused our efforts on encrypted learning, which is really the core technology, which was fundamental to allowing the multi-party compute capabilities between two organizations or two departments to work and build machine learning models on encrypted data,” Wijesinghe told me.

Wijesinghe says that a key business case involves a retail company owned by a private equity firm sharing data with a large financial services company, which is using the data to feed its machine learning models. In this case, sharing customer data, it’s essential to do it in a secure way and that is what Cape Privacy claims is its primary value prop.

He said that while the data sharing piece is the main focus of the company, it has data governance and compliance components to be sure that entities sharing data are doing so in a way that complies with internal and external rules and regulations related to the type of data.

While the company is concentrating on financial services for now, because Wijesinghe has been working with these companies for years, he sees use cases far beyond a single vertical, including pharmaceuticals, government, healthcare, telco and manufacturing.

“Every single industry needs this and so we look at the value of what Cape’s encrypted learning can provide as really being something that can be as transformative and be as impactful as what SSL was for the adoption of the web browser,” he said.

Richard Seewald, founding and managing partner at lead investor Evolution Equity Partners likes that ability to expand the product’s markets. “The application in Financial Services is only the beginning. Cape has big plans in life sciences and government where machine learning will help make incredible advances in clinical trials and counter-terrorism for example. We anticipate wide adoption of Cape’s technology across many use cases and industries,” he said.

The company has recently expanded to 20 people and Wijesinghe, who is half Asian, takes DEI seriously. “We’ve been very, very deliberate about our DEI efforts, and I think one of the things that we pride ourselves in is that we do foster a culture of acceptance, that it’s not just about diversity in terms of color, race, gender, but we just hired our first nonbinary employee,” he said.

Part of making people feel comfortable and included involves training so that fellow employees have a deeper understanding of the cultural differences. The company certainly has diversity across geographies with employees in 10 different time zones.

The company is obviously remote with a spread like that, but once the pandemic is over, Wijesinghe sees bringing people together on occasion with New York City as the hub for the company, where people from all over the world can fly in and get together.

Apr 16, 2021
--

Enterprise security attackers are one password away from your worst day

If the definition of insanity is doing the same thing over and over and expecting a different outcome, then one might say the cybersecurity industry is insane.

Criminals continue to innovate with highly sophisticated attack methods, but many security organizations still use the same technological approaches they did 10 years ago. The world has changed, but cybersecurity hasn’t kept pace.

Distributed systems, with people and data everywhere, mean the perimeter has disappeared. And the hackers couldn’t be more excited. The same technology approaches, like correlation rules, manual processes and reviewing alerts in isolation, do little more than remedy symptoms while hardly addressing the underlying problem.

The current risks aren’t just technology problems; they’re also problems of people and processes.

Credentials are supposed to be the front gates of the castle, but as the SOC is failing to change, it is failing to detect. The cybersecurity industry must rethink its strategy to analyze how credentials are used and stop breaches before they become bigger problems.

It’s all about the credentials

Compromised credentials have long been a primary attack vector, but the problem has only grown worse in the mid-pandemic world. The acceleration of remote work has increased the attack footprint as organizations struggle to secure their networks while employees work from unsecured connections. In April 2020, the FBI said that cybersecurity attacks reported to the organization grew by 400% compared to before the pandemic. Just imagine where that number is now in early 2021.

It only takes one compromised account for an attacker to enter the active directory and create their own credentials. In such an environment, all user accounts should be considered as potentially compromised.

Nearly all of the hundreds of breach reports I’ve read have involved compromised credentials. More than 80% of hacking breaches are now enabled by brute force or the use of lost or stolen credentials, according to the 2020 Data Breach Investigations Report. The most effective and commonly-used strategy is credential stuffing attacks, where digital adversaries break in, exploit the environment, then move laterally to gain higher-level access.

Dec 14, 2020
--

5 questions every IT team should be able to answer

Now more than ever, IT teams play a vital role in keeping their businesses running smoothly and securely. With all of the assets and data that are now broadly distributed, a CEO depends on their IT team to ensure employees remain connected and productive and that sensitive data remains protected.

CEOs often visualize and measure things in terms of dollars and cents, and in the face of continuing uncertainty, IT — along with most other parts of the business — is facing intense scrutiny and tightening budgets. So, it is more important than ever for IT teams to be able to demonstrate that they’ve made sound technology investments and have the agility needed to operate successfully in the face of continued uncertainty.

For a CEO to properly understand risk exposure and make the right investments, IT departments have to be able to confidently communicate what types of data are on any given device at any given time.

Here are five questions that IT teams should be ready to answer when their CEO comes calling:

What have we spent our money on?

Or, more specifically, exactly how many assets do we have? And, do we know where they are? While these seem like basic questions, they can be shockingly difficult to answer … much more difficult than people realize. The last several months in the wake of the COVID-19 outbreak have been the proof point.

With the mass exodus of machines leaving the building and disconnecting from the corporate network, many IT leaders found themselves guessing just how many devices had been released into the wild and gone home with employees.

One CIO we spoke to estimated they had “somewhere between 30,000 and 50,000 devices” that went home with employees, meaning there could have been up to 20,000 that were completely unaccounted for. The complexity was further compounded as old devices were pulled out of desk drawers and storage closets to get something into the hands of employees who were not equipped to work remotely. Companies had endpoints connecting to the corporate network and systems that they hadn’t seen for years — meaning they were out-of-date from a security perspective as well.

This level of uncertainty is obviously unsustainable and introduces a tremendous amount of security risk. Every endpoint that goes unaccounted for not only means wasted spend but also increased vulnerability, greater potential for breach or compliance violation, and more. In order to mitigate these risks, there needs to be a permanent connection to every device that can tell you exactly how many assets you have deployed at any given time — whether they are in the building or out in the wild.

Are our devices and data protected?

Device and data security go hand in hand; without the ability to see every device that is deployed across an organization, it becomes next to impossible to know what data is living on those devices. When employees know they are leaving the building and going to be off network, they tend to engage in “data hoarding.”

Dec 01, 2020
--

Google launches Android Enterprise Essentials, a mobile device management service for small businesses

Google today introduced a new mobile management and security solution, Android Enterprise Essentials, which, despite its name, is actually aimed at small to medium-sized businesses. The company explains this solution leverages Google’s experience in building Android Enterprise device management and security tools for larger organizations in order to come up with a simpler solution for those businesses with smaller budgets.

The new service includes the basics in mobile device management, with features that allow smaller businesses to require their employees to use a lock screen and encryption to protect company data. It also prevents users from installing apps outside the Google Play Store via the Google Play Protect service, and allows businesses to remotely wipe all the company data from phones that are lost or stolen.

As Google explains, smaller companies often handle customer data on mobile devices, but many of today’s remote device management solutions are too complex for small business owners, and are often complicated to get up-and-running.

Android Enterprise Essentials attempts to make the overall setup process easier by eliminating the need to manually activate each device. And because the security policies are applied remotely, there’s nothing the employees themselves have to configure on their own phones. Instead, businesses that want to use the new solution will just buy Android devices from a reseller to hand out or ship to employees with policies already in place.

Though primarily aimed at smaller companies, Google notes the solution may work for select larger organizations that want to extend some basic protections to devices that don’t require more advanced management solutions. The new service can also help companies get started with securing their mobile device inventory, before they move up to more sophisticated solutions over time, including those from third-party vendors.

The company has been working to better position Android devices for use in workplace over the past several years, with programs like Android for Work, Android Enterprise Recommended, partnerships focused on ridding the Play Store of malware, advanced device protections for high-risk users, endpoint management solutions, and more.

Google says it will roll out Android Enterprise Essentials initially with distributors Synnex in the U.S. and Tech Data in the U.K. In the future, it will make the service available through additional resellers as it takes the solution global in early 2021. Google will also host an online launch event and demo in January for interested customers.

Oct 26, 2020
--

DataFleets keeps private data useful and useful data private with federated learning and $4.5M seed

As you may already know, there’s a lot of data out there, and some of it could actually be pretty useful. But privacy and security considerations often put strict limitations on how it can be used or analyzed. DataFleets promises a new approach by which databases can be safely accessed and analyzed without the possibility of privacy breaches or abuse — and has raised a $4.5 million seed round to scale it up.

To work with data, you need to have access to it. If you’re a bank, that means transactions and accounts; if you’re a retailer, that means inventories and supply chains, and so on. There are lots of insights and actionable patterns buried in all that data, and it’s the job of data scientists and their ilk to draw them out.

But what if you can’t access the data? After all, there are many industries where it is not advised or even illegal to do so, such as in healthcare. You can’t exactly take a whole hospital’s medical records, give them to a data analysis firm, and say “sift through that and tell me if there’s anything good.” These, like many other data sets, are too private or sensitive to allow anyone unfettered access. The slightest mistake — let alone abuse — could have serious repercussions.

In recent years a few technologies have emerged that allow for something better, though: analyzing data without ever actually exposing it. It sounds impossible, but there are computational techniques for allowing data to be manipulated without the user ever actually having access to any of it. The most widely used one is called homomorphic encryption, which unfortunately produces an enormous, orders-of-magnitude reduction in efficiency — and big data is all about efficiency.

This is where DataFleets steps in. It hasn’t reinvented homomorphic encryption, but has sort of sidestepped it. It uses an approach called federated learning, where instead of bringing the data to the model, they bring the model to the data.

DataFleets integrates with both sides of a secure gap between a private database and people who want to access that data, acting as a trusted agent to shuttle information between them without ever disclosing a single byte of actual raw data.

Illustration showing how a model can be created without exposing data.

Image Credits: DataFleets

Here’s an example. Say a pharmaceutical company wants to develop a machine-learning model that looks at a patient’s history and predicts whether they’ll have side effects with a new drug. A medical research facility’s private database of patient data is the perfect thing to train it. But access is highly restricted.

The pharma company’s analyst creates a machine-learning training program and drops it into DataFleets, which contracts with both them and the facility. DataFleets translates the model to its own proprietary runtime and distributes it to the servers where the medical data resides; within that sandboxed environment, it grows into a strapping young ML agent, which when finished is translated back into the analyst’s preferred format or platform. The analyst never sees the actual data, but has all the benefits of it.

Screenshot of the DataFleets interface. Look, it’s the applications that are meant to be exciting. Image Credits: DataFleets

It’s simple enough, right? DataFleets acts as a sort of trusted messenger between the platforms, undertaking the analysis on behalf of others and never retaining or transferring any sensitive data.

Plenty of folks are looking into federated learning; the hard part is building out the infrastructure for a wide-ranging enterprise-level service. You need to cover a huge amount of use cases and accept an enormous variety of languages, platforms and techniques, and of course do it all totally securely.

“We pride ourselves on enterprise readiness, with policy management, identity-access management, and our pending SOC 2 certification,” said DataFleets COO and co-founder Nick Elledge. “You can build anything on top of DataFleets and plug in your own tools, which banks and hospitals will tell you was not true of prior privacy software.”

But once federated learning is set up, all of a sudden the benefits are enormous. For instance, one of the big issues today in combating COVID-19 is that hospitals, health authorities, and other organizations around the world are having difficulty, despite their willingness, in securely sharing data relating to the virus.

Everyone wants to share, but who sends whom what, where is it kept, and under whose authority and liability? With old methods, it’s a confusing mess. With homomorphic encryption it’s useful but slow. With federated learning, theoretically, it’s as easy as toggling someone’s access.

Because the data never leaves its “home,” this approach is essentially anonymous and thus highly compliant with regulations like HIPAA and GDPR, another big advantage. Elledge notes: “We’re being used by leading healthcare institutions who recognize that HIPAA doesn’t give them enough protection when they are making a data set available for third parties.”

Of course there are less noble, but no less viable, examples in other industries: Wireless carriers could make subscriber metadata available without selling out individuals; banks could sell consumer data without violating anyone in particular’s privacy; bulky datasets like video can sit where they are instead of being duplicated and maintained at great expense.

The company’s $4.5 million seed round is seemingly evidence of confidence from a variety of investors (as summarized by Elledge): AME Cloud Ventures (Jerry Yang of Yahoo) and Morado Ventures, Lightspeed Venture Partners, Peterson Ventures, Mark Cuban, LG, Marty Chavez (president of the board of overseers of Harvard), Stanford-StartX fund, and three unicorn founders (Rappi, Quora and Lucid).

With only 11 full-time employees DataFleets appears to be doing a lot with very little, and the seed round should enable rapid scaling and maturation of its flagship product. “We’ve had to turn away or postpone new customer demand to focus on our work with our lighthouse customers,” Elledge said. They’ll be hiring engineers in the U.S. and Europe to help launch the planned self-service product next year.

“We’re moving from a data ownership to a data access economy, where information can be useful without transferring ownership,” said Elledge. If his company’s bet is on target, federated learning is likely to be a big part of that going forward.

Oct
05
2020
--

Strike Graph raises $3.9M to help automate security audits

Compliance automation isn’t exactly the most exciting topic, but security audits are big business and companies that aim to get a SOC 2, ISO 27001 or FedRAMP certification can often spend six figures to get through the process with the help of an auditing service. Seattle-based Strike Graph, which is launching today and announcing a $3.9 million seed funding round, wants to automate as much of this process as possible.

The company’s funding round was led by Madrona Venture Group, with participation from Amplify.LA, Revolution’s Rise of the Rest Seed Fund and Green D Ventures.

Strike Graph co-founder and CEO Justin Beals tells me that the idea for the company came to him during his time as CTO at machine learning startup Koru (which had a bit of an odd exit last year). To get enterprise adoption for that service, the company had to get a SOC 2 security certification. “It was a real challenge, especially for a small company. In talking to my colleagues, I just recognized how much of a challenge it was across the board. And so when it was time for the next startup, I was just really curious,” he told me.


Together with his co-founder Brian Bero, he incubated the idea at Madrona Venture Labs, where he spent some time as Entrepreneur in Residence after Koru.

Beals argues that today’s process tends to be slow, inefficient and expensive. The idea behind Strike Graph, unsurprisingly, is to remove as many of these inefficiencies as is currently possible. The company itself, it is worth noting, doesn’t provide the actual audit service. Businesses will still need to hire an auditing service for that. But Beals also argues that the bulk of what companies are paying for today is pre-audit preparation.

“We do all that preparation work and preparing you and then, after your first audit, you have to go and renew every year. So there’s an important maintenance of that information.”


When customers come to Strike Graph, they fill out a risk assessment. The company takes that and can then provide them with controls for how to improve their security posture — both to pass the audit and to secure their data. Beals also noted that soon, Strike Graph will be able to help businesses automate the collection of evidence for the audit (say your encryption settings) and can pull that in regularly. Certifications like SOC 2, after all, require companies to have ongoing security practices in place and get re-audited every 12 months. Automated evidence collection will launch in early 2021, once the team has built out the first set of its integrations to collect that data.
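Automated evidence collection of the kind described above boils down to polling each system for a setting and recording a timestamped pass/fail against a control. The following is a hypothetical sketch of that pattern; the control IDs, setting names, and `collect_evidence` helper are illustrative and not Strike Graph’s actual product.

```python
# Hypothetical evidence collector: evaluate SOC 2-style controls against a
# system's config snapshot and record timestamped pass/fail results.
from datetime import datetime, timezone

CONTROLS = {
    "CC6.1-encryption-at-rest": lambda cfg: cfg.get("disk_encryption") == "enabled",
    # Naive string comparison works for "TLSv1.0".."TLSv1.3"; a real
    # collector would parse versions properly.
    "CC6.7-tls-version": lambda cfg: cfg.get("min_tls_version", "") >= "TLSv1.2",
}

def collect_evidence(system_name, cfg):
    """Run every control check against one system's configuration."""
    return [
        {
            "system": system_name,
            "control": control_id,
            "passed": check(cfg),
            "collected_at": datetime.now(timezone.utc).isoformat(),
        }
        for control_id, check in CONTROLS.items()
    ]

evidence = collect_evidence("billing-api", {
    "disk_encryption": "enabled",
    "min_tls_version": "TLSv1.0",
})
for record in evidence:
    print(record["control"], "PASS" if record["passed"] else "FAIL")
```

Run on a schedule, records like these give the auditor a continuous trail instead of a once-a-year snapshot, which is the point of re-auditing certifications every 12 months.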

That’s also where the company, which mostly targets mid-size businesses, plans to spend a lot of its new funding. In addition, the company plans to focus on its marketing efforts, mostly around content marketing and educating its potential customers.

“Every company, big or small, that sells a software solution must address a broad set of compliance requirements in regards to security and privacy. Obtaining the certifications can be a burdensome, opaque and expensive process. Strike Graph is applying intelligent technology to this problem — they help the company identify the appropriate risks, enable the audit to run smoothly and then automate the compliance and testing going forward,” said Hope Cochran, managing director at Madrona Venture Group. “These audits were a necessary pain when I was a CFO, and Strike Graph’s elegant solution brings together teams across the company to move the business forward faster.”

Jul
14
2020
--

Google Cloud launches Confidential VMs

At its virtual Cloud Next ’20 event, Google Cloud today announced Confidential VMs, a new type of virtual machine that makes use of the company’s work around confidential computing to ensure that data isn’t just encrypted at rest but also while it is in memory.

“We already employ a variety of isolation and sandboxing techniques as part of our cloud infrastructure to help make our multi-tenant architecture secure,” the company notes in today’s announcement. “Confidential VMs take this to the next level by offering memory encryption so that you can further isolate your workloads in the cloud. Confidential VMs can help all our customers protect sensitive data, but we think it will be especially interesting to those in regulated industries.”

In the backend, Confidential VMs make use of AMD’s Secure Encrypted Virtualization (SEV) feature, available in its second-generation EPYC CPUs. With that, the data stays encrypted while in use, and the encryption keys are generated in hardware and can’t be exported — so even Google doesn’t have access to them.


Developers who want to shift their existing VMs to a Confidential VM can do so with just a few clicks. Google notes that it built Confidential VMs on top of its Shielded VMs, which already provide protection against rootkits and other exploits.
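On the command line, creating a Confidential VM is a similarly small change: the instance just needs an N2D machine type and the confidential-compute flag. A sketch using the gcloud CLI; the instance name, zone, and image are illustrative and your project’s supported images may differ.

```shell
# Create a Confidential VM on an AMD EPYC-backed N2D machine type.
# Live migration is not supported, so the maintenance policy must be TERMINATE.
gcloud compute instances create my-confidential-vm \
  --zone=us-central1-a \
  --machine-type=n2d-standard-2 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2004-lts \
  --image-project=ubuntu-os-cloud
```

Everything else about the instance — networking, disks, IAM — is configured the same way as for a regular VM.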

“With built-in secure encrypted virtualization, 2nd Gen AMD EPYC processors provide an innovative hardware-based security feature that helps secure data in a virtualized environment,” said Raghu Nambiar, corporate vice president, Data Center Ecosystem, AMD. “For the new Google Compute Engine Confidential VMs in the N2D series, we worked with Google to help customers both secure their data and achieve performance of their workloads.”

That last part is obviously important, given that the extra encryption and decryption steps do incur at least a minor performance penalty. Google says it worked with AMD and developed new open-source drivers to ensure that “the performance metrics of Confidential VMs are close to those of non-confidential VMs.” At least according to the benchmarks Google itself has disclosed so far, both startup times and memory read and throughput performance are virtually the same for regular VMs and Confidential VMs.
