Feb
20
2019
--

Xage brings role-based single sign-on to industrial devices

Traditional industries like oil and gas and manufacturing often use equipment that was created in a time when remote access wasn’t a gleam in an engineer’s eye, and hackers had no way of connecting to them. Today, these devices require remote access, and some don’t have even rudimentary authentication. Xage, the startup that wants to make industrial infrastructure more secure, announced a new solution today to bring single sign-on and role-based control to even the oldest industrial devices.

Company CEO Duncan Greatwood says that some companies have adopted firewall technology, but if a hacker breaches the firewall, there often isn’t even a password to defend these kinds of devices. He adds that hackers have been increasingly targeting industrial infrastructure.

Xage has come up with a way to help these companies with its latest product called Xage Enforcement Point (XEP). This tool gives IT a way to control these devices with a single password, a kind of industrial password manager. Greatwood says that some companies have hundreds of passwords for various industrial tools. Sometimes, whether because of distance across a factory floor, or remoteness of location, workers would rather adjust these machines remotely when possible.

While operations wants to simplify this for workers with remote access, IT worries about security. That tension can hold companies back, force them into big firewall investments or, in some cases, lead them to roll out remote access without adequate protection.

XEP helps bring a level of protection to these pieces of equipment. “XEP is a relatively small piece of software that can run on a tiny credit-card size computer, and you simply insert it in front of the piece of equipment you want to protect,” Greatwood explained.

The rest of the Xage platform adds additional security. The company introduced fingerprinting last year, which gives unique identifiers to these pieces of equipment. If a hacker tries to spoof a piece of equipment, and the device lacks a known fingerprint, they can’t get on the system.

Xage also makes use of the blockchain and a rules engine to secure industrial systems. The customer can define rules and use the blockchain as an enforcement mechanism where each node in the chain carries the rules, and a certain number of nodes as defined by the customer must agree that the person, machine or application trying to gain access is a legitimate actor.
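The consensus step can be illustrated with a toy sketch (purely hypothetical; the details of Xage's implementation are not public): each node evaluates the access request against the rules and votes, and access is granted only if at least a customer-defined number of nodes agree.

```python
# Toy illustration of quorum-based access control; hypothetical,
# not Xage's actual implementation.

def grant_access(votes, quorum):
    """Grant access only if at least `quorum` nodes voted yes."""
    yes_votes = sum(1 for v in votes if v)
    return yes_votes >= quorum

# Five nodes evaluate the same access request against the rules;
# the customer requires agreement from at least three of them.
votes = [True, True, True, False, True]
print(grant_access(votes, quorum=3))  # four of five nodes agree
```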

The platform taken as a whole provides several levels of protection in an effort to discourage hackers who are trying to breach these systems. Greatwood says that while companies don’t usually get rid of tools they already have, like firewalls, they may scale back their investment after buying the Xage solution.

Xage was founded at the end of 2017. It has raised $16 million to this point and has 30 employees. Greatwood didn’t want to discuss a specific number of customers, but did say they were making headway in oil and gas, renewable energy, utilities and manufacturing.

Feb
19
2019
--

Senseon raises $6.4M to tackle cybersecurity threats with an AI ‘triangulation’ approach

Darktrace helped pave the way for using artificial intelligence to combat malicious hacking and enterprise security breaches. Now a new U.K. startup founded by an ex-Darktrace executive has raised some funding to take the use of AI in cybersecurity to the next level.

Senseon, which has pioneered a new model that it calls “AI triangulation” — simultaneously applying artificial intelligence algorithms to oversee, monitor and defend an organization’s network appliances, endpoints and “investigator bots” covering multiple microservices — has raised $6.4 million in seed funding.

David Atkinson — the startup’s CEO and founder, who had previously been the commercial director for Darktrace and before that helped pioneer new cybersecurity techniques as an operative at the U.K.’s Ministry of Defence — said that Senseon will use the funding to continue to expand its business both in Europe and the U.S.

The deal was co-led by MMC Ventures and Mark Weatherford, who is chief cybersecurity strategist at vArmour (which itself raised money in recent weeks) and previously Deputy Under Secretary for Cybersecurity, U.S. Department of Homeland Security. Others in the round included Amadeus Capital Partners, Crane Venture Partners and CyLon, a security startup incubator in London.

As Atkinson describes it, triangulation was an analytics concept first introduced by the CIA in the U.S., a method of bringing together multiple vectors of information to unearth inconsistencies in a data set (you can read more on triangulation in this CIA publication). He saw an opportunity to build a platform that took the same kind of approach to enterprise security.

There are a number of companies that are using AI-based techniques to help defend against breaches — in addition to Darktrace, there is Hexadite (a remediation specialist acquired by Microsoft), Amazon is working in the field and there are many others. In fact, I think you’d be hard-pressed to find any IT security company today that doesn’t claim to use, or actually use, AI in its approach.

Atkinson claims, however, that many AI-based solutions — and many other IT security products — take siloed, single-point approaches to defending a network. That is to say, you have network appliance security products, endpoint security, perhaps security for individual microservices and so on.

But while many of these work well, you don’t always get those different services speaking to each other. And that doesn’t reflect the shape that the most sophisticated security breaches are taking today.

As cybersecurity breaches and identified vulnerabilities continue to grow in frequency and scope — with hundreds of millions of individuals’ and organizations’ data potentially exposed in the process, systems disabled, and more — we’re seeing an increasing amount of sophistication on the part of the attackers.

Yes, those malicious actors employ artificial intelligence. But — as described in this 2019 paper on the state of cybersecurity from Symantec — they are also taking advantage of bigger “surface areas” with growing networks of connected objects all up for grabs; and they are tackling new frontiers like infiltrating data in transport and cloud-based systems. (In terms of examples of new frontiers, mobile networks, biometric data, gaming networks, public clouds and new card-skimming techniques are some of the specific areas that Experian calls out.)

Senseon’s antidote has been to build a new platform that “emulates how analysts think,” said Atkinson. Looking at an enterprise’s network appliance, an endpoint and microservices in the cloud, the Senseon platform “has an autonomous conversation” using the source data, before it presents a conclusion, threat, warning or even breach alert to the organization’s security team.

“We have an ability to take observations and compare that to hypothetical scenarios. When we tell you something, it has a rich context,” he said. Single-point alternatives essentially can create “blind spots that hackers manoeuvre around. Relying on single-source intelligence is like tying one hand behind your back.”
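As a purely illustrative sketch (not Senseon's actual algorithm), the difference between a single-point alert and a "triangulated" one can be expressed as requiring corroboration from more than one telemetry source before raising a high-confidence alert:

```python
# Purely illustrative sketch of "triangulation"; not Senseon's
# actual algorithm. The idea: corroborate an observation across
# telemetry sources before raising a high-confidence alert.

def triangulate(observations):
    """observations: dict mapping source name -> True if that source
    flagged the activity as suspicious."""
    flagged = [src for src, hit in observations.items() if hit]
    if len(flagged) >= 2:
        return ("alert", flagged)        # corroborated across sources
    if len(flagged) == 1:
        return ("investigate", flagged)  # single-source: possible blind spot
    return ("ok", flagged)

print(triangulate({"network": True, "endpoint": True, "cloud": False}))
print(triangulate({"network": True, "endpoint": False, "cloud": False}))
```

A single-point product only ever sees the first element of its own dict; combining sources is what surfaces the inconsistencies the CIA-style triangulation method looks for.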

After Senseon compiles its data, it sends out alerts to security teams in a remediation service. Interestingly, while the platform’s aim is to identify malicious activity in a network, another consequence of what it’s doing is to help organizations identify “false positives” that are not actually threats, to cut down on time and money that get wasted on investigating those.

“Organisations of all sizes need to get better at keeping pace with emerging threats, but more importantly, identifying the attacks that require intervention,” said Mina Samaan of MMC Ventures in a statement. “Senseon’s technology directly addresses this challenge by using reinforcement learning AI techniques to help over-burdened security teams better understand anomalous behaviour through a single holistic platform.”

Although Senseon is only announcing seed funding today, the company has actually been around since 2017 and already has customers, primarily in the finance and legal industries (it would only give out one customer reference, the law firm of Harbottle & Lewis).

Feb
18
2019
--

Deprecation of TLSv1.0 2019-02-28

end of Percona support for TLS1.0

Ahead of the PCI move to deprecate the use of ‘early TLS’, we’ve previously taken steps to disable TLSv1.0.

Unfortunately at that time we encountered some issues which led us to rollback these changes. This was to allow users of operating systems that did not – yet – support TLSv1.1 or higher to download Percona packages over TLSv1.0.

Since then, we have been tracking our usage statistics for older operating systems that don’t support TLSv1.1 or higher at https://repo.percona.com. We now receive very few legitimate requests for these downloads.

Consequently, we are ending support for TLSv1.0 on all Percona web properties.

While the packages will still be available for download from percona.com, we are unlikely to update them, as the operating systems themselves are end-of-life (e.g. RHEL 5). Also, in the future you will need to download these packages from a client that supports TLSv1.1 or greater.
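To check whether a given machine's client stack will still be able to reach the repositories, you can inspect which TLS versions its OpenSSL build supports. A minimal sketch in Python (flag availability depends on how your Python and OpenSSL were built):

```python
import ssl

# Report which TLS protocol versions the local OpenSSL build supports.
print("OpenSSL:", ssl.OPENSSL_VERSION)
for name, supported in [("TLSv1.0", ssl.HAS_TLSv1),
                        ("TLSv1.1", ssl.HAS_TLSv1_1),
                        ("TLSv1.2", ssl.HAS_TLSv1_2)]:
    print(f"{name}: {'yes' if supported else 'no'}")

# A client can also enforce a modern floor explicitly; TLSv1.2
# satisfies the TLSv1.1-or-greater requirement described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```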

For example, EL5 will not receive an updated version of OpenSSL that supports TLSv1.1 or greater, and PCI has called for the deprecation of ‘early TLS’ standards. Therefore you should upgrade any EL5 installations to EL6 or greater as soon as possible. As noted in this support policy update by Red Hat, EL5 stopped receiving support under Extended Update Support (EUS) in March 2015.

To continue to receive updates for your OS and for any Percona products that you use, you need to update to more recent versions of CentOS, Scientific Linux and Red Hat Enterprise Linux.


Photo by Kevin Noble on Unsplash

Feb
08
2019
--

Carbonite to acquire endpoint security company Webroot for $618.5M

Carbonite, the online backup and recovery company based in Boston, announced late yesterday that it will be acquiring Webroot, an endpoint security vendor, for $618.5 million in cash.

The company believes that by combining its cloud backup service with Webroot’s endpoint security tools, it will give customers a more complete solution. Webroot’s history actually predates the cloud, having launched in 1997. The private company reported $250 million in revenue for fiscal 2018, according to data provided by Carbonite. That will combine with Carbonite’s $296.4 million in revenue for the same time period.

Carbonite CEO and president Mohamad Ali saw the deal as a way to expand the Carbonite offering. “With threats like ransomware evolving daily, our customers and partners are increasingly seeking a more comprehensive solution that is both powerful and easy to use. Backup and recovery, combined with endpoint security and threat intelligence, is a differentiated solution that provides one, comprehensive data protection platform,” Ali explained in a statement.

The deal not only enhances Carbonite’s backup offering, it also gives the company access to a new set of customers. While Carbonite sells mainly through Value Added Resellers (VARs), Webroot sells mainly through its 14,000 Managed Service Providers (MSPs). That lack of overlap could extend Carbonite’s market reach into the MSP channel. Webroot has 300,000 customers, according to Carbonite.

This is not the first Carbonite acquisition. It has acquired several other companies over the last several years, including buying Mozy from Dell a year ago for $145 million. The acquisition strategy is about using its checkbook to expand the capabilities of the platform to offer a more comprehensive set of tools beyond core backup and recovery.

Graphic: Carbonite

The company announced it is using cash on hand and a $550 million loan from Barclays, Citizens Bank and RBC Capital Markets to finance the deal. Per usual, the acquisition will be subject to regulatory approval, but is expected to close this quarter.

Feb
07
2019
--

Google open sources ClusterFuzz

Google today announced that it is open sourcing ClusterFuzz, a scalable fuzzing tool that can run on clusters with more than 25,000 machines.

The company has long used the tool internally, and if you’ve paid particular attention to Google’s fuzzing efforts (and you have, right?), then this may all seem a bit familiar. That’s because Google launched the OSS-Fuzz service a couple of years ago and that service actually used ClusterFuzz. OSS-Fuzz was only available to open-source projects, though, while ClusterFuzz is now available for anyone to use.

The overall concept behind fuzzing is pretty straightforward: you basically throw lots of data (including random inputs) at your application and see how it reacts. Often it’ll crash, but sometimes you’ll be able to find memory leaks and security flaws. Once you start doing anything at scale, though, it becomes more complicated, and you’ll need tools like ClusterFuzz to manage that complexity.
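The loop described above can be sketched in a few lines (a toy illustration of the concept, not how ClusterFuzz itself is implemented): feed random inputs to the code under test and record any input that makes it crash.

```python
import random

def toy_parser(data: bytes) -> bytes:
    # Deliberately buggy target: crashes with IndexError on empty
    # input because it reads the length byte without checking.
    n = data[0]
    return data[1:1 + n]

def fuzz(target, iterations=1000, seed=0):
    """Throw random byte strings at `target`, collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(toy_parser)
print(f"found {len(crashes)} crashing inputs, e.g. {crashes[0]}")
```

Real fuzzers like libFuzzer and AFL are far smarter about generating inputs (coverage guidance, mutation), and ClusterFuzz's job is to run such loops across thousands of machines and deduplicate, report and retest the results.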

ClusterFuzz automates the fuzzing process all the way from bug detection to reporting — and then retesting the fix. The tool itself also uses open-source libraries like the libFuzzer fuzzing engine and the AFL fuzzer to power some of the core fuzzing features that generate the test cases for the tool.

Google says it has used the tool to find more than 16,000 bugs in Chrome and 11,000 bugs in more than 160 open-source projects that used OSS-Fuzz. Since so much of the software testing and deployment toolchain is now generally automated, it’s no surprise that fuzzing is also becoming a hot topic these days (I’ve seen references to “continuous fuzzing” pop up quite a bit recently).

Feb
06
2019
--

Percona Responds to MySQL LOCAL INFILE Security Issues

LOCAL INFILE Security

In this post, we’ll cover Percona’s thoughts about the current MySQL community discussion happening around MySQL LOCAL INFILE security issues.

Some of the detail within this blog post is marked <REDACTED>. I hope to address this shortly (by the end of Feb 2019) and provide complete detail and exploit proof-of-concept code. However, this post is released given the already public discussion of this particular issue, with the exploitation code currently redacted to ensure forks of MySQL client libraries have sufficient time to implement their response strategies.

Check back at the end of the month to see updates to this post!

Background

MySQL’s LOCAL INFILE feature is fully documented by Oracle MySQL, and there is a legitimate use for the LOCAL INFILE feature to upload data to a MySQL server in a single statement from a file on the client system.

However, some MySQL clients can be coerced into sending the contents of files local to the machine they are running on, without having issued a LOCAL INFILE directive. This appears to be linked to how the Adminer PHP web interface was attacked: it was pointed at a MALICIOUSLY crafted MySQL service in order to extract file data from the host on which Adminer was deployed. This malicious “server” has, it would appear, existed since early 2013.

The attack requires the use of a malicious/crafted MySQL “server”, to send a request for the file in place of the expected response to the SQL query in the normal query response flow.

If, however, the client checks for the expected response, there is no file exfiltration without further additional effort. This was noted with Java & ProxySQL testing, as a specific response was expected, and not sending the expected response would cause the client to retry.

I use the term “server” loosely here, as often this is simply a service emulating the MySQL v10 protocol, and does not actually provide complete MySQL interaction capability—though this is theoretically possible, given enough effort or the adaptation of a proxy to carry out this attack whilst backing onto a real MySQL server for the interaction capability.

For example, the “server” always responds OK to any auth attempt, regardless of credentials used, and doesn’t interpret any SQL sent. Consequently, you can send any string as a query, and the “server” responds with the request for a file on the client, which the client dutifully provides if local_infile is enabled.

There is potential, no doubt, for a far more sophisticated “server”. However, in my testing I did not go to this length, and instead produced the bare minimum required to test this theory—which proved to be true where local_infile was enabled.

The attack flow is as follows:

  1. The client connects to the MySQL server and performs MySQL protocol handshaking to agree on capabilities.
  2. Authentication handshake (the “server” often accepts any credentials passed to it).
  3. The client issues a query, e.g. SET NAMES, or other SQL (the “server” ignores this and immediately responds with the file request response in step 4).
  4. The server responds with a packet that is normally reserved for when it has been issued a “LOAD DATA LOCAL INFILE…” SQL statement (0xFB…).
  5. IF vulnerable, the client responds with the full contents of the requested file path, if the file is present on the local file system and permissions allow it to be read.
    1. Client handling here varies: the client may drop the connection with a malformed-packet error, or continue.
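To make step 4 concrete, here is a sketch of what that 0xFB packet looks like on the wire. A MySQL protocol packet is a 3-byte little-endian payload length, a 1-byte sequence number, then the payload; for the file-request packet the payload is the byte 0xFB followed by the path the server wants the client to read. (This is a reconstruction from the public protocol documentation, not the redacted PoC code.)

```python
import struct

def infile_request_packet(path: str, sequence_id: int = 1) -> bytes:
    """Build the file-request packet a malicious 'server' would send
    in place of a normal query response (reconstruction from the
    MySQL protocol docs, not the redacted PoC)."""
    payload = b"\xfb" + path.encode()
    # MySQL packet header: 3-byte little-endian length + sequence id.
    header = struct.pack("<I", len(payload))[:3] + bytes([sequence_id])
    return header + payload

pkt = infile_request_packet("/etc/passwd")
print(pkt.hex(" "))
```

For short paths the header ends in 00 00 followed by the sequence byte 01, and the payload starts with fb, which is exactly the |00 00 01 FB| byte pattern the IDS rule later in this post matches on.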

Exploitation testing

The following MySQL clients were tested via their respective Docker containers and default configurations; the bash script which orchestrated this is as follows: <REDACTED>

This tested the various forks of the MySQL client; along with some manual testing, the results were:

  • Percona Server for MySQL 5.7.24-26 (Not vulnerable)
    • PS 5.7.x aborts after server greeting
  • Percona Server for MySQL 5.6.42-64.2  (Not vulnerable)
    • PS 5.6 accepts the server greeting, proceeds to log in, aborts without handling malicious payload.
  • MariaDB 5.5
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.0
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.1.37
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.4.1
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MySQL 5.7 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes it susceptible
  • MySQL 5.6 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes it susceptible
  • MySQL 8.0.14 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes it susceptible.
  • PHP 7 mysqli
    • Depends on libmysqlclient in use (As PHP’s mysqli is a C wrapper of the underlying library).
  • Ruby
    • Depends on libmysqlclient in use
    • Note: I couldn’t get this to build on my laptop due to a reported syntax error in mysql.c. However, given this wraps libmysqlclient, I would suggest the result to likely mirror PHP’s test.
  • ProxySQL
    • Underlying library is known susceptible to LOCAL INFILE abuse.
    • ProxySQL issues SQL to the backend MySQL server, as well as protocol commands such as PING, and expects specific results for the queries it issues. This makes it difficult for the malicious server to be generic; a targeted service that specifically seeks to exploit ProxySQL is likely possible, however this has not been explored at this time.
  • Java
    • com.mysql.jdbc.Driver
      • As with ProxySQL, testing showed this driver issues “background” SQL and expects a specific response. While it is theoretically possible to have a malicious service target this driver, this has not been explored at this time.
  • Connector/J

There are many more clients out there, ranging from protocol-compatible implementations to wrappers of the underlying C library.

Your own research will help ensure you take appropriate measures, should you choose or need to mitigate this risk in your environment.

Can/Should this be fixed?

This is a particularly tricky issue to correct in code, as the MySQL client needs to be aware of a LOAD DATA LOCAL INFILE SQL statement being sent. MariaDB’s proposed fix implements this. Even then, if a stored procedure issues a file request via LOAD DATA LOCAL INFILE, the client has no awareness of this being needed until the packet arrives with the request, and local_infile can be abused. However, the intent of the feature is to allow data loading, and as such DBAs/admins should seek to employ compensating controls to reduce the risk to their organization:
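MariaDB's proposed fix can be sketched as follows (my paraphrase of the idea described above, not their actual patch): the client remembers whether the last statement it sent could legitimately trigger a file request, and drops the 0xFB packet otherwise.

```python
import re

# Sketch of client-side tracking; a paraphrase of the idea described
# above, not MariaDB's actual patch.
LOAD_LOCAL_RE = re.compile(r"\bLOAD\s+DATA\s+LOCAL\s+INFILE\b", re.IGNORECASE)

class TrackingClient:
    def __init__(self):
        self._local_infile_expected = False

    def send_query(self, sql: str):
        # Remember whether this statement could legitimately cause
        # the server to request a file from us.
        self._local_infile_expected = bool(LOAD_LOCAL_RE.search(sql))

    def on_file_request(self, path: str) -> bool:
        """Called when a 0xFB packet arrives; return True to honor it."""
        if not self._local_infile_expected:
            return False  # unsolicited request: drop it
        self._local_infile_expected = False
        return True

c = TrackingClient()
c.send_query("SET NAMES utf8")
print(c.on_file_request("/etc/passwd"))   # unsolicited request
c.send_query("LOAD DATA LOCAL INFILE 'data.csv' INTO TABLE t")
print(c.on_file_request("data.csv"))      # expected request
```

Note the stored-procedure caveat above: if the file request originates from SQL the client never saw, this tracking cannot help, which is why the compensating controls below still matter.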

Mitigation

  • DO NOT implement any stored procedures which trigger a LOAD DATA LOCAL INFILE.

  • Close/remove/secure access to ANY web admin interfaces.
    • Remember, security through obscurity is no security at all. This only delays time to access, it does not prevent access.
  • Deploy mandatory access controls
    • SELinux, AppArmor, GRSecurity, etc. can all help to ensure your client is not reading anything unexpected, lowering your risk of exposure through proper configuration.
  • Deploy Egress controls on your application nodes to ensure your application server can only reach your MySQL service(s) and does not attempt to connect elsewhere (As the exploit requires a malicious MySQL service).
    • Iptables/firewalld/ufw/pfsense/other firewall/etc.
    • This ensures that your vulnerable clients are not connecting to anything you do not know about.
    • This does not protect against a skilled adversary. Your application needs to communicate out to the internet to serve pages, and running a malicious MySQL service on a suitably high random port can help to “hide” this network traffic.
  • Be aware of Domain Name Service (DNS) rebinding attacks if you are using a Fully Qualified Domain Name (FQDN) to connect between application and database server. Use an IP address or socket in configurations if possible to negate this attack.
  • Deploy MySQL Transport Layer Security (TLS) configuration to ensure the server you expect requires the use of TLS during connection, set your client (if possible) to VERIFY_IDENTITY to ensure TLS “fails closed” if the client fails to negotiate TLS, and to perform basic identity checking of the server being connected to.
    • This will NOT dissuade a determined adversary who has a presence in your network long enough to perform certificate spoofing (in theory), and nothing but time to carry this out.
    • sslstrip-style attacks can also lead to issues if your configuration does “fail open”; as such, it is imperative you have:
      • In my.cnf: ssl-mode=VERIFY_IDENTITY
      • On the CLI: --ssl-mode=VERIFY_IDENTITY
      • Be aware: this performs verification of the CA (Certificate Authority) and the certificate hostname, which can lead to issues if you are using self-signed certificates and the CA is not trusted.
    • This is ONLY an issue if an adversary has the capability to man-in-the-middle your application <-> MySQL traffic;
      • If they have this capability, this feature abuse is only a single avenue of data exfiltration they can perform.
  • Deploy a Network Intrusion Detection System
    • There are many open source software (OSS) options, for example:
    • Set alerts on the logs, curate a response process to handle these alerts.
  • Client option mitigation may be possible; however, this varies from client to client and from underlying library to library.
    • MariaDB client binary.
      • Add to my.cnf: local_infile = 0
      • Or set –local_infile=0 on the command line
    • PHP / Ruby / Anything that relies on libmysqlclient
      • Replace libmysqlclient with a version that does not enable local_infile by default
        • This can be difficult, so ensure you test your process before running anything on production!
      • Switch to use PDO MySQL over MySQLi (the PDO implementation implicitly sets local_infile to 0, at the time of writing, in PHP’s C code).
        • Author’s note: mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, false); failed to mitigate this in testing, YMMV (Your Mileage May Vary).
        • Attempting to set a custom handler to return nothing also failed to mitigate this. Again, YMMV.

IDS Rule example

Here I provide an example “FAST” format rule for your IDS/IPS system.

Note, however, YMMV: this works with Snort and Suricata, and _may_ work with Zeek (formerly Bro), OSSEC, etc. Please test and adapt as needed.

alert tcp any any <> any any (msg:"MySQL LOCAL INFILE request packet detected"; content:"|00 00 01 FB|"; rawbytes;)

Note this is only an example; it will not detect packets flowing over TLS connections.

If you are running an Intrusion Prevention System (IPS), you should change the rule action from alert to drop.

Here the rule is set to any any, as an adversary may choose not to use port 3306 in an attempt to avoid detection. You can, of course, change this as desired to suit your needs.

You must also assess whether your applications are legitimately using local_infile, and conduct your own threat modeling as well as impact analysis, prior to implementing such a rule.

Note that increasing the “noise” threshold for your team will likely only result in your team becoming desensitized to the “noise”, potentially missing an important alert as a result.

For example, you could modify the left and right side any any so that the rule matches traffic between your internal network range and anything outside it:

alert tcp 192.168.1.0/24 any <> !192.168.1.0/24 any (msg:"MySQL LOCAL INFILE request packet detected"; content:"|00 00 01 FB|"; rawbytes;)

Adapting to your environment is key for this IDS rule to be effective.

Further reading

As noted, this issue is already being publicly discussed, so I add links here to sources relevant to this discussion and exploitation.

Exploitation Network flow

<REDACTED>

Thanks

This assessment was not a single-person effort. Here I would like to give thanks to the following individuals who have helped with this investigation:

Willem de Groot – For sharing insights into the Adminer exploitation and for graciously responding to an inquiry from me (this helped me get the PoC working, thank you).

<REDACTED> – original author of <REDACTED> (in 2013!), from which I was able to adapt to function for this investigation.

Ceri Williams – for helping me with proxySQL testing.

Marcelo Altman – for discussing MySQL protocol in depth.

Sergei Golubchik – for responding to my email notice for MariaDB, and implementing a workaround mitigation so quickly, as well as providing me with a notice on the Connector/J announcement URL.

Peter Zaitsev – for linking me to the original reddit discussion and for feedback.

Feb
06
2019
--

Google doubles down on its Asylo confidential computing framework

Last May, Google introduced Asylo, an open-source framework for confidential computing, a technique favored by many of the big cloud vendors because it allows you to set up trusted execution environments that are shielded from the rest of the (potentially untrusted) system. Workloads and their data basically sit in a trusted enclave that adds another layer of protection against network and operating system vulnerabilities.

That’s not a new concept, but, as Google argues, it has been hard to adopt. “Despite this promise, the adoption of this emerging technology has been hampered by dependence on specific hardware, complexity and the lack of an application development tool to run in confidential computing environments,” Google Cloud Engineering Director Jason Garms and Senior Product Manager Nelly Porter write in a blog post today. The promise of the Asylo framework, as you can probably guess, is to make confidential computing easy.

Asylo makes it easier to build applications that can run in these enclaves and can use various software- and hardware-based security back ends like Intel’s SGX and others. Once an app has been ported to support Asylo, you should also be able to take that code with you and run it on any other Asylo-supported enclave.

Right now, though, many of these technologies and practices around confidential computing remain in flux. Google notes there are no set design patterns for building applications that use the Asylo API and run in these enclaves, for example. The different hardware manufacturers also don’t necessarily work together to ensure their technologies are interoperable.

“Together with the industry, we can work toward more transparent and interoperable services to support confidential computing apps, for example, making it easy to understand and verify attestation claims, inter-enclave communication protocols, and federated identity systems across enclaves,” write Garms and Porter.

And to do that, Google is launching its Confidential Computing Challenge (C3) today. The idea here is to have developers create novel use cases for confidential computing — or to advance the current state of the technologies. If you do that and win, you’ll get $15,000 in cash, $5,000 in Google Cloud Platform credits and an undisclosed hardware gift (a Pixelbook or Pixel phone, if I had to guess).

In addition, Google now also offers developers three hands-on labs that teach how to build apps using Asylo’s tools. Those are free for the first month if you use the code in Google’s blog post.

Feb
06
2019
--

vArmour, a security startup focused on multi-cloud deployments, raises $44M

As more organizations move to cloud-based IT architectures, a startup that’s helping them secure that data in an efficient way has raised some capital. vArmour, which provides a platform to help manage security policies across disparate public and private cloud environments in one place, is announcing today that it has raised a growth round of $44 million.

The funding is being led by two VCs that specialise in investments into security startups, AllegisCyber and NightDragon.

CEO Tim Eades said that also participating are “two large software companies” as strategic investors that vArmour works with on a regular basis but asked not to be named. (You might consider that candidates might include some of the big security vendors in the market, as well as the big cloud services providers.) This Series E brings the total raised by vArmour to $127 million.

When asked, Eades said the company would not be disclosing its valuation. That lack of transparency is not uncommon among startups, but is perhaps especially to be expected at a business that operated in stealth for the first several years of its life.

According to PitchBook, vArmour was valued at $420 million when it last raised money, a $41 million round in 2016. That would put the startup’s valuation at $464 million with this round, if everything is growing at a steady pace, or possibly more if investors are keen to tap into what appears to be a growing need.

That growing need might be summarised like this: We’re seeing a huge migration of IT to cloud-based services, with public cloud services set to grow 17.3 percent in 2019. A large part of those deployments — for companies typically larger than 1,000 people — are spread across multiple private and public clouds.

This, in turn, has opened a new front in the battle to secure data amid the rising threat of cybercrime. “We believe that hybrid cloud security is a market valued somewhere between $6 billion and $8 billion at the moment,” said Eades. Cybercrime has been estimated by McAfee to cost businesses $600 billion annually worldwide. Accenture is even more bullish on the impact; it puts the impact on companies at $5.2 trillion over the next five years.

The challenge for many organizations is that they store information and apps across multiple locations — between seven and eight data centers on average for, say, a typical bank, Eades said. And while that may help them hedge bets, save money and reach some efficiencies, that lack of cohesion also opens the door to security loopholes.

“Organizations are deploying multiple clouds for business agility and reduced cost, but the rapid adoption is making it a nightmare for security and IT pros to provide consistent security controls across cloud platforms,” said Bob Ackerman, founder and managing director at AllegisCyber, in a statement. “vArmour is already servicing this need with hundreds of customers, and we’re excited to help vArmour grow to the next stage of development.”

vArmour hasn't developed a security service per se, but it is among the companies — Cisco and others are also competing with it — that are providing a platform to help manage security policies across these disparate locations. That could mean knitting together the different security services delivered in distinct clouds, or taking a single security service and making sure it applies the same policies across disparate locations, or a combination of the two.

In other words, vArmour takes something that is somewhat messy — disparate security policies covering disparate containers and apps — and helps to handle it in a more cohesive way by providing a single place to manage and provision compliance and policies across all of them.

This not only helps to manage the data but can potentially halt a breach by letting an organization put a block in place across multiple environments at once.
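The single-management-plane idea described above can be sketched as follows. This is a purely illustrative toy, not vArmour's actual API: the policy schema, environment names and rule formats are all invented for the example.

```python
# Illustrative sketch (not vArmour's actual API): one shared policy
# definition is rendered into the native rule format of each cloud
# environment, so a single change propagates everywhere at once.

POLICY = {
    "name": "block-db-egress",
    "match": {"app": "billing-db", "direction": "egress"},
    "action": "deny",
}

def render(policy, environment):
    """Translate the shared policy into an environment-specific rule."""
    if environment == "aws":
        # Hypothetical security-group-style rule
        return {"GroupName": policy["match"]["app"],
                "Egress": policy["action"] == "allow"}
    if environment == "on-prem-fw":
        # Hypothetical firewall CLI rule
        return f'deny out app={policy["match"]["app"]}'
    raise ValueError(f"unknown environment: {environment}")

def enforce_everywhere(policy, environments):
    # One management plane, many enforcement points.
    return {env: render(policy, env) for env in environments}

rules = enforce_everywhere(POLICY, ["aws", "on-prem-fw"])
for env, rule in rules.items():
    print(env, "->", rule)
```

The point of the pattern is that blocking a breach becomes one policy edit rather than a per-environment scramble, which is what the "put a block in place across multiple environments" claim amounts to.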

“From my experience, this is an important solution for the cloud security space,” said Dave DeWalt, founder of NightDragon, in a statement. “With security teams now having to manage a multitude of cloud estates and inundated with regulatory mandates, they need a simple solution that’s capable of continuous compliance. We haven’t seen anyone else do this as well as vArmour.”

Eades said that one big change for his company in the last couple of years has been that, as cloud services have grown in popularity, vArmour has been putting in place a self-service version of its main product, the vArmour Application Controller, to better target smaller organizations. It's also been leaning heavily on channel partners (Telstra, which led its previous round, is one strategic partner of this kind) to help with the heavy lifting of sales.

vArmour isn’t disclosing revenues or how many customers it has at the moment, but Eades said that it’s been growing at 100 percent each year for the last two and has “way more than 100 customers,” ranging from hospitals and churches through to “8-10 of the largest service providers and over 25 financial institutions.”

At this rate, he said the plan will be to take the company public in the next couple of years.

Feb
05
2019
--

Backed by Benchmark, Blue Hexagon just raised $31 million for its deep learning cybersecurity software

Nayeem Islam spent nearly 11 years with chipmaker Qualcomm, where he founded its Silicon Valley-based R&D facility, recruited its entire team and oversaw research on all aspects of security, including applying machine learning on mobile devices and in the network to detect threats early.

Islam was nothing if not prolific, developing a system for on-device machine learning for malware detection, libraries for optimizing deep learning algorithms on mobile devices and systems for parallel compute on mobile devices, among other things.

In fact, because of his work, he also saw a big opportunity in better protecting enterprises from cyberthreats through deep neural networks that are capable of processing every raw byte within a file and that can uncover complex relations within data sets. So two years ago, Islam and Saumitra Das, a former Qualcomm engineer with 330 patents to his name and another 450 pending, struck out on their own to create Blue Hexagon, a now 30-person Sunnyvale, Calif.-based company that is today disclosing it has raised $31 million in funding from Benchmark and Altimeter.

The funding comes roughly one year after Benchmark quietly led a $6 million Series A round for the firm.

So what has investors so bullish on the company’s prospects, aside from its credentialed founders? In a word, speed, seemingly. According to Islam, Blue Hexagon has created a real-time, cybersecurity platform that he says can detect known and unknown threats at first encounter, then block them in “sub seconds” so the malware doesn’t have time to spread.

The industry has to move to real-time detection, he says, explaining that four new and unique malware samples are released every second, and arguing that traditional security methods can't keep pace. He says that sandboxes, for example, meaning restricted environments that quarantine cyberthreats and keep them from breaching sensitive files, are no longer state of the art. The same is true of signatures, the byte patterns antivirus tools use to fingerprint known malware; these are bypassed by rapidly evolving new variants whose bytes no longer match the database.
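The limitation of signatures can be seen in a toy contrast between exact pattern matching and a statistical score computed over raw bytes. Everything here is fabricated for illustration; Blue Hexagon's actual product uses deep neural networks over file bytes, not this simple entropy heuristic.

```python
# Toy contrast: signature matching vs. a byte-level statistical score.
# The signatures, samples and threshold are invented for illustration.
import math

KNOWN_SIGNATURES = [b"\xde\xad\xbe\xef"]  # fingerprints of known malware

def signature_match(payload: bytes) -> bool:
    # Misses any variant whose bytes differ from the signature database.
    return any(sig in payload for sig in KNOWN_SIGNATURES)

def byte_entropy(payload: bytes) -> float:
    # Packed/encrypted malware tends toward high byte entropy (max 8.0
    # bits per byte), regardless of which exact bytes it contains.
    counts = [payload.count(b) for b in set(payload)]
    total = len(payload)
    return -sum(c / total * math.log2(c / total) for c in counts)

packed = bytes(range(256)) * 4      # high-entropy payload, no known signature
benign = b"hello world " * 64       # low-entropy text

print(signature_match(packed))      # False: the signature database misses it
print(byte_entropy(packed) > 7.0)   # True: the statistical score flags it
print(byte_entropy(benign) > 7.0)   # False: benign text scores low
```

A learned model generalizes the same way the entropy score does, by scoring properties of the raw bytes rather than matching exact patterns, which is why novel samples can be caught at first encounter.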

Only time will tell if Blue Hexagon is far more capable of identifying and stopping attackers, as Islam insists is the case. It is not the only startup to apply deep learning to cybersecurity, though it's certainly one of the first. Critics, some of whom are protecting their own corporate interests, also worry that hackers can foil security algorithms by targeting the warning flags they look for.

Still, with its technology, its team and its pitch, Blue Hexagon is starting to persuade not only top investors of its merits, but a growing — and broad — base of customers, says Islam. “Everyone has this issue, from large banks, insurance companies, state and local governments. Nowhere do you find someone who doesn’t need to be protected.”

Blue Hexagon can even help customers that are already under attack, Islam says, even if it isn’t ideal. “Our goal is to catch an attack as early in the kill chain as possible. But if someone is already being attacked, we’ll see that activity and pinpoint it and be able to turn it off.”

Some damage may already be done, of course. It’s another reason to plan ahead, he says. “With automated attacks, you need automated techniques.” Deep learning, he insists, “is one way of leveling the playing field against attackers.”

Feb
05
2019
--

BetterCloud can now manage any SaaS application

BetterCloud began life as a way to provide an operations layer for G Suite. More recently, after a platform overhaul, it began layering on a handful of other SaaS applications. Today, the company announced, it is now possible to add any SaaS application to its operations dashboard and monitor usage across applications via an API.

As founder and CEO David Politis explains, a tool like Okta provides a way to authenticate your SaaS app, but once an employee starts using it, BetterCloud gives you visibility into how it’s being used.

“The first order problem was identity, the access, the connections. What we’re doing is we’re solving the second order problem, which is the interactions,” Politis explained. In his view, companies lack the ability to monitor and understand the interactions going on across SaaS applications, as people interact and share information, inside and outside the organization. BetterCloud has been designed to give IT control and security over what is occurring in their environment, he explained.

He says they can provide as much or as little control as a company needs, and they can set controls by application or across a number of applications without actually changing the user’s experience. They do this through a scripting library. BetterCloud comes with a number of scripts and provides log access to give visibility into the scripting activity.

If a customer is looking to use this data more effectively, the solution includes a Graph API for ingesting data and seeing the connections across the data that BetterCloud is collecting. Customers can also set event triggers or actions based on the data being collected as certain conditions are met.

All of this is possible because the company overhauled the platform last year to allow BetterCloud to move beyond G Suite and plug other SaaS applications into it. Today’s announcement is the ultimate manifestation of that capability. Instead of BetterCloud building the connectors, it’s providing an API to let its customers do it.

The company was founded in 2011 and has raised more than $106 million, according to Crunchbase.
