Feb
07
2019
--

Google open sources ClusterFuzz

Google today announced that it is open sourcing ClusterFuzz, a scalable fuzzing tool that can run on clusters with more than 25,000 machines.

The company has long used the tool internally, and if you’ve paid particular attention to Google’s fuzzing efforts (and you have, right?), then this may all seem a bit familiar. That’s because Google launched the OSS-Fuzz service a couple of years ago and that service actually used ClusterFuzz. OSS-Fuzz was only available to open-source projects, though, while ClusterFuzz is now available for anyone to use.

The overall concept behind fuzzing is pretty straightforward: you basically throw lots of data (including random inputs) at your application and see how it reacts. Often, it’ll crash, but sometimes you’ll be able to find memory leaks and security flaws. Once you do this at scale, though, things become more complicated, and you’ll need tools like ClusterFuzz to manage that complexity.
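The core loop is simple enough to sketch. Below is a minimal, purely illustrative mutational fuzzer (the function names and the toy target are mine, and this is not how ClusterFuzz itself works): it truncates and bit-flips a seed input and records every input that makes the target raise an exception.

```python
import random

def buggy_parse(data: bytes) -> int:
    # Toy target with a deliberate bug: it reads byte 4 without a
    # length check, so any input shorter than 5 bytes raises IndexError.
    return data[4]

def fuzz(target, seed: bytes, iterations: int = 200):
    """Mutate `seed` and collect every input that crashes `target`."""
    rng = random.Random(1234)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(iterations):
        data = bytearray(seed)
        # Mutation 1: randomly truncate the input.
        if data and rng.random() < 0.5:
            data = data[: rng.randrange(len(data))]
        # Mutation 2: flip a random bit.
        if data:
            pos = rng.randrange(len(data))
            data[pos] ^= 1 << rng.randrange(8)
        try:
            target(bytes(data))
        except Exception as exc:
            crashes.append((bytes(data), exc))
    return crashes

crashes = fuzz(buggy_parse, seed=b"GOODINPUT")
```

Real fuzzers layer coverage feedback (as libFuzzer and AFL do), corpus management, and crash deduplication on top of this loop; ClusterFuzz automates everything that happens around it.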

ClusterFuzz automates the fuzzing process all the way from bug detection to reporting — and then retesting the fix. The tool itself also uses open-source libraries like the libFuzzer fuzzing engine and the AFL fuzzer to power some of the core fuzzing features that generate the test cases for the tool.

Google says it has used the tool to find more than 16,000 bugs in Chrome and 11,000 bugs in more than 160 open-source projects that used OSS-Fuzz. Since so much of the software testing and deployment toolchain is now generally automated, it’s no surprise that fuzzing is also becoming a hot topic these days (I’ve seen references to “continuous fuzzing” pop up quite a bit recently).

Feb
06
2019
--

Percona Responds to MySQL LOCAL INFILE Security Issues

In this post, we’ll cover Percona’s thoughts about the current MySQL community discussion happening around MySQL LOCAL INFILE security issues.

Some of the detail within this blog post is marked <REDACTED>. I hope to address this shortly (by the end of Feb 2019) and provide complete detail and exploit proof-of-concept code. However, this post is released given the already public discussion of this particular issue, with the exploitation code currently redacted to ensure forks of MySQL client libraries have sufficient time to implement their response strategies.

Check back at the end of the month to see updates to this post!

Background

MySQL’s LOCAL INFILE feature is fully documented by Oracle MySQL, and there is a legitimate use for the LOCAL INFILE feature to upload data to a MySQL server in a single statement from a file on the client system.

However, some MySQL clients can be coerced into sending the contents of files local to the machine they are running on, without a LOCAL INFILE directive having been issued. This appears to be linked to how the Adminer PHP web interface was attacked: it was pointed at a maliciously crafted MySQL service to extract file data from the host on which Adminer was deployed. This malicious “server” has, it would appear, existed since early 2013.

The attack requires a malicious/crafted MySQL “server” that sends a request for a file in place of the expected response to the SQL query in the normal query response flow.

If, however, the client checks for the expected response, there is no file exfiltration without further additional effort. This was noted in Java & ProxySQL testing: a specific response was expected, and not sending the expected response would cause the client to retry.

I use the term “server” loosely here, as often this is simply a service emulating the MySQL v10 protocol, and does not actually provide complete MySQL interaction capability, though this is theoretically possible given enough effort, or the adaptation of a proxy to carry out this attack whilst backing onto a real MySQL server for the interaction capability.

For example, the “server” always responds OK to any auth attempt, regardless of credentials used, and doesn’t interpret any SQL sent. Consequently, you can send any string as a query, and the “server” responds with the request for a file on the client, which the client dutifully provides if local_infile is enabled.

There is potential, no doubt, for a far more sophisticated “server”. However, in my testing I did not go to this length, and instead produced the bare minimum required to test this theory—which proved to be true where local_infile was enabled.

The attack flow is as follows:

  1. The client connects to the MySQL server and performs MySQL protocol handshaking to agree on capabilities.
  2. Authentication handshake (the “server” often accepts any credentials passed to it).
  3. The client issues a query, e.g. SET NAMES or other SQL (the “server” ignores this and immediately responds with the file request response in step 4).
  4. The server responds with a packet that is normally reserved for when it has been issued a “LOAD DATA LOCAL INFILE …” SQL statement (0xFB…).
  5. If vulnerable, the client responds with the full content of the requested file path, if the file is present on the local file system and permissions allow it to be read.
    1. Client handling here varies: the client may drop the connection with a malformed-packet error, or continue.
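Step 4’s packet is straightforward to recognize on the wire: after the standard 4-byte MySQL packet header (3-byte little-endian payload length plus a sequence number), the payload begins with the 0xFB marker, followed by the file path being requested. The following is a small defensive sketch only (it is not the redacted PoC, and it assumes a single, complete packet):

```python
from typing import Optional

def local_infile_request(packet: bytes) -> Optional[str]:
    """If `packet` is a MySQL LOCAL INFILE request (payload beginning
    with the 0xFB marker), return the file path the server is asking
    the client to read; otherwise return None.

    Assumes one complete packet: 3-byte little-endian payload length,
    1-byte sequence number, then the payload."""
    if len(packet) < 5:
        return None
    payload_len = int.from_bytes(packet[:3], "little")
    payload = packet[4:4 + payload_len]
    if payload[:1] == b"\xfb":
        return payload[1:].decode("utf-8", errors="replace")
    return None
```

This byte pattern is also what the IDS rule later in this post keys on with content:"|00 00 01 FB|".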

Exploitation testing

The following MySQL clients were tested via their respective Docker containers with default configurations. The bash script which orchestrated this is as follows: <REDACTED>

This tested the various forks of the MySQL client; along with some manual testing, the results were:

  • Percona Server for MySQL 5.7.24-26 (Not vulnerable)
    • PS 5.7.x aborts after server greeting
  • Percona Server for MySQL 5.6.42-64.2  (Not vulnerable)
    • PS 5.6 accepts the server greeting, proceeds to log in, aborts without handling malicious payload.
  • MariaDB 5.5
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.0
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.1.37
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.4.1
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MySQL 5.7 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes this susceptible
  • MySQL 5.6 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes this susceptible
  • MySQL 8.0.14 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes this susceptible.
  • PHP 7 mysqli
    • Depends on the libmysqlclient in use (as PHP’s mysqli is a C wrapper around the underlying library).
  • Ruby
    • Depends on libmysqlclient in use
    • Note: I couldn’t get this to build on my laptop due to a reported syntax error in mysql.c. However, given this wraps libmysqlclient, I would expect the result to mirror PHP’s.
  • ProxySQL
    • The underlying library is known to be susceptible to LOCAL INFILE abuse.
    • ProxySQL issues SQL to the backend MySQL server, as well as protocol commands such as PING, and expects specific results for the queries it issues. This makes it difficult for the malicious server to be generic; a malicious server that specifically targets ProxySQL is likely possible, however this has not been explored at this time.
  • Java
    • com.mysql.jdbc.Driver
      • As with ProxySQL, in testing this driver issues “background” SQL and expects a specific response. While it is theoretically possible to have a malicious service target this driver, this has not been explored at this time.
  • Connector/J

There are many more clients out there, ranging from protocol-compatible implementations to wrappers of the underlying C library.

Conduct your own research to ensure you are taking appropriate measures, should you choose or need to mitigate this risk in your controls.

Can/Should this be fixed?

This is a particularly tricky issue to correct in code, as the MySQL client needs to be aware that a LOAD DATA LOCAL INFILE SQL statement was sent. MariaDB’s proposed fix implements this. Even then, if a stored procedure issues a file request via LOAD DATA LOCAL INFILE ..., the client has no awareness of this being needed until the packet with the request is received, and local_infile can be abused. However, the intent is to allow the feature to load data, and as such DBAs/Admins should seek to employ compensating controls to reduce the risk to their organization:

Mitigation

  • DO NOT implement any stored procedures which trigger a LOAD DATA LOCAL INFILE.

  • Close/remove/secure access to ANY web admin interfaces.
    • Remember, security through obscurity is no security at all. This only delays access; it does not prevent it.
  • Deploy mandatory access controls
    • SELinux, AppArmor, GRSecurity, etc. can all help to ensure your client is not reading anything unexpected, lowering your risk of exposure through proper configuration.
  • Deploy egress controls on your application nodes to ensure your application server can only reach your MySQL service(s) and does not attempt to connect elsewhere (as the exploit requires a malicious MySQL service).
    • Iptables/firewalld/ufw/pfsense/other firewall/etc.
    • This ensures that your vulnerable clients are not connecting to anything you do not know about.
    • This does not protect against a skilled adversary. Your application needs to communicate out to the internet to serve pages, and running a malicious MySQL service on a suitably high random port can help “hide” this network traffic.
  • Be aware of Domain Name Service (DNS) rebinding attacks if you are using a Fully Qualified Domain Name (FQDN) to connect between application and database server. Use an IP address or socket in configurations if possible to negate this attack.
  • Deploy a MySQL Transport Layer Security (TLS) configuration to ensure the server you expect requires the use of TLS during connection. Set your client (if possible) to VERIFY_IDENTITY so that TLS “fails closed” if the client fails to negotiate TLS, and to perform basic identity checking of the server being connected to.
    • This will NOT dissuade a determined adversary who has a presence in your network long enough to perform certificate spoofing (in theory), and nothing but time to carry this out.
    • mysslstrip can also lead to issues if your configuration “fails open”; as such, it is imperative you have:
      • In my.cnf: ssl-mode=VERIFY_IDENTITY
      • On the CLI: --ssl-mode=VERIFY_IDENTITY
      • Be aware: this performs verification of the CA (Certificate Authority) and the certificate hostname, which can lead to issues if you are using self-signed certificates and the CA is not trusted.
    • This is ONLY an issue if an adversary has the capability to man-in-the-middle your application <-> MySQL traffic.
      • If they have this capability, this feature abuse is only a single avenue of data exfiltration they can perform.
  • Deploy a Network Intrusion Detection System
    • There are many open source software (OSS) options, for example:
    • Set alerts on the logs, curate a response process to handle these alerts.
  • Client option mitigation may be possible; however, this varies from client to client and from underlying library to library.
    • MariaDB client binary.
      • Add to my.cnf: local_infile = 0
      • Or set –local_infile=0 on the command line
    • PHP / Ruby / Anything that relies on libmysqlclient
      • Replace libmysqlclient with a version that does not enable local_infile by default
        • This can be difficult, so ensure you test your process before running anything on production!
      • Switch to PDO MySQL over MySQLi (the PDO implementation implicitly sets local_infile to 0, at the time of writing, in PHP’s C code).
        • Author’s note: mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, false); failed to mitigate this in testing, YMMV (Your Mileage May Vary).
        • Attempting to set a custom handler to return nothing also failed to mitigate this. Again, YMMV.
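The my.cnf settings recommended above lend themselves to automated checking. Below is a minimal, hypothetical audit helper (the function name and the treatment of defaults are my own; this is not a Percona tool) that flags a [client] section missing the two hardened settings discussed in this post:

```python
import configparser

def audit_client_cnf(text: str) -> list:
    """Flag my.cnf [client] settings relevant to this issue.

    A sketch only: real my.cnf files may use !include directives and
    other syntax configparser does not handle, and defaults for
    local_infile vary by client version -- here an absent setting is
    conservatively treated as enabled."""
    cp = configparser.ConfigParser(allow_no_value=True)
    cp.read_string(text)
    client = cp["client"] if "client" in cp else {}
    warnings = []
    if client.get("local_infile", "1").strip() != "0":
        warnings.append("local_infile is not explicitly disabled (set local_infile = 0)")
    # MySQL option files accept both spellings of the option name.
    ssl_mode = client.get("ssl-mode", client.get("ssl_mode", ""))
    if ssl_mode.strip().upper() != "VERIFY_IDENTITY":
        warnings.append("ssl-mode is not VERIFY_IDENTITY (TLS may fail open)")
    return warnings
```

For example, a hardened [client] section with local_infile = 0 and ssl-mode = VERIFY_IDENTITY produces no warnings, while an empty one produces both.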

IDS Rule example

Here I provide an example “FAST”-format rule for your IDS/IPS system.

Note, however: YMMV. This works with Snort and Suricata, and _may_ work with Zeek (formerly Bro), OSSEC, etc. Please test and adapt as needed:

alert tcp any any <> any any (msg:"MySQL LOCAL INFILE request packet detected"; content:"|00 00 01 FB|"; rawbytes; sid:1000001; rev:1;)

Note this is only an example; it doesn’t detect packets flowing over TLS connections.

If you are running an Intrusion Prevention System (IPS), you should change the rule action from alert to drop.

Here the rule is set to any any, as an adversary may avoid using port 3306 in an attempt to evade detection. You can, of course, change this as desired to suit your needs.

You must also assess if your applications are running local_infile legitimately and conduct your own threat modeling as well as impact analysis, prior to implementing such a rule.

Note that increasing the “noise” level for your team will likely only result in your team becoming desensitized to it and potentially missing an important alert as a result.

For example, you could modify the left and right side any any so that the rule matches traffic from your internal network range to anything outside that range:

alert tcp 192.168.1.0/24 any <> !192.168.1.0/24 any (msg:"MySQL LOCAL INFILE request packet detected"; content:"|00 00 01 FB|"; rawbytes; sid:1000002; rev:1;)

Adapting to your environment is key for this IDS rule to be effective.

Further reading

As noted, this issue is already being publicly discussed, so I add links here to sources relevant to this discussion and its exploitation.

Exploitation Network flow

<REDACTED>

Thanks

This assessment was not a single-person effort. Here I would like to link to, and give thanks where appropriate to, the following individuals who have helped with this investigation:

Willem de Groot – For sharing insights into the Adminer exploitation and for graciously responding to an inquiry from me (this helped me get the PoC working, thank you).

<REDACTED> – original author of <REDACTED> (in 2013!), from which I was able to adapt to function for this investigation.

Ceri Williams – for helping me with ProxySQL testing.

Marcelo Altman – for discussing MySQL protocol in depth.

Sergei Golubchik – for responding to my email notice for MariaDB and implementing a workaround mitigation so quickly, as well as providing me with notice of the Connector/J announcement URL.

Peter Zaitsev – for linking me to the original reddit discussion and for feedback.

Feb
06
2019
--

Google doubles down on its Asylo confidential computing framework

Last May, Google introduced Asylo, an open-source framework for confidential computing, a technique favored by many of the big cloud vendors because it allows you to set up trusted execution environments that are shielded from the rest of the (potentially untrusted) system. Workloads and their data basically sit in a trusted enclave that adds another layer of protection against network and operating system vulnerabilities.

That’s not a new concept, but, as Google argues, it has been hard to adopt. “Despite this promise, the adoption of this emerging technology has been hampered by dependence on specific hardware, complexity and the lack of an application development tool to run in confidential computing environments,” Google Cloud Engineering Director Jason Garms and Senior Product Manager Nelly Porter write in a blog post today. The promise of the Asylo framework, as you can probably guess, is to make confidential computing easy.

Asylo makes it easier to build applications that can run in these enclaves and can use various software- and hardware-based security back ends like Intel’s SGX and others. Once an app has been ported to support Asylo, you should also be able to take that code with you and run it on any other Asylo-supported enclave.

Right now, though, many of these technologies and practices around confidential computing remain in flux. Google notes there are no set design patterns for building applications that use the Asylo API and run in these enclaves, for example. The different hardware manufacturers also don’t necessarily work together to ensure their technologies are interoperable.

“Together with the industry, we can work toward more transparent and interoperable services to support confidential computing apps, for example, making it easy to understand and verify attestation claims, inter-enclave communication protocols, and federated identity systems across enclaves,” write Garms and Porter.

And to do that, Google is launching its Confidential Computing Challenge (C3) today. The idea here is to have developers create novel use cases for confidential computing — or to advance the current state of the technologies. If you do that and win, you’ll get $15,000 in cash, $5,000 in Google Cloud Platform credits and an undisclosed hardware gift (a Pixelbook or Pixel phone, if I had to guess).

In addition, Google now also offers developers three hands-on labs that teach how to build apps using Asylo’s tools. Those are free for the first month if you use the code in Google’s blog post.

Feb
06
2019
--

vArmour, a security startup focused on multi-cloud deployments, raises $44M

As more organizations move to cloud-based IT architectures, a startup that’s helping them secure that data in an efficient way has raised some capital. vArmour, which provides a platform to help manage security policies across disparate public and private cloud environments in one place, is announcing today that it has raised a growth round of $44 million.

The funding is being led by two VCs that specialise in investments into security startups, AllegisCyber and NightDragon.

CEO Tim Eades said that also participating are “two large software companies” as strategic investors that vArmour works with on a regular basis but asked not to be named. (You might consider that candidates might include some of the big security vendors in the market, as well as the big cloud services providers.) This Series E brings the total raised by vArmour to $127 million.

When asked, Eades said the company would not be disclosing its valuation. That lack of transparency is not uncommon among startups, but it is perhaps especially to be expected at a business that operated in stealth for the first several years of its life.

According to PitchBook, vArmour was valued at $420 million when it last raised money, a $41 million round in 2016. That would put the startup’s valuation at $464 million with this round, if everything is growing at a steady pace, or possibly more if investors are keen to tap into what appears to be a growing need.

That growing need might be summarised like this: We’re seeing a huge migration of IT to cloud-based services, with public cloud services set to grow 17.3 percent in 2019. A large part of those deployments — for companies typically larger than 1,000 people — are spread across multiple private and public clouds.

This, in turn, has opened a new front in the battle to secure data amid the rising threat of cybercrime. “We believe that hybrid cloud security is a market valued somewhere between $6 billion and $8 billion at the moment,” said Eades. Cybercrime has been estimated by McAfee to cost businesses $600 billion annually worldwide. Accenture is even more bullish on the impact; it puts the impact on companies at $5.2 trillion over the next five years.

The challenge for many organizations is that they store information and apps across multiple locations — between seven and eight data centers on average for, say, a typical bank, Eades said. And while that may help them hedge bets, save money and reach some efficiencies, that lack of cohesion also opens the door to security loopholes.

“Organizations are deploying multiple clouds for business agility and reduced cost, but the rapid adoption is making it a nightmare for security and IT pros to provide consistent security controls across cloud platforms,” said Bob Ackerman, founder and managing director at AllegisCyber, in a statement. “vArmour is already servicing this need with hundreds of customers, and we’re excited to help vArmour grow to the next stage of development.”

vArmour hasn’t developed a security service per se, but it is among the companies — Cisco and others are also competing with it — that are providing a platform to help manage security policies across these disparate locations. That could either mean working on knitting together different security services as delivered in distinct clouds, or taking a single security service and making sure it works the same policies across disparate locations, or a combination of both of those.

In other words, vArmour takes something that is somewhat messy — disparate security policies covering disparate containers and apps — and helps to handle it in a more cohesive and neat way by providing a single way to manage and provision compliance and policies across all of them.

This not only helps to manage the data but potentially can help halt a breach by letting an organization put a stop in place across multiple environments.

“From my experience, this is an important solution for the cloud security space,” said Dave DeWalt, founder of NightDragon, in a statement. “With security teams now having to manage a multitude of cloud estates and inundated with regulatory mandates, they need a simple solution that’s capable of continuous compliance. We haven’t seen anyone else do this as well as vArmour.”

Eades said that one big change for his company in the last couple of years has been that, as cloud services have grown in popularity, vArmour has been putting in place a self-service version of the main product, the vArmour Application Controller, to better target smaller organizations. It’s also been leaning heavily on channel partners (Telstra, which led its previous round, is one strategic partner of this kind) to help with the heavy lifting of sales.

vArmour isn’t disclosing revenues or how many customers it has at the moment, but Eades said that it’s been growing at 100 percent each year for the last two and has “way more than 100 customers,” ranging from hospitals and churches through to “8-10 of the largest service providers and over 25 financial institutions.”

At this rate, he said the plan will be to take the company public in the next couple of years.

Feb
05
2019
--

Backed by Benchmark, Blue Hexagon just raised $31 million for its deep learning cybersecurity software

Nayeem Islam spent nearly 11 years with chipmaker Qualcomm, where he founded its Silicon Valley-based R&D facility, recruited its entire team and oversaw research on all aspects of security, including applying machine learning on mobile devices and in the network to detect threats early.

Islam was nothing if not prolific, developing a system for on-device machine learning for malware detection, libraries for optimizing deep learning algorithms on mobile devices and systems for parallel compute on mobile devices, among other things.

In fact, because of his work, he also saw a big opportunity in better protecting enterprises from cyberthreats through deep neural networks that are capable of processing every raw byte within a file and that can uncover complex relations within data sets. So two years ago, Islam and Saumitra Das, a former Qualcomm engineer with 330 patents to his name and another 450 pending, struck out on their own to create Blue Hexagon, a now 30-person Sunnyvale, Calif.-based company that is today disclosing it has raised $31 million in funding from Benchmark and Altimeter.

The funding comes roughly one year after Benchmark quietly led a $6 million Series A round for the firm.

So what has investors so bullish on the company’s prospects, aside from its credentialed founders? In a word, speed, seemingly. According to Islam, Blue Hexagon has created a real-time, cybersecurity platform that he says can detect known and unknown threats at first encounter, then block them in “sub seconds” so the malware doesn’t have time to spread.

The industry has to move to real-time detection, he says, explaining that four new and unique malware samples are released every second, and arguing that traditional security methods can’t keep pace. He says that sandboxes, for example, meaning restricted environments that quarantine cyberthreats and keep them from breaching sensitive files, are no longer state of the art. The same is true of signatures, which are mathematical techniques used to validate the authenticity and integrity of a message, software or digital document but are being bypassed by rapidly evolving new malware.

Only time will tell if Blue Hexagon is far more capable of identifying and stopping attackers, as Islam insists is the case. It is not the only startup to apply deep learning to cybersecurity, though it’s certainly one of the first. Critics, some who are protecting their own corporate interests, also worry that hackers can foil security algorithms by targeting the warning flags they look for.

Still, with its technology, its team and its pitch, Blue Hexagon is starting to persuade not only top investors of its merits, but a growing — and broad — base of customers, says Islam. “Everyone has this issue, from large banks, insurance companies, state and local governments. Nowhere do you find someone who doesn’t need to be protected.”

Blue Hexagon can even help customers that are already under attack, Islam says, even if it isn’t ideal. “Our goal is to catch an attack as early in the kill chain as possible. But if someone is already being attacked, we’ll see that activity and pinpoint it and be able to turn it off.”

Some damage may already be done, of course. It’s another reason to plan ahead, he says. “With automated attacks, you need automated techniques.” Deep learning, he insists, “is one way of leveling the playing field against attackers.”

Feb
05
2019
--

BetterCloud can now manage any SaaS application

BetterCloud began life as a way to provide an operations layer for G Suite. More recently, after a platform overhaul, it began layering on a handful of other SaaS applications. Today, the company announced, it is now possible to add any SaaS application to its operations dashboard and monitor usage across applications via an API.

As founder and CEO David Politis explains, a tool like Okta provides a way to authenticate your SaaS app, but once an employee starts using it, BetterCloud gives you visibility into how it’s being used.

“The first order problem was identity, the access, the connections. What we’re doing is we’re solving the second order problem, which is the interactions,” Politis explained. In his view, companies lack the ability to monitor and understand the interactions going on across SaaS applications, as people interact and share information, inside and outside the organization. BetterCloud has been designed to give IT control and security over what is occurring in their environment, he explained.

He says they can provide as much or as little control as a company needs, and they can set controls by application or across a number of applications without actually changing the user’s experience. They do this through a scripting library. BetterCloud comes with a number of scripts and provides log access to give visibility into the scripting activity.

If a customer is looking to use this data more effectively, the solution includes a Graph API for ingesting data and seeing the connections across the data that BetterCloud is collecting. Customers can also set event triggers or actions based on the data being collected as certain conditions are met.

All of this is possible because the company overhauled the platform last year to allow BetterCloud to move beyond G Suite and plug other SaaS applications into it. Today’s announcement is the ultimate manifestation of that capability. Instead of BetterCloud building the connectors, it’s providing an API to let its customers do it.

The company was founded in 2011 and has raised more than $106 million, according to Crunchbase.

Jan
26
2019
--

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks, as France’s privacy regulator placed a record fine on Google under Europe’s General Data Protection Regulation (GDPR) rules, which the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include:

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: Arman.Tabatabai@techcrunch.com.


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more ‘privacy.’”

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies is a good thing, but it doesn’t solve the privacy paradox that those same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), or that the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to reduce how much they are tracked online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of the rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C. based advocate group for the high tech sector.  Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches like the framework ITI released last fall must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and reading and parsing as many different policies and settings. No individual has the time nor capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, but not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

With the EU GDPR in effect since last May (setting a standard other nations are emulating), with the adoption of a highly regulatory and broadly applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come for Congress to adopt a comprehensive federal privacy law.

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with highly developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.

Jan
25
2019
--

Vodafone pauses Huawei network supply purchases in Europe

Huawei had a very good 2018, and it’s likely to have a very good 2019, as well. But there’s one little thing that keeps putting a damper on the hardware maker’s global expansion plans. The U.S. and Canada have already taken action over the company’s perceived link to the Chinese government, and now Vodafone is following suit over concerns that other countries may join. 

The U.K.-based telecom giant announced this week that it’s enacting a temporary halt on purchases from the Chinese hardware maker. The move arrives out of concern that additional countries may ban Huawei products, putting the world’s second largest carrier in a tricky spot as it works to roll out 5G networks across the globe.

For now, the move is focused on European markets. As The Wall Street Journal notes, there remains some possibility that Vodafone could go forward with Huawei networking gear in other markets, including India, Turkey and parts of Africa. In Europe, however, the pause could ultimately raise the cost of, and/or delay, its planned 5G push.

“We have decided to pause further Huawei in our core whilst we engage with the various agencies and governments and Huawei just to finalize the situation, of which I feel Huawei is really open and working hard,” Vodafone CEO Nick Read said in a statement.

Huawei has continued to deny all allegations related to Chinese government spying.

Jan
23
2019
--

Anchorage emerges with $17M from a16z for ‘omnimetric’ crypto security

I’m not allowed to tell you exactly how Anchorage keeps rich institutions from being robbed of their cryptocurrency, but the off-the-record demo was damn impressive. Judging by the $17 million Series A this security startup raised last year led by Andreessen Horowitz and joined by Khosla Ventures, #Angels, Max Levchin, Elad Gil, Mark McCombe of Blackrock and AngelList’s Naval Ravikant, I’m not the only one who thinks so. In fact, crypto funds like Andreessen’s a16z crypto, Paradigm and Electric Capital are already using it.

They’re trusting in the guys who engineered Square’s first encrypted card reader and Docker’s security protocols. “It’s less about us choosing this space and more about this space choosing us. If you look at our backgrounds and you look at the problem, it’s like the universe handed us on a silver platter the Venn diagram of our skill set,” co-founder Diogo Monica tells me.

Today, Anchorage is coming out of stealth and launching its cryptocurrency custody service to the public. Anchorage holds and safeguards crypto assets for institutions like hedge funds and venture firms, and only allows transactions verified by an array of biometrics, behavioral analysis and human reviewers. And because it doesn’t use “buried in the backyard” cold storage, asset holders can actually earn rewards and advantages for participating in coin-holder votes without fear of getting their Bitcoin, Ethereum or other coins stolen.

The result is a crypto custody service that could finally lure big-time commercial banks, endowments, pensions, mutual funds and hedgies into the blockchain world. Whether they seek short-term gains off of crypto volatility or want to HODL long-term while participating in coin governance, Anchorage promises to protect them.

Evolving past “pirate security”

Anchorage’s story starts eight years ago when Monica and his co-founder Nathan McCauley met after joining Square the same week. Monica had been getting a PhD in distributed systems while McCauley designed anti-reverse engineering tech to keep U.S. military data from being extracted from abandoned tanks or jets. After four years of building systems that would eventually move more than $80 billion per year in credit card transactions, they packaged themselves as a “pre-product acqui-hire” Monica tells me, and they were snapped up by Docker.

As their reputation grew from work and conference keynotes, cryptocurrency funds started reaching out for help with custody of their private keys. One had lost a passphrase and the $1 million in currency it was protecting in a display of jaw-dropping ignorance. The pair realized there were no true standards in crypto custody, so they got to work on Anchorage.

“You look at the status quo and it was and still is cold storage. It’s the same technology used by pirates in the 1700s,” Monica explains. “You bury your crypto in a treasure chest and then you make a treasure map of where those gold coins are,” except with USB keys, security deposit boxes and checklists. “We started calling it Pirate Custody.” Anchorage set out to develop something better — a replacement for usernames and passwords or even phone numbers and two-factor authentication that could be misplaced or hijacked.

This led them to Andreessen Horowitz partner and a16z crypto leader Chris Dixon, who’s now on their board. “We’ve been buying crypto assets running back to Bitcoin for years now here at a16z crypto. [Once you’re holding crypto,] it’s hard to do it in a way that’s secure, regulatory compliant, and lets you access it. We felt this pain point directly.”

Andreessen Horowitz partner and Anchorage board member Chris Dixon

It’s at this point in the conversation when Monica and McCauley give me their off-the-record demo. While there are no screenshots to share, the enterprise security suite they’ve built has the polish of a consumer app like Robinhood. What I can say is that Anchorage works with clients to whitelist employees’ devices. It then uses multiple types of biometric signals and behavioral analytics about the person and device trying to log in to verify their identity.

But even once they have access, Anchorage is built around quorum-based approvals. Withdrawals, other transactions and even changes to employee permissions require approval from multiple users inside the client company. They could set up Anchorage so it requires five of seven executives’ approval to pull out assets. And finally, outlier-detection algorithms and a human reviewer check the transaction to make sure it looks legit. A hacker or rogue employee can’t steal the funds even if they’re logged in, because they would still need a consensus of approvals.
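The quorum mechanism described above is, at its core, an m-of-n approval check. A minimal sketch in Python makes the idea concrete (the class, field and method names here are illustrative assumptions, not Anchorage’s actual API):

```python
from dataclasses import dataclass, field


@dataclass
class QuorumPolicy:
    """Hypothetical m-of-n approval policy, e.g. 5 of 7 executives."""
    threshold: int                                   # approvals required (m)
    approvers: frozenset                             # authorized users (n of them)
    approvals: set = field(default_factory=set)      # who has approved so far

    def approve(self, user: str) -> None:
        # Only whitelisted users may approve; a duplicate approval counts once
        # because approvals is a set.
        if user not in self.approvers:
            raise PermissionError(f"{user} is not an authorized approver")
        self.approvals.add(user)

    def satisfied(self) -> bool:
        # The transaction may proceed only once m distinct approvers agree.
        return len(self.approvals) >= self.threshold


policy = QuorumPolicy(
    threshold=5,
    approvers=frozenset(["ceo", "cfo", "cto", "coo", "ciso", "gc", "vp_eng"]),
)
for exec_name in ["ceo", "cfo", "cto", "coo"]:
    policy.approve(exec_name)
print(policy.satisfied())  # False — only four of the required five approvals
policy.approve("ciso")
print(policy.satisfied())  # True — 5-of-7 quorum reached
```

A single compromised login never satisfies the policy on its own, which is exactly the property the article describes: theft requires collusion across the quorum, not one stolen credential.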

That kind of assurance means institutional investors can confidently start to invest in crypto assets. That swell of capital could help replace the retreating consumer investors who’ve fled the market this year, leading to massive price drops. The liquidity provided by these asset managers could keep the whole blockchain industry moving. “Institutional investing has had centuries to build up a set of market infrastructure. Custody was something that for other asset classes was solved hundreds of years ago, so it’s just now catching up [for crypto],” says McCauley. “We’re creating a bigger market in and of itself,” Monica adds.

With Anchorage steadfastly handling custody, the risk these co-founders admit worries them lies in the smart contracts that govern the cryptocurrencies themselves. “We need to be extremely wide in our level of support and extremely deep because each blockchain has details of implementation. This is inherently a very difficult problem,” McCauley explains. It doesn’t matter if the coins are safe in Anchorage’s custody if a janky smart contract can botch their transfer.

There are plenty of startups vying to offer crypto custody, ranging from Bitgo and Ledger to well-known names like Coinbase and Gemini. Yet Anchorage offers a rare combination of institutional-since-day-one security rigor with the ability to participate in votes and governance of crypto assets that’s impossible if they’re in cold storage. Down the line, Anchorage hints that it might serve clients recommendations for how to vote to maximize their yield and preserve the sanctity of their coin.

They’ll have crypto investment legend Chris Dixon on their board to guide them. “What you’ll see is in the same way that institutional investors want to buy stock in Facebook and Google and Netflix, they’ll want to buy the equivalent in the world 10 years from now and do that safely,” Dixon tells me. “Anchorage will be that layer for them.”

But why do the Anchorage founders care so much about the problem? McCauley concludes that, “When we look at what’s potentially possible with crypto, there’s a fundamentally more accessible economy. We view ourselves as a key component of bringing that future forward.”

Jan
15
2019
--

Microsoft continues to build government security credentials ahead of JEDI decision

While the DoD is in the process of reviewing the $10 billion JEDI cloud contract RFPs (assuming the work continues during the government shutdown), Microsoft continues to build up its federal government security bona fides, regardless.

Today the company announced it has achieved the highest level of federal government clearance for the Outlook mobile app, allowing US Government Community Cloud (GCC) High and Department of Defense employees to use the mobile app. This is on top of the FedRAMP compliance the company achieved last year.

“To meet the high level of government security and compliance requirements, we updated the Outlook mobile architecture so that it establishes a direct connection between the Outlook mobile app and the compliant Exchange Online backend services using a native Microsoft sync technology and removes middle tier services,” the company wrote in a blog post announcing the update.

The update will allow these highly security-conscious employees to access some of the more recent updates to Outlook Mobile, such as the ability to add a comment when canceling an event.

This is in line with government security updates the company made last year. While none of these changes are specifically designed to help win the $10 billion JEDI cloud contract, they certainly help make a case for Microsoft from a technology standpoint.

As Microsoft corporate vice president for Azure Julia White wrote in a blog post last year, which we covered, “Moving forward, we are simplifying our approach to regulatory compliance for federal agencies, so that our government customers can gain access to innovation more rapidly.” The Outlook Mobile release is clearly in line with that.

Today’s announcement comes after the Pentagon announced just last week that it has awarded Microsoft a separate large contract for $1.7 billion. This involves providing Microsoft Enterprise Services for the Department of Defense (DoD), Coast Guard and the intelligence community, according to a statement from DoD.

All of this comes ahead of a decision on the massive $10 billion, winner-take-all cloud contract. Final RFPs were submitted in October and the DoD is expected to make a decision in April. The process has not been without controversy, with Oracle and IBM submitting formal protests even before the RFP deadline — and more recently, Oracle filing a lawsuit alleging the contract terms violate federal procurement laws. Oracle has been particularly concerned that the contract was designed to favor Amazon, a point the DoD has repeatedly denied.
