Jun 10, 2019
--

Apple is making corporate ‘BYOD’ programs less invasive to user privacy

When people bring their own devices to work or school, they don’t want IT administrators to manage the entire device. But until now, Apple offered only two ways for IT to manage its iOS devices: device enrollments, which give admins device-wide management capabilities, or those same capabilities combined with an automated setup process. At Apple’s Worldwide Developer Conference last week, the company announced plans to introduce a third method: user enrollments.

This new MDM (mobile device management) enrollment option is meant to better balance IT’s need to protect sensitive corporate data and manage the software and settings available to users with users’ need to keep their private personal data separate from IT oversight.

According to Apple, when both users’ and IT’s needs are in balance, users are more likely to accept a corporate “bring your own device” (BYOD) program — something that can ultimately save the business money that doesn’t have to be invested in hardware purchases.

The new user enrollments option for MDM has three components: a managed Apple ID that sits alongside the personal ID; cryptographic separation of personal and work data; and a limited set of device-wide management capabilities for IT.

The managed Apple ID will be the user’s work identity on the device, and is created by the admin in either Apple School Manager or Apple Business Manager — depending on whether this is for a school or a business. The user signs into the managed Apple ID during the enrollment process.

From that point forward until the enrollment ends, the company’s managed apps and accounts will use the managed Apple ID’s iCloud account.

Meanwhile, the user’s personal apps and accounts will use the personal Apple ID’s iCloud account, if one is signed into the device.

Third-party apps run in either managed or unmanaged mode. Users can’t switch an app between modes or run it in both modes at the same time. However, some built-in apps like Notes will be account-based, meaning the app will use the appropriate Apple ID — either the managed one or the personal one — depending on which account the user is working in at the time.

To separate work data from personal data, iOS creates a managed APFS volume at the time of enrollment. The volume uses separate cryptographic keys, which are destroyed along with the volume itself when the enrollment period ends. (iOS has always removed managed data when an enrollment ends, but this is a cryptographic backstop in case anything goes wrong during unenrollment, the company explained.)
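Apple hasn’t published the implementation details, but the principle behind this kind of cryptographic backstop (sometimes called "crypto-shredding") can be sketched in a few lines. This is a toy construction for illustration only; the key derivation and cipher here are invented, not Apple’s:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: hash the key with a counter (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypt or decrypt by XOR with the keystream (symmetric)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The managed volume's contents are encrypted under a per-volume key.
volume_key = secrets.token_bytes(32)
ciphertext = xor_crypt(b"confidential work notes", volume_key)

# While the key exists, the data is recoverable.
assert xor_crypt(ciphertext, volume_key) == b"confidential work notes"

# Destroying the key is the "cryptographic backstop": any ciphertext
# blocks that survive unenrollment are now permanently unreadable.
volume_key = None
```

The point is that deleting a 32-byte key is instant and reliable, whereas guaranteeing every block of data has been scrubbed from flash storage is not.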

The managed volume will host the local data stored by any managed third-party apps along with the managed data from the Notes app. It also will house a managed keychain that stores secure items like passwords and certificates; the authentication credentials for managed accounts; and mail attachments and full email bodies.

The system volume does host a central database for mail, including some metadata and five-line previews, but this is removed as well when the enrollment ends.

Users’ personal apps and their data can’t be managed by the IT admin, so they’re never at risk of having their data read or erased.

And unlike device enrollments, user enrollments don’t provide a UDID or any other persistent identifier to the admin. Instead, iOS creates a new identifier called the “enrollment ID,” which is used for all communications with the MDM server and is destroyed when the enrollment ends.

Apple also noted that one of the big reasons users fear corporate BYOD programs is because they think the IT admin will erase their entire device when the enrollment ends — including their personal apps and data.

To address this concern, MDM queries under user enrollment return only managed results.

In practice, that means IT can’t even find out what personal apps are installed on the device — something that can feel like an invasion of privacy to end users. (This feature will be offered for device enrollments, too.) And because IT doesn’t know which personal apps are installed, it also can’t restrict certain apps’ use.
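In effect, this is a filter on inventory responses: the server only ever sees the managed subset. A minimal sketch in Python (the field names and data structure here are invented for illustration; the real MDM protocol uses plist-based payloads):

```python
installed_apps = [
    {"bundle_id": "com.acme.sales", "managed": True},     # pushed by MDM
    {"bundle_id": "com.example.game", "managed": False},  # personal app
]

def app_inventory_response(apps):
    # Under user enrollment, only managed apps appear in query results;
    # the admin never learns which personal apps are installed.
    return [a["bundle_id"] for a in apps if a["managed"]]

print(app_inventory_response(installed_apps))  # ['com.acme.sales']
```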

User enrollments will also not support the “erase device” command — and they don’t have to, because IT will know the sensitive data and emails are gone. There’s no need for a full device wipe.

Similarly, the Exchange Server can’t send its remote wipe command — just the account-only remote wipe to remove the managed data.

Another new feature related to user enrollments is how traffic for managed accounts is guided through the corporate VPN. Using the per-app VPN feature, traffic from the Mail, Contacts and Calendars built-in apps will only go through the VPN if the domains match that of the business. For example, mail.acme.com can pass through the VPN, but not mail.aol.com. In other words, the user’s personal mail remains private.
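The domain check Apple describes can be thought of as suffix matching against an admin-configured list of corporate domains. A rough sketch (the configuration format here is hypothetical, not Apple’s actual per-app VPN payload):

```python
CORPORATE_DOMAINS = {"acme.com"}  # hypothetical admin-configured list

def routes_through_vpn(host: str) -> bool:
    # Match the corporate domain itself or any of its subdomains.
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in CORPORATE_DOMAINS)

print(routes_through_vpn("mail.acme.com"))      # True: corporate mail
print(routes_through_vpn("mail.aol.com"))       # False: personal mail
print(routes_through_vpn("acme.com.evil.org"))  # False: suffix tricks fail
```

Note the leading dot in the subdomain check; matching on a bare substring would wrongly route lookalike domains through the corporate tunnel.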

This addresses what has been an ongoing concern about how some MDM solutions operate — routing traffic through a corporate proxy meant the business could see the employees’ personal emails, social networking accounts and other private information.

User enrollments also enforce only a six-digit non-simple passcode, as the MDM server can’t help users by clearing the old passcode if they forget it.

Some today advise users not to accept BYOD MDM policies because of the impact on personal privacy. While a business has every right to manage and wipe its own apps and data, IT has overstepped with some of its remote management capabilities — including the ability to erase entire devices, access personal data, track a phone’s location, restrict personal use of apps and more.

Apple’s MDM policies haven’t included GPS tracking, however, nor does this new option.

Apple’s new policy is a step toward a better balance of concerns, but will require that users understand the nuances of these more technical details — which they may not.

That user education will fall to the businesses that insist on these MDM policies in the first place: they will need to create their own documentation and explainers, and establish privacy policies that detail for employees what sort of data the company can and cannot access, as well as what sort of control it has over enrolled devices.

May 02, 2019
--

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not the newsfeed, that’s not pages, that’s not profiles. That’s marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”


Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

May 02, 2019
--

InCountry raises $7M to help multinationals store private data in countries of origin

The last few years have seen a rapid expansion of national regulations that, in the name of data protection, govern how and where organizations like healthcare and insurance companies, financial services companies and others store residents’ personal data that is used and collected through their services.

But keeping abreast of and following those rules has proven to be a minefield for companies. Now, a startup is coming out of stealth with a new product to help.

InCountry, which provides “data residency-as-a-service” to businesses and other organizations, is launching with $7 million in funding and its first product: Profile, which focuses on user profile and registration information in 50 countries on six continents. There will be more products launched covering payment, transaction and health data later in the year, co-founder and CEO Peter Yared said in an interview.

The funding — a seed round — comes from Caffeinated Capital, Felicis Ventures, Ridge Ventures, Bloomberg Beta, CRV and Global Founders Capital.

InCountry is founded and led by Yared, a repeat entrepreneur who most recently co-founded and eventually sold the “micro-app” startup Sapho, which was acquired by Citrix. Other companies he’s sold startups to include VMware, Sun and Oracle, and he was also once the CIO of CBS Interactive.

Yared told me in an interview that he has actually been self-funding, running and quietly accruing customers for InCountry for two years. He decided to raise this seed round — a number of investors on this list are repeat backers of his ventures — to start revving up the engines. (One of those ‘revs’ is an interesting talent hire: today the company is also announcing Alex Castro as chief product officer. Castro was an early employee working on Amazon Web Services and Microsoft’s move into CRM, and also worked on autonomous vehicles at Uber.)

If you have never heard of the term “data residency-as-a-service,” that might be because it’s a term Yared coined himself to describe the function of his startup.

InCountry is part tech provider, part consultancy.

On the tech side, it handles the technical work of storing personal data within a specific country’s borders for companies that might otherwise run other aspects of their services from other locations. That includes SDKs that link to a variety of data centers and cloud service providers and allow new countries to be added in under 10 minutes; two types of encryption on the data to make sure it remains secure; and managed services for its biggest clients. (InCountry is not disclosing any client names right now, except for video-editing company Revl.)

On the consultancy side, it has an in-house team of researchers and partnerships with law firms to continually update its policies and ensure that customers remain compliant with any changes. InCountry says that to provide further assurance to customers, it provides insurance of up to three times the value of a customer’s spend.

InCountry’s aim is twofold: first, to solve the many pain points that a company or other organization has to go through when considering how to comply with data hosting regulations; and second, to make sure that by making it easy, companies actually do what’s required of them.

As Yared describes it, the process for becoming data compliant can be painful, but his startup is applying an economy of scale, since the process is essentially one that everyone will have to follow:

“They have to figure out what the requirements are, find the facility, audit the facility, which includes making sure it’s not owned by the state, make sure the network is properly segregated, develop the right software layer to manage the data, hire program managers, network operations people and more,” he said. And for those handling this themselves, cloud service providers will typically cover a smaller footprint of regions, 17 at most for the biggest. “We take care of all that, and add on more as we need to.”

The problem is that because the process is so painful, many companies often flout the requirements, which isn’t good for their customers, nor for the companies themselves, which run the risk of getting fined.

“It’s universally acknowledged that the way data is stored and handled by most companies is not meeting the average requirements of citizens’ rights,” Yared said. “That’s why we now have GDPR, and will see more GDPR-like regulations get rolled out.”

One thing that InCountry is not touching is data such as messages between users and other kinds of personal files — data that has been the subject of some very controversial regulations. It limits itself to the pieces of personal information about users — bank details, health information, social security numbers and so on — that are part and parcel of what we provide to companies in the course of interacting with them online.

“In early outreach, we have had people ask for private data storage, but we would be ethically uncomfortable with that,” Yared said. “We want to be in the business of helping people who have regulated data, by storing that in a compliant manner that is more helpful, and more fruitful to users.”

The aim will be to add more services over time covering ever more countries, to keep in line with the growing trend among regulators to put more data residency laws in place.

“We’re witnessing more countries signing in data laws each week, and we’re only going to see those numbers increase,” said Sundeep Peechu, Managing Director at Felicis Ventures, in a statement. “We’re excited to be leading the round and reinvesting in Peter as he launches his seventh company. He recognized the problem early on and started working on a solution nearly two years ago that goes beyond regional data centers and patchwork in-house DIY solutions.”

Apr 02, 2019
--

How to handle dark data compliance risk at your company

Slack and other consumer-grade productivity tools have been taking off in workplaces large and small — and data governance hasn’t caught up.

Whether it’s litigation, compliance with regulations like GDPR or concerns about data breaches, legal teams need to account for new types of employee communication. And that’s hard when work is happening across the latest messaging apps and SaaS products, which make data searchability and accessibility more complex.

Here’s a quick look at the problem, followed by our suggestions for best practices at your company.

Problems

The increasing frequency of reported data breaches and the expanding jurisdiction of new privacy laws are prompting conversations about dark data and risks at companies of all sizes, even small startups. Data risk discussions necessarily include the risk of a data breach, as well as preservation of data. Just two weeks ago it was reported that Jared Kushner used WhatsApp for official communications and took screenshots of those messages for preservation, which commentators say complies with record-keeping laws but raises questions about potential admissibility as evidence.

Mar 04, 2019
--

Can predictive analytics be made safe for humans?

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law as well as consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn’t used carefully, it may prevent thousands from getting loans, for instance, if an underwriting algorithm is biased against certain users.

I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.

“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.

You’re reading the Extra Crunch Daily. Like this newsletter? Subscribe for free to follow all of our discussions and debates.

“Traditional privacy regulation asks whether you gave someone notice and gave them a choice,” he explains. That principle is the bedrock for Europe’s GDPR law, and for the patchwork of laws in the U.S. that protect privacy. It’s based around the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.

The rise of “predictive analytics” though has completely demolished such privacy legislation. Predictive analytics is a fuzzy term, but essentially means interpreting raw data and drawing new conclusions through inference. This is the story of the famous Target data crisis, where the retailer recommended pregnancy-related goods to women who had certain patterns of purchases. As Charles Duhigg explained at the time:

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to their delivery date.

Predictive analytics is difficult to predict. Hirsch says “I don’t think any of us are going to be intelligent enough to understand predictive analytics.” Talking about customers, he said “They give up their surface items — like cotton balls and unscented body lotion — they know they are sharing that, but they don’t know they are giving up their pregnancy status. … People are not going to know how to protect themselves because they can’t know what can be inferred from their surface data.”
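The Target example reduces to this: purchases that are individually innocuous combine into a sensitive inference the shopper never disclosed. A deliberately simplistic sketch of the idea (real systems use statistical models trained on millions of shoppers, not hand-written rules like this):

```python
# Invented "surface" signal items for illustration, echoing the Duhigg example.
SIGNAL_ITEMS = {"unscented soap", "cotton balls", "hand sanitizer", "washcloths"}

def inferred_status(purchases):
    # Each item alone reveals little; three or more together trigger
    # a sensitive prediction drawn purely from surface data.
    overlap = SIGNAL_ITEMS & set(purchases)
    return "possible pregnancy" if len(overlap) >= 3 else None

print(inferred_status(["unscented soap", "cotton balls", "hand sanitizer"]))
# possible pregnancy
print(inferred_status(["cotton balls"]))
# None
```

Notice that no single input is sensitive, which is exactly why notice-and-consent regimes built around individual data points fail here.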

In other words, the scale of those predictions completely undermines notice and consent.

Even though the law hasn’t caught up to this exponentially more challenging problem, companies themselves seem to be responding in the wake of Target and Facebook’s very public scandals. “What we are hearing is that we don’t want to put our customers at risk,” Hirsch explained. “They understand that this predictive technology gives them really awesome power and they can do a lot of good with it, but they can also hurt people with it.” The key actors here are corporate chief privacy officers, a role that has cropped up in recent years to mitigate some of these challenges.

Hirsch is spending significant time trying to build new governance strategies to allow companies to use predictive analytics in an ethical way, so that “we can achieve and enjoy its benefits without having to bear these costs from it.” He’s focused on four areas: privacy, manipulation, bias, and procedural unfairness. “We are going to set out principles on what is ethical and what is not,” he said.

Much of that focus has been on how to help regulators build policies that can manage predictive analytics. Since people can’t understand the extent to which inferences can be made from their data, “I think a much better regulatory approach is to have someone who does understand, ideally some sort of regulator, who can draw some lines.” Hirsch has been researching how the FTC’s Unfairness Authority may be a path forward for getting such policies into practice.

He analogized this to the Food and Drug Administration. “We have no ability to assess the risks of a given drug [so] we give it to an expert agency and allow them to assess it,” he said. “That’s the kind of regulation that we need.”

Hirsch overall has a balanced perspective on the risks and rewards here. He wants analytics to be “more socially acceptable” but at the same time, sees the needs for careful scrutiny and oversight to ensure that consumers are protected. Ultimately, he sees that as incredibly beneficial to companies who can take the value out of this tech without risking provoking consumer ire.

Who will steal your data more: China or America?


Talking about data ethics, Europe is in the middle of a superpower pincer. China’s telecom giant Huawei has made expansion on the continent a major priority, while the United States has been sending delegation after delegation to convince its Western allies to reject Chinese equipment. The dilemma was quite visible last week at MWC-Barcelona, where the two sides each tried to make their case.

It’s been years since the Snowden revelations showed that the United States was operating an enormous eavesdropping infrastructure targeting countries throughout the world, including across Europe. Huawei has reiterated its stance that it does not steal information from its equipment, and has repeated its demands that the Trump administration provide public proof of flaws in its security.

There is an abundance of moral relativism here, but I see this as increasingly a litmus test of the West on China. China has not hidden its ambitions to take a prime role in East Asia, nor has it hidden its intentions to build a massive surveillance network over its own people or to influence the media overseas.

Those tactics, though, are straight out of the American playbook, which lost its moral legitimacy over the past two decades from some combination of the Iraq War, Snowden, Wikileaks, and other public scandals that have undermined trust in the country overseas.

Security and privacy might have been a competitive advantage for American products over their Chinese counterparts, but that advantage has been weakened for many countries to near zero. We are increasingly going to see countries choose a mix of Chinese and American equipment in sensitive applications, if only to ensure that if one country is going to steal their data, it might as well be balanced.

Things that seem interesting that I haven’t read yet

Obsessions

  • Perhaps some more challenges around data usage and algorithmic accountability
  • We have a bit of a theme around emerging markets, macroeconomics, and the next set of users to join the internet.
  • More discussion of megaprojects, infrastructure, and “why can’t we build things”

Thanks

To every member of Extra Crunch: thank you. You allow us to get off the ad-laden media churn conveyor belt and spend quality time on amazing ideas, people, and companies. If I can ever be of assistance, hit reply, or send an email to danny@techcrunch.com.

This newsletter is written with the assistance of Arman Tabatabai from New York.


Jan 26, 2019
--

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks as France’s privacy regulator placed a record fine on Google under Europe’s General Data Protection Regulation (GDPR) rules, which the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include:

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: Arman.Tabatabai@techcrunch.com.


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.””

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies is a good thing, but it doesn’t solve the privacy paradox that those same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), or that the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of a rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign-in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high-tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches like the framework ITI released last fall must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and with reading and parsing as many different policies and settings. No individual has the time or capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

With the EU GDPR in effect since last May (setting a standard other nations are emulating),

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

with the adoption of a highly regulatory and broadly applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come where Congress will adopt a comprehensive federal privacy law. Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with highly developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.

Jul
11
2018
--

Facial recognition startup Kairos acquires Emotion Reader

Kairos, the face recognition technology used for brand marketing, has announced the acquisition of EmotionReader.

EmotionReader is a Limerick, Ireland-based startup that uses algorithms to analyze facial expressions around video content. The startup allows brands and marketers to measure viewers’ emotional response to video, analyze viewer response via an analytics dashboard, and make different decisions around media spend based on viewer response.

The acquisition makes sense considering that Kairos’ core business is focused on facial identification for enterprise clients. Knowing who someone is, paired with how they feel about your content, is a powerful tool for brands and marketers.

The idea for Kairos started when founder Brian Brackeen was making HR time-clocking systems for Apple. People were cheating the system, so he decided to implement facial recognition to ensure that employees were actually clocking in and out when they said they were.

That premise spun out into Kairos, and Brackeen soon realized that facial identification as a service was much more powerful than any niche time clocking service.

But Brackeen is very cautious with the technology Kairos has built.

While Kairos aims to make facial recognition technology (and all the powerful insights that come with it) accessible and available to all businesses, Brackeen has been very clear about the fact that Kairos isn’t interested in selling this technology to government agencies.

Brackeen recently contributed a post right here on TechCrunch outlining the various reasons why governments aren’t ready for this type of technology. Alongside the invasion of personal privacy, there are also serious issues around bias against people of color.

From the post:

There is no place in America for facial recognition that supports false arrests and murder. In a social climate wracked with protests and angst around disproportionate prison populations and police misconduct, engaging software that is clearly not ready for civil use in law enforcement activities does not serve citizens, and will only lead to further unrest.

As part of the deal, EmotionReader CTO Dr. Stephen Moore will run Kairos’ new Singapore-based R&D center, allowing for upcoming APAC expansion.

Kairos has raised approximately $8 million from investors New World Angels, Kapor Capital, 500 Startups, Backstage Capital, Morgan Stanley, Caerus Ventures, and Florida Institute, and is now closing on its $30 million crowd sale.

Mar
09
2018
--

InfoSum’s first product touts decentralized big data insights

Nick Halstead’s new startup, InfoSum, is launching its first product today — moving one step closer to his founding vision of a data platform that can help businesses and organizations unlock insights from big data silos without compromising user privacy, data security or data protection law. So a pretty high bar then.

If the underlying tech lives up to the promises being made for it, the timing for this business looks very good indeed, with the European Union’s new General Data Protection Regulation (GDPR) mere months away from applying across the region — ushering in a new regime of eye-wateringly large penalties to incentivize data handling best practice.

InfoSum bills its approach to collaboration around personal data as fully GDPR compliant — because it says it doesn’t rely on sharing the actual raw data with any third parties.

Rather, a mathematical model is used to make a statistical comparison, and the platform delivers aggregated — but still useful, says Halstead — insights. He adds that the regulatory angle is fortuitous, rather than the full inspiration for the product.

“Two years ago, I saw that the world definitely needed a different way to think about working on knowledge about people,” he tells TechCrunch. “Both for privacy [reasons] — there isn’t a week where we don’t see some kind of data breach… they happen all the time — but also privacy isn’t enough by itself. There has to be a commercial reason to change things.”

The commercial imperative he reckons he’s spied is around how “unmanageable” big data can become when it’s pooled for collaborative purposes.

Datasets invariably need a lot of cleaning up to make different databases align and overlap. And the process of cleaning and structuring data so it can be usefully compared can run to multiple weeks. Yet that effort has to be put in before you really know if it will be worth your while doing so.

That snag of time + effort is a major barrier preventing even large companies from doing more interesting things with their data holdings, argues Halstead.

So InfoSum’s first product — called Link — is intended to give businesses a glimpse of the “art of the possible”, as he puts it — in just a couple of hours, rather than the “nine, ten weeks” he says it might otherwise take them.

“I set myself a challenge… could I get through the barriers that companies have around privacy, security, and the commercial risks when they handle consumer data. And, more importantly, when they need to work with third parties or need to work across their corporation where they’ve got numbers of consumer data and they want to be able to look at that data and look at the combined knowledge across those.

“That’s really where I came up with this idea of non-movement of data. And that’s the core principle of what’s behind InfoSum… I can connect knowledge across two data sets, as if they’ve been pooled.”

Halstead says that the problem with the traditional data pooling route — so copying and sharing raw data with all sorts of partners (or even internally, thereby expanding the risk vector surface area) — is that it’s risky. The myriad data breaches that regularly make headlines nowadays are a testament to that.

But that’s not the only commercial consideration in play, as he points out that raw data which has been shared is immediately less valuable — because it can’t be sold again.

“If I give you a data set in its raw form, I can’t sell that to you again — you can take it away, you can slice it and dice it as many ways as you want. You won’t need to come back to me for another three or four years for that same data,” he argues. “From a commercial point of view [what we’re doing] makes the data more valuable, in that the data never actually has to be handed over to the other party.”

Not blockchain for privacy

Decentralization, as a technology approach, is also of course having a major moment right now — thanks to blockchain hype. But InfoSum is definitely not blockchain. Which is a good thing. No sensible person should be trying to put personal data on a blockchain.

“The reality is that all the companies that say they’re doing blockchain for privacy aren’t using blockchain for the privacy part, they’re just using it for a trust model, or recording the transactions that occur,” says Halstead, discussing why blockchain is terrible for privacy.

“Because you can’t use the blockchain and say it’s GDPR compliant or privacy safe. Because the whole transparency part of it and the fact that it’s immutable. You can’t have an immutable database where you can’t then delete users from it. It just doesn’t work.”

Instead he describes InfoSum’s technology as “blockchain-esque” — because “everyone stays holding their data”. “The trust is then that because everyone holds their data, no one needs to give their data to everyone else. But you can still crucially, through our technology, combine the knowledge across those different data sets.”

So what exactly is InfoSum doing to the raw personal data to make it “privacy safe”? Halstead claims it goes “beyond hashing” or encrypting it. “Our solution goes beyond that — there is no way to re-identify any of our data because it’s not ever represented in that way,” he says, further claiming: “It is absolutely 100 per cent data isolation, and we are the only company doing this in this way.

“There are solutions out there where traditional models are pooling it but with encryption on top of it. But again if the encryption gets broken the data is still ending up being in a single silo.”

InfoSum’s approach is based on mathematically modeling users, using a “one way model”, and using that to make statistical comparisons and serve up aggregated insights.

“You can’t read things out of it, you can only test things against it,” he says of how it’s transforming the data. “So it’s only useful if you actually knew who those users were beforehand — which obviously you’re not going to. And you wouldn’t be able to do that unless you had access to our underlying code-base. Everyone else either uses encryption or hashing or a combination of both of those.”

This one-way modeling technique is in the process of being patented — so Halstead says he can’t discuss the “fine details” — but he does mention a long-standing technique for optimizing database communications, called Bloom filters, saying those sorts of “principles” underpin InfoSum’s approach.
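Halstead won’t detail the patented model, but the Bloom-filter principle he alludes to — a one-way structure you can test membership against but cannot read identifiers out of — can be sketched generically. This is an illustration of the general technique only, not InfoSum’s actual implementation:

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: supports membership tests, but the
    original identifiers cannot be read back out of the bit array."""

    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False positives are possible; false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# One party publishes only the filter, never the raw email list.
ours = BloomFilter()
for email in ["alice@example.com", "bob@example.com"]:
    ours.add(email)

# A partner can test its own records against the filter...
print(ours.might_contain("alice@example.com"))  # True
# ...but cannot enumerate what the filter contains.
print(ours.might_contain("mallory@example.com"))
```

The one-way property comes from the hashing plus the lossy bit array: holding the filter lets you test candidates you already know, but not recover the underlying list.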

He also says, though, that InfoSum is using those kinds of techniques differently. Here’s how InfoSum’s website describes the process (which it calls Quantum):

InfoSum Quantum irreversibly anonymises data and creates a mathematical model that enables isolated datasets to be statistically compared. Identities are matched at an individual level and results are collated at an aggregate level – without bringing the datasets together.

On the surface, the approach shares a similar structure with Facebook’s Custom Audiences product, where advertisers’ customer lists are locally hashed and then uploaded to Facebook for matching against its own list of hashed customer IDs — with any matches used to create a custom audience for ad-targeting purposes.
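That hashed-matching flow can be sketched in a few lines. The normalization step and the (deliberately omitted) salting are illustrative assumptions, not Facebook’s actual scheme — real deployments would at least key or salt the digests:

```python
import hashlib

def hash_ids(ids):
    # Each party hashes locally; only digests leave the premises.
    # Normalizing first (trim, lowercase) so equal emails hash equally.
    return {hashlib.sha256(i.strip().lower().encode()).hexdigest() for i in ids}

advertiser = hash_ids(["Alice@example.com", "bob@example.com", "carol@example.com"])
platform   = hash_ids(["alice@example.com", "dave@example.com"])

# Matching happens on hashes; only the aggregate overlap is surfaced.
overlap = advertiser & platform
print(len(overlap))  # 1
```

Note the weakness Halstead’s pitch implicitly targets: plain hashing of low-entropy identifiers like emails can be reversed by brute force, which is why InfoSum claims to go “beyond hashing.”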

Though Halstead argues InfoSum’s platform offers more even for this kind of audience-building marketing scenario, because its users can use “much more valuable knowledge” to model on — knowledge they would not comfortably share with Facebook “because of the commercial risks of handing over that first person valuable data”.

“For instance if you had an attribute that defined which were your most valuable customers, you would be very unlikely to share that valuable knowledge — yet if you could safely then it would be one of the most potent indicators to model upon,” he suggests.

He also argues that InfoSum users will be able to achieve greater marketing insights via collaborations with other users of the platform vs being a customer of Facebook Custom Audiences — because Facebook simply “does not open up its knowledge”.

“You send them your customer lists, but they don’t then let you have the data they have,” he adds. “InfoSum for many DMPs [data management platforms] will allow them to collaborate with customers so the whole purchasing of marketing can be much more transparent.”

He also emphasizes that marketing is just one of the use-cases InfoSum’s platform can address.

Decentralized bunkers of data

One important clarification: InfoSum customers’ data does get moved — but it’s moved into a “private isolated bunker” of their choosing, rather than being uploaded to a third party.

“The easiest one to use is where we basically create you a 100 per cent isolated instance in Amazon [Web Services],” says Halstead. “We’ve worked with Amazon on this so that we’ve used a whole number of techniques so that once we create this for you, you put your data into it — we don’t have access to it. And when you connect it to the other part we use this data modeling so that no data then moves between them.”

“The ‘bunker’ is… an isolated instance,” he adds, elaborating on how communications with these bunkers are secured. “It has its own firewall, a private VPN, and of course uses standard SSL security. And once you have finished normalising the data it is turned into a form in which all PII [personally identifiable information] is deleted.

“And of course like any other security related company we have had independent security companies penetration test our solution and look at our architecture design.”

Other key pieces of InfoSum’s technology are around data integration and identity mapping — aimed at tackling the (inevitable) problem of data in different databases and datasets being stored in different formats. Which, again, is one of the commercial reasons why big data silos often stay just that: silos.

Halstead gave TechCrunch a demo showing how the platform ingests and connects data, with users able to use “simple steps” to teach the system what is meant by data types stored in different formats — such as that ‘f’ means the same as ‘female’ for gender category purposes — to smooth the data mapping and “try to get it as clean as possible”.
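That value-mapping step amounts to a small canonicalization pass. A toy sketch, with the map contents purely hypothetical:

```python
# Hypothetical value map a user might configure during ingestion,
# teaching the system that 'f' and 'female' mean the same thing.
GENDER_MAP = {"f": "female", "female": "female",
              "m": "male", "male": "male"}

def normalize_record(record):
    """Canonicalize one record so datasets from different sources align."""
    out = dict(record)
    gender = record.get("gender", "").strip().lower()
    out["gender"] = GENDER_MAP.get(gender, "unknown")
    return out

rows = [{"gender": "F"}, {"gender": "male"}, {"gender": "n/a"}]
print([normalize_record(r)["gender"] for r in rows])
# ['female', 'male', 'unknown']
```

The cleaner this mapping, the higher the match rate between datasets — which is exactly the “how well linked” readout the demo surfaces.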

Once that step has been completed, the user (or collaborating users) can get a view on how well linked their datasets are — and thus glimpse “the art of the possible”.

In practice this means they can choose to run different reports atop their linked datasets — for instance, enriching their data holdings by linking their own users across different products to gain new insights for internal research purposes.

Or, where there are two InfoSum users linking different datasets, they could use it for propensity modeling or lookalike modeling of customers, says Halstead. So, for example, a company could link models of its users with models of a third party that holds richer data on its users, to identify potential new customer types to target marketing at.

“Because I’ve asked to look at the overlap I can literally say I only know the gender of these people but I would also like to know what their income is,” he says, fleshing out another possible usage scenario. “You can’t drill into this, you can’t do really deep analytics — that’s what we’ll be launching later. But Link allows you to get this idea of what would it look like if I combine our datasets.
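Loosely, that kind of enrichment query is an aggregate-only join: match on an identifier, then release only grouped statistics, never the matched rows. A simplified sketch (raw identifiers and values are shown for clarity; in InfoSum’s design only the modelled forms would ever meet):

```python
# Party A knows gender; party B knows income. Both are hypothetical.
party_a = {"u1": "female", "u2": "male", "u3": "female"}
party_b = {"u1": 52000, "u3": 61000, "u9": 40000}

# Join on the shared identifiers, but surface only per-group aggregates.
matched = {uid: (party_a[uid], party_b[uid])
           for uid in party_a.keys() & party_b.keys()}
report = {}
for gender, income in matched.values():
    report.setdefault(gender, []).append(income)

print({g: sum(v) / len(v) for g, v in report.items()})
# {'female': 56500.0}
```

This is also where Link’s stated limits bite: with only aggregates exposed, you can answer “what would combining look like?” but, as Halstead says, you “can’t drill into this”.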

“The key here is it’s opening up a whole load of industries where there’s sensitivity around doing this — even industries that share a lot of data already, where GDPR is going to be a massive barrier to it in the future.”

Halstead says he expects big demand from the marketing industry which is of course having to scramble to rework its processes to ensure they don’t fall foul of GDPR.

“Within marketing there is going to be a whole load of new challenges for companies where they were currently enhancing their databases, buying up large raw datasets and bringing their data into their own CRM. That world’s gone once we’ve got GDPR.

“Our model is safer, faster, and actually still really lets people do all the things they did before but while protecting the customers.”

But it’s not just marketing exciting him. Halstead believes InfoSum’s approach to lifting insights from personal data could be very widely applicable — arguing, for example, that it’s only a minority of use-cases, such as credit risk and fraud within banking, where companies actually need to look at data at an individual level.

One area he says he’s “very passionate” about InfoSum’s potential is in the healthcare space.

“We believe that this model isn’t just about helping marketing and a whole load of other areas — healthcare especially, for us, I think is going to be huge. Because [this affords] the ability to do research against health data where the health data has never actually been shared,” he says.

“In the UK especially we’ve had a number of massive false starts where companies have, for very good reasons, wanted to be able to look at health records and combine data — which can turn into vital research to help people. But actually their way of doing it has been about giving out large datasets. And that’s just not acceptable.”

He even suggests the platform could be used for training AIs within the isolated bunkers — flagging a developer interface that will be launching after Link which will let users query the data as a traditional SQL query.

Though he says he sees most initial healthcare-related demand coming from analytics that need “one or two additional attributes” — such as, for example, comparing health records of people with diabetes with activity tracker data to look at outcomes for different activity levels.

“You don’t need to drill down into individuals to know that the research capabilities could give you incredible results to understand behavior,” he adds. “When you do medical research you need bodies of data to be able to prove things so the fact that we can only work at an aggregate level is not, I don’t think, any barrier to being able to do the kind of health research required.”

Another area he believes could really benefit is M&A — saying InfoSum’s platform could offer companies a way to understand how their user bases overlap before they sign on the line. (It is also of course handling and thus simplifying the legal side of multiple entities collaborating over data sets.)

“There hasn’t been the technology to allow them to look at whether there’s an overlap before,” he claims. “It puts the power in the hands of the buyer to be able to say we’d like to be able to look at what your user base looks like in comparison to ours.

“The problem right now is you could do that manually but if they then backed out there’s all kinds of legal problems because I’ve had to hand the raw data over… so no one does it. So we’re going to change the M&A market for allowing people to discover whether I should acquire someone before they go through to the data room process.”

While Link is something of a taster of what InfoSum’s platform aims to ultimately offer (with this first product priced low but not freemium), the SaaS business it’s intending to get into is data matchmaking — whereby, once it has a pipeline of users, it can start to suggest links that might be interesting for its customers to explore.

“There is no point in us reinventing the wheel of being the best visualization company because there’s plenty that have done that,” he says. “So we are working on data connectors for all of the most popular BI tools that plug in to then visualize the actual data.

“The long-term vision for us moves more into being an introductory service — i.e. once we’ve got 100 companies in this, how do we help those companies work out which other companies they should be working with.”

“We’ve got some very good systems for — in a fully anonymized way — helping you understand what the intersection is from your data to all of the other datasets, obviously with their permission if they want us to calculate that for them,” he adds.

“The way our investors looked at this, this is the big opportunity going forward. There is no limit, in a decentralized world… imagine 1,000 bunkers around the world in these different corporates who all can start to collaborate. And that’s our ultimate goal — that all of them are still holding onto their own knowledge, 100% privacy safe, but then they have that opportunity to work with each other, which they don’t right now.”

Engineering around privacy risks?

But does he not see any risks to privacy of enabling the linking of so many separate datasets — even with limits in place to avoid individuals being directly outed as connected across different services?

“However many datasets there are, the only thing it can reveal extra is whether every extra dataset adds an extra bit of knowledge,” he responds. “And every party has the ability to define what bit of data they would then want to be open to others to then work on.

“There are obviously sensitivities around certain combinations of attributes, around religion, gender and things like that. Where we already have a very clever permission system where the owners can define what combinations are acceptable and what aren’t.”

“My experience of working with all the social networks has meant — I hope — that we are ahead of the game of thinking about those,” he adds, saying that the matchmaking stage is also six months out at this point.

“I don’t see any down sides to it, as long as the controls are there to be able to limit it. It’s not like it’s going to be a sudden free for all. It’s an introductory service, rather than an open platform so everyone can see everything else.”

The permission system is clearly going to be important. But InfoSum does essentially appear to be heading down the platform route of offloading responsibility for ethical considerations — in its case around dataset linkages — to its customers.

Which does open the door to problematic data linkages down the line, and all sorts of unintended dots being joined.

Say, for example, a health clinic decides to match people with particular medical conditions to users of different dating apps — and the relative proportions of HIV rates across straight and gay dating apps in the local area gets published. What unintended consequences might spring from that linkage being made?

Other equally problematic linkages aren’t hard to imagine. And we’ve seen the appetite businesses have for making creepy observations about their users public.

“Combining two sets of aggregate data meaningfully is not easy,” says Eerke Boiten, professor of cyber security at De Montfort University, discussing InfoSum’s approach. “If they can make this all work out in a way that makes sense, preserves privacy, and is GDPR compliant, then they deserve a patent I suppose.”

On data linkages, Boiten points to the problems Facebook has had with racial profiling as illustrative of the potential pitfalls.

He also says there may be GDPR-specific risks around customer profiling enabled by the platform. In an edge case scenario, for example, where two overlapped datasets are linked and found to have a 100% user match, that would mean people’s personal data had been processed by default — so that processing would have required a legal basis to be in place beforehand.

And there may be wider legal risks around profiling too. If, for example, linkages are used to deny services or vary pricing to certain types or blocks of customers, is that legal or ethical?

“From a company’s perspective, if it already has either consent or a legitimate purpose (under GDPR) to use customer data for analytical/statistical purposes then it can use our products,” says InfoSum’s COO Danvers Baillieu, on data processing consent. “Where a company has an issue using InfoSum as a sub-processor, then… we can set up the system differently so that we simply supply the software and they run it on their own machines (so we are not a data processor) — but this is not yet available in Link.”

Baillieu also notes that the bin sizes InfoSum’s platform aggregates individuals into are configurable in its first product. “The default bin size is 10, and the absolute minimum is three,” he adds.
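
That kind of minimum bin size is a standard disclosure-control technique: any aggregate bucket containing fewer people than the threshold is suppressed rather than reported. A minimal sketch of the rule (the function and data here are illustrative, not InfoSum’s product):

```python
from collections import Counter

MIN_BIN_SIZE = 3  # InfoSum's stated absolute minimum; its default is 10

def aggregate(records, attribute, min_bin=MIN_BIN_SIZE):
    """Count records per attribute value, suppressing bins smaller than min_bin."""
    counts = Counter(r[attribute] for r in records)
    return {value: n for value, n in counts.items() if n >= min_bin}

records = [{"region": "north"}] * 5 + [{"region": "south"}] * 2
print(aggregate(records, "region"))  # {'north': 5} — the 2-record "south" bin is suppressed
```

Suppression alone isn’t a complete defense — repeated queries with slightly different filters can still leak small differences — which is why platforms like this typically also restrict which query combinations are permitted.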

“The other key point around disclosure control is that our system never needs to publish the raw data table. All the famous breaches from Netflix onwards are because datasets have been pseudonymised badly and researchers have been able to run analysis across the visible fields and then figure out who the individuals are — this is simply not possible with our system as this data is never revealed.”

‘Fully GDPR compliant’ is certainly a big claim — and one that is going to have a lot of slings and arrows thrown at it as data gets ingested by InfoSum’s platform.

It’s also fair to say that a whole library of books could be written about technology’s unintended consequences.

Indeed, InfoSum’s own website credits Halstead as the inventor of the embedded retweet button, noting the technology is “something that is now ubiquitous on almost every website in the world”.

Those ubiquitous social plugins are also of course a core part of the infrastructure used to track Internet users wherever and almost everywhere they browse. So does he have any regrets about the invention, given how that bit of innovation has ended up being so devastating for digital privacy?

“When I invented it, the driving force for the retweet button was only really as a single number to count engagement. It was never to do with tracking. Our version of the retweet button never had any trackers in it,” he responds on that. “It was the number that drove our algorithms for delivering news in a very transparent way.

“I don’t need to add my voice to all the US pundits of the regrets of the beast that’s been unleashed. All of us feel that desire to unhook from some of these networks now because they aren’t being healthy for us in certain ways. And I certainly feel that what we’re now doing for improving the world of data is going to be good for everyone.”

When we first covered the UK-based startup it was going under the name CognitiveLogic — a placeholder name, as, three weeks in, Halstead says he was still figuring out exactly how to take his idea to market.

The founder of DataSift has not had difficulties raising funding for his new venture. There was an initial $3M from Upfront Ventures and IA Ventures, with the seed topped up by a further $5M last year, with new investors including Saul Klein (formerly Index Ventures) and Mike Chalfen of Mosaic Ventures. Halstead says he’ll be looking to raise “a very large Series A” over the summer.

In the meanwhile he says he has a “very long list” of hundreds of customers wanting to get their hands on the platform to kick its tires. “The last three months has been a whirlwind of me going back to all of the major brands, all of the big data companies — there’s no large corporate that doesn’t have these kinds of challenges,” he adds.

“I saw a very big client this morning… they’re a large multinational, they’ve got three major brands where the three customer sets had never been joined together. So they don’t even know what the overlap of those brands are at the moment. So even giving them that insight would be massively valuable to them.”

Mar
09
2018
--

InfoSum’s first product touts decentralized big data insights

Nick Halstead’s new startup, InfoSum, is launching its first product today — moving one step closer to his founding vision of a data platform that can help businesses and organizations unlock insights from big data silos without compromising user privacy, data security or data protection law. So a pretty high bar then.

Jan
29
2018
--

BigID pulls in $14 million Series A to help identify private customer data across big data stores

As data privacy becomes an increasingly important notion, especially with the EU’s GDPR privacy laws coming online in May, companies need to find ways to understand their customers’ private data. BigID thinks it has a solution and it landed a $14 million Series A investment today to help grow the idea.
Comcast Ventures, SAP (via SAP.io), ClearSky Security Fund and one of the…
