Jun 30, 2021

Google tightens UK policy on financial ads after watchdog pressure over scams

The UK’s more expansive, post-Brexit role in digital regulation continues to be felt today via a policy change by Google, which has announced that it will soon only run ads for financial products and services when the advertiser in question has been verified by the financial watchdog, the FCA.

The Google Ads Financial Products and Services policy will be updated from August 30, per Google, which specifies that it will start enforcing the new policy from September 6 — meaning that purveyors of online financial scams who’ve been relying on its ad network to net their next victim still have more than two months to harvest unsuspecting clicks before the party is over (well, in the UK, anyway).

Google’s decision to allow only regulator-authorized financial entities to run ads for financial products and services follows warnings from the Financial Conduct Authority that it could take legal action if Google continued to accept unscreened financial ads, as the Guardian reported earlier.

The FCA told a parliamentary committee this month that it’s able to contemplate taking such action as a result of no longer being bound by European Union rules on financial adverts, which do not extend to online platforms, per the newspaper’s report.

Until gaining the power to go after Google itself, the FCA appears to have been trying to combat the scourge of online financial fraud by paying Google large amounts of UK taxpayer money to fight scams with anti-scam warnings.

According to the Register, the FCA paid Google more than £600,000 (~$830k) in 2020 and 2021 to run ‘anti-scam’ ads — with the regulator essentially engaged in a bidding war with scammers to pour enough money into Google’s coffers so that regulator warnings about financial scams might appear higher than the scams themselves.

The full-facepalm situation was presumably highly lucrative for Google. But the threat of legal action appears to have triggered a policy rethink.

Writing in its blog post, Ronan Harris, a VP and MD for Google UK & Ireland, said: “Financial services advertisers will be required to demonstrate that they are authorised by the UK Financial Conduct Authority or qualify for one of the limited exemptions described in the UK Financial Services verification page.”

“This new update builds on significant work in partnership with the FCA over the last 18 months to help tackle this issue,” he added. “Today’s announcement reflects significant progress in delivering a safer experience for users, publishers and advertisers. While we understand that this policy update will impact a range of advertisers in the financial services space, our utmost priority is to keep users safe on our platforms — particularly in an area so disproportionately targeted by fraudsters.”

The company’s blog also claims that it has pledged $5M in advertising credits to support financial fraud public awareness campaigns in the UK. So not $5M in actual money then.

Per the Register, Google did offer to refund the FCA’s anti-scam ad spend — but, again, with advertising credits.

The UK parliament’s Treasury Committee was keen to know whether the tech giant would be refunding the spend in cash. But the FCA’s director of enforcement and market insight, Mark Steward, was unable to confirm what it would do, according to the Register’s report of the committee hearing.

We’ve reached out to the FCA for comment on Google’s policy change, and with questions about the refund situation, and will update this report with any response.

In recent years the financial watchdog has also been concerned about financial scam ads running on social media platforms.

Back in 2018, legal action by a well-known UK consumer advice personality, Martin Lewis — who filed a defamation suit against Facebook — led the social media giant to add a ‘report scam ad’ button in the UK market as of July 2019.

However, research by consumer group Which? earlier this year suggested that neither Facebook nor Google had entirely purged financial scam ads — even when they’d been reported.

Per the BBC, Which?’s survey found that Google had failed to remove around a third (34%) of the scam adverts reported to it, while Facebook had failed to remove just over a quarter (26%).

It’s almost like the incentives for online ad giants to act against lucrative online scam ads simply aren’t pressing enough…

More recently, Lewis has been pushing for scam ads to be included in the scope of the UK’s Online Safety Bill.

The sweeping piece of digital regulation aims to tackle a plethora of so-called ‘online harms’ by focusing on regulating user-generated content. However, Lewis makes the point that a scammer merely needs to pay an ad platform to promote their fraudulent content for it to escape the scope of the planned rules, telling the Good Morning Britain TV program today that the situation is “ludicrous” and “needs to change”.

It’s certainly a confusing carve-out, as we reported at the time the bill was presented. Nor is it the only confusing component of the planned legislation. However on the financial fraud point the government may believe the FCA has the necessary powers to tackle the problem.

We’ve contacted the Department for Digital, Culture, Media and Sport for comment.

Update: A government spokesperson said:

“We have brought user-generated fraud into the scope of our new online laws to increase people’s protection from the devastating impact of scams. The move is just one part of our plan to tackle fraud in all its forms. We continue to pursue fraudsters and close down the vulnerabilities they exploit, are helping people spot and report scams, and we will shortly be considering whether tougher regulation on online advertising is also needed.”

The government also noted that the Home Office is developing a Fraud Action Plan, which is slated to be published after the 2021 spending review. It also pointed to the Online Advertising Programme, which it said will consider the extent to which the current regulatory regime is equipped to tackle the challenges posed by rapid technological developments in online advertising — including via a consultation and review of online advertising it plans to launch later this year.

May 13, 2021

Worksome pulls $13M into its high skill freelancer talent platform

More money for the now very buzzy business of reshaping how people work: Worksome is announcing it recently closed a $13 million Series A funding round for its “freelance talent platform” — after racking up 10x growth in revenue since January 2020, just before the COVID-19 pandemic sparked a remote working boom.

The 2017-founded startup, which has a couple of ex-Googlers in its leadership team, has built a platform to connect freelancers looking for professional roles with employers needing tools to find and manage freelancer talent.

It says it’s seeing traction with large enterprise customers that have traditionally used Managed Service Providers (MSPs) to manage and pay external workforces — and views employment agency giants like Randstad, Adecco and Manpower as ripe targets for disruption.

“Most multinational enterprises manage flexible workers using legacy MSPs,” says CEO and co-founder Morten Petersen (one of the Xooglers). “These largely analogue businesses manage complex compliance and processes around hiring and managing freelance workforces with handheld processes and outdated technology that is not built for managing fluid workforces. Worksome tackles this industry head on with a better, faster and simpler solution to manage large freelancer and contractor workforces.”

Worksome focuses on helping medium and large companies — those working with 20 or more freelancers at a time — fill vacancies within teams, rather than helping companies outsource projects, per Petersen, who suggests the latter is the focus for the majority of freelancer platforms.

“Worksome helps [companies] onboard people who will provide necessary skills and will be integral to longer-term business operations. It makes matches between companies and skilled freelancers, which the businesses go on to trust, form relationships with and come back to time and time again,” he goes on.

“When companies hire dozens or hundreds of freelancers at one time, processes can get very complicated,” he adds, arguing that on compliance and payments Worksome “takes on a much greater responsibility than other freelancing platforms to make big hires easier”.

The startup also says it’s concerned with looking out for (and looking after) its freelancer talent pool — saying it wants to create “a world of meaningful work” on its platform, and ensure freelancers are paid fairly and competitively. (And also that they are paid faster than they otherwise might be, given it takes care of their payroll so they don’t have to chase payments from employers.)

The business started life in Copenhagen — and its Series A has a distinctly Nordic flavor, with investment coming from a Danish business angel who invests on Løvens Hule, the local version of the Dragons’ Den TV program; the former Minister for Higher Education and Science, Tommy Ahlers; and family home manufacturer Lind & Risør.

It had raised just under $6M prior to this round, per Crunchbase, and also counts some (unnamed) Google executives among its earlier investors.

Freelancer platforms (and marketplaces) aren’t new, of course. There are also an increasing number of players in this space — buoyed by a new flush of VC dollars chasing the ‘future of work’, whatever hybrid home-office flexible shape that might take. So Worksome is by no means alone in offering tech tools to streamline the interface between freelancers and businesses.

A few others that spring to mind include Lystable (now Kalo), Malt, Fiverr — or, for techie job matching specifically, the likes of HackerRank — plus, on the blue collar work side, Jobandtalent. There’s also a growing number of startups focusing on helping freelancer teams specifically (e.g. Collective), so there’s a trend towards increasing specialism.

Worksome says it differentiates itself from other players (legacy and startups) by combining services like tax compliance, background and ID checks, and handling of payroll and other admin with an AI-powered platform that matches talent to projects.

It’s not the only startup offering to do the back-office admin/payroll piece, either, nor the only one using AI to match skilled professionals to projects. But it claims it’s going further than rival ‘freelancer-as-a-service’ platforms — saying it wants to “address the entire value chain” (aka: “everything from the hiring of freelance talent to onboarding and payment”).

Worksome has 550 active clients (i.e. employers in the market for freelancer talent) at this stage; and has accepted 30,000 freelancers into its marketplace so far.

Its current talent pool can take on work across 12 categories, and collectively offers more than 39,000 unique skills, per Petersen.

The biggest categories of freelancer talent on the platform are in Software and IT; Design and Creative Work; Finance and Management Consulting; plus “a long tail of niche skills” within engineering and pharmaceuticals.

Its largest customers are found in the creative industries, tech and IT, pharma and consumer goods, and its biggest markets are the U.K. and U.S.

“We are currently trailing at +20,000 yearly placements,” says Petersen, adding: “The average yearly spend per client is $300,000.”

Worksome says the Series A funding will go on stoking growth by investing in marketing. It also plans to spend on product dev and on building out its team globally (it also has offices in London and New York).

Over the past 12 months the startup doubled the size of its team to 50 — and wants to do so again within 12 months so it can ramp up its enterprise client base in the U.S., U.K. and euro-zone.

“Yes, there are a lot of freelancer platforms out there but a lot of these don’t appreciate that hiring is only the tip of the iceberg when it comes to reducing the friction in working with freelancers,” argues Petersen. “Of the time that goes into hiring, managing and paying freelancers, 75% is currently spent on admin such as timesheet approvals, invoicing and compliance checks, leaving only a tiny fraction of time to actually finding talent.”

Worksome woos employers with a “one-click-hire” offer — touting its ability to find and hire freelancers “within seconds”.

If hiring a stranger in seconds sounds ill-advised, Worksome greases this external employment transaction by taking care of vetting the freelancers itself (including carrying out background checks, and using proprietary technology to assess freelancers’ skills and suitability for its marketplace).

“We have a two-step vetting process to ensure that we only allow the best freelance talent onto the Worksome platform,” Petersen tells TechCrunch. “For step one, an inhouse-built robot assesses our freelancer applicants. It analyses their skillset, social media profiles, profile completeness and hourly or daily rate, as well as their CV and work history, to decide whether each person is a good fit for Worksome.

“For step two, our team of talent specialists manually review and decline or approve the freelancers that pass through step one with a score of 85% or more. We have just approved our 30,000th freelancer and will be able to both scale and improve our vetting procedure as we grow.”
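Worksome hasn’t published how its screening “robot” is built, but the process described above maps onto a familiar two-stage pipeline: an automated scorer filters applications, and only those above a threshold reach human reviewers. The sketch below is a minimal illustration of that flow; the scoring weights, field names and the 0.85 cut-off are assumptions for illustration, not Worksome’s actual system.

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 0.85  # assumed cut-off: "85% or more" goes to manual review

@dataclass
class Applicant:
    name: str
    skills: list
    years_experience: int
    profile_completeness: float  # 0.0 to 1.0
    day_rate: float              # in the platform's currency

def automated_score(a: Applicant) -> float:
    """Step one: a weighted score standing in for the automated 'robot' assessment."""
    skill_score = min(len(a.skills) / 10, 1.0)             # breadth of declared skills
    experience_score = min(a.years_experience / 10, 1.0)   # depth of work history
    rate_score = 1.0 if 100 <= a.day_rate <= 2000 else 0.5  # rate within a plausible band
    return (0.4 * skill_score
            + 0.3 * experience_score
            + 0.2 * a.profile_completeness
            + 0.1 * rate_score)

def review_queue(applicants: list) -> list:
    """Step two: only applicants above the threshold reach human talent specialists."""
    return [a for a in applicants if automated_score(a) >= APPROVAL_THRESHOLD]
```

A real system would score CVs, social profiles and work history rather than a handful of numeric fields, but the gating structure (automated score, then manual approval) is the same.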

A majority of freelancer applicants fail Worksome’s proprietary vetting process: the company says it has received 80,000 applications so far but has approved only 30,000.

That raises interesting questions about how it’s making decisions on who is (and isn’t) an ‘appropriate fit’ for its talent marketplace.

It says its candidate assessing “robot” looks at “whether freelancers can demonstrate the skillset, matching work history, industry experience and profile depth” deemed necessary to meet its quality criteria — giving the example that it would not accept a freelancer who says they can lead complex IT infrastructure projects if they do not have evidence of relevant work, education and skills.

On the AI freelancer-to-project matching side, Worksome says its technology aims to match freelancers “who have the highest likelihood of completing a job with high satisfaction, based on their work-history, and performance and skills used on previous jobs”.

“This creates a feedback loop that… ensure that both clients and freelancers are matched with great people and great work,” is its circular suggestion when we ask about this.
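The company doesn’t disclose how its matching works, but the claim above (ranking freelancers by their likelihood of completing a job with high satisfaction, based on work history and past performance) is the shape of a simple score-and-feedback loop. Here is a toy sketch under those assumptions; the weights and field names are invented for illustration.

```python
def match_score(freelancer: dict, job: dict) -> float:
    """Score a freelancer for a job from skill overlap plus satisfaction on past jobs."""
    required = set(job["skills"])
    offered = set(freelancer["skills"])
    skill_overlap = len(required & offered) / len(required) if required else 0.0
    ratings = freelancer.get("ratings", [])                         # satisfaction from completed jobs
    satisfaction = sum(ratings) / len(ratings) if ratings else 0.5  # neutral prior for newcomers
    return 0.6 * skill_overlap + 0.4 * satisfaction

def shortlist(freelancers: list, job: dict, k: int = 5) -> list:
    """Return the top-k candidates; the hiring decision itself stays with a human."""
    return sorted(freelancers, key=lambda f: match_score(f, job), reverse=True)[:k]

def record_feedback(freelancer: dict, rating: float) -> None:
    """Each completed engagement feeds a new rating back into future rankings."""
    freelancer.setdefault("ratings", []).append(rating)
```

The `record_feedback` step is the feedback loop the company alludes to: every finished job changes how the next shortlist is ranked.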

But it also emphasizes that its AI is not making hiring decisions on its own — and is only ever supporting humans in making a choice. (An interesting caveat since existing EU data protection rules, under Article 22 of the GDPR, provide for a right for individuals to object to automated decision making if significant decisions are being taken without meaningful human interaction.) 

Using automation technologies (like AI) to make assessments that determine whether a person gains access to employment opportunities or doesn’t can certainly risk scaled discrimination. So the devil really is in the detail of how these algorithmic assessments are done.

That’s why such uses of technology are set to face close regulatory scrutiny in the European Union — under incoming rules on ‘high risk’ uses of artificial intelligence — including the use of AI to match candidates to jobs.

The EU’s current legislative proposals in this area specifically categorize “employment, workers management and access to self-employment” as a high risk use of AI, meaning applications like Worksome are likely to face some of the highest levels of regulatory supervision in the future.

Nonetheless, Worksome is bullish when we ask about the risks associated with using AI as an intermediary for employment opportunities.

“We utilise fairly advanced matching algorithms to very effectively shortlist candidates for a role based solely on objective criteria, rinsed from human bias,” claims Petersen. “Our algorithms don’t take into account gender, ethnicity, name of educational institutions or other aspects that are usually connected to human bias.”

“AI has immense potential in solving major industry challenges such as recruitment bias, low worker mobility and low access to digital skills among small to medium sized businesses. We are firm believers that technology should be utilized to remove human bias from any hiring process,” he goes on, adding: “Our tech was built to this very purpose from the beginning, and the new proposed legislation has the potential to serve as a validator for the hard work we’ve put into this.

“The obvious potential downside would be if new legislation would limit innovation by making it harder for startups to experiment with new technologies. As always, legislation like this will impact the Davids more than the Goliaths, even though the intentions may have been the opposite.”

Zooming back out to consider the pandemic-fuelled remote working boom, Worksome confirms that most of the projects for which it supplied freelancers last year were conducted remotely.

“We are currently seeing a slow shift back towards a combination of remote and onsite work and expect this combination to stick amongst most of our clients,” Petersen goes on. “Whenever we are in uncertain economic times, we see a rise in the number of freelancers that companies are using. However, this trend is dwarfed by a much larger overall trend towards flexible work, which drives the real shift in the market. This shift has been accelerated by COVID-19 but has been underway for many years.

“While remote work has unlocked an enormous potential for accessing talent everywhere, 70% of the executives expect to use more temporary workers and contractors onsite than they did before COVID-19, according to a recent McKinsey study. This shows that businesses really value the flexibility in using an on-demand workforce of highly skilled specialists that can interact directly with their own teams.”

Asked whether it’s expecting growth in freelancing to sustain even after we (hopefully) move beyond the pandemic — including if there’s a return to physical offices — Petersen suggests the underlying trend is for businesses to need increased flexibility, regardless of the exact blend of full-time and freelance staff. On that view, platforms like Worksome are well placed to keep growing.

“When you ask business leaders, 90% believe that shifting their talent model to a blend of full-time and freelancers can give a future competitive advantage (Source: BCG),” he says. “We see two major trends driving this sentiment; access to talent, and building an agile and flexible organization. This has become all the more true during the pandemic — a high degree of flexibility is allowing organisations to better navigate both the initial phase of the pandemic as well the current pick up of business activity.

“With the amount of change that we’re currently seeing in the world, and with businesses constantly re-inventing themselves, the access to highly skilled and flexible talent is absolutely essential — now, in the next 5 years, and beyond.”

Sep 25, 2020

Privacy data management innovations reduce risk, create new revenue channels

Privacy data mismanagement is a lurking liability within every commercial enterprise. The very definition of privacy data is evolving over time and has been broadened to include information concerning an individual’s health, wealth, college grades, geolocation and web surfing behaviors. Regulations are proliferating at state, national and international levels that seek to define privacy data and establish controls governing its maintenance and use.

Existing regulations are relatively new and are being translated into operational business practices through a series of judicial challenges that are currently in progress, adding to the confusion regarding proper data handling procedures. In this confusing and sometimes chaotic environment, the privacy risks faced by almost every corporation are frequently ambiguous, constantly changing and continually expanding.

Conventional information security (infosec) tools are designed to prevent the inadvertent loss or intentional theft of sensitive information. They are not sufficient to prevent the mismanagement of privacy data. Privacy safeguards not only need to prevent loss or theft but they must also prevent the inappropriate exposure or unauthorized usage of such data, even when no loss or breach has occurred. A new generation of infosec tools is needed to address the unique risks associated with the management of privacy data.

The first wave of innovation

A variety of privacy-focused security tools emerged over the past few years, triggered in part by the introduction of GDPR (General Data Protection Regulation) within the European Union in 2018. New capabilities introduced by this first wave of innovation were focused in the following three areas:

Data discovery, classification and cataloging. Modern enterprises collect a wide variety of personal information from customers, business partners and employees at different times for different purposes with different IT systems. This data is frequently disseminated throughout a company’s application portfolio via APIs, collaboration tools, automation bots and wholesale replication. Maintaining an accurate catalog of the location of such data is a major challenge and a perpetual activity. BigID, DataGuise and Integris Software have gained prominence as popular solutions for data discovery. Collibra and Alation are leaders in providing complementary capabilities for data cataloging.
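The products named above differ widely in implementation, but the core task of data discovery and classification is conceptually simple: scan data stores for values that look like personal information and record where they live. A bare-bones sketch of that idea follows; the regex patterns and catalog shape are illustrative only and do not reflect how BigID, DataGuise, Integris, Collibra or Alation actually work.

```python
import re

# Illustrative patterns only; commercial tools use far richer detection than regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def classify(location: str, text: str) -> list:
    """Return catalog entries noting which kinds of personal data appear at a location."""
    return [
        {"location": location, "type": label}
        for label, pattern in PII_PATTERNS.items()
        if pattern.search(text)
    ]

# The "catalog" is just the accumulation of findings across every store that gets scanned.
catalog = []
catalog += classify("crm/customers.csv", "Jane Doe, jane@example.com, 555-867-5309")
catalog += classify("hr/payroll.db", "employee 4411, SSN 123-45-6789")
```

The hard parts are the ones the sketch skips: keeping the catalog current as data is copied onward through APIs, bots and replication, which is why maintaining it is a perpetual activity.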

Consent management. Individuals are commonly presented with privacy statements describing the intended use and safeguards that will be employed in handling the personal data they supply to corporations. They consent to these statements — either explicitly or implicitly — at the time such data is initially collected. Osano, Transcend.io and DataGrail.io specialize in the management of consent agreements and the enforcement of their terms. These tools enable individuals to exercise their consensual data rights, such as the right to view, edit or delete personal information they’ve provided in the past.
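Again, the vendors differ, but the underlying data model for consent management is usually a record tying a person to the purposes they agreed to, plus handlers for the rights that follow (view, edit, delete). The sketch below is a minimal illustration and is not based on Osano’s, Transcend.io’s or DataGrail.io’s actual APIs.

```python
from datetime import datetime, timezone

class ConsentStore:
    """Tracks what each individual consented to and services basic data-rights requests."""

    def __init__(self):
        self._consents = {}       # user_id -> {purpose: timestamp consent was given}
        self._personal_data = {}  # user_id -> stored personal fields

    def record_consent(self, user_id: str, purpose: str) -> None:
        self._consents.setdefault(user_id, {})[purpose] = datetime.now(timezone.utc)

    def has_consent(self, user_id: str, purpose: str) -> bool:
        """Enforcement hook: callers check this before using data for a given purpose."""
        return purpose in self._consents.get(user_id, {})

    # The consensual data rights mentioned above: view, edit and delete.
    def view(self, user_id: str) -> dict:
        return dict(self._personal_data.get(user_id, {}))

    def edit(self, user_id: str, field: str, value) -> None:
        self._personal_data.setdefault(user_id, {})[field] = value

    def delete(self, user_id: str) -> None:
        self._personal_data.pop(user_id, None)
        self._consents.pop(user_id, None)
```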

Nov 13, 2019

Messaging app Wire confirms $8.2M raise, responds to privacy concerns after moving holding company to the US

Big changes are afoot for Wire, an enterprise-focused end-to-end encrypted messaging app and service that advertises itself as “the most secure collaboration platform”. In February, Wire quietly raised $8.2 million from Morpheus Ventures and others, we’ve confirmed — the first funding amount it has ever disclosed — and alongside that external financing, it moved its holding company in the same month to the US from Luxembourg, a switch that Wire’s CEO Morten Brogger described in an interview as “simple and pragmatic.”

He also said that Wire is planning to introduce a freemium tier to its existing consumer service — which itself has half a million users — while working on a larger round of funding to fuel more growth of its enterprise business — a key reason for moving to the US, he added: There is more money to be raised there.

“We knew we needed this funding, and additional [capital], to support continued growth. We made the decision that at some point in time it will be easier to get funding in North America, where there’s six times the amount of venture capital,” he said.

While Wire has moved its holding company to the US, it is keeping the rest of its operations as is. Customers are licensed and serviced from Wire Switzerland; the software development team is in Berlin, Germany; and hosting remains in Europe.

The news of Wire’s US move and the basics of its February funding — sans value, date or backers — came out this week via a blog post that raises questions about whether a company that trades on the idea of data privacy should itself be more transparent about its activities.

Specifically, the changes to Wire’s financing and legal structure were only communicated to users when news started to leak out, which brings up questions not just about transparency but also about the state of Wire’s privacy policy, given that the company’s holding entity is now on US soil.

It was an issue picked up and amplified by NSA whistleblower Edward Snowden. Via Twitter, he described the move to the US as “not appropriate for a company claiming to provide a secure messenger — claims a large number of human rights defenders relied on.”

“There was no change in control and [the move was] very tactical [because of fundraising],” Brogger said about the company’s decision not to communicate the move, adding that the company had never talked about funding in the past, either. “Our evaluation was that this was not necessary. Was it right or wrong? I don’t know.”

The other key question is whether Wire’s shift to the US puts users’ data at risk — a question that Brogger claims is straightforward to answer: “We are in Switzerland, which has the best privacy laws in the world” — it’s subject to Europe’s General Data Protection Regulation framework (GDPR) on top of its own local laws — “and Wire now belongs to a new group holding, but there [is] no change in control.”

In its blog post published in the wake of blowback from privacy advocates, Wire also claims it “stands by its mission to best protect communication data with state-of-the-art technology and practice” — listing several items in its defence:

  • All source code has been and will be available for inspection on GitHub (github.com/wireapp).
  • All communication through Wire is secured with end-to-end encryption — messages, conference calls, files. The decryption keys are only stored on user devices, not on our servers. It also gives companies the option to deploy their own instances of Wire in their own data centers.
  • Wire has started working on a federated protocol to connect on-premise installations and make messaging and collaboration more ubiquitous.
  • Wire believes that data protection is best achieved through state-of-the-art encryption and continues to innovate in that space with Messaging Layer Security (MLS).

But where data privacy and US law are concerned, it’s complicated. Snowden famously leaked scores of classified documents disclosing the extent of US government mass surveillance programs in 2013, including how data-harvesting was embedded in US-based messaging and technology platforms.

Six years on, the political and legal ramifications of that disclosure are still playing out — with a key judgement pending from Europe’s top court which could yet unseat the current data transfer arrangement between the EU and the US.

Privacy versus security

Wire launched at a time when interest in messaging apps was at a high-water mark. The company made its debut in the middle of February 2014, and it was only one week later that Facebook acquired WhatsApp for the princely sum of $19 billion.

We described Wire’s primary selling point at the time as a “reimagining of how a communications tool like Skype should operate had it been built today” rather than in 2003. That meant encryption and privacy protection, but also better audio tools and file compression and more.

It was a pitch that seemed especially compelling considering the background of the company. Skype co-founder Janus Friis and funds connected to him were the startup’s first backers (and they remain the largest shareholders); Wire was co-founded by Skype alums Jonathan Christensen and Alan Duric (the former is no longer with the company, the latter is its CTO); and even new investor Morpheus has Skype roots.

Yet even with that Skype pedigree, the strategy faced a big challenge.

“The consumer messaging market is lost to the Facebooks of the world, which dominate it,” Brogger said today. “However, we made a clear insight, which is the core strength of Wire: security and privacy.”

That, combined with trend around the consumerization of IT that’s brought new tools to business users, is what led Wire to the enterprise market in 2017 — a shift that’s seen it pick up a number of big names among its 700 enterprise customers, including Fortum, Aon, EY and SoftBank Robotics.

But fast forward to today, and it seems that even if security and privacy are two sides of the same coin, deciding which one to optimise for in terms of features and future development may not be so simple. That tension is part of the question now, and part of what critics are concerned with.

“Wire was always for profit and planned to follow the typical venture backed route of raising rounds to accelerate growth,” one source familiar with the company told us. “However, it took time to find its niche (B2B, enterprise secure comms).

“It needed money to keep the operations going and growing. [But] the new CEO, who joined late 2017, didn’t really care about the free users, and the way I read it now, the transformation is complete: ‘If Wire works for you, fine, but we don’t really care about what you think about our ownership or funding structure as our corporate clients care about security, not about privacy.’”

And that is the message you get from Brogger, too, who describes individual consumers as “not part of our strategy”, but also not entirely removed from it, either, as the focus shifts to enterprises and their security needs.

Brogger said there are still half a million individuals on the platform, and that Wire will come up with ways to continue to serve them under the same privacy policies and with the same kind of service as the enterprise users. “We want to give them all the same features with no limits,” he added. “We are looking to switch it into a freemium model.”

On the other side, “We are having a lot of inbound requests on how Wire can replace Skype for Business,” he said. “We are the only one who can do that with our level of security. It’s become a very interesting journey and we are super excited.”

Part of the company’s push into enterprise has also seen it make a number of hires. This has included bringing in two former Huddle C-suite execs, Brogger as CEO and Rasmus Holst as chief revenue officer — a bench that Wire expanded this week with three new hires from three other B2B businesses: a VP of EMEA sales from New Relic, a VP of finance from Contentful, and a VP of Americas sales from Xeebi.

Such growth comes with a price-tag attached to it, clearly. Which is why Wire is opening itself to more funding and more exposure in the US, but also more scrutiny and questions from those who counted on its services before the change.

Brogger said inbound interest has been strong and he expects the startup’s next round to close in the next two to three months.

Jul 8, 2019

The startups creating the future of RegTech and financial services

Technology has been used to manage regulatory risk since the advent of the ledger book (or the Bloomberg terminal, depending on your reference point). However, the cost-consciousness internalized by banks during the 2008 financial crisis combined with more robust methods of analyzing large datasets has spurred innovation and increased efficiency by automating tasks that previously required manual reviews and other labor-intensive efforts.

So even if RegTech wasn’t born during the financial crisis, it was probably old enough to drive a car by 2008. The intervening 11 years have seen RegTech’s scope and influence grow.

RegTech startups targeting financial services, or FinServ for short, require very different growth strategies — even compared to other enterprise software companies. From a practical perspective, everything from the security requirements influencing software architecture and development to the sales process is substantially different for FinServ RegTechs.

The most successful RegTechs are those that draw on expertise from security-minded engineers, FinServ-savvy sales staff as well as legal and compliance professionals from the industry. FinServ RegTechs have emerged in a number of areas due to the increasing directives emanating from financial regulators.

This new crop of startups performs sophisticated background checks and transaction monitoring for anti-money laundering purposes pursuant to the Bank Secrecy Act, the Office of Foreign Asset Control (OFAC) and FINRA rules; tracks supervision requirements and retention for electronic communications under FINRA, SEC, and CFTC regulations; as well as monitors information security and privacy laws from the EU, SEC, and several US state regulators such as the New York Department of Financial Services (“NYDFS”).
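Under the hood, the transaction-monitoring piece of this work frequently comes down to rules evaluated over a customer’s recent activity, for example flagging cash deposits that individually stay under the Bank Secrecy Act’s $10,000 currency-reporting threshold but sum past it within a short window (a classic structuring pattern). The sketch below is a deliberately simplified, rule-based illustration rather than any specific vendor’s system; the one-day window and flag logic are assumptions.

```python
from datetime import timedelta

CTR_THRESHOLD = 10_000  # USD: Bank Secrecy Act currency transaction report threshold

def flag_possible_structuring(transactions: list, window=timedelta(days=1)) -> set:
    """Flag customers whose deposits inside a window sum past the threshold
    while each individual deposit stays below it."""
    by_customer = {}
    for tx in transactions:  # each tx: {"customer": str, "amount": float, "time": datetime}
        by_customer.setdefault(tx["customer"], []).append(tx)

    flagged = set()
    for customer, txs in by_customer.items():
        txs.sort(key=lambda t: t["time"])
        for i, anchor in enumerate(txs):
            windowed = [t for t in txs[i:] if t["time"] - anchor["time"] <= window]
            below_individually = all(t["amount"] < CTR_THRESHOLD for t in windowed)
            if below_individually and sum(t["amount"] for t in windowed) >= CTR_THRESHOLD:
                flagged.add(customer)
    return flagged
```

Production systems layer many such rules, plus screening against sanctions lists such as OFAC’s, on top of the background checks mentioned above.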

In this article, we’ll examine RegTech startups in these three fields to determine how solutions have been structured to meet regulatory demand as well as some of the operational and regulatory challenges they face.

Know Your Customer and Anti-Money Laundering

Apr 2, 2019

How to handle dark data compliance risk at your company

Slack and other consumer-grade productivity tools have been taking off in workplaces large and small — and data governance hasn’t caught up.

Whether it’s litigation, compliance with regulations like GDPR or concerns about data breaches, legal teams need to account for new types of employee communication. And that’s hard when work is happening across the latest messaging apps and SaaS products, which make data searchability and accessibility more complex.

Here’s a quick look at the problem, followed by our suggestions for best practices at your company.

Problems

The increasing frequency of reported data breaches and expanding jurisdiction of new privacy laws are prompting conversations about dark data and risks at companies of all sizes, even small startups. Data risk discussions necessarily include the risk of a data breach, as well as preservation of data. Just two weeks ago it was reported that Jared Kushner used WhatsApp for official communications and screenshots of those messages for preservation, which commentators say complies with record keeping laws but raises questions about potential admissibility as evidence.

Jan 26, 2019

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks as France’s privacy regulator placed a record fine on Google under Europe’s General Data Protection Regulation (GDPR), a fine which the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include:

  • Albert Gidari, Consulting Director of Privacy at the Stanford Center for Internet and Society
  • Gabriel Weinberg, Founder and CEO of DuckDuckGo
  • Melika Carroll, Senior Vice President, Global Government Affairs at Internet Association
  • Dr. Johnny Ryan, Chief Policy & Industry Relations Officer at Brave
  • John Miller, VP for Global Policy and Law at the Information Technology Industry Council (ITI)
  • Nuala O’Connor, President and CEO of the Center for Democracy & Technology
  • Chris Baker, Senior Vice President and General Manager of EMEA at Box
  • Christopher Wolf, Founder and Chair of the Future of Privacy Forum

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: Arman.Tabatabai@techcrunch.com.


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.””

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies is a good thing, but it doesn’t solve the privacy paradox that those same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), or that the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of the rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high-tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches, like the framework ITI released last fall, must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and reading and parsing as many different policies and settings. No individual has the time nor capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, but not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

With the EU GDPR in effect since last May (setting a standard other nations are emulating), with the adoption of a highly-regulatory and broadly-applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come where Congress will adopt a comprehensive federal privacy law. Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with the highly-developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.
