Jan 26, 2019
--

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks, as France’s privacy regulator issued a record fine against Google under Europe’s General Data Protection Regulation (GDPR), a penalty the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy, and has the outlook changed?”

This week’s participants include:

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: Arman.Tabatabai@techcrunch.com.


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more ‘privacy.’”

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies are good things, but they don’t solve the privacy paradox that those same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), or that the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of the rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high-tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches, like the framework ITI released last fall, must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and reading and parsing as many different policies and settings. No individual has the time nor capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, but not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

With the EU GDPR in effect since last May (setting a standard other nations are emulating),

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

with the adoption of a highly-regulatory and broadly-applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come where Congress will adopt a comprehensive federal privacy law. Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with the highly-developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.

Jan 16, 2019
--

HyperScience, the machine learning startup tackling data entry, raises $30 million Series B

HyperScience, the machine learning company that turns human readable data into machine readable data, has today announced the close of a $30 million Series B funding round led by Stripes Group, with participation from existing investors FirstMark Capital and Felicis Ventures, as well as new investors Battery Ventures, Global Founders Capital, TD Ameritrade and QBE.

HyperScience launched out of stealth in 2016 with a suite of enterprise products focused on the healthcare, insurance, finance and government industries. The original products were HSForms (which handled data-entry by converting hand-written forms to digital), HSFreeForm (which did a similar function for hand-written emails or other non-form content) and HSEvaluate (which could parse through complex data on a form to help insurance companies approve or deny claims by pulling out all the relevant info).

Now, the company has combined all three of those products into a single product called HyperScience. The product is meant to help companies and organizations reduce their data-entry backlog and better serve their customers, saving money and resources.

The idea is that many of the forms we use in life or in the workplace are in an arbitrary format. My bank statements don’t look the same as your bank statements, and invoices from your company might look different than invoices from my company.

HyperScience is able to take those forms and pipe them into the system quickly and easily, without help from humans.

Instead of charging by seat, HyperScience charges by document, as the mere use of HyperScience should mean that fewer humans are actually “using” the product.

The latest round brings HyperScience’s total funding to $50 million, and the company plans to use a good deal of that funding to grow the team.

“We have a product that works and a phenomenally good product market fit,” said CEO Peter Brodsky. “What will determine our success is our ability to build and scale the team.”

Jan 15, 2019
--

Microsoft continues to build government security credentials ahead of JEDI decision

While the DoD is in the process of reviewing the $10 billion JEDI cloud contract RFPs (assuming the work continues during the government shutdown), Microsoft continues to build up its federal government security bona fides, regardless.

Today the company announced it has achieved the highest level of federal government clearance for the Outlook mobile app, allowing US Government Community Cloud (GCC) High and Department of Defense employees to use the mobile app. This is on top of the FedRAMP compliance the company achieved last year.

“To meet the high level of government security and compliance requirements, we updated the Outlook mobile architecture so that it establishes a direct connection between the Outlook mobile app and the compliant Exchange Online backend services using a native Microsoft sync technology and removes middle tier services,” the company wrote in a blog post announcing the update.

The update will allow these highly security-conscious employees to access some of the more recent updates to Outlook Mobile, such as the ability to add a comment when canceling an event.

This is in line with government security updates the company made last year. While none of these changes are specifically designed to help win the $10 billion JEDI cloud contract, they certainly help make a case for Microsoft from a technology standpoint.

As Microsoft corporate vice president for Azure Julia White stated in a blog post last year, which we covered: “Moving forward, we are simplifying our approach to regulatory compliance for federal agencies, so that our government customers can gain access to innovation more rapidly.” The Outlook Mobile release is clearly in line with that.

Today’s announcement comes after the Pentagon announced just last week that it has awarded Microsoft a separate large contract for $1.7 billion. This involves providing Microsoft Enterprise Services for the Department of Defense (DoD), Coast Guard and the intelligence community, according to a statement from DoD.

All of this comes ahead of a decision on the massive $10 billion, winner-take-all cloud contract. Final RFPs were submitted in October and the DOD is expected to make a decision in April. The process has not been without controversy, with Oracle and IBM submitting formal protests even before the RFP deadline — and more recently, Oracle filing a lawsuit alleging the contract terms violate federal procurement laws. Oracle has been particularly concerned that the contract was designed to favor Amazon, a point the DOD has repeatedly denied.

Jan 10, 2019
--

Daily Crunch: How the government shutdown is damaging cybersecurity and future IPOs

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. How Trump’s government shutdown is harming cyber and national security
The government has been shut down for nearly three weeks, and there’s no end in sight. While most of the core government departments — State, Treasury, Justice and Defense — are still operational, others like Homeland Security, which takes the bulk of the government’s cybersecurity responsibilities, are suffering the most.

2. With SEC workers offline, the government shutdown could screw IPO-ready companies
The SEC has been shut down since December 27 and only has 285 of its 4,436 employees on the clock for emergency situations. While tech’s most buzz-worthy unicorns like Uber and Lyft won’t suffer too much from the shutdown, smaller businesses, particularly those in need of an infusion of capital to continue operating, will bear the brunt of any IPO delays.

3. The state of seed 

In 2018, seed activity as a percentage of all deals shrank from 31 percent to 25 percent — a decade low — while the share and size of late-stage deals swelled to record highs.

4. Banking startup N26 raises $300 million at $2.7 billion valuation

N26 is building a retail bank from scratch. The company prides itself on the speed and simplicity of setting up an account and managing assets. In the past year, N26’s valuation has exploded as its user base has tripled, with nearly a third of customers paying for a premium account.

5. E-scooter startup Bird is raising another $300M 

Bird is reportedly nearing a deal to extend its Series C round with a $300 million infusion led by Fidelity. The funding, however, comes at a time when scooter companies are losing steam and struggling to prove that their products are the clear solution to last-mile transportation.

6. AWS gives open source the middle finger 

It’s no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

7. The Galaxy S10 is coming on February 20 

Looks like Samsung is giving Mobile World Congress the cold shoulder and has decided to announce its latest flagship phone a week earlier in San Francisco.

Jan 7, 2019
--

HQ2 fight continues as New York City and Seattle officials hold anti-Amazon summit

The heated debate around Amazon’s recently announced Long Island City “HQ2” is showing no signs of cooling down.

On Monday morning, the Retail, Wholesale and Department Store Union (RWDSU) hosted a briefing in which labor officials, economic development analysts, Amazon employees and elected New York State and City representatives further underlined concerns around the HQ2 process, the awarded incentives, and the potential impacts Amazon’s presence would have on city workers and residents.

While many of the arguments posed at the Summit weren’t necessarily new, the wide variety of stakeholders that showed up to express concern looked to contextualize the far-reaching risks associated with the deal.

The day began with representatives from New York union groups recounting Amazon’s shaky history with employee working conditions and questioning how the city’s working standards will be impacted if the 50,000 promised jobs do actually show up.

Two current employees working in an existing Amazon New York City warehouse in Staten Island provided poignant examples of improper factory conditions and promised employee benefits that never came to fruition. According to the workers, Amazon has yet to follow through on shuttle services and ride-sharing services that were promised to ease worker commutes, forcing the workers to resort to overcrowded and unreliable public transportation. One of the workers detailed that with his now four-hour commute to get to and from work, coupled with his meaningfully long shifts, he’s been unable to see his daughter for weeks.

Various economic development groups and elected officials, including New York City Comptroller Scott Stringer, City Council Speaker Corey Johnson, City Council Member Jimmy Van Bramer, and New York State Senator Mike Gianaris, supported the labor arguments with spirited teardowns of the economic terms of the deal.

Like many critics of the HQ2 process, the speakers expressed their belief that Amazon knew where it wanted to bring its second headquarters throughout the entirety of its auction process, given the talent pool and resources in the chosen locations, and that the entire undertaking was meant to squeeze out the best economic terms possible. And according to City Council Speaker Johnson, New York City “got played.”

Comptroller Stringer argued that Amazon is taking advantage of New York’s Relocation and Employment Assistance Program (REAP) and Industrial and Commercial Abatement Program (ICAP), which Stringer described as outdated and in need of reform, to receive the majority of the $2 billion-plus in promised economic incentives that made it the fourth largest corporate incentive deal in US history.

The speakers continued to argue that the unprecedented level of incentives will be nearly impossible to recoup and that New York will also face economic damages from lower sales tax revenue as improved Amazon service in the city cannibalizes local brick & mortar retail.

Fears over how Amazon’s presence will impact the future of New York were given more credibility with the presence of Seattle City Council members Lisa Herbold & Teresa Mosqueda, who had flown to New York from Seattle to discuss lessons learned from having Amazon’s Headquarters in the city and to warn the city about the negative externalities that have come with it.

Herbold and Mosqueda focused less on an outright rejection of the deal and instead emphasized that New York was in a position to negotiate for better terms focused on equality and corporate social responsibility, which could help the city avoid the socioeconomic turnover that has plagued Seattle and could create a new standard for public-private partnerships.

While the New York City Council noted it was looking into legal avenues, the opposition seemed to have limited leverage to push back or meaningfully negotiate the deal. According to state officials, the clearest path to fighting the deal would be through votes by the state legislature and through the state Public Authorities Control Board, which has to unanimously approve the subsidy package.

With the significant turnout seen at Monday’s summit, which included several high-ranking state and city officials, it seems clear that we’re still in the early innings of what’s likely to be a long battle ahead to close the HQ2 deal.

Amazon did not immediately return requests for comment.

Dec 15, 2018
--

The limits of coworking

It feels like there’s a WeWork on every street nowadays. Take a walk through midtown Manhattan (please don’t actually) and it might even seem like there are more WeWorks than office buildings.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: Arman.Tabatabai@techcrunch.com.

Co-working has permeated cities around the world at an astronomical rate. The rise has been so remarkable that even the headline-dominating SoftBank seems willing to bet the success of its colossal Vision Fund on the shift continuing, having poured billions into WeWork – including a recent $4.4 billion top-up that saw the co-working king’s valuation spike to $45 billion.

And there are no signs of the trend slowing down. With growing frequency, new startups are popping up across cities looking to turn under-utilized brick-and-mortar or commercial space into low-cost co-working options.

It’s a strategy spreading through every type of business from retail – where companies like Workbar have helped retailers offer up portions of their stores – to more niche verticals like parking lots – where companies like Campsyte are transforming empty lots into spaces for outdoor co-working and corporate off-sites. Restaurants and bars might even prove most popular for co-working, with startups like Spacious and KettleSpace turning restaurants that are closed during the day into private co-working space during their off-hours.

Before you know it, a startup will be strapping an Aeron chair to the top of a telephone pole and calling it “WirelessWorking”.

But is there a limit to how far co-working can go? Are all of the storefronts, restaurants and open spaces that line city streets going to be filled with MacBooks, cappuccinos and Moleskine notebooks? That might be too tall a task, even for the movement taking over skyscrapers.

The co-working of everything

Photo: Vasyl Dolmatov / iStock via Getty Images

So why is everyone trying to turn your favorite neighborhood dinner spot into a part-time WeWork in the first place? Co-working offers a particularly compelling use case for under-utilized space.

First, co-working falls under the same general commercial zoning categories as most independent businesses, and very little additional infrastructure – outside of a few extra power outlets and some decent WiFi – is required to turn a space into an effective replacement for the often crowded and distracting coffee shops used by the price-sensitive, lean, remote, or nomadic workers that make up a growing portion of the workforce.

Thus, businesses can list their space at little-to-no cost, without having to deal with structural layout changes that are more likely to arise when dealing with pop-up solutions or event rentals.

On the supply side, these co-working networks don’t have to purchase leases or make capital improvements to convert each space, and so they’re able to offer more square footage per member at a much lower rate than traditional co-working spaces. Spacious, for example, charges a monthly membership fee of $99–$129 for access to its network of vetted restaurants, which is cheap compared to a WeWork desk that can cost anywhere from $300 to $800 per month in New York City.

Customers get more affordable co-working alternatives, while tight-margin businesses facing increasing rents for under-utilized property are able to pool resources into a network and access a completely new revenue stream at very little cost. The value proposition is proving to be seriously convincing in initial cities – Spacious told the New York Times that so many restaurants were applying to join the network of their own volition that only five percent of total applicants were ultimately accepted.

Basically, the business model here checks a lot of the boxes for successful marketplaces: Acquisition and transaction friction is low for both customers and suppliers, with both seeing real value that didn’t exist previously. Unit economics seem strong, and vetting on both sides of the market creates trust and community. Finally, there’s an observable network effect whereby suppliers benefit from higher occupancy as more customers join the network, while customers benefit from added flexibility as more locations join the network.

… Or just the co-working of some things

Photo: Caiaimage / Robert Daly via Getty Images

So is this the way of the future? The strategy is really compelling, with a creative solution that offers tremendous value to businesses and workers in major cities. But concerns around the scalability of demand make it difficult to picture this phenomenon becoming ubiquitous across cities or something that reaches the scale of a WeWork or large conventional co-working player.

All these companies seem to be competing for a similar demographic, not only with one another, but also with coffee shops, free workspaces, and other flexible co-working options like Croissant, which provides members with access to unused desks and offices in traditional co-working spaces. Like Spacious and KettleSpace, Croissant doesn’t hold property leases itself – the spaces on its network do, and they’re already built for co-working – so Croissant can still offer comparatively attractive rates.

The offer seems most compelling for someone who is able to work without a stable location and without the amenities offered in traditional co-working or office spaces, and who is price-sensitive enough to trade those benefits for a lower price. Yet at the same time, they can’t be so price-sensitive that they would prefer working out of free – or close to free – coffee shops instead of paying a monthly membership fee to avoid the frictions that can come with them.

And it seems unclear whether the problem or solution is as poignant outside of high-density cities – let alone outside of high-density areas of high-density cities.

Without density, is the competition for space or traffic in coffee shops and free workspaces still high enough that a membership fee is worth paying? Would the desire for a private working environment, or for a working community, be enough to incentivize membership alone? And in less-dense, more sprawl-oriented cities, members could also face the risk of having to travel significant distances if space isn’t available in nearby locations.

While the emerging workforce is trending towards more remote, agile and nomadic workers who can do more with less, it’s less certain how many will actually fit the profile that opts out of both more costly but stable traditional workspaces and potentially frustrating but free alternatives. And if the lack of density does prove to be an issue, how many of those workers will live in hyper-dense areas, especially if they are price-sensitive and can work and live anywhere?

To be clear, I’m not saying the companies won’t see significant growth – in fact, I think they will. But will the trend of monetizing unused space through co-working come to permeate cities everywhere and do so with meaningful occupancy? Maybe not. That said, there is still a sizable and growing demographic that needs these solutions, and the value proposition is significant in many major urban areas.

The companies are creating real value, creating more efficient use of wasted space, and fixing a supply-demand issue. And the cultural value of even modestly helping independent businesses keep the lights on seems to outweigh the cultural “damage” some may fear in turning them into part-time co-working spaces.


Dec 8, 2018
--

Why you need a supercomputer to build a house

When the hell did building a house become so complicated?

Don’t let the folks on HGTV fool you. The process of building a home nowadays is incredibly painful. Just applying for the necessary permits can be a soul-crushing undertaking that’ll have you running around the city, filling out useless forms, and waiting in motionless lines under fluorescent lights at City Hall wondering whether you should have just moved back in with your parents.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: Arman.Tabatabai@techcrunch.com.

And to actually get approval for those permits, your future home will have to satisfy a set of conditions that is a factorial of complex and conflicting federal, state and city building codes, separate sets of fire and energy requirements, and quasi-legal construction standards set by various independent agencies.

It wasn’t always this hard – remember when you’d hear people say “my grandparents built this house with their bare hands?” These proliferating rules have been among the main causes of the rapidly rising cost of housing in America and other developed nations. The good news is that a new generation of startups is identifying and simplifying these thickets of rules, and the future of housing may be determined as much by machine learning as woodworking.

When directions become deterrents

Photo by Bill Oxford via Getty Images

Cities once solely created the building codes that dictate the requirements for almost every aspect of a building’s design, and they structured those guidelines based on local terrain, climates and risks. Over time, townships, states, federally-recognized organizations and independent groups that sprouted from the insurance industry further created their own “model” building codes.

The complexity starts here. The federal codes and independent agency standards are optional for states, who have their own codes which are optional for cities, who have their own codes that are often inconsistent with the state’s and are optional for individual townships. Thus, local building codes are these ever-changing and constantly-swelling mutant books made up of whichever aspects of these different codes local governments choose to mix together. For instance, New York City’s building code is made up of five sections, 76 chapters and 35 appendices, alongside a separate set of 67 updates (The 2014 edition is available as a book for $155, and it makes a great gift for someone you never want to talk to again).

In short: what a shit show.

Because of the hyper-localized and overlapping nature of building codes, a home in one location can be subject to a completely different set of requirements than one elsewhere. So it’s really freaking difficult to even understand what you’re allowed to build, the conditions you need to satisfy, and how to best meet those conditions.

There are certain levels of complexity in housing codes that are hard to avoid. The structural integrity of a home is dependent on everything from walls to erosion and wind-flow. There are countless types of material and technology used in buildings, all of which are constantly evolving.

Thus, each thousand-page codebook from the various federal, state, city, township and independent agencies – all dictating interconnecting, location- and structure-dependent needs – leads to an incredibly expansive decision tree that requires an endless set of simulations to fully understand all the options you have to reach compliance, and their respective cost-effectiveness and efficiency.

So homebuilders are often forced to turn to costly consultants or settle on designs that satisfy code but aren’t cost-efficient. And if construction issues cause you to fall short of the outcomes you expected, you could face hefty fines, delays or gigantic cost overruns from redesigns and rebuilds. All these costs flow through the lifecycle of a building, ultimately impacting affordability and access for homeowners and renters.

Startups are helping people crack the code

Photo by Caiaimage/Rafal Rodzoch via Getty Images

Strap on your hard hat – there may be hope for your dream home after all.

The friction, inefficiencies, and pure agony caused by our increasingly convoluted building codes have given rise to a growing set of companies that are helping people make sense of the home-building process by incorporating regulations directly into their software.

Using machine learning, their platforms run advanced scenario-analysis around interweaving building codes and inter-dependent structural variables, allowing users to create compliant designs and regulatory-informed decisions without having to ever encounter the regulations themselves.
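To make the shape of that analysis concrete, here is a minimal sketch of a brute-force scenario check, assuming a couple of invented codebooks and wall assemblies; none of the rules, R-values or costs below come from any real code or from the startups mentioned here. The idea is simply that overlapping codes collapse into the strictest effective requirement, and candidate designs are enumerated and filtered for compliance.

```python
# Toy scenario analysis over overlapping building codes.
# All codebooks, requirements, assemblies and costs below are invented
# placeholders for illustration only.
from itertools import product

# Hypothetical overlapping codebooks, each with its own minimums/maximums.
codebooks = {
    "state_energy_code":    {"min_wall_r": 20, "max_window_ratio": 0.40},
    "city_amendment":       {"min_wall_r": 23, "max_window_ratio": 0.35},
    "model_code_reference": {"min_wall_r": 21, "max_window_ratio": 0.45},
}

# The effective requirement is the strictest value across all applicable codes.
effective = {
    "min_wall_r": max(c["min_wall_r"] for c in codebooks.values()),
    "max_window_ratio": min(c["max_window_ratio"] for c in codebooks.values()),
}

# Candidate design choices (also invented).
wall_options = [
    {"name": "2x6 + batt",          "r_value": 21, "cost_per_sqft": 14},
    {"name": "2x6 + exterior foam", "r_value": 26, "cost_per_sqft": 19},
    {"name": "double stud",         "r_value": 32, "cost_per_sqft": 24},
]
window_ratios = [0.30, 0.35, 0.40]

# Enumerate every scenario, keep the compliant ones, and rank them by cost.
compliant = [
    (wall, ratio)
    for wall, ratio in product(wall_options, window_ratios)
    if wall["r_value"] >= effective["min_wall_r"]
    and ratio <= effective["max_window_ratio"]
]
compliant.sort(key=lambda pair: pair[0]["cost_per_sqft"])

for wall, ratio in compliant:
    print(f"{wall['name']:>18}  window ratio {ratio:.2f}  ${wall['cost_per_sqft']}/sqft")
```

A real platform replaces these hand-written rules with encoded or machine-learned versions of actual codebooks and sweeps far more variables, but the compliance-filtering structure is the same.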

For example, the prefab housing startup Cover is helping people figure out what kind of backyard homes they can design and build on their properties based on local zoning and permitting regulations.

Some startups are trying to provide similar services to developers of larger scale buildings as well. Just this past week, I covered the seed round for a startup called Cove.Tool, which analyzes local building energy codes – based on location and project-level characteristics specified by the developer – and spits out the most cost-effective and energy-efficient resource mix that can be built to hit local energy requirements.

And startups aren’t just simplifying the regulatory pains of the housing process through building codes. Envelope is helping developers make sense of our equally tortuous zoning codes, while Cover and companies like Camino are helping steer home and business-owners through arduous and analog permitting processes.

Look, I’m not saying codes are bad. In fact, I think building codes are good and necessary – no one wants to live in a home that might cave in on itself the next time it snows. But I still can’t help but ask myself why the hell does it take AI to figure out how to build a house? Why do we have building codes that take a supercomputer to figure out?

Ultimately, it would probably help to have more standardized building codes that we actually clean up from time to time. More regional standardization would greatly reduce the number of conditional branches that exist. And if there were one set of accepted overarching codes that could still set precise requirements for all components of a building, there would only be one path of regulations to follow, greatly reducing the knowledge and analysis necessary to efficiently build a home.

But housing’s inherent ties to geography make standardization unlikely. Each region has different land conditions, climates, priorities and political motivations that cause governments to want their own set of rules.

Instead, governments seem to be fine with sidestepping the issues caused by hyper-regional building codes and leaving it up to startups to help people wade through the ridiculousness that paves the home-building process, in the same way Concur aids employees with infuriating corporate expensing policies.

For now, we can count on startups that are unlocking value and making housing more accessible, simpler and cheaper just by making the rules easier to understand. And maybe one day my grandkids can tell their friends how their grandpa built his house with his own supercomputer.


Dec 4, 2018
--

Cove.Tool wants to solve climate change one efficient building at a time

As the fight against climate change heats up, Cove.Tool is looking to help tackle carbon emissions one building at a time.

The Atlanta-based startup provides an automated big-data platform that helps architects, engineers and contractors identify the most cost-effective ways to make buildings compliant with energy efficiency requirements. After raising an initial round earlier this year, the company completed the final close of a $750,000 seed round. Since the initial announcement of the round earlier this month, Urban Us, the early-stage fund focused on companies transforming city life, has joined a syndicate that includes Tech Square Labs and Knoll Ventures.

Helping firms navigate a growing suite of energy standards and options

Cove.Tool software allows building designers and managers to plug in a variety of building conditions, energy options, and zoning specifications to get to the most cost-effective method of hitting building energy efficiency requirements (Cove.Tool Press Image / Cove.Tool / https://covetool.com).

In the US, the buildings we live and work in contribute more carbon emissions than any other sector. Governments across the country are now looking to improve energy consumption habits by implementing new building codes that set higher energy efficiency requirements for buildings. 

However, figuring out the best ways to meet changing energy standards has become an increasingly difficult task for designers. For one, buildings are subject to differing federal, state and city codes that are all frequently updated and overlaid on one another. Therefore, the specific efficiency requirements for a building can be hard to understand, geographically unique and immensely variable from project to project.

Architects, engineers and contractors also have more options for managing energy consumption than ever before – equipped with tools like connected devices, real-time energy-management software and more-affordable renewable energy resources. And the effectiveness and cost of each resource are also impacted by variables distinct to each project and each location, such as local conditions, resource placement, and factors as specific as the amount of shade a building sees.

With designers and contractors facing countless resource combinations and weightings, Cove.Tool looks to make it easier to identify and implement the most cost-effective and efficient resource bundles that can be used to hit a building’s energy efficiency requirements.

Cove.Tool users begin by specifying a variety of project-specific inputs, which can include a vast amount of extremely granular detail around a building’s use, location, dimensions or otherwise. The software runs the inputs through a set of parametric energy models before spitting out the optimal resource combination under the set parameters.
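As a rough illustration of what such a parametric sweep looks like (not Cove.Tool’s actual models – the formula, options, prices and energy target below are invented placeholders), the optimization can be thought of as: enumerate the input combinations, score each one with an energy model, discard the ones that miss the requirement, and return the cheapest of the rest.

```python
# Toy parametric energy-model sweep. The "model" is a made-up linear formula
# and every option, cost and target is a placeholder; a real tool would run
# physics-based simulations against actual code requirements.
from itertools import product

options = {
    "window_area_pct":   [20, 30, 40],        # percent of facade that is glazed
    "wall_insulation_r": [20, 26, 32],        # wall R-value
    "hvac_efficiency":   [0.80, 0.90, 0.96],  # seasonal HVAC efficiency
}

costs = {  # hypothetical installed-cost deltas, $ per sq ft of floor area
    "window_area_pct":   {20: 4, 30: 6, 40: 9},
    "wall_insulation_r": {20: 3, 26: 5, 32: 8},
    "hvac_efficiency":   {0.80: 6, 0.90: 8, 0.96: 11},
}

def annual_energy_use(window_pct, wall_r, hvac_eff):
    """Toy model: more glazing and less insulation raise the load,
    while a better HVAC system reduces delivered energy."""
    load = 40 + 0.5 * window_pct + 600 / wall_r  # kBtu/sqft/year, invented
    return load / hvac_eff

TARGET = 80  # hypothetical code ceiling, kBtu per sq ft per year

best = None
for w, r, h in product(*options.values()):
    if annual_energy_use(w, r, h) > TARGET:
        continue  # this combination fails the efficiency requirement
    cost = (costs["window_area_pct"][w]
            + costs["wall_insulation_r"][r]
            + costs["hvac_efficiency"][h])
    if best is None or cost < best[0]:
        best = (cost, {"window_area_pct": w,
                       "wall_insulation_r": r,
                       "hvac_efficiency": h})

print("cheapest compliant combination:", best)
```

The real product layers graphical analysis and far richer inputs on top, but the core loop – sweep, score, filter, rank – is the part being automated.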

For example, if a project is located on a site with heavy wind flow in a cold city, the platform might tell you to increase window size and spend on energy efficient wall installations, while reducing spending on HVAC systems. Along with its recommendations, Cove.Tool provides in-depth but fairly easy-to-understand graphical analyses that illustrate various aspects of a building’s energy performance under different scenarios and sensitivities.

Cove.Tool users can input granular project-specifics, such as shading from particular beams and facades, to get precise analyses around a building’s energy performance under different scenarios and sensitivities.

Democratizing building energy modeling

Traditionally, the design process for a building’s energy system can be quite painful for architecture and engineering firms.

An architect would send initial building designs to engineers, who then test out a variety of energy system scenarios over the course of a few weeks. By the time the engineers are able to come back with an analysis, the architects have often made significant design changes, which then get sent back to the engineers, forcing the energy plan to constantly run one to three months behind the rest of the building. This process can not only lead to less-efficient and more-expensive energy infrastructure, but the hectic back-and-forth can also lead to longer project timelines, unexpected construction issues, delays and budget overruns.

Cove.Tool effectively looks to automate the process of “energy modeling,” which aims to ease the pains of energy design in the same way Building Information Modeling (BIM) has transformed architectural design and construction. Just as BIM creates predictive digital simulations that test all the design attributes of a project, energy modeling uses building specs, environmental conditions, and various other parameters to simulate a building’s energy efficiency, costs and footprint.

By using energy modeling, developers can optimize the design of the building’s energy system, adjust plans in real-time, and more effectively manage the construction of a building’s energy infrastructure. However, the expertise needed for energy modeling falls outside the comfort zones of many firms, who often have to outsource the task to expensive consultants.

The frustrations of energy system design and the complexities of energy modeling are ones the Cove.Tool team knows well. Patrick Chopson and Sandeep Ajuha, two of the company’s three co-founders, are former architects who worked as energy modeling consultants when they first began building out the Cove.Tool software.

After seeing their clients’ initial excitement over the ability to quickly analyze millions of combinations and instantly identify the ones that produce cost and energy savings, Patrick and Sandeep teamed up with CTO Daniel Chopson and focused full-time on building out a comprehensive automated solution that would allow firms to run energy modeling analysis without costly consultants, more quickly, and through an interface that would be easy enough for an architectural intern to use.

So far there seems to be serious demand for the product, with the company already boasting an impressive roster of customers that includes several of the country’s largest architecture firms, such as HGA, HKS and Cooper Carry. And the platform has delivered compelling results – for example, one residential developer was able to identify energy solutions that cost $2 million less than the building’s original model. With the funds from its seed round, Cove.Tool plans to further enhance its sales effort while continuing to develop additional features for the platform.

Changing decision-making and fighting climate change

The value proposition Cove.Tool hopes to offer is clear – the company wants to make it easier, faster and cheaper for firms to use innovative design processes that help identify the most cost-effective and energy-efficient solutions for their buildings, all while reducing the risks of redesign, delay and budget overruns.

Longer-term, the company hopes that it can help the building industry move towards more innovative project processes and more informed decision-making while making a serious dent in the fight against emissions.

“We want to change the way decisions are made. We want decisions to move away from being just intuition to become more data-driven,” the co-founders told TechCrunch.

“Ultimately we want to help stop climate change one building at a time. Stopping climate change is such a huge undertaking but if we can change the behavior of buildings it can be a bit easier. Architects and engineers are working hard but they need help and we need to change.”

Oct 26, 2018
--

Microsoft has no problem taking the $10B JEDI cloud contract if it wins

The Pentagon’s $10 billion JEDI cloud contract bidding process has drawn a lot of attention. Earlier this month, Google withdrew, claiming ethical considerations. Amazon’s Jeff Bezos responded in an interview at Wired25 that he thinks that it’s a mistake for big tech companies to turn their back on the U.S. military. Microsoft president Brad Smith agrees.

In a blog post today, he made clear that Microsoft intends to be a bidder in government/military contracts, even if some Microsoft employees have a problem with it. While acknowledging the ethical considerations of today’s most advanced technologies like artificial intelligence, and the ways they could be abused, he explicitly stated that Microsoft will continue to work with the government and the military.

“First, we believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft,” Smith wrote in the blog post.

To that end, the company wants to win that JEDI cloud contract, something it has acknowledged from the start, even while criticizing the winner-take-all nature of the deal. In the blog post, Smith cited the JEDI contract as an example of the company’s desire to work closely with the U.S. government.

“Recently Microsoft bid on an important defense project. It’s the DOD’s Joint Enterprise Defense Infrastructure cloud project – or “JEDI” – which will re-engineer the Defense Department’s end-to-end IT infrastructure, from the Pentagon to field-level support of the country’s servicemen and women. The contract has not been awarded but it’s an example of the kind of work we are committed to doing,” he wrote.

He went on, much like Bezos, to wrap his company’s philosophy in patriotic rhetoric rather than framing it as a matter of winning lucrative contracts. “We want the people of this country and especially the people who serve this country to know that we at Microsoft have their backs. They will have access to the best technology that we create,” Smith wrote.

Microsoft president Brad Smith. Photo: Riccardo Savi/Getty Images

Throughout the piece, Smith walked a fine line, asserting a patriotic duty to support the U.S. military while carefully conceding that there will be different opinions in a large and diverse company population (some of whom aren’t U.S. citizens). Ultimately, he believes that it’s critical that tech companies be included in the conversation when the government uses advanced technologies.

“But we can’t expect these new developments to be addressed wisely if the people in the tech sector who know the most about technology withdraw from the conversation,” Smith wrote.

Like Bezos, he made it clear that the company leadership is going to continue to pursue contracts like JEDI, whether it’s out of a sense of duty or economic practicality or a little of both — whether employees agree or not.

Oct 15, 2018
--

Jeff Bezos is just fine taking the Pentagon’s $10B JEDI cloud contract

Some tech companies might have a problem taking money from the Department of Defense, but Amazon isn’t one of them, as CEO Jeff Bezos made clear today at the Wired25 conference. Just last week, Google pulled out of the running for the Pentagon’s $10 billion, 10-year JEDI cloud contract, but Bezos suggested that he was happy to take the government’s money.

Bezos has been surprisingly quiet about the contract up until now, but his company has certainly attracted plenty of attention from the companies competing for the JEDI deal. Just last week IBM filed a formal protest with the Government Accountability Office claiming that the contract was stacked in favor of one vendor. And while it didn’t name the company directly, the clear implication was that it was the one owned by Bezos.

Last summer, Oracle also filed a protest, complaining that it believed the government had set up the contract to favor Amazon, a charge spokesperson Heather Babb denied. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said last month.

While competitors are clearly worried about Amazon, which has a substantial lead in the cloud infrastructure market, the company itself has kept quiet on the deal until now. Bezos framed his company’s support in patriotic terms, and as a matter of leadership.

“Sometimes one of the jobs of the senior leadership team is to make the right decision, even when it’s unpopular. And if big tech companies are going to turn their back on the US Department of Defense, this country is going to be in trouble,” he said.

“I know everyone is conflicted about the current politics in this country, but this country is a gem,” he added.

While Google tried to frame its decision as taking a principled stand against misuse of technology by the government, Bezos chose another tack, stating that all technology can be used for good or ill. “Technologies are always two-sided. You know there are ways they can be misused as well as used, and this isn’t new,” Bezos told Wired25.

He’s not wrong, of course, but it’s hard not to look at the size of the contract and see it as purely a business decision on his part. Amazon is as hot for that $10 billion contract as any of its competitors. What’s different in this talk is that Bezos made it sound like a purely patriotic decision rather than an economic one.

The Pentagon’s JEDI contract could have a value of up to $10 billion with a maximum length of 10 years. The contract is framed as a two year deal with two three-year options and a final one for two years. The DOD can opt out before exercising any of the options.

Bidding for the contract closed last Friday. The DOD is expected to choose the winning vendor next April.
