Oct 14, 2020
--

Zoom to start first phase of E2E encryption rollout next week

Zoom will begin rolling out end-to-end encryption to users of its videoconferencing platform from next week, it said today.

The platform, whose fortunes have been supercharged by this year's pandemic-driven boom in remote working and socializing, has been working to rebuild its battered reputation on security and privacy since April, after it was called out for misleadingly marketing the service as having E2E encryption (when it did not). E2E encryption is now, finally, on its way.

“We’re excited to announce that starting next week, Zoom’s end-to-end encryption (E2EE) offering will be available as a technical preview, which means we’re proactively soliciting feedback from users for the first 30 days,” it writes in a blog post. “Zoom users — free and paid — around the world can host up to 200 participants in an E2EE meeting on Zoom, providing increased privacy and security for your Zoom sessions.”

Zoom acquired Keybase in May, saying then that it was aiming to develop “the most broadly used enterprise end-to-end encryption offering”.

Initially, however, CEO Eric Yuan said this level of encryption would be reserved for fee-paying users only. After facing a storm of criticism, the company enacted a swift U-turn, saying in June that all users would be provided with the highest level of security, regardless of whether or not they pay for the service.

Zoom confirmed today that Free/Basic users who want access to E2EE will need to complete a one-time verification process — in which it will ask them to provide additional pieces of information, such as verifying a phone number via text message — saying it's implementing this to try to reduce the "mass creation of abusive accounts".

“We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our work with human rights and children’s safety organizations and our users’ ability to lock down a meeting, report abuse, and a myriad of other features made available as part of our security icon — we can continue to enhance the safety of our users,” it writes.

Next week's rollout of a technical preview is phase 1 of a four-phase process to bring E2E encryption to the platform.

This means there are some limitations — including on the features that are available in E2EE Zoom meetings (you won’t have access to join before host, cloud recording, streaming, live transcription, Breakout Rooms, polling, 1:1 private chat, and meeting reactions); and on the clients that can be used to join meetings (for phase 1 all E2EE meeting participants must join from the Zoom desktop client, mobile app, or Zoom Rooms). 

The next phase of the E2EE rollout — which will include “better identity management and E2EE SSO integration”, per Zoom’s blog — is “tentatively” slated for 2021.

From next week, customers wanting to check out the technical preview must enable E2EE meetings at the account level and opt-in to E2EE on a per-meeting basis.

All meeting participants must have the E2EE setting enabled in order to join an E2EE meeting. Hosts can enable E2EE at the account, group, and user level, and the setting can be locked at the account or group level, Zoom notes in an FAQ.

The encryption in use is the same AES 256-bit GCM that Zoom currently employs, but here it is combined with public key cryptography — which means the keys are generated locally, by the meeting host, before being distributed to participants, rather than Zoom's cloud performing the key generation.

“Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents,” it explains of the E2EE implementation.
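That key flow can be illustrated with a minimal sketch: the host generates a random 256-bit meeting key locally and wraps it for each participant using a pairwise Diffie-Hellman secret, so the relay server only ever carries opaque ciphertext. To be clear, the group parameters and the XOR wrapping below are illustrative toys, not Zoom's actual scheme.

```python
# Toy sketch of a host-generated meeting key flow. This is NOT Zoom's real
# protocol; the parameters and wrapping scheme are illustrative assumptions.
import hashlib
import secrets

# A small Mersenne prime keeps the sketch readable; a real system would use
# a vetted group (e.g. an RFC 3526 MODP group) or elliptic curves.
P = 2**127 - 1
G = 3

def keypair():
    priv = secrets.randbits(64)
    return priv, pow(G, priv, P)

def wrap_key(meeting_key: bytes, my_priv: int, their_pub: int) -> bytes:
    # Derive a pairwise Diffie-Hellman secret and XOR-wrap the meeting key.
    # (A real implementation would wrap with AES-GCM, not XOR.)
    shared = pow(their_pub, my_priv, P)
    pad = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    return bytes(a ^ b for a, b in zip(meeting_key, pad))

# The host generates the 256-bit meeting key locally; the server never sees it.
host_priv, host_pub = keypair()
meeting_key = secrets.token_bytes(32)

# Each participant publishes a public key; the host wraps the key for them.
part_priv, part_pub = keypair()
wrapped = wrap_key(meeting_key, host_priv, part_pub)

# The participant unwraps with the same pairwise secret -- wrapping is its own
# inverse here because XOR with the same pad undoes itself.
recovered = wrap_key(wrapped, part_priv, host_pub)
assert recovered == meeting_key
```

The server in this picture only relays `wrapped` and the public keys, which is what "oblivious relay" means in practice: nothing it sees suffices to recover `meeting_key`.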

If you're wondering how you can be sure you've joined an E2EE Zoom meeting, look for a dark padlock displayed atop the green shield icon in the upper left corner of the meeting screen. (Zoom's standard GCM encryption shows a checkmark there.)

Meeting participants will also see the meeting leader’s security code — which they can use to verify the connection is secure. “The host can read this code out loud, and all participants can check that their clients display the same code,” Zoom notes.
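One plausible way such a code can be derived (an assumption for illustration, not Zoom's documented scheme) is to hash the shared meeting key and render the digest as short digit groups that people can read aloud and compare:

```python
# Illustrative sketch of a meeting "security code": every client hashes the
# shared meeting key and renders it as digit groups humans can compare.
# The exact derivation here is an assumption, not Zoom's published scheme.
import hashlib

def security_code(meeting_key: bytes, groups: int = 5) -> str:
    digest = hashlib.sha256(meeting_key).digest()
    # Turn successive 2-byte chunks of the digest into 5-digit groups.
    chunks = [int.from_bytes(digest[i:i + 2], "big") % 100000
              for i in range(0, groups * 2, 2)]
    return " ".join(f"{c:05d}" for c in chunks)

key = b"\x01" * 32
# Host and participants derive the code independently from the same key;
# matching codes mean no intermediary swapped the key in transit.
host_view = security_code(key)
participant_view = security_code(key)
assert host_view == participant_view
print(host_view)
```

The point of reading the code over the (authenticated-by-voice) call itself is that a man-in-the-middle who substituted keys would produce mismatched codes on each side.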

Oct 08, 2020
--

Headroom, which uses AI to supercharge videoconferencing, raises $5M

Videoconferencing has become a cornerstone of how many of us work these days — so much so that one leading service, Zoom, has graduated into verb status because of how much it’s getting used.

But does that mean videoconferencing works as well as it should? Today, a new startup called Headroom is coming out of stealth, tapping into a battery of AI tools — computer vision, natural language processing and more — on the belief that the answer to that question is a clear — no bad Wi-Fi interruption here — “no.”

Headroom not only hosts videoconferences but also provides transcripts, summaries with highlights, gesture recognition, optimized video quality and more. Today it's announcing that it has raised a $5 million seed round as it gears up to launch its freemium service into the world.

You can sign up to the waitlist to pilot it, and get other updates here.

The funding is coming from Anna Patterson of Gradient Ventures (Google’s AI venture fund); Evan Nisselson of LDV Capital (a specialist VC backing companies building visual technologies); Yahoo founder Jerry Yang, now of AME Cloud Ventures; Ash Patel of Morado Ventures; Anthony Goldbloom, the co-founder and CEO of Kaggle.com; and Serge Belongie, Cornell Tech associate dean and professor of Computer Vision and Machine Learning.

It’s an interesting group of backers, but that might be because the founders themselves have a pretty illustrious background with years of experience using some of the most cutting-edge visual technologies to build other consumer and enterprise services.

Julian Green — a British transplant — was most recently at Google, where he ran the company’s computer vision products, including the Cloud Vision API that was launched under his watch. He came to Google by way of its acquisition of his previous startup Jetpac, which used deep learning and other AI tools to analyze photos to make travel recommendations. In a previous life, he was one of the co-founders of Houzz, another kind of platform that hinges on visual interactivity.

Russian-born Andrew Rabinovich, meanwhile, spent the last five years at Magic Leap, where he was the head of AI, and before that, the director of deep learning and the head of engineering. Before that, he too was at Google, as a software engineer specializing in computer vision and machine learning.

You might think that leaving their jobs to build an improved videoconferencing service was an opportunistic move, given the huge surge of use that the medium has had this year. Green, however, tells me that they came up with the idea and started building it at the end of 2019, when the term “COVID-19” didn’t even exist.

“But it certainly has made this a more interesting area,” he quipped, adding that it did make raising money significantly easier, too. (The round closed in July, he said.)

Magic Leap had long been in limbo — AR and VR have proven incredibly tough to build businesses around, especially in the short to medium term, even for a startup with hundreds of millions of dollars in VC backing — and could probably have used some fresh ideas to pivot to. And Google is Google, with everything in tech having an endpoint in Mountain View. So it's curious that the pair decided to strike out on their own to build Headroom rather than pitch the tech to their respective former employers.

Green said the reasons were two-fold. The first has to do with the efficiency of building something when you are small. “I enjoy moving at startup speed,” he said.

And the second has to do with the challenges of building things on legacy platforms versus fresh, from the ground up.

“Google can do anything it wants,” he replied when I asked why he didn’t think of bringing these ideas to the team working on Meet (or Hangouts if you’re a non-business user). “But to run real-time AI on video conferencing, you need to build for that from the start. We started with that assumption,” he said.

All the same, the reasons why Headroom is interesting are also likely to be the ones that pose its biggest challenges. The new ubiquity of videoconferencing (and our present lives working at home) might make us more open to video calling, but for better or worse, we're all now pretty used to what we already use. And many companies have now paid for premium tiers of one service or another, so they may be reluctant to try out new and less-tested platforms.

But as we’ve seen in tech so many times, sometimes it pays to be a late mover, and the early movers are not always the winners.

The first iteration of Headroom will include features that will automatically take transcripts of the whole conversation, with the ability to use the video replay to edit the transcript if something has gone awry; offer a summary of the key points that are made during the call; and identify gestures to help shift the conversation.

And Green tells me that they are already working on features that will be added in future iterations. When a videoconference uses supplementary presentation materials, those can also be processed by the engine for highlights and transcription.

And another feature will optimize the pixels that you see for much better video quality, which should come in especially handy when you or the person/people you are talking to are on poor connections.

“You can understand where and what the pixels are in a video conference and send the right ones,” he explained. “Most of what you see of me and my background is not changing, so those don’t need to be sent all the time.”
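The idea Green describes can be illustrated with a toy block-diff: compare each new frame against the previous one and transmit only the blocks that changed. Real video codecs do this with motion estimation and far more sophistication; the sketch below, with made-up frame data, is purely an illustration of the principle.

```python
# Toy sketch of "send only the pixels that changed": split each frame into
# fixed-size blocks and ship only the blocks that differ from the last frame.
# Illustration only -- real codecs use motion estimation, not naive diffs.

def changed_blocks(prev, curr, block=4):
    """Return {(row, col): block_pixels} for blocks that differ."""
    updates = {}
    h, w = len(curr), len(curr[0])
    for r in range(0, h, block):
        for c in range(0, w, block):
            prev_blk = [row[c:c + block] for row in prev[r:r + block]]
            curr_blk = [row[c:c + block] for row in curr[r:r + block]]
            if prev_blk != curr_blk:
                updates[(r, c)] = curr_blk
    return updates

# A static 8x8 "background" with one moving pixel: only one of the four
# 4x4 blocks needs to be sent.
prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
curr[1][1] = 255
assert list(changed_blocks(prev, curr)) == [(0, 0)]
# An unchanged frame costs nothing at all.
assert changed_blocks(curr, curr) == {}
```

In this toy, a mostly static scene (like a speaker's background) reduces to a handful of changed blocks per frame, which is exactly the bandwidth saving Green is pointing at.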

All of this taps into some of the more interesting aspects of sophisticated computer vision and natural language algorithms. Creating a summary, for example, relies on technology that is able to suss out not just what you are saying, but what are the most important parts of what you or someone else is saying.
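As a toy illustration of the extractive-summary principle (emphatically not Headroom's actual models), sentences can be scored by the frequency of their content words, so sentences that concentrate the conversation's recurring terms rise to the top:

```python
# Minimal extractive-summarization sketch: score each sentence by the
# frequency of its non-stopword terms and keep the top scorers.
# The stopword list and scoring are illustrative assumptions.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "we", "in", "on"}

def summarize(text: str, n: int = 1) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOPWORDS)

    return sorted(sentences, key=score, reverse=True)[:n]

text = ("The roadmap is the key point. The roadmap ships in March. "
        "Someone mentioned lunch.")
print(summarize(text))  # the "roadmap" sentences outscore the aside
```

Production systems layer speaker attribution, discourse structure and learned language models on top, but the underlying question is the same one posed above: which parts of what was said matter most?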

And if you’ve ever been on a videocall and found it hard to make it clear you’ve wanted to say something, without straight-out interrupting the speaker, you’ll understand why gestures might be very useful.

But they can also come in handy if a speaker wants to know if he or she is losing the attention of the audience: The same tech that Headroom is using to detect gestures for people keen to speak up can also be used to detect when they are getting bored or annoyed and pass that information on to the person doing the talking.

“It’s about helping with EQ,” he said, with what I’m sure was a little bit of his tongue in his cheek, but then again we were on a Google Meet, and I may have misread that.

And that brings us to why Headroom is tapping into an interesting opportunity. At their best, when they work, tools like these not only supercharge videoconferences, but they have the potential to solve some of the problems you may have come up against in face-to-face meetings, too. Building software that actually might be better than the “real thing” is one way of making sure that it can have staying power beyond the demands of our current circumstances (which hopefully won’t be permanent circumstances).

Feb 19, 2019
--

GN acquires Altia Systems for $125M to add video to its advanced audio solutions

Some interesting M&A is afoot in the world of hardware and software that’s aiming to improve the quality of audio and video communications over digital networks.

GN Group — the Danish company that broke new ground in mobile when it inked deals first with Apple and then Google to stream audio from their phones directly to smart, connected hearing aids — is expanding from audio to video, and from Europe to Silicon Valley.

Today, the company announced that it would acquire Altia Systems, a startup out of Cupertino that makes a "surround" videoconferencing device and software called the PanaCast (we reviewed it once), designed to replicate the panoramic, immersive experience of vision that we have as humans.

GN is paying $125 million for the startup. For some context, this price represents a decent return: according to PitchBook, Altia was last valued at around $78 million with investors including Intel Capital and others.

Intel’s investment was one of several strategic partnerships that Altia had inked over the years. (Another was with Zoom to provide a new video solution for Uber.)

The Intel partnership, for one, will continue post-acquisition. “Intel invested in Altia Systems to bring an industry leading immersive, Panoramic-4K camera experience to business video collaboration,” said Dave Flanagan, Vice President of Intel Corporation and Senior Managing Director of Intel Capital, in a statement. “Over the past few years, Altia Systems has collaborated with Intel to use AI and to deliver more intelligent conference rooms and business meetings. This helps customers make better decisions, automate workflows and improve business efficiency. We are excited to work with GN to further scale this technology on a global basis.”

We have seen a lot of applications of AI in just about every area of technology, but one of the less talked about, and very interesting, areas has been how it's used to enhance audio on digital networks. Pindrop, as one example, is creating and tracking "audio fingerprints" for security applications, specifically fraud prevention — authenticating users and helping weed out imposters based not just on the actual voice but on all the other aural cues we may not pick up as humans, which can help build a picture of a caller's location and so on.

GN, meanwhile, has been building AI-based algorithms to help those who cannot hear as well, or who simply need to hear better, listen to calls on digital networks and make out what's being said. This requires not only technology to optimize the audio quality, but also algorithms that can tailor that quality to a specific person's unique hearing needs.

One of the more obvious applications of services like these is for people who are hard of hearing and use hearing aids (which can be awful or impossible to use with mobile phones); another is in call centers, and this appears to be the area GN is hoping to address with the Altia acquisition.

GN already offers two products for call center workers, Jabra and BlueParrot — headsets and speakerphones with their own proprietary software that it claims makes workers more efficient and productive simply by making it easier to understand what callers are saying.

Altia will be integrated into that solution to expand it to include videoconferencing around unified communications solutions, creating more natural experiences for those who are not actually in physical rooms together.

“Combining GN Audio’s sound expertise, partner eco-system and global channel access with the video technology from Altia Systems, we will take the experience of conference calls to a completely new level,” said René Svendsen-Tune, President and CEO of GN Audio, in a statement.

What's notable is that GN is a vertically integrated company, building not just hardware but the software that runs on it. The AI engine underpinning some of its software development will now be fed a vast new trove of data by way of the PanaCast solution: not just video, but the large amount of audio that will naturally come along with it.

“Combining PanaCast’s immersive, intelligent video with GN Audio’s intelligent audio solutions will enable us to deliver a whole new class of collaboration products for our customers,” said Aurangzeb Khan, President and CEO of Altia Systems, in a statement. “PanaCast’s solutions enable companies to improve meeting participants’ experience, automate workflows, and enhance business efficiency and real estate utilization with data lakes of valid information.”

Given GN’s work with Android and iOS devices, it will be interesting to see how and if these video solutions make their way to those platforms as well, either by way of solutions that work on their phones or perhaps more native integrations down the line.

Regardless of how that develops, what’s clear is that there remains a market not just for basic tools to get work done, but technology to improve the quality of those tools, and that’s where GN hopes it will resonate with this deal.

Jul 08, 2016
--

Siris Capital to buy Polycom for $2B in cash, Polycom cancels its $1.96B Mitel merger

Here’s an interesting twist in one of the bigger enterprise acquisition stories of 2016. After Mitel earlier this year announced that it would acquire Polycom for $1.96 billion and consolidate the two companies’ enterprise communication businesses, today private equity firm Siris Capital has come in with a higher offer: it has agreed to acquire Polycom for $2 billion in cash and…