Oct
08
2020
--

Headroom, which uses AI to supercharge videoconferencing, raises $5M

Videoconferencing has become a cornerstone of how many of us work these days — so much so that one leading service, Zoom, has graduated into verb status because of how much it’s getting used.

But does that mean videoconferencing works as well as it should? Today, a new startup called Headroom is coming out of stealth, tapping into a battery of AI tools — computer vision, natural language processing and more — on the belief that the answer to that question is a clear — no bad Wi-Fi interruption here — “no.”

Headroom not only hosts videoconferences, but also provides transcripts, summaries with highlights, gesture recognition, optimized video quality and more, and today it’s announcing that it has raised a seed round of $5 million as it gears up to launch its freemium service into the world.

You can sign up for the waitlist to pilot it and get other updates here.

The funding is coming from Anna Patterson of Gradient Ventures (Google’s AI venture fund); Evan Nisselson of LDV Capital (a specialist VC backing companies building visual technologies); Yahoo founder Jerry Yang, now of AME Cloud Ventures; Ash Patel of Morado Ventures; Anthony Goldbloom, the co-founder and CEO of Kaggle.com; and Serge Belongie, Cornell Tech associate dean and professor of Computer Vision and Machine Learning.

It’s an interesting group of backers, but that might be because the founders themselves have a pretty illustrious background with years of experience using some of the most cutting-edge visual technologies to build other consumer and enterprise services.

Julian Green — a British transplant — was most recently at Google, where he ran the company’s computer vision products, including the Cloud Vision API that was launched under his watch. He came to Google by way of its acquisition of his previous startup Jetpac, which used deep learning and other AI tools to analyze photos to make travel recommendations. In a previous life, he was one of the co-founders of Houzz, another kind of platform that hinges on visual interactivity.

Russian-born Andrew Rabinovich, meanwhile, spent the last five years at Magic Leap, where he was the head of AI, and before that, the director of deep learning and the head of engineering. Before that, he too was at Google, as a software engineer specializing in computer vision and machine learning.

You might think that leaving their jobs to build an improved videoconferencing service was an opportunistic move, given the huge surge of use that the medium has had this year. Green, however, tells me that they came up with the idea and started building it at the end of 2019, when the term “COVID-19” didn’t even exist.

“But it certainly has made this a more interesting area,” he quipped, adding that it did make raising money significantly easier, too. (The round closed in July, he said.)

It’s also curious that the pair decided to strike out on their own to build Headroom rather than pitch the idea at their respective previous employers. Magic Leap had long been in limbo — AR and VR have proven incredibly tough to build businesses around, especially in the short to medium term, even for a startup with hundreds of millions of dollars in VC backing — and could probably have used some more interesting ideas to pivot to. And Google is Google, with seemingly everything in tech having an endpoint in Mountain View.

Green said the reasons were two-fold. The first has to do with the efficiency of building something when you are small. “I enjoy moving at startup speed,” he said.

And the second has to do with the challenge of building on legacy platforms, versus starting fresh from the ground up.

“Google can do anything it wants,” he replied when I asked why he didn’t think of bringing these ideas to the team working on Meet (or Hangouts if you’re a non-business user). “But to run real-time AI on video conferencing, you need to build for that from the start. We started with that assumption,” he said.

All the same, the reasons why Headroom is interesting are also likely to pose its biggest challenges. The new ubiquity of video calling (and our present lives working at home) might make us more open to the medium, but for better or worse, we’re also now pretty used to the tools we already have. And many companies have now paid for premium tiers of one service or another, so they may be reluctant to try newer, less-tested platforms.

But as we’ve seen in tech so many times, sometimes it pays to be a late mover, and the early movers are not always the winners.

The first iteration of Headroom will include features that will automatically take transcripts of the whole conversation, with the ability to use the video replay to edit the transcript if something has gone awry; offer a summary of the key points that are made during the call; and identify gestures to help shift the conversation.

And Green tells me that they are already working on features to be added in future iterations. When a videoconference uses supplementary presentation materials, those can also be processed by the engine for highlights and transcription.

And another feature will optimize the pixels that you see for much better video quality, which should come in especially handy when you or the person/people you are talking to are on poor connections.

“You can understand where and what the pixels are in a video conference and send the right ones,” he explained. “Most of what you see of me and my background is not changing, so those don’t need to be sent all the time.”
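
Green didn’t spell out the pipeline, but the underlying idea (detect which regions of a frame actually changed, and skip the rest) is easy to sketch. Here is a minimal, purely illustrative block-difference check in Python with OpenCV; it is a sketch of the general technique, not Headroom’s actual method:

```python
import cv2

def changed_blocks(prev_frame, curr_frame, block=16, threshold=8.0):
    """Return top-left corners of block x block tiles whose content
    changed enough to be worth re-encoding and sending."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)
    h, w = diff.shape
    moving = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # Static background tiles have near-zero mean difference
            # and are skipped entirely.
            if diff[y:y + block, x:x + block].mean() > threshold:
                moving.append((x, y))
    return moving
```

A production system would pair this with segmentation so it knows which pixels are the speaker rather than just which ones moved, but the bandwidth logic Green describes is the same: send deltas, not full frames.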

All of this taps into some of the more interesting aspects of sophisticated computer vision and natural language algorithms. Creating a summary, for example, relies on technology that can suss out not just what you are saying, but which parts of what you or someone else says matter most.
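
Headroom hasn’t published how its summarizer works, but the simplest version of “find the important parts” is extractive: score each sentence by how much high-signal vocabulary it contains and keep the top few. A naive, purely illustrative sketch in Python (the stopword list and scoring here are stand-ins, not Headroom’s models):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is",
             "it", "that", "we", "you", "i", "so", "for", "on"}

def summarize(transcript, num_sentences=3):
    """Naive extractive summary: rank sentences by the document-wide
    frequency of their content words, then return the winners in
    their original speaking order."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    words = re.findall(r"[a-z']+", transcript.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    return [s for s in sentences if s in top]
```

Modern products lean on neural abstractive models instead, but the hard problem is the one described above: deciding which spans carry the meaning.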

And if you’ve ever been on a video call and found it hard to make clear that you want to say something without outright interrupting the speaker, you’ll understand why gestures might be very useful.

But they can also come in handy if a speaker wants to know if he or she is losing the attention of the audience: The same tech that Headroom is using to detect gestures for people keen to speak up can also be used to detect when they are getting bored or annoyed and pass that information on to the person doing the talking.
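
Headroom’s own models aren’t public, but off-the-shelf pose estimation shows how a raised-hand signal can be picked up from video. A minimal sketch using Google’s open-source MediaPipe library, with a simple wrist-above-nose heuristic standing in for a trained gesture classifier:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
pose = mp_pose.Pose(static_image_mode=False)

def hand_raised(frame_bgr):
    """Flag a participant who appears to be raising a hand. Image
    y-coordinates grow downward, so a wrist above the nose has a
    smaller y value."""
    results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return False
    lm = results.pose_landmarks.landmark
    nose_y = lm[mp_pose.PoseLandmark.NOSE].y
    return (lm[mp_pose.PoseLandmark.LEFT_WRIST].y < nose_y or
            lm[mp_pose.PoseLandmark.RIGHT_WRIST].y < nose_y)
```

Detecting boredom or annoyance works the same way in principle, just with face landmarks and a classifier for gaze and expression instead of a wrist heuristic.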

“It’s about helping with EQ,” he said, with what I’m sure was a little bit of his tongue in his cheek, but then again we were on a Google Meet, and I may have misread that.

And that brings us to why Headroom is tapping into an interesting opportunity. At their best, when they work, tools like these not only supercharge videoconferences, but they have the potential to solve some of the problems you may have come up against in face-to-face meetings, too. Building software that actually might be better than the “real thing” is one way of making sure that it can have staying power beyond the demands of our current circumstances (which hopefully won’t be permanent circumstances).

May
13
2020
--

FortressIQ snags $30M Series B to streamline processes with AI-fueled data

As we move through life in the pandemic, companies are being forced to review and understand how workflows happen. How do you distribute laptops to your workforce? How do you make sure everyone has the correct tool set? FortressIQ, a startup that wants to help companies use data to understand and improve internal processes, announced a $30 million Series B investment today.

M12, Microsoft’s venture fund, and Tiger Global Management led the round with help from previous investors Boldstart Ventures, Comcast Ventures, Eniac Ventures and Lightspeed Venture Partners. The company has now raised almost $65 million, according to PitchBook data.

As the product has matured, founder and CEO Pankaj Chowdhry says its focus has shifted a bit. Whereas before it was primarily about using computer vision to understand workflows, customers are now using that data to help drive their own internal transformations.

That used to require a high-priced consulting team to pull off, but FortressIQ is trying to use software, data and artificial intelligence to replace the consultants and expose processes that could be improved.

“We’re building this kind of cool computer vision to help with process discovery, mostly in the automation space to help you automate processes. But what we’ve seen is people leveraging our data to drive transformation strategies, of which automation ends up being a pretty small component,” Chowdhry explained.

He said that this is helping define new ways of using the tool they hadn’t considered when they first started the company. “If you think about it, we can use analytics to drive better experiences, better training, all of that. We’ve seen how customers are driving overall improvement strategies by leveraging the data coming out of this system,” he said.

The company currently has 65 employees, but he couldn’t commit to a future number at this point because of the uncertainty in the economy. He knows he wants to hire, but he’s not sure what that will look like. He said they used to revisit hiring every six months. Now it’s every six weeks, and so they keep having to reevaluate based on an ever-shifting set of conditions.

Chowdhry believes that companies will need to be more agile moving forward to react more quickly to changing circumstances beyond the current crisis, and he thinks that’s going to require solid business relationships to pull off.

“I think the idea is to be leveraging this time to build that relationship with your customers so as they do start looking at what are they going to do and where they need to be invested in the business, that we’ve got both the data and the infrastructure to help them do that.”

Apr
02
2019
--

Edgybees’s new developer platform brings situational awareness to live video feeds

San Diego-based Edgybees today announced the launch of Argus, its API-based developer platform that makes it easy to add augmented reality features to live video feeds.

The company has long used this capability to run its own drone platform for first responders and enterprise customers, which lets users tag and track objects and people in emergency situations to build better situational awareness.

I first saw a demo of the service a year ago, when the team walked a group of journalists through a simulated emergency, with live drone footage and an overlay of a street map and the location of ambulances and other emergency personnel. It’s clear how these features could be used in other situations as well, given that few companies have the expertise to combine the video footage, GPS data and other information, including geographic information systems, for their own custom projects.
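
The core registration problem in a demo like that is mapping a GPS fix onto the right pixel of a georeferenced frame. For a small, roughly flat area viewed from above it reduces to linear interpolation; the helper below is purely hypothetical and not Edgybees’ API:

```python
def gps_to_pixel(lat, lon, bounds, frame_w, frame_h):
    """Map a GPS coordinate onto a north-up, georeferenced video frame.

    bounds = (lat_top, lon_left, lat_bottom, lon_right) of the area the
    frame covers; adequate for small areas where the earth's curvature
    can be ignored. Hypothetical helper for illustration only.
    """
    lat_top, lon_left, lat_bottom, lon_right = bounds
    x = (lon - lon_left) / (lon_right - lon_left) * frame_w
    y = (lat_top - lat) / (lat_top - lat_bottom) * frame_h
    return int(x), int(y)

# Place an ambulance marker on a 1920x1080 frame covering a small
# (made-up) patch of ground:
x, y = gps_to_pixel(32.7158, -117.1612,
                    (32.7200, -117.1700, 32.7100, -117.1500),
                    1920, 1080)
```

Real drone footage adds camera tilt, lens distortion and a moving platform, which is exactly the hard part few companies have the expertise to handle.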

Indeed, that’s what inspired the team to open up its platform. As the Edgybees team told me during an interview at the OurCrowd Summit last month, it’s impossible for the company to build a new solution for every vertical that could make use of it. So instead of even trying (though it’ll keep refining its existing products), it’s opening the underlying technology to outside developers.

“The potential for augmented reality beyond the entertainment sector is endless, especially as video becomes an essential medium for organizations relying on drone footage or CCTV,” said Adam Kaplan, CEO and co-founder of Edgybees. “As forward-thinking industries look to make sense of all the data at their fingertips, we’re giving developers a way to tailor our offering and set them up for success.”

In the run-up to today’s launch, the company has already worked with organizations like the PGA to use its software to enhance the live coverage of its golf tournaments.

Apr
25
2018
--

Allegro.AI nabs $11M for ‘deep learning as a service’, for businesses to build computer vision products

Artificial intelligence, and its application across nearly every aspect of our lives, is shaping up to be one of the major step changes in modern society. Today, a startup that wants to help other companies capitalize on AI’s advances is announcing funding and emerging from stealth mode.

Allegro.AI, which has built a deep learning platform that companies can use to build and train computer-vision-based technologies — from self-driving car systems through to security, medical and any other services that require a system to read and parse visual data — is today announcing that it has raised $11 million in funding, as it prepares for a full-scale launch of its commercial services later this year after running pilots and working with early users in a closed beta.

The round may not be huge by today’s startup standards, but the presence of strategic investors speaks to the interest the startup has sparked and the gap in the market for what it is offering. It includes MizMaa Ventures, a Chinese fund focused on investing in Israeli startups, along with participation from Robert Bosch Venture Capital GmbH (RBVC), Samsung Catalyst Fund and Israeli fund Dynamic Loop Capital. Other investors (the $11 million actually covers more than one round) are not being disclosed.

Nir Bar-Lev, the CEO and cofounder (Moses Guttmann, another cofounder, is the company’s CTO; and the third cofounder, Gil Westrich, is the VP of R&D), started Allegro.AI first as Seematics in 2016 after he left Google, where he had worked in various senior roles for over 10 years. It was partly that experience that led him to the idea that with the rise of AI, there would be an opportunity for companies that could build a platform to help other less AI-savvy companies build AI-based products.

“We’re addressing a gap in the industry,” he said in an interview. Although there are a number of services, for example Rekognition from Amazon’s AWS, which allow a developer to ping a database by way of an API to provide analytics and some identification of a video or image, these are relatively basic and couldn’t be used to build and “teach” full-scale navigation systems, for example.
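
For reference, that kind of basic call looks roughly like this with AWS’s boto3 SDK: one image in, a flat list of labels out, with no way to train or extend the underlying model:

```python
import boto3

rekognition = boto3.client("rekognition")

# Single-shot label detection on one image: handy analytics, but not
# a foundation for building and "teaching" a navigation system.
with open("frame.jpg", "rb") as f:
    response = rekognition.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=10,
        MinConfidence=80.0,
    )

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```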

“An ecosystem doesn’t exist for anything deep-learning based.” Every company that wants to build something would have to invest 80-90 percent of its total R&D resources on infrastructure before getting to the many other aspects of building a product, he said, which might also include the hardware and the applications themselves. “We’re providing this so that the companies don’t need to build it.”

Instead, the research scientists who buy into the Allegro.AI platform — it’s not intended for non-technical users (not now, at least) — can concentrate on overseeing projects and considering strategic applications and other aspects of their projects. He says that currently, its direct target customers are tech companies and others that rely heavily on tech, “but are not the Googles and Amazons of the world.”

Indeed, companies like Google, AWS, Microsoft, Apple and Facebook have all made major inroads into AI, and in one way or another each has a strong interest in enterprise services and may already be hosting a lot of data in their clouds. But Bar-Lev believes that companies ultimately will be wary to work with them on large-scale AI projects:

“A lot of the data that’s already on their cloud is data from before the AI revolution, before companies realized that the asset today is data,” he said. “If it’s there, it’s there and a lot of it is transactional and relational data.

“But what’s not there is all the signal-based data, all of the data coming from computer vision. That is not on these clouds. We haven’t spoken to a single automotive [company] that is sharing that with these cloud providers. They are not even sharing it with their OEMs. I’ve worked at Google, and I know how companies are afraid of them. These companies are terrified of tech companies like Amazon and so on eating them up, so if they can now stop and control their assets they will do that.”

Customers have the option of working with Allegro either as a cloud or on-premise product, or a combination of the two, and this brings up the third reason that Allegro believes it has a strong opportunity. The quantity of data that is collected for image-based neural networks is massive, and in some regards it’s not practical to rely on cloud systems to process that. Allegro’s emphasis is on building computing at the edge to work with the data more efficiently, which is one of the reasons investors were also interested.

“AI and machine learning will transform the way we interact with all the devices in our lives, by enabling them to process what they’re seeing in real time,” said David Goldschmidt, VP and MD at Samsung Catalyst Fund, in a statement. “By advancing deep learning at the edge, Allegro.AI will help companies in a diverse range of fields—from robotics to mobility—develop devices that are more intelligent, robust, and responsive to their environment. We’re particularly excited about this investment because, like Samsung, Allegro.AI is committed not just to developing this foundational technology, but also to building the open, collaborative ecosystem that is necessary to bring it to consumers in a meaningful way.”

Allegro.AI is not the first company with hopes of providing AI and deep learning as a service to the enterprise world: Element.AI out of Canada is another startup that is being built on the premise that most companies know they will need to consider how to use AI in their businesses, but lack the in-house expertise or budget (or both) to do that. Until the wider field matures and AI know-how becomes something anyone can buy off-the-shelf, it’s going to present an interesting opportunity for the likes of Allegro and others to step in.


Mar
01
2018
--

Microsoft advances several of its hosted artificial intelligence algorithms

Microsoft Cognitive Services is home to the company’s hosted artificial intelligence algorithms. Today, the company announced advances to several Cognitive Services tools, including Microsoft Custom Vision Service, the Face API and Bing Entity Search. Joseph Sirosh, who leads Microsoft’s cloud AI efforts, defined Microsoft Cognitive Services in a company blog post announcing…

Nov
22
2017
--

AWS ramps up in AI with new consultancy services and Rekognition features

Ahead of next week’s big Re:invent conference for Amazon’s AWS division, the company has announced two developments in the area of artificial intelligence. AWS is opening a machine learning lab, the ML Solutions Lab, to pair Amazon machine learning experts with customers. And it’s releasing new features within Amazon Rekognition, Amazon’s deep learning-based image recognition platform.

Nov
14
2017
--

iUNU aims to build cameras on rails for growers to keep track of their crop health

You’ve probably spent a lot of time keeping track of your plants and all the minor details, like the coloration of the leaves, in order to make sure they’re healthy — but for professional growers in greenhouses, this means keeping track of thousands of plants all at once. That can get out of hand really quickly, as it could involve just walking through a greenhouse with an…

Sep
18
2017
--

Goodbye, photo studios. Hello, colormass virtual photoshoots

Berlin-based colormass, one of the startups presenting today at TechCrunch Disrupt as part of the Battlefield, has developed a platform that lets you recreate an IKEA-style experience for your own merchandise: highly realistic, but digitally manipulated 3D facsimiles.

Sep
07
2017
--

Canvas’ robot cart could change how factories work

We stopped by Andy Rubin’s Playground in Palo Alto to check out a new autonomous cart from Canvas Technologies. The startup aims to replace existing fixed and expensive factory infrastructure, like conveyor belts, with its lightweight and adaptable computer-vision-powered cart.

Mar
25
2017
--

Matroid can watch videos and detect anything within them

If a picture is worth a thousand words, a video is worth that times the frame rate. Matroid, a computer vision startup launching out of stealth today, enables anyone to take advantage of the information inherently embedded in video. You can build your own detector within the company’s intuitive, non-technical web platform to detect people and most other objects. Reza Zadeh, founder…
