May 03, 2018

SoundHound has raised a big $100M round to take on Alexa and Google Assistant

As SoundHound looks to leverage its ten-plus years of experience and data to create a voice recognition tool that companies can bake into any platform, it's raising another $100 million round of funding to position its Houndify platform as a neutral third option alongside Alexa and Google Assistant.

While Amazon works to get developers to adopt Alexa, SoundHound has been collecting data since it started as an early mobile app for the iPhone and Android devices. That's given it more than a decade of data to work with as it tries to build a robust audio recognition engine and connect it to a system that can handle dozens of different queries and options tied to those sounds. The result was always a better SoundHound app, but the company has increasingly started to open up that technology to developers and show that it's more powerful (and accurate) than the rest of the voice assistants on the market, in hopes they'll use it in their services.

“We launched [Houndify] before Google and Amazon,” CEO Keyvan Mohajer said. “Obviously, good ideas get copied, and Google and Amazon have copied us. Amazon has the Alexa fund to invest in smaller companies and bribe them to adopt the Alexa Platform. Our reaction to that was, we can’t give $100 million away, so we came up with a strategy which was the reverse. Instead of us investing in smaller companies, let’s go after big successful companies that will invest in us to accelerate Houndify. We think it’s a good strategy. Amazon would be betting on companies that are not yet successful, we would bet on companies that are already successful.”

This round is all coming in from strategic investors. Part of the reason is that taking on these strategic investments allows SoundHound to capture important partnerships that it can leverage to get wider adoption for its technology. The companies investing, too, have a stake in SoundHound's success and will want to see it adopted wherever possible. The strategic investors include Tencent Holdings Limited, Daimler AG, Hyundai Motor Company, Midea Group, and Orange S.A. SoundHound already has a number of strategic investors that include Samsung, NVIDIA, KT Corporation, HTC, Naver, LINE, Nomura, Sompo, and Recruit. It's a ridiculously long list, but again, the company is trying to get that technology baked in wherever it can.

So it's pretty easy to see what SoundHound is going to get out of this: access to China through partners, deeper integration into cars, as well as increased expansion into other avenues through all of its investors. Mohajer said the company could try to get into China on its own (or ignore it altogether), but very few foreign companies have had any success there whatsoever. Google and Facebook, two of the largest technology companies in the world, are not on that list of successes.

“China is a very important market, it’s very big and has a lot of potential, and it’s growing,” Mohajer said. “You can go to Canada without having to rethink a big strategy, but China is so different. We saw even companies like Google and Facebook tried to do that and didn’t succeed. When those bigger companies didn’t succeed, it was a signal to us that strategy wouldn’t work. [Tencent] was looking at the space and they saw we have the best technology in the world. They appreciated it and were respectful, they helped us get there. We looked at so many partners and [Tencent and Midea Group] were the ones that worked out.”

The idea here is that developers in all sorts of different markets — whether that’s cars or apps — will want to have some element of voice interaction. SoundHound is betting that companies like Daimler will want to control the experience in their cars, and not be saying “Alexa” whenever they want to make a request while driving. Instead, it may come down to something as simple as a wake word that could change the entire user experience, and that’s why SoundHound is pitching Houndify as a flexible and customizable option that isn’t demanding a brand on top of it.
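The wake-word customization SoundHound is pitching can be pictured with a minimal sketch. This is purely illustrative, not Houndify's actual API: the `wake_word` parameter and `handle_query` callback are hypothetical names, and a real detector would run on the audio stream rather than on transcribed text.

```python
def make_wake_word_listener(wake_word, handle_query):
    """Return a callback that reacts only to transcripts starting with
    the brand's own wake word (e.g. "Mercedes" instead of "Alexa")."""
    prefix = wake_word.lower().strip()

    def on_transcript(text):
        words = text.lower().split()
        if not words or words[0].rstrip(",") != prefix:
            return None  # not addressed to this assistant; ignore
        # Strip the wake word and forward the remaining query.
        query = text.split(None, 1)[1] if len(words) > 1 else ""
        return handle_query(query)

    return on_transcript

# A car maker could brand the experience with its own wake word:
listener = make_wake_word_listener("Mercedes", lambda q: f"handling: {q}")
```

The point of the sketch is that the wake word is just configuration, which is exactly the flexibility a white-label platform can offer and a branded assistant cannot.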

SoundHound still does have its stable of apps. The original SoundHound app is around, though those features are also baked into Hound, its main consumer app. That is more of a personal assistant-style voice recognition service where you can string together a sentence of as many as a dozen parameters and get a decent search result back. It’s more of a party trick than anything else, but it is a good demonstration of the technical capabilities SoundHound has as it looks to embed that software into lots of different pieces of hardware and software.

SoundHound may have raised a big round with a fresh set of strategic partners, but that certainly doesn’t mean it’s a surefire bet. Amazon is, after all, one of the most valuable companies in the world and Alexa has proven to be a very popular platform, even if it’s mostly for nominal requests and listening to music (and party tricks) at this point. SoundHound is going to have to convince companies — small and large — to bake in its tools, rather than go with massive competitors like Amazon with pockets deep enough to buy a whole grocery chain.

“We think every company is going to need to have a strategy in voice AI, just like ten years ago everyone needed a mobile strategy,” Mohajer said. “Everyone should think about it. There aren’t many providers, mainly because it takes a long time to build the core technology. It took us 12 years. To Houndify everything we need to be global, we need to support all the main languages and regions in the world. We built the technology to be language independent, but there’s a lot of resources and execution involved.”

Apr 30, 2018

Suki raises $20M to create a voice assistant for doctors

When trying to figure out what to do after an extensive career at Google, Motorola, and Flipkart, Punit Soni decided to spend a lot of time sitting in doctors’ offices.

It was there that Soni said he identified one of the most annoying pain points for doctors in any office: writing down notes and documentation. That’s why he decided to start Suki — previously Robin AI — to create a way for doctors to simply talk aloud to take notes when working with patients, rather than having to type everything into a medical record system or write those notes down by hand. That seemed like the lowest-hanging fruit: an opportunity to make life significantly easier for doctors who see dozens of patients, he said.

“We decided we had found a powerful constituency who were burning out because of just documentation,” Soni said. “They have underlying EMR systems that are much older in design. The solution aligns with the commoditization of voice and machine learning. If you put it all together, if we can build a system for doctors and allow doctors to use it in a relatively easy way, they’ll use it to document all the interactions they do with patients. If you have access to all data right from a horse’s mouth, you can use that to solve all the other problems on the health stack.”

The company said it has raised a $15 million funding round led by Venrock, with participation from First Round, Social Capital, Nat Turner of Flatiron Health, Marc Benioff, and other individual Googlers and angels. Venrock also previously led a $5 million seed financing round, bringing the company’s total funding to around $20 million. It’s also changing its name from Robin AI to Suki, though the reason is actually a pretty simple one: “Suki” is a better wake word for a voice assistant than “Robin” because odds are there’s someone named Robin in the office.

The challenge for a company like Suki is not actually the voice recognition part. Indeed, that’s why Soni said they are actually starting a company like this today: voice recognition is commoditized. Trying to start a company like Suki four years ago would have meant having to build that kind of technology from scratch, but thanks to incredible advances in machine learning over just the past few years, startups can quickly move on to the core business problems they hope to solve rather than focusing on early technical challenges.

Instead, Suki’s problem is one of understanding language. It has to ingest everything that a doctor is saying, parse it, and figure out what goes where in a patient’s documentation. That problem is even more complex because each doctor has a different way of documenting their work with a patient, meaning Suki has to take extra care in building a system that can scale to any number of doctors. As with any company, the more data it collects over time, the better those results get, and the more defensible the business becomes.

“Whether you bring up the iOS app or want to bring it in a website, doctors have it in the exam room,” Soni said. “You can say, ‘Suki, make sure you document this, prescribe this drug, and make sure this person comes back to me for a follow-up visit.’ It takes all that, it captures it into a clinically comprehensive note and then pushes it to the underlying electronic medical record. [Those EMRs] are the system of record, it is not our job to day-one replace these guys. Our job is to make sure doctors and the burnout they are having is relieved.”
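One way to picture the parsing problem described above is a slot-filler that routes each dictated clause into a section of the note. This is a toy sketch under loud assumptions: the section names and keyword table are invented for illustration, and Suki's actual pipeline (which is not public) would rely on trained language models, not keyword matching.

```python
import re

# Hypothetical clause keywords mapped to invented note sections; a real
# system would use a trained model rather than keyword matching.
SECTION_KEYWORDS = {
    "document": "note",
    "prescribe": "prescriptions",
    "follow-up": "follow_up",
    "follow up": "follow_up",
}

def parse_dictation(utterance):
    """Split a dictated command into clauses and file each one under
    the note section its keyword suggests; unmatched clauses go to 'other'."""
    note = {"note": [], "prescriptions": [], "follow_up": [], "other": []}
    # Split on commas, swallowing an optional leading "and".
    for clause in re.split(r",\s*(?:and\s+)?", utterance):
        clause = clause.strip()
        if not clause:
            continue
        section = next(
            (sec for kw, sec in SECTION_KEYWORDS.items() if kw in clause.lower()),
            "other",
        )
        note[section].append(clause)
    return note
```

Even this crude version shows why per-doctor phrasing is the hard part: every doctor would need their own keyword table, which is exactly the scaling problem a learned system is meant to solve.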

Given that voice recognition is commoditized, there will likely be others looking to build a scribe for doctors as well. There are startups like Saykara looking to do something similar, and in these situations it often seems like the companies that are able to capture the most data first are able to become the market leaders. And there’s also a chance that a larger company — like Amazon, which has made its interest in healthcare already known — may step in with its comprehensive understanding of language and find its way into the doctors’ office. Over time, Soni hopes that as it gets more and more data, Suki can become more intelligent and more than just a simple transcription service.

“You can see this arc where you’re going from an Alexa, to a smarter form of a digital assistant, to a device that’s a little bit like a chief resident of a doctor,” Soni said. “You’ll be able to say things like, ‘Suki, pay attention,’ and all it needs to do is listen to your conversation with the patient. I’m not building a medical transcription company. I’m basically trying to build a digital assistant for doctors.”

Apr 02, 2018

Apple, in a very Apple move, is reportedly working on its own Mac chips

Apple is planning to use its own chips for its Mac devices, which could replace the Intel chips currently powering its desktop and laptop hardware, according to a report from Bloomberg.

Apple already designs a lot of custom silicon, including chipsets like the W-series for its Bluetooth headphones, the S-series in its watches, and the A-series in its iPhones, as well as a customized GPU for the newer iPhones. In that sense, Apple has in a lot of ways built its own internal fabless chip firm, which makes sense as it looks for its devices to tackle more and more specific use cases and remove some of its reliance on third parties for their equipment. Apple is already in the middle of a very public spat with Qualcomm over royalties, and while the Mac is sort of a tertiary product in its lineup, it still contributes a significant portion of revenue to the company.

Creating an entire suite of custom silicon could do a lot of things for Apple, not least of which is bringing the Mac into a system where its devices can talk to each other more efficiently. Apple already has a lot of tools to shift user activities between all its devices, but making that more seamless means it’s easier to lock users into the Apple ecosystem. If you’ve ever compared pairing W1-equipped headphones with an iPhone to pairing typical Bluetooth headphones, you’ve probably seen the difference, and that integration could be even tighter with Apple’s own Mac chips. Bloomberg reports that Apple may start using the chips as soon as 2020.

Intel may be the clear loser here, and the market is reflecting that: Intel’s stock dropped nearly 8% after the report came out, since losing Apple would mark a clear shift away from the architecture on which Intel has long held its ground, toward Apple’s own custom designs. Apple is not the only company looking to design its own silicon, either, with Amazon looking into building its own AI chips for Alexa in another move to create lock-in for the Amazon ecosystem. And while the biggest players are looking at their own architecture, there’s an entire suite of startups getting a lot of funding to build custom silicon geared toward AI.

Apple declined to comment.

Nov 30, 2017

WeWork has big plans for Alexa for Business

Amazon is soon to announce Alexa for Business, and WeWork is one of the first partners to have hopped on the platform. WeWork’s vision is to use technology to help businesses make the most out of their physical space, all while customizing that space to each individual’s personal needs. The co-working giant has been on the Alexa for Business platform for about a month now, as part…

Nov 29, 2017

Amazon is putting Alexa in the office

The interface is evolving. What has long been dominated by screens of all shapes and sizes is now being encroached upon by the voice. And while many companies are building voice interfaces — Apple with Siri, Google with Assistant, and Microsoft with Cortana — none is quite as dominant as Amazon has been with Alexa. At the AWS re:Invent conference, Amazon announced Alexa for…
