Apr 20, 2021 --

Laiye, China’s answer to UiPath, closes $50 million Series C+

Robotic process automation has become buzzy in the last few months. New York-based UiPath is on course to launch an initial public offering after gaining an astounding valuation of $35 billion in February. Over in China, homegrown RPA startup Laiye is making waves as well.

Laiye, which develops software to mimic mundane workplace tasks like keyboard strokes and mouse clicks, announced it has raised $50 million in a Series C+ round. The proceeds came about a year after the Beijing-based company pulled in the first tranche of its Series C round.

Laiye, six years old and led by Baidu veterans, has raised over $130 million to date, according to public information.

Leading investors in the Series C+ round were Ping An Global Voyager Fund, an early-stage strategic investment vehicle of Chinese financial conglomerate Ping An, and Shanghai Artificial Intelligence Industry Equity Investment Fund, a government-backed fund. Other participants included Lightspeed China Partners, Lightspeed Venture Partners, Sequoia China and Wu Capital.

RPA tools are attracting companies looking for ways to automate workflows during COVID-19, which has disrupted office collaboration. But the enterprise tech was already gaining traction prior to the pandemic. As my colleague Ron Miller wrote this month on the heels of UiPath’s S-1 filing:

“The category was gaining in popularity by that point because it addressed automation in a legacy context. That meant companies with deep legacy technology — practically everyone not born in the cloud — could automate across older platforms without ripping and replacing, an expensive and risky undertaking that most CEOs would rather not take.”

In one case, Laiye’s RPA software helped social security workers in the city of Lanzhou speed up their account reconciliation process by 75%; previously, they had to type in pensioners’ information and manually check whether the details were correct.
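As a concrete (and purely illustrative) example of what this kind of keyboard-and-mouse automation involves, here is a minimal sketch of a data-entry bot written with the open-source pyautogui library. It is not Laiye's product; the CSV file name, column names and screen coordinates are invented for the example.

```python
# Illustrative sketch only: a generic desktop-RPA data-entry bot in the spirit
# of the keyboard-and-mouse automation described above. Not Laiye's product;
# the CSV file, its columns and the screen coordinates are made up.
import csv

import pyautogui

FORM_FIELD = (420, 310)      # hypothetical screen position of the first form field
SUBMIT_BUTTON = (640, 520)   # hypothetical position of the submit button

def enter_record(record):
    """Type one pensioner record into the form, field by field."""
    pyautogui.click(*FORM_FIELD)                     # focus the first input box
    pyautogui.write(record["name"], interval=0.03)   # type like a (fast) human
    pyautogui.press("tab")                           # move to the next field
    pyautogui.write(record["account_id"], interval=0.03)
    pyautogui.press("tab")
    pyautogui.write(record["amount"], interval=0.03)
    pyautogui.click(*SUBMIT_BUTTON)                  # submit instead of a human doing it

with open("pension_records.csv", newline="") as f:
    for record in csv.DictReader(f):
        enter_record(record)
```

Real RPA suites layer UI selectors, OCR and error handling on top of this basic keystroke replay, but the core idea is the same: the software repeats the clerk's clicks and keystrokes.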

In another instance, Laiye’s chatbot helped automate the national population census in several southern Chinese cities, freeing census takers from visiting households door-to-door.

Laiye said its RPA enterprise business achieved positive cash flow and its chatbot business turned profitable in the fourth quarter of 2020. Its free-to-use edition has amassed over 400,000 developers, and the company also runs a bot marketplace connecting freelance developers to small businesses with automation needs.

Laiye is expanding its services globally and boasts that its footprint now spans Asia, the United States and Europe.

“Laiye aims to foster the world’s largest developer community for software robots and build the world’s largest bot marketplace in the next three years, and we plan to certify at least one million software robot developers by 2025,” said Wang Guanchun, chair and CEO of Laiye.

“We believe that digital workforce and intelligent automation will reach all walks of life as long as more human workers can be up-skilled with knowledge in RPA and AI”.

May 20, 2020 --

Directly, which taps experts to train chatbots, raises $11M, closes out Series B at $51M

Directly, a startup whose mission is to help build better customer service chatbots by using experts in specific areas to train them, has raised more funding as it opens up a new front to grow its business: APIs and a partner ecosystem that can now also tap into its expert network. Today Directly is announcing that it has added $11 million to close out its Series B at $51 million (it raised $20 million back in January of this year, and another $20 million as part of the Series B back in 2018).

The funding is coming from Triangle Peak Partners and Toba Capital, while its previous investors in the round included strategic backers Samsung NEXT and Microsoft’s M12 Ventures (who are both customers, alongside companies like Airbnb), as well as Industry Ventures, True Ventures, Costanoa Ventures and Northgate. (As we reported when covering the initial close, Directly’s valuation at that time was at $110 million post-money, and so this would likely put it at $120 million or higher, given how the business has expanded.)

While chatbots have now been around for years, a key focus in the tech world has been how to help them work better, after initial efforts saw so many disappointing results that it was fair to ask whether they were even worth the trouble.

Directly’s premise is that the most important part of getting a chatbot to work well is to make sure that it’s trained correctly, and its approach to that is very practical: find experts both to troubleshoot questions and provide answers.

As we’ve described before, its platform helps businesses identify and reach out to “experts” in the business or product in question, collect knowledge from them, and then fold that into a company’s AI to help train it and answer questions more accurately. It also looks at the data going into and coming out of those AI systems to figure out what is working, what is not, and how to fix that, too.
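To make that loop concrete, here is a rough, hypothetical sketch of the pattern: expert-submitted answers serve incoming questions, and anything below a confidence threshold is queued back for expert review. This is not Directly's actual system; the sample questions, the threshold and the string-similarity measure are placeholder choices.

```python
# Hedged sketch of the expert-knowledge feedback loop described above.
# Not Directly's system: the Q&A pairs, threshold and similarity metric are invented.
from difflib import SequenceMatcher

expert_answers = {   # question -> (answer, expert who gets the royalty credit)
    "how do i reset my password": ("Use Settings > Security > Reset password.", "alice"),
    "how do i cancel my subscription": ("Go to Billing and choose Cancel plan.", "bob"),
}
needs_expert_review = []   # questions the bot couldn't handle confidently

def answer(question: str, threshold: float = 0.6):
    best_q, best_score = None, 0.0
    for known_q in expert_answers:
        score = SequenceMatcher(None, question.lower(), known_q).ratio()
        if score > best_score:
            best_q, best_score = known_q, score
    if best_score >= threshold:
        reply, expert = expert_answers[best_q]
        return reply, expert                    # deflected: no live agent needed
    needs_expert_review.append(question)        # feed the gap back to the expert network
    return None, None

print(answer("How can I reset my password?"))     # matched to an expert answer
print(answer("Why was my card charged twice?"))   # lands in needs_expert_review
```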

The information is typically collected by way of question-and-answer sessions. Directly compensates experts both for submitting information and by paying out royalties when their knowledge is put to use, “just as you would in traditional copyright licensing in music,” its co-founder Antony Brydon explained to me earlier this year.

It can take as little as 100 experts, but potentially many more, to train a system, depending on how much the information needs to be updated over time. (Directly’s work for Xbox, for example, used 1,000 experts but has to date answered millions of questions.)

Directly’s pitch to customers is that building a better chatbot can help deflect more questions from actual live agents (and subsequently cut operational costs for a business). It claims that, as a result, customer contacts can be reduced by up to 80% and customer satisfaction improved by up to 20%.

What’s interesting is that now Directly sees an opportunity in expanding that expert ecosystem to a wider group of partners, some of which might have previously been seen as competitors. (Not unlike Amazon’s AI powering a multitude of other businesses, some of which might also be in the business of selling the same services that Amazon does.)

The partner ecosystem, as Directly calls it, uses APIs to link into Directly’s platform. Meya, Percept.ai and SmartAction — which themselves provide a range of customer service automation tools — are three of the first users.

“The team at Directly have quickly proven to be trusted and invaluable partners,” said Erik Kalviainen, CEO at Meya, in a statement. “As a result of our collaboration, Meya is now able to take advantage of a whole new set of capabilities that will enable us to deliver automated solutions both faster and with higher resolution rates, without customers needing to deploy significant internal resources. That’s a powerful advantage at a time when scale and efficiency are key to any successful customer support operation.”

The prospect of a bigger business funnel beyond even what Directly was pulling in itself is likely what attracted the most recent investment.

“Directly has established itself as a true leader in helping customers thrive during these turbulent economic times,” said Tyler Peterson, Partner at Triangle Peak Partners, in a statement. “There is little doubt that automation will play a tremendous role in the future of customer support, but Directly is realizing that potential today. Their platform enables businesses to strike just the right balance between automation and human support, helping them adopt AI-powered solutions in a way that is practical, accessible, and demonstrably effective.”

In January, Mike de la Cruz, who took over as CEO at the time of the funding announcement, said the company was gearing up for a larger Series C in 2021. It’s not clear if and how that will be impacted by the current state of the world. But in the meantime, as more organizations are looking for ways to connect with customers outside of channels that might require people to physically visit stores, or for employees to sit in call centres, it presents a huge opportunity for companies like this one.

“At its core, our business is about helping customer support leaders resolve customer issues with the right mix of automation and human support,” said de la Cruz in a statement. “It’s one thing to deliver a great product today, but we’re committed to ensuring that our customers have the solutions they need over the long term. That means constantly investing in our platform and expanding our capabilities, so that we can keep up with the rapid pace of technological change and an unpredictable economic landscape. These new partnerships and this latest expansion of our recent funding round have positioned us to do just that. We’re excited to be collaborating with our new partners, and very thankful to all of our investors for their support.”

Nov 04, 2019 --

Microsoft launches Power Virtual Agents, its no-code bot builder

Microsoft today announced the public preview of Power Virtual Agents, a new no-code tool for building chatbots. It is part of the company’s Power Platform, which also includes Power BI and the Microsoft Flow automation tool, which is being renamed Power Automate today.

Built on top of Azure’s existing AI smarts and tools for building bots, Power Virtual Agents promises to make building a chatbot almost as easy as writing a Word document. With this, anybody within an organization could build a bot that walks a new employee through the onboarding experience, for example.

“Power Virtual Agent is the newest addition to the Power Platform family,” said Microsoft’s Charles Lamanna in an interview ahead of today’s announcement. “Power Virtual Agent is very much focused on the same type of low-code, accessible to anybody, no matter whether they’re a business user or business analyst or professional developer, to go build a conversational agent that’s AI-driven and can actually solve problems for your employees, for your customers, for your partners, in a very natural way.”

Power Virtual Agents handles the full lifecycle of the bot-building experience, from the creation of the dialog to making it available in chat systems that include Teams, Slack, Facebook Messenger and others. Using Microsoft’s AI smarts, users don’t have to spend a lot of time defining every possible question and answer, but can instead rely on the tool to understand intentions and trigger the right action. “We do intent understanding, as well as entity extraction, to go and find the best topic for you to go down,” explained Lamanna. Like similar AI systems, the service also learns over time, based on feedback it receives from users.
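As a rough illustration of that routing idea (intent understanding plus entity extraction picking the best topic), a minimal sketch might look like the following. This is not Microsoft's API; the topics, trigger words and entity patterns are invented for the example.

```python
# Minimal sketch of "intent + entity -> topic" routing, for illustration only.
# Not Microsoft's API: topics, trigger phrases and entity patterns are invented.
import re

TOPICS = {
    "reset_password": {"triggers": {"password", "reset", "locked", "login"}},
    "order_status":   {"triggers": {"order", "shipping", "delivery", "tracking"}},
}

ENTITY_PATTERNS = {
    "order_id": re.compile(r"\b\d{6,}\b"),        # e.g. "my order 1234567"
    "email":    re.compile(r"\b\S+@\S+\.\w+\b"),
}

def route(utterance: str):
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    # Intent understanding: score each topic by overlap with its trigger words.
    topic, score = max(
        ((name, len(words & cfg["triggers"])) for name, cfg in TOPICS.items()),
        key=lambda pair: pair[1],
    )
    # Entity extraction: pull out structured values the dialog can reuse later.
    entities = {name: m.group() for name, pat in ENTITY_PATTERNS.items()
                if (m := pat.search(utterance))}
    return (topic if score else None), entities

print(route("I'm locked out and need a password reset for jo@example.com"))
# -> ('reset_password', {'email': 'jo@example.com'})
```

A production system would use trained classifiers rather than keyword overlap, but the shape of the decision (pick a topic, carry the extracted entities into the dialog) is the same.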

One nice feature here is that if your setup outgrows the no-code/low-code stage and you need to get to the actual code, you’ll be able to convert the bot to Azure resources as that’s what’s powering the bot anyway. Once you’ve edited the code, you obviously can’t take it back into the no-code environment. “We have an expression for Power Platform, which is ‘no cliffs.’ […] The idea of ‘no cliffs’ is that the most common problem with a low-code platform is that, at some point, you want more control, you want code. And that’s frequently where low-code platforms run out of gas and you really have issues because you can’t have the pro dev take it over, you can’t make it mission-critical.”

The service is also integrated with tools like Power Automate/Microsoft Flow to allow users to trigger actions on other services based on the information the chatbot gathers.

Lamanna stressed that the service also generates lots of advanced analytics for those who are building bots with it. With this, users can see what topics are being asked about and where the system fails to provide answers, for example. It also visualizes the different text inputs that people provide so that bot builders can react to that.

Over the course of the last two or three years, we went from a lot of hype around chatbots to deep disillusionment with the experience they actually delivered. Lamanna isn’t fazed by that. In his view, those earlier efforts failed in part because the developers weren’t close enough to the users — they weren’t product experts or part of the HR team inside a company. By using a low-code/no-code tool, he argues, the actual topic experts can build these bots. “If you hand it over to a developer or an AI specialist, they’re geniuses when it comes to developing code, but they won’t know the details and ins and outs of, say, the shoe business — and vice versa. So it actually changes how development happens.”

Aug 01, 2019 --

Dasha AI is calling so you don’t have to

While you’d be hard-pressed to find any startup not brimming with confidence over the disruptive idea they’re chasing, it’s not often you come across a young company as calmly convinced it’s engineering the future as Dasha AI.

The team is building a platform for designing human-like voice interactions to automate business processes. Put simply, it’s using AI to make machine voices a whole lot less robotic.

“What we definitely know is this will definitely happen,” says CEO and co-founder Vladislav Chernyshov. “Sooner or later the conversational AI/voice AI will replace people everywhere where the technology will allow. And it’s better for us to be the first mover than the last in this field.”

“In 2018 in the U.S. alone there were 30 million people doing some kind of repetitive tasks over the phone. We can automate these jobs now or we are going to be able to automate it in two years,” he goes on. “If you multiply it with Europe and the massive call centers in India, Pakistan and the Philippines you will probably have something like close to 120 million people worldwide… and they are all subject for disruption, potentially.”

The New York-based startup has been operating in relative stealth up to now. But it’s breaking cover to talk to TechCrunch — announcing a $2 million seed round, led by RTP Ventures and RTP Global, an early-stage investor that’s backed the likes of Datadog and RingCentral. RTP’s venture arm, also based in New York, writes on its website that it prefers engineer-founded companies that “solve big problems with technology.” “We like technology, not gimmicks,” the fund warns with added emphasis.

Dasha’s core tech right now includes what Chernyshov describes as “a human-level, voice-first conversation modelling engine;” a hybrid text-to-speech engine which he says enables it to model speech disfluencies (aka, the ums and ahs, pitch changes etc. that characterize human chatter); plus “a fast and accurate” real-time voice activity detection algorithm which detects speech in less than 100 milliseconds, meaning the AI can turn-take and handle interruptions in the conversation flow. The platform can also detect a caller’s gender — a feature that can be useful for healthcare use cases, for example.
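To give a sense of what low-latency voice activity detection involves, here is a crude, energy-threshold sketch that flags speech within a frame or two. It is not Dasha's algorithm; the 20 ms frame size and the threshold value are arbitrary choices for illustration.

```python
# Crude energy-threshold voice activity detection, for illustration only.
# Not Dasha's algorithm: frame size and threshold are arbitrary assumptions.
from typing import Optional

import numpy as np

SAMPLE_RATE = 16_000
FRAME_MS = 20                               # 20 ms frames: decisions well under 100 ms
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000  # samples per frame
ENERGY_THRESHOLD = 0.01                     # would be tuned per microphone/phone line

def frames(signal: np.ndarray) -> np.ndarray:
    """Split a mono float signal in [-1, 1] into fixed-size frames."""
    n = len(signal) // FRAME_LEN
    return signal[: n * FRAME_LEN].reshape(n, FRAME_LEN)

def is_speech(frame: np.ndarray) -> bool:
    """Call a frame 'speech' if its mean energy exceeds the threshold."""
    return float(np.mean(frame ** 2)) > ENERGY_THRESHOLD

def barge_in_offset_ms(signal: np.ndarray) -> Optional[int]:
    """Return how many ms into the audio the caller starts talking, if at all."""
    for i, frame in enumerate(frames(signal)):
        if is_speech(frame):
            return i * FRAME_MS   # the agent can stop talking and yield the turn here
    return None
```

Production-grade detectors use statistical or learned models rather than a fixed energy cutoff, but the latency argument is the same: decide frame by frame so the agent can yield the floor almost immediately.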

Another component Chernyshov flags is “an end-to-end pipeline for semi-supervised learning” — so it can retrain the models in real time “and fix mistakes as they go” — until Dasha hits the claimed “human-level” conversational capability for each business process niche. (To be clear, the AI cannot adapt its speech to an interlocutor in real time — as human speakers naturally shift their accents closer to bridge any dialect gap — but Chernyshov suggests it’s on the roadmap.)

“For instance, we can start with 70% correct conversations and then gradually improve the model up to say 95% of correct conversations,” he says of the learning element, though he admits there are a lot of variables that can impact error rates — not least the call environment itself. Even cutting edge AI is going to struggle with a bad line.

The platform also has an open API so customers can plug the conversation AI into their existing systems — be it telephony, Salesforce software or a developer environment, such as Microsoft Visual Studio.

Currently they’re focused on English, though Chernyshov says the architecture is “basically language agnostic” — but does require “a big amount of data.”

The next step will be to open up the dev platform to enterprise customers, beyond the initial 20 beta testers, which include companies in the banking, healthcare and insurance sectors — with a release slated for later this year or Q1 2020.

Test use cases so far include banks using the conversation engine for brand loyalty management, running customer satisfaction surveys that can turn around negative feedback by fast-tracking a response to a bad rating — by providing (human) customer support agents with an automated categorization of the complaint so they can follow up more quickly. “This usually leads to a wow effect,” says Chernyshov.

Ultimately, he believes there will be two or three major AI platforms globally providing businesses with an automated, customizable conversational layer — sweeping away the patchwork of chatbots currently filling in the gap. And, of course, Dasha intends their “Digital Assistant Super Human Alike” to be one of those few.

“There is clearly no platform [yet],” he says. “Five years from now this will sound very weird that all companies now are trying to build something. Because in five years it will be obvious — why do you need all this stuff? Just take Dasha and build what you want.”

“This reminds me of the situation in the 1980s when it was obvious that the personal computers are here to stay because they give you an unfair competitive advantage,” he continues. “All large enterprise customers all over the world… were building their own operating systems, they were writing software from scratch, constantly reinventing the wheel just in order to be able to create this spreadsheet for their accountants.

“And then Microsoft with MS-DOS came in… and everything else is history.”

That’s not all they’re building, either. Dasha’s seed financing will be put toward launching a consumer-facing product atop its B2B platform to automate the screening of recorded message robocalls. So, basically, they’re building a robot assistant that can talk to — and put off — other machines on humans’ behalf.

Which does kind of suggest the AI-fueled future will entail an awful lot of robots talking to each other…

Chernyshov says this B2C call-screening app will most likely be free. But then if your core tech looks set to massively accelerate a non-human caller phenomenon that many consumers already see as a terrible plague on their time and mind, then providing free relief — in the form of a counter AI — seems the very least you should do.

Not that Dasha can be accused of causing the robocaller plague, of course. Recorded messages hooked up to call systems have been spamming people with unsolicited calls for far longer than the startup has existed.

Dasha’s PR notes Americans were hit with 26.3 billion robocalls in 2018 alone — up “a whopping” 46% on 2017.

Its conversation engine, meanwhile, has only made some 3 million calls to date, clocking its first call with a human in January 2017. But the goal from here on in is to scale fast. “We plan to aggressively grow the company and the technology so we can continue to provide the best voice conversational AI to a market which we estimate to exceed $30 billion worldwide,” runs a line from its PR.

After the developer platform launch, Chernyshov says the next step will be to open access to business process owners by letting them automate existing call workflows without needing to be able to code (they’ll just need an analytic grasp of the process, he says).

Later — pegged for 2022 on the current roadmap — will be the launch of “the platform with zero learning curve,” as he puts it. “You will teach Dasha new models just like typing in a natural language and teaching it like you can teach any new team member on your team,” he explains. “Adding a new case will actually look like a word editor — when you’re just describing how you want this AI to work.”

His prediction is that a majority — circa 60% — of all major cases that businesses face — “like dispatching, like probably upsales, cross sales, some kind of support etc., all those cases” — will be able to be automated “just like typing in a natural language.”

So if Dasha’s AI-fueled vision of voice-based business process automation comes to fruition, then humans getting orders of magnitude more calls from machines looks inevitable — as machine learning supercharges artificial speech by making it sound slicker, act smarter and seem, well, almost human.

But perhaps a savvier generation of voice AIs will also help manage the “robocaller” plague by offering advanced call screening? And as non-human voice tech marches on from dumb recorded messages to chatbot-style AIs running on scripted rails to — as Dasha pitches it — fully responsive, emoting, even emotion-sensitive conversation engines that can slip right under the human radar, maybe the robocaller problem will eat itself? I mean, if you didn’t even realize you were talking to a robot how are you going to get annoyed about it?

Dasha claims 96.3% of the people who talk to its AI “think it’s human,” though it’s not clear on what sample size the claim is based. (To my ear there are definite “tells” in the current demos on its website. But in a cold-call scenario it’s not hard to imagine the AI passing, if someone’s not paying much attention.)

The alternative scenario, in a future infested with unsolicited machine calls, is that all smartphone OSes add kill switches, such as the one in iOS 13 — which lets people silence calls from unknown numbers.

And/or more humans simply never pick up phone calls unless they know who’s on the end of the line.

So it’s really doubly savvy of Dasha to create an AI capable of managing robot calls — meaning it’s building its own fallback — a piece of software willing to chat to its AI in the future, even if actual humans refuse.

Dasha’s robocall screener app, which is slated for release in early 2020, will also be spammer-agnostic — in that it’ll be able to handle and divert human salespeople too, as well as robots. After all, a spammer is a spammer.

“Probably it is the time for somebody to step in and ‘don’t be evil,’ ” says Chernyshov, echoing Google’s old motto, albeit perhaps not entirely reassuringly given the phrase’s lapsed history — as we talk about the team’s approach to ecosystem development and how machine-to-machine chat might overtake human voice calls.

“At some point in the future we will be talking to various robots much more than we probably talk to each other — because you will have some kind of human-like robots at your house,” he predicts. “Your doctor, gardener, warehouse worker, they all will be robots at some point.”

The logic at work here is that if resistance to an AI-powered Cambrian Explosion of machine speech is futile, it’s better to be at the cutting edge, building the most human-like robots — and making the robots at least sound like they care.

Dasha’s conversational quirks certainly can’t be called a gimmick. Even if the team’s close attention to mimicking the vocal flourishes of human speech — the disfluencies, the ums and ahs, the pitch and tonal changes for emphasis and emotion — might seem so at first airing.

In one of the demos on its website you can hear a clip of a very chipper-sounding male voice, who identifies himself as “John from Acme Dental,” taking an appointment call from a female (human), and smoothly dealing with multiple interruptions and time/date changes as she changes her mind. Before, finally, dealing with a flat cancellation.

A human receptionist might well have got mad that the caller essentially just wasted their time. Not John, though. Oh no. He ends the call as cheerily as he began, signing off with an emphatic: “Thank you! And have a really nice day. Bye!”

If the ultimate goal is Turing Test levels of realism in artificial speech — i.e. a conversation engine so human-like it can pass as human to a human ear — you do have to be able to reproduce, with precision timing, the verbal baggage that’s wrapped around everything humans say to each other.

This tonal layer does essential emotional labor in the business of communication, shading and highlighting words in a way that can adapt or even entirely transform their meaning. It’s an integral part of how we communicate. And thus a common stumbling block for robots.

So if the mission is to power a revolution in artificial speech that humans won’t hate and reject, then engineering full spectrum nuance is just as important a piece of work as having an amazing speech recognition engine. A chatbot that can’t do all that is really the gimmick.

Chernyshov claims Dasha’s conversation engine is “at least several times better and more complex than [Google] Dialogflow, [Amazon] Lex, [Microsoft] Luis or [IBM] Watson,” dropping a laundry list of rival speech engines into the conversation.

He argues none are on a par with what Dasha is being designed to do.

The difference is the “voice-first modeling engine.” “All those [rival engines] were built from scratch with a focus on chatbots — on text,” he says, couching modeling voice conversation “on a human level” as much more complex than the more limited chatbot-approach — and hence what makes Dasha special and superior.

“Imagination is the limit. What we are trying to build is an ultimate voice conversation AI platform so you can model any kind of voice interaction between two or more human beings.”

Google did demo its own stuttering voice AI — Duplex — last year, when it also took flak for a public demo in which it appeared not to have told restaurant staff up front they were going to be talking to a robot.

Chernyshov isn’t worried about Duplex, though, saying it’s a product, not a platform.

“Google recently tried to headhunt one of our developers,” he adds, pausing for effect. “But they failed.”

He says Dasha’s engineering staff make up more than half (28) its total headcount (48), and include two doctorates of science; three PhDs; five PhD students; and 10 masters of science in computer science.

It has an R&D office in Russia, which Chernyshov says helps make the funding go further.

“More than 16 people, including myself, are ACM ICPC finalists or semi finalists,” he adds — likening the competition to “an Olympic game but for programmers.” A recent hire — chief research scientist, Dr. Alexander Dyakonov — is both a doctor of science professor and former Kaggle No.1 GrandMaster in machine learning. So with in-house AI talent like that you can see why Google, uh, came calling…


But why not have Dasha ID itself as a robot by default? On that Chernyshov says the platform is flexible — which means disclosure can be added. But in markets where it isn’t a legal requirement, the door is being left open for “John” to slip cheerily by. “Blade Runner” here we come.

The team’s driving conviction is that emphasis on modeling human-like speech will, down the line, allow their AI to deliver universally fluid and natural machine-human speech interactions, which in turn open up all sorts of expansive and powerful possibilities for embeddable next-gen voice interfaces. Ones that are much more interesting than the current crop of gadget talkies.

This is where you could raid sci-fi/pop culture for inspiration. Such as Kitt, the dryly witty talking car from the 1980s TV series “Knight Rider.” Or, to throw in a British TV reference, Holly, the self-deprecating yet sardonic human-faced computer in “Red Dwarf.” (Or, indeed, Kryten, the guilt-ridden android butler.) Chernyshov’s suggestion is to imagine Dasha embedded in a Boston Dynamics robot. But surely no one wants to hear those crawling nightmares scream…

Dasha’s five-year+ roadmap includes the eyebrow-raising ambition to evolve the technology to achieve “a general conversational AI.” “This is a science fiction at this point. It’s a general conversational AI, and only at this point you will be able to pass the whole Turing Test,” he says of that aim.

“Because we have a human-level speech recognition, we have human-level speech synthesis, we have generative non-rule based behavior, and this is all the parts of this general conversational AI. And I think that we can we can — and scientific society — we can achieve this together in like 2024 or something like that.

“Then the next step, in 2025, this is like autonomous AI — embeddable in any device or a robot. And hopefully by 2025 these devices will be available on the market.”

Of course the team is still dreaming distance away from that AI wonderland/dystopia (depending on your perspective) — even if it’s date-stamped on the roadmap.

But if a conversational engine ends up in command of the full range of human speech — quirks, quibbles and all — then designing a voice AI may come to be thought of as akin to designing a TV character or cartoon personality. So very far from what we currently associate with the word “robotic.” (And wouldn’t it be funny if the term “robotic” came to mean “hyper entertaining” or even “especially empathetic” thanks to advances in AI.)

Let’s not get carried away though.

In the meantime, there are “uncanny valley” pitfalls of speech disconnect to navigate if the tone being (artificially) struck hits a false note. (And, on that front, if you didn’t know “John from Acme Dental” was a robot you’d be forgiven for misreading his chipper sign off to a total time waster as pure sarcasm. But an AI can’t appreciate irony. Not yet anyway.)

Nor can robots appreciate the difference between ethical and unethical verbal communication they’re being instructed to carry out. Sales calls can easily cross the line into spam. And what about even more dystopic uses for a conversation engine that’s so slick it can convince the vast majority of people it’s human — like fraud, identity theft, even election interference… the potential misuses could be terrible and scale endlessly.

Although if you straight out ask Dasha whether it’s a robot Chernyshov says it has been programmed to confess to being artificial. So it won’t tell you a barefaced lie.


How will the team prevent problematic uses of such a powerful technology?

“We have an ethics framework and when we will be releasing the platform we will implement a real-time monitoring system that will monitor potential abuse or scams, and also it will ensure people are not being called too often,” he says. “This is very important. That we understand that this kind of technology can be potentially probably dangerous.”

“At the first stage we are not going to release it to all the public. We are going to release it in a closed alpha or beta. And we will be curating the companies that are going in to explore all the possible problems and prevent them from being massive problems,” he adds. “Our machine learning team are developing those algorithms for detecting abuse, spam and other use cases that we would like to prevent.”

There’s also the issue of verbal “deepfakes” to consider. Especially as Chernyshov suggests the platform will, in time, support cloning a voiceprint for use in the conversation — opening the door to making fake calls in someone else’s voice. Which sounds like a dream come true for scammers of all stripes. Or a way to really supercharge your top performing salesperson.

Safe to say, the counter technologies — and thoughtful regulation — are going to be very important.

There’s little doubt that AI will be regulated. In Europe policymakers have tasked themselves with coming up with a framework for ethical AI. And in the coming years policymakers in many countries will be trying to figure out how to put guardrails on a technology class that, in the consumer sphere, has already demonstrated its wrecking-ball potential — with the automated acceleration of spam, misinformation and political disinformation on social media platforms.

“We have to understand that at some point this kind of technologies will be definitely regulated by the state all over the world. And we as a platform we must comply with all of these requirements,” agrees Chernyshov, suggesting machine learning will also be able to identify whether a speaker is human or not — and that an official caller status could be baked into a telephony protocol so people aren’t left in the dark on the “bot or not” question. 

“It should be human-friendly. Don’t be evil, right?”

Asked whether he considers what will happen to the people working in call centers whose jobs will be disrupted by AI, Chernyshov is quick with the stock answer — that new technologies create jobs too, saying that’s been true right throughout human history. Though he concedes there may be a lag — while the old world catches up to the new.

Time and tide wait for no human, even when the change sounds increasingly like we do.

Mar 19, 2019 --

AI has become table stakes in sales, customer service and marketing software

Artificial intelligence and machine learning have become essential if you are selling sales, customer service and marketing software, especially in large enterprises. The biggest vendors from Adobe to Salesforce to Microsoft to Oracle are jockeying for position to bring automation and intelligence to these areas.

Just today, Oracle announced several new AI features in its sales tools suite and Salesforce did the same in its customer service cloud. Both companies are building on artificial intelligence underpinnings that have been in place for several years.

All of these companies want to help their customers achieve their business goals by using increasing levels of automation and intelligence. Paul Greenberg, managing principal at The 56 Group, who has written multiple books about the CRM industry, including CRM at the Speed of Light, says that while AI has been around for many years, it’s just now reaching a level of maturity to be of value for more businesses.

“The investments in the constant improvement of AI by companies like Oracle, Microsoft and Salesforce are substantial enough to both indicate that AI has become part of what they have to offer — not an optional [feature] — and that the demand is high for AI from companies that are large and complex to help them deal with varying needs at scale, as well as smaller companies who are using it to solve customer service issues or minimize service query responses with chatbots,” Greenberg explained.

This would suggest that injecting intelligence in applications can help even the playing field for companies of all sizes, allowing the smaller ones to behave like they were much larger, and for the larger ones to do more than they could before, all thanks to AI.

The machine learning side of the equation allows these algorithms to see patterns that would be hard for humans to pick out of the mountains of data being generated by companies of all sizes today. In fact, Greenberg says that AI has improved enough in recent years that it has gone from predictive to prescriptive, meaning it can suggest the prospect to call that is most likely to result in a sale, or the best combination of offers to construct a successful marketing campaign.

Brent Leary, principal at CRM Insights, says that AI, especially when voice is involved, can make software tools easier to use and increase engagement. “If sales professionals are able to use natural language to interact with CRM, as opposed to typing and clicking, that’s a huge barrier to adoption that begins to crumble. And making it easier and more efficient to use these apps should mean more data enters the system, which result in quicker, more relevant AI-driven insights,” he said.

All of this shows that AI has become an essential part of these software tools, which is why all of the major players in this space have built AI into their platforms. In an interview at the Adobe Summit last year, Adobe CTO Abhay Parasnis told TechCrunch: “AI will be the single most transformational force in technology.” He appears to be right. It has certainly been transformative in sales, customer service and marketing.

Mar 07, 2018 --

Begin, a new app from Ryan Block, uses natural language to generate tasks from your Slack

Over two years after leaving Aol (now known as Oath) back in September 2015 to build a new startup, serial entrepreneur Ryan Block, with co-founder Brian LeRoux, is finally taking the wraps off the new venture: Begin, an intelligent app designed to help you keep track of things that you have to do, and when you should do them, as they come up in the stream of a messaging app.

By extension, Begin is also solving one of the more persistent problems of messaging apps: losing track of things you need to remember in the wider thread of the conversation.

Begin is launching today as an integration on Slack — which also happens to be one of its backers, by way of the Slack Fund.

Taking tasking apps to task

As you might have already seen, there are a lot of apps out there today to help you track tasks and larger work you have to do, from software based around project management and specific to-do lists like Asana, Todoist, Wrike and Microsoft’s Wunderlist/To-Do, through to those geared more to planning performance management, like BetterWorks.

The problem is that while some people have found these various dedicated apps useful, for a good proportion, task apps are where tasks go to die. Block — despite a pretty productive resume that includes editor of Engadget, co-founder of Gdgt, and VP of Product at Aol — was one of the latter group.

“Every time I’ve ever tried using a task management app I found it sad and discouraging to use,” he said. “You just end up getting a massive backlog of tasks and you lose sight of what’s really important.” (I’m guessing he is not the only one: the proliferation of these dozens of apps, without any single, clear market leader, is arguably one indication that none of them have achieved a critical enough mass of users.)

So Block and LeRoux started to think about how you could fix task apps and make them a more natural part of how work gets done.

They thought of messaging apps and their role in communications today. And more specifically, they turned to Slack. With its rapid-fire conversations and ability to draw in data from other apps, Slack has not only been one of the fastest-growing services in the enterprise world, but it has changed the conversation around business communications (literally and figuratively).

“Slack to us is more than just enterprise productivity,” Block said. “We see Slack as a primary tentpole in the future of work.”

Synchronicity

But, if you are one of the tens of millions of people that uses a messaging app like Slack at work, you’ll know that it can be wonderful and frustrating in equal parts. Wonderful because messaging can spur conversations or get answers quickly from people who might not be directly next to you; but frustrating because messaging can be — especially in group chat rooms — noisy, distracting and hard to track if you’re not paying attention all the time.

Begin has homed in on the third of these challenges of messaging platforms, and specifically how it pertains to the working world.

Sometimes in the course of a conversation an item might come up that you or a coworker needs to follow up on. Sometimes you might not even be a part of the conversation when that item comes up. In the course of a chat, the conversation might abruptly turn to another subject before you’ve had a chance to address an item. Begin is for all those moments.

Or, as Block described it to me, “It’s the difference between synchronous versus asynchronous work. Slack is good for certain things but for tracking things, it can be very hard.”

With Begin, the idea is that, when something arises that you need to follow up on, you set yourself — or someone else — a reminder by essentially calling Begin (@begin) into the conversation and making a note of that task using normal language. (Example: “@begin Check in with @katie and @lynley about the earnings schedule tomorrow”.)

The two people I’ve tagged in my example don’t have to be there when I’m mentioning this, and they won’t have to look through Slack mentions to find what I said, nor do I need to leave the conversation to write the reminder. For all of us, we can turn to Begin itself to check out the tasks when we have time, and on Begin those tasks get ordered by their timing.
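For a sense of the mechanics, here is a hypothetical sketch of the kind of parsing such a mention might go through: pulling assignees and a rough due date out of plain language. It is not Begin's actual implementation, and the date handling is deliberately limited to "today" and "tomorrow".

```python
# Hypothetical sketch of parsing a Slack mention into a task, for illustration.
# Not Begin's implementation: the date handling and the task schema are invented.
import re
from datetime import date, timedelta

def parse_task(message: str, author: str):
    # Strip the "@begin" trigger from the front of the message.
    text = re.sub(r"^@begin\s+", "", message.strip(), flags=re.IGNORECASE)

    # @-mentions in the text become assignees; otherwise the task is the author's.
    assignees = re.findall(r"@(\w+)", text) or [author]

    # Deliberately minimal due-date handling: only "today" and "tomorrow".
    due = None
    if re.search(r"\btomorrow\b", text, re.IGNORECASE):
        due = date.today() + timedelta(days=1)
    elif re.search(r"\btoday\b", text, re.IGNORECASE):
        due = date.today()

    return {"description": text, "assignees": assignees, "due": due}

task = parse_task(
    "@begin Check in with @katie and @lynley about the earnings schedule tomorrow",
    author="ryan",
)
print(task["assignees"], task["due"])   # ['katie', 'lynley'] plus tomorrow's date
```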

It’s a simple solution that is surprisingly not a part of the Slack experience today.

Aside from Slack, other investors in Begin’s seed round (of an undisclosed amount) include SV Angel, 415 and General Catalyst, which took a stake in Begin as its first “bot” investment nearly two years ago.

Fast forward to today, with bot hype subsiding, and Block plays down the bot aspect, even though Begin has a degree of intelligence, particularly around reading natural language and turning it into an action of sorts (an action for you to do). “We think of this as a Slack app, not as a bot,” he said.

Begin is, in fact, beginning small when it comes to features.

It’s only on Slack, you can’t draw other apps or data into your tasks, it doesn’t give you much granularity when it comes to timing (days are currently the shortest increment for setting a task), and it doesn’t synchronise with any calendars.

Those are all areas that Block says that the company is working on for future iterations, either by being baked directly into the app by Begin itself, or there for others to integrate by way of an API.

Does Begin have a way of setting a task to look at your tasks? It’s one question that underscores the fact that ultimately you will still have to, at some point, look at a list. That may be something that the Begin team might try to address over time, too, but for now, it’s the simple creation that is the focus.
