Dec 11, 2018
--

TechSee nabs $16M for its customer support solution built on computer vision and AR

Chatbots and other AI-based tools have firmly found their footing in the world of customer service, used to augment or completely replace a human responding to questions and complaints, or (sometimes, annoyingly, at the same time) to sell more products to users.

Today, an Israeli startup called TechSee is announcing $16 million in funding to help build out its own twist on that innovation: an AI-based video service, which uses computer vision, augmented reality and a customer’s own smartphone camera to provide tech support to customers, either alongside assistance from live agents, or as part of a standalone customer service “bot.”

Led by Scale Venture Partners — the storied investor that has been behind some of the bigger enterprise plays of the last several years (including Box, Chef, CloudHealth, DataStax, Demandbase, DocuSign, ExactTarget, HubSpot, JFrog and fellow Israeli AI assistance startup WalkMe), the Series B also includes participation from Planven Investments, OurCrowd, Comdata Group and Salesforce Ventures. (Salesforce was actually announced as a backer in October.)

The funding will be used both to expand the company’s current business as well as move into new product areas like sales.

Eitan Cohen, the CEO and co-founder, said that the company today provides tools to some 15,000 customer service agents and counts companies like Samsung and Vodafone among its customers across verticals like financial services, tech, telecoms and insurance.

The potential opportunity is big: Cohen estimates there are about 2 million customer service agents in the U.S., and about 14 million globally.

TechSee is not disclosing its valuation. It has raised around $23 million to date.

While TechSee provides support for software and apps, its sweet spot up to now has been providing video-based assistance to customers calling with questions about the long tail of hardware out in the world, used for example in a broadband home Wi-Fi service.

In fact, Cohen said he came up with the idea for the service when his parents phoned him up to help them get their cable service back up, and he found himself challenged to do it without being able to see the set-top box to talk them through what to do.

So he thought about all the how-to videos that are on platforms like YouTube and decided there was an opportunity to harness that in a more organised way for the companies providing an increasing array of kit that may never get the vlogger treatment.

“We are trying to bring that YouTube experience for all hardware,” he said in an interview.

The thinking is that this will become a bigger opportunity over time as more services get digitised, the cost of components continues to come down and everything becomes “hardware.”

“Tech may become more of a commodity, but customer service does not,” he added. “Solutions like ours allow companies to provide low-cost technology without having to hire more people to solve issues [that might arise with it.]”

The product today is sold along two main trajectories: assisting customer reps, and providing unmanned video assistance to field some of the easier and more common questions that get asked.

In cases where live video support is provided, the customer opts in for the service, similar to how she or he might for a support service that “takes over” the device in question to diagnose and try to fix an issue. Here, the camera for the service becomes a customer’s own phone.

Over time, that live assistance is used in two ways that are directly linked to TechSee’s artificial intelligence play. First, it helps to build up TechSee’s larger back catalogue of videos, where all identifying characteristics are removed with the focus solely on the device or problem in question. Second, the experience in the video is also used to build TechSee’s algorithms for future interactions. Cohen said there are now “millions” of media files — images and videos — in the company’s catalogue.
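As a very rough illustration of the kind of lookup such a catalogue makes possible, here is a minimal sketch that matches a customer's photo against stored reference images using ORB feature matching in OpenCV. This is not TechSee's pipeline; the file paths and catalogue entries are hypothetical.

```python
# Minimal sketch of matching a customer's photo against a device catalogue.
# Illustration only, not TechSee's implementation; paths and catalogue
# contents are hypothetical.
import cv2

# Hypothetical catalogue: device model -> reference image path
catalogue = {
    "router-ac1200": "refs/router_ac1200.jpg",
    "settop-x1": "refs/settop_x1.jpg",
}

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(query_path, ref_path):
    """Count good ORB feature matches between two images."""
    q = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    r = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(q, None)
    _, r_desc = orb.detectAndCompute(r, None)
    if q_desc is None or r_desc is None:
        return 0
    matches = matcher.match(q_desc, r_desc)
    # Keep only reasonably close descriptor matches
    return sum(1 for m in matches if m.distance < 40)

def identify_device(photo_path):
    """Return the catalogue entry that best matches the customer's photo."""
    scores = {model: match_score(photo_path, ref) for model, ref in catalogue.items()}
    return max(scores, key=scores.get)

print(identify_device("customer_frame.jpg"))
```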

The effectiveness of its system so far has been pretty impressive. TechSee’s customers — the companies running the customer support — say they have on average seen a 40 percent increase in customer satisfaction (NPS scores), a 17 percent decrease in technician dispatches and a 20 to 30 percent increase in first-call resolutions, depending on the industry.

TechSee is not the only company that has built a video-based customer engagement platform: others include Stryng, CallVU and Vee24. And you could imagine companies like Amazon — which is already dabbling in providing advice to customers based on what its Echo Look can see — might be interested in providing such services to users across the millions of products that it sells, as well as provide that as a service to third parties.

According to Cohen, what TechSee has going for it compared to those startups, and also the potential entry of companies like Microsoft or Amazon into the mix, is a head start on raw data and a vision of how it will be used by the startup’s AI to build the business.

“We believe that anyone who wants to build this would have a challenge making it from scratch,” he said. “This is where we have strong content, millions of images, down to specific model numbers, where we can provide assistance and instructions on the spot.”

Salesforce’s interest in the company, he said, is a natural progression of where that data and customer relationship can take a business: beyond responsive support into areas like quick warranty verification (for all those times people have neglected to register a product), snapping photos of fender benders for insurance claims and, of course, upselling to other products and services.

“Salesforce sees the synergies between the sales cloud and the service cloud,” Cohen said.

“TechSee recognized the great potential for combining computer vision AI with augmented reality in customer engagement,” said Andy Vitus, partner at Scale Venture Partners, who joins the board with this round. “Electronic devices become more complex with every generation, making their adoption a perennial challenge. TechSee is solving a massive problem for brands with a technology solution that simplifies the customer experience via visual and interactive guidance.”

Dec 8, 2018
--

Why you need a supercomputer to build a house

When the hell did building a house become so complicated?

Don’t let the folks on HGTV fool you. The process of building a home nowadays is incredibly painful. Just applying for the necessary permits can be a soul-crushing undertaking that’ll have you running around the city, filling out useless forms, and waiting in motionless lines under fluorescent lights at City Hall wondering whether you should have just moved back in with your parents.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @Arman.Tabatabai@techcrunch.com.

And to actually get approval for those permits, your future home will have to satisfy a combinatorial tangle of complex and conflicting federal, state and city building codes, separate sets of fire and energy requirements, and quasi-legal construction standards set by various independent agencies.

It wasn’t always this hard – remember when you’d hear people say “my grandparents built this house with their bare hands?” These proliferating rules have been among the main causes of the rapidly rising cost of housing in America and other developed nations. The good news is that a new generation of startups is identifying and simplifying these thickets of rules, and the future of housing may be determined as much by machine learning as woodworking.

When directions become deterrents

Photo by Bill Oxford via Getty Images

Cities once solely created the building codes that dictate the requirements for almost every aspect of a building’s design, and they structured those guidelines based on local terrain, climates and risks. Over time, townships, states, federally-recognized organizations and independent groups that sprouted from the insurance industry further created their own “model” building codes.

The complexity starts here. The federal codes and independent agency standards are optional for states, who have their own codes which are optional for cities, who have their own codes that are often inconsistent with the state’s and are optional for individual townships. Thus, local building codes are these ever-changing and constantly-swelling mutant books made up of whichever aspects of these different codes local governments choose to mix together. For instance, New York City’s building code is made up of five sections, 76 chapters and 35 appendices, alongside a separate set of 67 updates (The 2014 edition is available as a book for $155, and it makes a great gift for someone you never want to talk to again).

In short: what a shit show.

Because of the hyper-localized and overlapping nature of building codes, a home in one location can be subject to a completely different set of requirements than one elsewhere. So it’s really freaking difficult to even understand what you’re allowed to build, the conditions you need to satisfy, and how to best meet those conditions.

There are certain levels of complexity in housing codes that are hard to avoid. The structural integrity of a home is dependent on everything from walls to erosion and wind-flow. There are countless types of material and technology used in buildings, all of which are constantly evolving.

Thus, each thousand-page codebook from the various federal, state, city, township and independent agencies – all dictating interconnecting, location- and structure-dependent needs – leads to an incredibly expansive decision tree that requires an endless set of simulations to fully understand all the options you have to reach compliance, and their respective cost-effectiveness and efficiency.
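To make the combinatorial problem concrete, here is a toy sketch of that search: encode a handful of invented requirements as machine-checkable rules, enumerate the design options, and keep the cheapest combination that satisfies everything. Real codebooks obviously involve thousands of such rules.

```python
# Toy illustration of the compliance search problem: enumerate design options,
# keep only combinations that satisfy every applicable rule, pick the cheapest.
# The rules, options and costs are invented for illustration.
from itertools import product

options = {
    "wall_r_value": [13, 21, 30],        # insulation R-values
    "window_u_factor": [0.45, 0.30],     # lower is better
    "sprinklers": [False, True],
}
cost = {
    ("wall_r_value", 13): 8000, ("wall_r_value", 21): 11000, ("wall_r_value", 30): 15000,
    ("window_u_factor", 0.45): 6000, ("window_u_factor", 0.30): 9500,
    ("sprinklers", False): 0, ("sprinklers", True): 12000,
}

# A "local code" assembled from overlapping state/city/model-code requirements.
rules = [
    lambda d: d["wall_r_value"] >= 21,                     # state energy code
    lambda d: d["window_u_factor"] <= 0.35,                # city amendment
    lambda d: d["sprinklers"] or d["wall_r_value"] >= 30,  # fire-code trade-off
]

def compliant_designs():
    keys = list(options)
    for combo in product(*options.values()):
        design = dict(zip(keys, combo))
        if all(rule(design) for rule in rules):
            yield design, sum(cost[(k, v)] for k, v in design.items())

best = min(compliant_designs(), key=lambda pair: pair[1])
print(best)  # cheapest design that satisfies every rule
```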

So homebuilders are often forced to turn to costly consultants or settle on designs that satisfy code but aren’t cost-efficient. And if construction issues cause you to fall short of the outcomes you expected, you could face hefty fines, delays or gigantic cost overruns from redesigns and rebuilds. All these costs flow through the lifecycle of a building, ultimately impacting affordability and access for homeowners and renters.

Startups are helping people crack the code

Photo by Caiaimage/Rafal Rodzoch via Getty Images

Strap on your hard hat – there may be hope for your dream home after all.

The friction, inefficiencies, and pure agony caused by our increasingly convoluted building codes have given rise to a growing set of companies that are helping people make sense of the home-building process by incorporating regulations directly into their software.

Using machine learning, their platforms run advanced scenario analysis around interweaving building codes and interdependent structural variables, allowing users to create compliant designs and make regulation-informed decisions without ever having to encounter the regulations themselves.

For example, the prefab housing startup Cover is helping people figure out what kind of backyard homes they can design and build on their properties based on local zoning and permitting regulations.

Some startups are trying to provide similar services to developers of larger scale buildings as well. Just this past week, I covered the seed round for a startup called Cove.Tool, which analyzes local building energy codes – based on location and project-level characteristics specified by the developer – and spits out the most cost-effective and energy-efficient resource mix that can be built to hit local energy requirements.

And startups aren’t just simplifying the regulatory pains of the housing process through building codes. Envelope is helping developers make sense of our equally tortuous zoning codes, while Cover and companies like Camino are helping steer home and business-owners through arduous and analog permitting processes.

Look, I’m not saying codes are bad. In fact, I think building codes are good and necessary – no one wants to live in a home that might cave in on itself the next time it snows. But I still can’t help but ask myself why the hell does it take AI to figure out how to build a house? Why do we have building codes that take a supercomputer to figure out?

Ultimately, it would probably help to have more standardized building codes that we actually clean up from time to time. More regional standardization would greatly reduce the number of conditional branches that exist. And if there were one set of accepted overarching codes that could still set precise requirements for all components of a building, there would be only one path of regulations to follow, greatly reducing the knowledge and analysis necessary to efficiently build a home.

But housing’s inherent ties to geography make standardization unlikely. Each region has different land conditions, climates, priorities and political motivations that cause governments to want their own set of rules.

Instead, governments seem to be fine with sidestepping the issues caused by hyper-regional building codes and leaving it up to startups to help people wade through the ridiculousness that paves the home-building process, in the same way Concur helps employees deal with infuriating corporate expense policies.

For now, we can count on startups that are unlocking value and making housing more accessible, simpler and cheaper just by making the rules easier to understand. And maybe one day my grandkids can tell their friends how their grandpa built his house with his own supercomputer.


Dec 5, 2018
--

Workato raises $25M for its integration platform

Workato, a startup that offers an integration and automation platform for businesses that competes with the likes of MuleSoft, SnapLogic and Microsoft’s Logic Apps, today announced that it has raised a $25 million Series B funding round from Battery Ventures, Storm Ventures, ServiceNow and Workday Ventures. Combined with its previous rounds, the company has now received investments from some of the largest SaaS players, including Salesforce, which participated in an earlier round.

At its core, Workato’s service isn’t that different from other integration services (you can think of them as IFTTT for the enterprise), in that it helps you connect disparate systems and services and set up triggers that kick off certain actions (if somebody signs a contract in DocuSign, for example, send a message to Slack and create an invoice). Like its competitors, it connects to virtually any SaaS tool a company might use, whether that’s Marketo and Salesforce, or Slack and Twitter. And like some of its competitors, all of this can be done with a drag-and-drop interface.
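As a rough sketch of that trigger-and-action pattern (this is not Workato's API; the connector functions here are hypothetical stand-ins):

```python
# Illustrative trigger/action "recipe", in the IFTTT-for-the-enterprise spirit
# described above. The connector functions are hypothetical stand-ins, not
# Workato's actual connectors.

def send_slack_message(channel, text):
    print(f"[slack] #{channel}: {text}")                   # stand-in for a Slack API call

def create_invoice(account, amount):
    print(f"[billing] invoice for {account}: ${amount}")   # stand-in for an ERP call

# Recipe: when a DocuSign envelope is completed, notify Slack and create an invoice.
def on_docusign_envelope_completed(event):
    send_slack_message("sales", f"{event['signer']} signed {event['contract']}")
    create_invoice(account=event["account"], amount=event["value"])

# Simulated incoming webhook event from the e-signature system.
on_docusign_envelope_completed({
    "signer": "Jane Doe",
    "contract": "Enterprise MSA",
    "account": "Acme Corp",
    "value": 48000,
})
```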

What’s different, Workato founder and CEO Vijay Tella tells me, is that the service was built for business users, not IT admins. “Other enterprise integration platforms require people who are technical to build and manage them,” he said. “With the explosion in SaaS with lines of business buying them — the IT team gets backlogged with the various integration needs. Further, they are not able to handle all the workflow automation needs that businesses require to streamline and innovate on the operations.”

Battery Ventures’ general partner Neeraj Agrawal also echoed this. “As we’ve all seen, the number of SaaS applications run by companies is growing at a very rapid clip,” he said. “This has created a huge need to engage team members with less technical skill-sets in integrating all these applications. These types of users are closer to the actual business workflows that are ripe for automation, and we found Workato’s ability to empower everyday business users super compelling.”

Tella also stressed that Workato makes extensive use of AI/ML to make building integrations and automations easier. The company calls this Recipe Q. “Leveraging the tens of billions of events processed, hundreds of millions of metadata elements inspected and hundreds of thousands of automations that people have built on our platform — we leverage ML to guide users to build the most effective integration/automation by recommending next steps as they build these automations,” he explained. “It recommends the next set of actions to take, fields to map, auto-validates mappings, etc. The great thing with this is that as people build more automations — it learns from them and continues to make the automation smarter.”
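One crude way to picture that kind of next-step recommendation, purely as an illustration and not as a description of how Recipe Q is actually built, is to count which action most often followed the steps assembled so far in previously built automations:

```python
# Toy next-step recommender: given the steps built so far, suggest the action
# that most often followed that prefix in previously built automations.
# The historical recipes are invented; this is not Recipe Q's actual model.
from collections import Counter

history = [
    ["docusign.completed", "slack.notify", "netsuite.create_invoice"],
    ["docusign.completed", "slack.notify", "salesforce.update_opportunity"],
    ["docusign.completed", "netsuite.create_invoice"],
    ["jira.issue_created", "slack.notify"],
]

def recommend_next(prefix):
    """Return candidate next steps ranked by how often they followed `prefix`."""
    followers = Counter()
    n = len(prefix)
    for recipe in history:
        if recipe[:n] == prefix and len(recipe) > n:
            followers[recipe[n]] += 1
    return followers.most_common()

print(recommend_next(["docusign.completed", "slack.notify"]))
```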

The AI/ML system also handles errors and offers features like sentiment analysis to analyze emails and detect their intent, with the ability to route them depending on the results of that analysis.
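As an illustration of sentiment-based routing in general (not Workato's own models), here is a minimal sketch that uses NLTK's off-the-shelf VADER analyzer to decide which queue an incoming email should land in:

```python
# Illustrative email routing by sentiment, using NLTK's VADER analyzer as a
# stand-in classifier. Workato's own models are not public; this is not their code.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def route_email(body):
    """Send angry-sounding emails to an escalation queue, the rest to standard support."""
    score = analyzer.polarity_scores(body)["compound"]  # -1 (negative) .. +1 (positive)
    return "escalation-queue" if score < -0.4 else "standard-queue"

print(route_email("This is terrible, I am extremely angry, the product has been broken for a week."))
print(route_email("Quick question about upgrading my plan, thanks!"))
```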

As part of today’s announcement, the company is also launching a new AI-enabled feature: Automation Editions for sales, marketing and HR (with editions for finance and support coming in the future). The idea here is to give those departments a kit with pre-built workflows that helps them to get started with the service without having to bring in IT.

Dec 4, 2018
--

Cove.Tool wants to solve climate change one efficient building at a time

As the fight against climate change heats up, Cove.Tool is looking to help tackle carbon emissions one building at a time.

The Atlanta-based startup provides an automated big-data platform that helps architects, engineers and contractors identify the most cost-effective ways to make buildings compliant with energy efficiency requirements. After raising an initial round earlier this year, the company has completed the final close of a $750,000 seed round. Since the initial announcement of the round earlier this month, Urban Us, the early-stage fund focused on companies transforming city life, has joined the syndicate, which also comprises Tech Square Labs and Knoll Ventures.

Helping firms navigate a growing suite of energy standards and options

Cove.Tool software allows building designers and managers to plug in a variety of building conditions, energy options and zoning specifications to get to the most cost-effective method of hitting building energy efficiency requirements. (Image: Cove.Tool)

In the US, the buildings we live and work in contribute more carbon emissions than any other sector. Governments across the country are now looking to improve energy consumption habits by implementing new building codes that set higher energy efficiency requirements for buildings. 

However, figuring out the best ways to meet changing energy standards has become an increasingly difficult task for designers. For one, buildings are subject to differing federal, state and city codes that are all frequently updated and overlaid on one another. Therefore, the specific efficiency requirements for a building can be hard to understand, geographically unique and immensely variable from project to project.

Architects, engineers and contractors also have more options for managing energy consumption than ever before – equipped with tools like connected devices, real-time energy-management software and more-affordable renewable energy resources. And the effectiveness and cost of each resource are also impacted by variables distinct to each project and each location, such as local conditions, resource placement, and factors as specific as the amount of shade a building sees.

With designers and contractors facing countless resource combinations and weightings, Cove.Tool looks to make it easier to identify and implement the most cost-effective and efficient resource bundles that can be used to hit a building’s energy efficiency requirements.

Cove.Tool users begin by specifying a variety of project-specific inputs, which can include a vast amount of extremely granular detail around a building’s use, location, dimensions or otherwise. The software runs the inputs through a set of parametric energy models before spitting out the optimal resource combination under the set parameters.

For example, if a project is located on a site with heavy wind flow in a cold city, the platform might tell you to increase window size and spend on energy efficient wall installations, while reducing spending on HVAC systems. Along with its recommendations, Cove.Tool provides in-depth but fairly easy-to-understand graphical analyses that illustrate various aspects of a building’s energy performance under different scenarios and sensitivities.
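A stripped-down sketch of that kind of parametric search is below; the measures, costs and the crude energy model are invented for illustration, and Cove.Tool's actual engine is far more sophisticated.

```python
# Toy parametric search in the spirit described above: try combinations of
# efficiency measures, estimate energy use with a crude model, and return the
# cheapest combination that meets the target. All numbers are invented.
from itertools import product

# Candidate measures: (name, upfront cost in $, fractional reduction in energy use)
measures = {
    "window_glazing": [("double", 20000, 0.05), ("triple", 35000, 0.09)],
    "wall_insulation": [("standard", 15000, 0.04), ("high", 28000, 0.10)],
    "hvac": [("baseline", 40000, 0.00), ("high_efficiency", 65000, 0.15)],
}
BASELINE_EUI = 100.0   # kBtu/sqft/yr before any measures (made up)
TARGET_EUI = 78.0      # energy code target for this hypothetical project

def search():
    best = None
    for combo in product(*measures.values()):
        cost = sum(c for _, c, _ in combo)
        eui = BASELINE_EUI
        for _, _, reduction in combo:
            eui *= (1 - reduction)          # crude multiplicative energy model
        if eui <= TARGET_EUI and (best is None or cost < best[0]):
            best = (cost, eui, [name for name, _, _ in combo])
    return best

print(search())  # (cost, modeled EUI, chosen measures) for the cheapest compliant mix
```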

Cove.Tool users can input granular project-specifics, such as shading from particular beams and facades, to get precise analyses around a building’s energy performance under different scenarios and sensitivities.

Democratizing building energy modeling

Traditionally, the design process for a building’s energy system can be quite painful for architecture and engineering firms.

An architect would send initial building designs to the engineers, who would then test out a variety of energy-system scenarios over the course of a few weeks. By the time the engineers came back with an analysis, the architects had often made significant design changes, which then had to go back to the engineers, leaving the energy plan constantly one to three months behind the rest of the building. This process can not only lead to less efficient and more expensive energy infrastructure; the hectic back-and-forth can also lead to longer project timelines, unexpected construction issues, delays and budget overruns.

Cove.Tool effectively looks to automate the process of “energy modeling,” which aims to ease the pains of energy design in the same way Building Information Modeling (BIM) has transformed architectural design and construction. Just as BIM creates predictive digital simulations that test all the design attributes of a project, energy modeling uses building specs, environmental conditions and various other parameters to simulate a building’s energy efficiency, costs and footprint.

By using energy modeling, developers can optimize the design of the building’s energy system, adjust plans in real-time, and more effectively manage the construction of a building’s energy infrastructure. However, the expertise needed for energy modeling falls outside the comfort zones of many firms, who often have to outsource the task to expensive consultants.

The frustrations of energy-system design and the complexities of energy modeling are ones the Cove.Tool team knows well. Patrick Chopson and Sandeep Ahuja, two of the company’s three co-founders, are former architects who worked as energy modeling consultants when they first began building out the Cove.Tool software.

After seeing their clients’ initial excitement over the ability to quickly analyze millions of combinations and instantly identify the ones that produce cost and energy savings, Patrick and Sandeep teamed up with CTO Daniel Chopson and focused full-time on building out a comprehensive automated solution that would allow firms to run energy modeling analysis without costly consultants, more quickly, and through an interface that would be easy enough for an architectural intern to use.

So far there seems to be serious demand for the product, with the company already boasting an impressive roster of customers that includes several of the country’s largest architecture firms, such as HGA, HKS and Cooper Carry. And the platform has delivered compelling results – for example, one residential developer was able to identify energy solutions that cost $2 million less than the building’s original model. With the funds from its seed round, Cove.Tool plans to further enhance its sales efforts while continuing to develop additional features for the platform.

Changing decision-making and fighting climate change

The value proposition Cove.Tool hopes to offer is clear – the company wants to make it easier, faster and cheaper for firms to use innovative design processes that help identify the most cost-effective and energy-efficient solutions for their buildings, all while reducing the risks of redesign, delay and budget overruns.

Longer-term, the company hopes that it can help the building industry move towards more innovative project processes and more informed decision-making while making a serious dent in the fight against emissions.

“We want to change the way decisions are made. We want decisions to move away from being just intuition to becoming more data-driven,” the co-founders told TechCrunch.

“Ultimately we want to help stop climate change one building at a time. Stopping climate change is such a huge undertaking but if we can change the behavior of buildings it can be a bit easier. Architects and engineers are working hard but they need help and we need to change.”

Dec 4, 2018
--

FortressIQ raises $12M to bring new AI twist to process automation

FortressIQ, a startup that wants to bring a new kind of artificial intelligence to process automation called imitation learning, emerged from stealth this morning and announced it has raised $12 million.

The Series A investment came entirely from a single venture capital firm, Lightspeed Venture Partners. Today’s funding comes on top of $4 million in seed capital the company raised previously from Boldstart Ventures, Comcast Ventures and Eniac Ventures.

Pankaj Chowdhry, founder and CEO of FortressIQ, says that his company essentially replaces the high-cost consultants who are paid to do time-and-motion studies, automating that work in a fairly creative way. It’s a bit like Robotic Process Automation (RPA), a space that is attracting a lot of investment right now, but instead of simply recording what’s happening on the desktop and reproducing that digitally, it takes things a step further with a process called “imitation learning.”

“We want to be able to replicate human behavior through observation. We’re targeting this idea of how can we help people understand their processes. But imitation learning is I think the most interesting area of artificial intelligence because it focuses not on what AI can do, but how can AI learn and adapt,” he explained.

They start by capturing a low-bandwidth movie of the process. “So we build virtual processors. And basically the idea is we have an agent that gets deployed by your enterprise IT group, and it integrates into the video card,” Chowdhry explained.

He points out that it’s not actually using a camera, but it captures everything going on, as a person interacts with a Windows desktop. In that regard it’s similar to RPA. “The next component is our AI models and computer vision. And we build these models that can literally watch the movie and transcribe the movie into what we call a series of software interactions,” he said.

Another key differentiator here is that they have built a data mining component on top of this, so if the person in the movie is doing something like booking an invoice, and stops to check email or Slack, FortressIQ can understand when an activity isn’t part of the process and filters that out automatically.
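A very rough way to picture that filtering step is below; FortressIQ infers this from computer-vision models over screen recordings, whereas the sketch uses a hard-coded application list and made-up events purely for illustration.

```python
# Toy version of filtering out off-process activity from a captured stream of
# desktop interactions. The events are invented; the real system infers this
# from computer-vision models rather than a hand-written application list.
captured_events = [
    {"app": "SAP", "action": "open_invoice", "ts": 1},
    {"app": "Outlook", "action": "read_email", "ts": 2},      # not part of the process
    {"app": "SAP", "action": "enter_amount", "ts": 3},
    {"app": "Slack", "action": "reply_message", "ts": 4},     # not part of the process
    {"app": "SAP", "action": "post_invoice", "ts": 5},
]

PROCESS_APPS = {"SAP"}  # apps the "book an invoice" process actually touches

def extract_process(events):
    """Keep only interactions that belong to the business process being mined."""
    return [e for e in events if e["app"] in PROCESS_APPS]

for step in extract_process(captured_events):
    print(step["ts"], step["app"], step["action"])
```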

The product will be offered as a cloud service. Chowdhry’s previous company, Third Pillar Systems, was acquired by Genpact in 2013.

Dec 4, 2018
--

Forethought scores $9M Series A in wake of Battlefield win

It’s been a whirlwind few months for Forethought, a startup with a new way of looking at enterprise search that relies on artificial intelligence. In September, the company took home the TechCrunch Disrupt Battlefield trophy in San Francisco, and today it announced a $9 million Series A investment.

It’s pretty easy to connect the dots between the two events. CEO and co-founder Deon Nicholas said they’ve seen a strong uptick in interest since the win. “Thanks to TechCrunch Disrupt, we have had a lot of things going on including a bunch of new customer interest, but the biggest news is that we’ve raised our $9 million Series A round,” he told TechCrunch.

The investment was led by NEA, with K9 Ventures, Village Global and several angel investors also participating. The angel crew includes Front CEO Mathilde Collin, Robinhood CEO Vlad Tenev and LearnVest CEO Alexa von Tobel.

Forethought aims to change conventional enterprise search by shifting from the old keyword-based approach to using artificial intelligence underpinnings to retrieve the correct information from a corpus of documents.

“We don’t work on keywords. You can ask questions without keywords and using synonyms to help understand what you actually mean, we can actually pull out the correct answer [from the content] and deliver it to you,” Nicholas told TechCrunch in September.
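The general technique behind that kind of keyword-free retrieval is to compare embeddings rather than tokens. Here is a minimal sketch using the open-source sentence-transformers library; it illustrates the idea, not Forethought's actual models, and the documents are made up.

```python
# Minimal embedding-based retrieval sketch: answer a question by cosine
# similarity between sentence embeddings, with no keyword overlap required.
# Illustrates the general technique, not Forethought's implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "To reset your password, open Settings and choose 'Forgot password'.",
    "Refunds are processed within 5 business days of the return being received.",
    "You can export your data as CSV from the Reports page.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs)

def answer(question):
    q = model.encode([question])[0]
    # Cosine similarity between the question and each document
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return docs[int(np.argmax(sims))]

# No shared keywords with the stored answer, but the meaning matches.
print(answer("I can't remember my login credentials"))
```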

He points out that it’s still early days for the company. It had been in stealth for a year before launching at TechCrunch Disrupt in September. Since the event, the three co-founders have brought on six additional employees and they will be looking to hire more in the next year, especially around machine learning and product and UX design.

At launch, the product could be embedded in Salesforce and Zendesk, but the company is looking to expand beyond that.

The company is concentrating on customer service for starters, but with the new money in hand, it intends to begin looking at other areas in the enterprise that could benefit from a smart information retrieval system. “We believe that this can expand beyond customer support to general information retrieval in the enterprise,” Nicholas said.

Nov 28, 2018
--

Amazon Textract brings intelligence to OCR

One of the challenges just about every business faces is converting forms to a useful digital format. This has typically involved using human data entry clerks to enter the data into the computer. The state of the art involved using OCR to read forms automatically, but AWS CEO Andy Jassy explained that OCR is basically just a dumb text reader: it doesn’t recognize text types. Amazon wanted to change that, and today it announced Amazon Textract, an intelligent OCR tool to move data from forms to a more usable digital format.

In an example, he showed a form with tables. Regular OCR didn’t recognize the table and interpreted it as a string of text. Textract is designed to recognize common page elements like a table and pull the data in a sensible way.

Jassy said that forms also often change, and if you are using a template as a workaround for OCR’s lack of intelligence, the template breaks if you move anything. To fix that, Textract is smart enough to understand common data types like Social Security numbers, dates of birth and addresses, and interprets them correctly no matter where they fall on the page.

“We have taught Textract to recognize this set of characters is a date of birth and this is a Social Security number. If forms change, Textract won’t miss it,” Jassy explained.
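For developers, Textract is exposed through the AWS SDK. A minimal boto3 sketch might look like the following; the bucket and file names are placeholders, and since the service was announced in preview, availability may vary by account and region.

```python
# Minimal boto3 sketch of calling Textract on a scanned form stored in S3.
# Bucket and file names are placeholders; the service was in preview at
# announcement time, so availability may vary.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-forms-bucket", "Name": "tax-form.png"}},
    FeatureTypes=["TABLES", "FORMS"],  # ask for table and key-value extraction
)

# Blocks come back typed: LINE/WORD text, TABLE/CELL structure, KEY_VALUE_SET pairs.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])

tables = [b for b in response["Blocks"] if b["BlockType"] == "TABLE"]
print(f"Detected {len(tables)} table(s)")
```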


Nov 28, 2018
--

AWS announces new Inferentia machine learning chip

AWS is not content to cede any part of any market to any company. When it comes to machine learning chips, names like Nvidia or Google come to mind, but today at AWS re:Invent in Las Vegas, the company announced a new dedicated machine learning chip of its own called Inferentia.

“Inferentia will be a very high-throughput, low-latency, sustained-performance very cost-effective processor,” AWS CEO Andy Jassy explained during the announcement.

Holger Mueller, an analyst with Constellation Research, says that while Amazon is far behind, this is a good step for them as companies try to differentiate their machine learning approaches in the future.

“The speed and cost of running machine learning operations — ideally in deep learning — are a competitive differentiator for enterprises. Speed advantages will make or break success of enterprises (and nations when you think of warfare). That speed can only be achieved with custom hardware, and Inferentia is AWS’s first step to get in to this game,” Mueller told TechCrunch. As he pointed out, Google has a 2-3 year head start with its TPU infrastructure.

Inferentia supports popular data types like INT8 and FP16, as well as mixed precision. What’s more, it supports multiple machine learning frameworks, including TensorFlow, Caffe2 and ONNX.

Of course, being an Amazon product, it also works with popular AWS services such as EC2, SageMaker and the new Elastic Inference Engine announced today.

While the chip was announced today, AWS CEO Andy Jassy indicated it won’t actually be available until next year.


Nov 15, 2018
--

RPA startup Automation Anywhere nabs $300M from SoftBank at a $2.6B valuation

The market for RPA — Robotic Process Automation — is getting a hat trick of news this week: Automation Anywhere has today announced that it has raised $300 million from the SoftBank Vision Fund. This funding, which values Automation Anywhere at $2.6 billion post-money, is an extension to the Series A the company announced earlier this year, which was at a $1.8 billion valuation. It brings the total size of the round to $550 million.

The news comes just a day after one of the startup’s bigger competitors, UiPath, announced a $265 million raise at a $3 billion valuation; and a week after Kofax, another competitor, announced it would be acquiring a division of Nuance for $400 million to beef up its business.

It’s also yet one more example of a one-two punch in funding. It was only in July that Automation Anywhere announced its $250 million raise.

This latest round adds a significant investor to the company’s cap table in the SoftBank Vision Fund, which counts a number of tech giants like Apple and Qualcomm among its LPs. Notably, the fund has been under fire for the last few weeks because a large swathe of its backing comes from Saudi money.

The Saudi Arabian government has been in the spotlight over its involvement in the killing of journalist Jamal Khashoggi in its consulate in Istanbul. By extension, many questions have been raised in recent weeks over the ethics of taking money from the Vision Fund while so much about that affair remains unresolved.

In an interview, Mihir Shukla, CEO and Co-Founder at Automation Anywhere, said that while what happened to Khashoggi was “not acceptable,” his conversations started with SoftBank before that and they did not impact the startup’s decision over whether to work with the Fund.

He declined to comment on the timing of the term sheet getting signed, when asked whether it was before or after the news broke of the murder.

“What attracted us to SoftBank was that Masayoshi Son” — the CEO and founder of SoftBank — “has a vision and he is investing in foundational platforms that will change how we work and travel,” Shukla said. “We share that vision.”

He also pointed out that getting funding from SoftBank will “naturally” lead to more opportunities to partner with companies in SoftBank’s network of companies, which cover dozens of investments and outright ownerships.

While it feels like artificial intelligence is something that you see referenced at every turn these days in the tech world, RPA is an interesting area because it’s one of the more tangible applications of it, across a wide set of businesses.

In short, it’s a set of software-based “robots” that help companies automate mundane and repetitive tasks that would otherwise be done by human workers, employing AI-based technology in areas like computer vision and machine learning to get the work done.
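For a flavor of what a software “robot” does at the desktop level, here is a tiny generic sketch using the open-source pyautogui library. It illustrates the category rather than Automation Anywhere's product; the screenshot files and form values are placeholders.

```python
# Generic illustration of desktop-level RPA: find a UI element on screen by
# image, click it, and type a value, much as a human clerk would. A sketch of
# the category using the open-source pyautogui library, not Automation
# Anywhere's product; the screenshots and values are placeholders.
import pyautogui

def fill_invoice_field(field_image, value):
    """Locate a form field by its screenshot and type a value into it."""
    location = pyautogui.locateCenterOnScreen(field_image, confidence=0.9)
    if location is None:
        raise RuntimeError(f"Could not find {field_image} on screen")
    pyautogui.click(location)
    pyautogui.write(str(value), interval=0.05)

fill_invoice_field("amount_field.png", "1249.50")
fill_invoice_field("po_number_field.png", "PO-20481")
```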

Competition among companies to grab pole position in the space is fierce. Automation Anywhere has 1,400 organizations as customers, it says. By comparison, UiPath has 2,100 and claims an annual revenue run rate at the moment of $150 million. Shukla declined to disclose any financials for his company.

But in light of all that, the company will be using the funding to build out its business and keep ahead of its rivals.

“With this additional capital, we are in a position to do far more than any other provider,” said Shukla in a statement. “We will not only continue to deliver the most advanced RPA to the market, but we will help bring AI to millions. Like the introduction of the PC, we see a world where every office employee will work alongside digital workers, amplifying human contributions. Today, employees must know how to use a PC and very soon employees will have to know how to build a bot.”

Automation Anywhere claims that its Bot Store is the industry’s largest marketplace for bot applications, designed both by itself and partners, to execute different business processes, with 65,000 users since launching in March 2018.

Nov 14, 2018
--

Microsoft to acquire Xoxco as focus on AI and bot developers continues

Microsoft has been all in on AI this year, and in the build versus buy equation, the company has been leaning heavily toward buying. This morning, the company announced its intent to acquire Xoxco, an Austin-based software developer with a focus on bot design, making it the fourth AI-related company Microsoft has purchased this year.

“Today, we are announcing we have signed an agreement to acquire Xoxco, a software product design and development studio known for its conversational AI and bot development capabilities,” Lili Cheng, corporate VP for conversational AI at Microsoft, wrote in a blog post announcing the acquisition.

Xoxco, which was founded in 2009 — long before most of us were thinking about conversational bots — has raised $1.5 million. It began working on bots in 2013, and is credited with developing the first bot for Slack to help schedule meetings. The companies did not reveal the price, but the deal fits nicely with Microsoft’s overall acquisition strategy this year, as well as with a separate announcement today of a new bot-building tool to help companies build conversational bots more easily.

When you call into a call center these days, or even interact via chat, chances are your initial interaction is with a conversational bot rather than a human. Microsoft is trying to make it easier for developers without AI experience to tap into its expertise on the Azure platform (or by downloading the Bot Framework from GitHub, which Microsoft also acquired this year).

“With this acquisition, we are continuing to realize our approach of democratizing AI development, conversation and dialog, and integrating conversational experiences where people communicate,” Cheng wrote.

The new Virtual Assistant Accelerator solution announced today also aligns with the Xoxco purchase. Eric Boyd, corporate VP for AI at Microsoft, says the Virtual Assistant Accelerator pulls together some AI tools such as speech-to-text, natural language processing and an action engine into a single place to simplify bot creation.

“It’s a tool that makes it much easier for you to go and create a virtual assistant. It orchestrates a number of components that we offer, but we didn’t make them easy to use [together]. And so it’s really simplifying the creation of a virtual assistant,” he explained.
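As a generic picture of what such an orchestration layer stitches together, here is a trivial sketch of one conversational turn; the components below are throwaway stand-ins, not Microsoft's Virtual Assistant Accelerator or Bot Framework APIs.

```python
# Generic sketch of orchestrating the pieces a virtual assistant needs:
# speech-to-text, language understanding and an action engine, composed into
# one pipeline. The components are trivial stand-ins, not Microsoft's APIs.
def speech_to_text(audio):
    # Stand-in transcriber: a real system would call a speech service here.
    return "book a meeting with the support team tomorrow at 3pm"

def understand(text):
    # Stand-in NLU: map the utterance to an intent and entities.
    if "book a meeting" in text:
        return {"intent": "schedule_meeting", "entities": {"when": "tomorrow 3pm"}}
    return {"intent": "unknown", "entities": {}}

def act(parsed):
    # Stand-in action engine: turn the intent into a response or side effect.
    if parsed["intent"] == "schedule_meeting":
        return f"Meeting scheduled for {parsed['entities']['when']}."
    return "Sorry, I didn't understand that."

def handle_turn(audio):
    """One conversational turn: transcribe, interpret, act."""
    return act(understand(speech_to_text(audio)))

print(handle_turn(b"...raw audio bytes..."))
```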

Today’s acquisition comes on the heels of a number of AI-related acquisitions. The company bought Semantic Machines in May to give users a more life-like conversation with bots. It snagged Bonsai in June to help simplify AI development. And it grabbed Lobe in September, another tool for making it easier for developers to incorporate AI in their applications.
