Mar 19, 2019
--

AI has become table stakes in sales, customer service and marketing software

Artificial intelligence and machine learning have become essential for anyone selling sales, customer service and marketing software, especially to large enterprises. The biggest vendors, from Adobe to Salesforce to Microsoft to Oracle, are jockeying for position to bring automation and intelligence to these areas.

Just today, Oracle announced several new AI features in its sales tools suite and Salesforce did the same in its customer service cloud. Both companies are building on artificial intelligence underpinnings that have been in place for several years.

All of these companies want to help their customers achieve their business goals by using increasing levels of automation and intelligence. Paul Greenberg, managing principal at The 56 Group, who has written multiple books about the CRM industry, including CRM at the Speed of Light, says that while AI has been around for many years, it’s just now reaching a level of maturity to be of value for more businesses.

“The investments in the constant improvement of AI by companies like Oracle, Microsoft and Salesforce are substantial enough to both indicate that AI has become part of what they have to offer — not an optional [feature] — and that the demand is high for AI from companies that are large and complex to help them deal with varying needs at scale, as well as smaller companies who are using it to solve customer service issues or minimize service query responses with chatbots,” Greenberg explained.

This would suggest that injecting intelligence into applications can help level the playing field for companies of all sizes, allowing smaller ones to behave as though they were much larger, and larger ones to do more than they could before, all thanks to AI.

The machine learning side of the equation allows these algorithms to see patterns that would be hard for humans to pick out of the mountains of data being generated by companies of all sizes today. In fact, Greenberg says that AI has improved enough in recent years that it has gone from predictive to prescriptive, meaning it can suggest the prospect to call that is most likely to result in a sale, or the best combination of offers to construct a successful marketing campaign.
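
To make “prescriptive” concrete, here is a minimal sketch of the kind of lead-scoring model that implies: train on the outcomes of past deals, then rank open prospects by their predicted probability of closing. The features, names and numbers are invented for illustration and are not any vendor’s actual model:

```python
# Illustrative lead scoring with scikit-learn; not any vendor's real model.
from sklearn.linear_model import LogisticRegression

# Historical deals: [email_opens, meetings_held, days_in_pipeline];
# label 1 means the deal closed-won.
X_history = [[12, 3, 30], [2, 0, 90], [8, 2, 45], [1, 1, 120], [15, 4, 20]]
y_history = [1, 0, 1, 0, 1]

model = LogisticRegression().fit(X_history, y_history)

# Score open prospects and suggest calling the most promising one first.
prospects = {"Acme Corp": [10, 2, 25], "Globex": [3, 1, 80]}
ranked = sorted(prospects.items(),
                key=lambda kv: model.predict_proba([kv[1]])[0][1],
                reverse=True)
for name, features in ranked:
    prob = model.predict_proba([features])[0][1]
    print(f"{name}: {prob:.0%} chance of closing")
```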

Brent Leary, principal at CRM Essentials, says that AI, especially when voice is involved, can make software tools easier to use and increase engagement. “If sales professionals are able to use natural language to interact with CRM, as opposed to typing and clicking, that’s a huge barrier to adoption that begins to crumble. And making it easier and more efficient to use these apps should mean more data enters the system, which results in quicker, more relevant AI-driven insights,” he said.

All of this shows that AI has become an essential part of these software tools, which is why all of the major players in this space have built it into their platforms. In an interview last year at the Adobe Summit, Adobe CTO Abhay Parasnis told TechCrunch: “AI will be the single most transformational force in technology.” He appears to be right. It has certainly been transformative in sales, customer service and marketing.

Mar 19, 2019
--

Salesforce update brings AI and Quip to customer service chat experience

When Salesforce introduced Einstein, its artificial intelligence platform, in 2016, it was laying the groundwork for AI underpinnings across its product line. Since then, the company has introduced a variety of AI enhancements to the Salesforce product family. Today, customer service got some AI updates.

The goal of any customer service interaction is to get the customer answers as quickly as possible. Many users opt to use chat over phone, and Salesforce has added some AI features to help customer service agents get answers more quickly in the chat interface. (The company hinted that phone customer service enhancements are coming.)

For starters, Salesforce is using machine learning to deliver article recommendations, response recommendations and next best actions to the agent in real time as they interact with customers. “With Einstein article recommendations, we can use machine learning on past cases and we can look at how articles were used to successfully solve similar cases in the past, and serve up the best article right in the console to help the agent with the case,” Martha Walchuk, senior director of product marketing for Salesforce Service Cloud, explained.

Salesforce Service Console. Screenshot: Salesforce
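
Salesforce hasn’t published how the Einstein recommendation model works, but the general technique, matching a new case against articles that resolved similar past cases, can be sketched with generic text similarity. Purely illustrative, using TF-IDF rather than whatever Einstein actually does:

```python
# Illustrative article recommendation via text similarity.
# Generic TF-IDF approach; not Salesforce's actual Einstein model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "How to reset your account password",
    "Troubleshooting failed payment transactions",
    "Exporting your data to CSV",
]
new_case = "customer cannot log in after forgetting password"

# Vectorize the articles and the incoming case together.
matrix = TfidfVectorizer().fit_transform(articles + [new_case])
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]

best = scores.argmax()
print(f"Recommended article: {articles[best]} (score {scores[best]:.2f})")
```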

The company is also using similar technology to provide response recommendations, which the agent can copy and paste into the chat to speed up response times. Before the interaction ends, the agent can offer the next best action (a capability announced last year) based on the conversation. For example, they could offer related information, an upsell recommendation or whatever type of action the customer defines.

Salesforce is also using machine learning to help route each person to the most appropriate customer service rep. As Salesforce describes it, this feature uses machine learning to filter cases and route them to the right queue or agent automatically, based on defined criteria such as best qualified agent or past outcomes.

Finally, Salesforce is embedding Quip, the company it acquired in 2016 for $750 million, into the customer service console to allow agents to communicate with one another to find answers to difficult problems. Not only does that help solve issues faster, but the conversations themselves become part of the knowledge base, which Salesforce can draw upon to teach the machine learning algorithms the correct responses to commonly asked questions in the future.

As with the Oracle AI announcement this morning, this use of artificial intelligence in sales, service and marketing is part of a much broader industry trend, as these companies try to inject intelligence into workflows to make them run more efficiently.

Mar 19, 2019
--

Oracle adds more AI features to its suite of sales tools

As the biggest sales and marketing technology firms mature, they are all turning to AI and machine learning to advance the field. This morning it was Oracle’s turn, announcing several AI-fueled features for its suite of sales tools.

Rob Tarkoff, who had previous stints at EMC, Adobe and Lithium and is now EVP of Oracle CX Cloud, says that the company has found ways to increase efficiency in the sales and marketing process by using artificial intelligence to speed up previously manual workflows, while taking advantage of all the data that is part of modern sales and marketing.

For starters, the company wants to help managers and salespeople understand the market better to identify the best prospects in the pipeline. To that end, Oracle is announcing integration with DataFox, the company it purchased last fall. The acquisition gave Oracle the ability to integrate highly detailed company profiles into its Customer Experience Cloud, including information such as SEC filings, job postings, news stories and other data about each company.

DataFox company profile. Screenshot: Oracle

“One of the things that DataFox helps you do better is machine learning-driven sales planning, so you can take sales and account data and optimize territory assignments,” he explained.

The company also announced an AI sales planning tool. Tarkoff says that Oracle created this tool in conjunction with its ERP team. The goal is to use machine learning to help finance make more accurate performance predictions based on internal data.

“It’s really a competitor to companies like Anaplan, where we are now in the business of helping sales leaders optimize planning and forecasting, using predictive models to identify better future trends,” Tarkoff said.

Sales forecasting tool. Screenshot: Oracle

The final tool is really about increasing sales productivity by giving salespeople a virtual assistant. In this case, it’s a chatbot that can help handle tasks like scheduling meetings and offering task reminders to busy salespeople, while allowing them to use their voices to enter information about calls and tasks. “We’ve invested a lot in chatbot technology, and a lot in algorithms to help our bots with specific dialogues that have sales- and marketing-industry specific schema and a lot of things that help optimize the automation in a rep’s experience working with sales planning tools,” Tarkoff said.

Brent Leary, principal at CRM Essentials, says that this kind of voice-driven assistant could make it easier to use CRM tools. “The Smarter Sales Assistant has the potential to not only improve the usability of the application, but by letting users interact with the system with their voice it should increase system usage,” he said.

All of these enhancements are designed to increase the level of automation and help sales teams run more efficiently, with the ultimate goal of using data to drive more sales and to make better use of sales personnel. Oracle is hardly alone in this goal, as competitors like Salesforce, Adobe and Microsoft are bringing a similar level of automation to their own sales and marketing tools.

The sales forecasting tool and the sales assistant are generally available starting today. The DataFox integration will become generally available in June.

Mar 13, 2019
--

Determined AI nabs $11M Series A to democratize AI development

Deep learning involves a highly iterative process where data scientists build models and test them on GPU-powered systems until they get something they can work with. It can be expensive and time-consuming, often taking weeks to fashion the right model. New startup Determined AI wants to change that by making the process faster, cheaper and more efficient. It emerged from stealth today with $11 million in Series A funding.
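
To see why that loop gets expensive, consider the iteration in miniature: a naive random hyperparameter search in which every trial stands in for a full GPU training run that might take hours. This is purely illustrative and is not Determined AI’s software:

```python
# The "highly iterative" loop in miniature: try many hyperparameter
# settings, keep the best. Platforms like Determined aim to schedule and
# parallelize exactly this kind of search across shared GPUs.
import random

def train_and_evaluate(learning_rate: float, batch_size: int) -> float:
    """Stand-in for a real training run; returns a validation score."""
    # A real implementation would train a model here, often for hours.
    return random.random()

best_score, best_config = 0.0, None
for trial in range(20):  # each trial is one expensive GPU run
    config = {"learning_rate": 10 ** random.uniform(-5, -1),
              "batch_size": random.choice([32, 64, 128, 256])}
    score = train_and_evaluate(**config)
    if score > best_score:
        best_score, best_config = score, config

print(f"Best config after 20 trials: {best_config} (score {best_score:.3f})")
```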

The round was led by GV (formerly Google Ventures) with help from Amplify Partners, Haystack and SV Angel. The company also announced an earlier $2.6 million seed round from 2017, for a total $13.6 million raised to date.

Evan Sparks, co-founder and CEO at Determined AI, says that up until now, only the largest companies like Facebook, Google, Apple and Microsoft could set up the infrastructure and systems to produce sophisticated AI like self-driving cars and voice recognition technologies. “Our view is that a big reason why [these big companies] can do that is that they all have internal software infrastructure that enables their teams of machine learning engineers and data scientists to be effective and produce applications quickly,” Sparks told TechCrunch.

Determined’s idea is to create software to handle everything from managing cluster compute resources to automating workflows, thereby putting some of that big-company technology within reach of any organization. “What we exist to do is to build that software for everyone else,” he said. The target market is Fortune 500 and Global 2000 companies.

The company’s solution is based on research conducted over the last several years at the AMPLab at the University of California, Berkeley (which is probably best known for developing Apache Spark). It used the knowledge generated in the lab to build sophisticated solutions that help make better use of a customer’s GPU resources.

“We are offering kind of a base layer that is scheduling and resource sharing for these highly expensive resources, and then on top of that we’ve layered some services around workflow automation,” Sparks said. He said the team has generated state-of-the-art results that are somewhere between five and 50 times faster than the results from tools available to most companies today.

For now, the startup is trying to help customers move away from generic kinds of solutions currently available to more customized approaches, using Determined AI tools to help speed up the AI production process. The money from today’s round should help fuel growth, add engineers and continue building the solution.

Mar 6, 2019
--

Clari platform aims to unify go-to-market operations data

Clari started as a company that wanted to give sales teams more information about their sales process than could be found in the CRM database. Today, the company announced a much broader platform, one that can provide insight across sales, marketing and customer service to give a more unified view of a company’s go-to-market operations, all enhanced by AI.

Company co-founder and CEO Andy Byrne says this involves pulling together a variety of data and giving each department the insight to improve their mission. “We are analyzing large volumes of data found in various revenue systems — sales, marketing, customer success, etc. — and we’re using that data to provide a new platform that’s connecting up all of the different revenue departments,” Byrne told TechCrunch.

For sales, that would mean driving more revenue. For marketing, it would involve more targeted plans to drive more sales. And for customer success, it would be about increasing customer retention and reducing churn.

Screenshot: Clari

The company’s original idea when it launched in 2012 was to look at a range of data that touched the sales process, such as email, calendars and the CRM database, to build a broader view of sales than you could get from the basic customer data stored in the CRM alone. The Clari data could tell reps things like which deals would be most likely to close and which ones were at risk.

“We were taking all of these signals that had been historically disconnected from each other and we were connecting it all into a new interface for sales teams that’s very different than a CRM,” Byrne said.

Over time, that involved using AI and machine learning to make connections in the data that humans might not have been seeing. The company also found that customers were using the product to look at processes adjacent to sales, and they decided to formalize that and build connectors to relevant parts of the go-to-market system like marketing automation tools from Marketo or Eloqua and customer tools such as Dialpad, Gong.io and Salesloft.

With Clari’s approach, companies can get a unified view without manually pulling all this data together. The goal is to provide customers with a broad view of the go-to-market operation that isn’t possible looking at siloed systems.

The company has experienced tremendous growth over the last year, leaping from 80 customers to 250. These include Okta and Alteryx, two companies that went public in recent years. Clari is based in the Bay Area and has around 120 employees. It has raised more than $60 million. The most recent round was a $35 million Series C last May led by Tenaya Capital.

Feb 7, 2019
--

Microsoft Azure sets its sights on more analytics workloads

Enterprises now amass huge amounts of data, both from their own tools and applications, as well as from the SaaS applications they use. For a long time, that data was basically exhaust. Maybe it was stored for a while to fulfill some legal requirements, but then it was discarded. Now, data is what drives machine learning models, and the more data you have, the better. It’s maybe no surprise, then, that the big cloud vendors started investing in data warehouses and lakes early on. But that’s just a first step. After that, you also need the analytics tools to make all of this data useful.

Today, it’s Microsoft’s turn to shine the spotlight on its data analytics services. The actual news here is pretty straightforward. Two services are moving into general availability: the second generation of Azure Data Lake Storage, for big data analytics workloads, and Azure Data Explorer, a managed service that makes ad-hoc analysis of massive data volumes easier. Microsoft is also previewing a new feature in Azure Data Factory, its graphical no-code service for building data transformations: Data Factory can now map data flows.
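
For a flavor of the kind of ad-hoc analysis Azure Data Explorer is built for, here is a minimal sketch in Python. It assumes the azure-kusto-data client library and Microsoft’s public demo cluster and sample database; treat the details as illustrative rather than a tested recipe.

```python
# Hedged sketch: querying Azure Data Explorer with the azure-kusto-data
# library. Cluster, database and table are Microsoft's public samples;
# the auth flow shown is one of several the library supports.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://help.kusto.windows.net"  # public demo cluster
kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(cluster)
client = KustoClient(kcsb)

# Ad-hoc KQL query: which US states logged the most storm events?
query = "StormEvents | summarize events = count() by State | top 5 by events"
response = client.execute("Samples", query)

for row in response.primary_results[0]:
    print(row["State"], row["events"])
```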

Those individual news pieces are interesting if you are a user or are considering Azure for your big data workloads, but what’s maybe more important here is that Microsoft is trying to offer a comprehensive set of tools for managing and storing this data — and then using it for building analytics and AI services.


“AI is a top priority for every company around the globe,” Julia White, Microsoft’s corporate VP for Azure, told me. “And as we are working with our customers on AI, it becomes clear that their analytics often aren’t good enough for building an AI platform.” These companies are generating plenty of data, which then has to be pulled into analytics systems. She stressed that she couldn’t remember a customer conversation in recent months that didn’t focus on AI. “There is urgency to get to the AI dream,” White said, but the growth and variety of data presents a major challenge for many enterprises. “They thought this was a technology that was separate from their core systems. Now it’s expected for both customer-facing and line-of-business applications.”

Data Lake Storage helps with managing this variety of data since it can handle both structured and unstructured data (and is optimized for the Spark and Hadoop analytics engines). The service can ingest any kind of data — yet Microsoft still promises that it will be very fast. “The world of analytics tended to be defined by having to decide upfront and then building rigid structures around it to get the performance you wanted,” explained White. Data Lake Storage, on the other hand, wants to offer the best of both worlds.

Likewise, White argued that while many enterprises used to keep these analytics services on their own on-premises servers, often on appliance-based systems, she believes the cloud has now reached the point where the price/performance calculations are in its favor. It took a while to get to this point, though, and to convince enterprises. White noted that for the longest time, enterprises saw analytics as $300 million projects that took forever, tied up lots of people and were, frankly, a bit scary. “But also, what we had to offer in the cloud hasn’t been amazing until some of the recent work,” she said. “We’ve been on a journey — as well as the other cloud vendors — and the price performance is now compelling.” And it sure helps that if enterprises want to meet their AI goals, they’ll now have to tackle these workloads, too.

Feb 6, 2019
--

Big companies are not becoming data-driven fast enough

I remember watching MIT professor Andrew McAfee years ago telling stories about the importance of data over gut feeling, whether it was predicting successful wines or making sound business decisions. We have been hearing about big data and data-driven decision making for so long, you would think it has become hardened into our largest organizations by now. As it turns out, new research by NewVantage Partners finds that most large companies are having problems implementing an organization-wide, data-driven strategy.

McAfee was fond of saying that before the data deluge we have today, the way most large organizations made decisions was via the HiPPO — the highest paid person’s opinion. Then he would chide the audience that this was not the proper way to run your business. Data, not gut feelings, even those based on experience, should drive important organizational decisions.

Companies have not failed to recognize the wisdom of McAfee’s advice, but the NVP report suggests they are having trouble implementing data-driven decision making across their organizations. There are plenty of technological solutions out there today to help them, from startups all the way to the largest enterprise vendors, but the data (see, you always need to go back to the data) suggests that it’s not a technology problem, it’s a people problem.

Executives can have the farsighted vision that their organizations need to be data-driven. They can acquire all of the latest solutions to bring data to the forefront, but unless they combine that with a broad cultural shift and a deep understanding of how to use that data inside business processes, they will continue to struggle.

The study’s authors, Randy Bean and Thomas H. Davenport, wrote about the people problem in their study’s executive summary. “We hear little about initiatives devoted to changing human attitudes and behaviors around data. Unless the focus shifts to these types of activities, we are likely to see the same problem areas in the future that we’ve observed year after year in this survey.”

The survey found that 72 percent of respondents have failed in this regard, reporting they haven’t been able to create a data-driven culture, whatever that means to individual respondents. Meanwhile, 69 percent reported they had failed to create a data-driven organization, although it would seem that these two metrics would be closely aligned.

Perhaps most discouraging of all is that the data is trending the wrong way. The report’s authors say that the share of organizations calling themselves data-driven has actually dropped each year, from 37.1 percent in 2017 to 32.4 percent in 2018 to 31.0 percent in the latest survey.

This matters on so many levels, but consider that as companies shift to artificial intelligence and machine learning, these technologies rely on abundant amounts of data to work effectively. What’s more, every organization, regardless of its size, is generating vast amounts of data, simply as part of being a digital business in the 21st century. They need to find a way to control this data to make better decisions and understand their customers better. It’s essential.

There is so much talk about innovation and disruption, and understanding and affecting company culture, but so much of all this is linked. You need to be more agile. You need to be more digital. You need to be transformational. You need to be all of these things — and data is at the center of all of it.

Data has been called the new oil often enough to be cliché, but these results reveal that the lesson is failing to get through. Companies need to be data-driven now, this instant. This isn’t something to be working toward at this point. This is something you need to be doing, unless your ultimate goal is to become irrelevant.

Feb 5, 2019
--

Databricks raises $250M at a $2.75B valuation for its analytics platform

Databricks, the company founded by the original team behind the Apache Spark big data analytics engine, today announced that it has raised a $250 million Series E round led by Andreessen Horowitz. Coatue Management, Green Bay Ventures, Microsoft and NEA also participated in this round, which brings the company’s total funding to $498.5 million. Microsoft’s involvement here is probably a bit of a surprise, but it’s worth noting that the two companies also worked together on the launch of Azure Databricks as a first-party service on the platform, something that’s still a rarity in the Azure cloud.

As Databricks also today announced, its annual recurring revenue now exceeds $100 million. The company didn’t share whether it’s cash flow-positive at this point, but Databricks CEO and co-founder Ali Ghodsi shared that the company’s valuation is now $2.75 billion.

Current customers, which the company says number around 2,000, include the likes of Nielsen, Hotels.com, Overstock, Bechtel, Shell and HP.

“What Ali and the Databricks team have built is truly phenomenal,” Green Bay Ventures co-founder Anthony Schiller told me. “Their success is a testament to product innovation at the highest level. Databricks is without question best-in-class and their impact on the industry proves it. We were thrilled to participate in this round.”

While Databricks is obviously known for its contributions to Apache Spark, the company itself monetizes that work by offering its Unified Analytics platform on top of it. This platform allows enterprises to build their data pipelines across data storage systems and prepare data sets for data scientists and engineers. To do this, Databricks offers shared notebooks and tools for building, managing and monitoring data pipelines, and then uses that data to build machine learning models, for example. Indeed, training and deploying these models is one of the company’s focus areas these days, which makes sense, given that this is one of the main use cases for big data.
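
As a rough illustration of the kind of pipeline such a platform manages, from raw data to a trained and saved model, here is a generic PySpark sketch. The paths and column names are invented, and it uses plain Spark APIs rather than anything Databricks-specific:

```python
# Illustrative PySpark pipeline: load raw data, assemble features,
# train a model, persist it. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-pipeline").getOrCreate()

# Load and prepare a dataset (location and schema are invented).
df = spark.read.csv("s3a://example-bucket/customers.csv",
                    header=True, inferSchema=True)
assembler = VectorAssembler(
    inputCols=["tenure_months", "monthly_spend", "support_tickets"],
    outputCol="features",
)
prepared = assembler.transform(df)

# Train on historical churn labels and save the resulting model.
model = LogisticRegression(featuresCol="features",
                           labelCol="churned").fit(prepared)
model.save("s3a://example-bucket/models/churn-lr")
```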

On top of that, Databricks also offers a fully managed service for hosting all of these tools.

“Databricks is the clear winner in the big data platform race,” said Ben Horowitz, co-founder and general partner at Andreessen Horowitz, in today’s announcement. “In addition, they have created a new category atop their world-beating Apache Spark platform called Unified Analytics that is growing even faster. As a result, we are thrilled to invest in this round.”

Ghodsi told me that Horowitz was also instrumental in getting the company to refocus on growth. The company was already growing fast, of course, but Horowitz asked him why Databricks wasn’t growing faster. Unsurprisingly, given that it’s an enterprise company, that means aggressively hiring a larger sales force — and that’s costly. Hence the company’s need to raise at this point.

As Ghodsi told me, one of the areas the company wants to focus on is the Asia Pacific region, where overall cloud usage is growing fast. The other area the company is focusing on is support for more verticals like mass media and entertainment, federal agencies and fintech firms, which also comes with its own cost, given that the experts there don’t come cheap.

Ghodsi likes to call this “boring AI,” since it’s not as exciting as self-driving cars. In his view, though, the enterprise companies that don’t start using machine learning now will inevitably be left behind in the long run. “If you don’t get there, there’ll be no place for you in the next 20 years,” he said.

Engineering, of course, will also get a chunk of this new funding, with an emphasis on relatively new products like MLflow and Delta, two tools Databricks recently developed that make it easier to manage the life cycle of machine learning models and build the necessary data pipelines to feed them.

Jan 17, 2019
--

Former Facebook engineer picks up $15M for AI platform Spell

In 2016, Serkan Piantino packed up his desk at Facebook with hopes of moving on to something new. The former director of engineering for Facebook AI Research had every intention of continuing to work on AI, but quickly ran into a huge issue.

Unless you’re under the umbrella of a big tech company like Facebook, it can be very difficult and incredibly expensive to get your hands on the hardware necessary to run machine learning experiments.

So he built Spell, which today received $15 million in Series A funding led by Eclipse Ventures and Two Sigma Ventures.

Spell is a collaborative platform that lets anyone run machine learning experiments. The company connects clients with the best, newest hardware hosted by Google, AWS and Microsoft Azure and gives them the software interface they need to run, collaborate and build with AI.

“We spent decades getting to a laptop powerful enough to develop a mobile app or a website, but we’re struggling with things we develop in AI that we haven’t struggled with since the ’70s,” said Piantino. “Before PCs existed, the computers filled the whole room at a university or NASA and people used terminals to log into a single mainframe. It’s why Unix was invented, and that’s kind of what AI needs right now.”

In a meeting with Piantino this week, TechCrunch got a peek at the product. First, Piantino pulled out his MacBook and opened up Terminal. He began to run his own code against MNIST, a database of handwritten digits commonly used to train image recognition algorithms.

He started the program and then moved over to the Spell platform. While the original program was just getting started, Spell’s cloud computing platform had completed the test in less than a minute.
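
TechCrunch didn’t publish the code from the demo, but a minimal MNIST training script of the sort described, one you could run locally or hand off to a remote GPU platform, might look like this (tf.keras, purely illustrative):

```python
# Minimal MNIST classifier, illustrative of the demo described above.
# Not the actual code Piantino ran.
import tensorflow as tf

# Load and normalize the handwritten-digit images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))
```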

The advantage here is obvious. Engineers who want to work on AI, either on their own or for a company, have a huge task in front of them. They essentially have to build their own computer, complete with the high-powered GPUs necessary to run their tests.

With Spell, the newest GPUs from Nvidia and Google are available virtually for anyone to run their tests on.

Individual users can get on for free, specify the type of GPU they need to compute their experiment and simply let it run. Corporate users, on the other hand, are able to view the runs taking place on Spell and compare experiments, allowing users to collaborate on their projects from within the platform.

Enterprise clients can set up their own cluster, and keep all of their programs private on the Spell platform, rather than running tests on the public cluster.

Spell also offers enterprise customers a “spell hyper” command that offers built-in support for hyperparameter optimization. Folks can track their models and results and deploy them to Kubernetes/Kubeflow in a single click.

But perhaps most importantly, Spell allows an organization to instantly transform their model into an API that can be used more broadly throughout the organization, or used directly within an app or website.

The implications here are huge. Small companies and startups looking to get into AI now have a much lower barrier to entry, whereas large traditional companies can build out their own proprietary machine learning algorithms for use within the organization without an outrageous upfront investment.

Individual users can get on the platform for free, whereas enterprise clients can get started at $99 per month for each concurrent host used over the course of a month. Piantino explains that Spell charges based on concurrent usage, so if a customer has 10 concurrent runs going, the company considers that the “size” of the Spell cluster and charges based on that.
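
By that logic, a team that keeps 10 hosts busy at once for a full month would be billed as a 10-host cluster, or roughly 10 x $99 = $990 for the month; that’s back-of-the-envelope arithmetic that assumes the pricing scales linearly.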

Piantino sees Spell’s model as the key to defensibility. Whereas many cloud platforms try to lock customers in to their entire suite of products, Spell works with any language framework and lets users plug and play on the platforms of their choice by simply commodifying the hardware. In fact, Spell doesn’t even share with clients which cloud cluster (Microsoft Azure, Google or AWS) they’re on.

So, on the one hand, the speed of the tests themselves goes up thanks to access to newer hardware; on the other, because Spell is an agnostic platform, there is also a huge advantage in how quickly one can get set up and start working.

The company plans to use the funding to further grow the team and the product, and Piantino says he has his eye out for top-tier engineering talent, as well as a designer.

Jan 16, 2019
--

Nvidia’s T4 GPUs are now available in beta on Google Cloud

Google Cloud today announced that Nvidia’s Turing-based Tesla T4 data center GPUs are now available in beta in its data centers in Brazil, India, the Netherlands, Singapore, Tokyo and the United States. Google first announced a private test of these cards in November, but that was a very limited alpha test. All developers can now take these new T4 GPUs for a spin through Google’s Compute Engine service.

The T4, which essentially uses the same processor architecture as Nvidia’s RTX cards for consumers, slots in between the existing Nvidia V100 and P4 GPUs on the Google Cloud Platform. While the V100 is optimized for machine learning, though, the T4 (like its P4 predecessor) is more of a general-purpose GPU that also turns out to be great for training models and inferencing.

In terms of machine and deep learning performance, the 16GB T4 is significantly slower than the V100, though if you are mostly running inference on the cards, you may actually see a speed boost. Unsurprisingly, using the T4 is also cheaper than the V100, starting at $0.95 per hour compared to $2.48 per hour for the V100, with further discounts for preemptible VMs and Google’s usual sustained-use discounts.
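
To make the tradeoff concrete: a hypothetical 100-hour training job would cost about $95 in T4 time versus $248 in V100 time at those on-demand rates, before any discounts. Whether the T4’s slower training throughput eats up that saving depends entirely on the workload.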

Google says that the card’s 16GB of memory should easily handle large machine learning models, while also allowing multiple smaller models to run at the same time. The standard PCI Express 3.0 card also comes with support for Nvidia’s Tensor Cores to accelerate deep learning and Nvidia’s new RTX ray-tracing cores. Performance tops out at 260 TOPS, and developers can connect up to four T4 GPUs to a virtual machine.

It’s worth stressing that this is also the first GPU in the Google Cloud lineup that supports Nvidia’s ray-tracing technology. There isn’t a lot of software on the market yet that actually makes use of this technique, which allows you to render more lifelike images in real time, but if you need a virtual workstation with a powerful next-generation graphics card, that’s now an option.

With today’s beta launch of the T4, Google Cloud now offers quite a variety of Nvidia GPUs, including the K80, P4, P100 and V100, all at different price points and with different performance characteristics.
