Oct 10, 2018

Egnyte hauls in $75M investment led by Goldman Sachs

Egnyte launched in 2007, just two years after Box, but unlike its enterprise counterpart, which went all-cloud and raised hundreds of millions of dollars, Egnyte took a different path: a slow-and-steady growth strategy and a hybrid niche, recognizing that companies were going to keep some content in the cloud and some on prem. Until today it had raised a rather modest $62.5 million, and hadn’t taken a dime since 2013, but that all changed when the company announced a whopping $75 million investment.

The entire round came from a single investor, Goldman Sachs’ Private Capital Investing arm, part of Goldman’s Special Situations Group. Holger Staude, vice president of Goldman Sachs Private Capital Investing, will join Egnyte’s board under the terms of the deal. He says Goldman liked what it saw: a steady company poised for bigger growth with the right influx of capital. In fact, the company has had more than eight straight quarters of growth and has been cash flow positive since Q4 2016.

“We were impressed by the strong management team and the company’s fiscal discipline, having grown their top line rapidly without requiring significant outside capital for the past several years. They have created a strong business model that we believe can be replicated with success at a much larger scale,” Staude explained.

Company CEO Vineet Jain helped start Egnyte as a way to store and share files in a business context, but over the years he has built that into a platform that includes security and governance components. Jain also saw a market poised for growth as companies move increasing amounts of data to the cloud, and he felt the time was right to take on more significant outside investment. His first step, he said, was to build a list of investors, but Goldman stood out.

“Goldman had reached out to us before we even started the fundraising process. There was inbound interest. They were more aggressive compared to others. Given there was prior conversations, the path to closing was shorter,” he said.

He wouldn’t discuss a specific valuation, but did say they have grown 6x since the 2013 round and he got what he described as “a decent valuation.” As for an IPO, he predicted this would be the final round before the company eventually goes public. “This is our last fund raise. At this level of funding, we have more than enough funding to support a growth trajectory to IPO,” he said.

Philosophically, Jain has always believed that it wasn’t necessary to hit the gas until he felt the market was really there. “I started off from a point of view to say, keep building a phenomenal product. Keep focusing on a post sales experience, which is phenomenal to the end user. Everything else will happen. So this is where we are,” he said.

Jain indicated the round isn’t about taking on money for money’s sake. He believes it will fuel a huge growth stage for the company. He doesn’t plan to focus the new resources strictly on sales and marketing, as you might expect; he wants to scale every department in the company, including engineering, post-sales and customer success.

Today the company has 450 employees and more than 14,000 customers across a range of sizes and sectors, including Nasdaq, Thoma Bravo, AppDynamics and Red Bull. The deal closed at the end of last month.

Oct 9, 2018

Cloud Foundry expands its support for Kubernetes

Not too long ago, the Cloud Foundry Foundation was all about Cloud Foundry, the open source platform-as-a-service (PaaS) project that’s now in use at most Fortune 500 enterprises. That project is the Cloud Foundry Application Runtime. A year ago, the Foundation also announced the Cloud Foundry Container Runtime, which helps businesses run the Application Runtime and their container-based applications in parallel. In addition, Cloud Foundry has long been the force behind BOSH, a tool for building, deploying and managing cloud applications.

The addition of the Container Runtime a year ago seemed to muddle the organization’s mission a bit, but now that the dust has settled, the intent is starting to become clearer. As Cloud Foundry CTO Chip Childers told me, enterprises mostly use the Container Runtime to run the pre-packaged applications they get from their vendors. “The Container Runtime — or really any deployment of Kubernetes — when used next to or in conjunction with the App Runtime, that’s where people are largely landing packaged software being delivered by an independent software vendor,” he told me. “Containers are the new CD-ROM. You just want to land it in a good orchestration platform.”

Because the Application Runtime launched well before Kubernetes was a thing, the Cloud Foundry project built its own container service, called Diego.

Today, the Cloud Foundry Foundation is launching two new Kubernetes-related projects that take the integration between the two to a new level. The first is Project Eirini, which was started by IBM and is now being worked on by Suse and SAP as well. The project has been a long time in the making and is something the community has expected for a while. It allows developers to choose between the existing Diego orchestrator and Kubernetes when deploying applications written for the Application Runtime. That’s a big deal for Cloud Foundry.

“What Eirini does, is it takes that Cloud Foundry Application Runtime — that core PaaS experience that the [Cloud Foundry] brand is so tied to and it allows the underlying Diego scheduler to be replaced with Kubernetes as an option for those use cases that it can cover,” Childers explained. He added that there are still some use cases the Diego container management system is better suited for than Kubernetes. One of those is better Windows support — something that matters quite a bit to the enterprise companies that use Cloud Foundry. Childers also noted that the multi-tenancy guarantees of Kubernetes are a bit less stringent than Diego’s.

The second new project is CF Containerization, which was initially developed by Suse. As the name implies, CF Containerization packages the core Cloud Foundry Application Runtime and deploys it in Kubernetes clusters with the help of the BOSH deployment tool. This is pretty much what Suse already uses to ship its own Cloud Foundry distribution.

Clearly then, Kubernetes is becoming part and parcel of what the Cloud Foundry PaaS service will sit on top of and what developers will use to deploy the applications they write for it in the near future. At first glance, this focus on Kubernetes may look like it’s going to make Cloud Foundry superfluous, but it’s worth remembering that, at its core, the Cloud Foundry Application Runtime isn’t about infrastructure but about a developer experience and methodology that aims to manage the whole application development lifecycle. If Kubernetes can be used to help manage that infrastructure, then the Cloud Foundry project can focus on what it does best, too.

Oct 9, 2018

Microsoft shows off government cloud services with JEDI due date imminent

Just a day after Google decided to drop out of the bidding for the Pentagon’s massive $10 billion, 10-year JEDI cloud contract, Microsoft announced increased support services for government clients. In a long blog post, the company laid out its government-focused cloud services.

While today’s announcement is not directly related to JEDI per se, the timing is interesting, coming just three days ahead of the October 12th deadline for submitting JEDI bids. The announcement is about showing just how comprehensive the company’s government-specific cloud services are.

In the post, Microsoft corporate vice president for Azure Julia White made it clear the company is focusing hard on the government business. “In the past six months we have added over 40 services and features to Azure Government, as well as publishing a new roadmap for the Azure Government regions providing ongoing transparency into our upcoming releases,” she wrote.

“Moving forward, we are simplifying our approach to regulatory compliance for federal agencies, so that our government customers can gain access to innovation more rapidly. In addition, we are adding new options for buying and onboarding cloud services to make it easier to move to the cloud. Finally, we are bringing an array of new hybrid and edge capabilities to government to ensure that government customers have full access to the technology of the intelligent edge and intelligent cloud era,” White added.

While much of the post focused on the general value proposition of Azure, such as security, identity, artificial intelligence and edge data processing, there was a slew of items aimed specifically at government clients.

For starters, the company is increasing its FedRAMP compliance; FedRAMP is the federal program that standardizes security requirements for cloud services used by U.S. government agencies. Specifically, Microsoft is moving 50 services from FedRAMP Moderate to High ratings.

“By taking the broadest regulatory compliance approach in the industry, we’re making commercial innovation more accessible and easier for government to adopt,” White wrote.

In addition, Microsoft announced it’s expanding Azure Secret Regions, a solution designed specifically for dealing with highly classified information in the cloud. This one appears to take direct aim at JEDI. “We are making major progress in delivering this cloud designed to meet the regulatory and compliance requirements of the Department of Defense and the Intelligence Community. Today, we are announcing these newest regions will be available by the end of the first quarter of 2019. In addition, to meet the growing demand and requirements of the U.S. Government, we are confirming our intent to deliver Azure Government services to meet the highest classification requirements, with capabilities for handling Top Secret U.S. classified data,” White wrote.

The company’s announcements, many of which had been made previously, are clearly designed to show off its government chops at a time when a major government contract is up for grabs. Azure Stack for Government, announced in August, is another piece mentioned in the blog post.

Oct 4, 2018

GitHub gets a new and improved Jira Software Cloud integration

Atlassian’s Jira has become a standard for managing large software projects at many companies. Many of those same companies also use GitHub as their source code repository and, unsurprisingly, there has long been an official way to integrate the two. That old way, however, was often slow, limited in its capabilities and unable to cope with the large code bases that many enterprises now manage on GitHub.

Almost as if to prove that GitHub remains committed to an open ecosystem, even after the Microsoft acquisition, the company today announced a new and improved integration between the two products.

“Working with Atlassian on the Jira integration was really important for us,” GitHub’s director of ecosystem engineering Kyle Daigle told me ahead of the announcement. “Because we want to make sure that our developer customers are getting the best experience of our open platform that they can have, regardless of what tools they use.”

So a couple of months ago, the team decided to build its own Jira integration from the ground up, and it’s committed to maintaining and improving it over time. As Daigle noted, the improvements here include better performance and a better user experience.

The new integration now also makes it easier to view all the pull requests, commits and branches from GitHub that are associated with a Jira issue, search for issues based on information from GitHub and see the status of the development work right in Jira, too. And because changes in GitHub trigger an update to Jira, too, that data should remain up to date at all times.

The old Jira integration, via the so-called Jira DVCS connector, will be deprecated, and GitHub will start prompting existing users to upgrade over the next few weeks. The new integration is built as a GitHub app, so it also comes with all of the security features the platform has to offer.

Oct 3, 2018

Palo Alto Networks to acquire RedLock for $173M to beef up cloud security

Palo Alto Networks launched in 2005, in the age of firewalls. As we all know by now, the enterprise expanded beyond the cozy confines of the firewall long ago, and vendors like Palo Alto have moved to securing data in the cloud, too. To that end, the company today announced its intent to pay $173 million for RedLock, an early-stage startup that helps companies make sure their cloud instances are locked down and secure.

The cloud vendors take responsibility for securing their own infrastructure, and for the most part the major vendors have done a decent job. What they can’t do is save their customers from themselves and that’s where a company like RedLock comes in.

As we’ve seen time and again, data has been exposed in cloud storage services like Amazon S3, not through any fault of Amazon itself, but because a faulty configuration has left the data exposed to the open internet. RedLock watches configurations like this and warns companies when something looks amiss.
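
RedLock’s platform is proprietary, but the class of check it automates is easy to illustrate. The sketch below, using the standard boto3 SDK, flags S3 buckets whose ACLs grant access to all users; it is an illustrative example of this kind of configuration monitoring, not RedLock’s actual logic.

```python
# Minimal sketch of the kind of misconfiguration check a tool like RedLock
# automates: flag S3 buckets whose ACLs grant access to everyone.
# Illustrative only -- not RedLock's implementation.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets():
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            grantee = grant.get("Grantee", {})
            if grantee.get("URI") in PUBLIC_GRANTEES:
                public.append((name, grant["Permission"]))
    return public

if __name__ == "__main__":
    for name, permission in find_public_buckets():
        print(f"WARNING: bucket {name} grants {permission} to everyone")
```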

When the company emerged from stealth just a year ago, founder and CEO Varun Badhwar told TechCrunch that this falls under Amazon’s shared responsibility model. “They have diagrams where they have responsibility to secure physical infrastructure, but ultimately it’s the customer’s responsibility to secure the content, applications and firewall settings,” Badhwar said last year.

Speaking in a video interview about the acquisition, Badhwar said RedLock has been focused on helping developers build cloud applications safely and securely, whether on Amazon Web Services, Microsoft Azure or Google Cloud Platform. “We think about [RedLock] as guardrails or as bumper lanes in a bowling alley and just not letting somebody get that gutter ball and from a security standpoint, just making sure we don’t deviate from the best practices,” he explained.

“We built a technology platform that’s entirely cloud-based and very quick time to value since customers can just turn it on through API’s, and we love to shine the light and show our customers how to safely move into public cloud,” he added.

The acquisition will also fit nicely with Evident.io, a cloud infrastructure security startup the company acquired in March for $300 million. Badhwar believes customers will benefit from Evident’s compliance capabilities being combined with RedLock’s analytics capabilities to provide a more complete cloud security solution.

RedLock launched in 2015 and has raised $12 million. The $173 million purchase would appear to be a great return for the investors who put their faith in the startup.

Sep 29, 2018

What each cloud company could bring to the Pentagon’s $10B JEDI cloud contract

The Pentagon is going to make one cloud vendor exceedingly happy when it chooses the winner of the $10 billion, ten-year enterprise cloud project dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short). The contract is designed to establish the cloud technology strategy for the military over the next 10 years as it begins to take advantage of current trends like Internet of Things, artificial intelligence and big data.

Ten billion dollars spread out over ten years may not entirely alter a market that’s expected to reach $100 billion a year very soon, but it is substantial enough to give a lesser vendor much greater visibility, and possibly a deeper entrée into other government and private sector business. The cloud companies certainly recognize that.


That could explain why they are tripping over themselves to change the contract dynamics, insisting, maybe rightly, that a multi-vendor approach would make more sense.

One look at the Request for Proposal (RFP), which comprises dozens of documents outlining criteria from security to training to the terms of the single award, shows the sheer complexity of this procurement. At the heart of it is a package of classified and unclassified infrastructure, platform and support services, with other components around portability. Each of the main cloud vendors we’ll explore here offers these services. They are not unusual in themselves, but each vendor brings a different set of skills and experiences to bear on a project like this.

It’s worth noting that the DOD isn’t just interested in technical chops; it is also looking closely at pricing and has explicitly asked for specific discounts to be applied to each component. The RFP process closes on October 12th, and the winner is expected to be chosen next April.

Amazon

What can you say about Amazon? They are by far the dominant cloud infrastructure vendor, and they have the advantage of having scored a large government contract in the past, when they built the CIA’s private cloud in 2013, earning $600 million for their troubles. That project produced GovCloud, Amazon’s offering designed to host sensitive government data.

Jeff Bezos, Chairman and founder of Amazon.com. Photo: Drew Angerer/Getty Images

Many of the other vendors worry that gives Amazon a leg up on this deal. While five years is a long time, especially in technology terms, if anything Amazon has tightened its control of the market. Heck, most of the other players were just beginning to establish their cloud businesses in 2013. Amazon, whose cloud launched in 2006, has a maturity the others lack, and it is still innovating, introducing dozens of new features every year. That makes it difficult to compete with, but even the biggest player can be taken down with the right game plan.

Microsoft

If anyone can take Amazon on, it’s Microsoft. While they were somewhat late to the cloud, they have more than made up for it over the last several years. They are growing fast, yet are still far behind Amazon in terms of pure market share. Still, they have a lot to offer the Pentagon, including a combination of Azure, their cloud platform, and Office 365, the popular business suite that includes Word, PowerPoint, Excel and Outlook email. What’s more, they have a fat $900 million contract with the DOD, signed in 2016, for Windows and related hardware.

Microsoft CEO Satya Nadella. Photo: David Paul Morris/Bloomberg via Getty Images

Azure Stack is particularly well suited to a military scenario. It’s a private cloud that lets you stand up a mini version of the Azure public cloud on your own hardware, fully compatible with the public cloud in terms of APIs and tools. The company also has Azure Government Cloud, which is certified for use by many branches of the U.S. government, including DOD Level 5. Microsoft also brings years of experience working with large enterprises and government clients, meaning it knows how to manage a contract of this size.

Google

When we talk about the cloud, we tend to think of the Big Three. The third member of that group is Google. They have been working hard to establish their enterprise cloud business since 2015 when they brought in Diane Greene to reorganize the cloud unit and give them some enterprise cred. They still have a relatively small share of the market, but they are taking the long view, knowing that there is plenty of market left to conquer.

Head of Google Cloud Diane Greene. Photo: TechCrunch

They have taken the approach of open sourcing many of the tools they use in-house, then offering cloud versions of those same services, arguing that nobody knows better how to manage large-scale operations than they do. They have a point, and that could play well in a bid for this contract, but they also stepped away from Project Maven, an artificial intelligence contract with the DOD, when a group of their employees objected. It’s not clear whether that would be held against them in the bidding process here.

IBM

IBM has been using its checkbook to build a broad platform of cloud services since 2013, when it bought SoftLayer for its infrastructure services, adding software and development tools over the years and emphasizing AI, big data, security, blockchain and other services. All the while, it has been trying to take full advantage of its artificial intelligence engine, Watson.

IBM Chairman, President and CEO Ginni Rometty. Photo: Ethan Miller/Getty Images

As one of the primary technology brands of the 20th century, the company has vast experience working with contracts of this scope and with large enterprise clients and governments. It’s not clear if this translates to its more recently developed cloud services, or if it has the cloud maturity of the others, especially Microsoft and Amazon. In that light, it would have its work cut out for it to win a contract like this.

Oracle

Oracle has been complaining since last spring to anyone who will listen, including reportedly the president, that the JEDI RFP is unfairly written to favor Amazon, a charge that DOD firmly denies. They have even filed a formal protest against the process itself.

That could be a smoke screen because the company was late to the cloud, took years to take it seriously as a concept, and barely registers today in terms of market share. What it does bring to the table is broad enterprise experience over decades and one of the most popular enterprise databases in the last 40 years.

Larry Ellison, chairman of Oracle. Photo: David Paul Morris/Bloomberg via Getty Images

It recently began offering a self-repairing database in the cloud that could prove attractive to the DOD, but whether its other offerings are enough to help it win this contract remains to be seen.

Sep 27, 2018

Dropbox overhauls internal search to improve speed and accuracy

Over the last several months, Dropbox has been undertaking an overhaul of its internal search engine for the first time since 2015. Today, the company announced that the new version, dubbed Nautilus, is ready for the world. The latest search tool takes advantage of a new architecture powered by machine learning to help pinpoint the exact piece of content a user is looking for.

While an individual user has a much smaller body of documents to search than the World Wide Web, the paradox of enterprise search is that the fewer documents you have, the harder it is to locate the correct one. Dropbox also faces a host of additional challenges: it has more than 500 million users and hundreds of billions of documents, making finding the correct piece for a particular user even more difficult. The company had to take all of this into consideration when rebuilding its internal search engine.

One way for the search team to attack a problem of this scale was to bring machine learning to bear on it, but it took more than an underlying layer of intelligence to make this work. It also required completely rethinking the search tool at the architectural level.

That meant separating the two main pieces of the system: indexing and serving. The indexing piece is, of course, crucial in any search engine; a system of this size and scope needs a fast indexing engine to keep up with the sheer number of documents and the constant churn of changing content. This is the piece that’s hidden behind the scenes. The serving side of the equation is what end users see: they query the search engine and the system returns a set of results.

Nautilus Architecture Diagram: Dropbox

Dropbox described the indexing system in a blog post announcing the new search engine: “The role of the indexing pipeline is to process file and user activity, extract content and metadata out of it, and create a search index.” They added that the easiest way to index a corpus of documents would be to just keep checking and iterating, but that couldn’t keep up with a system this large and complex, especially one that is focused on a unique set of content for each user (or group of users in the business tool).

They account for that in a couple of ways. They create offline builds every few days, but they also watch as users interact with their content and try to learn from that. As that happens, Dropbox creates what it calls “index mutations,” which it merges with the running indexes from the offline builds to help provide ever more accurate results.
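
Dropbox hasn’t published Nautilus itself, so the following is only a toy sketch of that overlay idea: a base inverted index from an offline build plus a mutation log that is consulted at query time. All class and variable names here are hypothetical.

```python
# Hypothetical sketch of overlaying "index mutations" on an offline index
# build -- not Dropbox's actual Nautilus code.
from collections import defaultdict

class SearchIndex:
    def __init__(self, offline_build):
        # offline_build: {term: set(doc_ids)} produced every few days
        self.base = offline_build
        self.added = defaultdict(set)    # term -> doc_ids added since the build
        self.deleted = set()             # doc_ids removed since the build

    def apply_mutation(self, doc_id, terms, deleted=False):
        """Record a user edit/upload/delete without rebuilding the base index."""
        if deleted:
            self.deleted.add(doc_id)
            return
        self.deleted.discard(doc_id)
        for term in terms:
            self.added[term].add(doc_id)

    def lookup(self, term):
        """Merge the offline build with the mutation log at query time."""
        docs = self.base.get(term, set()) | self.added[term]
        return docs - self.deleted

# Example: the offline build knows about doc1; mutations add doc2 and delete doc1.
index = SearchIndex({"roadmap": {"doc1"}})
index.apply_mutation("doc2", ["roadmap", "q4"])
index.apply_mutation("doc1", [], deleted=True)
print(index.lookup("roadmap"))  # {'doc2'}
```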

The indexing process has to take into account the textual content, assuming it’s a document, but it also has to look at the underlying metadata as a clue to the content. That information feeds a retrieval engine, whose job is to find as many candidate documents as it can, as fast as it can, and worry about accuracy later.

It has to make sure it checks all of the repositories. For instance, Dropbox Paper is a separate repository, so the answer could be found there. It also has to take into account the access-level security, only displaying content that the person querying has the right to access.

Once it has a set of possible results, it uses machine learning to pinpoint the correct content. “The ranking engine is powered by a [machine learning] model that outputs a score for each document based on a variety of signals. Some signals measure the relevance of the document to the query (e.g., BM25), while others measure the relevance of the document to the user at the current moment in time,” they explained in the blog post.
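
The post cites BM25 as an example of a query-relevance signal. For reference, here is a minimal plain-Python version of the standard Okapi BM25 score; the learning-to-rank model that combines it with Dropbox’s other signals is the company’s own and isn’t shown, and the tiny corpus below is invented for illustration.

```python
# Standard Okapi BM25 -- one of the query-relevance signals the blog post
# mentions. The ML model that blends such signals is Dropbox's own.
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one document (a list of terms) against a query, using the corpus
    (a list of term lists) for document-frequency and average-length stats."""
    n_docs = len(corpus)
    avg_len = sum(len(d) for d in corpus) / n_docs
    score = 0.0
    for term in query_terms:
        tf = doc_terms.count(term)
        if tf == 0:
            continue
        df = sum(1 for d in corpus if term in d)            # document frequency
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        norm = k1 * (1 - b + b * len(doc_terms) / avg_len)  # length normalization
        score += idf * tf * (k1 + 1) / (tf + norm)
    return score

corpus = [["q3", "roadmap", "draft"], ["expense", "report"], ["roadmap", "review"]]
print(bm25_score(["roadmap"], corpus[0], corpus))
```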

After the system has a list of potential candidates, it ranks them and displays the results for the end user in the search interface, but a lot of work goes into that from the moment the user types the query until it displays a set of potential files. This new system is designed to make that process as fast and accurate as possible.

Sep 26, 2018

Putting the Pentagon $10B JEDI cloud contract into perspective

Sometimes $10 billion isn’t as much as you think.

It’s true that when you look at the bottom line number of the $10 billion Joint Enterprise Defense Infrastructure (JEDI) cloud contract, it’s easy to get lost in the sheer size of it, and the fact that it’s a one-vendor deal. The key thing to remember as you think about this deal is that while it’s obviously a really big number, it’s spread out over a long period of time and involves a huge and growing market.

It’s also important to remember that the Pentagon has given itself lots of out clauses in the way the contract is structured. This could be important for those who are worried about one vendor having too much power in a deal like this. “This is a two-year contract, with three option periods: one for three years, another for three years, and a final one for two years,” Heather Babb, Pentagon spokeswoman told TechCrunch.

The contract itself has been set up to define the department’s cloud strategy for the next decade. The thinking is that by establishing a relationship with a single vendor, it will improve security and simplify overall management of the system. It’s also part of a broader view of setting technology policy for the next decade and preparing the military for more modern requirements like Internet of Things and artificial intelligence applications.

Many vendors have publicly expressed unhappiness at the winner-take-all, single vendor approach, which they believe might be unfairly tilted toward market leader Amazon. Still, the DOD, which has stated that the process is open and fair, seems determined to take this path, much to the chagrin of most vendors, who believe that a multi-vendor strategy makes more sense.

John Dinsdale, chief analyst at Synergy Research Group, a firm that keeps close tabs on the cloud market, says it’s also important to keep the figure in perspective compared to the potential size of the overall market.

“The current worldwide market run rate is equivalent to approximately $60 billion per year and that will double in less than three years. So in very short order you’re going to see a market that is valued at greater than $100 billion per year – and is continuing to grow rapidly,” he said.

Put in those terms, $10 billion over a decade, while surely a significant figure, isn’t quite market altering if the market size numbers are right. “If the contract is truly worth $10 billion that is clearly a very big number. It would presumably be spread over many years which then puts it at only a very small share of the total market,” he said.

He also acknowledges that it would be a big feather in the cap of whichever company wins the business, and it could open the door for other business in the government and private sector. After all, if you can handle the DOD, chances are you can handle just about any business where a high level of security and governance would be required.

Final bids are due on October 12th, with a projected award date of April 2019, but even at $10 billion, an astronomical sum of money to be sure, it ultimately might not shift the market in the way you think.

Sep 25, 2018

With MuleSoft in fold, Salesforce gains access to data wherever it lives

When Salesforce bought MuleSoft last spring for the tidy sum of $6.5 billion, it looked like money well spent for the CRM giant. After all, MuleSoft provided a bridge between the cloud and the on-prem data center, and that was a huge missing link for a company with ambitions as big as Salesforce’s.

When you want to rule the enterprise, you can’t be limited by where data lives and you need to be able to share information across disparate systems. Partly that’s a simple story of enterprise integration, but on another level it’s purely about data. Salesforce introduced its intelligence layer, dubbed Einstein, at Dreamforce in 2016.

With MuleSoft in the fold, it has access to data across systems, wherever that data lives, in the cloud or on-prem. Data is the fuel of artificial intelligence, and Salesforce has been trying desperately to get more of it for Einstein since the product’s inception.

It lost out on LinkedIn to Microsoft, which flexed its financial muscles and reeled in the business social network for $26.5 billion a couple of years ago; that was undoubtedly the kind of rich data source Salesforce longed for. Next, Salesforce set its sights on Twitter, but after board and stockholder concerns it walked away (Twitter, of course, was ultimately never sold).

Each of these forays was about the data. Frustrated, Salesforce went back to the drawing board. While MuleSoft does not supply the direct cache of data that a social network would have, it does provide a neat way to get at backend data sources — the very type of data that matters most to Salesforce’s enterprise customers.

Today, the company has extended that notion beyond pure data access to a graph. You can probably see where this is going. The idea of a graph, the connections between, say, a buyer and the things they tend to buy, or a person on a social network and the people they tend to interact with, can be extended to the network/API level, and that is precisely the story Salesforce is trying to tell this week at its Dreamforce customer conference in San Francisco.
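
Salesforce didn’t detail the data model behind that graph, but the underlying idea is simply a directed graph over systems and the APIs that connect them. Here is a toy adjacency-list sketch; every node name is made up for illustration and has nothing to do with MuleSoft’s actual schema.

```python
# Toy adjacency-list sketch of an "application network" graph -- node names
# are invented for illustration, not MuleSoft's data model.
from collections import defaultdict

edges = defaultdict(set)

def connect(source, target):
    edges[source].add(target)

# Systems exposing data through APIs, and the consumers that call them.
connect("Salesforce CRM", "Customer 360")
connect("SAP ERP (on-prem)", "Orders API")
connect("Orders API", "Customer 360")
connect("Customer 360", "Service Console")

def reachable_from(node, seen=None):
    """Everything downstream of a system -- e.g. what is affected if it changes."""
    seen = seen or set()
    for target in edges[node]:
        if target not in seen:
            seen.add(target)
            reachable_from(target, seen)
    return seen

print(reachable_from("SAP ERP (on-prem)"))
# {'Orders API', 'Customer 360', 'Service Console'}
```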

Visualizing connections in a data integration network in MuleSoft. Screenshot: Salesforce/MuleSoft

Maureen Fleming, program vice president for integration and process automation research at IDC, says that it is imperative that organizations view data as a strategic asset and act accordingly. “Very few companies are getting all the value from their data as they should be, as it is locked up in various applications and systems that aren’t designed to talk to each other. Companies who are truly digitally capable will be able to connect these disparate data sources, pull critical business-level data from these connections, and make informed business decisions in a way that delivers competitive advantage,” Fleming explained in a statement.

Configuring data connections on MuleSoft Anypoint Platform. Gif: Salesforce/MuleSoft

It’s hard to overstate how valuable this type of data is to Salesforce, which has already put MuleSoft to work internally to help build the new Customer 360 product announced today. The company can point to how it is providing exactly the kind of data integration Fleming describes across its own product set.

Bret Taylor, president and chief product officer at Salesforce, says that for his company all of this is ultimately about enhancing the customer experience. You need to be able to stitch together these different computing environments and data silos to make that happen.

“In the short term, [customer] infrastructure is often fragmented. They often have some legacy applications on premise, they’ll have some cloud applications like Salesforce, but some infrastructure on Amazon or Google and Azure, and to actually transform the customer experience, they need to bring all this data together. And so it’s really a unique time for integration technologies, like MuleSoft, because it enables you to create a seamless customer experience, no matter where that data lives, and that means you don’t need to wait for infrastructure to be perfect before you can transform your customer experience.”

Sep 25, 2018

Salesforce wants to end customer service frustration with Customer 360

How many times have you called a company, answered a bunch of preliminary questions about the purpose of your call, then found that those answers never made their way to the customer service representative who ultimately took your call?

That usually happens because System A can’t talk to System B, and it’s frustrating for the caller, who is already annoyed at having to repeat the same information. Salesforce wants to help bring an end to that problem with its new Customer 360 product, announced today at Dreamforce, the company’s customer conference taking place this week in San Francisco.

What’s interesting about Customer 360 from a product development perspective is that Salesforce didn’t just turn the technology from the $6.5 billion MuleSoft acquisition into a product; it also used that same technology internally to pull the various pieces together into a more unified view of the Salesforce product family. In theory, this should let the customer service representative on the phone with you see the total picture of your interactions with the company, reducing the need to repeat yourself because the information wasn’t passed along.

Screenshot: Salesforce

The idea here is to bring all of the different products — sales, service, community, commerce and marketing — into a single unified view of the customer, and, according to the company, to do so without writing any code.

Adding a data source to Customer 360. Gif: Salesforce

This allows anyone who interacts with the customer to see the whole picture, something that has eluded many companies and frustrated many customers. The customer record in Salesforce CRM is only part of the story, as are the marketing pitches and the e-commerce records. It all comes together to tell a story about that customer, but when the data is trapped in silos, nobody can see it. That’s what Customer 360 is supposed to solve.

While Bret Taylor, Salesforce’s president and chief product officer, says there were ways to make this happen in Salesforce before, the company has never offered a product that does so in such a direct way. He says big brands like Apple, Amazon and Google have changed expectations for how we expect to be treated when we connect with a brand, and Customer 360 is focused on helping companies meet that level of expectation.

“Now, when people don’t get that experience, where the company that you’re interacting with doesn’t know who you are, it’s gone from a pleasant experience to an expectation, and that’s what we hear time and time again from our customers. And that’s why we’re so focused on integration, that single view of the customer is the ultimate value proposition of these experiences,” Taylor explained.

This product is aimed at the Salesforce admins who have been responsible in the past for configuring and customizing Salesforce products for the unique needs of each department or the overall organization. They can configure Customer 360 to pull data from Salesforce and other products, too.

Customer 360 is being piloted in North America right now and should become generally available sometime next year.
