Dec 03, 2019

AWS announces new enterprise search tool powered by machine learning

Today at AWS re:Invent in Las Vegas, the company announced a new search tool called Kendra, which provides natural language search across a variety of content repositories using machine learning.

Matt Wood, AWS VP of artificial intelligence, said the new search tool uses machine learning, but doesn’t actually require machine learning expertise of any kind. Amazon is taking care of that for customers under the hood.

You start by identifying your content repositories. This could be anything from an S3 storage repository to OneDrive to Salesforce — anywhere you store content. You can use pre-built connectors from AWS, provide your credentials and connect to all of these different tools.
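For the curious, a minimal sketch of what that setup might look like with the boto3 Kendra client is below; the index name, bucket and IAM role ARNs are placeholders, not details from the announcement.

```python
# Minimal sketch, assuming the boto3 Kendra client; names and ARNs are
# placeholders rather than values from AWS's announcement.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# Create the index Kendra will build from your repositories.
index = kendra.create_index(
    Name="company-knowledge",
    RoleArn="arn:aws:iam::123456789012:role/KendraIndexRole",
)

# Attach a pre-built connector -- here, an S3 bucket of documents.
kendra.create_data_source(
    IndexId=index["Id"],
    Name="s3-docs",
    Type="S3",
    Configuration={"S3Configuration": {"BucketName": "company-docs"}},
    RoleArn="arn:aws:iam::123456789012:role/KendraS3Role",
)
```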

Kendra then builds an index based on the content it finds in the connected repositories, and users can begin to interact with the search tool using natural language queries. The tool understands concepts like time, so if the question is something like “When is the IT Help Desk open?”, the search engine understands that this is about time, checks the index and delivers the right information to the user.
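In API terms, that interaction could be as simple as the following, continuing the hypothetical setup above:

```python
# Ask the index a natural language question and print the matches (sketch).
response = kendra.query(
    IndexId=index["Id"],
    QueryText="When is the IT Help Desk open?",
)
for item in response["ResultItems"]:
    print(item["Type"], item.get("DocumentExcerpt", {}).get("Text"))
```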

The beauty of this search tool is not only that it uses machine learning, but that, based on simple feedback from a user, like a smiley face or sad face emoji, it can learn which answers are good and which ones require improvement, and it does this automatically for the search team.
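Kendra exposes this feedback loop through an API as well; a hedged sketch, reusing the hypothetical query response above:

```python
# Relay a user's emoji-style reaction so the service can tune its rankings.
kendra.submit_feedback(
    IndexId=index["Id"],
    QueryId=response["QueryId"],
    RelevanceFeedbackItems=[{
        "ResultId": response["ResultItems"][0]["Id"],
        "RelevanceValue": "RELEVANT",  # a sad face would map to NOT_RELEVANT
    }],
)
```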

Once you have it set up, you can drop the search tool on your company intranet or embed it inside an application, and it behaves as you would expect a search tool to, with features like type-ahead.

Dec 03, 2019

AWS AutoPilot gives you more visible AutoML in SageMaker Studio

Today at AWS re:Invent in Las Vegas, the company announced AutoPilot, a new tool that gives you greater visibility into automated machine learning model creation, known as AutoML. This new tool is part of the new SageMaker Studio also announced today.

As AWS CEO Andy Jassy pointed out onstage today, one of the problems with AutoML is that it’s basically a black box. If you want to improve a mediocre model, or just evolve it for your business, you have no idea how it was built.

The idea behind AutoPilot is to give you the ease of model creation you get from an AutoML-generated model, but also give you much deeper insight into how the system built the model. “AutoPilot is a way to create a model automatically, but give you full visibility and control,” Jassy said.

“Using a single API call, or a few clicks in Amazon SageMaker Studio, SageMaker Autopilot first inspects your data set, and runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms and hyperparameters. Then, it uses this combination to train an Inference Pipeline, which you can easily deploy either on a real-time endpoint or for batch processing. As usual with Amazon SageMaker, all of this takes place on fully-managed infrastructure,” the company explained in a blog post announcing the new feature.
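That single API call corresponds to boto3’s create_auto_ml_job; here is a minimal sketch in which the bucket, role and target column are illustrative assumptions:

```python
# Kick off an Autopilot job with one call (sketch; names are hypothetical).
import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/train/",
        }},
        # The column Autopilot should learn to predict.
        "TargetAttributeName": "churned",
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/autopilot-output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
)
```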

You can look at each model’s parameters, see the 50 automatically generated candidate models and consult a leaderboard of which performed best. What’s more, you can open a model’s underlying notebook and see what trade-offs were made to generate it. For instance, the best model may be the most accurate, but sacrifice speed to get that accuracy.
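The leaderboard is queryable programmatically too; a sketch using the hypothetical job above:

```python
# List candidate models ranked by their final objective metric (sketch).
candidates = sm.list_candidates_for_auto_ml_job(
    AutoMLJobName="churn-autopilot",
    SortBy="FinalObjectiveMetricValue",
    SortOrder="Descending",
)["Candidates"]

for c in candidates:
    metric = c["FinalAutoMLJobObjectiveMetric"]
    print(c["CandidateName"], metric["MetricName"], metric["Value"])
```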

Your company may have its own set of unique requirements and you can choose the best model based on whatever parameters you consider to be most important, even though it was generated in an automated fashion.

Once you have the model you like best, you can go into SageMaker Studio, select it and launch it with a single click. The tool is available now.

 

Dec 02, 2019

New Amazon tool simplifies delivery of containerized machine learning models

As part of the flurry of announcements coming this week out of AWS re:Invent, Amazon announced the release of Amazon SageMaker Operators for Kubernetes, a way for data scientists and developers to simplify training, tuning and deploying containerized machine learning models.

Packaging machine learning models in containers can help put them to work inside organizations faster, but getting there often requires a lot of extra management to make it all work. Amazon SageMaker Operators for Kubernetes is supposed to make it easier to run and manage those containers, the underlying infrastructure needed to run the models and the workflows associated with all of it.

“While Kubernetes gives customers control and portability, running ML workloads on a Kubernetes cluster brings unique challenges. For example, the underlying infrastructure requires additional management such as optimizing for utilization, cost and performance; complying with appropriate security and regulatory requirements; and ensuring high availability and reliability,” AWS’ Aditya Bindal wrote in a blog post introducing the new feature.

When you combine that with the workflows associated with delivering a machine learning model inside an organization at scale, it becomes part of a much bigger delivery pipeline, one that is challenging to manage across departments and a variety of resource requirements.

This is precisely what Amazon SageMaker Operators for Kubernetes has been designed to help DevOps teams do. “Amazon SageMaker Operators for Kubernetes bridges this gap, and customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale,” Bindal wrote.
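In practice, that “simple call” takes the form of a Kubernetes custom resource. Below is a hedged sketch of submitting a SageMaker training job through the official Kubernetes Python client; the CRD group, version and field names are assumptions based on the operator’s documented mapping of the CreateTrainingJob API, and every ARN, image and bucket is a placeholder.

```python
# Sketch: submitting a SageMaker training job as a Kubernetes custom
# resource via the official Python client. The CRD group, version and
# field names are assumptions; all ARNs, images and buckets are
# placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-train"},
    "spec": {
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerRole",
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:1",
            "trainingInputMode": "File",
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output/"},
        "resourceConfig": {
            "instanceType": "ml.m5.xlarge",
            "instanceCount": 1,
            "volumeSizeInGB": 10,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

# kubectl-style create; the operator relays this to SageMaker and manages
# the job's lifecycle from inside the cluster.
api.create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",
    version="v1",
    namespace="default",
    plural="trainingjobs",
    body=training_job,
)
```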

The promise of Kubernetes is that it can orchestrate the delivery of containers at the right moment, but if you haven’t automated delivery of the underlying infrastructure, you can over (or under) provision and not provide the correct amount of resources required to run the job. That’s where this new tool, combined with SageMaker, can help.

“With workflows in Amazon SageMaker, compute resources are pre-configured and optimized, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete, offering near 100% utilization,” Bindal wrote.

Amazon SageMaker Operators for Kubernetes are available today in select AWS regions.

Nov 26, 2019

New Amazon capabilities put machine learning in reach of more developers

Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line of business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.

While the company offers plenty of tools for data scientists to build machine learning models and to process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.

By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can have much more direct access to machine learning models and underlying data without any additional coding, says VP of artificial intelligence at AWS, Matt Wood.

“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.

For starters, Wood says developers can take advantage of Aurora, the company’s MySQL (and Postgres)-compatible database to build a simple SQL query into an application, which will automatically pull the data into the application and run whatever machine learning model the developer associates with it.

The second piece involves Athena, the company’s serverless query service. As with Aurora, developers can write a SQL query — in this case, against any data store — and based on a machine learning model they choose, return a set of data for use in an application.
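A hedged sketch of what such a query might look like, using the USING EXTERNAL FUNCTION syntax from Athena’s preview of this feature, driven through boto3; the endpoint, table and output bucket are hypothetical.

```python
# Sketch of Athena's SQL-to-SageMaker hook (preview-era syntax);
# endpoint, table and bucket names are hypothetical.
import boto3

athena = boto3.client("athena")

sql = """
USING EXTERNAL FUNCTION predict_churn(visits INT, spend DOUBLE)
RETURNS DOUBLE SAGEMAKER 'churn-endpoint'
SELECT customer_id, predict_churn(visits, spend) AS churn_risk
FROM analytics.customers
"""

athena.start_query_execution(
    QueryString=sql,
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
```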

The final piece is QuickSight, which is Amazon’s data visualization tool. Using one of the other tools to return a set of data, developers can create visualizations from that data inside whatever application they are building.

“By making sophisticated ML predictions more easily available through SQL queries and dashboards, the changes we’re announcing today help to make ML more usable and accessible to database developers and business analysts. Now anyone who can write SQL can make — and importantly use — predictions in their applications without any custom code,” Amazon’s Matt Asay wrote in a blog post announcing these new capabilities.

Asay added that this approach is far easier than what developers had to do in the past to achieve this. “There is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard,” he wrote.

As an example, Wood offers a lead-scoring model you might use to pick the most likely sales targets to convert. “Today, in order to do lead scoring you have to go off and wire up all these pieces together in order to be able to get the predictions into the application,” he said. With this new capability, you can get there much faster.

“Now, as a developer I can just say that I have this lead scoring model which is deployed in SageMaker, and all I have to do is write literally one SQL statement that I do all day long into Aurora, and I can start getting back that lead scoring information. And then I just display it in my application and away I go,” Wood explained.
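To make Wood’s example concrete, here is a hedged sketch of what that one statement might look like in Aurora MySQL, run from Python with pymysql. The function-to-endpoint binding follows Aurora’s documented aws_sagemaker_invoke_endpoint syntax, and the endpoint, table and columns are hypothetical.

```python
# Hedged sketch: bind a SQL function to a SageMaker endpoint in Aurora
# MySQL, then score leads with plain SQL. Endpoint, table and columns
# are hypothetical.
import pymysql

conn = pymysql.connect(host="aurora-cluster.example.com",
                       user="app", password="secret", database="crm")

with conn.cursor() as cur:
    # One-time setup: map a SQL function onto the deployed model endpoint.
    cur.execute("""
        CREATE FUNCTION score_lead(visits INT, industry VARCHAR(64))
        RETURNS FLOAT
        ALIAS AWS_SAGEMAKER_INVOKE_ENDPOINT
        ENDPOINT NAME 'lead-scoring-endpoint'
    """)
    # The "one SQL statement" a developer writes all day long.
    cur.execute("""
        SELECT name, score_lead(visits, industry) AS score
        FROM leads
        ORDER BY score DESC
        LIMIT 10
    """)
    for name, score in cur.fetchall():
        print(name, score)
```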

As for the machine learning models, these can come pre-built from Amazon, be developed by an in-house data science team or purchased in a machine learning model marketplace on Amazon, says Wood.

Today’s announcements from Amazon are designed to simplify machine learning and data access, and reduce the amount of coding to get from query to answer faster.

Nov 20, 2019

Reimagine inside sales to ramp up B2B customer acquisition

Slack makes customer acquisition look easy.

The day we acquired our first Highspot customer, it was raining hard in Seattle. I was on my way to a startup event when I answered my cell phone and our prospect said, “We’re going with Highspot.” Relief, then excitement, hit me harder than the downpour outside. It was a milestone moment – one that came after a long journey of establishing product-market fit, developing a sustainable competitive advantage, and iterating repeatedly based on prospect feedback. In other words, it was anything but easy.

User-first products are driving rapid company growth in an era where individuals discover, adopt, and share software they like throughout their organizations. This is great if you’re a Slack, Shopify, or Dropbox, but what if your company doesn’t fit that profile?

Product-led growth is a strategy that works for the right technologies, but it’s not the end-all, be-all for B2B customer acquisition. For sophisticated enterprise software platforms designed to drive company-wide value, such as Marketo, ServiceNow and Workday, that value is realized when the product is adopted en masse by one or more large segments.

If you’re selling broad account value, rather than individual user or team value, acquisition boils down to two things: elevating account-based selling and revolutionizing the inside sales model. Done correctly, you lay a foundation capable of doubling revenue growth year-over-year, sustaining 95 percent company-wide retention and growing new customer logos by more than 100 percent annually. Here are the steps you can take to build a model that delivers results on that level.

Work the account, not the deal

Account-based selling is not a new concept, but the availability of data today changes the game. Advanced analytics enable teams to develop comprehensive and personalized approaches that meet modern customers’ heightened expectations. And when 77 percent of business buyers feel that technology has significantly changed how companies should interact with them, you have no choice but to deliver.

Despite the multitude of products created to help sellers be more productive and personal, billions of cookie-cutter emails are still flooding the inboxes of a few decision makers. The market is loud. Competition is cutthroat. It’s no wonder 40 percent of sales reps say getting a response from a prospect is more difficult than ever before. Even pioneers of sales engagement are recognizing the need for evolution – yesterday’s one-size-fits-all approach to outreach only widens the gap between today’s sellers and buyers.

Companies must radically change their approach to account-based selling by building trusted relationships over time, from the first touch onward. This requires that your entire sales force – from account development representatives to your head of sales – adds tailored, tangible value at every stage of the journey. Modern buyers don’t want to be sold. They want to be advised. But the majority of companies are still missing the mark, favoring spray-and-pray tactics over personalized guidance.

One reason spamming remains prevalent, despite growing awareness of the need for quality over quantity, is that implementing a tailored approach is hard work. However, companies can make great strides by doing just three things:

  • Invest in personalization: Sales reps have quotas, and sales leaders carry revenue targets. The pressure is as real as the numbers. But high-velocity outreach tactics simply don’t work consistently. New research from Monetate and WBR Research found that 93% of businesses with advanced personalization strategies increased their revenue last year. And while scaling personalization may sound like an oxymoron, we now have artificial intelligence (AI) technology capable of doing just that. Of course, not all AI is created equal, so take the time to discern AI-powered platforms that deliver real value from the imposters. With a little research, you’ll find sales tools that discard rinse-and-repeat prospecting methods in favor of intelligent guidance and actionable analytics.

Nov 19, 2019

The Cerebras CS-1 computes deep learning AI problems by being bigger, bigger, and bigger than any other chip

Deep learning is all the rage these days in enterprise circles, and it isn’t hard to understand why. Whether it is optimizing ad spend, finding new drugs to cure cancer, or just offering better, more intelligent products to customers, machine learning — and particularly deep learning models — have the potential to massively improve a range of products and applications.

The key word though is ‘potential.’ While we have heard oodles of words sprayed across enterprise conferences the last few years about deep learning, there remain huge roadblocks to making these techniques widely available. Deep learning models are highly networked, with dense graphs of nodes that don’t “fit” well with the traditional ways computers process information. Plus, holding all of the information required for a deep learning model can take petabytes of storage and racks upon racks of processors in order to be usable.

There are lots of approaches underway right now to solve this next-generation compute problem, and Cerebras has to be among the most interesting.

As we talked about in August with the announcement of the company’s “Wafer Scale Engine” — the world’s largest silicon chip according to the company — Cerebras’ theory is that the way forward for deep learning is to essentially just get the entire machine learning model to fit on one massive chip. And so the company aimed to go big — really big.

Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, and also announced its first customer, Argonne National Laboratory.

The CS-1 is a “complete solution” product designed to be added to a data center to handle AI workflows. It includes the Wafer Scale Engine (or WSE, i.e. the actual processing core) plus all the cooling, networking, storage, and other equipment required to operate and integrate the processor into the data center. It’s 26.25 inches tall (15 rack units), and includes 400,000 processing cores, 18 gigabytes of on-chip memory, 9 petabytes per second of on-die memory bandwidth, 12 gigabit ethernet connections to move data in and out of the CS-1 system, and sucks just 20 kilowatts of power.

A cross-section look at the CS-1. Photo via Cerebras

Cerebras claims that the CS-1 delivers the performance of more than 1,000 leading GPUs combined — a claim that TechCrunch hasn’t verified, although we are intently waiting for industry-standard benchmarks in the coming months when testers get their hands on these units.

In addition to the hardware itself, Cerebras also announced the release of a comprehensive software platform that allows developers to use popular ML libraries like TensorFlow and PyTorch to integrate their AI workflows with the CS-1 system.

In designing the system, CEO and co-founder Andrew Feldman said that “We’ve talked to more than 100 customers over the past year and a bit,” in order to determine the needs for a new AI system and the software layer that should go on top of it. “What we’ve learned over the years is that you want to meet the software community where they are rather than asking them to move to you.”

I asked Feldman why the company was rebuilding so much of the hardware to power their system, rather than using already existing components. “If you were to build a Ferrari engine and put it in a Toyota, you cannot make a race car,” Feldman analogized. “Putting fast chips in Dell or [other] servers does not make fast compute. What it does is it moves the bottleneck.” Feldman explained that the CS-1 was meant to take the underlying WSE chip and give it the infrastructure required to allow it to perform to its full capability.

A diagram of the Cerebras CS-1 cooling system. Photo via Cerebras.

That infrastructure includes a high-performance water cooling system to keep this massive chip and platform operating at the right temperatures. I asked Feldman why Cerebras chose water, given that water cooling has traditionally been complicated in the data center. He said, “We looked at other technologies — freon. We looked at immersive solutions, we looked at phase-change solutions. And what we found was that water is extraordinary at moving heat.”

A side view of the CS-1 with its water and air cooling systems visible. Photo via Cerebras.

Why then make such a massive chip, which, as we discussed back in August, has huge engineering requirements to operate compared to smaller chips that get better yield from wafers? Feldman said that “it massively reduces communication time by using locality.”

In computer science, locality means placing data and compute in the right places within, say, a cloud so as to minimize delays and processing friction. By having a chip that can theoretically host an entire ML model on it, there’s no need for data to flow through multiple storage clusters or ethernet cables — everything the chip needs to work with is available almost immediately.

According to a statement from Cerebras and Argonne National Laboratory, Cerebras is helping to power research in “cancer, traumatic brain injury and many other areas important to society today” at the lab. Feldman said that “It was very satisfying that right away customers were using this for things that are important and not for 17-year-old girls to find each other on Instagram or some shit like that.”

(Of course, one hopes that cancer research pays as well as influencer marketing when it comes to the value of deep learning models).

Cerebras itself has grown rapidly, reaching 181 engineers today, according to the company. Feldman says the company is heads down on customer sales and additional product development.

It has certainly been a busy time for startups in the next-generation artificial intelligence workflow space. Graphcore just announced this weekend that it was being installed in Microsoft’s Azure cloud, while I covered the funding of NUVIA, a startup led by the former lead chip designers from Apple who hope to apply their mobile backgrounds to solve the extreme power requirements these AI chips force on data centers.

Expect ever more announcements and activity in this space as deep learning continues to find new adherents in the enterprise.

Nov 18, 2019

18 months after acquisition, MuleSoft is integrating more deeply into Salesforce

A year and a half after getting acquired by Salesforce for $6.5 billion, MuleSoft is beginning to resemble a Salesforce company — using its language and its methodologies to describe new products and services. This week at Dreamforce, as the company’s mega customer conference begins in San Francisco, MuleSoft announced a slew of new services as it integrates more deeply into the Salesforce family of products.

MuleSoft creates APIs to connect different systems together. This can be quite useful for Salesforce as a bridge to older software that may live on-prem or in the cloud. It allows Salesforce and its customers to access data wherever it lives, even from different parts of the Salesforce ecosystem itself.

MuleSoft made a number of announcements designed to simplify that process and put it in the hands of more customers. For starters, it’s announcing Accelerators, which are pre-defined integrations that let companies connect more easily to other systems. Not surprisingly, two of the first ones connect data from external products and services to Salesforce Service Cloud and Salesforce Commerce Cloud.

“What we’ve done is we’ve pre-built integrations to common back-end systems like ServiceNow and JIRA in Service Cloud, and we prebuilt those integrations, and then automatically connected that data and services through a Salesforce Lightning component directly in the Service console,” Lindsey Irvine, chief marketing officer at MuleSoft, explained.

What this does is allow the agent to get a more complete view of the customer, drawing not just on the data stored in Salesforce but on data in other systems as well.

The company also wants to put these kinds of integration skills in the hands of more Salesforce customers, so they have designed a set of courses in Trailhead, the company’s training platform, with the goal of helping 100,000 Salesforce admins, developers, integration architects and line of business users develop expertise around creating and managing these kinds of integrations.

The company is also putting resources into creating the API Community Manager, a place where people involved in building and managing these integrations can get help from a community of users, all built on Salesforce products and services, says Mark Dao, chief product officer at MuleSoft.

“We’re leveraging Community Cloud, Service Cloud and Marketing Cloud to create a true developer experience platform. And what’s interesting is that it’s targeting both the business users — in other words, business development teams and marketing teams — as well as external developers,” he said. He added that the fact this is working with business users as well as the integration experts is something new, and the goal is to drive increased usage of APIs using MuleSoft inside Salesforce customer organizations.

Finally, the company announced Flow Designer, a new tool fueled by Einstein AI that automates the creation of workflows and integrations between systems without requiring coding skills.

MuleSoft Flow Designer requires no coding (Screenshot: MuleSoft)

Dao says this is about putting MuleSoft in reach of more users. “It’s about enabling use cases for less technical users in the context of the MuleSoft Anypoint Platform. This really requires a new way of thinking around creating integrations, and we’ve been making Flow Designer simpler and simpler, and removing that technical layer from those users,” he said.

API Community Manager is available now. Accelerators will be available in January, and Flow Designer updates will be available in Q2 2020, according to the company.

These and other features are all designed to take some of the complexity out of using MuleSoft to help connect various systems across the organization, including both Salesforce and external programs, to make use of data wherever it lives. MuleSoft does require a fair bit of technical skill, so if the company is able to simplify integration tasks, it could help put it in the hands of more users.

Nov 18, 2019

Salesforce, Apple partnership begins to come to life

Last year at Dreamforce, Salesforce’s enormous annual customer conference, Apple and Salesforce announced the beginnings of a partnership where the two organizations would work together to enhance Salesforce products running on Apple devices. Today, as this year’s Dreamforce conference begins, the companies announced the fruits of that labor with general availability of two new tools that were first announced at last year’s event.

For starters, Apple has been working with Salesforce to redesign the Salesforce Mobile app to build Apple iOS features into it, like being able to use Siri Shortcuts to get work done faster using your voice instead of typing, something that’s sometimes awkward to do on a mobile device.

Hey Siri example in Salesforce Mobile app.

Photo: Salesforce

For instance, you could say, “Hey Siri, next sales meeting,” and Siri can interact with Salesforce CRM to tell you who your meeting is with, the name of his or her company, when you last met and what the Einstein opportunity score is to help you predict how likely it is that you could make a sale today (or eventually).

In addition, the Mobile App takes advantage of Apple’s Handoff feature to reflect changes across devices immediately, and of Apple’s Face ID for easy sign-in to the app.

Salesforce also announced a pilot of Einstein Voice on Salesforce Mobile, allowing reps to enter notes, add tasks and update the CRM database using voice. Einstein is Salesforce’s general artificial intelligence layer, and the voice feature uses natural language understanding to interpret what the rep asks.

The company reports that over 1,000 companies participated in piloting the updated app, which constitutes the largest pilot in the history of the organization.

Salesforce also announced its new mobile development platform SDK, built specifically for iOS and iPadOS using the Swift language. The idea is to give Salesforce developers the ability to build apps for iPad and iPhone, then package them up using SwiftUI and the Swift Package Manager.

Trailhead Go

Photo: Salesforce

Trailhead Go is the mobile version of the company’s online learning platform, designed specifically for iPad and iPhone. It was built using the new Mobile SDK and allows users to access the same courses they can on the web in a mobile context. The new mobile tool includes support for Handoff between devices, along with picture-in-picture and split view for multitasking when it makes sense.

Salesforce Mobile and Trailhead Go are available starting today for free in the iOS App Store. The Salesforce Mobile SDK will be available later this year.

As this partnership continues to develop, both companies should benefit. Salesforce gets direct access to Apple features, and can work with Apple to implement them in an optimized way. Apple gets deeper access to the enterprise with help from Salesforce, one of the biggest enterprise software vendors around.

Nov 15, 2019

Three of Apple and Google’s former star chip designers launch NUVIA with $53M in series A funding

Silicon is apparently the new gold these days, or so VCs hope.

What was once a no-go zone for venture investors, who feared the long development lead times and high technical risk required for new entrants in the semiconductor field, has now turned into one of the hottest investment areas for enterprise and data VCs. Startups like Graphcore have reached unicorn status (after its $200 million series D a year ago), Groq closed $52 million from the likes of Chamath Palihapitiya of Social Capital fame, and Cerebras, which I profiled a bit this summer, raised $112 million from Benchmark and others while announcing that it had produced the first trillion-transistor chip.

Today, we have another entrant with another great technical team at the helm, this time with a Santa Clara, CA-based startup called NUVIA. The company announced this morning that it has raised a $53 million series A venture round co-led by Capricorn Investment Group, Dell Technologies Capital (DTC), Mayfield, and WRVI Capital, with participation from Nepenthe LLC.

Despite only getting started earlier this year, the company currently has roughly 60 employees, with 30 more at various stages of accepted offers, and it may even crack 100 employees before the end of the year.

What’s happening here is a combination of trends in the compute industry. There has been an explosion in data and, by extension, in the data centers required to store all of that information, just as we have exponentially expanded our appetite for complex machine learning algorithms to crunch through all of those bits. Unfortunately, the growth in computation power is not keeping pace with our demands as Moore’s Law slows. Companies like Intel are hitting the limits of physics and our current know-how to continue to improve computational densities, opening the ground for new entrants and new approaches to the field.

Finding and building a dream team with a “chip” on their shoulder

There are two halves to the NUVIA story. First is the story of the company’s founders, which include John Bruno, Manu Gulati, and Gerard Williams III, who will be CEO. The three overlapped for a number of years at Apple, where they brought their diverse chip skillsets together to lead a variety of initiatives including Apple’s A-series of chips that power the iPhone and iPad. According to a press statement from the company, the founders have worked on a combined 20 chips across their careers and have received more than 100 patents for their work in silicon.

Gulati joined Apple in 2009 as a micro architect (or SoC architect) after a career at Broadcom, and a few months later, Williams joined the team as well. Gulati explained to me in an interview that, “So my job was kind of putting the chip together; his job was delivering the most important piece of IP that went into it, which is the CPU.” A few years later in around 2012, Bruno was poached from AMD and brought to Apple as well.

Gulati said that when Bruno joined, it was expected he would be a “silicon person” but his role quickly broadened to think more strategically about what the chipset of the iPhone and iPad should deliver to end users. “He really got into this realm of system-level stuff and competitive analysis and how do we stack up against other people and what’s happening in the industry,” he said. “So three very different technical backgrounds, but all three of us are very, very hands-on and, you know, just engineers at heart.”

Gulati would take an opportunity at Google in 2017 aimed broadly around the company’s mobile hardware, and he eventually pulled over Bruno from Apple to join him. The two eventually left Google earlier this year in a report first covered by The Information in May. For his part, Williams stayed at Apple for nearly a decade before leaving earlier this year in March.

The company is being stealthy about exactly what it is working on, which is typical in the silicon space because it can take years to design, manufacture, and get a product into market. That said, what’s interesting is that while the troika of founders all have backgrounds in mobile chipsets, they are indeed focused on the data center broadly conceived (i.e. cloud computing), and specifically, reading between the lines, on finding more energy-efficient approaches that can combat the rising climate cost of machine learning workflows and computation-intensive processing.

Gulati told me that “for us, energy efficiency is kind of built into the way we think.”

The company’s CMO did tell me that the startup is building “a custom clean sheet designed from the ground up” and isn’t encumbered by legacy designs. In other words, the company isn’t building on top of ARM or other existing chip architectures.

Building an investor syndicate that’s willing to “chip” in

Outside of the founders, the other half of the NUVIA story is the collective of investors sitting around the table, all of whom not only have deep technical backgrounds, but also pockets deep enough to handle the technical risk that comes with new silicon startups.

Capricorn specifically invested out of what it calls its Technology Impact Fund, which focuses on funding startups that use technology to make a positive impact on the world. Its portfolio according to a statement includes Tesla, Planet Labs, and Helion Energy.

Meanwhile, DTC is the venture wing of Dell Technologies and its associated companies, and brings a deep background in enterprise and data centers, particularly from the group’s server business like Dell EMC. Scott Darling, who leads DTC, is joining NUVIA’s board, although the company is not disclosing the board composition at this time. Navin Chaddha, an electrical engineer by training who leads Mayfield, has invested in companies like HashiCorp, Akamai, and SolarCity. Finally, WRVI has a long background in enterprise and semiconductor companies.

I chatted a bit with Darling of DTC about what he saw in this particular team and their vision for the data center. In addition to liking each founder individually, Darling felt the team as a whole was just very strong. “What’s most impressive is that if you look at them collectively, they have a skillset and breadth that’s also stunning,” he said.

He confirmed that the company is broadly working on data center products, but said the company is going to lie low on its specific strategy during product development. “No point in being specific, it just engenders immune reactions from other players so we’re just going to be a little quiet for a while,” he said.

He apologized for “sounding incredibly cryptic” but said that the investment thesis from his perspective for the product was that “the data center market is going to be receptive to technology evolutions that have occurred in places outside of the data center that’s going to allow us to deliver great products to the data center.”

Extrapolating from that statement a bit, given the mobile chip backgrounds of the founders at Google and Apple, it seems evident that the extreme energy-to-performance constraints of mobile might find some use in the data center, particularly given the heightened concerns about power consumption and climate change among data center owners.

DTC has been a frequent investor in next-generation silicon, including joining the series A investment of Graphcore back in 2016. I asked Darling whether the firm was investing aggressively in the space or sort of taking a wait-and-see attitude, and he explained that the firm tries to keep a consistent volume of investments at the silicon level. “My philosophy on that is, it’s kind of an inverted pyramid. No, I’m not gonna do a ton of silicon plays. If you look at it, I’ve got five or six. I think of them as the foundations on which a bunch of other stuff gets built on top,” he explained. He noted that each investment in the space is “expensive” given the work required to design and field a product, and so these investments have to be carefully made with the intention of supporting the companies for the long haul.

That explanation was echoed by Gulati when I asked how he and his co-founders came to closing on this investor syndicate. Given the reputations of the three, they would have had easy access to any VC in the Valley. He said about the final investors:

They understood that putting something together like this is not going to be easy and it’s not for everybody … I think everybody understands that there’s an opportunity here. Actually capitalizing upon it and then building a team and executing on it is not something that just anybody could possibly take on. And similarly, it is not something that every investor could just possibly take on in my opinion. They themselves need to have a vision on their side and not just believe our story. And they need to strategically be willing to help and put in the money and be there for the long haul.

It may be a long haul, but Gulati noted that “on a day-to-day basis, it’s really awesome to have mostly friends you work with.” With perhaps 100 employees by the end of the year and tens of millions of dollars already in the bank, they have their war chest and their army ready to go. Now comes the fun (and hard) part as we learn how the chips fall.

Nov 14, 2019

Adobe announces GA of customer data platform

The customer data platform (CDP) is the newest tool in the customer experience arsenal as big companies try to help customers deal with data coming from multiple channels. Today, Adobe announced the general availability of its CDP.

The CDP is like a central data warehouse for all the information you have on a single customer. This crosses channels like web, email, text, chat and brick-and-mortar in-person visits, as well as systems like CRM, e-commerce and point of sale. The idea is to pull all of this data together into a single record to help companies have a deep understanding of the customer at an extremely detailed level. They then hope to leverage that information to deliver highly customized cross-channel experiences.
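As a purely illustrative sketch (not Adobe’s implementation), the core idea is to fold events from many channels into one profile keyed on a shared identifier:

```python
# Purely illustrative sketch: merge events from several channels into a
# single profile keyed on a shared customer ID.
from collections import defaultdict

events = [
    {"customer_id": "c123", "channel": "web",   "page": "/pricing"},
    {"customer_id": "c123", "channel": "email", "campaign": "fall-promo"},
    {"customer_id": "c123", "channel": "pos",   "purchase": "sku-889"},
]

profiles = defaultdict(lambda: {"channels": set(), "events": []})
for event in events:
    profile = profiles[event["customer_id"]]
    profile["channels"].add(event["channel"])
    profile["events"].append(event)

# One unified record per customer, spanning every channel seen.
print(sorted(profiles["c123"]["channels"]))  # ['email', 'pos', 'web']
```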

The idea is to take all of this information and give marketers the tools they need to take advantage of it. “We want to make sure we create an offering that marketers can leverage and makes use of all of that goodness that’s living within Adobe Experience platform,” Nina Caruso, product marketing manager for Adobe Audience Manager, explained.

She said that would involve packaging and presenting the data in such a way to make it easier for marketers to consume, such as dashboards to deliver the data they want to see, while taking advantage of artificial intelligence and machine learning under the hood to help them find the data to populate the dashboards without having to do the heavy lifting.

Beyond that, having access to real-time streaming data in one place under the umbrella of the Adobe Experience Platform should enable marketers to create much more precise market segments. “Part of real-time CDP will be building productized primo maintained integrations for marketers to be able to leverage, so that they can take segmentations and audiences that they’ve built into campaigns and use those across different channels to provide a consistent customer experience across that journey life cycle,” Caruso said.

As you can imagine, bringing all of this information together, while providing a platform for customization for the customer, raises all kinds of security and privacy red flags at the same time. This is especially true in light of GDPR and the upcoming California privacy law. Companies need to be able to enforce data usage rules across the platform.

To that end, the company also announced the availability of Adobe Experience Platform Data Governance, which helps companies define a set of rules around data usage. This involves “frameworks that help [customers] enforce data usage policies and facilitate the proper use of their data to comply with regulations, obligations and restrictions associated with various data sets,” according to the company.

“We want to make sure that we offer our customers the controls in place to make sure that they have the ability to appropriately govern their data, especially within the evolving landscape that we’re all living in when it comes to privacy and different policies,” Caruso said.

These tools are now available to Adobe customers.
