Jun 12, 2019

RealityEngines.AI raises $5.25M seed round to make ML easier for enterprises

RealityEngines.AI, a research startup that wants to help enterprises make better use of AI, even when they only have incomplete data, today announced that it has raised a $5.25 million seed funding round. The round was led by former Google CEO and Chairman Eric Schmidt and Google founding board member Ram Shriram. Khosla Ventures, Paul Buchheit, Deepchand Nishar, Elad Gil, Keval Desai, Don Burnette and others also participated in this round.

The fact that the company was able to raise money from this rather prominent group of investors clearly shows that its overall thesis resonates. The company, which doesn’t have a product yet, tells me that it specifically wants to help enterprises make better use of the smaller and noisier data sets they have and provide them with state-of-the-art machine learning and AI systems that they can quickly take into production. It also aims to provide its customers with systems that can explain their predictions and are free of various forms of bias, something that’s hard to do when the system is essentially a black box.

As RealityEngines CEO Bindu Reddy, who was previously the head of products for Google Apps, told me, the company plans to use the funding to build out its research and development team. The company, after all, is tackling some of the most fundamental and hardest problems in machine learning right now — and that costs money. Some of these problems, like working with smaller data sets, already have partial solutions: generative adversarial networks, for example, can augment existing data sets, and RealityEngines expects to innovate on top of techniques like these.
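
Since generative augmentation is the concrete technique the company points to, here is a minimal sketch of the general idea in Python: fit a generative model to a small data set and sample synthetic rows from it. A simple Gaussian stands in for the far more capable GAN a production system would train; every name and number below is illustrative, and none of this is RealityEngines code.

```python
import numpy as np

# A small, real data set: 50 samples, 3 features (illustrative only).
rng = np.random.default_rng(0)
real = rng.normal(loc=[1.0, 5.0, -2.0], scale=[0.5, 1.0, 0.3], size=(50, 3))

# Fit a simple generative model of the data. A GAN would learn a far
# richer distribution; a Gaussian keeps this sketch short and runnable.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic rows from the fitted model and augment the data set.
synthetic = rng.multivariate_normal(mean, cov, size=200)
augmented = np.vstack([real, synthetic])

print(f"real: {real.shape[0]} rows, augmented: {augmented.shape[0]} rows")
```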

Reddy is also betting on reinforcement learning as one of the core machine learning techniques for the platform.

Once it has its product in place, the plan is to make it available as a pay-as-you-go managed service that will make machine learning more accessible to large enterprises, but also to small and medium businesses, which increasingly need access to these tools to remain competitive.

Jun 10, 2019

Qubole launches Quantum, its serverless database engine

Qubole, the data platform founded by Apache Hive creator and former head of Facebook’s Data Infrastructure team Ashish Thusoo, today announced the launch of Quantum, its first serverless offering.

Qubole may not necessarily be a household name, but its customers include the likes of Autodesk, Comcast, Lyft, Nextdoor and Zillow. For these users, Qubole has long offered a self-service platform that allows their data scientists and engineers to build their AI, machine learning and analytics workflows on the public cloud of their choice. The platform sits on top of open-source technologies like Apache Spark, Presto and Kafka.

Typically, enterprises have to provision a considerable amount of resources to run these platforms. Much of that capacity often goes unused, and the infrastructure can quickly become complex.

Qubole already abstracts most of this away, offering what is essentially a serverless platform. With Quantum, however, it is going a step further by launching a high-performance serverless SQL engine that allows users to query petabytes of data with nothing but ANSI SQL, giving them the choice between running their queries on a Presto cluster or on the serverless SQL engine.

The data can be stored on AWS, and users won’t have to set up a second data lake or move their data to another platform to use the SQL engine. Quantum automatically scales up or down as needed, of course, and users can still work with the same metastore for their data, no matter whether they choose the clustered or the serverless option. Indeed, Quantum is essentially just another SQL engine within Qubole’s overall suite of engines.

Typically, Qubole charges enterprises by compute minutes. When using Quantum, the company uses the same metric, but enterprises pay for the execution time of the query. “So instead of the Qubole compute units being associated with the number of minutes the cluster was up and running, it is associated with the Qubole compute units consumed by that particular query or that particular workload, which is even more fine-grained,” Thusoo explained. “This works really well when you have to do interactive workloads.”
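
Thusoo’s example is easy to put in numbers. Here is a back-of-the-envelope sketch of the two billing models; all rates, durations and workloads below are hypothetical placeholders, not Qubole’s actual pricing.

```python
# Hypothetical comparison of cluster-based vs. per-query billing.
# All rates and durations are made up for illustration.

RATE_PER_COMPUTE_UNIT = 0.10  # $ per Qubole compute unit (hypothetical)
UNITS_PER_MINUTE = 2          # compute units consumed per minute (hypothetical)

# Cluster model: you pay for the whole time the cluster is up,
# even if queries only run for a fraction of it.
cluster_uptime_minutes = 480  # cluster up for an 8-hour workday
cluster_cost = cluster_uptime_minutes * UNITS_PER_MINUTE * RATE_PER_COMPUTE_UNIT

# Quantum-style model: you pay only for query execution time.
query_execution_minutes = [3, 7, 12, 5]  # four interactive queries
query_cost = sum(query_execution_minutes) * UNITS_PER_MINUTE * RATE_PER_COMPUTE_UNIT

print(f"cluster billing:   ${cluster_cost:.2f}")
print(f"per-query billing: ${query_cost:.2f}")
```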

Thusoo notes that Quantum is targeted at analysts who often need to perform interactive queries on data stored in object stores. Qubole integrates with services like Tableau and Looker (which Google is now in the process of acquiring). “They suddenly get access to very elastic compute capacity, but they are able to come through a very familiar user interface,” Thusoo said.


Jun 10, 2019

With Tableau and MuleSoft, Salesforce gains full view of enterprise data

Back in the 2010 timeframe, it was common to say that content was king, but after watching Google buy Looker for $2.6 billion last week and Salesforce nab Tableau for $15.7 billion this morning, it’s clear that data has ascended to the throne in a business context.

We have been hearing about Big Data for years, but we’ve probably reached a point in 2019 where the data onslaught is really having an impact on business. If you can find the key data nuggets in the big data pile, it can clearly be a competitive advantage, and companies like Google and Salesforce are pulling out their checkbooks to make sure they are in a position to help you out.

While Google, as a cloud infrastructure vendor, is trying to help companies on its platform and across clouds understand and visualize all that data, Salesforce, as a SaaS vendor, might have a different reason — one that might surprise you, given that Salesforce was born in the cloud. But perhaps it recognizes something fundamental: if it truly wants to own the enterprise, it has to have a hybrid story, and with MuleSoft and Tableau, that’s precisely what it has — and why it was willing to spend around $23 billion to get it.

Making connections

Certainly, Salesforce chairman Marc Benioff has no trouble seeing the connections between his two big purchases over the last year. He sees the combination of MuleSoft connecting to the data sources and Tableau providing a way to visualize that data as a “beautiful thing.”

Jun 10, 2019

Salesforce is buying data visualization company Tableau for $15.7B in all-stock deal

On the heels of Google buying analytics startup Looker last week for $2.6 billion, Salesforce today announced a huge piece of news in a bid to step up its own work in data visualization and (more generally) tools to help enterprises make sense of the sea of data that they use and amass: Salesforce is buying Tableau for $15.7 billion in an all-stock deal.

Tableau is publicly traded, and the deal will see shares of Tableau Class A and Class B common stock exchanged for 1.103 shares of Salesforce common stock each, the company said; the $15.7 billion figure is the enterprise value of the transaction, based on the average price of Salesforce’s shares as of June 7, 2019.
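
To make the deal math concrete, here is a quick sketch. The 1.103 exchange ratio comes from the announcement, but the Salesforce share price below is an assumed placeholder, not the actual June 7 average.

```python
# Implied value of one Tableau share under the all-stock deal terms.
# The 1.103 exchange ratio is from the announcement; the Salesforce
# share price below is an assumed placeholder for illustration only.

EXCHANGE_RATIO = 1.103          # Salesforce shares per Tableau share
salesforce_avg_price = 152.00   # assumed, not the real June 7 average

implied_tableau_price = EXCHANGE_RATIO * salesforce_avg_price
print(f"implied value per Tableau share: ${implied_tableau_price:.2f}")
```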

This is a huge jump over Tableau’s last market cap: it was valued at $10.79 billion at the close of trading Friday, according to figures on Google Finance. (Also: trading in its stock has been halted in light of this news.)

The two boards have already approved the deal, Salesforce notes. The two companies’ management teams will be hosting a conference call at 8am Eastern and I’ll listen in to that as well to get more details.

This is a huge deal for Salesforce as it continues to diversify beyond CRM software and into deeper layers of analytics.

The company reportedly worked hard to buy LinkedIn, but ultimately lost out to Microsoft. And while there isn’t a whole lot in common between LinkedIn and Tableau, this deal will also help Salesforce extend its engagement (and data intelligence) with the customers it already has — something LinkedIn would have helped it do, too.

This also looks like a move designed to help Salesforce bulk up against Google’s acquisition of Looker, announced last week — although I’d argue that analytics is a big enough area that every major tech company courting enterprises is getting its ducks in a row with stronger strategies (and products) here. It’s unclear whether the two deals were made in response to each other, although it seems that Salesforce has been eyeing Tableau for years.

“We are bringing together the world’s #1 CRM with the #1 analytics platform. Tableau helps people see and understand data, and Salesforce helps people engage and understand customers. It’s truly the best of both worlds for our customers–bringing together two critical platforms that every customer needs to understand their world,” said Marc Benioff, chairman and co-CEO, Salesforce, in a statement. “I’m thrilled to welcome Adam and his team to Salesforce.”

Tableau has about 86,000 business customers, including Charles Schwab, Verizon (which owns TC), Schneider Electric, Southwest and Netflix. Salesforce said Tableau will operate independently and under its own brand post-acquisition. It will also remain headquartered in Seattle, Wash., headed by CEO Adam Selipsky along with others on the current leadership team.

Indeed, later during the call, Benioff let drop that Seattle would become Salesforce’s official second headquarters with the closing of this deal.

That’s not to say, though, that the two will not be working together.

On the contrary, Salesforce is already talking up the possibilities of expanding what it is doing with its Einstein platform (launched back in 2016, Einstein is the home of all of Salesforce’s AI-based initiatives) and with “Customer 360,” the company’s take on omnichannel sales and marketing. The latter is an obvious and complementary product home, given that one huge aspect of Tableau’s service is to provide “big picture” insights.

“Joining forces with Salesforce will enhance our ability to help people everywhere see and understand data,” said Selipsky. “As part of the world’s #1 CRM company, Tableau’s intuitive and powerful analytics will enable millions more people to discover actionable insights across their entire organizations. I’m delighted that our companies share very similar cultures and a relentless focus on customer success. I look forward to working together in support of our customers and communities.”

“Salesforce’s incredible success has always been based on anticipating the needs of our customers and providing them the solutions they need to grow their businesses,” said Keith Block, co-CEO, Salesforce. “Data is the foundation of every digital transformation, and the addition of Tableau will accelerate our ability to deliver customer success by enabling a truly unified and powerful view across all of a customer’s data.”

Jun 6, 2019

Daily Crunch: Google is acquiring Looker

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Google to acquire analytics startup Looker for $2.6 billion

Google Cloud has been mired in third place in the cloud infrastructure market, and grabbing Looker gives it an analytics company with a solid track record. The startup has raised more than $280 million in funding.

Like other big acquisitions, this deal is subject to regulatory approval, but it is expected to close later this year if all goes well.

2. Uber Copter offers on-demand JFK helicopter service for top-tier users

Uber is adding regular helicopter air service with Uber Copter — a new service line launched today that will provide on-demand transportation from Lower Manhattan to JFK airport for, on average, between $200 and $225 per person. That price includes car service to and from the helipad at each end.

3. In trying to clear ‘confusion’ over anti-harassment policy, YouTube creates more confusion

After a series of tweets that made it seem as if YouTube was ignoring its own anti-harassment policies, the video platform published a blog post in an attempt to clarify its stance. Instead, the post raises more questions about YouTube’s commitment to fighting harassment and hate speech on its platform.

4. Sources: Bird is in talks to acquire scooter startup Scoot

The stage of the negotiations is not clear, but it sounds like the deal is not closed. Both Scoot and Bird declined to comment.

5. Apple’s global accessibility head on the company’s new features for iOS 13 and macOS Catalina

“One of the things that’s been really cool this year is the [accessibility] team has been firing on [all] cylinders across the board,” Apple’s Sarah Herrlinger told us. “There’s something in each operating system and things for a lot of different types of use cases.”

6. A first look at Amazon’s new delivery drone

The drone has an ingenious hexagonal hybrid design with very few moving parts, and Amazon says it’s chock-full of sensors and a suite of compute modules to keep the drone safe.

7. This year’s Computex was a wild ride with dueling chip releases, new laptops and 467 startups

Computex picked up the pace this year, with dueling chip launches by rivals AMD and Intel and a slew of laptop releases. (Extra Crunch membership required.)

Jun 6, 2019

Google to acquire analytics startup Looker for $2.6 billion

Google made a big splash this morning when it announced it’s going to acquire Looker, a hot analytics startup that’s raised more than $280 million. It’s paying $2.6 billion for the privilege and adding the company to Google Cloud.

Thomas Kurian, the man who was handed the reins to Google Cloud at the end of last year, sees the two companies bringing together a complete data analytics solution for customers. “The combination provides an end-to-end analytics platform to connect, collect, analyze and visualize data across Google Cloud, Azure, AWS, on-premises databases and ISV applications,” Kurian explained at a media event this morning.

Google Cloud has been mired in third place in the cloud infrastructure market, and grabbing Looker gives it an analytics company with a solid track record. The last time I spoke to Looker, it was announcing a hefty $103 million in funding on a $1.6 billion valuation. Today’s price is a nice even billion over that.

As I wrote at the time, Looker’s CEO Frank Bien wasn’t all that interested in bragging about valuations; he wanted to talk about what he considered more important numbers:

He reported that the company has 1,600 customers now and just crossed the $100 million revenue run rate, a significant milestone for any enterprise SaaS company. What’s more, Bien reports revenue is still growing 70 percent year over year, so there’s plenty of room to keep this going.

Today, in a media briefing on the deal, he said that from the start, his company was really trying to disrupt the business intelligence and analytics market. “What we wanted to do was disrupt this pretty staid ecosystem of data visualization tools and data prep tools that companies were being forced to build solutions [with]. We thought it was time to rationalize a new platform for data, a single place where we could really reconstitute a single view of information and make it available in the enterprise for business purposes,” he said.

Slide: Google & Looker

Bien saw today’s deal as a chance to gain the scale of the Google cloud platform, and as successful as the company has been, it’s never going to have the reach of Google Cloud. “What we’re really leveraging here, and I think the synergy with Google Cloud, is that this data infrastructure revolution and what really emerged out of the Big Data trend was very fast, scalable — and now in the cloud — easy to deploy data infrastructure,” he said.


Kurian also emphasized that the company intends to support multiple databases and multiple deployment strategies, whether multi-cloud, hybrid or on-premises.

Perhaps it’s not a coincidence that Google went after Looker: the two companies had a strong existing partnership and 350 common customers, according to Google. “We have many common customers we’ve worked with. One of the great things about this acquisition is that the two companies have known each other for a long time, we share very common culture,” Kurian said.

This is a huge deal for Google Cloud, easily topping the $625 million it paid for Apigee in 2016, and it marks the first major acquisition of the Kurian era as Google tries to beef up its market share. While the two companies share common customers, the addition of Looker should bring a net gain of customers, and Google could then upsell the rest of the Looker customer base on its other services.

Per usual, this deal is going to be subject to regulatory approval, but it is expected to close later this year if all goes well.

Jun 5, 2019

Google Cloud gets capacity reservations, extends committed use discounts beyond CPUs

Google Cloud made two significant pricing announcements today. Those, you’ll surely be sad to hear, don’t involve the usual price drops for compute and storage. Instead, Google Cloud today announced that it is extending its committed-use discounts (which give you a significant discount when you commit to using a certain amount of resources for one or three years) to GPUs, Cloud TPU Pods and local SSDs. In return for locking yourself into a long-term plan, you can get discounts of up to 55% off on-demand prices.

In addition, Google is launching a capacity reservation system for Compute Engine that allows users to reserve resources in a specific zone for later use to ensure that they have guaranteed access to these resources when needed.

At first glance, capacity reservations may seem like a weird concept in the cloud. The promise of cloud computing, after all, is that you can just spin machines up and down at will — and never really have to think about availability.

So why launch a reservation system? “This is ideal for use cases like disaster recovery or peace of mind, so a customer knows that they have some extra resources, but also for retail events like Black Friday or Cyber Monday,” Google senior product manager Manish Dalwadi told me.

These users want absolute certainty that when they need the resources, they will be available. And while many of us think of the large clouds as having a virtually infinite number of virtual machines available at any time, some machine types may occasionally only be available in an availability zone other than the one where the rest of your compute resources run.

Users can create or delete reservations at any time, and any existing discounts — including sustained-use and committed-use discounts — will be applied automatically.

As for committed-use discounts, it’s worth noting that Google has always taken a pretty flexible approach to them. Users don’t have to commit to using a specific machine type for three years, for example; instead, they commit to using a certain number of CPU cores and a certain amount of memory.

“What we heard from customers was that other commit models are just too inflexible and their utilization rates were very low, like 70, 60% utilization,” Google product director Paul Nash told me. “So one of our design goals with committed-use discounts was to figure out how we could provide something that gives us the capacity planning signal that we need, provides the same amount of discounts that we want to pass on to customers, but do it in a way that customers actually feel like they are getting a great deal and so that they don’t have to hyper-manage these things in order to get the most out of them.”
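
A rough sketch of the trade-off Nash describes follows. Only the up-to-55% discount and the 60–70% utilization figures come from the post; the rates and resource counts below are placeholders, not Google’s pricing.

```python
# Hypothetical comparison of on-demand vs. committed-use pricing,
# factoring in how much of the commitment actually gets used.
# All rates and resource counts are illustrative placeholders.

ON_DEMAND_RATE = 1.00   # $ per vCPU-hour (hypothetical)
COMMIT_DISCOUNT = 0.55  # up to 55% off, per the announcement

committed_vcpus = 100
hours = 730             # roughly one month
utilization = 0.65      # Nash cites 60-70% utilization on inflexible plans

# On demand: pay only for the capacity actually used.
on_demand_cost = committed_vcpus * utilization * hours * ON_DEMAND_RATE

# Committed: pay for the full commitment, but at the discounted rate.
committed_cost = committed_vcpus * hours * ON_DEMAND_RATE * (1 - COMMIT_DISCOUNT)

print(f"on-demand: ${on_demand_cost:,.0f}")
print(f"committed: ${committed_cost:,.0f}")
```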

Both the extended committed-use discounts and the new capacity reservation system for Compute Engine resources are now live on Google Cloud.

Jun 4, 2019

How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure that runs everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open-sourced five years ago.

Today, Kubernetes is the fastest-growing open-source project, and earlier this month the twice-yearly KubeCon + CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making it the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who later founded his own startup, Heptio, which he sold to VMware); Tim Hockin, another Googler who was an early member of the project and also worked on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, sold it to Microsoft, and is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.

May 23, 2019

Serverless and containers: Two great technologies that work better together

Cloud-native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions the exact amount of resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, the approach may not work for every situation, in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud-native world, you should be able to develop code once and run it anywhere, on-prem or on any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right amount of resources to run a particular workload for the right amount of time, and no more.

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of the approach. But as one engineer pointed out, there is never a free lunch in processes this complex, and serverless won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition. “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time, or the need to deploy it in a specific location. He says in reality it won’t be fully automated, at least not while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.
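
To make Whittington’s point concrete, here is a sketch of the kind of per-container requirements a developer might still want to hand a vendor. The field names are invented for illustration; they don’t correspond to Kubernetes or any real scheduler’s API.

```python
from dataclasses import dataclass

# Hypothetical knobs a developer might still need to tune per container,
# along the lines Whittington describes. Field names are invented for
# illustration; they are not a real scheduler's API.
@dataclass
class ContainerRequirements:
    max_cold_start_seconds: float  # bound on container load time
    kill_grace_seconds: float      # how long before the container is killed
    region: str                    # where the workload must run
    min_replicas: int = 0          # keep some capacity warm, if needed

checkout_service = ContainerRequirements(
    max_cold_start_seconds=0.5,
    kill_grace_seconds=30.0,
    region="us-east-1",
    min_replicas=2,
)
print(checkout_service)
```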

Vendors bringing solutions

The vendors are pitching in, trying to create tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project and, in essence, combines the benefits of serverless for developers with running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud-native technologies. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open-source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.

May 21, 2019

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud-native computing world. At KubeCon, the twice-yearly festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions, so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management, which Monroy argues are the most pressing problems right now. He stressed that this common interface still allows the different service mesh tools to innovate, and that developers can always work directly with their APIs when needed. He also noted that the Service Mesh Interface (SMI), as this new specification is called, does not provide any implementations of these features itself; it only defines a common set of APIs.
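
The SMI spec itself is defined as a set of Kubernetes custom resources rather than a programming-language API, so purely as an analogy for what a common interface over competing meshes buys developers, a sketch might look like this (all class and method names here are invented):

```python
from abc import ABC, abstractmethod

# An analogy for SMI's role: one interface, many mesh implementations.
# The real SMI spec is a set of Kubernetes custom resource definitions;
# this Python interface just illustrates the idea.
class ServiceMesh(ABC):
    @abstractmethod
    def apply_traffic_policy(self, service: str, policy: dict) -> None: ...

    @abstractmethod
    def get_telemetry(self, service: str) -> dict: ...

    @abstractmethod
    def split_traffic(self, service: str, weights: dict[str, int]) -> None: ...

class IstioMesh(ServiceMesh):
    def apply_traffic_policy(self, service, policy):
        print(f"[istio] policy for {service}: {policy}")
    def get_telemetry(self, service):
        return {"service": service, "p99_ms": 42}
    def split_traffic(self, service, weights):
        print(f"[istio] splitting {service}: {weights}")

# Application code targets the interface, not a specific mesh, so
# swapping Istio for Linkerd would not change this call site.
def canary_rollout(mesh: ServiceMesh, service: str):
    mesh.split_traffic(service, {"v1": 90, "v2": 10})

canary_rollout(IstioMesh(), "checkout")
```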

Currently, the best-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may bring a bit more competition to this market, since it will allow developers to bet on the overall idea of a service mesh rather than on a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services: the first alpha of the Helm 3 package manager, the 1.0 release of its Kubernetes extension for Visual Studio Code, and the general availability of its AKS virtual nodes, which use the open-source Virtual Kubelet project.
