Apr
05
2019
--

On balance, the cloud has been a huge boon to startups

Today’s startups have a distinct advantage when it comes to launching a company because of the public cloud. You don’t have to build infrastructure or worry about what happens when you scale too quickly. The cloud vendors take care of all that for you.

But last month, when Pinterest announced its IPO, the company’s cloud spend raised eyebrows. The company is spending $750 million a year on cloud services, specifically on AWS. When your business is primarily focused on photos and video and needs to scale on a regular basis, that bill is going to be high.

That price tag prompted Erica Joy, a Microsoft engineer, to publish a tweet and start a little internal debate here at TechCrunch. Startups, after all, have a dog in this fight, and it’s worth exploring whether the cloud is helping feed the startup ecosystem or sending bills soaring, as it has for Pinterest.

For starters, it’s worth pointing out that Ms. Joy works for Microsoft, which just happens to be a primary competitor of Amazon’s in the cloud business. Regardless of her personal feelings on the matter, I’m sure Microsoft would be more than happy to take over that $750 million bill from Amazon. It’s a nice chunk of business, but all that aside, do startups benefit from having access to cloud vendors?

Feb
20
2019
--

Google’s hybrid cloud platform is now in beta

Last July, at its Cloud Next conference, Google announced the Cloud Services Platform, its first real foray into bringing its own cloud services into the enterprise data center as a managed service. Today, the Cloud Services Platform (CSP) is launching into beta.

It’s important to note that the CSP isn’t — at least for the time being — Google’s way of bringing all of its cloud-based developer services to the on-premises data center. In other words, this is a very different project from something like Microsoft’s Azure Stack. Instead, the focus is on the Google Kubernetes Engine, which allows enterprises to run their applications both in their own data centers and on virtually any cloud platform that supports containers.

As Google Cloud engineering director Chen Goldberg told me, the idea here is to help enterprises innovate and modernize. “Clearly, everybody is very excited about cloud computing, on-demand compute and managed services, but customers have recognized that the move is not that easy,” she said, noting that the vast majority of enterprises are adopting a hybrid approach. And while containers are still a relatively new technology, she feels good about this bet because most enterprises are already adopting containers and Kubernetes — and they are doing so at exactly the same time as they are adopting cloud, and especially hybrid clouds.

It’s important to note that CSP is a managed platform. Google handles all of the heavy lifting like upgrades and security patches. And for enterprises that need an easy way to install some of the most popular applications, the platform also supports Kubernetes applications from the GCP Marketplace.

As for the tech itself, Goldberg stressed that this isn’t just about Kubernetes. The service also uses Istio, for example, the increasingly popular service mesh that makes it easier for enterprises to secure and control the flow of traffic and API calls between their applications.

With today’s release, Google is also launching its new CSP Config Management tool to help users create multi-cluster policies and set up and enforce access controls, resource quotas and more. CSP also integrates with Google’s Stackdriver Monitoring service and continuous delivery platforms.
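
Google hasn’t published the full interface of CSP Config Management in this announcement, but the objects such a tool manages are standard Kubernetes resources. As a hedged illustration of what enforcing one of the policies mentioned above (a resource quota) across clusters involves, here is a minimal sketch using the official Kubernetes Python client; the cluster context names, namespace and quota values are assumptions for illustration, not anything Google ships.

```python
# Minimal sketch (not CSP Config Management's actual API): applying the same
# Kubernetes ResourceQuota to several clusters via the official Python client.
# Context names, namespace and limits below are illustrative assumptions.
from kubernetes import client, config

def apply_team_quota(context_name: str, namespace: str = "team-a") -> None:
    # Load credentials for one cluster from the local kubeconfig; a
    # multi-cluster policy tool would loop over every registered cluster.
    config.load_kube_config(context=context_name)
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "10", "requests.memory": "20Gi", "pods": "50"}
        ),
    )
    client.CoreV1Api().create_namespaced_resource_quota(namespace, quota)

for ctx in ["gke-cloud-cluster", "onprem-cluster"]:  # hypothetical contexts
    apply_team_quota(ctx)
```

The value of a managed tool is everything around this loop: keeping the policy in one place, reconciling drift and enforcing it on clusters as they join.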

“On-prem is not easy,” Goldberg said, and given that this is the first time the company is really supporting software in a data center that is not its own, that’s probably an understatement. But Google also decided that it didn’t want to force users into a specific set of hardware specifications like Azure Stack does, for example. Instead, CSP sits on top of VMware’s vSphere server virtualization platform, which most enterprises already use in their data centers anyway. That surely simplifies things, given that this is a very well-understood platform.

Feb
19
2019
--

Redis Labs raises a $60M Series E round

Redis Labs, a startup that offers commercial services around the Redis in-memory data store (and which counts Redis creator and lead developer Salvatore Sanfilippo among its employees), today announced that it has raised a $60 million Series E funding round led by private equity firm Francisco Partners.

The firm didn’t participate in any of Redis Labs’ previous rounds, but existing investors Goldman Sachs Private Capital Investing, Bain Capital Ventures, Viola Ventures and Dell Technologies Capital all participated in this round.

In total, Redis Labs has now raised $146 million and the company plans to use the new funding to accelerate its go-to-market strategy and continue to invest in the Redis community and product development.

Current Redis Labs users include the likes of American Express, Staples, Microsoft, Mastercard and Atlassian. In total, the company now has more than 8,500 customers. Because Redis is pretty flexible, these customers use it as a database, cache or message broker, depending on their needs. The company’s flagship product is Redis Enterprise, which extends the open-source Redis platform with additional tools and services for enterprises. The company offers managed cloud services, which give businesses the choice of hosting on public clouds like AWS, GCP and Azure, or in their own private clouds, in addition to traditional software downloads and licenses for self-managed installs.
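
To make that flexibility concrete, here is a minimal sketch of those three roles using the open-source redis-py client against a local server; it isn’t specific to Redis Enterprise, and the key names are illustrative.

```python
# Minimal sketch of Redis's three common roles, using the open-source
# redis-py client against a local server; key names are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# 1. Database: persist a simple record as a hash.
r.hset("user:42", mapping={"name": "Ada", "plan": "pro"})

# 2. Cache: store a computed value with a 60-second TTL.
r.setex("homepage:rendered", 60, "<html>...</html>")

# 3. Message broker: push work onto a list that a worker pops from.
r.lpush("jobs", "resize-image:42")
job = r.brpop("jobs", timeout=5)  # blocks until a job arrives
print(job)
```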

Redis Labs CEO Ofer Bengal told me the company isn’t cash flow positive yet. He also noted that the company didn’t need to raise this round, but that he decided to do so in order to accelerate growth. “In this competitive environment, you have to spend a lot and push hard on product development,” he said.

He also stressed that Francisco Partners has a reputation for taking companies forward, and that the logical next step for Redis Labs would be an IPO. “We think that we have a very unique opportunity to build a very large company that deserves an IPO,” he said.

Part of this new competitive environment also involves competitors that use other companies’ open-source projects to build their own products without contributing back. Redis Labs was one of the first of a number of open-source companies to offer its newest releases under a new license that still allows developers to modify the code, but that forces competitors who want to essentially resell it to buy a commercial license. Bengal specifically noted AWS in this context. It’s worth noting that this isn’t about the Redis database itself but about the additional modules that Redis Labs built. Redis Enterprise itself is closed-source.

“When we came out with this new license, there were many different views,” he acknowledged. “Some people condemned that. But after the initial noise calmed down — and especially after some other companies came out with a similar concept — the community now understands that the original concept of open source has to be fixed because it isn’t suitable anymore to the modern era where cloud companies use their monopoly power to adopt any successful open source project without contributing anything to it.”

Oct
28
2018
--

Forget Watson, the Red Hat acquisition may be the thing that saves IBM

With its latest $34 billion acquisition of Red Hat, IBM may have found something more elementary than “Watson” to save its flagging business.

Though the acquisition of Red Hat is by no means a guaranteed victory for the Armonk, N.Y.-based computing company, which has had more downs than ups over the past five years, it seems to be a better bet for “Big Blue” than an artificial intelligence program that was always more hype than reality.

Indeed, commentators are already noting that this may be a case where IBM finally hangs up the Watson hat and returns to the enterprise software and services business that has always been its core competency (albeit one that has been weighted far more heavily toward consulting services — to the detriment of the company’s business).

Watson, the business division focused on artificial intelligence whose public claims were always more marketing than actually market-driven, has not performed as well as IBM had hoped and investors were losing their patience.

Critics — including analysts at the investment bank Jefferies (as early as one year ago) — were skeptical of Watson’s ability to deliver IBM from its business woes.

As we wrote at the time:

Jefferies pulls from an audit of a partnership between IBM Watson and MD Anderson as a case study for IBM’s broader problems scaling Watson. MD Anderson cut its ties with IBM after wasting $60 million on a Watson project that was ultimately deemed “not ready for human investigational or clinical use.”

The MD Anderson nightmare doesn’t stand on its own. I regularly hear from startup founders in the AI space that their own financial services and biotech clients have had similar experiences working with IBM.

The narrative isn’t the product of any single malfunction, but rather the result of overhyped marketing, deficiencies in operating with deep learning and GPUs and intensive data preparation demands.

That’s not the only trouble IBM has had with Watson’s healthcare results. Earlier this year, the online medical journal Stat reported that Watson was giving clinicians recommendations for cancer treatments that were “unsafe and incorrect” — based on the training data it had received from the company’s own engineers and doctors at Sloan-Kettering who were working with the technology.

All of these woes were reflected in the company’s latest earnings call, where it reported falling revenues, primarily from the Cognitive Solutions business, which includes Watson’s artificial intelligence and supercomputing services. Though IBM’s chief financial officer pointed to “mid-to-high” single-digit growth from Watson’s health business in the quarter, the transaction processing software business fell by 8 percent, and the company’s suite of hosted software services is basically an afterthought for businesses gravitating to Microsoft, Alphabet and Amazon for cloud services.

To be sure, Watson is only one of the segments that IBM had been hoping to tap for its future growth; and while it was a huge investment area for the company, the company always had its eyes partly fixed on the cloud computing environment as it looked for areas of growth.

It’s this area of cloud computing where IBM hopes that Red Hat can help it gain ground.

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” said Ginni Rometty, IBM Chairman, President and Chief Executive Officer, in a statement announcing the acquisition. “IBM will become the world’s number-one hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.”

The acquisition also puts an incredible amount of marketing power behind Red Hat’s various open source services business — giving all of those IBM project managers and consultants new projects to pitch and maybe juicing open source software adoption a bit more aggressively in the enterprise.

As Red Hat chief executive Jim Whitehurst told TheStreet in September, “The big secular driver of Linux is that big data workloads run on Linux. AI workloads run on Linux. DevOps and those platforms, almost exclusively Linux,” he said. “So much of the net new workloads that are being built have an affinity for Linux.”

Jul
24
2018
--

Google Cloud goes all-in on hybrid with its new Cloud Services Platform

The cloud isn’t right for every business, whether because of latency constraints at the edge, regulatory requirements or because it’s simply cheaper for some companies to own and operate their own data centers for their specific workloads. Given this, it’s maybe no surprise that the vast majority of enterprises today use both public and private clouds in parallel. That’s something Microsoft has long been betting on as part of its strategy for its Azure cloud, and Google, too, is now taking a number of steps in this direction.

With the open-source Kubernetes project, Google launched one of the fundamental building blocks that make running and managing applications in hybrid environments easier for large enterprises. What Google hadn’t done until today, though, is launch a comprehensive solution that includes all of the necessary parts for this kind of deployment. With its new Cloud Services Platform, though, the company is now offering businesses an integrated set of cloud services that can be deployed on both the Google Cloud Platform and in on-premise environments.

As Google Cloud engineering director Chen Goldberg noted in a press briefing ahead of today’s announcement, many businesses also simply want to be able to manage their own workloads on-premise but still be able to access new machine learning tools in the cloud, for example. “Today, to achieve this, use cases involve a compromise between cost, consistency, control and flexibility,” she said. “And this all negatively impacts the desired result.”

Goldberg stressed that the idea behind the Cloud Services Platform is to meet businesses where they are and then allow them to modernize their stack at their own pace. But she also noted that businesses want more than just the ability to move workloads between environments. “Portability isn’t enough,” she said. “Users want consistent experiences so that they can train their team once and run anywhere — and have a single playbook for all environments.”

The two services at the core of this new offering are the Kubernetes container orchestration tool and Istio, a relatively new but quickly growing tool for connecting, managing and securing microservices. Istio is about to hit its 1.0 release.

We’re not simply talking about a collection of open-source tools here. The core of the Cloud Services Platform, Goldberg noted, is “custom configured and battle-tested for enterprises by Google.” In addition, it is deeply integrated with other services in the Google Cloud, including the company’s machine learning tools.

GKE On-Prem

Among these new custom-configured tools are a number of new offerings, which are all part of the larger platform. Maybe the most interesting of these is GKE On-Prem. GKE, the Google Kubernetes Engine, is the core Google Cloud service for managing containers in the cloud. And now Google is essentially bringing this service to the enterprise data center, too.

The service includes access to all of the usual features of GKE in the cloud, including the ability to register and manage clusters and monitor them with Stackdriver, as well as identity and access management. It also includes a direct line to the GCP Marketplace, which recently launched support for Kubernetes-based applications.

Using the GCP Console, enterprises can manage both their on-premises and cloud GKE clusters without having to switch between different environments. GKE On-Prem connects seamlessly to a Google Cloud Platform environment and looks and behaves exactly like the cloud version.
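
That consistency shows up at the API level, too: because GKE On-Prem exposes standard Kubernetes endpoints, the same client code can address both clusters, differing only by kubeconfig context. Here is a minimal sketch using the official Kubernetes Python client; the context names are assumptions for illustration.

```python
# Sketch of the "train once, run anywhere" idea at the API level: since
# GKE On-Prem is standard Kubernetes, identical client code works against
# the cloud and on-prem clusters. Context names below are assumptions.
from kubernetes import client, config

for ctx in ["gke_my-project_us-central1_cloud", "gke-onprem-datacenter"]:
    config.load_kube_config(context=ctx)  # pick one cluster from kubeconfig
    v1 = client.CoreV1Api()
    nodes = v1.list_node().items
    print(f"{ctx}: {len(nodes)} nodes")
    for pod in v1.list_pod_for_all_namespaces().items[:3]:
        print("  ", pod.metadata.namespace, pod.metadata.name)
```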

Enterprise users also can get access to professional services and enterprise-grade support for help with managing the service.

“Google Cloud is the first and only major cloud vendor to deliver managed Kubernetes on-prem,” Goldberg argued.

GKE Policy Management

Related to this, Google also today announced GKE Policy Management, which is meant to provide Kubernetes administrators with a single tool for managing all of their security policies across clusters. It’s agnostic as to where the Kubernetes cluster is running, but you can use it to port your existing Google Cloud identity-based policies to these clusters. This new feature will soon launch in alpha.

Managed Istio

The other major new service Google is launching is Managed Istio (together with Apigee API Management for Istio) to help businesses manage and secure their microservices. The open source Istio service mesh gives admins and operators the tools to manage these services and, with this new managed offering, Google is taking the core of Istio and making it available as a managed service for GKE users.

With this, users get access to Istio’s service discovery mechanisms and its traffic management tools for load balancing and routing traffic to containers and VMs, as well as its tools for getting telemetry back from the workloads that run on these clusters.
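
To make the traffic-management piece concrete: Istio is configured through Kubernetes custom resources, so a routing rule is just another API object. Below is a minimal sketch, applied with the Kubernetes Python client, of a canary-style VirtualService that splits traffic 90/10 between two versions of a service; the service and subset names are assumptions for illustration.

```python
# Illustration of Istio traffic routing: a VirtualService sending 90% of
# traffic to subset v1 and 10% to a canary v2, created via the generic
# Kubernetes custom-objects API. Service/subset names are assumptions.
from kubernetes import client, config

config.load_kube_config()

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```

The same object could be applied unchanged to a cloud or on-prem cluster, which is the consistency argument Goldberg makes above.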

In addition to these three main new services, Google is also launching a couple of auxiliary tools around GKE and the serverless computing paradigm today. The first of these is the GKE serverless add-on, which makes it easy to run serverless workloads on GKE with a single-step deploy process. This, Google says, will allow developers to go from source code to container “instantaneously.” The tool is currently available as a preview, and Google is making parts of this technology available under the umbrella of its new Knative open-source project; these are the same components that make the serverless add-on possible.

And to wrap it all up, Google also today mentioned a new fully managed continuous integration and delivery service, Google Cloud Build, though the details around this service remain under wraps.

So there you have it. By themselves, all of these announcements may seem a bit esoteric. As a whole, though, they show how Google’s bet on Kubernetes is starting to pay off. As businesses opt for containers to deploy and run their new workloads (and maybe even bring older applications into the cloud), GKE has put Google Cloud on the map as a place to run them in a hosted environment. Now it makes sense for Google to extend this to its users’ data centers, too. With managed Kubernetes offerings from large and small companies like SUSE, Platform9 and Containership, this is starting to become a big business. It’s no surprise the company that started it all wants to get a piece of this pie, too.

Apr
04
2018
--

AWS adds automated point-in-time recovery to DynamoDB

One of the joys of cloud computing is handing over your data to the cloud vendor and letting them handle the heavy lifting. Up until now that has meant they updated the software or scaled the hardware for you. Today, AWS took that to another level when it announced Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR).

With this new service, the company lets you simply enable the new backup tool, and the backup happens automatically. Amazon takes care of the rest, providing a continuous backup of all the data in your DynamoDB database.

But it doesn’t stop there; the backup system also acts as a recording of sorts. You can rewind your data set to any point in time in the backup, with “per second granularity,” up to 35 days in the past. What’s more, you can access the tool from the AWS Management Console, via an API call or via the AWS Command Line Interface (CLI).
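
For the API route, the flow looks roughly like the following sketch using boto3, the AWS SDK for Python; the table names and restore timestamp are assumptions. Note that a point-in-time restore always writes to a new table rather than overwriting the source.

```python
# Sketch of the flow described above, using boto3 (the AWS SDK for Python).
# Table names and the restore timestamp are illustrative assumptions.
from datetime import datetime, timedelta
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Enable continuous backups / point-in-time recovery on an existing table.
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Later: restore the table as it was two hours ago into a new table.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="orders",
    TargetTableName="orders-recovered",
    RestoreDateTime=datetime.utcnow() - timedelta(hours=2),
)
```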


“We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict,” Amazon’s Randall Hunt wrote in the blog post announcing the new feature.

If you’re concerned about the 35-day limit, you needn’t be: the system is an adjunct to your regular on-demand backups, which you can keep for as long as you need.

Amazon’s chief technology officer, Werner Vogels, who introduced the new service at the AWS Summit in San Francisco today, said it doesn’t matter how much data you have. Even with a terabyte of data, you can make use of this service. “This is a truly powerful mechanism here,” Vogels said.

The new service is available in various regions today. You can learn about regional availability and pricing options here.

Mar
28
2018
--

GoDaddy to move most of its infrastructure to AWS, not including domain management for its 75M domains

It really is Go Time for GoDaddy. Amazon’s cloud services arm AWS and GoDaddy, the domain registration and management giant, may have competed in the past when it comes to providing small businesses with web services, but today the two took a step closer together. AWS said that GoDaddy is now migrating “the majority” of its infrastructure to AWS in a multi-year deal that will also see AWS reselling some of GoDaddy’s products — namely Managed WordPress and GoCentral, for managing domains and for building and running websites.

The deal — financial terms of which are not being disclosed — is wide-ranging, but it will not include taking on domain management for GoDaddy’s 75 million domains currently under management, a spokesperson for the company confirmed to me.

“GoDaddy is not migrating the domains it manages to AWS,” said Dan Race, GoDaddy’s VP of communications. “GoDaddy will continue to manage all customer domains. Domain management is obviously a core business for GoDaddy.”

The move underscores Amazon’s continuing expansion as a powerhouse in cloud hosting and related services, providing a one-stop shop for customers who come for one product and stay for everything else (not unlike its retail strategy in that regard). It is also a reminder of how the economies of scale in the cloud business make it financially challenging to compete if you are not already one of the big players, or lack deep pockets to sustain your business as you look to grow. GoDaddy has been a direct victim of those economics: just last summer, it killed off Cloud Servers, its AWS-style business for building, testing and scaling cloud services on GoDaddy infrastructure. It was also already hosting some services on AWS prior to this deal: its enterprise-grade Managed WordPress service, for example.

The AWS deal also highlights how GoDaddy is trimming operational costs to improve its overall balance sheet under Scott Wagner, the COO who took over as CEO from Blake Irving at the beginning of this year. 

“As a technology provider with more than 17 million customers, it was very important for GoDaddy to select a cloud provider with deep experience in delivering a highly reliable global infrastructure, as well as an unmatched track record of technology innovation, to support our rapidly expanding business,” said Charles Beadnall, CTO at GoDaddy, in a statement.

“AWS provides a superior global footprint and set of cloud capabilities, which is why we selected them to meet our needs today and into the future. By operating on AWS, we’ll be able to innovate at the speed and scale we need to deliver powerful new tools that will help our customers run their own ventures and be successful online,” he continued.

AWS said that GoDaddy will be using AWS’s Elastic Container Service for Kubernetes and Elastic Compute Cloud P3 instances, as well as machine learning, analytics, and other database-related and container technology. Race told TechCrunch that the infrastructure components that the company is migrating to AWS currently run at GoDaddy but will be gradually moved away as part of its multi-year migration.

“As a large, high-growth business, GoDaddy will be able to leverage AWS to innovate for its customers around the world,” said Mike Clayville, VP, worldwide commercial sales at AWS, in a statement. “Our industry-leading services will enable GoDaddy to leverage emerging technologies like machine learning, quickly test ideas, and deliver new tools and solutions to their customers with greater frequency. We look forward to collaborating with GoDaddy as they build anew in the cloud and innovate new solutions to help people turn their ideas into reality online.”

Mar
12
2018
--

Dropbox sets IPO range $16-18, valuing it below $10B, as Salesforce ponies up $100M

After announcing an IPO in February, today Dropbox updated its S-1 filing with pricing. The cloud services and storage company said that it expects to price its IPO at between $16 and $18 per share when it sells 36,000,000 shares to raise $648 million as “DBX” on the Nasdaq exchange.

In addition to that, Dropbox announced that it will be selling $100 million in stock to Salesforce — its new integration partner — right after the IPO, “at a price per share equal to the initial offering price.”

A specific date has not yet been set for Dropbox’s listing later this month.

The IPO pricing values the company at between $7 billion and nearly $8 billion when you factor in restricted stock units — making it the biggest tech IPO since Snap last year, but still falling well below the $10 billion valuation that Dropbox crept up to back in 2014 when it raised $350 million in venture funding.

Many will be watching Dropbox’s IPO to see if it stands up longer term and becomes a bellwether for the fortunes and fates of many other outsized “startups” that many have also been expecting to list, including those that have already filed to go public, like Spotify, as well as those that have yet to make any official pronouncements, like Airbnb.

Some might argue that it’s illogical to compare a company whose business model is built around cloud storage with a travel and accommodation business, or a music streaming platform. Perhaps especially now: at a time when people are still wincing from Snap’s drastic drop — the company is trading more than 30 percent down from its IPO debut — Dropbox presents a challenging picture.

On the plus side, the company has helped bring the concept of cloud storage services to the masses. Riding on the wave of mobile devices, lightweight apps, and faster internet connections, it has changed the conversation about how many conceive of handling their data and offloading it off of their devices. Today, Dropbox has more than 500 million users in more than 180 countries.

On the minus side, only around 11 million of those customers are paying users. The company reported around $1.1 billion in revenues in 2017, representing a rise on $845 million in 2016 and $604 million in 2015. But it’s unprofitable, reporting a loss of $112 million in 2017.

Again, that’s a large improvement when you compare it to Dropbox’s losses of $210 million in 2016 and $326 million in 2015. But it does raise more pressing questions: Does Dropbox have a big plan for how to convert more people into paying users? And will its investors have the patience to watch its business model play out?

In that regard, the Salesforce investment and integration, and its timing of being announced alongside the sober IPO range, is a notable vote of confidence in Dropbox. Salesforce has staked its whole business model around cloud services — its ticker may be “CRM,” but its logo is its name inside a cloud — and it’s passed into the pantheon of tech giants with flying colors.

Having Salesforce buy into Dropbox not only bolsters its new partner at this next phase; I’d argue it also gives Dropbox one potential exit strategy. Salesforce, after all, has been interested in playing more directly in this space for years at this point.

Mar
06
2018
--

Lucidworks launches site search as a service tool

Lucidworks has been helping large organizations with complex content, like Reddit, build search tools that reach across massive content stores, but the company wanted to make the underlying search technology available to a wider market. Today it released Lucidworks Site Search, a cloud service that enables companies to embed Lucidworks search in any application or website with a couple of lines of code.

It’s more of a pre-packaged solution, but it still takes advantage of the same natural language processing (NLP) and machine learning as its more complex and flexible cousin. It has been tuned specifically to engage the user in your site or application, and designed to provide a quick way to narrow their search based on factors you might know about them.

CEO Will Hayes says the company wanted to take the power of Fusion search and apply it to applications, particularly around site search. “What we have done is turn this into a SaaS service as a way to consume the Fusion data,” he said. “We have been building a smart data platform, and search is how you engage; ranking and relevance is how you push the best user experiences,” he added.

The approach is to make it as simple as possible to insert Lucidworks search into an application or website, simply by adding a couple of lines of JavaScript and then connecting some data. As soon as the data sources are configured, it’s basically ready to go, he said.
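
The embed itself is JavaScript, per the company, and Lucidworks’ actual endpoints and parameters aren’t documented in this post. So the snippet below is purely hypothetical: it only illustrates the request/response shape of querying a “search as a service” backend, with an invented URL and invented parameter and field names.

```python
# Hypothetical sketch only: the real integration is a couple of lines of
# JavaScript, and Lucidworks' actual API is not documented in this post.
# This just illustrates the general "site search as a service" query shape.
import requests

SEARCH_ENDPOINT = "https://site-search.example.com/api/query"  # invented URL

resp = requests.get(
    SEARCH_ENDPOINT,
    params={"q": "return policy", "rows": 5},  # invented parameter names
    timeout=5,
)
for doc in resp.json().get("docs", []):  # invented response field names
    print(doc.get("title"), doc.get("url"))
```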

The underlying artificial intelligence also monitors what it knows about the visitor to help customize the content that it surfaces for that person. “Better data experience is low-hanging fruit in terms of uplift. You can always enhance that experience by providing better data. Let us crawl your content, and look at web logs and user behavior, and we will start displaying better content for your users,” Hayes said.

In terms of privacy, especially in light of the upcoming GDPR regulations in the EU, Hayes says his company has been working for some time with enterprise companies that have needed to do things like isolate personally identifiable information (PII) and enforce policies around geography, so they are as ready for that as anyone.

Hayes says this is just the first of many tools the company plans to roll out on top of the Lucidworks platform.
