Jul 12, 2019

With $34B Red Hat deal closed, IBM needs to execute now

In a summer surprise this week, IBM announced it had closed its $34 billion blockbuster deal to acquire Red Hat. The deal, which was announced in October, was expected to take a year to clear all of the regulatory hurdles, but U.S. and EU regulators moved surprisingly quickly. For IBM, the future starts now, and it needs to find a way to ensure that this works.

There are always going to be layers of complexity in a deal of this scope, as IBM moves quickly to incorporate Red Hat into its product family. It’s never easy combining two large organizations, but with IBM mired in single-digit cloud market share and years of sluggish growth, it is hoping that Red Hat will give it a strong hybrid cloud story that can help begin to alter its recent fortunes.

As Box CEO (and IBM partner) Aaron Levie tweeted at the time the deal was announced, “Transformation requires big bets, and this is a good one.” While the deal is very much about transformation, we won’t know for some time if it’s a good one.

Transformation blues

Jun 05, 2019

Yellowbrick Data raises $81M Series C for hybrid data warehouse

There’s lots of data in the world these days, and there are a number of companies vying to store that data in data warehouses or lakes or whatever they choose to call them. Old-school companies have tended to be on prem, while new ones like Snowflake are strictly in the cloud. Yellowbrick Data wants to play the hybrid angle, and today it got a healthy $81 million Series C to continue its efforts.

The round was led by DFJ Growth with help from Next47, Third Point Ventures, Menlo Ventures, GV (formerly Google Ventures), Threshold Ventures and Samsung. New investors joining the round included IVP and BMW i Ventures. Today’s investment brings the total raised to a brisk $173 million.

Yellowbrick sees the same future that public cloud vendors like Microsoft and Google see: one where enterprises keep some data and applications on prem and run others in the cloud. The company believes this situation will hold for the foreseeable future, so its product plays to that hybrid angle, letting your data live on prem or in the cloud.

The company did not want to discuss valuation, in spite of the large amount it has raised. Nor did it want to discuss revenue growth rates, other than to say that it was growing at a healthy rate.

Randy Glein, partner at DFJ Growth, did say one of the things that attracted his company to invest in Yellowbrick was its momentum along with the technology, which in his view provides a more modern way to build data warehouses. “Yellowbrick is quickly providing a new generation of ultra-high performance data warehouse capabilities for large enterprises. The technology is a step function improvement on every dimension compared to legacy solutions, helping modern enterprises digest and interpret massive data workloads in a fraction of the time at a fraction of the cost,” he said in a statement.

It’s interesting that a company with just 100 employees would require this kind of money, but as company COO Jason Snodgress told TechCrunch, it costs a lot of money to build out a data warehouse. He’s not wrong. Snowflake, a company that’s building a cloud data warehouse, has raised almost a billion dollars.

May 09, 2019

AWS remains in firm control of the cloud infrastructure market

It has to be a bit depressing to be in the cloud infrastructure business if your name isn’t Amazon. Sure, there’s a huge, growing market, and the companies chasing Amazon are growing even faster. Yet no matter how fast they grow, Amazon remains a dot on the horizon.

It seems inconceivable that AWS can continue to hold sway over such a large market for so long, but as we’ve pointed out before, it has been able to maintain its position through true first-mover advantage. The other players didn’t even show up until several years after Amazon launched its first service in 2006, and they are paying the price for failing to see, as Amazon did, how computing would change.

They certainly see it now, whether it’s IBM, Microsoft or Google, or Tencent and Alibaba, both of which are growing fast in the China/Asia markets. All of these companies are trying to find the formula to help differentiate themselves from AWS and give them some additional market traction.

Cloud market growth

Interestingly, even though companies have begun to move with increasing urgency to the cloud, the pace of growth slowed a bit in the first quarter to a 42 percent rate, according to data from Synergy Research, but that doesn’t mean the end of this growth cycle is anywhere close.

May 07, 2019

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition announced last year, which is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then, as you begin to shift to the cloud, move to Azure Red Hat OpenShift — such a catchy name — without any fuss, since you keep the same management tools you’ve been using.
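
That portability pitch is easy to picture in code. Here is a minimal sketch, using the standard Kubernetes Python client, of why the move can be low-fuss: both clusters speak the same Kubernetes API, so the same tooling works against either one, switched only by kubeconfig context. The context names are hypothetical, and this is an illustration of the idea rather than anything from the announcement.

```python
# Minimal sketch: the same client code targets an on-prem OpenShift cluster
# or Azure Red Hat OpenShift, because both expose the standard Kubernetes API.
# The kubeconfig context names below are hypothetical.
from kubernetes import client, config

def list_pods(context_name: str, namespace: str = "default") -> None:
    # Load credentials for whichever cluster this kubeconfig context points at.
    config.load_kube_config(context=context_name)
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        print(pod.metadata.name, pod.status.phase)

# Same code, two clusters: on-prem today, Azure Red Hat OpenShift tomorrow.
list_pods("onprem-openshift")
list_pods("azure-red-hat-openshift")
```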

As Red Hat becomes part of IBM, and as it holds its final customer conference as an independent company, it sees maintaining its sense of autonomy in the eyes of developers and operations customers as more important than ever. Red Hat’s executive vice president and president of products and technologies, Paul Cormier, certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft, understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.
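
Event-driven autoscaling is easier to grasp with a concrete resource in hand. Here is a hedged sketch of registering a KEDA ScaledObject that scales a deployment based on queue depth; the API group, version and trigger fields follow later public KEDA documentation rather than this announcement, and the workload and queue names are hypothetical.

```python
# Illustrative sketch only: create a KEDA ScaledObject that scales a deployment
# from zero based on queue depth. The API group/version and trigger metadata
# are assumptions drawn from public KEDA docs, not from this announcement.
from kubernetes import client, config

config.load_kube_config()

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "order-processor-scaler"},
    "spec": {
        "scaleTargetRef": {"name": "order-processor"},  # deployment to scale
        "minReplicaCount": 0,   # scale to zero when no events arrive
        "maxReplicaCount": 20,
        "triggers": [{
            "type": "azure-queue",  # event source: an Azure Storage queue
            "metadata": {"queueName": "orders", "queueLength": "5"},
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh", version="v1alpha1", namespace="default",
    plural="scaledobjects", body=scaled_object,
)
```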

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations too, including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.

Apr 09, 2019

Apigee jumps on hybrid bandwagon with new API for hybrid environments

This year at Google Cloud Next, the theme is all about supporting hybrid environments, so it shouldn’t come as a surprise that Apigee, the API company Google bought in 2016 for $625 million, is also getting into the act. Today, Apigee announced the beta of Apigee hybrid, a new product designed for hybrid environments.

Amit Zavery, who recently joined Google Cloud after many years at Oracle, and Nandan Sridhar describe the new product in a joint blog post as “a new deployment option for the Apigee API management platform that lets you host your runtime anywhere—in your data center or the public cloud of your choice.”

As with Anthos, the company’s approach to hybrid management announced earlier today, the idea is to have a single way to manage your APIs no matter where you choose to run them.

“With Apigee hybrid, you get a single, full-featured API management solution across all your environments, while giving you control over your APIs and the data they expose and ensuring a unified strategy across all APIs in your enterprise,” Zavery and Sridhar wrote in the blog post announcing the new approach.

The announcement is part of an overall strategy by the company to support a customer’s approach to computing across a range of environments, often referred to as hybrid cloud. In the Cloud Native world, the idea is to present a single fabric to manage your deployments, regardless of location.

This appears to be an extension of that idea, which makes sense given that Google developed and open-sourced Kubernetes, which is at the forefront of containerization and Cloud Native computing. While this isn’t pure Cloud Native computing, it keeps true to that ethos, and it fits the scope of Google Cloud’s approach to computing in general, especially as that approach is being defined at this year’s conference.

Apr 09, 2019

Google’s hybrid cloud platform is coming to AWS and Azure

Google’s Cloud Services Platform for managing hybrid clouds that span on-premise data centers and the Google cloud is coming out of beta today. The company is also changing the product’s name to Anthos, a name that either refers to a lost Greek tragedy, the name of an obscure god in the Marvel universe or rosemary. That by itself would be interesting, but minor news. What makes this interesting is that Google also today announced that Anthos will run on third-party clouds, as well, including AWS and Azure.

“We will support Anthos and AWS and Azure as well, so people get one way to manage their application and that one way works across their on-premise environments and all other clouds,” Google’s senior VP for its technical infrastructure, Urs Hölzle, explained in a press conference ahead of today’s announcement.

So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure.

“You can use one consistent approach — one open-source based approach — across all environments,” Hölzle said. “I can’t really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it’s not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three cloud — and actually on-premise environments, too — look the same.”

Anthos/Google Cloud Services Platform is based on the Google Kubernetes Engine, as well as other open-source projects like the Istio service mesh. It’s also hardware agnostic, meaning that users can take their current hardware and run the service on top of that without having to immediately invest in new servers.

Why is Google doing this? “We hear from our customers that multi-cloud and hybrid is really an acute pain point,” Hölzle said. He noted that containers are the enabling technology for this but that few enterprises have developed a unifying strategy to manage these deployments and that it takes expertise in all major clouds to get the most out of them.

Enterprises already have major investments in their infrastructure and have established relationships with their vendors, though, so it’s no surprise that Google is launching Anthos with more than 30 major hardware and software partners that range from Cisco to Dell EMC, HPE and VMware, as well as application vendors like Confluent, Datastax, Elastic, Portworx, Tigera, Splunk, GitLab, MongoDB and others.

Robin.io, a data management service that offers a hyper-converged storage platform based on Kubernetes, also tells me that it worked closely with Google to develop the Anthos Storage API. “Robin Storage offers bare metal performance, powerful data management capabilities and Kubernetes-native management to support running enterprise applications on Google Cloud’s Anthos across on-premises data centers and the cloud,” said Premal Buch, CEO of Robin.io.

Anthos is a subscription-based service, with list prices starting at $10,000 per month per 100-vCPU block. Enterprise prices will then be up for negotiation, though, so many customers will likely pay less.
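
At that list price the arithmetic is simple. Here is a minimal sketch of the math, assuming blocks are sold whole (the announcement doesn’t say how partial blocks are billed):

```python
# Back-of-the-envelope Anthos list-price math, before any negotiated discount.
LIST_PRICE_PER_BLOCK = 10_000  # USD per month for each 100-vCPU block
BLOCK_SIZE = 100               # vCPUs per block

def monthly_list_price(total_vcpus: int) -> int:
    # Assumes whole blocks, so round the vCPU count up to the next block.
    blocks = -(-total_vcpus // BLOCK_SIZE)  # ceiling division
    return blocks * LIST_PRICE_PER_BLOCK

print(monthly_list_price(250))  # 3 blocks -> $30,000/month at list price
```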

It’s one thing to use a service like this for new applications, but many enterprises already have plenty of line-of-business tools that they would like to bring to the cloud as well. For them, Google is launching the first beta of Anthos Migrate today. This service will auto-migrate VMs from on-premises or other clouds into containers in the Google Kubernetes Engine. The promise here is that this is essentially an automatic process and once the container is on Google’s platform, you’ll be able to use all of the other features that come with the Anthos platform, too.

Google’s Hölzle noted that the emphasis here was on making this migration as easy as possible. “There’s no manual effort there,” he said.

Mar 01, 2019

Rackspace announces it has laid off 200 workers

Rackspace, the hosted private cloud vendor, let go around 200 workers this week, or 3 percent of its worldwide workforce of 6,600 employees. The company says it’s part of a recalibration in which it is trying to find workers better suited to its current business approach.

A Rackspace spokesperson told TechCrunch that it is “a stable and profitable company.” In fact, it hired 1,500 employees in 2018 and currently has 200 job openings. “We continue to invest in our business based on market opportunity and our customers’ needs – we take actions on an ongoing basis in some areas where we are over-invested and hire in areas where we are under invested,” a company spokesperson explained.

The company, which went public in 2008 and was taken private again for $4.3 billion in 2016, has struggled in a cloud market dominated by giants like Amazon, Microsoft and Google, but according to Synergy Research, a firm that keeps close watch on the cloud market, it is one of the top three companies in the Hosted Private Cloud category.

It’s worth noting that the top company in this category is IBM, and Rackspace could be a good target for Big Blue if it wanted to use its checkbook to get a boost in market share. IBM is in third or fourth place in the cloud infrastructure market, depending on whose numbers you look at, but it could move the needle a bit by buying a company like Rackspace. Neither company is suggesting this, however, and IBM agreed at the end of last year to buy Red Hat for $34 billion, making it less likely it will be in a spending mood this year.

For now the layoffs appear to be a company tweaking its workforce to meet current market conditions, but whatever the reason, it’s never a happy day when people lose their jobs.

Feb 20, 2019

Google’s hybrid cloud platform is now in beta

Last July, at its Cloud Next conference, Google announced the Cloud Services Platform, its first real foray into bringing its own cloud services into the enterprise data center as a managed service. Today, the Cloud Services Platform (CSP) is launching into beta.

It’s important to note that the CSP isn’t — at least for the time being — Google’s way of bringing all of its cloud-based developer services to the on-premises data center. In other words, this is a very different project from something like Microsoft’s Azure Stack. Instead, the focus is on the Google Kubernetes Engine, which allows enterprises to then run their applications in both their own data centers and on virtually any cloud platform that supports containers.

As Google Cloud engineering director Chen Goldberg told me, the idea here is to help enterprises innovate and modernize. “Clearly, everybody is very excited about cloud computing, on-demand compute and managed services, but customers have recognized that the move is not that easy,” she said, and noted that the vast majority of enterprises are adopting a hybrid approach. And while containers are obviously still a very new technology, she feels good about this bet because most enterprises are already adopting containers and Kubernetes — and they are doing so at exactly the same time as they are adopting cloud and especially hybrid clouds.

It’s important to note that CSP is a managed platform. Google handles all of the heavy lifting like upgrades and security patches. And for enterprises that need an easy way to install some of the most popular applications, the platform also supports Kubernetes applications from the GCP Marketplace.

As for the tech itself, Goldberg stressed that this isn’t just about Kubernetes. The service also uses Istio, for example, the increasingly popular service mesh that makes it easier for enterprises to secure and control the flow of traffic and API calls between their applications.

With today’s release, Google is also launching its new CSP Config Management tool to help users create multi-cluster policies and set up and enforce access controls, resource quotas and more. CSP also integrates with Google’s Stackdriver Monitoring service and continuous delivery platforms.
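
Resource quotas themselves are ordinary Kubernetes objects; what CSP Config Management adds is distributing and enforcing them across many clusters. As a hedged sketch of the kind of policy involved, here is one such quota created with the generic Kubernetes Python client rather than the CSP tool itself; the namespace and limits are hypothetical.

```python
# Illustrative sketch: the sort of resource-quota policy a multi-cluster config
# tool would distribute, expressed as a plain Kubernetes ResourceQuota.
# Namespace and limits are hypothetical.
from kubernetes import client, config

config.load_kube_config()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        # Hard caps on what all pods in the namespace may request in total.
        hard={"requests.cpu": "10", "requests.memory": "32Gi", "pods": "50"},
    ),
)

client.CoreV1Api().create_namespaced_resource_quota("team-a", quota)
```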

“On-prem is not easy,” Goldberg said, and given that this is the first time the company is really supporting software in a data center that is not its own, that’s probably an understatement. But Google also decided that it didn’t want to force users into a specific set of hardware specifications like Azure Stack does, for example. Instead, CSP sits on top of VMware’s vSphere server virtualization platform, which most enterprises already use in their data centers anyway. That surely simplifies things, given that this is a very well-understood platform.

Dec 11, 2018

Dell’s long game is in hybrid and private clouds

When Dell shareholders voted this morning to buy back the VMware tracking stock and take the company public again, you had to wonder what exactly the strategy was behind these moves. While it’s clearly about gaining financial flexibility, the $67 billion EMC deal has always been about setting up the company for a hybrid and private cloud future.

The hybrid cloud involves managing workloads on premises and in the cloud, while private clouds are ones that companies run themselves, either in their own data centers or on dedicated hardware in the public cloud.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, says this approach requires a longer investment timeline, and that’s what necessitated the changes we saw this morning. “I believe Dell Technologies can better invest in its hybrid world with longer-term investors as the investment will be longer term, at least five years,” he said. Part of that, he said, is due to the fact that many more on-prem-to-public-cloud connector services still need to be built.

Dell could be the company that helps build some of those missing pieces. It has always been at its heart a hardware company, and as such either of these approaches could play to its strengths. When the company paid $67 billion for EMC in 2016, it had to have a long-term plan in mind. Michael Dell’s parents didn’t raise no fool, and he saw an opportunity with that move to push his company in a new direction.

It was probably never about EMC’s core storage offerings, although a storage component was an essential ingredient in this vision. Dell and his investors probably were more focused on other pieces inside the federation — the loosely coupled set of companies inside the broader EMC Corporation.

The VMware bridge

The crown jewel in that group was of course VMware, the company that introduced the enterprise to server virtualization. Today, it has taken residency in the hybrid world between the on-premises data center and the cloud. Armed with broad agreements with AWS, VMware finagled its way to be a key bridge between on prem and the monstrously popular Amazon cloud. IT pros used to working with VMware would certainly be comfortable using it as a cloud control panel as they shifted their workloads to AWS cloud virtual machines.

In fact, speaking at a press conference at AWS re:Invent earlier this month, AWS CEO Andy Jassy said the partnership with VMware has been really transformational for his company on a lot of different levels. “Most of the world is virtualized on top of VMware and VMware is at the core of most enterprises. When you start trying to solve people’s problems between being on premises and in the cloud, having the partnership we have with VMware allows us to find ways for customers to use the tools they’ve been using and be able to use them on top of our platform the way they want,” Jassy told the press conference.

The two companies also announced an extension of the partnership with the new AWS Outposts servers, which bring the AWS cloud on prem where customers can choose between using VMware or AWS to manage the workloads, whether they live in the cloud or on premises. It’s unclear whether AWS will extend this to other companies’ hardware, but if they do you can be sure Dell would want to be a part of that.

Pivotal’s key role

But it’s not just VMware that Dell had its sights on when it bought EMC, it was Pivotal too. This is another company, much like VMware, that is publicly traded and operates independently of Dell, even while living inside the Dell family of products. While VMware handles managing the server side of the house, Pivotal is about building software products.

When the company went public earlier this year, CEO Rob Mee told TechCrunch that Dell recognizes that Pivotal works better as an independent entity. “From the time Dell acquired EMC, Michael was clear with me: You run the company. I’m just here to help. Dell is our largest shareholder, but we run independently. There have been opportunities to test that [since the acquisition] and it has held true,” Mee said at the time.

Virtustream could also be a key piece, providing a link to run traditional enterprise applications on multi-tenant clouds. EMC bought the company in 2015 for $1.2 billion, then spun it out as a jointly owned venture of EMC and VMware later that year. It provides another route to the cloud for applications like SAP that once ran only on prem.

Surely Dell had to take all the pieces to get the ones it wanted most. It might have been a big price to pay for transformation, especially since you could argue that some of the pieces were probably past their freshness dates (although even older products bring with them plenty of legacy licensing and maintenance revenue).

Even though the long-term trend is shifting toward moving to the cloud, there will be workloads that stay on premises for some time to come. It seems that Dell is trying to position itself as the hybrid/private cloud vendor and all that entails to serve those who won’t be all cloud, all the time. Whether this strategy will work long term remains to be seen, but Dell appears to be betting the house on this approach, and today’s moves only solidified that.

Dec 07, 2018

Pivotal announces new serverless framework

Pivotal has always been about making open-source tools for enterprise developers, but surprisingly, up until now, the arsenal has lacked a serverless component. That changed today with the alpha launch of Pivotal Function Service.

“Pivotal Function Service is a Kubernetes-based, multi-cloud function service. It’s part of the broader Pivotal vision of offering you a single platform for all your workloads on any cloud,” the company wrote in a blog post announcing the new service.

What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s based on open source, is that it has been designed to work both on-prem and in the cloud in a cloud native fashion, hence the Kubernetes-based aspect of it. This is unusual to say the least.

The idea up until now has been that the large-scale cloud providers like Amazon, Google and Microsoft could dial up whatever infrastructure your functions require, then dial them down when you’re finished without you ever having to think about the underlying infrastructure. The cloud provider deals with whatever compute, storage and memory you need to run the function, and no more.

Pivotal wants to take that same idea and make it available across any cloud service. It also wants to make it available on-prem, which may seem curious at first, but Pivotal’s Onsi Fakhouri says customers want the same abilities both on-prem and in the cloud. “One of the key values that you often hear about serverless is that it will run down to zero and there is less utilization, but at the same time there are customers who want to explore and embrace the serverless programming paradigm on-prem,” Fakhouri said. Of course, it is then up to IT to ensure that there are sufficient resources to meet the demands of the serverless programs.

The new package includes several key components for developers: an environment for building, deploying and managing your functions; a native eventing capability that provides a way to build rich event triggers to call whatever functionality you require; and the ability to do all of this within a Kubernetes-based environment. This is particularly important as companies embrace a hybrid use case and want to manage events across on-prem and cloud in a seamless way.

One of the advantages of Pivotal’s approach is that it can work on any cloud as an open product. This is in contrast to cloud providers like Amazon, Google and Microsoft, which provide similar services that run exclusively on their own clouds. Pivotal is not the first to build an open-source function as a service, but it is attempting to package it in a way that makes it easier to use.

Serverless doesn’t actually mean there are no underlying servers. Instead, it means that developers don’t have to point to any servers because the cloud provider takes care of whatever infrastructure is required. In an on-prem scenario, IT has to make those resources available.
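
As a hedged illustration of that programming model (this is not Pivotal Function Service’s actual function signature, just the general shape of a serverless handler), the developer’s entire deliverable can be a single function; everything else, the servers, the scaling and the event trigger, comes from the platform:

```python
# Illustrative serverless handler, not Pivotal Function Service's actual API:
# the developer writes only business logic, and the platform supplies the
# servers, the scaling and the event that triggers each invocation.
def handle(event: dict) -> dict:
    # Business logic only; no server setup, no capacity planning.
    total = sum(item["price"] * item["qty"] for item in event["items"])
    return {"order_id": event["order_id"], "total": round(total, 2)}

# Local smoke test; in production the platform calls handle() once per event.
print(handle({"order_id": "A-123",
              "items": [{"price": 9.99, "qty": 2}, {"price": 4.50, "qty": 1}]}))
```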
