Sep 11, 2019

IBM brings Cloud Foundry and Red Hat OpenShift together

At the Cloud Foundry Summit in The Hague, IBM today showcased its Cloud Foundry Enterprise Environment on Red Hat’s OpenShift container platform.

For the longest time, the open-source Cloud Foundry Platform-as-a-Service ecosystem and Red Hat’s Kubernetes-centric OpenShift were mostly seen as competitors, with both tools vying for enterprise customers who want to modernize their application development and delivery platforms. But a lot of things have changed in recent times. On the technical side, Cloud Foundry started adopting Kubernetes as an option for application deployments and as a way of containerizing and running Cloud Foundry itself.

On the business side, IBM’s acquisition of Red Hat has brought along some change, too. IBM long backed Cloud Foundry as a top-level foundation member, while Red Hat bet on its own platform instead. Now that the acquisition has closed, it’s maybe no surprise that IBM is working on bringing Cloud Foundry to Red Hat’s platform.

For now, this work is officially still a technology experiment, but our understanding is that IBM plans to turn it into a fully supported project that will give Cloud Foundry users the option to deploy their applications directly to OpenShift, while OpenShift customers will be able to offer their developers the Cloud Foundry experience.

“It’s another proof point that these things really work well together,” Cloud Foundry Foundation CTO Chip Childers told me ahead of today’s announcement. “That’s the developer experience that the CF community brings and in the case of IBM, that’s a great commercialization story for them.”

While Cloud Foundry isn’t seeing the same hype as in some of its earlier years, it remains one of the most widely used development platforms in large enterprises. According to the Cloud Foundry Foundation’s latest user survey, the companies that are already using it continue to move more of their development work onto the platform, and according to code analysis from source{d}, the project continues to see over 50,000 commits per month.

“As businesses navigate digital transformation and developers drive innovation across cloud native environments, one thing is very clear: they are turning to Cloud Foundry as a proven, agile, and flexible platform — not to mention fast — for building into the future,” said Abby Kearns, executive director at the Cloud Foundry Foundation. “The survey also underscores the anchor Cloud Foundry provides across the enterprise, enabling developers to build, support, and maximize emerging technologies.”

Also at this week’s Summit, Pivotal (which is in the process of being acquired by VMware) is launching the alpha version of the Pivotal Application Service (PAS) on Kubernetes, while Swisscom, an early Cloud Foundry backer, is launching a major update to its Cloud Foundry-based Application Cloud.

Sep 11, 2019

Kubernetes co-founder Craig McLuckie is as tired of talking about Kubernetes as you are

“I’m so tired of talking about Kubernetes. I want to talk about something else,” joked Kubernetes co-founder and VP of R&D at VMware Craig McLuckie during a keynote interview at this week’s Cloud Foundry Summit in The Hague. “I feel like that 80s band that had like one hit song — Cherry Pie.”

He doesn’t quite mean it that way, of course (though it makes for a good headline, see above), but the underlying theme of the conversation he had with Cloud Foundry executive director Abby Kearns was that infrastructure should be boring and fade into the background, while enabling developers to do their best work.

“We still have a lot of work to do as an industry to make the infrastructure technology fade into the background and bring forwards the technologies that developers interface with, that enable them to develop the code that drives the business, etc. […] Let’s make that infrastructure technology really, really boring.”

What McLuckie wants to talk about is developer experience, and with VMware’s intent to acquire Pivotal, it’s placing a strong bet on Cloud Foundry as one of the premier development platforms for cloud-native applications. For the longest time, the Cloud Foundry and Kubernetes ecosystems, which share an organizational parent in the Linux Foundation, have been getting closer, but that move has accelerated in recent months as the Cloud Foundry ecosystem has finished work on some of its Kubernetes integrations.

McLuckie argues that the Cloud Native Computing Foundation, the home of Kubernetes and other cloud-native, open-source projects, was always meant to be a kind of open-ended organization that focuses on driving innovation. And that created a large set of technologies that vendors can choose from.

“But when you start to assemble that, I tend to think about you building up this cake which is your development stack, you discover that some of those layers of the cake, like Kubernetes, have a really good bake. They are done to perfection,” said McLuckie, who is clearly a fan of the Great British Baking Show. “And other layers, you look at it and you think, wow, that could use a little more bake, it’s not quite ready yet. […] And we haven’t done a great job of pulling it all together and providing a recipe that delivers an entirely consumable experience for everyday developers.”

He argues that Cloud Foundry, on the other hand, has always focused on building that highly opinionated, consistent developer experience. “Bringing those two communities together, I think, is going to have incredibly powerful results for both communities as we start to bring these technologies together,” he said.

With the Pivotal acquisition still in the works, McLuckie didn’t really comment on what exactly this means for the path forward for Cloud Foundry and Kubernetes (which he still talked about with a lot of energy, despite being tired of it). But it’s clear that he’s looking to Cloud Foundry to enable that developer experience on top of Kubernetes that abstracts all of the infrastructure away for developers and makes deploying an application a matter of a single CLI command.
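
To make that last point concrete, here is a minimal, hypothetical sketch of what such a single-command deployment looks like when called from a script. It assumes the standard `cf` CLI is installed and already logged in and targeted at an org and space; the app name is a placeholder.

```python
import subprocess

# Minimal sketch: push the application in the current directory to Cloud Foundry.
# Assumes the standard `cf` CLI is installed and that `cf login` / `cf target`
# have already been run against a foundation; "my-app" is a placeholder name.
subprocess.run(["cf", "push", "my-app"], check=True)
```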

Bonus: Cherry Pie.

Sep 9, 2019

With its Kubernetes bet paying off, Cloud Foundry doubles down on developer experience

More than 50% of the Fortune 500 companies are now using the open-source Cloud Foundry Platform-as-a-Service project — either directly or through vendors like Pivotal — to build, test and deploy their applications. Like so many other projects, including the likes of OpenStack, Cloud Foundry went through a bit of a transition in recent years as more and more developers started looking to containers — and especially the Kubernetes project — as a platform on which to develop. Now, however, the project is ready to focus on what always differentiated it from its closed- and open-source competitors: the developer experience.

Long before Docker popularized containers for application deployment, Cloud Foundry had already bet on containers and written its own orchestration service. With all of the momentum behind Kubernetes, though, it’s no surprise that many in the Cloud Foundry community started to look at this new project to replace the existing container technology.

Aug 26, 2019

Nvidia and VMware team up to make GPU virtualization easier

Nvidia today announced that it has been working with VMware to bring its virtual GPU technology (vGPU) to VMware’s vSphere and VMware Cloud on AWS. The company’s core vGPU technology isn’t new, but it now supports server virtualization to enable enterprises to run their hardware-accelerated AI and data science workloads in environments like VMware’s vSphere, using its new vComputeServer technology.

Traditionally (as far as that’s a thing in AI training), GPU-accelerated workloads tend to run on bare-metal servers, which are typically managed separately from the rest of a company’s servers.

“With vComputeServer, IT admins can better streamline management of GPU accelerated virtualized servers while retaining existing workflows and lowering overall operational costs,” Nvidia explains in today’s announcement. This also means that businesses will reap the cost benefits of GPU sharing and aggregation, thanks to the improved utilization this technology promises.

Note that vComputeServer works with VMware vSphere, vCenter and vMotion, as well as VMware Cloud. Indeed, the two companies are using the same vComputeServer technology to also bring accelerated GPU services to VMware Cloud on AWS. This allows enterprises to move their containerized applications from their own data centers to the cloud as needed — and then hook into AWS’s other cloud-based technologies.

“From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Nvidia founder and CEO Jensen Huang. “Together with VMware, we’re designing the most advanced and highest performing GPU-accelerated hybrid cloud infrastructure to foster innovation across the enterprise.”

Aug 23, 2019

How Pivotal got bailed out by fellow Dell family member, VMware

When Dell acquired EMC in 2016 for $67 billion, it created a complicated consortium of interconnected organizations. Some, like VMware and Pivotal, operate as completely separate companies. They have their own boards of directors, can acquire companies and are publicly traded on the stock market. Yet they work closely within Dell, partnering where it makes sense. So when Pivotal’s stock price plunged recently, VMware stepped in and saved the day, buying the faltering company for $2.7 billion yesterday.

Pivotal went public last year, and sometimes struggled, but in June the wheels started to come off after a poor quarterly earnings report. The company had what MarketWatch aptly called “a train wreck of a quarter.”

How bad was it? So bad that its stock price was down 42% the day after it reported its earnings. While the quarter itself wasn’t so bad, with revenue up year over year, the guidance was another story. The company cut its 2020 revenue guidance by $40-$50 million and the guidance it gave for the upcoming 2Q 19 was also considerably lower than consensus Wall Street estimates.

The stock price plunged from a high of $21.44 on May 30th to a low of $8.30 on August 14th. The company’s market cap plunged in that same time period falling from $5.828 billion on May 30th to $2.257 billion on August 14th. That’s when VMware admitted it was thinking about buying the struggling company.

Aug 22, 2019

Enterprise software is hot — who would have thought?

Once considered the most boring of topics, enterprise software is now getting infused with such energy that it is arguably the hottest space in tech.

It’s been a long time coming. And it is the developers, software engineers and veteran technologists with deep experience building at-scale technologies who are energizing enterprise software. They have learned to build resilient and secure applications with open-source components through continuous delivery practices that align technical requirements with customer needs. And now they are developing application architectures and tools for at-scale development and management for enterprises to make the same transformation.

“Enterprise had become a dirty word, but there’s a resurgence going on and Enterprise doesn’t just mean big and slow anymore,” said JD Trask, co-founder of Raygun enterprise monitoring software. “I view the modern enterprise as one that expects their software to be as good as consumer software. Fast. Easy to use. Delivers value.”

The shift to scale-out computing and the rise of the container ecosystem, driven largely by startups, is disrupting the entire stack, notes Andrew Randall, vice president of business development at Kinvolk.

In advance of TechCrunch’s first enterprise-focused event, TC Sessions: Enterprise, The New Stack examined the commonalities among the numerous enterprise-focused companies that sponsor us. Their experiences help illustrate the forces at play behind the creation of the modern enterprise tech stack. In every case, the founders and CTOs recognize the need for speed and agility, with the ultimate goal of producing software that’s uniquely in line with customer needs.

We’ll explore these topics in more depth at The New Stack pancake breakfast and podcast recording at TC Sessions: Enterprise. Starting at 7:45 a.m. on Sept. 5, we’ll be serving breakfast and hosting a panel discussion on “The People and Technology You Need to Build a Modern Enterprise,” with Sid Sijbrandij, founder and CEO, GitLab, and Frederic Lardinois, enterprise writer and editor, TechCrunch, among others. Questions from the audience are encouraged and rewarded, with a raffle prize awarded at the end.

Traditional virtual machine infrastructure was originally designed to help manage server sprawl for systems-of-record software — not to scale out across a fabric of distributed nodes. The disruptors transforming the historical technology stack view the application, not the hardware, as the main focus of attention. Companies in The New Stack’s sponsor network provide examples of the shift toward software that they aim to inspire in their enterprise customers. Portworx provides persistent state for containers; NS1 offers a DNS platform that orchestrates the delivery of internet and enterprise applications; Lightbend combines the scalability and resilience of microservices architecture with the real-time value of streaming data.

“Application development and delivery have changed. Organizations across all industry verticals are looking to leverage new technologies, vendors and topologies in search of better performance, reliability and time to market,” said Kris Beevers, CEO of NS1. “For many, this means embracing the benefits of agile development in multicloud environments or building edge networks to drive maximum velocity.”

Enterprise software startups are delivering that value, while they embody the practices that help them deliver it.

The secrets to speed, agility and customer focus

Speed matters, but only if the end result aligns with customer needs. Faster time to market is often cited as the main driver behind digital transformation in the enterprise. But speed must also be matched by agility and the ability to adapt to customer needs. That means embracing continuous delivery, which Martin Fowler describes as the process that allows a team to put software into production at any time, with the workflows and the pipeline to support it.

Continuous delivery (CD) makes it possible to develop software that can adapt quickly, meet customer demands and provide a level of satisfaction with benefits that enhance the value of the business and the overall brand. CD has become a major category in cloud-native technologies, with companies such as CircleCI, CloudBees, Harness and Semaphore all finding their own ways to approach the problems enterprises face as they often struggle with the shift.

“The best-equipped enterprises are those [that] realize that the speed and quality of their software output are integral to their bottom line,” Rob Zuber, CTO of CircleCI, said.

Speed is also in large part why monitoring and observability have held their value and continue to be part of the larger dimension of at-scale application development, delivery and management. Better data collection and analysis, assisted by machine learning and artificial intelligence, allow companies to quickly troubleshoot and respond to customer needs with reduced downtime and tight DevOps feedback loops. Companies in our sponsor network that fit in this space include Raygun for error detection; Humio, which provides observability capabilities; InfluxData with its time-series data platform for monitoring; Epsagon, the monitoring platform for serverless architectures; and Tricentis for software testing.

“Customer focus has always been a priority, but the ability to deliver an exceptional experience will now make or break a ‘modern enterprise,’” said Wolfgang Platz, founder of Tricentis, which makes automated software testing tools. “It’s absolutely essential that you’re highly responsive to the user base, constantly engaging with them to add greater value. This close and constant collaboration has always been central to longevity, but now it’s a matter of survival.”

DevOps is a bit overplayed, but it still is the mainstay workflow for cloud-native technologies and critical to achieving engineering speed and agility in a decoupled, cloud-native architecture. However, DevOps is also undergoing its own transformation, buoyed by the increasing automation and transparency allowed through the rise of declarative infrastructure, microservices and serverless technologies. This is cloud-native DevOps. Not a tool or a new methodology, but an evolution of the longstanding practices that further align developers and operations teams — but now also expanding to include security teams (DevSecOps), business teams (BizDevOps) and networking (NetDevOps).

“We are in this constant feedback loop with our customers where, while helping them in their digital transformation journey, we learn a lot and we apply these learnings for our own digital transformation journey,” Francois Dechery, chief strategy officer and co-founder of CloudBees, said. “It includes finding the right balance between developer freedom and risk management. It requires the creation of what we call a continuous everything culture.”

Leveraging open-source components is also core to achieving engineering speed. Open-source use allows engineering teams to focus on building code that creates or supports the core business value. Startups in this space include Tidelift and open-source security companies such as Capsule8. Organizations in our sponsor portfolio that play roles in the development of at-scale technologies include The Linux Foundation, the Cloud Native Computing Foundation and the Cloud Foundry Foundation.

“Modern enterprises … think critically about what they should be building themselves and what they should be sourcing from somewhere else,” said Chip Childers, CTO of the Cloud Foundry Foundation. “Talented engineers are one of the most valuable assets a company can apply to being competitive, and ensuring they have the freedom to focus on differentiation is super important.”

You need great engineering talent, and you need to give them both the ability to build secure and reliable systems at scale and the trust that comes with direct access to hardware, which itself is a differentiator.

Is the enterprise really ready?

The bleeding edge can bleed too much for the liking of enterprise customers, said James Ford, an analyst and consultant.

“It’s tempting to live by mantras like ‘wow the customer,’ ‘never do what customers want (instead build innovative solutions that solve their need),’ ‘reduce to the max,’ … and many more,” said Bernd Greifeneder, CTO and co-founder of Dynatrace. “But at the end of the day, the point is that technology is here to help with smart answers … so it’s important to marry technical expertise with enterprise customer need, and vice versa.”

How the enterprise adopts new ways of working will affect how startups ultimately fare. The container hype has cooled a bit and technologists have more solid viewpoints about how to build out architecture.

One notable trend to watch: the role of cloud services through projects such as Firecracker. AWS Lambda is built on Firecracker, the open-source virtualization technology originally developed at Amazon Web Services. Firecracker serves as a way to get the speed and density that come with containers alongside the hardware isolation and security capabilities that virtualization offers. Startups such as Weaveworks have developed a platform on Firecracker. The Kata Containers project, hosted by the OpenStack Foundation, also supports Firecracker.

“Firecracker makes it easier for the enterprise to have secure code,” Ford said. It reduces the attack surface. “With its minimal footprint, the user has control. It means less features that are misconfigured, which is a major security vulnerability.”

Enterprise startups are hot. How they succeed will determine how well they can stay differentiated in the face of the all-consuming cloud providers and the at-scale startups that inevitably launch their own services. The answer may lie in the middle, with purpose-built architectures that use open-source components such as Firecracker to provide the capabilities of containers along with the hardware isolation that comes with virtualization.

Hope to see you at TC Sessions: Enterprise. Get there early. We’ll be serving pancakes to start the day. As we like to say, “Come have a short stack with The New Stack!”

Aug 15, 2019

Microsoft Azure CTO Mark Russinovich will join us for TC Sessions: Enterprise on September 5

Being the CTO for one of the three major hypercloud providers may seem like enough of a job for most people, but Mark Russinovich, the CTO of Microsoft Azure, has a few other talents in his back pocket. Russinovich, who will join us for a fireside chat at our TechCrunch Sessions: Enterprise event in San Francisco on September 5 (p.s. early-bird sale ends Friday), is also an accomplished novelist who has published four novels, all of which center around tech and cybersecurity.

At our event, though, we won’t focus on his literary accomplishments (except for maybe his books about Windows Server) as much as on the trends he’s seeing in enterprise cloud adoption. Microsoft, maybe more so than its competitors, made enterprise customers and their needs the focus of its cloud initiatives from the outset. Today, as the majority of enterprises are looking to move at least some of their legacy workloads into the cloud, they are often stumped by the sheer complexity of that undertaking.

In our fireside chat, we’ll talk about what Microsoft is doing to reduce this complexity and how enterprises can maximize their current investments into the cloud, both for running new cloud-native applications and for bringing legacy applications into the future. We’ll also talk about new technologies that can make the move to the cloud more attractive to enterprises, including the current buzz around edge computing, IoT, AI and more.

Before joining Microsoft, Russinovich, who has a Ph.D. in computer engineering from Carnegie Mellon, was the co-founder and chief architect of Winternals Software, which Microsoft acquired in 2006. During his time at Winternals, Russinovich discovered the infamous Sony rootkit. Over his 13 years at Microsoft, he moved from Technical Fellow up to the CTO position for Azure, which continues to grow at a rapid clip as it looks to challenge AWS’s leadership in total cloud revenue.

Tomorrow, Friday, August 16 is your last day to save $100 on tickets before prices go up. Book your early-bird tickets now and keep that Benjamin in your pocket.

If you’re an early-stage startup, we only have three demo table packages left! Each demo package comes with four tickets and a great location for your company to get in front of attendees. Book your demo package today before we sell out!

Aug 14, 2019

VMware says it’s looking to acquire Pivotal

VMware today confirmed that it is in talks to acquire software development platform Pivotal Software, the service best known for commercializing the open-source Cloud Foundry platform. The proposed transaction would see VMware acquire all outstanding Pivotal Class A stock for $15 per share, a significant markup over Pivotal’s current share price (which unsurprisingly shot up right after the announcement).

Pivotal’s shares have struggled since the company’s IPO in April 2018. The company was originally spun out of EMC Corporation (now DellEMC) and VMware in 2012 to focus on Cloud Foundry, an open-source software development platform that is currently in use by the majority of Fortune 500 companies. A lot of these enterprises are working with Pivotal to support their Cloud Foundry efforts. Dell itself continues to own the majority of VMware and Pivotal, and VMware also owns an interest in Pivotal already and sells Pivotal’s services to its customers, as well. It’s a bit of an ouroboros of a transaction.

Pivotal Cloud Foundry was always the company’s main product, but it also offered additional consulting services on top of that. Despite improving its execution since going public, Pivotal still lost $31.7 million in its last financial quarter as its stock price traded at just over half of the IPO price. Indeed, the $15 per share VMware is offering is identical to Pivotal’s IPO price.

An acquisition by VMware would bring Pivotal’s journey full circle, though this is surely not the journey the Pivotal team expected. VMware is a Cloud Foundry Foundation platinum member, together with Pivotal, DellEMC, IBM, SAP and SUSE, so I wouldn’t expect any major changes in VMware’s support of the overall open-source ecosystem behind Pivotal’s core platform.

It remains to be seen whether the acquisition will indeed happen, though. In a press release, VMware acknowledged the discussion between the two companies but noted that “there can be no assurance that any such agreement regarding the potential transaction will occur, and VMware does not intend to communicate further on this matter unless and until a definitive agreement is reached.” That’s the kind of sentence lawyers like to write. I would be quite surprised if this deal didn’t happen, though.

Buying Pivotal would also make sense in the grand scheme of VMware’s recent acquisitions. Earlier this year, the company acquired Bitnami, and last year it acquired Heptio, the startup founded by two of the three co-founders of the Kubernetes project, which now forms the basis of many new enterprise cloud deployments and, most recently, Pivotal Cloud Foundry.

Aug 1, 2019

Microsoft Azure now lets you have a server all to yourself

Microsoft today announced the preview launch of Azure Dedicated Host, a new cloud service that will allow you to run your virtual machines on single-tenant physical servers. That means you’re not sharing any resources on that server with anybody else and you’ll get full control over everything that’s running on that machine.

Previously, Azure already offered isolated virtual machine sizes for two very large virtual machine types. Those are still available, but their use cases are limited compared to these new hosts, which offer far more flexibility.

With this move, Microsoft is following in the footsteps of AWS, which also offers Dedicated Hosts with very similar capabilities. Google Cloud, too, offers what it calls “sole-tenant nodes.”

Azure Dedicated Host will support Windows, Linux and SQL Server virtual machines and pricing is per host, independent of the number of virtual machines you end up running on them. You can currently opt for machines with up to 48 physical cores and prices start at $4.039 per hour.

To do this, Microsoft is offering two different processors to power these machines. Type 1 is based on the 2.3 GHz Intel Xeon E5-2673 v4 with clock speeds of up to 3.5 GHz, while Type 2 features the Intel Xeon Platinum 8168 with single-core clock speeds of up to 3.7 GHz. The available memory ranges from 144 GiB to 448 GiB.

As Microsoft notes, these new dedicated hosts can help companies reach their compliance requirements for physical security, data integrity and monitoring. The dedicated hosts still share the same underlying infrastructure as any other host in the Azure data centers, but users have full control over any maintenance window that could impact their servers.

These dedicated hosts can also be grouped into larger host groups in a given Azure region, allowing you to build clusters of your own physical servers inside the Azure data center. Because you’re actually renting a physical machine, any hardware issue on that machine will impact the virtual machines running on it, so chances are you’ll want multiple dedicated hosts as part of your failover strategy anyway.
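
For readers who want to see how the pieces fit together, here is a rough, hypothetical sketch of creating a host group and a dedicated host with the Python Azure SDK (azure-mgmt-compute). It assumes the dedicated-host operations introduced alongside the 2019-03-01 compute API; the credentials, resource group, names, region and SKU below are all placeholders.

```python
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

# Placeholder credentials: substitute your own service principal and subscription.
credentials = ServicePrincipalCredentials(
    client_id="<app-id>", secret="<secret>", tenant="<tenant-id>"
)
compute_client = ComputeManagementClient(credentials, "<subscription-id>")

# A host group pins dedicated hosts to a region and fault-domain layout.
compute_client.dedicated_host_groups.create_or_update(
    resource_group_name="my-rg",
    host_group_name="my-host-group",
    parameters={"location": "eastus", "platform_fault_domain_count": 2},
)

# Provision a physical host inside the group; virtual machines of the matching
# size family (here a DSv3-series host SKU) can then be placed onto it.
poller = compute_client.dedicated_hosts.create_or_update(
    resource_group_name="my-rg",
    host_group_name="my-host-group",
    host_name="my-host-1",
    parameters={"location": "eastus", "sku": {"name": "DSv3-Type1"}},
)
host = poller.result()  # long-running operation; wait for the host to be ready
```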

Aug 1, 2019

With the acquisition closed, IBM goes all in on Red Hat

IBM’s massive $34 billion acquisition of Red Hat closed a few weeks ago, and today the two companies are announcing the first fruits of this process. For the most part, today’s announcement furthers IBM’s ambitions to bring its products to any public and private cloud. That was very much the reason why IBM acquired Red Hat in the first place, of course, so this doesn’t come as a major surprise, though most industry watchers probably didn’t expect it to happen this fast.

Specifically, IBM is announcing that it is bringing its software portfolio to Red Hat OpenShift, Red Hat’s Kubernetes-based container platform that is essentially available on any cloud that allows its customers to run Red Hat Enterprise Linux.

In total, IBM has already optimized more than 100 products for OpenShift and bundled them into what it calls “Cloud Paks.” There are currently five of these Paks: Cloud Pak for Data, Application, Integration, Automation and Multicloud Management. These technologies, which IBM’s customers can now run on AWS, Azure, Google Cloud Platform or IBM’s own cloud, among others, include DB2, WebSphere, API Connect, Watson Studio and Cognos Analytics.

“Red Hat is unlocking innovation with Linux-based technologies, including containers and Kubernetes, which have become the fundamental building blocks of hybrid cloud environments,” said Jim Whitehurst, president and CEO of Red Hat, in today’s announcement. “This open hybrid cloud foundation is what enables the vision of any app, anywhere, anytime. Combined with IBM’s strong industry expertise and supported by a vast ecosystem of passionate developers and partners, customers can create modern apps with the technologies of their choice and the flexibility to deploy in the best environment for the app – whether that is on-premises or across multiple public clouds.”

IBM argues that a lot of the early innovation on the cloud was about bringing modern, customer-facing applications to market, with a focus on basic cloud infrastructure. Now, however, enterprises are looking at how they can take their mission-critical applications to the cloud, too. For that, they want access to an open stack that works across clouds.

In addition, IBM today announced the launch of a fully managed Red Hat OpenShift service on its own public cloud, OpenShift on IBM Systems (including the IBM Z and LinuxONE mainframes), and new Red Hat consulting and technology services.
