Sep 11, 2019
--

IBM brings Cloud Foundry and Red Hat OpenShift together

At the Cloud Foundry Summit in The Hague, IBM today showcased its Cloud Foundry Enterprise Environment on Red Hat’s OpenShift container platform.

For the longest time, the open-source Cloud Foundry Platform-as-a-Service ecosystem and Red Hat’s Kubernetes-centric OpenShift were mostly seen as competitors, with both tools vying for enterprise customers who want to modernize their application development and delivery platforms. But a lot of things have changed in recent times. On the technical side, Cloud Foundry started adopting Kubernetes as an option for application deployments and as a way of containerizing and running Cloud Foundry itself.

On the business side, IBM’s acquisition of Red Hat has brought along some change, too. IBM long backed Cloud Foundry as a top-level foundation member, while Red Hat bet on its own platform instead. Now that the acquisition has closed, it’s maybe no surprise that IBM is working on bringing Cloud Foundry to Red Hat’s platform.

For now, this work is officially still a technology experiment, but our understanding is that IBM plans to turn it into a fully supported project that will give Cloud Foundry users the option to deploy their applications right to OpenShift, while OpenShift customers will be able to offer their developers the Cloud Foundry experience.

“It’s another proof point that these things really work well together,” Cloud Foundry Foundation CTO Chip Childers told me ahead of today’s announcement. “That’s the developer experience that the CF community brings, and in the case of IBM, that’s a great commercialization story for them.”

While Cloud Foundry isn’t seeing the same hype as in some of its earlier years, it remains one of the most widely used development platforms in large enterprises. According to the Cloud Foundry Foundation’s latest user survey, the companies that already use it continue to move more of their development work onto the platform, and according to code analysis from source{d}, the project continues to see over 50,000 commits per month.

“As businesses navigate digital transformation and developers drive innovation across cloud native environments, one thing is very clear: they are turning to Cloud Foundry as a proven, agile, and flexible platform — not to mention fast — for building into the future,” said Abby Kearns, executive director at the Cloud Foundry Foundation. “The survey also underscores the anchor Cloud Foundry provides across the enterprise, enabling developers to build, support, and maximize emerging technologies.”

Also at this week’s Summit, Pivotal (which is in the process of being acquired by VMware) is launching the alpha version of the Pivotal Application Service (PAS) on Kubernetes, while Swisscom, an early Cloud Foundry backer, is launching a major update to its Cloud Foundry-based Application Cloud.

Sep 11, 2019
--

Kubernetes co-founder Craig McLuckie is as tired of talking about Kubernetes as you are

“I’m so tired of talking about Kubernetes. I want to talk about something else,” joked Kubernetes co-founder and VP of R&D at VMware Craig McLuckie during a keynote interview at this week’s Cloud Foundry Summit in The Hague. “I feel like that ’80s band that had like one hit song — Cherry Pie.”

He doesn’t quite mean it that way, of course (though it makes for a good headline, see above), but the underlying theme of the conversation he had with Cloud Foundry executive director Abby Kearns was that infrastructure should be boring and fade into the background, while enabling developers to do their best work.

“We still have a lot of work to do as an industry to make the infrastructure technology fade into the background and bring forwards the technologies that developers interface with, that enable them to develop the code that drives the business, etc. […] Let’s make that infrastructure technology really, really boring.”

What McLuckie wants to talk about is developer experience, and with VMware’s intent to acquire Pivotal, it’s placing a strong bet on Cloud Foundry as one of the premier development platforms for cloud-native applications. For the longest time, the Cloud Foundry and Kubernetes ecosystems, which share an organizational parent in the Linux Foundation, have been getting closer, but that move has accelerated in recent months as the Cloud Foundry ecosystem has finished work on some of its Kubernetes integrations.

McLuckie argues that the Cloud Native Computing Foundation, the home of Kubernetes and other cloud-native, open-source projects, was always meant to be a kind of open-ended organization that focuses on driving innovation. And that created a large set of technologies that vendors can choose from.

“But when you start to assemble that, I tend to think about you building up this cake which is your development stack, you discover that some of those layers of the cake, like Kubernetes, have a really good bake. They are done to perfection,” said McLuckie, who is clearly a fan of the Great British Baking Show. “And other layers, you look at it and you think, wow, that could use a little more bake, it’s not quite ready yet. […] And we haven’t done a great job of pulling it all together and providing a recipe that delivers an entirely consumable experience for everyday developers.”

He argues that Cloud Foundry, on the other hand, has always focused on building that highly opinionated, consistent developer experience. “Bringing those two communities together, I think, is going to have incredibly powerful results for both communities as we start to bring these technologies together,” he said.

With the Pivotal acquisition still in the works, McLuckie didn’t really comment on what exactly this means for the path forward for Cloud Foundry and Kubernetes (which he still talked about with a lot of energy, despite being tired of it). But it’s clear that he’s looking to Cloud Foundry to enable that developer experience on top of Kubernetes that abstracts all of the infrastructure away for developers and makes deploying an application a matter of a single CLI command.

Bonus: Cherry Pie.

Sep 09, 2019
--

With its Kubernetes bet paying off, Cloud Foundry doubles down on developer experience

More than 50% of Fortune 500 companies are now using the open-source Cloud Foundry Platform-as-a-Service project — either directly or through vendors like Pivotal — to build, test and deploy their applications. Like so many other projects, including the likes of OpenStack, Cloud Foundry went through a bit of a transition in recent years as more and more developers started looking to containers — and especially the Kubernetes project — as the platform on which to develop. Now, however, the project is ready to focus on what has always differentiated it from its closed- and open-source competitors: the developer experience.

Long before Docker popularized containers for application deployment, Cloud Foundry had already bet on containers and written its own orchestration service. With all of the momentum behind Kubernetes, though, it’s no surprise that many in the Cloud Foundry community started to look at this new project to replace the ecosystem’s existing container technology.

Aug 27, 2019
--

Kubernetes – Introduction to Containers

Here at Percona’s Training and Education department, we are always working to keep our materials up-to-date and relevant to current technologies. In addition, we keep an eye out for which topics are “hot” and generating a lot of buzz. Unless you’ve been living under a rock for the past year, Kubernetes is the current buzzword/technology. This is the first post in a blog series in which we will dive into Kubernetes and explore using it with Percona XtraDB Cluster.

Editor’s disclaimer: This post is not intended to convince you to switch to a containerized environment. It is simply aimed at educating readers about a growing trend in the industry.

In The Beginning…

Let’s start at the beginning. First, there were hardware-based servers; we still have these today. Despite the prevalence of “the cloud,” many large companies still use private datacenters and run their databases and applications “the traditional way.” Stepping up from hardware, we venture into virtual machine territory. Popular solutions here are VMware (a commercial product), along with QEMU, KVM, and VirtualBox (all three open source).

Virtual machines (VMs) abstract away the hardware on which they run using emulation software, generally referred to as the hypervisor. This allows an entirely separate operating system (OS) to run “on top of” the OS running natively against the hardware. For example, you may have CentOS 7 running inside a VM, which in turn is running on your Mac laptop, or you could have Windows 10 running inside a VM, which is running on your Ubuntu 18 desktop PC.

There are several nice things about using VMs in your infrastructure. One is the ability to move a VM from one server to another; likewise, deploying 10 more VMs just like the first one usually takes just a couple of clicks or a few commands. Another is better utilization of hardware resources. If your servers have 8 CPU cores and 128GB of memory, running one application on that server might not fully utilize all that hardware. Instead, you can run multiple VMs on that one server, taking advantage of all the resources. Additionally, a VM lets you bundle your application, along with all associated libraries and configs, into one “unit” that can be deployed and copied very easily (we’ll come back to this concept in a minute).

One of the major downsides of the VM solution is that you have an entire, separate OS running. It is a huge waste of resources to run a CentOS 7 VM on top of a CentOS 7 hardware server. If you have five CentOS 7 VMs running, you have five separate Linux kernels (in addition to the host kernel), five separate copies of /usr, and five separate copies of the same libraries and application code taking up precious memory. Some might say that, in this setup, 90-99% of each VM is duplicating effort already being made by the base OS. Additionally, each VM deals with its own CPU context-switching management. And what happens when one VM needs a bit more CPU than another? There is the concept of CPU “stealing,” where one VM can take CPU from another. Imagine if it were your critical DB that had its CPU stolen!

What if you could eliminate all that duplication but still give your application the ability to be bundled and deployed across tens or hundreds of servers with just a few commands? Enter: containers.

What is a Container?

Simply put, a container is a type of “lightweight” virtual machine without all the overhead of running an independent operating system on top of a hypervisor. A container can be as small as a single line script or as large as <insert super complicated application with lots of libraries, code, images, etc.>.

With containers, you essentially get all of the “pros” of VMs with far fewer “cons,” mainly in the area of performance. A container is not an emulation the way a VM is. On Linux, the kernel provides control groups (cgroups) functionality, which can limit and isolate CPU, memory, disk, and network. This act of isolation is what creates the container; everything needed is ‘contained’ in one place/group. In more advanced setups, these cgroups can limit/throttle the resources given to each container. With containers, there is no need for an additional OS install: no emulation, no hypervisor. Because of this, you can achieve “near bare-metal performance” while using containers.
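To make that concrete, here is a minimal sketch, assuming Docker is installed and using the percona/percona-server image from Docker Hub; Docker exposes those cgroup limits directly as command-line flags:

# Launch a container capped at 1 CPU and 1GB of RAM; the kernel's
# cgroups enforce the limits -- no hypervisor or guest OS involved.
docker run -d --name db1 --cpus=1 --memory=1g \
  -e MYSQL_ROOT_PASSWORD=secret percona/percona-server:8.0

# Verify the limits Docker recorded for the container.
docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' db1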

Containers, and any applications running inside the containers, run inside their own, isolated, namespace, and thus, a container can only see its own process, disk contents, devices assigned, and granted network access.

Much like VMs, containers can be started, stopped, and copied from machine to machine, making deployment of applications across hundreds of servers quite easy.

One potential downside to containers is that they are inherently stateless. This means that if you have a running container, and make a change to its configuration or update a library, etc., that state is not saved if the container is restarted. The initial state of the launched container will be restored on restart. That said, most container implementations provide a method for supplying external storage which can hold stateful information to survive a restart or redeployment.
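As a sketch of that external-storage escape hatch, again assuming Docker and the same Percona image, a named volume keeps the data directory alive across container restarts and redeployments:

# Create a named volume that exists independently of any container.
docker volume create db-data

# Mount it at MySQL's data directory; the container can now be
# removed and relaunched without losing the database.
docker run -d --name db2 -v db-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret percona/percona-server:8.0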

Dock the Ship, Capt’n

The concept of “containers” is generic. Many different implementations of containers exist, much like different distributions of Linux. The most popular implementation in the Linux space is Docker. Docker is free, open-source software, but is also sold with a commercial license for more advanced features. Docker builds upon one of the earlier Linux container implementations, LXC.

Docker uses the concept of “images” to distribute containers. For example, you can download and install the Percona MySQL 8.0 image to your server, and from that image create as many separate, independent containers of ‘Percona MySQL 8’ as your hardware can handle. Without getting too deep into the specifics, Docker images are the end result of multiple compressed layers created during the image’s build process. Herein lies another pro for Docker: if multiple images share common layers, only one copy of each such layer needs to exist on a system, so the potential for space savings can be huge.
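You can see those layers for yourself; a quick sketch, assuming the image is available on Docker Hub:

# Pull the image; each compressed layer downloads separately, and
# layers already present locally are skipped ("Already exists").
docker pull percona/percona-server:8.0

# List the image's layers and the build steps that produced them.
docker history percona/percona-server:8.0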

Another nice feature of Docker is its built-in versioning capabilities. If you are familiar with the version control software Git, then you already know the concept of “tags,” which is extremely similar in Docker. Tags allow you to track and specify exact Docker images when creating and deploying them. For example, from the same image (percona/percona-server), you can deploy percona/percona-server:5.7 or percona/percona-server:8.0. Here, “5.7” and “8.0” are the tags used.
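In CLI terms, that looks something like the following sketch (both tags are published on Docker Hub):

# Pull two versions of the same image, side by side.
docker pull percona/percona-server:5.7
docker pull percona/percona-server:8.0

# The tag pins exactly which version each container runs.
docker run -d --name mysql57 -e MYSQL_ROOT_PASSWORD=secret percona/percona-server:5.7
docker run -d --name mysql80 -e MYSQL_ROOT_PASSWORD=secret percona/percona-server:8.0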

Docker also has a huge community ecosystem for sharing images. Visit https://hub.docker.com/ and search for just about any popular OSS project, and you will probably find that someone has already created an image for you to use.

Too Many Ships at Port

Docker containers are extremely simple to maintain, deploy, and use within your infrastructure. But what happens when you have hundreds of servers and are trying to manage thousands of Docker containers? Sure, if one server goes down, all you need to do, technically speaking, is re-launch those downed containers on another server. But do you have another server in your sea of servers with enough free capacity to handle the additional load? What if you were using persistent storage and need that volume moved along with the container? Running and managing large installations of containers can quickly become overwhelming.

Now we will discuss the topic of container orchestration.

Kubernetes

Kubernetes, pronounced koo-ber-NET-eez and often abbreviated K8S (there are 8 letters between the ‘K’ and the final ‘S’), is the current mainstream project for managing Docker containers at very large scale. Kubernetes is Greek for “governor,” “helmsman,” or “captain.” Rather than repeating it here, you can read more about the history of K8S on your own.

Terminology

K8S has many moving parts, as you might imagine. Take a look at this image.

[Diagram: Kubernetes architecture. Source: Wikipedia (modified)]

Nodes

At the bottom of the image are two ‘Kubernetes Nodes.’ These are usually your physical hardware servers, but they could also be VMs at your cloud provider. These nodes have a minimal OS installed, along with the other necessary packages like K8S and Docker.

The kubelet is a management binary used by K8S to watch existing containers, run health checks, and launch new containers.

Kube-proxy is a simple port-mapping proxy service. Let’s say you had a simple webserver running as a container on 172.14.11.109:34223. Your end users would access an external/public IP over port 443 as normal, and kube-proxy would map and route that external access to the correct internal container.
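In practice, you declare that mapping as a Kubernetes Service rather than configuring kube-proxy by hand. A minimal sketch, assuming a deployment named webserver already exists and kubectl is configured against the cluster (the names and ports echo the example above):

# Expose the webserver pods: external port 443 maps to the
# container's port 34223, and kube-proxy routes the traffic.
kubectl expose deployment webserver --type=LoadBalancer \
  --port=443 --target-port=34223

# Confirm the port mapping and the assigned external IP.
kubectl get service webserver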

Pods

The next concept to understand is the pod. This is typically the base unit discussed when talking about what is running on a node. A pod can be a single container, or it can be an entire setup running multiple containers, with mapped external storage, that should be grouped together as a single deployable unit.
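Pods are declared in YAML and submitted to the cluster through the API. Here is a minimal single-container sketch (all names are illustrative), piped straight to kubectl:

# Define and launch the simplest possible pod: one container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: web
    image: nginx:1.17
    ports:
    - containerPort: 80
EOF

# Watch it get scheduled onto a node and pick up a pod IP.
kubectl get pod webserver -o wide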

Networking

Networking within K8S is a complex topic, and a deep discussion is not suitable for this post. At an entry level, what you need to know is that K8S creates an “overlay” network, allowing a container in a pod on node1 to talk to a container in a pod on node322 in another datacenter. Should your company policy require it, you can create multiple overlay networks across your entire K8S cluster and use them to isolate traffic between pods (i.e., think old-school VLANs).
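A quick, illustrative check of the overlay (the pod name and IP are hypothetical; assumes pods running on different nodes):

# Each pod gets its own IP on the overlay network, whichever node it lands on.
kubectl get pods -o wide

# From inside one pod, reach another pod directly by its overlay IP
# (assumes the container image ships a ping binary).
kubectl exec webserver -- ping -c 1 10.244.3.17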

Master Node

At the top of the diagram, you can see the developer/operator interacting with the K8S master node. This node runs several processes which are critical to K8S operation:

kube-apiserver: A simple server to handle/route/process API requests from other K8S tools and even your own custom tools.

etcd: A highly available key-value store. This is where K8S stores all of its stateful data.

kube-scheduler: Manages inventory and decides where to launch new pods based on available node resources.
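All of these components sit behind the API server, and kubectl is simply a client for it. A couple of quick probes, assuming a configured kubectl:

# Ask the API server where the control plane endpoints live.
kubectl cluster-info

# List the nodes the scheduler can place pods on.
kubectl get nodes

# Check scheduler and etcd health, as reported by the API server.
kubectl get componentstatuses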

You might have noticed that there is only one master node in the above diagram. This is the default installation. It is not highly available, and should the master node go down, the entire K8S infrastructure will go down with it. In a true production environment, it is highly recommended that you run the master node in a highly available manner.

Wrap It Up

In this post, we discussed how traditional hardware-based setups have been slowly migrating, first towards virtual machine setups and now to containerized deployments. We brought up the issues surrounding managing large container environments and introduced the leading container orchestration platform, Kubernetes. Lastly, we dove into K8S’s various components and architecture.

Once you have your K8S cluster set up with a master node, and many worker nodes, you’ll finally be able to launch some pods and have services accessible. This is what we will do in part 2 of this blog series.

Aug 26, 2019
--

VMware is bringing VMs and containers together, taking advantage of Heptio acquisition

At VMworld today in San Francisco, VMware introduced a new set of services for managing virtual machines and containers in a single view. Called Tanzu, the product takes advantage of the knowledge the company gained when it acquired Heptio last year.

As companies face an increasingly fragmented landscape of maintaining traditional virtual machines alongside a more modern, containerized Kubernetes environment, managing the two together has created its own set of management challenges for IT. This is further complicated by trying to manage resources across multiple clouds, as well as in-house data centers. Finally, companies need to manage legacy applications while looking to build newer containerized applications.

VMware’s Craig McLuckie and fellow Heptio co-founder Joe Beda were part of the original Kubernetes development team. They came to VMware via last year’s acquisition. McLuckie believes that Tanzu can help with all of this by applying the power of Kubernetes across this complex management landscape.

“The intent is to construct a portfolio that has a set of assets that cover every one of these areas, a robust set of capabilities that bring the Kubernetes substrate everywhere — a control plane that enables organizations to start to think about [and view] these highly fragmented deployments with Kubernetes [as the] common lens, and then the technologies you need to be able to bring existing applications forward and to build new application and to support third-party vendors bringing their applications into [this],” McLuckie explained.

It’s an ambitious vision that involves bringing together not only VMware’s traditional VM management tooling and Kubernetes, but also open-source pieces and other recent acquisitions, including Bitnami and CloudHealth, along with Wavefront, which it acquired in 2017. Although the vision was defined long before the acquisition of Pivotal last week, Pivotal will also play a role in this: originally that was to be as a partner, but now it will be as part of VMware.

The idea is to eventually cover the entire gamut of building, running and managing applications in the enterprise. Among the key pieces introduced today as technology previews are the Tanzu Mission Control, a tool for managing Kubernetes clusters wherever they live, and Project Pacific, which embeds Kubernetes natively into vSphere, the company’s virtualization platform, bringing together virtual machines and containers.

VMware Tanzu (Slide: VMware)

McLuckie sees bringing virtual machines and Kubernetes together in this fashion as providing a couple of key advantages. “One is being able to bring a robust, modern API-driven way of thinking about accessing resources. And it turns out that there is this really good technology for that. It’s called Kubernetes. So being able to bring a Kubernetes control plane to vSphere is creating a new set of experiences for traditional VMware customers that is moving much closer to a kind of cloud-like agile infrastructure type of experience. At the same time, vSphere is bringing a whole bunch of capabilities to Kubernetes that’s creating more efficient isolation capabilities,” he said.

When you think about the cloud-native vision, it has always been about enabling companies to manage resources wherever they live through a single lens, and that is what the set of capabilities VMware has brought together under Tanzu is intended to do. “Kubernetes is a way of bringing a control metaphor to modern IT processes. You provide an expression of what you want to have happen, and then Kubernetes takes that and interprets it and drives the world into that desired state,” McLuckie explained.

If VMware can take all of the pieces in the Tanzu vision and make this happen, it will be as powerful as McLuckie believes it to be. It’s certainly an interesting attempt to bring all of a company’s application and infrastructure creation and management under one roof using Kubernetes as the glue — and with Heptio co-founders McLuckie and Beda involved, it certainly has the expertise in place to drive the vision.

Aug 23, 2019
--

How Pivotal got bailed out by fellow Dell family member, VMware

When Dell acquired EMC in 2016 for $67 billion, it created a complicated consortium of interconnected organizations. Some, like VMware and Pivotal, operate as completely separate companies. They have their own boards of directors, can acquire companies, and are publicly traded on the stock market. Yet they work closely within Dell, partnering where it makes sense. When Pivotal’s stock price plunged recently, VMware saved the day, buying the faltering company yesterday for $2.7 billion.

Pivotal went public last year, and sometimes struggled, but in June the wheels started to come off after a poor quarterly earnings report. The company had what MarketWatch aptly called “a train wreck of a quarter.”

How bad was it? So bad that its stock price was down 42% the day after it reported its earnings. While the quarter itself wasn’t so bad, with revenue up year over year, the guidance was another story. The company cut its 2020 revenue guidance by $40-$50 million and the guidance it gave for the upcoming 2Q 19 was also considerably lower than consensus Wall Street estimates.

The stock price plunged from a high of $21.44 on May 30th to a low of $8.30 on August 14th. The company’s market cap plunged in that same time period falling from $5.828 billion on May 30th to $2.257 billion on August 14th. That’s when VMware admitted it was thinking about buying the struggling company.

Aug 19, 2019
--

Five great reasons to attend TechCrunch’s Enterprise show Sept. 5 in SF

The vast enterprise tech category is Silicon Valley’s richest, and today it’s poised to change faster than ever before. That’s probably the biggest reason to come to TechCrunch’s first-ever show focused entirely on enterprise. But here are five more reasons to commit to joining TechCrunch’s editors on September 5 at San Francisco’s Yerba Buena Center for an outstanding day (agenda here) addressing the tech tsunami sweeping through enterprise. 

No. 1: Artificial intelligence
At once the most consequential and most hyped technology, no one doubts that AI will change business software and increase productivity like few, if any, technologies before it. To peek ahead into that future, TechCrunch will interview Andrew Ng, arguably the world’s most experienced AI practitioner at huge companies (Baidu, Google) as well as at startups. AI will be a theme across every session, but we’ll address it again head-on in a panel with investor Jocelyn Goldfein (Zetta), founder Bindu Reddy (Reality Engines) and executive John Ball (Salesforce / Einstein). 

No. 2: Data, the cloud and Kubernetes
If AI is at the dawn of tomorrow, cloud transformation is the high noon of today. Indeed, 90% of the world’s data was created in the past two years, and no enterprise can keep its data hoard on-prem forever. Azure CTO Mark Russinovich will discuss Microsoft’s vision for the cloud. Leaders in the open-source Kubernetes revolution — Joe Beda (VMware), Aparna Sinha (Google) and others — will dig into what Kubernetes means to companies making the move to cloud. And last, there is the question of how to find signal in all the data — which will bring three visionary founders to the stage: Benoit Dageville (Snowflake), Ali Ghodsi (Databricks) and Murli Thirumale (Portworx).

No. 3: Everything else on the main stage!
Let’s start with a fireside chat with SAP CEO Bill McDermott and Qualtrics Chief Experience Officer Julie Larson-Green. We have top investors talking about where they are making their bets, and security experts talking data and privacy. And then there is quantum computing, the technology revolution waiting on the other side of AI: Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; Jim Clarke, the director of quantum hardware at Intel Labs; and Krysta Svore, who leads Microsoft’s quantum effort.

All told, there are 21 programming sessions.

No. 4: Network and get your questions answered
There will be two Q&A breakout sessions with top enterprise investors; this is for founders (and anyone else) to query investors directly. Plus, TechCrunch’s unbeatable CrunchMatch app makes it really easy to set up meetings with the other attendees, an incredible array of folks, plus the 20 early-stage startups exhibiting on the expo floor.

No. 5: SAP
Enterprise giant SAP is our sponsor for the show, and they are not only bringing a squad of top executives, they are producing four parallel track sessions featuring SAP Chief Innovation Officer Max Wessel, SAP Chief Designer and Futurist Martin Wezowski, and SAP.iO Managing Director Ram Jambunathan, in sessions covering how to scale up an enterprise startup, how startups win large enterprise customers, and what the enterprise future looks like.

Check out the complete agenda. Don’t miss this show! This line-up is a view into the future like none other. 

Grab your $349 tickets today, and don’t wait until the day of the show to book, because prices go up at the door!

We still have two Startup Demo Tables left. Each table comes with four tickets and a prime location to demo your startup on the expo floor. Book your demo table now before they’re all gone!

Aug 14, 2019
--

VMware says it’s looking to acquire Pivotal

VMware today confirmed that it is in talks to acquire software development platform Pivotal Software, the service best known for commercializing the open-source Cloud Foundry platform. The proposed transaction would see VMware acquire all outstanding Pivotal Class A stock for $15 per share, a significant markup over Pivotal’s current share price (which unsurprisingly shot up right after the announcement).

Pivotal’s shares have struggled since the company’s IPO in April 2018. The company was originally spun out of EMC Corporation (now DellEMC) and VMware in 2012 to focus on Cloud Foundry, an open-source software development platform that is currently in use by the majority of Fortune 500 companies. A lot of these enterprises are working with Pivotal to support their Cloud Foundry efforts. Dell itself continues to own the majority of VMware and Pivotal, and VMware also owns an interest in Pivotal already and sells Pivotal’s services to its customers, as well. It’s a bit of an ouroboros of a transaction.

Pivotal Cloud Foundry was always the company’s main product, but it also offered additional consulting services on top of that. Despite improving its execution since going public, Pivotal still lost $31.7 million in its last financial quarter as its stock price traded at just over half of the IPO price. Indeed, the $15 per share VMware is offering is identical to Pivotal’s IPO price.

An acquisition by VMware would bring Pivotal’s journey full circle, though this is surely not the journey the Pivotal team expected. VMware is a Cloud Foundry Foundation platinum member, together with Pivotal, DellEMC, IBM, SAP and SUSE, so I wouldn’t expect any major changes in VMware’s support of the overall open-source ecosystem behind Pivotal’s core platform.

It remains to be seen whether the acquisition will indeed happen, though. In a press release, VMware acknowledged the discussion between the two companies but noted that “there can be no assurance that any such agreement regarding the potential transaction will occur, and VMware does not intend to communicate further on this matter unless and until a definitive agreement is reached.” That’s the kind of sentence lawyers like to write. I would be quite surprised if this deal didn’t happen, though.

Buying Pivotal would also make sense in the grand scheme of VMware’s recent acquisitions. Earlier this year, the company acquired Bitnami, and last year it acquired Heptio, the startup founded by two of the three co-founders of the Kubernetes project, which now forms the basis of many new enterprise cloud deployments and, most recently, Pivotal Cloud Foundry.

Aug 07, 2019
--

Rookout lands $8M Series A to expand debugging platform

Rookout, a startup that provides debugging across a variety of environments, including serverless and containers, announced an $8 million Series A investment today. It plans to use the money to expand beyond its debugging roots.

The round was led by Cisco Investments, along with existing investors TLV Partners and Emerge. Nat Friedman, CEO of GitHub; John Kodumal, CTO and co-founder of LaunchDarkly; and Raymond Colletti, VP of revenue at Codecov, also participated.

“Rookout from day one has been working to provide production debugging and collection capabilities to all platforms,” Or Weis, co-founder and CEO of Rookout, told TechCrunch. That has included serverless platforms like AWS Lambda, containers and Kubernetes, and Platform-as-a-Service offerings like Google App Engine and Elastic Beanstalk.

The company is also giving visibility into platforms that are sometimes hard to observe because of the ephemeral nature of the technology, and it is moving beyond its pure debugging capabilities. “In the last year, we’ve discovered that our customers are finding completely new ways to use Rookout’s code-level data collection capabilities and that we need to accommodate, support and enhance the many varied uses of code-level observability and pipelining,” Weis said in a statement.

It was particularly telling that a company like Cisco was deeply involved in the round. Rob Salvagno, vice president of Cisco Global Corporate Development and Cisco Investments, likes the developer focus of the company.

“Developers have become key influencers of enterprise IT spend. By collecting data on-demand without re-deploying, Rookout created a Developer-centric software, which short-circuits complexities in the production debugging, increases Developer efficiency and reduces the friction which exists between IT Ops and Developers,” Salvagno said in a statement.

Rookout, which launched in 2017, has offices in San Francisco and Tel Aviv, with a total of 20 employees. It has raised more than $12 million.

Aug 05, 2019
--

Mesosphere changes name to D2IQ, shifts focus to Kubernetes, cloud native

Mesosphere was born as the commercial face of the open-source Mesos project. It was surely a clever solution for using data center resources much more efficiently, but times change and companies change. Today the company announced it was changing its name to Day2IQ, or D2IQ for short, and fixing its sights on Kubernetes and cloud native, which have grown quickly in the years since Mesos appeared on the scene.

D2IQ CEO Mike Fey says that the name reflects the company’s new approach. Instead of focusing entirely on the Mesos project, it wants to concentrate on helping more mature organizations adopt cloud native technologies.

“We felt like the Mesosphere name was somewhat constrictive. It made statements about the company that really allocated us to a given technology, instead of to our core mission, which is supporting successful Day Two operations, making cloud native a viable approach not just for the early adopters, but for everybody,” Fey explained.

Fey is careful to point out that the company will continue to support the Mesos-driven DC/OS solution, but the general focus of the company has shifted, and the new name is meant to illustrate that. “The Mesos product line is still doing well, and there are things that it does that nothing else can deliver on yet. So we’re not abandoning that totally, but we do see that Kubernetes is very powerful, and the community behind it is amazing, and we want to be a value-added member of that community,” he said.

He adds that this is not about jumping on the cloud-native bandwagon all of a sudden. He points out his company has had a Kubernetes product for more than a year running on top of DC/OS, and it has been a contributing member to the cloud native community.

It’s not just about a name change and refocusing the company and the brand; it also involves several new cloud-native products that the company has built to serve the more mature organizations that inspired the new name.

For starters, it’s introducing its own flavor of Kubernetes called Konvoy, which, it says, provides an “enterprise-grade Kubernetes experience.” The company will also provide a support and training layer, which it believes is a key missing piece, and one that is required by larger organizations looking to move to cloud native.

In addition, it is offering a data integration layer, which is designed to help integrate large amounts of data in a cloud-native fashion. To that end, it is introducing a beta of KUDO, an open-source, cloud-native tool for building operators for stateful services in Kubernetes. The company has already offered to donate this tool to the Cloud Native Computing Foundation, the open-source organization that houses Kubernetes and other cloud-native projects.

The company faces stiff competition in this space from some heavy hitters, like the newly combined IBM and Red Hat, but it believes by adhering to a strong open-source ethos, it can move beyond its Mesos roots to become a player in the cloud-native space. Time will tell if it made a good bet.
