Sep 11, 2020

How Much Memory Does the Process Really Take on Linux?

One of the questions you will often face when operating a Linux-based system is managing the memory budget. If a program uses more memory than is available, swapping may kick in, often with a terrible performance impact, or the Out of Memory (OOM) Killer may be invoked, killing the process altogether.

Before adjusting memory usage, whether through configuration, optimization, or simply managing the load, it helps to know how much memory a given program really uses.

If your system essentially runs a single user program (there is always a handful of system processes), it is easy. For example, if I run a dedicated MySQL server on a system with 128GB of RAM, I can use “used” as a good proxy for what is consumed and “available” for what can still be used.

root@rocky:/mnt/data2/mysql# free -h
              total        used        free      shared  buff/cache   available
Mem:          125Gi        88Gi       5.2Gi       2.0Mi        32Gi        36Gi
Swap:          63Gi        33Mi        63Gi

There is just swap to keep in mind, but if the system is not swapping heavily, then even when some swap space is occupied, it usually holds “unneeded junk” which does not need to factor into the calculation.
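
The “free” utility reads these numbers from /proc/meminfo, so the same check is easy to script. A minimal sketch, assuming the standard MemTotal, MemAvailable, SwapTotal, and SwapFree fields exposed by modern kernels:

# Show the raw counters behind "free"
grep -E '^(MemTotal|MemAvailable|SwapTotal|SwapFree):' /proc/meminfo

# Compute swap currently in use, in MB
awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {printf "Swap used: %.0f MB\n", (t-f)/1024}' /proc/meminfo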

If you’re using Percona Monitoring and Management you will see it in Memory Utilization:

Percona Monitoring and Management memory utilization

And in the Swap Activity graphs in the “Node Summary” dashboard:

swap activity

If you’re running multiple processes that share system resources, things get complicated because there is no one-to-one mapping between “used” memory and processes.

Let’s list just some of those complexities:

  • Copy-on-Write semantics for “fork” of a process – All processes share the same “used” memory until one of them modifies data; only then does it get its own copy.
  • Shared memory – As the name says, shared memory is memory that is shared across different processes.
  • Shared libraries – Libraries are mapped into every process that uses them and are part of its memory usage, though they are shared among all processes which use the same library.
  • Memory-mapped files and anonymous mmap() – There are a lot of complicated details here. Check out Memory Mapped Files for more details; this discussion on StackExchange also has some interesting bits.

With that complexity in mind, let’s look at the output of “top”, one of the most common programs for checking the current load on Linux. By default, “top” sorts processes by CPU usage, so we’ll press “Shift-M” to sort by (resident) memory usage instead.

The first thing you will notice is that this system, which has only 1GB of physical memory, has a number of processes whose virtual memory (VIRT) is in excess of 1GB.

For various reasons, modern memory allocators and programming languages (e.g., Go) can allocate a lot of virtual memory that they do not really use, so virtual memory usage is of little value for understanding how much real memory a process needs to operate.
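
You can see this gap without top as well; for instance, ps prints both values for a single process (VSZ and RSS are reported in kilobytes; the PID here is just the prometheus example used later in this post):

# Compare virtual size (VSZ) with resident size (RSS)
ps -o pid,vsz,rss,comm -p 3767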

Then there is resident memory (RES), which shows us how much physical memory the process really uses. This is good… but there is a problem: memory can be non-resident either because it was never really “used” and exists as virtual memory only, or because it was swapped out.

If we look into the stats the kernel actually provides for the process, we’ll see there is more data available:

root@PMM2Server:~# cat /proc/3767/status
Name:   prometheus
Umask:  0022
State:  S (sleeping)
Tgid:   3767
Ngid:   0
Pid:    3767
PPid:   3698
TracerPid:      0
Uid:    1000    1000    1000    1000
Gid:    1000    1000    1000    1000
FDSize: 256
Groups: 1000
NStgid: 3767    17
NSpid:  3767    17
NSpgid: 3767    17
NSsid:  3698    1
VmPeak:  3111416 kB
VmSize:  3111416 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:    608596 kB
VmRSS:    291356 kB
RssAnon:          287336 kB
RssFile:            4020 kB
RssShmem:              0 kB
VmData:  1759440 kB
VmStk:       132 kB
VmExe:     26112 kB
VmLib:         8 kB
VmPTE:      3884 kB
VmSwap:   743116 kB
HugetlbPages:          0 kB
CoreDumping:    0
Threads:        11
SigQ:   0/3695
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffe3bfa3a00
SigIgn: 0000000000000000
SigCgt: fffffffe7fc1feff
CapInh: 00000000a80425fb
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
NoNewPrivs:     0
Seccomp:        2
Speculation_Store_Bypass:       vulnerable
Cpus_allowed:   1
Cpus_allowed_list:      0
Mems_allowed:   00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:      0
voluntary_ctxt_switches:        346
nonvoluntary_ctxt_switches:     545

VmSwap is a particularly interesting data point, as it shows the amount of this process’s memory that was swapped out.

VmRSS+VmSwap is a much better indication of the “physical” memory the process needs. In the case above, it comes to 1010MB, which is a lot higher than the 284MB resident set size but also a lot less than the 3038MB “virtual memory” size of this process.
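
If you don’t want to add these fields up by eye, a one-liner along these lines does it (a sketch; substitute the PID you are interested in):

# "Real" memory need: resident pages plus pages pushed out to swap
awk '/^(VmRSS|VmSwap):/ {sum += $2} END {printf "%.0f MB\n", sum/1024}' /proc/3767/status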

The problem with the swapped-out portion, though, is that we do not know whether it was swapped out “for good”. It may be dead weight, for example code or data which is not used in your particular use case, or it may have been swapped out due to memory pressure, in which case we really need it in real memory (RAM) for optimal performance but do not have enough memory available.

The helpful data point to look at in this case is major page faults. It is not in the output above but is available in another file, /proc/[pid]/stat. There is some helpful information on Stack Overflow.
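
For reference, majflt is the 12th whitespace-separated field of /proc/[pid]/stat (counting from 1), so an admittedly naive way to pull it out looks like this (the simple split breaks if the process name in parentheses contains spaces):

# Field 12 is majflt: major page faults since the process started
awk '{print "major page faults:", $12}' /proc/3767/stat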

A high number of major page faults indicates that what the program needs is not in physical memory. That includes swap activity, but also references to currently unmapped code in a shared library, or to data in memory-mapped files which is currently not in RAM. In any case, a high rate of major page faults often indicates RAM pressure.
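
At the system level, vmstat is the classic tool for watching swap traffic; its si and so columns show how much is being swapped in and out per second:

# Print memory and swap activity every second; watch the si/so columns
vmstat 1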

Let’s go back to our friend “top”, though, and see if we can get more helpful information displayed. You can use the “F” keyboard shortcut to select the fields you want displayed.

You can add the SWAP, Major Faults Delta, and USED columns to display all the items we spoke about!

Looking at this picture, we can see that a large portion of the “prometheus” process is swapped out and that it takes 2K major page faults per second, pointing to the fact that it is likely suffering.

The “clickhouse-serv” process is another interesting example: it shows over 4G of virtual size but relatively little memory used, and far fewer major page faults.

Finally, let’s look at the “percona-qan-api” process, which has a very small portion swapped out but shows 2K major page faults as well. I’m honestly not quite sure what is going on there, but it does not seem to be swap-IO related.

Summary

Want to see how much memory a process is using? Do not look at the virtual memory size or resident memory size alone, but at “used” memory, defined as resident memory size plus swap usage. Want to see if there is actual memory pressure? Check the system-wide swap input/output statistics, as well as the major page fault rate for the process you’re investigating.
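
To apply that rule of thumb across the whole box, a small loop over /proc works; this is a rough sketch that silently skips kernel threads (which report no VmRSS) and any process that exits mid-scan:

# Rank processes by VmRSS+VmSwap, the "used" memory defined above
for f in /proc/[0-9]*/status; do
  awk '/^Name:/ {name=$2}
       /^(VmRSS|VmSwap):/ {sum += $2}
       END {if (sum) printf "%8d kB  %s\n", sum, name}' "$f" 2>/dev/null
done | sort -rn | head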

Aug 13, 2020

Mirantis acquires Lens, an IDE for Kubernetes

Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes-integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the company that originally developed Lens.

Lens itself, though, was most recently owned by Lakend Labs, which describes itself as “a collective of cloud native compute geeks and technologists” that is “committed to preserving and making available the open-source software and products of Kontena.” Lakend open-sourced Lens a few months ago.

“The mission of Mirantis is very simple: We want to be — for the enterprise — the fastest way to [build] modern apps at scale,” Mirantis CEO Adrian Ionel told me. “We believe that enterprises are constantly undergoing this cycle of modernizing the way they build applications from one wave to the next — and we want to provide products to the enterprise that help them make that happen.”

Right now, that means a focus on helping enterprises build cloud-native applications at scale and, almost by default, that means providing these companies with all kinds of container infrastructure services.

“But there is another piece of the story that’s always been going through our minds, which is, how do we become more developer-centric and developer-focused, because, as we’ve all seen in the past 10 years, developers have become more and more in charge of what services and infrastructure they’re actually using,” Ionel explained. And that’s where the Kontena and Lens acquisitions fit in. Managing Kubernetes clusters, after all, isn’t trivial — yet now developers are often tasked with managing and monitoring how their applications interact with their company’s infrastructure.

“Lens makes it dramatically easier for developers to work with Kubernetes, to build and deploy their applications on Kubernetes, and it’s just a huge obstacle-remover for people who are turned off by the complexity of Kubernetes to get more value,” he added.

“I’m very excited to see that we found a common vision with Adrian for how to incorporate Lens and how to make life for developers more enjoyable in this cloud-native technology landscape,” Miska Kaipiainen, the former CEO of Kontena and now Mirantis’ director of Engineering, told me.

He describes Lens as an IDE for Kubernetes. While you could obviously replicate Lens’ functionality with existing tools, Kaipiainen argues that it would take 20 different tools to do this. “One of them could be for monitoring, another could be for logs. A third one is for command-line configuration, and so forth and so forth,” he said. “What we have been trying to do with Lens is that we are bringing all these technologies [together] and provide one single, unified, easy to use interface for developers, so they can keep working on their workloads and on their clusters, without ever losing focus and the context of what they are working on.”

Among other things, Lens includes a context-aware terminal, multi-cluster management capabilities that work across clouds and support for the open-source Prometheus monitoring service.

For Mirantis, Lens is a very strategic investment and the company will continue to develop the service. Indeed, Ionel said the Lens team now basically has unlimited resources.

Looking ahead, Kaipiainen said the team is looking at adding extensions to Lens through an API within the next couple of months. “Through this extension API, we are actually able to collaborate and work more closely with other technology vendors within the cloud technology landscape so they can start plugging directly into the Lens UI and visualize the data coming from their components, so that will make it very powerful.”

Ionel also added that the company is working on adding more features for larger software teams to Lens, which is currently a single-user product. A lot of users are already using Lens in the context of very large development teams, after all.

While the core Lens tools will remain free and open source, Mirantis will likely charge for some new features that require a centralized service for managing them. What exactly that will look like remains to be seen, though.

If you want to give Lens a try, you can download the Windows, macOS and Linux binaries here.

Jul 08, 2020

Google launches the Open Usage Commons, a new organization for managing open-source trademarks

Google, in collaboration with a number of academic leaders and its consulting partner SADA Systems, today announced the launch of the Open Usage Commons, a new organization that aims to help open-source projects manage their trademarks.

To be fair, at first glance, open-source trademarks may not sound like it would be a major problem (or even a really interesting topic), but there’s more here than meets the eye. As Google’s director of open source Chris DiBona told me, trademarks have increasingly become an issue for open-source projects, not necessarily because there have been legal issues around them, but because commercial entities that want to use the logo or name of an open-source project on their websites, for example, don’t have the reassurance that they are free to use those trademarks.

“One of the things that’s been rearing its ugly head over the last couple years has been trademarks,” he told me. “There’s not a lot of trademarks in open-source software in general, but particularly at Google, and frankly the higher tier, the more popular open-source projects, you see them more and more over the last five years. If you look at open-source licensing, they don’t treat trademarks at all the way they do copyright and patents, even Apache, which is my favorite license, they basically say, nope, not touching it, not our problem, you go talk.”

Traditionally, open-source licenses didn’t cover trademarks because there simply weren’t a lot of trademarks in the ecosystem to worry about. One of the exceptions here was Linux, a trademark that is now managed by the Linux Mark Institute on behalf of Linus Torvalds.

With that, commercial companies aren’t sure how to handle this situation and developers also don’t know how to respond to these companies when they ask them questions about their trademarks.

“What we wanted to do is give guidance around how you can share trademarks in the same way that you would share patents and copyright in an open-source license […],” DiBona explained. “And the idea is to basically provide that guidance, you know, provide that trademarks file, if you will, that you include in your source code.”

Google itself is putting three of its own open-source trademarks into this new organization: the Angular web application framework for mobile, the Gerrit code review tool and the Istio service mesh. “All three of them are kind of perfect for this sort of experiment because they’re under active development at Google, they have a trademark associated with them, they have logos and, in some cases, a mascot.”

One of those mascots is Diffi, the Kung Fu Code Review Cuckoo, because, as DiBona noted, “we were trying to come up with literally the worst mascot we could possibly come up with.” It’s now up to the Open Usage Commons to manage that trademark.

DiBona also noted that all three projects have third parties shipping products based on these projects (think Gerrit as a service).

Another thing DiBona stressed is that this is an independent organization. Besides himself, Jen Phillips, a senior engineering manager for open source at Google, is also on the board. But the team also brought in SADA’s CTO Miles Ward (who was previously at Google); Allison Randal, the architect of the Parrot virtual machine and member of the board of directors of the Perl Foundation and OpenStack Foundation, among others; Charles Lee Isbell Jr., the dean of the Georgia Institute of Technology College of Computing; and Cliff Lampe, a professor at the School of Information at the University of Michigan and a “rising star,” as DiBona pointed out.

“These are people who really have the best interests of computer science at heart, which is why we’re doing this,” DiBona noted. “Because the thing about open source — people talk about it all the time in the context of business and all the rest. The reason I got into it is because through open source we could work with other people in this sort of fertile middle space and sort of know what the deal was.”

Update: Even though Google argues that the Open Usage Commons is complementary to other open-source organizations, the Cloud Native Computing Foundation (CNCF) released the following statement by Chris Aniszczyk, the CNCF’s CTO: “Our community members are perplexed that Google has chosen to not contribute the Istio project to the Cloud Native Computing Foundation (CNCF), but we are happy to help guide them to resubmit their old project proposal from 2017 at any time. In the end, our community remains focused on building and supporting our service mesh projects like Envoy, linkerd and interoperability efforts like the Service Mesh Interface (SMI). The CNCF will continue to be the center of gravity of cloud native and service mesh collaboration and innovation.”

 

Jul 08, 2020

Suse acquires Kubernetes management platform Rancher Labs

Suse, which describes itself as “the world’s largest independent open source company,” today announced that it has acquired Rancher Labs, a company that has long focused on making it easier for enterprises to manage their container clusters.

The two companies did not disclose the price of the acquisition, but Rancher was well funded, with a total of $95 million in investments. It’s also worth mentioning that it has only been a few months since the company announced its $40 million Series D round led by Telstra Ventures. Other investors include the likes of Mayfield and Nexus Venture Partners, GRC SinoGreen and F&G Ventures.

Like similar companies, Rancher’s focus was first on Docker infrastructure before it shifted its emphasis to Kubernetes once that became the de facto standard for container orchestration. Unsurprisingly, this is also why Suse is now acquiring the company. After a number of ups and downs — and various ownership changes — Suse has now found its footing again, and today’s acquisition shows that it’s aiming to capitalize on its current strengths.

Just last month, the company reported that the annual contract value of its bookings increased by 30% year over year and that it saw a 63% increase in customer deals worth more than $1 million in the last quarter, with its cloud revenue growing 70%. While it is still in the Linux distribution business the company was founded on, today’s Suse is a very different company, offering various enterprise platforms (including its Cloud Foundry-based Cloud Application Platform), solutions and services. And while it already offered a Kubernetes-based container platform, Rancher’s expertise will only help it build out this business.

“This is an incredible moment for our industry, as two open source leaders are joining forces. The merger of a leader in Enterprise Linux, Edge Computing and AI with a leader in Enterprise Kubernetes Management will disrupt the market to help customers accelerate their digital transformation journeys,” said Suse CEO Melissa Di Donato in today’s announcement. “Only the combination of SUSE and Rancher will have the depth of a globally supported and 100% true open source portfolio, including cloud native technologies, to help our customers seamlessly innovate across their business from the edge to the core to the cloud.”

The company describes today’s acquisition as the first step in its “inorganic growth strategy” and Di Donato notes that this acquisition will allow the company to “play an even more strategic role with cloud service providers, independent hardware vendors, systems integrators and value-added resellers who are eager to provide greater customer experiences.”

Jun 11, 2020

OpenStack adds the StarlingX edge computing stack to its top-level projects

The OpenStack Foundation today announced that StarlingX, a container-based system for running edge deployments, is now a top-level project. With this, it joins the main OpenStack private and public cloud infrastructure project, the Airship lifecycle management system, Kata Containers and the Zuul CI/CD platform.

What makes StarlingX a bit different from some of these other projects is that it is a full stack for edge deployments — and in that respect, it’s maybe more akin to OpenStack than the other projects in the foundation’s stable. It uses open-source components from the Ceph storage platform, the KVM virtualization solution, Kubernetes and, of course, OpenStack and Linux. The promise here is that StarlingX can provide users with an easy way to deploy container and VM workloads to the edge, all while being scalable, lightweight and providing low-latency access to the services hosted on the platform.

Early StarlingX adopters include China UnionPay, China Unicom and T-Systems. The original codebase was contributed to the foundation by Intel and Wind River Systems in 2018. Since then, the project has seen 7,108 commits from 211 authors.

“The StarlingX community has made great progress in the last two years, not only in building great open source software but also in building a productive and diverse community of contributors,” said Ildiko Vancsa, ecosystem technical lead at the OpenStack Foundation. “The core platform for low-latency and high-performance applications has been enhanced with a container-based, distributed cloud architecture, secure booting, TPM device enablement, certificate management and container isolation. StarlingX 4.0, slated for release later this year, will feature enhancements such as support for Kata Containers as a container runtime, integration of the Ussuri version of OpenStack, and containerization of the remaining platform services.”

It’s worth remembering that the OpenStack Foundation has gone through a few changes in recent years. The most important of these is that it is now taking on other open-source infrastructure projects that are not part of the core OpenStack project but are strategically aligned with the organization’s mission. The first of these to graduate out of the pilot project phase and become top-level projects were Kata Containers and Zuul in April 2019, with Airship joining them in October.

Currently, the only pilot project for the OpenStack Foundation is its OpenInfra Labs project, a community of commercial vendors and academic institutions, including the likes of Boston University, Harvard, MIT, Intel and Red Hat, that are looking at how to better test open-source code in production-like environments.

 

May 04, 2020

IBM and Red Hat expand their telco, edge and AI enterprise offerings

At its Think Digital conference, IBM and Red Hat today announced a number of new services that all center around 5G edge and AI. The fact that the company is focusing on these two areas doesn’t come as a surprise, given that edge and AI are two of the fastest-growing businesses in enterprise computing. Virtually every telecom company is now looking at how to best capitalize on the upcoming 5G rollouts, and most forward-looking enterprises are trying to figure out how to best plan around this for their own needs.

As IBM’s recently minted president Jim Whitehurst told me ahead of today’s announcement, he believes that IBM (in combination with Red Hat) is able to offer enterprises a very differentiated service because, unlike the large hyper clouds, IBM isn’t interested in locking these companies into a homogeneous cloud.

“Where IBM is competitively differentiated, is around how we think about helping clients on a journey to what we call hybrid cloud,” said Whitehurst, who hasn’t done a lot of media interviews since he took the new role, which still includes managing Red Hat. “Honestly, everybody has hybrid clouds. I wish we had a more differentiated term. One of the things that’s different is how we’re talking about how you think about an application portfolio that, by necessity, you’re going to have in multiple ways. If you’re a large enterprise, you probably have a mainframe running a set of transactional workloads that probably are going to stay there for a long time because there’s not a great alternative. And there’s going to be a set of applications you’re going to want to run in a distributed environment that need to access that data — all the way out to you running a factory floor and you want to make sure that the paint sprayer doesn’t have any defects while it’s painting a door.”

He argues that IBM, at its core, is all about helping enterprises think about how to best run their workloads from a software, hardware and services perspective. “Public clouds are phenomenal, but they are exposing a set of services in a homogeneous way to enterprises,” he noted, arguing that IBM is trying to weave all of these different pieces together.

Later in our discussion, he argued that the large public clouds essentially force enterprises to fit their workloads to those clouds’ service. “The public clouds do extraordinary things and they’re great partners of ours, but their primary business is creating these homogeneous services, at massive volumes, and saying ‘if your workloads fit into this, we can run it better, faster, cheaper etc.’ And they have obviously expanded out. They’ve added services. They are not saying we can put a box on-premise, but you’re still fitting into their model.”

On the news side, IBM is launching new services to automate business planning, budgeting and forecasting, for example, as well as new AI-driven tools for building and running automation apps that can handle routine tasks either autonomously or with the help of a human counterpart. The company is also launching new tools for call-center automation.

The most important AI announcement is surely Watson AIOps, though, which is meant to help enterprises detect, diagnose and respond to IT anomalies in order to reduce the effects of incidents and outages for a company.

On the telco side, IBM is launching new tools like the Edge Application Manager to make it easier to enable AI, analytics and IoT workloads on the edge, powered by IBM’s open-source Open Horizon edge computing project. The company is also launching a new Telco Network Cloud manager built on top of Red Hat OpenShift, plus the ability to leverage the Red Hat OpenStack Platform (which remains an important platform for telcos and represents a growing business for IBM/Red Hat). In addition, IBM is launching a new dedicated IBM Services team for edge computing and telco cloud to help these customers build out their 5G and edge-enabled solutions.

Telcos are also betting big on a lot of different open-source technologies that often form the core of their 5G and edge deployments. Red Hat was already a major player in this space, but the acquisition has only accelerated this, Whitehurst argued. “Since the acquisition […] telcos have a lot more confidence in IBM’s capabilities to serve them long term and be able to serve them in mission-critical context. But importantly, IBM also has the capability to actually make it real now.”

A lot of the new telco edge and hybrid cloud deployments, he also noted, are built on Red Hat technologies but built by IBM, and neither IBM nor Red Hat could have really brought these to fruition in the same way. Red Hat never had the size, breadth and skills to pull off some of these projects, Whitehurst argued.

Whitehurst also argued that part of the Red Hat DNA that he’s bringing to the table now is helping IBM to think more in terms of ecosystems. “The DNA that I think matters a lot that Red Hat brings to the table with IBM — and I think IBM is adopting and we’re running with it — is the importance of ecosystems,” he said. “All of Red Hat’s software is open source. And so really, what you’re bringing to the table is ecosystems.”

It’s maybe no surprise then that the telco initiatives are backed by partners like Cisco, Dell Technologies, Juniper, Intel, Nvidia, Samsung, Packet, Equinix, Hazelcast, Sysdig, Turbonomics, Portworx, Humio, Indra Minsait, EuroTech, Arrow, ADLINK, Acromove, Geniatech, SmartCone, CloudHedge, Altiostar, Metaswitch, F5 Networks and ADVA.

In many ways, Red Hat pioneered the open-source business model and Whitehurst argued that having Red Hat as part of the IBM family means it’s now easier for the company to make the decision to invest even more in open source. “As we accelerate into this hybrid cloud world, we’re going to do our best to leverage open-source technologies to make them real,” he added.

May 04, 2020

Nvidia acquires Cumulus Networks

Nvidia today announced its plans to acquire Cumulus Networks, an open-source-centric company that specializes in helping enterprises optimize their data center networking stack. Cumulus offers both its own Linux distribution for network switches, as well as tools for managing network operations. With Cumulus Express, the company also offers a hardware solution in the form of its own data center switch.

The two companies did not announce the price of the acquisition, but chances are we are talking about a considerable amount, given that Cumulus had raised $134 million since it was founded in 2010.

Mountain View-based Cumulus already had a previous partnership with Mellanox, which Nvidia acquired for $6.9 billion. That acquisition closed only a few days ago. As Mellanox’s Amit Katz notes in today’s announcement, the two companies first met in 2013, and they formed a first official partnership in 2016. Cumulus, it’s worth noting, was also an early player in the OpenStack ecosystem.

Having both Cumulus and Mellanox in its stable will give Nvidia virtually all the tools it needs to help enterprises and cloud providers build out their high-performance computing and AI workloads in their data centers. While you may mostly think about Nvidia because of its graphics cards, the company has a sizable data center group, which delivered close to $1 billion in revenue in the last quarter, up 43% from a year ago. In comparison, Nvidia’s revenue from gaming was just under $1.5 billion.

“With Cumulus, NVIDIA can innovate and optimize across the entire networking stack from chips and systems to software including analytics like Cumulus NetQ, delivering great performance and value to customers,” writes Katz. “This open networking platform is extensible and allows enterprise and cloud-scale data centers full control over their operations.”

Apr 22, 2020

Granulate announces $12M Series A to optimize infrastructure performance

As companies increasingly look to find ways to cut costs, Granulate, an early-stage Israeli startup, has come up with a clever way to optimize infrastructure usage. Today it was rewarded with a tidy $12 million Series A investment.

Insight Partners led the round with participation from TLV Partners and Hetz Ventures. Lonne Jaffe, managing director at Insight Partners, will be joining the Granulate board under the terms of the agreement. Today’s investment brings the total raised to $15.6 million, according to the company.

The startup claims it can cut infrastructure costs, whether on-prem or in the cloud, by between 20% and 80%. That is not insignificant if they can pull it off, especially in the economic maelstrom in which we find ourselves.

Asaf Ezra, co-founder and CEO at Granulate, says the company achieved this efficiency by studying in depth how Linux virtual machines work. Over six months of experimentation, they simply moved the bottleneck around until they learned how to take advantage of the way the Linux kernel operates to gain massive efficiencies.

It turns out that Linux has been optimized for resource fairness, but Granulate’s founders wanted to flip this idea on its head and look for repetitiveness, concentrating on one function instead of fair allocation across many functions, some of which might not really need access at any given moment.

“When it comes to production systems, you have a lot of repetitiveness in the machine, and you basically want it to do one thing really well,” he said.

He points out that it doesn’t even have to be a VM. It could also be a container or a pod in Kubernetes. The important thing to remember is that you no longer care about the interactivity and fairness inherent in Linux; instead, you want the machine to be optimized for certain things.

“You let us know what your utility function for that production system is, then our agents basically optimize all the decision making for that utility function. That means that you don’t even have to do any code changes to gain the benefit,” Ezra explained.

What’s more, the solution uses machine learning to help understand how the different utility functions work to provide greater optimization to improve performance even more over time.

Insight’s Jaffe certainly recognized the potential of such a solution, especially right now.

“The need to have high-performance digital experiences and lower infrastructure costs has never been more important, and Granulate has a highly differentiated offering powered by machine learning that’s not dependent on configuration management or cloud resource purchasing solutions,” Jaffe said in a statement.

Ezra understands that a product like his could be particularly helpful at the moment. “We’re in a unique position. Our offering right now helps organizations survive the downturn by saving costs without firing people,” he said.

The company was founded in 2018 and currently has 20 employees. They plan to double that by the end of 2020.

Apr 06, 2020

Paul Cormier takes over as Red Hat CEO, as Jim Whitehurst moves to IBM

When Ginni Rometty indicated that she was stepping down as IBM CEO at the end of January, the company announced that Arvind Krishna would be taking over, while Red Hat CEO Jim Whitehurst would become president. To fill his role, Red Hat announced today that long-time executive Paul Cormier has been named president and CEO.

Cormier would seem to be a logical choice to run Red Hat, having been with the company since 2001. He joined as its VP of engineering and has seen the company grow from a small startup to a multi-billion dollar company.

Cormier spoke about the historical arc he has witnessed in his years at Red Hat. “Looking back to when I joined, we were in a different position and facing different issues, but the spirit was the same. We were on a mission to convince the world that open source was real, safe and enterprise-grade,” Cormier said in an email to employees about his promotion.

Former CEO Whitehurst certainly sees this as a sensible transition. “After working with him closely for more than a decade, I can confidently say that Paul was the natural choice to lead Red Hat. Having been the driving force behind Red Hat’s product strategy for nearly two decades, he’s been intimately involved in setting the company’s direction and uniquely understands how to help customers and partners make the most out of their cloud strategy,” he said in a statement.

In a Q&A with Cormier on the company website, he talked about the kind of changes he expects to see under his leadership in the next five years of the company. “There’s a term that we use today, ‘applications run the business.’ In five years, I see it becoming the case for the majority of enterprises. And with that, the infrastructure underpinning these applications will be even more critical. Management and security are paramount — and this isn’t just one environment. It’s bare metal and hypervisors to public and private clouds. It’s Linux, VMs, containers, microservices and more,” he said.

When IBM bought Red Hat in 2018 for $34 billion, there was widespread speculation that Whitehurst would eventually take over in an executive position there. Now that that has happened, Cormier will step in to run Red Hat.

While Red Hat is under the IBM umbrella, it continues to operate as a separate company with its own executive structure. And the vision Cormier outlined is in line with how Red Hat will fit within the IBM family as it tries to make its mark on the shifting cloud and enterprise open-source markets.

Mar 11, 2020

AWS launches Bottlerocket, a Linux-based OS for container hosting

AWS has launched its own open-source operating system for running containers on both virtual machines and bare metal hosts. Bottlerocket, as the new OS is called, is basically a stripped-down Linux distribution that’s akin to projects like CoreOS’s now-defunct Container Linux and Google’s container-optimized OS. The OS is currently in its developer preview phase, but you can test it as an Amazon Machine Image for EC2 (and by extension, under Amazon EKS, too).

As AWS chief evangelist Jeff Barr notes in his announcement, Bottlerocket supports Docker images and images that conform to the Open Container Initiative image format, which means it’ll basically run all Linux-based containers you can throw at it.

One feature that makes Bottlerocket stand out is that it does away with a package-based update system. Instead, it uses an image-based model that, as Barr notes, “allows for a rapid & complete rollback if necessary.” The idea here is that this makes updates easier. At the core of this update process is “The Update Framework,” an open-source project hosted by the Cloud Native Computing Foundation.

AWS says it will provide three years of support (after General Availability) for its own builds of Bottlerocket. As of now, the project is very much focused on AWS, of course, but the code is available on GitHub and chances are we will see others expand on AWS’ work.

The company is launching the project in cooperation with a number of partners, including Alcide, Armory, CrowdStrike, Datadog, New Relic, Sysdig, Tigera, Trend Micro and Weaveworks.

“Container-optimized operating systems will give dev teams the additional speed and efficiency to run higher throughput workloads with better security and uptime,” said Michael Gerstenhaber, director of Product Management at Datadog. “We are excited to work with AWS on Bottlerocket, so that as customers take advantage of the increased scale they can continue to monitor these ephemeral environments with confidence.”

 
