Spring Cleaning: Discontinuing RHEL 6/CentOS 6 (glibc 2.12) and 32-bit Binary Builds of Percona Software

Discontinuing RHEL 6/CentOS 6

As you are probably aware, Red Hat Enterprise Linux 6 (RHEL 6, or EL 6 for short) officially reached “End of Life” (EOL) on 2020-11-30 and is now in the so-called Extended Life Phase, which basically means that Red Hat will no longer provide bug fixes or security fixes.

Even though EL 6 and its compatible derivatives like CentOS 6 had reached EOL some time ago already, we continued providing binary builds for selected MySQL-related products for this platform.

However, this became increasingly difficult, as the MySQL code base continued to evolve and now depends on tools and functionality that are no longer provided by the operating system out of the box. This meant we already had to perform several modifications in order to prepare binary builds for this platform, e.g. installing custom compiler versions or newer versions of various system libraries.

In MySQL 8.0.26, Oracle deprecated the TLSv1 and TLSv1.1 connection protocols and announced plans to remove them in a future MySQL version in favor of the more secure TLSv1.2 and TLSv1.3 protocols. TLSv1.3 requires that both the MySQL server and the client application be compiled with OpenSSL 1.1.1 or higher. This version of OpenSSL is no longer available in binary package format on EL 6, and manually rebuilding it turned out to be a “yak shaving exercise” due to the countless dependencies.
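One quick way to see whether a given stack can speak TLSv1.3 is to inspect the OpenSSL library it is linked against. As a small sketch, Python's standard `ssl` module exposes this for its own linked OpenSSL:

```python
import ssl

# Assumption-level check: report the OpenSSL version this Python build is
# linked against, and whether that OpenSSL can speak TLSv1.3
# (which requires OpenSSL 1.1.1 or newer).
print(ssl.OPENSSL_VERSION)
print("TLSv1.3 supported:", ssl.HAS_TLSv1_3)
```

On an EL 6 system with stock OpenSSL, the second line would report `False`, which is exactly the limitation described above.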

Our build & release team was able to update the build environments on all of our supported platforms (EL 7, EL 8, supported Debian and Ubuntu versions) for this new requirement. However, we have not been successful in getting all the required components and their dependencies to build on EL 6, as it would have required rebuilding quite a significant amount of core OS packages and libraries to achieve this.

Moreover, switching to this new OpenSSL version would have also required us to include some additional shared libraries in our packages to satisfy the runtime dependencies, adding more complexity and potential security issues.

In general, we believe that running a production system on an OS that is no longer actively supported by a vendor is not a recommended best practice from a security perspective, and we do not want to encourage such practices.

For these reasons, and to simplify our build/release and QA processes, we decided to drop support for EL 6 for all products now. Percona Server for MySQL 8.0.27 was the last version for which we built EL 6 binaries against the previous version of OpenSSL.

Going forward, the following products will no longer be built and released on this platform:

  • Percona Server for MySQL 5.7 and 8.0
  • Percona XtraDB Cluster 5.7
  • Percona XtraBackup 2.4 and 8.0
  • Percona Toolkit 3.2

This means we will stop both building RPM packages for EL 6 and providing binary tarballs linked against glibc 2.12.

Note that this OS platform was also the last one on which we still provided 32-bit binaries.
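If you are unsure whether a host is affected, a short Python snippet can report both the CPU architecture and the glibc version (a sketch using the standard `platform` module; exact output strings vary by system):

```python
import platform

# "i686" or "i386" indicates a 32-bit x86 system; "x86_64" is 64-bit.
print(platform.machine())

# On glibc-based Linux this reports the libc flavour and version;
# EL 6 ships glibc 2.12, which the discontinued tarballs were linked against.
print(platform.libc_ver())
```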

Most Enterprise Linux distributions stopped providing 32-bit versions of their operating systems quite some time ago. As an example, Red Hat Enterprise Linux 7 (released in June 2014) was the first release to no longer support installing directly on 32-bit Intel/AMD hardware (i686/x86). Back in 2018, we decided that we would no longer offer 32-bit binaries on new platforms or new major releases of our software.

Given today’s database workloads, we also think that 32-bit systems are simply not adequate anymore, and we already stopped building newer versions of our software for this architecture.

The demand for 32-bit downloads has also been declining steadily. A recent analysis of our download statistics revealed that only 2.3% of our total binary downloads were for i386 binaries. Looking at IP addresses, these downloads originated from 0.4% of the total range of addresses.

This change affects the following products:

  • Percona Server for MySQL 5.7
  • Percona XtraDB Cluster 5.7
  • Percona XtraBackup 2.4
  • Percona Toolkit

We’ve updated the Percona Release Lifecycle Overview web page accordingly to reflect this change. Previously released binaries for these platforms and architectures will of course remain accessible from our repositories.

If you’re still running EL 6 or a 32-bit database or OS, we strongly recommend upgrading to a more modern platform. Our Percona Services team would be happy to help you with that!


Linux 5.14 set to boost future enterprise application security

Linux is set for a big release this Sunday August 29, setting the stage for enterprise and cloud applications for months to come. The 5.14 kernel update will include security and performance improvements.

A particular area of interest for both enterprise and cloud users is always security, and to that end, Linux 5.14 will help with several new capabilities. Mike McGrath, vice president of Linux Engineering at Red Hat, told TechCrunch that the kernel update includes a feature known as core scheduling, which is intended to help mitigate processor-level vulnerabilities like Spectre and Meltdown, which first surfaced in 2018. One of the ways Linux users have had to mitigate those vulnerabilities is by disabling hyper-threading on CPUs, thereby taking a performance hit.

“More specifically, the feature helps to split trusted and untrusted tasks so that they don’t share a core, limiting the overall threat surface while keeping cloud-scale performance relatively unchanged,” McGrath explained.

Another area of security innovation in Linux 5.14 is a feature that has been in development for over a year and a half and that will help protect system memory better than before. Attacks against Linux and other operating systems often target memory as a primary attack surface. With the new kernel, there is a capability known as memfd_secret() that enables an application running on a Linux system to create a memory range that is inaccessible to anyone else, including the kernel.

“This means cryptographic keys, sensitive data and other secrets can be stored there to limit exposure to other users or system activities,” McGrath said.

At the heart of the open source Linux operating system that powers much of the cloud and enterprise application delivery is what is known as the Linux kernel. The kernel is the component that provides the core functionality for system operations. 

The Linux 5.14 kernel release has gone through seven release candidates over the last two months and benefits from the contributions of 1,650 different developers. Those who contribute to Linux kernel development include individual contributors as well as large vendors like Intel, AMD, IBM, Oracle and Samsung. One of the largest contributors to any given Linux kernel release is IBM’s Red Hat business unit. IBM acquired Red Hat for $34 billion in a deal that closed in 2019.

“As with pretty much every kernel release, we see some very innovative capabilities in 5.14,” McGrath said.

While Linux 5.14 will be out soon, it often takes time until it is adopted inside of enterprise releases. McGrath said that Linux 5.14 will first appear in Red Hat’s Fedora community Linux distribution and will be a part of the future Red Hat Enterprise Linux 9 release. Gerald Pfeifer, CTO for enterprise Linux vendor SUSE, told TechCrunch that his company’s openSUSE Tumbleweed community release will likely include the Linux 5.14 kernel within ‘days’ of the official release. On the enterprise side, he noted that SUSE Linux Enterprise 15 SP4, due next spring, is scheduled to come with Kernel 5.14. 

The new Linux update follows a major milestone for the open source operating system, as it was 30 years ago this past Wednesday that creator Linus Torvalds first publicly announced the effort. Over that time Linux has gone from being a hobbyist effort to powering the infrastructure of the internet.

McGrath commented that Linux is already the backbone for the modern cloud and Red Hat is also excited about how Linux will be the backbone for edge computing – not just within telecommunications, but broadly across all industries, from manufacturing and healthcare to entertainment and service providers, in the years to come.

The longevity and continued importance of Linux for the next 30 years is assured in Pfeifer’s view.  He noted that over the decades Linux and open source have opened up unprecedented potential for innovation, coupled with openness and independence.

“Will Linux, the kernel, still be the leader in 30 years? I don’t know. Will it be relevant? Absolutely,” he said. “Many of the approaches we have created and developed will still be pillars of technological progress 30 years from now. Of that I am certain.”




First Packages for Debian 11 “bullseye” Now Available


Over the weekend, the Debian project announced the availability of its newest major distribution release, Debian 11 (code name “bullseye”). We’d like to congratulate the Debian project and the open source community on achieving this major milestone! Over two years in the making, it contains an impressive amount of new and updated software for a wide range of applications (check out the release notes for details). The project’s emphasis on providing a stable Linux operating system makes Debian a preferred choice for database workloads.

The packaging, release, and QA teams here at Percona have been working on adding support for Debian 11 to our products for quite some time already.

This week, we’ve released Percona Server for MongoDB 4.4.8 and 5.0.2 (Release Candidate) as well as Percona Backup for MongoDB 1.6.0, including packages for Debian 11 as a new supported OS platform. Please follow the installation instructions in the respective product documentation to install these versions.

We’ve also rebuilt a number of previously released products on Debian 11. At this point, the following products are available for download from our “testing” package repositories:

  • Percona Server for MySQL 5.7 and 8.0
  • Percona XtraDB Cluster 5.7 and 8.0
  • Percona XtraBackup 2.4 and 8.0

As usual, you can use the percona-release tool to enable the testing repository for these products. Please follow the installation instructions to install the tool, then proceed.

As an example, if you’d like to install the latest version of Percona Server for MySQL 8.0 on Debian 11, perform the following steps after completing the installation of the base operating system and installing the percona-release tool:

$ sudo percona-release enable ps-80 testing
$ sudo apt update
$ sudo apt install percona-server-server percona-server-client

Percona Distribution for MongoDB is a freely available MongoDB database alternative, giving you a single solution that combines the best and most important enterprise components from the open source community, designed and tested to work together.



VCs are betting big on Kubernetes: Here are 5 reasons why

I worked at Google for six years. Internally, you have no choice — you must use Kubernetes if you are deploying microservices and containers (it’s actually not called Kubernetes inside of Google; it’s called Borg). But what was once solely an internal project at Google has since been open-sourced and has become one of the most talked about technologies in software development and operations.

For good reason. One person with a laptop can now accomplish what used to take a large team of engineers. At times, Kubernetes can feel like a superpower, but with all of the benefits of scalability and agility comes immense complexity. The truth is, very few software developers truly understand how Kubernetes works under the hood.

I like to use the analogy of a watch. From the user’s perspective, it’s very straightforward until it breaks. To actually fix a broken watch requires expertise most people simply do not have — and I promise you, Kubernetes is much more complex than your watch.

How are most teams solving this problem? The truth is, many of them aren’t. They often adopt Kubernetes as part of their digital transformation only to find out it’s much more complex than they expected. Then they have to hire more engineers and experts to manage it, which in a way defeats its purpose.

Where you see containers, you see Kubernetes to help with orchestration. According to Datadog’s most recent report about container adoption, nearly 90% of all containers are orchestrated.

All of this means there is a great opportunity for DevOps startups to come in and address the different pain points within the Kubernetes ecosystem. This technology isn’t going anywhere, so any platform or tooling that helps make it more secure, simple to use and easy to troubleshoot will be well appreciated by the software development community.

In that sense, there’s never been a better time for VCs to invest in this ecosystem. It’s my belief that Kubernetes is becoming the new Linux: 96.4% of the top million web servers’ operating systems are Linux. Similarly, Kubernetes is trending to become the de facto operating system for modern, cloud-native applications. It is already the most popular open-source project within the Cloud Native Computing Foundation (CNCF), with 91% of respondents using it — a steady increase from 78% in 2019 and 58% in 2018.

While the technology is proven and adoption is skyrocketing, there are still some fundamental challenges that will undoubtedly be solved by third-party solutions. Let’s go deeper and look at five reasons why we’ll see a surge of startups in this space.


Containers are the go-to method for building modern apps

Docker revolutionized how developers build and ship applications. Container technology has made it easier to move applications and workloads between clouds. It also provides as much resource isolation as a traditional hypervisor, but with considerable opportunities to improve agility, efficiency and speed.


Installing Percona Server for MySQL on Rocky Linux 8


With the CentOS project switching its focus to CentOS Stream, one of the alternatives that aims to function as a downstream build (building and releasing packages after they’re released by Red Hat) is Rocky Linux. This how-to shows how to install Percona Server for MySQL 8.0 on the Rocky Linux distribution.

You can get the information on the distribution release version by checking the /etc/redhat-release file:

[root@rocky ~]# cat /etc/redhat-release
Rocky Linux release 8.4 (Green Obsidian)

Installing and Setting up the Percona Server for MySQL 8.0 Repository

Downloading and Installing the percona-release repository package for Red Hat Linux and derivatives:

[root@rocky ~]# yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm

This should result in:


Verifying : percona-release-1.0-26.noarch                          1/1



Once the repository package is installed, you should set up the Percona Server for MySQL 8.0 repository by running:

[root@rocky ~]# percona-release setup ps80

Please note that you’ll be prompted to disable the mysql module to install Percona Server packages:

* Disabling all Percona Repositories
On RedHat 8 systems it is needed to disable dnf mysql module to install Percona-Server
Do you want to disable it? [y/N] y
Disabling dnf module...
Percona Release release/noarch YUM repository    6.3 kB/s | 1.6 kB     00:00
Dependencies resolved.
Package        Architecture       Version      Repository           Size
Disabling modules:
Transaction Summary
dnf mysql module was disabled
* Enabling the Percona Server 8.0 repository
* Enabling the Percona Tools repository
<*> All done!

Installing and Setting up the Percona Server for MySQL 8.0 Binaries

This part is also covered in the Percona Server for MySQL documentation.

1. Installing the latest Percona Server 8.0 binaries:

[root@rocky ~]# yum -y install percona-server-server

This will also install all the required dependencies.




2. After installation is done, you can start the mysqld service:

[root@rocky ~]# systemctl start mysqld

3. Once the service is running you can check the status by running:

[root@rocky ~]# systemctl status mysqld

You should get similar output to:

● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-06-28 10:23:22 UTC; 6s ago
     Docs: man:mysqld(8)
  Process: 37616 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 37698 (mysqld)
   Status: "Server is operational"
    Tasks: 39 (limit: 23393)
   Memory: 450.7M
   CGroup: /system.slice/mysqld.service
           └─37698 /usr/sbin/mysqld

Jun 28 10:23:12 rocky systemd[1]: Starting MySQL Server...
Jun 28 10:23:22 rocky systemd[1]: Started MySQL Server

From this process, we can see that the installation on Rocky Linux is the same as installing Percona Server for MySQL on CentOS/Red Hat.


Understanding Processes Running on Linux Host with Percona Monitoring and Management


A few years ago, I wrote about how to add information about processes to your Percona Monitoring and Management (PMM) instance as well as some helpful ways you can use this data.

Since that time, PMM has released a new major version (PMM v2) and the Process Exporter went through many changes, so it’s time to provide some updated instructions.

Why Bother?

Why do you need per-process data for your database hosts to begin with? I find this data very helpful, as it allows us to validate how much activity and load is caused by the database process rather than something else. This “something else” may range from a backup process that takes too much CPU, to a usually benign system process that went crazy today, to a crypto miner that was “helpfully” installed on your system. Simply assuming that all the load you observe on the system comes from the database process may be correct in most cases, but it can also lead you astray, so you need to be able to verify it.


Process-monitoring awesomeness installation consists of two parts. You install an exporter on every node on which you want to monitor process information, and then you install a dashboard onto your PMM server to visualize this data. External Exporter support was added in PMM 2.15, so you will need at least that version for these commands to work.

Installing The Exporter

The commands below download and install the Prometheus Process Exporter and configure PMM to consume the data it generates (the download URL assumes the upstream ncabatoff/process-exporter GitHub release):

wget https://github.com/ncabatoff/process-exporter/releases/download/v0.7.5/process-exporter_0.7.5_linux_amd64.deb
dpkg -i process-exporter_0.7.5_linux_amd64.deb
service process-exporter start
pmm-admin add external --group=processes --listen-port=9256

Note: Different versions of Process Exporter may also work, but this particular version is what I tested with.
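Once running, the exporter serves plain-text Prometheus metrics on port 9256. As an illustration of what that data looks like, here is a small parsing sketch; the metric names follow process-exporter's namedprocess_namegroup_* naming convention, but the sample values are invented:

```python
import re

# Invented sample lines in the exporter's output style
sample = """\
namedprocess_namegroup_cpu_seconds_total{groupname="mysqld",mode="user"} 12.5
namedprocess_namegroup_num_procs{groupname="mysqld"} 1
"""

def parse(text):
    """Parse 'name{labels} value' lines into a {(name, labels): value} dict."""
    metrics = {}
    for line in text.splitlines():
        m = re.match(r'(\w+)\{([^}]*)\}\s+([0-9.]+)$', line)
        if m:
            name, labels, value = m.groups()
            metrics[(name, labels)] = float(value)
    return metrics

for key, value in parse(sample).items():
    print(key, value)
```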

Installing the Dashboard

The easiest way to install a dashboard is from the Dashboard Library. In your Percona Monitoring and Management install, click the “+” sign in the toolbar on the left side and select “Import”.


percona monitoring and management grafana dashboard


Enter Dashboard ID 14239 and you are good to go.

If you’re looking for ways to automate this import process as you are provisioning PMM automatically, you can do that too. Just follow the instructions in the Automate PMM Dashboard Importing blog post.

Understanding Processes Running on your Linux Machine

Let’s now move to the most fun part, looking at the available dashboards and what they can tell us about the running system and how they can help with diagnostics and troubleshooting. In the new dashboard, which I updated from an older PMMv1 version, I decided to add relevant whole-system metrics which can help us to put the process metrics in proper context.


node processes percona monitoring and management


The CPU-focused row shows us how system CPU is used overall and to what extent the system or some CPU cores are overloaded, as well as top consumers of the “User” and “System” CPU Modes.

Note, because of additional MetricsQL functionality provided by VictoriaMetrics, we can show [other] as the total resource usage by processes that did not make it to the top.

How do you use this data?  Check if the processes using CPU resources are those which you would expect or if there are any processes that you did not expect to see taking as much CPU as they actually do.


pmm memory


Memory Utilization does the same, but for memory. There are a number of different memory metrics which can be a bit intimidating.

Resident Memory is the memory a process (or, technically, a group of processes) occupies in physical RAM. “Proportional” refers to how this consumption is counted: a single page in RAM is sometimes shared by multiple processes, and Proportional means its size is divided among all the processes sharing it rather than counted fully against each one. This avoids double counting, so you should not see the total Resident memory of your processes well in excess of the physical memory you have.
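As a toy illustration of that proportional rule (not the kernel's actual accounting code), a page shared by N processes contributes 1/N of its size to each sharer, so per-process totals add up to real physical usage exactly once:

```python
# Toy model of proportional (PSS-style) accounting: a page shared by N
# processes contributes size/N to each sharer, so totals never double-count.
def proportional_usage(pages):
    """pages: list of (page_size_kb, [pids sharing that page])"""
    usage = {}
    for size_kb, sharers in pages:
        share = size_kb / len(sharers)
        for pid in sharers:
            usage[pid] = usage.get(pid, 0.0) + share
    return usage

pages = [
    (4, [101]),        # private page of process 101
    (4, [101, 102]),   # page shared by processes 101 and 102
]
print(proportional_usage(pages))  # {101: 6.0, 102: 2.0}
```

The two per-process totals sum to 8 KB, the real amount of physical memory in use.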


Used Memory means the space a process consumes in RAM plus the space it consumes in swap. Note that this metric is different from Virtual Memory, which also includes virtual address space that was assigned to the process but never actually allocated.

I find these two metrics the most practical for understanding how physical and virtual memory is actually used by the processes on the system.


resident and used memory


Virtual Memory is the virtual address space that was allocated to a process. In some cases it will be close to the memory used, as with the mysqld process; in other cases it may be very different. For example, the dockerd process running on this system takes 5GB of virtual memory but less than 70MB of actual memory.

Swapped Memory shows us which processes are swapped out and by how much. I would pay special attention to this graph, because if the Swap Activity panel shows serious IO going on, system performance might be significantly impacted. If unused processes, or even some unused portions of processes, are swapped out, it is not a problem. However, if you have half of MySQL’s buffer pool swapped out and heavy Swap IO going… you have work to do.


Process Disk IO Usage


Process Disk IO Usage shows IO bandwidth and latency for the system overall, as well as the bandwidth used for reads and writes by different processes. If there are any unexpected consumers of disk IO bandwidth, you will easily spot them using this dashboard.


processes Context Switches


Context Switches provide more details on what kind of context switches are happening in the system and what processes they correspond to.

A high number of Voluntary Context Switches (hundreds of thousands and millions per second) may indicate heavy contention, or it may just correspond to a high number of requests being served by the process, as in many architectures starting/stopping request handling requires a context switch.

A high number of Non-Voluntary Context Switches, on the other hand, can correspond to not having enough CPU available with processes moved off CPU by the scheduler when they have exceeded their allotted time slice, or for other reasons.


global file descriptors


File Descriptors show us the global limit on file descriptors in the operating system, as well as the limits for individual processes. Running out of file descriptors for the whole system is really bad, as many things will start failing at random, although on modern, powerful systems the limit is so high that you rarely hit this problem.

The per-process limit on open files still applies, though, so it is very helpful to see which processes use a lot of file descriptors and how that number compares to the total number allowed for the process. In our case, we can see that no process ever allocated more than 7% of its allowed file descriptors, which is quite healthy.
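For a single process, you can inspect the same numbers the dashboard compares. This sketch uses Python's standard `resource` module and the Linux-specific /proc filesystem:

```python
import os
import resource

# Per-process open-file limit (soft/hard) and current descriptor count,
# i.e. the numbers behind the dashboard's per-process comparison.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
n_open = len(os.listdir("/proc/self/fd"))  # Linux-specific /proc view
print("limit:", soft, hard)
if soft != resource.RLIM_INFINITY:
    print(f"open: {n_open} ({n_open / soft:.1%} of the soft limit)")
```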


Major and Minor page faults


This graph shows Major and Minor page faults for given processes.

Major page faults are relatively expensive, typically causing disk IO when they happen.

Minor page faults are less expensive, corresponding to accessing pages that are not mapped into the given process’s address space but are otherwise already in memory. They still require a switch to kernel mode and some kernel housekeeping.
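You can read the same counters for a single process with Python's standard `resource` module, a quick way to relate the graph to one workload:

```python
import resource

# Cumulative fault counters for the current process: ru_minflt counts page
# faults served without disk IO, ru_majflt those that required disk reads.
u = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults:", u.ru_minflt, "major faults:", u.ru_majflt)
```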

See more details on Minor/Major page faults and general Linux Memory Management here.


Processes in Linux


Processes in Linux cycle through different statuses; for us, the most important ones to consider are the “Active” statuses, which are either “Running” or “Waiting on Disk IO”. These can roughly be seen as using CPU and disk IO resources.

In this section, we can see an overview of the number of running and waiting processes in the system (basically the same things the “r” and “b” columns in vmstat show), as well as more detailed stats showing which processes in particular were running… or waiting on disk IO.


process kernel waits


While we can see what is going on with active processes by looking at their statuses, this shows us what is going on with sleeping processes; in particular, which kernel functions they are sleeping in. We can see the data grouped by the name of the function in which the wait happens, or by function and process name pairs.

If you want to focus on what types of kernel functions a given process is waiting on, you can select it in the dashboard dropdown to filter data just by this process. For example, selecting “mysqld”, I see:


kernel wait details


Finally, we have the panel which shows the processes based on their uptime.


processes uptime


This can be helpful to spot whether any processes were started recently. Frankly, I do not find this panel the most useful, but as Process Exporter captures this data, why not?


Process Exporter provides great insights on running processes, in addition to what basic PMM installation provides.  Please check it out and let us know how helpful it is in your environment.  Should we consider enabling it by default in Percona Monitoring and Management?


Platform End of Support Announcement for Ubuntu 16.04 LTS


The End of Support date for Ubuntu 16.04 LTS is coming soon; according to the Ubuntu Release Life Cycle, it will be at the end of April 2021. With this announcement come some implications for support of Percona software running on this operating system.

As a result, we will no longer be producing new packages and binary builds for Ubuntu 16.04.

We generally align our platform end-of-life/support dates with those of the upstream platform vendor. These dates are published in advance on the Percona Software support life cycle page on our website.

According to our policies, Percona will continue to provide operational support for your databases on Ubuntu 16.04. However, we will be unable to provide any bug fixes, builds, or OS-level assistance if you encounter an issue outside the database itself.

Each platform vendor has a supported migration or upgrade path to their next major release. Please reach out to us if you need assistance in migrating your database to your vendor’s supported platform – Percona will be happy to assist you.


Esri brings its flagship ArcGIS platform to Kubernetes

Esri, the geographic information system (GIS), mapping and spatial analytics company, is hosting its (virtual) developer summit today. Unsurprisingly, it is making a couple of major announcements at the event that range from a new design system and improved JavaScript APIs to support for running ArcGIS Enterprise in containers on Kubernetes.

The Kubernetes project was a major undertaking for the company, Esri Product Managers Trevor Seaton and Philip Heede told me. Traditionally, like so many similar products, ArcGIS was architected to be installed on physical boxes, virtual machines or cloud-hosted VMs. And while it doesn’t really matter to end-users where the software runs, containerizing the application means that it is far easier for businesses to scale their systems up or down as needed.


Esri ArcGIS Enterprise on Kubernetes deployment. Image Credits: Esri

“We have a lot of customers — especially some of the larger customers — that run very complex questions,” Seaton explained. “And sometimes it’s unpredictable. They might be responding to seasonal events or business events or economic events, and they need to understand not only what’s going on in the world, but also respond to their many users from outside the organization coming in and asking questions of the systems that they put in place using ArcGIS. And that unpredictable demand is one of the key benefits of Kubernetes.”


Deploying Esri ArcGIS Enterprise on Kubernetes. Image Credits: Esri

The team could have chosen to go the easy route and put a wrapper around its existing tools to containerize them and call it a day, but as Seaton noted, Esri used this opportunity to re-architect its tools and break them down into microservices.

“It’s taken us a while because we took three or four big applications that together make up [ArcGIS] Enterprise,” he said. “And we broke those apart into a much larger set of microservices. That allows us to containerize specific services and add a lot of high availability and resilience to the system without adding a lot of complexity for the administrators — in fact, we’re reducing the complexity as we do that and all of that gets installed in one single deployment script.”

While Kubernetes simplifies a lot of the management experience, a lot of companies that use ArcGIS aren’t yet familiar with it. And as Seaton and Heede noted, the company isn’t forcing anyone onto this platform. It will continue to support Windows and Linux just like before. Heede also stressed that it’s still unusual — especially in this industry — to see a complex, fully integrated system like ArcGIS being delivered in the form of microservices and multiple containers that its customers then run on their own infrastructure.


In addition to the Kubernetes announcement, Esri also today announced new JavaScript APIs that make it easier for developers to create applications that bring together Esri’s server-side technology and the scalability of doing much of the analysis on the client-side. Back in the day, Esri would support tools like Microsoft’s Silverlight and Adobe/Apache Flex for building rich web-based applications. “Now, we’re really focusing on a single web development technology and the toolset around that,” Esri product manager Julie Powell told me.

A bit later this month, Esri also plans to launch its new design system to make it easier and faster for developers to create clean and consistent user interfaces. This design system will launch April 22, but the company already provided a bit of a teaser today. As Powell noted, the challenge for Esri is that its design system has to help the company’s partners put their own style and branding on top of the maps and data they get from the ArcGIS ecosystem.



Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the “cloud financial management” space to establish best practices and standards. As the term implies, “cloud financial management” is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that there are a number of successful startups that do nothing else but help businesses optimize their cloud spend (and ideally lower it).

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, vice president of Engineering and Product at Google Cloud. “More visibility, efficiency and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open-source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, executive director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the second of three dedicated Premier Member Technical Advisory Council seats.”


Slapdash raises $3.7M seed to ship a workplace apps command bar

The explosion in productivity software amid a broader remote work boom has been one of the pandemic’s clearest tech impacts. But learning to use a dozen new programs while having to decipher which data is hosted where can sometimes have an adverse effect on worker productivity. All that time adds up, even for common tasks: navigating to the calendar to view more info, clicking a link that opens the browser, then getting redirected to the native app just to join a Zoom call.

Slapdash is aiming to carve out a new niche for itself among workplace software tools, pushing a desire for peak performance to the forefront with a product that shaves seconds off each instance where a user needs to find data hosted in a cloud app or carry out an action. While most of the integration-heavy software suites to emerge during the remote work boom have focused on promoting visibility or re-skinning workflows across the tangled weave of SaaS apps, Slapdash founder Ivan Kanevski hopes that the company’s efforts to engineer a quicker path to information will push tech workers to integrate another tool into their workflow.

The team tells TechCrunch that they’ve raised $3.7 million in seed funding from investors that include S28 Capital, Quiet Capital, Quarry Ventures and Twenty Two Ventures. Angels participating in the round include co-founders at companies like Patreon, Docker and Zynga.

Image Credits: Slapdash

Kanevski says the team sought to emulate the success of popular apps like Superhuman, which have pushed low-latency command line interface navigation while emulating some of the sleek internal tools used at companies like Facebook, where he spent nearly six years as a software engineer.

Once installed, Slapdash’s command line widget can be pulled up anywhere with a quick keyboard shortcut. From there, users can search through a laundry list of indexable apps, including Slack, Zoom, Jira and about 20 others. Beyond command line access, users can create folders of files and actions inside the full desktop app, or create their own keyboard shortcuts to quickly hammer out a task. The app is available on Mac, Windows, Linux and the web.
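The core interaction described above — type a few characters, instantly filter items indexed from many apps — is the classic command-palette pattern. As a rough illustration (this is a hypothetical sketch, not Slapdash’s actual implementation; all names and the scoring approach are assumptions), such a search can be as simple as a case-insensitive subsequence match over indexed titles:

```typescript
// Hypothetical sketch of command-bar filtering over items indexed
// from multiple apps. Names (IndexedItem, fuzzyMatch, search) are
// illustrative assumptions, not Slapdash's real API.

interface IndexedItem {
  title: string; // e.g. a Slack channel, Jira ticket, or Zoom meeting
  app: string;   // source application the item was indexed from
}

// True if every character of `query` appears in `text` in order
// (case-insensitive subsequence match, as many palettes use).
function fuzzyMatch(query: string, text: string): boolean {
  const q = query.toLowerCase();
  const t = text.toLowerCase();
  let i = 0;
  for (const ch of t) {
    if (i < q.length && ch === q[i]) i++;
  }
  return i === q.length;
}

function search(items: IndexedItem[], query: string): IndexedItem[] {
  return items.filter((it) => fuzzyMatch(query, it.title));
}

const items: IndexedItem[] = [
  { title: "Weekly standup", app: "zoom" },
  { title: "PROJ-42: fix login bug", app: "jira" },
  { title: "design-reviews", app: "slack" },
];

console.log(search(items, "stand").map((it) => it.app)); // ["zoom"]
```

Real implementations layer ranking (recency, frequency of use) on top of matching, but the low-latency feel comes from keeping the whole index in memory on the client, which is what makes the “shaves seconds off” pitch plausible.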

“We’re not trying to displace the applications that you connect to Slapdash,” he says. “You won’t see us, for example, building document editing, you won’t see us building project management, just because our sort of philosophy is that we’re a neutral platform.”

The company offers a free tier for users indexing up to five apps and creating 10 commands and spaces; any more than that and you level up into a $12 per month paid plan. Enterprise-wide pricing is more customized. As the team hopes to make the tool essential to startups, Kanevski sees the app’s hefty utility for individual users as a clear asset in scaling up.

“If you anticipate rolling this out to larger organizations, you would want the people that are using the software to have a blast with it,” he says. “We have quite a lot of confidence that even at this sort of individual atomic level, we built something pretty joyful and helpful.”
