DBaaS and the Enterprise

Install a database server. Give the application team an endpoint. Set up backups and monitor in perpetuity. This is a pattern I hear about regularly from DBAs at most of my enterprise clients. Rarely do they get to troubleshoot or work with application teams to tune queries or design schemas. This is what triggers database administrators’ interest in a DBaaS platform.

What is DBaaS?

DBaaS stands for “Database as a Service”. When this acronym is thrown out, the first thought is generally a cloud offering such as RDS. While this is a very commonly used service, a DBaaS is really just a managed database platform that offloads much of the operational burden from the DBA team. Tasks handled by the platform include:

  • Installing the database software
  • Configuring the database
  • Setting up backups
  • Managing upgrades
  • Handling failover scenarios

A common misconception is that a DBaaS is limited to the public cloud. As many enterprises already have large data centers and heavy investments in hardware, an on-premise DBaaS can also be quite appealing. Keeping the database in-house is often favored when the hardware and resources are already available. In addition, there are extra compliance and security concerns when looking at a public cloud offering.

DBaaS also represents a difference in mindset. In conventional deployments, systems and architecture are often designed in exotic ways, making automation a challenge. With a DBaaS, automation, standardization, and best practices are the priority. While this can be seen as limiting flexibility, it can also lead to larger and more robust infrastructures that are much easier to manage and maintain.

Why is DBaaS Appealing?

From a DBA perspective (and being a former DBA myself), I always enjoyed working on more challenging issues. Mundane operations like launching servers and setting up backups make for a less-than-exciting daily work experience. When managing large fleets, these operations make up the majority of the work.

As applications grow more complex and data sets grow rapidly, it is much more interesting to work with the application teams to design and optimize the data tier. Query tuning, schema design, and workflow analysis are much more interesting (and often more beneficial) than basic setup work. DBAs are often skilled at quickly identifying bottlenecks and spotting design flaws before they become problems.

When an enterprise adopts a DBaaS model, this can free up the DBAs to work on more complex problems. They are also able to better engage and understand the applications they are supporting. A common comment I get when discussing complex tickets with clients is: “well, I have no idea what the application is doing, but we have an issue with XYZ”. If this could be replaced with a detailed understanding from the design phase to the production deployment, these discussions would be very different.

From an application development perspective, a DBaaS is appealing because new servers can be launched much faster. Ideally, with development and production deployment options, an application team can have the resources they need ready in minutes rather than days. This greatly speeds up the development life cycle and makes developers much more self-reliant.
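To make the self-service idea concrete, here is a minimal sketch of the kind of request validation a DBaaS control plane might perform before provisioning. All field names, sizes, and policies below are invented for illustration and are not taken from any specific product:

```python
# Hypothetical self-service catalog: every name and value here is illustrative.
ALLOWED_ENGINES = {"mysql", "postgresql", "mongodb"}
ALLOWED_SIZES = {"small": 2, "medium": 8, "large": 32}  # RAM in GB per size

def validate_request(spec: dict) -> dict:
    """Check a developer's database request against the platform catalog."""
    engine = spec.get("engine")
    size = spec.get("size")
    if engine not in ALLOWED_ENGINES:
        raise ValueError(f"unsupported engine: {engine!r}")
    if size not in ALLOWED_SIZES:
        raise ValueError(f"unsupported size: {size!r}")
    # Standardization in action: the platform, not the requester, decides
    # the version, backup policy, and topology.
    return {**spec, "ram_gb": ALLOWED_SIZES[size], "backups": "daily"}

approved = validate_request({"engine": "mysql", "size": "small", "env": "dev"})
print(approved)  # the enriched spec the platform would actually provision
```

The point of the sketch is the shape of the workflow: the developer supplies a tiny, constrained spec, and everything operational is filled in by the platform.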

DBaaS Options

While this isn’t an exhaustive list, the main options when looking to move to a DBaaS are:

  • Public cloud
    • Amazon RDS, Microsoft Azure SQL, etc.
  • Private/internal cloud
    • Kubernetes (Percona DBaaS), VMware, etc.
  • Custom provisioning/operations on bare-metal

Looking at public cloud options for a DBaaS, security and compliance are generally the first concern. While they are incredibly easy to launch and generally offer some pay-as-you-go options, managing access is a major consideration.

Large enterprises with existing hardware investments often want to explore a private DBaaS. I’ve seen clients work to create their own tooling within their existing infrastructure. While this is a viable option, it can be very time-consuming and require many development cycles. Another alternative is to use an existing DBaaS solution. For example, Percona currently has a DBaaS deployment as part of Percona Monitoring and Management in technical preview. The Percona DBaaS automates PXC deployments and management tasks on Kubernetes through a user-friendly UI.

Finally, a custom deployment is just what it sounds like. I have some clients that manage fleets (1000s of servers) of bare metal servers with heavy automation and custom scripting. To the end-user, it can look just like a normal DBaaS (an endpoint with all operations hidden). On the backend, the DBA team spends significant time just supporting the infrastructure.

How Can Percona Help?

Percona works to meet your business where you are. Whether that means supporting Percona Server for MySQL on bare metal or a fleet of RDS instances, we can help. If your organization is leveraging Kubernetes for the data tier, the Percona Private DBaaS is a great option to standardize and simplify your deployments while following best practices. We can help from the design phase through the entire life cycle. Let us know how we can help!


In Application and Database Design, Small Things Can Have a Big Impact

With modern application design, systems are becoming more diverse and varied, with more components than ever before. Developers are often forced to become master chefs, adding ingredients from dozens of different technologies and blending them together to create something tasty and amazing. But with so many different ingredients, it is often difficult to understand how they interact with each other. The more diverse the application, the more likely it is that some seemingly insignificant combination of technology may cause cascading effects.

Many people I talk to have hundreds if not thousands of different libraries, APIs, components, and services making up the systems they support. In this type of environment, it is very difficult to know what small thing could add up to something much bigger. Look at some of the more recent big cloud or application outages; they often have their root cause in something rather small.

Street Fight: Python 3.10 -vs- 3.9.7

Let me give you an example of something I ran across recently. I noticed that two nodes running the same hardware and application code were performing drastically differently: the app server was running close to 100% CPU on one server while the other was around 40%. Each had the same workload, the same database, etc. It turned out that one server was using Python 3.10.0 and the other was running 3.9.7. Combined with MySQL Connector/Python, this led to almost a 50% reduction in database throughput (the 3.10.0 release saw a regression). This performance change was not seen in my PostgreSQL testing or when testing the mysqlclient connector; it happened only when running the pure Python version of the MySQL connector. See the results below:

Python 3.10 -vs- 3.9.7

Note: This workload was using a warmed buffer pool, with the workload running for over 24 hours before the test. I cycled the application server when changing either the version of Python or the MySQL library. These tests are repeatable regardless of the length of the run. All data fits into memory here. I am not trying to make an authoritative statement on a specific technology, merely pointing out the complexity of layers of technology.

Looking at this purely from the user perspective, this particular benchmark simulates certain types of users. Let’s look at the number of users who could complete their actions per second. I will also add another pure Python MySQL driver to the mix.

Note: There appears to be a significant regression in 3.10. The application server was significantly busier when using Python 3.10 and one of the pure python drivers than when running the same test in 3.9 or earlier.

The main difference between the MySQL Connector and mysqlclient is that mysqlclient uses the C libmysqlclient library. Oddly, the official MySQL Connector documentation says it should switch between the pure Python and C versions if the latter is available, but I was not seeing that behavior (so I have to look into it). This resulted in the page load time for the app rising from 0.03 seconds in Python 3.9.7 to 0.05 seconds in Python 3.10.0. However, the key thing I wanted to highlight is that sometimes seemingly small or insignificant changes can lead to a drastic difference in performance and stability. You could be running along fine for months or even years before something upgrades to something that you would not think would drastically impact performance.
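If you want to spot this kind of driver-level difference yourself, a tiny timing harness is enough. The snippet below is a stdlib-only sketch where the two lambdas stand in for the same query issued through two different drivers (or Python versions); note that with MySQL Connector/Python specifically, the documented `use_pure` connection argument controls whether the pure Python or the C implementation is used:

```python
import timeit

def compare_drivers(run_query_a, run_query_b, iterations=1000):
    """Time two query callables and report the relative difference.
    In a real test, each callable would execute the same query through
    a different driver (or under a different Python build)."""
    t_a = timeit.timeit(run_query_a, number=iterations)
    t_b = timeit.timeit(run_query_b, number=iterations)
    return {"a_sec": t_a, "b_sec": t_b, "b_vs_a": t_b / t_a}

# Stand-in workloads: the second does roughly twice the work per call,
# mimicking a driver whose per-query overhead has regressed.
fast = lambda: sum(range(200))
slow = lambda: sum(range(400))
result = compare_drivers(fast, slow)
print(f"b took {result['b_vs_a']:.1f}x the time of a")
```

Swapping the lambdas for real connection-plus-query callables gives a quick, repeatable A/B check before rolling out an interpreter or driver upgrade.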

Can This Be Fixed With Better Testing?

You may think this is a poster child for testing before upgrading components, and while testing is a requirement, it won’t necessarily prevent this type of issue. Technology combinations, upgrades, and releases can have odd side effects, and the issues they introduce often remain hidden until some outside influence pops up. Note: these generally happen at the worst time possible, like during a significant marketing campaign or event. The most common way I see nasty bugs get exposed is not through a release or in testing, but through a change in workload. Workload changes can hide or raise bottlenecks at the worst times. Let’s take a look at the above combination of Python version and different connectors under a different workload:

Here the combination of reporting and read/write workload pushes all the app nodes and database nodes to the redline. The configurations look fairly similar in terms of performance, but the workload is hiding the issues I mentioned above. A system pushed to the red will behave differently than it will in the real world. If you tested upgrading Python on your app servers to 3.10.0 by pushing your systems to the max, you might see the above small regression as within acceptable limits. In reality, however, the upgrade could net you a 50% decrease in throughput when moved to production.
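A crude model shows why a redlined test can mask a tier-specific regression: end-to-end throughput is capped by the slowest tier, so if the test workload saturates the database, an app-tier regression disappears from the numbers. All figures below are illustrative, not measurements:

```python
def end_to_end_tps(app_capacity, db_capacity, offered_load):
    """Throughput is capped by the slowest tier (deliberately simplistic)."""
    return min(app_capacity, db_capacity, offered_load)

# Mixed reporting + read/write test: the database tier is redlined at 400 tps,
# so a 50% app-tier regression (1000 -> 500 tps) is invisible in the results.
print(end_to_end_tps(app_capacity=1000, db_capacity=400, offered_load=5000))  # 400
print(end_to_end_tps(app_capacity=500, db_capacity=400, offered_load=5000))   # 400

# Production OLTP workload: the database now has headroom (2000 tps), so the
# app tier becomes the bottleneck and the regression shows up in full.
print(end_to_end_tps(app_capacity=1000, db_capacity=2000, offered_load=5000))  # 1000
print(end_to_end_tps(app_capacity=500, db_capacity=2000, offered_load=5000))   # 500
```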

Do We Care?

Depending on how your application is built, many people won’t notice the above-mentioned performance regression after the upgrade happens. First, most people do not run their servers anywhere close to 100% load, so adding more load to the boxes may not immediately impact their users’ performance. Adding 0.02 seconds of load time per user may be imperceptible unless the system is under heavy load (which would increase that load time further). The practical impact is that you reach the point where you need to add more nodes, or upgrade your instances, sooner.

Second, scaling application nodes automatically is almost a requirement in most modern cloud-native environments. Reaching a point where you need to add more nodes and more processing power will come with increases in users on your application, so it is easily explained away.

Suppose users won’t immediately notice and the system will automatically expand as needed (preventing you from knowing or getting involved). In that case, do we, or should we, care about adding more nodes or servers? Adding nodes may be cheap, but it is not free.

First, there is a direct cost to you. Take your hosting costs for your application servers and double them in the case above. What is that? $10K, $100K, $1M a year? That is money that is wasted. Look no further than the recent news lamenting the ever-increasing costs of the cloud.
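The arithmetic behind that doubling is simple; here is a small illustrative calculation (node throughput and cost figures are assumptions, not data from the benchmark):

```python
import math

def nodes_needed(peak_tps, per_node_tps):
    """How many app nodes it takes to serve a given peak load."""
    return math.ceil(peak_tps / per_node_tps)

PEAK_LOAD_TPS = 9_000        # assumed peak application load
NODE_COST_PER_YEAR = 5_000   # assumed fully loaded cost per node, USD

before = nodes_needed(PEAK_LOAD_TPS, per_node_tps=1_000)  # healthy build
after = nodes_needed(PEAK_LOAD_TPS, per_node_tps=500)     # 50% regression
extra_cost = (after - before) * NODE_COST_PER_YEAR
print(f"{before} -> {after} nodes, ${extra_cost:,} per year wasted")
```

Halving per-node throughput doubles the fleet, and the extra spend scales linearly with however large your hosting bill already is.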

Second, there is a bigger cost that comes with complexity. Observability is such a huge topic because everything we do in modern environments is done en masse. The more nodes and servers you have, the more potential exists for issues or for one node behaving badly. While our goal is to create a system where everything is replaceable and can be torn down and rebuilt to overcome problems, this is often not the reality. Instead, we end up replicating bad code and underlying bottlenecks, making a single problem a problem at 100x the scale.

We need to care. Application workload is a living entity that grows, shrinks, and expands with users. While modern systems need to scale quickly up and down to meet demand, that should not preclude us from looking out for hidden issues and bottlenecks. It is vitally important that we understand our applications’ workloads and look for deviations from normal patterns. We need to ask why something changed and dig in to find the answer. Just because we can build automation to mitigate a problem does not mean we should get complacent and lazy about fixing and optimizing our systems.


2021 Digest: The HOSS Talks FOSS Most Popular Episodes

2022 is coming! While we look forward to it, it is time to summarize the results of this year and find out which content was the most viewed and raised the most interest during the past months.

Percona’s HOSS (Head of Open Source Strategy) Matt Yonkovit hosts an amazing podcast with tech talks on all things open source. He invites all kinds of people (CEOs, engineers, DBAs, community leaders) to chat about their experiences.

Here is an overview of the top 10 most popular episodes of the HOSS Talks FOSS podcast, the ones that attracted the most viewer attention. Look through them to find something to entertain yourself on the long winter evenings!

  1. The absolute hit is the episode with Peter Zaitsev talking about open source and the SSPL (Server Side Public License). Peter and Matt discussed whether the SSPL is good or bad and its possible impact on open source.
  2. Second place goes to the episode with Karthik Ranganathan, CTO at Yugabyte. Karthik was on the team that first built Apache Cassandra, helped optimize and scale HBase, and most recently built Yugabyte. In this video, he talks about the lessons he learned and how Yugabyte implements them.
  3. Data Geekery GmbH and jOOQ to speed up open source databases – Matt sits down with Lukas Eder to discuss the open source Java framework that provides an API for writing SQL statements through natural Java API calls.
  4. Get involved in the Data on Kubernetes (DoK) Community with Bart Farrell, head of the DoK community. Bart and Matt chatted about what is happening in the community, how people can get involved, and what fun things are coming up next.
  5. All things Vitess, PlanetScale, MySQL, and a passion for sailing – in this video, Matt talked to Alkin Tezuysal about his role as a Vitess maintainer, who is using and contributing to Vitess, and where it makes sense for companies to switch to this solution.
  6. PingCAP, HTAP, NewSQL, and TiDB – Liquan Pei talks to Matt about hybrid transactional/analytical processing (HTAP) and digs deeper into TiDB.
  7. MySQL Backups and Database Recovery Best Practices with Walter Garcia, one of Percona’s remote DBA experts, talking about recovery best practices, common backup mistakes, the difference between logical and physical backups, and more.
  8. Open Source Databases, Debugging, GDB & RR – Record and Replay with Marcelo Altmann, Software Engineer at Percona. Listen to it if you are interested in finding and debugging hard-to-find bugs.
  9. Oracle, MySQL, PostgreSQL, and MongoDB DBAs to DBREs – a talk with David Murphy about his experience of bringing more of a DevOps and SRE mentality to a company and why keeping control of your data is so important.
  10. The episode with MySQL and open source veteran Morgan Tocker about database monitoring, alerting, contributing to MySQL and TiDB, and waffles.

There is even more great content on the Percona YouTube channel! Subscribe to stay tuned to new episodes in the upcoming year.

Want to participate in tech conversations with Matt? Join our weekly online meetups, ask your questions to database experts, or become a guest yourself!


A New Way to Experience Databases — The Percona Platform

We are thrilled to announce the Percona Platform, which brings together database distributions, support expertise, services, management, and automated insights into a single product. Building on our expertise with databases including MySQL, PostgreSQL, and MongoDB, the new Percona Platform will simplify how you monitor, manage, and optimize your database instances across any infrastructure. In addition, the Percona Platform will enable you to run your own private Database-as-a-Service (DBaaS) instances.

Addressing Your Biggest Database Problems

We’re always listening, and with the Percona Platform we aim to address three of the biggest challenges database administrators face according to our latest research report: standardization, performance, and availability. In fact, 59% of respondents lost sleep at night due to concerns around downtime and availability. Let’s be honest, there’s not much worse than losing sleep.


Developers just want things to work, and they want self-service support so they can iterate faster with fewer cross-team handoffs. Database administrators want help to manage multiple database instances in a consistent approach, so they can deliver a highly available and performant database platform for their development teams as well as their customers. With this release, we are taking the first steps towards meeting these requirements.

Announcing the Percona Platform

Percona Platform is our response, bringing together everything that developers and DBAs need to implement environments quickly, with less effort and complexity. This launch will help developers create their applications faster while building upon a strong foundation provided by their database team.

This launch will preview the Percona Platform, allowing you to unify the entire database experience from packages and backup to monitoring and management, with general availability planned for early 2022. Percona brings together distributions of MySQL, PostgreSQL, and MongoDB, including a range of open source tools for data backup, availability, and management. It also includes Percona Monitoring and Management (PMM) for database management, monitoring, and automated insights, making it easier to manage database deployments. The Percona Platform preview requires no up-front commitments or fees.

Percona Platform will cover all deployment configurations, from internal data center deployments through to public, private, and hybrid cloud instances. At general availability, the platform will also cover self-managed and Percona-managed instances, allowing you the ultimate flexibility to run things the way that best suits your environment, level of confidence, and operating models.

What’s in the Percona Platform?

The Preview will include:

  • Percona Portal — provides access to all Percona services and product expertise in one place
  • Percona Account — simplifies user account management for all Percona products and services
  • Private DBaaS — enables you to easily provision and manage databases on a self-service basis across development and testing instances
  • Advisors — automated insights to ensure your database is performing at its best

The platform is available as a preview, and you can get started at percona.com/platform-preview. Try it out today!


Percona’s Most Popular Posts on Open Source Database Technologies

While we prepare to enter 2022 and start to sum up its results and schedule a plan for the future, let’s look back at the awesome posts published in the Percona Blog and find out which products and themes raised more engagement among readers.

We count the number of views of each blog post and recognize its author internally with the Blogger of the Month award: an exclusive hat and a bonus.

So here are the most popular articles of each month, the ones that drew the most of your attention.

January Champion – PostgreSQL on ARM-based AWS EC2 Instances: Is It Any Good? by Jobin Augustine and Sergey Kuzmichev. They took an independent look at the price/performance of the new instances from the standpoint of running PostgreSQL and performed the tests described in the post.

February Champion – PostgreSQL Database Security: Authentication by Ibrar Ahmed. It covers every PostgreSQL internal authentication method in detail.

March Champion – Overview of MySQL Alternative Storage Engines by Sri Sakthivel. The InnoDB engine is very popular, but there are some alternatives for MySQL to consider. This post is an overview of other engines, with examples.

April Champion – Upgrading to MySQL 8: Embrace the Challenge by Mike Benshoof. He shared some tips to facilitate and approach the upgrade to MySQL 8.

May Champion – Manage MySQL Users with Kubernetes by Sergey Pronin. Quite a common request that we receive from the community and customers is to provide a way to manage database users with Operators. Sergey described how to take this to another level and provision users on any database through the Kubernetes control plane with the help of Crossplane.io.

June Champion – Autoscaling Databases in Kubernetes for MongoDB, MySQL, and PostgreSQL by Dmitriy Kostiuk and Mykola Marzhan. It explores to what extent it is possible to automate horizontal scaling of those databases in Kubernetes and how we do it.

July Champion – Improve PostgreSQL Query Performance Insights with pg_stat_monitor by Ibrar Ahmed and Hamid Akhtar. It describes how pg_stat_monitor helps you look at query performance without manually combining client/connection information from pg_stat_activity, statistical data from pg_stat_statements, and query plans from auto_analyze to understand query performance patterns.

August Champion – Migrating PostgreSQL to Kubernetes by Sergey Pronin. Sergey described how to migrate PostgreSQL with minimal downtime using the Percona Distribution for PostgreSQL Operator.

September Champion – PostgreSQL 14 – Performance, Security, Usability, and Observability by Umair Shahid. It is an overview of the PostgreSQL 14 enhancements for heavy transactional workloads, specialized use cases, and additional support for distributed data.

October Champion – Why Linux HugePages are Super Important for Database Servers: A Case with PostgreSQL by Jobin Augustine. He described how Linux Huge Pages can potentially save the database server from OOM killers and associated crashes, with a testable and repeatable case.

And while we don’t yet know November’s most popular post, we definitely have a leader – Should I Create an Index on Foreign Keys in PostgreSQL? by Charly Batista, answering a question he has often seen in the webinars he presented.

Some articles did not take first place the month they were published, but they definitely deserve special mention as very popular ones covering some interesting themes.

We also often invite the authors of those posts and other active community members to take part in another community project – the HOSS Talks FOSS podcast, hosted by Matt Yonkovit. There they have fun, chat, and discuss all things open source. Episodes are available on the Percona YouTube channel, as are the recordings of weekly online meetups for PostgreSQL, MySQL, MongoDB, and Percona Monitoring and Management. Subscribe to the channel to stay tuned for new episodes!


Upcoming End of Life for PMM v1 and MongoDB 4.0, Ubuntu 16.04 End of Support



The table below summarizes the message in this blog post. For more details and reasoning, please continue reading.

  • Percona Monitoring and Management v1 – End of Life, May 2022
  • Percona Server for MongoDB v4.0 and Percona Distribution for MongoDB v4.0 – End of Life, April 2022
  • Packaging and support for Ubuntu 16.04 – no new releases, effective immediately

Percona Monitoring and Management v1 EOL

Percona Monitoring and Management (PMM) started its history in April 2016, when PMM Beta 1.0.0 was released. More than three years later, in September 2019, we released PMM 2.0.0 – a successor and feature-rich replacement for v1. Since that time, all our engineering efforts have been focused on version 2, but we were still shipping minor updates and security patches for version 1.

Six months from now, on May 1, 2022, Percona Monitoring and Management version 1 will enter the End of Life stage, which means it will no longer receive security updates, bug fixes, or improvements.

Because of the significant architectural changes between PMM v1 and PMM v2, there is no direct upgrade path. Please review Running PMM1 and PMM2 Clients on the Same Host for how to perform a smooth transition from one version to another.


Ubuntu Linux 16.04 EOL

Ubuntu Linux 16.04 reached its end of life on April 30, 2021. Even though Canonical announced that it will provide paid Extended Security Maintenance (ESM) for 16.04 for five more years, we have decided to stop providing releases of our software for this platform in favor of focusing our efforts on more recent versions of Ubuntu. There are two reasons:

  • ESM addresses security issues only; the latest packages, libraries, and software are not shipped. Supporting such a platform would slow down innovation for our engineering teams and complicate the release process.
  • We see the community dropping support for Ubuntu 16.04 as well – MySQL, MongoDB (5.0), PostgreSQL

MongoDB 4.0 EOL

As per the official MongoDB Software Lifecycle Schedules, MongoDB 4.0 goes End of Life in April 2022. At Percona, we are going to follow the same schedule for our MongoDB 4.0-based products, starting in April 2022.

Our support team will continue to provide operational support for 4.0 even after April 2022 (though bug fixes and software builds will no longer be generated), but we strongly encourage our users and customers to prepare for the upgrade beforehand. At the time of writing this blog post, the recommended version to upgrade to is Percona Server for MongoDB 4.2.17-17. Please read this how-to on performing a major version upgrade from 4.0 to 4.2.

If you need assistance with the upgrade, please get in touch with us by raising a support ticket, contacting our sales team, or raising a topic on our Community Forum.

Call for Action

We understand that upgrading software can be challenging, especially for mid- and large-sized environments. We would like to hear your story and concerns in the corresponding forum threads and see if we can help you address them.


Cloud-Native Through the Prism of Percona: Episode 1

The cloud-native landscape matures every day, and great new tools and products continue to appear. We are starting a series of blog posts that will focus on new tools in the container and cloud-native world and provide a holistic view through the prism of Percona products.

In this blog:

  • VMware Tanzu Community Edition
  • Data on Kubernetes survey
  • Azure credits for open source projects
  • Percona Distribution for PostgreSQL Operator is GA
  • kube-fledged
  • kubescape
  • M3O – new generation public cloud

VMware Tanzu Community Edition

I personally like this move by VMware to open source Tanzu, its set of products to run and manage Kubernetes clusters and applications. Every time I deploy Amazon EKS, I feel like I’m being punished for something. With VMware Tanzu, deployment of a cluster on Amazon (not EKS) is a smooth experience. It has its own quirks, but it is still much, much better.

Tanzu Community Edition is not limited to AWS; it also supports other public clouds and even local environments with Docker.

I was also able to successfully deploy Percona Operators on a Tanzu-provisioned cluster. Keep in mind that you need to create a storage class to run stateful workloads; this can easily be done with Tanzu’s packaging system.

Data on Kubernetes Survey

The Data on Kubernetes (DoK) community does a great job in promoting and evangelizing stateful workloads on Kubernetes. I strongly recommend you check out the DoK 2021 report that was released this week. Key takeaways:

  • Kubernetes is on the rise (nothing new). Half of the respondents run 50% or more of their production workloads in k8s.
  • K8s is ready to run stateful workloads – 90% think so, and 70% already run data on k8s.
  • The key benefits of running stateful applications on Kubernetes:
    • Consistency
    • Standardization
    • Simplified management
    • Developer self-service
  • Operators help with:
    • Management
    • Scalability
    • Improved application lifecycle management

There are lots of other interesting data points; I encourage you to go through them.

Azure Credits for Open Source Projects

Percona’s motto is “Keeping Open Source Open”, which is why an announcement from Microsoft that it will issue Azure credits for open source projects caught our attention. This is a good move from Microsoft, helping the open source community certify products on Azure without spending a buck.

Percona Distribution for PostgreSQL Operator is GA

I cannot miss the opportunity to share that Percona’s PostgreSQL Operator has reached the General Availability stage. It was a long journey for us, and we were constantly focused on improving the quality of our Operator through the introduction of rigorous end-to-end testing. Please read more about this release on the PostgreSQL news mailing list. I also encourage you to look into our GitHub repository and try out the Operator by following these installation instructions.


kube-fledged

Back in the days of my operations career, I was looking for an easy way to have container images pre-pulled onto my Kubernetes nodes. kube-fledged does exactly this. The most common use cases are applications that require rapid start-up or batch processing that fires at random times. For Percona Operators, kube-fledged is useful if you scale your databases frequently and don’t want to waste valuable seconds pulling images. I tried it out with the Percona Distribution for PostgreSQL Operator, and it worked like a charm.

kube-fledged is an operator that controls which images to pull to the nodes via an ImageCache custom resource. I have prepared an ImageCache manifest for the Percona PostgreSQL Operator as an example – please find it here. It instructs kube-fledged to pull the images that we use in the PostgreSQL cluster deployment onto all nodes.


kubescape

In every container-related survey, we see security as one of the top concerns. kubescape is a neat tool to test whether Kubernetes and the applications on it are deployed securely, as defined in the National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) Kubernetes Hardening Guidance.

It provides both details and a summary of failures. Here, for example, is the failure of the resource policies control for a default Percona MongoDB Operator deployment:

[control: Resource policies] failed
Description: CPU and memory resources should have a limit set for every container to prevent resource exhaustion. This control identifies all the Pods without resource limit definition.
   Namespace default
      Deployment - percona-server-mongodb-operator
      StatefulSet - my-cluster-name-cfg
      StatefulSet - my-cluster-name-rs0
Summary - Passed:3   Excluded:0   Failed:3   Total:6
Remediation: Define LimitRange and ResourceQuota policies to limit resource usage for namespaces or nodes.

It might be a good idea for developers to add kubescape to their CI/CD pipeline to get additional automated security policy checks.

M3O – New Generation Public Cloud

“M3O is an open source AWS alternative built for the next generation of developers. Consume public APIs as simpler programmable building blocks for a 10x better developer experience.” – not my words, but from their website, m3o.com. In a nutshell, it is a set of APIs to greatly simplify development. In m3o’s GitHub repo there is an example of how to build a Reddit clone utilizing these APIs only. You can explore the available APIs here. As an example, I used the URL shortener API for this blog post’s link:

$ curl "https://api.m3o.com/v1/url/Shorten" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MICRO_API_TOKEN" \
  -d '{
    "destinationURL": "https://www.percona.com/blog/cloud-native-series-1"
  }'

Looks neat! For now, most of the APIs are free to use, but I assume this is going to change soon once the project gets more traction and grows its user base.

It is also important to note that this is an open source project, meaning that anyone can deploy their own M3O platform on Kubernetes (yes, k8s again) and have these APIs exposed privately and for free, not as a SaaS offering. See the m3o/platform repo for more details and a Pulumi example to deploy it.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!


Do You Believe in the Future of Open Source Databases? Tell Us Your Views!

Complete the 2021 Open Source Data Management Software Survey to share your knowledge and experience, and help inform the open source database community.

In 2020 we ran our second Open Source Data Management Software Survey. This resulted in some interesting data on the state of the open source database market. 

Some key statistics:

  • 41% of buying decisions are now made by architects, giving them significant power over software adoption within a company.
  • 81% of respondents gave cost savings as the most important reason for adoption. In this challenging economic climate, many companies are actively avoiding vendor license costs and lock-in.
  • 82% of respondents reported at least a 5% database footprint growth over the last year, with 62% reporting more significant growth and 12% growing over 50%.
  • Although promoted as a cheap and convenient alternative, cloud costs can spiral, with 22% of companies spending more on cloud hosting than planned.

To see how the landscape has changed over the last (turbulent) year, we are pleased to announce the launch of our third annual Open Source Data Management Software Survey.

The final results will be 100% anonymous and made freely available via Creative Commons Attribution ShareAlike (CC BY-SA).

Access to Accurate Market Data is Important

There are millions of open source projects currently running, and most are dependent on databases. 

Accurate data helps people track the popularity of different databases, and see how and where these databases are running successfully. 

The information helps people build better software and take advantage of shifting trends. It also helps businesses understand industry direction and make informed decisions on the software and services they choose.

We want to help developers, architects, DBAs, and any other users choose the best database or tool for their business, and understand how and where to deploy it. 

How Can You Help This Survey Succeed?

Firstly, we would love you to complete the survey and share your insight into current trends and new developments in open source database software. 

Secondly, we hope you will share this survey with other people who use open source database software and encourage them to contribute.

The more responses we receive, the more useful this data will be to the community. If there is anything we missed or you would like to ask in the future, we welcome your feedback.

The survey should take around 10 minutes to complete. 

Click here to complete the survey!


PostgreSQL 14 – Performance, Security, Usability, and Observability


Today saw the launch of PostgreSQL 14. For me, the important area to focus on is what the community has done to improve performance for heavy transactional workloads and additional support for distributed data.

The amount of data that enterprises have to process continues to grow exponentially. Being able to scale up and out to support these workloads on PostgreSQL 14 through better handling of concurrent connections and additional features for query parallelism makes a lot of sense for performance, while the expansion of logical replication will also help.

Performance Enhancements

Server Side Enhancements

Reducing B-tree index bloat

Frequently updated indexes tend to have dead tuples that cause index bloat. Typically, these tuples are removed only when a vacuum is run. Between vacuums, as the page gets filled up, an update or insert will cause a page split – something that is not reversible. This split would occur even though dead tuples inside the existing page could have been removed, making room for additional tuples.

PostgreSQL 14 implements an improvement whereby dead tuples are detected and removed even between vacuums, allowing for a reduced number of page splits and thereby reducing index bloat.

Eagerly delete B-tree pages

To reduce B-tree overhead, the vacuum system has been enhanced to eagerly remove deleted pages. Previously, it took two vacuum cycles to do so: the first marked the page as deleted, and the second actually freed up that space.

Enhancements for Specialized Use Cases

Pipeline mode for libpq

High latency connections with frequent write operations can slow down client performance as libpq waits for each transaction to be successful before sending the next one. With PostgreSQL 14, ‘pipeline mode’ has been introduced to libpq allowing the client to send multiple transactions at the same time, potentially giving a tremendous boost to performance. What’s more – because this is a client-side feature, PostgreSQL 14’s libpq can even be used with older versions of the PostgreSQL server.

Replicating in-progress transactions

Logical replication has been expanded to allow streaming in-progress transactions to subscribers. Previously, large transactions were written to disk until the transaction completed, and only then replicated to the subscriber. By allowing in-progress transactions to be streamed, users gain significant performance benefits along with more confidence in their distributed workloads.

Add LZ4 compression to TOAST

PostgreSQL 14 adds support for LZ4 compression for TOAST, a system used to efficiently store large data. LZ4 is a lossless compression algorithm that focuses on the speed of compression and decompression. LZ4 can be configured at the column as well as the system level. Previously, the only option was pglz compression – which is fast but has a small compression ratio.
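For example, assuming a server built with LZ4 support, the compression method can be chosen per column or set as the default (the table and column names here are made up):

```sql
-- Use LZ4 for a single column:
CREATE TABLE documents (
    id   bigint GENERATED ALWAYS AS IDENTITY,
    body text COMPRESSION lz4
);

-- Or make LZ4 the default for newly created columns:
SET default_toast_compression = 'lz4';
```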

Enhancements for Distributed Workloads

The PostgreSQL foreign data wrapper – postgres_fdw – has been instrumental in easing the burdens of handling distributed workloads. With PostgreSQL 14, there are two major improvements in the FDW for such workloads: query parallelism can now be leveraged for parallel table scans on foreign tables, and bulk inserts are now allowed on foreign tables.

Both these improvements further cement the increasing ability of PostgreSQL to scale horizontally and natively handle distributed databases.

Improvement to SQL parallelism

PostgreSQL’s support for query parallelism allows the system to engage multiple CPU cores through parallel worker processes, drastically improving performance. PostgreSQL 14 brings more refinement to this system by adding support for RETURN QUERY and REFRESH MATERIALIZED VIEW to execute queries in parallel. Improvements have also been rolled out to the performance of parallel sequential scans and nested loop joins.


Security Enhancements

SCRAM as default authentication

SCRAM-SHA-256 authentication was introduced in PostgreSQL 10 and has now been made the default in PostgreSQL 14. The previous default, MD5 authentication, has known weaknesses that have been exploited in the past. SCRAM is much stronger, and it also allows for easier regulatory compliance for data security.
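On the server side this is controlled by password_encryption, which now defaults to SCRAM; a matching pg_hba.conf rule might look like this (the address range is a placeholder, and note that existing MD5-hashed passwords must be reset before switching):

```
# postgresql.conf – the default in PostgreSQL 14
password_encryption = scram-sha-256

# pg_hba.conf – require SCRAM for these connections
host    all    all    10.0.0.0/8    scram-sha-256
```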

Predefined roles

Two predefined roles have been added in PostgreSQL 14 – pg_read_all_data and pg_write_all_data. The former makes it convenient to grant a user read-only access to all tables, views, and schemas in the database, including any new tables that are created later. The latter grants write access to all data – effectively superuser-style privileges – which means one needs to be quite careful when using it!
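Granting them is a one-liner (the role names here are hypothetical):

```sql
GRANT pg_read_all_data  TO reporting_user;
GRANT pg_write_all_data TO etl_user;   -- use with care, per the caveat above
```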

Convenience for Application Developers

Access JSON using subscripts

From an application development perspective, I have always found support for JSON in PostgreSQL very interesting. PostgreSQL has supported this unstructured data form since version 9.2, but it has had a unique syntax for retrieving data. In version 14, support for subscripts has been added, making it easier for developers to retrieve JSON data using a commonly recognized syntax.
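A quick sketch of the difference, using a hypothetical jsonb column doc (note that subscripting returns jsonb, whereas ->> returns text):

```sql
-- PostgreSQL 13 and earlier: operator syntax
SELECT doc -> 'address' ->> 'city' FROM customers;

-- PostgreSQL 14: subscript syntax, also usable in UPDATE
SELECT doc['address']['city'] FROM customers;
UPDATE customers SET doc['address']['city'] = '"London"';
```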

Multirange types

PostgreSQL has had range types since version 9.2. PostgreSQL 14 now introduces ‘multirange’ support, which allows for non-contiguous ranges, helping developers write simpler queries for complex sequences. A simple practical example would be specifying the ranges of time a meeting room is booked throughout the day.
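Sketching that meeting-room example with a timestamp multirange (the times are made up):

```sql
-- Two separate booking windows for one room, held in a single value
SELECT tsmultirange(
    tsrange('2021-10-01 09:00', '2021-10-01 10:30'),
    tsrange('2021-10-01 14:00', '2021-10-01 15:00')
) @> '2021-10-01 09:30'::timestamp;   -- contained in the first window
```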

OUT parameters in stored procedures

Stored procedures were added in PostgreSQL 11, giving developers transactional control in a block of code. PostgreSQL 14 implements OUT parameters, allowing developers to return data using multiple parameters in their stored procedures. This feature will be familiar to Oracle developers and is a welcome addition for folks trying to migrate from Oracle to PostgreSQL.
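A minimal sketch, assuming a hypothetical orders table; note that when calling from plain SQL, each OUT argument is supplied as a NULL placeholder:

```sql
CREATE PROCEDURE get_order_stats(
    IN  p_customer_id bigint,
    OUT p_order_count bigint,
    OUT p_total       numeric)
LANGUAGE plpgsql
AS $$
BEGIN
    SELECT count(*), coalesce(sum(amount), 0)
      INTO p_order_count, p_total
      FROM orders
     WHERE customer_id = p_customer_id;
END;
$$;

CALL get_order_stats(42, NULL, NULL);
```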


Observability

Observability is one of the biggest buzzwords of 2021, as developers want more insight into how their applications are performing over time. PostgreSQL 14 adds more features to help with monitoring, one of the biggest changes being the move of the query hash system from pg_stat_statements into the core database. This allows a query to be monitored using a single ID across several PostgreSQL systems and logging functions. This version also adds new features to track the progress of COPY, WAL activity, and replication slot statistics.
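For instance, with compute_query_id = on in postgresql.conf (it is also enabled automatically when pg_stat_statements is loaded), the same identifier becomes visible in core views:

```sql
-- The query ID is now exposed outside pg_stat_statements:
SELECT pid, query_id, query FROM pg_stat_activity;
```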

The above is a small subset of the more than 200 features and enhancements that make up the PostgreSQL 14 release. Taken altogether, this version update will help PostgreSQL continue to lead the open source database sector.

As more companies look at migrating away from Oracle or implementing new databases alongside their applications, PostgreSQL is often the best option for those who want to run on open source databases.

Read Our New White Paper:

Why Customers Choose Percona for PostgreSQL


Percona and PostgreSQL: Better Together


The future of technology adoption belongs to developers.

The future of application deployment belongs in the cloud.

The future of databases belongs to open source.

That’s essentially the summary of Percona’s innovation strategy and that’s the reason I decided to join Percona – I believe the company can provide me with the perfect platform for my vision of the future of PostgreSQL.

So What Does That Mean, Really?

Gone are the days when deals were closed by executives on a golf course. That was Sales-Led Growth. The days of inbound leads based on content are also numbered. That is Marketing-Led Growth.

We are living in an age where tech decision-making is increasingly bottom-up. Where end-users get to decide which technology will suit their needs best. Where the only way to ensure a future for your PostgreSQL offerings is to delight the end users – the developers.

That’s Product-Led Growth.

PostgreSQL has been rapidly gaining in popularity, with the Stack Overflow Developer Survey ranking it as the most wanted database and DB-Engines declaring it DBMS of the Year 2020. DB-Engines shows steady growth in the popularity of PostgreSQL, outpacing any other database out there. It is ACID compliant, secure, fast, and reliable, with a liberal license, and backed by a vibrant community.

PostgreSQL is the World’s Most Advanced Open Source Relational Database.

Being a leader in open source databases, I believe Percona is well-positioned to drive a Product Led Growth strategy for PostgreSQL. We want to offer PostgreSQL that is freely available, fully supported, and certified. We want to offer you the flexibility of using it yourself, using it with our help, or letting us completely handle your database.

Our plan is to focus on the end-users, make it easy for developers to build applications using our technology, delight with our user experience, and deliver value with our expertise to all businesses – large or small.

Data is exciting. Data has the ability to tell stories. Data is here to stay. 

Data is increasingly stored in open source databases. 

PostgreSQL is the most wanted database by developers. 

That is why I believe: Percona and PostgreSQL – Better Together.

In our latest white paper, we explore how customers can optimize their PostgreSQL databases, tune them for performance, and reduce complexity while achieving the functionality and security your business needs to succeed.

Download to Learn Why Percona and PostgreSQL are Better Together
