Jun
05
2019
--

Enterprise Database Solution – Maximizing Return for the Lowest Cost


It used to be easy: a company developed a new application, chose a database solution, launched the new application, and then tuned the chosen database solution. A team of DBAs looked at the infrastructure as well as the workload and made changes (or suggestions) as needed. The application then stayed in production for years and small tweaks were made as needed.

Those days are long gone.

As technology has evolved, so has the workflow and deployment strategy within the large enterprise.  Large, monolithic applications are being split into several microservices, generally decoupled but still working together and somewhat interdependent. Waterfall deployment strategies are replaced with agile methodology and continuous code deployment. Tuning and maintaining large installations of physical hardware has become less of the focus with the advent of virtualization, containerization, and orchestrated deployments.

Despite all of these changes and radical shifts in the market, one question for executives and management has remained constant: what approach should I use to maximize my return and give me the most productive environment for the lowest cost? As any good consultant will tell you, “it depends”.  Even with all the advances in technology, frameworks, and deployment strategies, there is still no silver bullet that achieves everything you need within your organization (while also preparing your meals and walking your dog).

Choosing an Enterprise Database Solution

In this post, we’ll discuss some of the paths you can take as a guide on your journey of choosing an enterprise database solution. It’s not meant to provide technical advice or suggest a “best option.”

Before going into some of the options, let’s put a few assumptions out there:

  • Your organization wants to use the right database solution for the job (or a few limited solutions)
  • You DO NOT want to rack new physical servers every time you need a new server or expect growth
  • Your application teams far outnumber your operations and database team (in terms of number of teams and overall members)
  • The question of “what does your application do” is more accurately replaced with several variations of “what does this particular application do”

Now that we have that out of the way, let’s start with buzzword number one: the cloud. While the term is used all the time, it has a few different meanings. Originally (and most commonly), the cloud refers to the “public” cloud: entities like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. When it first came to fruition, the most common barrier to organizations moving to the cloud was security. As more and more PII data is stored by large enterprises, the inherent fear of a breach in the public cloud led many companies to shy away. Although this is much less of a concern given all the advances in security, there are some instances where an organization might still believe that storing data in a “public” data center is a hard no. If this is your organization, feel free to skip ahead to the on-premises discussion below.

Public Cloud

Assuming that you can engineer proper security in the public cloud of your choosing, some of the main benefits of outsourcing your infrastructure quickly bubble to the top:

  • Elasticity
  • Flexibility
  • Agility
  • Cost

Elasticity

In many circumstances, you need increased capacity now, but only for a limited time. Does this scenario sound familiar? The beauty of the public cloud is that you generally only pay for what you are using. Looking at things from a long-term cost perspective, if you only need two times your capacity for two weeks out of the year, why should you pay for half of your infrastructure to sit idle for the other fifty weeks annually?
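To put rough, purely illustrative numbers on that claim: if the burst requires N extra servers at an equivalent cost of C per server-week, then

$$\text{always-on: } 52\,N\,C \qquad \text{vs.} \qquad \text{pay-per-use: } 2\,N\,C$$

That is roughly 26 times more spent on the burst capacity when it is provisioned permanently, before even counting power, cooling, and floor space.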

Since you don’t have to actually maintain any physical gear in the public cloud, you have the ability to add/remove capacity as needed. There is no need to plan for or provision for additional hardware — and everything that comes with that (e.g., maintaining the cooling systems for double the number of data center servers, increased power costs, expanded physical space, etc.).

Flexibility / Agility

Most public clouds offer more than simply instant access to additional compute instances. There are managed services for several common use cases: relational databases, NoSQL databases, big data stores, message queues, and the list goes on. This flexibility is evident in using various managed services as glue to hold other managed services together.

In traditional environments, you may identify the need for a technology (think message queue), but opt against it due to the complexity of actually managing it, and instead use a less efficient alternative (a relational database, for example). With these components readily available in most public clouds, your organization has the flexibility to use the correct technology for each use case without the burden of maintaining it.

Along with the flexibility of plugging in the appropriate technology, you greatly increase the speed at which this can be done. There is much less need from an infrastructure standpoint to plan for supporting a new technology. With the click of a button, the new technology is ready to go in your stack.  In an agile work environment, having an agile platform to accompany the methodology is very important.

Cost

While the above benefits are all really great, the bottom line is always (the most) important. Depending on how you determine the overall cost of your infrastructure (i.e., hardware only, or including operations staff, building costs, etc.), you can see cost savings. One of the big challenges with running physical gear is the initial cost. If I want to run a rack of 20 servers, I have to buy 20 servers, rack them up, and turn them on. My ongoing operational cost is likely going to be less than in the cloud (remember, in the cloud you are paying as you use it), but I also need to spread the initial cost over time.

While an overall cost analysis is well outside the scope of this document, you can see how determining cost savings using the public cloud vs. an on-premises solution can be challenging. With all else being equal, you will generally have a more predictable monthly cost when using the public cloud and often can get volume (or reserved) discounts. For example, AWS provides a “TCO Calculator” to estimate how you could save on cost by switching to the public cloud: https://aws.amazon.com/tco-calculator/.

On-Premises

So the powers that be at your company have drawn a line in the sand and said “no” to using the public cloud. Does that mean that each time an application team needs a database, your operations team is racking a server and setting it up? It very well could, but let’s explore a few of the options available to your infrastructure team.

Dedicated Hardware

While this option can seem outdated, there are several benefits to provisioning bare metal machines in your data center:

  • Complete control over the machine
    • OS tuning
    • Hardware choices
    • Physical control
  • Easy to make different “classes” of machine
    • Spinning disks for DR slaves
    • SSD for slaves
    • Flash storage for masters
    • Etc
  • Easier troubleshooting
    • Less of a need to determine which “layer” is having problems
  • Less overhead for virtualization/containerization
  • No “extra servers” needed for managing the infrastructure

In a relatively static environment, this is still a great choice as you have full access and minimal layers to deal with. If you see disk errors, you don’t have to decide which “layer” is actually having problems – it is likely the disk. While this is nice, it can be cumbersome and a burden on your operations staff when there are always new databases being added (for microservices or scaling).

In this model, each server is assumed to be a static resource. Generally, you wouldn’t provision a bare metal machine with an OS and database and then wipe it and start over repeatedly. Rather, this model of deployment is best suited to an established application running a predictable workload, where scaling is slow and over time.

A major downside to this approach is resource utilization. Normally, you wouldn’t want to only use half of everything that you purchase. When dealing with bare metal machines, you generally don’t want to have everything running at maximum capacity all the time so that you can handle spikes in traffic.  When provisioning bare metal machines, this means you either have to pay for all of your potential resources and then watch most of them sit idle much of the time or risk outages while continuously running at the limits.

Virtualization/Containers

Right up there with “the cloud”, another buzzword these days is “containers”. At a high level, containers and virtualization are similar in that they both allow you to use part of a larger physical server to emulate a smaller server. This gives operations teams the ability to create “images” that can be used to quickly provision “servers” on larger bare metal machines.

While this does add a new layer to your stack, and can potentially introduce some additional complexity in tuning and/or troubleshooting, two major problems with bare metal provisioning are addressed:

  • Flexibility
  • Resource utilization

In terms of flexibility, operations teams are able to have a collection of standard images for various systems, such as application servers or database servers, and quickly spin them up on readily waiting hardware. This makes it much easier when an application team says “we need a new database for this service and will need four application servers with it.”  Rather than racking up and setting up five physical machines and installing the OS along with various packages, the operations team simply starts five virtual machines (or containers for those of you “containerites” out there) and hands them off.

This also helps with resource utilization. Rather than setting one application server up on a physical machine and keeping it under 50% utilization all the time, you are able to launch multiple VMs on this machine, each just using a portion. When the physical machine reaches maximum capacity, you can move an image to a new physical machine. This process gets rinsed and repeated as traffic patterns change and resource demands shift. It decreases some of the pain that comes from watching bare machines sit idle.

Private Cloud

Now, let’s put it all together and talk about creating a private cloud. It’s the best of both worlds, right?  All the flexibility and elasticity of the public cloud, but in your own data center where you can retain full control of everything. In this scenario, an organization is generally doing the following:

  • Managing a data center of generic, physical machines
  • Leveraging virtualization and/or containerization to quickly launch/destroy server images
  • Using an orchestration layer to manage all of the VMs/containers

This is a great fit for organizations that already have made an investment in a large physical infrastructure. You likely already have hundreds of servers at your disposal, so why not get the most utilization you can out of them and make your infrastructure much more dynamic?

Consider this…

While this sounds amazing (and quite often IS the best fit), here’s what to consider.  When dealing with a large internal cloud, you will need people experienced in managing this sort of infrastructure. Even though application teams now just hit a button to launch a database and application server, the cloud is still backed by a traditional data center with bare metal servers. An operations team is still a very needed entity — even though they may not be your traditional “DBA” or “ops guy”.

Also, the complexity of managing (and definitely troubleshooting) an environment such as this generally increases by an order of magnitude. Generic questions like “why is my application running slow?” used to be easier to answer: you check the application server and the database server, look at some metrics, and can generally pinpoint what is happening. In a large private cloud, now you’ll need to look at:

  • Application/query layer
  • Orchestration layer
  • Virtualization / container layer
  • Physical layer

That is not to say it isn’t worth it, but managing an internal cloud is not a trivial task, and much thought needs to go into it.

How Can Percona Help?

Having been in the open source database space for years, Percona has seen and worked on just about every MySQL deployment imaginable. We also focus on picking the proper tool for the job and will meet your organization where you are. Running Postgres on bare metal servers? We can help. Serving your application off of EC2 instances backed by an RDS database? No problem. MongoDB on Kubernetes in your private cloud? Check.

We can also work with your organization to help you choose the best path to follow. We love open source databases and the flexibility that they can provide. Our team has experience designing and deploying architectures ranging from a single database cloud server to hundreds of bare metal machines spanning across multiple data centers. With that sort of experience, we can help your organization with an enterprise database solution too!

Photo by Carl Nenzen Loven on Unsplash

Jan
10
2019
--

PostgreSQL Updatable Views: Performing Schema Updates With Minimal Downtime


Recently, one of our customers asked us how to minimize downtime when upgrading the database structure with changes that are not backwards-compatible. It’s an interesting question and I would like to visit some alternatives here. I will use PostgreSQL for this series of posts and walk through updatable views, INSTEAD OF triggers, and the Rule System. Later, we’ll discuss alternatives available for other databases like MySQL.

This first post will give an overview of the problem and also the first implementation of the solution in PostgreSQL using updatable Views.

The Motivation

Software is like a living organism and, as such, it evolves. It’s not surprising that database schemas also evolve, and this brings us a problem: how do we minimize downtime when performing upgrades? Or, going even further, is it possible to upgrade them without activating maintenance mode, thereby making the service unavailable for our customers?

Let’s say that we want to push out update 2.0. It’s a major update, and it includes both application code changes and database changes such as altered tables, dropped columns, new tables and so on. Checking the changelog, we notice that most of the database changes are backwards-compatible, but a few modified tables are not, so we can’t just push out the new database changes without breaking some functionality in the existing codebase. To avoid triggering errors while we upgrade the database, we need to shut down the application servers, update the database, update the codebase, and then get the servers back up and running again. That means that we need an unwanted maintenance window!

As per our definition of the problem, we want to get to the point where we don’t need this maintenance window: a point where the old and new codebase can coexist for a period of time while we upgrade the system. One solution is to only make changes that the current codebase can handle but, as you may have already assumed, that isn’t really an option when we are constantly trying to optimize and improve our databases. Another option, then, is to use PostgreSQL updatable views.

Updatable Views

PostgreSQL introduced automatically updatable views in 9.3. The documentation[1] says that simple views are automatically updatable and the system will allow INSERT, UPDATE or DELETE statements to be used on the view in the same way as on a regular table. A view is automatically updatable if it satisfies all of the following conditions:

  • The view must have exactly one entry in its FROM list, which must be a table or another updatable view.
  • The view definition must not contain WITH, DISTINCT, GROUP BY, HAVING, LIMIT, or OFFSET clauses at the top level.
  • The view definition must not contain set operations (UNION, INTERSECT or EXCEPT) at the top level.
  • The view’s select list must not contain any aggregates, window functions, or set-returning functions.

Note that the idea is to provide a simple mechanism that helps when using views: if the view is automatically updatable, the system will convert any INSERT, UPDATE or DELETE statement on the view into the corresponding statement on the underlying base table. This can also be used to increase security granularity, giving you the power to define privileges that operate at the view level. If the view uses a WHERE clause, we can use the CHECK OPTION to prevent the user from being able to UPDATE or INSERT rows that are not in the scope of the view. For example, let’s say we have a view created to limit the user to viewing records from a specific country. If the user changes the country of any record, those records would disappear from the view. The CHECK OPTION can help to prevent this from happening. I recommend reading the documentation for more information about how views work in PostgreSQL.
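As a minimal sketch of that behavior (the table, view, and country values below are hypothetical and only illustrate the mechanism, they are not part of the example that follows):

-- Hypothetical table and view, used only to illustrate WITH CHECK OPTION
CREATE TABLE customers (id INTEGER PRIMARY KEY, name VARCHAR(100) NOT NULL, country CHAR(2) NOT NULL);

CREATE VIEW customers_it AS
    SELECT id, name, country FROM customers WHERE country = 'IT'
    WITH CHECK OPTION;

-- Allowed: the new row satisfies the view's WHERE clause
INSERT INTO customers_it VALUES (1, 'user_1', 'IT');

-- Rejected with an error: the updated row would no longer be visible
-- through the view, which is exactly what CHECK OPTION prevents
UPDATE customers_it SET country = 'FR' WHERE id = 1;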

Implementation

Using updatable views makes the implementation as simple as creating views. For our example I will use the below table:

test=# CREATE TABLE t (id INTEGER PRIMARY KEY, name VARCHAR(100) NOT NULL, password VARCHAR(300) NOT NULL, date_created TIMESTAMP NOT NULL DEFAULT now());
CREATE TABLE
test=# INSERT INTO t(id, name, password) VALUES (1, 'user_1', 'pwd_1'), (2, 'user_2','pwd_2'),(3,'user_3','pwd_3'),(4,'user_4','pwd_4'),(5,'user_5','pwd_5');
INSERT 0 5
test=# SELECT * FROM t;
id | name | password | date_created
----+--------+----------+----------------------------
1 | user_1 | pwd_1 | 2018-12-27 07:50:39.562455
2 | user_2 | pwd_2 | 2018-12-27 07:50:39.562455
3 | user_3 | pwd_3 | 2018-12-27 07:50:39.562455
4 | user_4 | pwd_4 | 2018-12-27 07:50:39.562455
5 | user_5 | pwd_5 | 2018-12-27 07:50:39.562455
(5 rows)

We then change the schema, renaming the column password to pwd and date_created to dt_created, and adding two more columns, pwd_salt and comment. The added columns are not a real problem because they can be either nullable or have a default value, but the column renames are a problem. The changes are:

test=# create schema v_10;
CREATE SCHEMA
test=# CREATE VIEW v_10.t AS SELECT id, name, password AS password, date_created AS date_created FROM public.t;
CREATE VIEW
test=# ALTER TABLE public.t RENAME COLUMN password TO pwd;
ALTER TABLE
test=# ALTER TABLE public.t RENAME COLUMN date_created TO dt_created;
ALTER TABLE
test=# ALTER TABLE public.t ADD COLUMN pwd_salt VARCHAR(100);
ALTER TABLE
test=# ALTER TABLE public.t ADD COLUMN comment VARCHAR(500);
ALTER TABLE

To make sure our application works properly, we’ve defined that the tables live in a specific main schema (in this example, the public schema) and the views live in the versioned schemas. In this case, if a specific version needs a view to guarantee backwards-compatibility, we just create the view inside that versioned schema and apply the changes to the table in the main schema. The application will always set the “search_path” to “versioned_schema,main_schema”, which is “v_10, public” in this example:

test=# SET search_path TO v_10, public;
SET
test=# SELECT * FROM t;
id | name | password | date_created
----+--------+----------+----------------------------
1 | user_1 | pwd_1 | 2018-12-27 07:50:39.562455
2 | user_2 | pwd_2 | 2018-12-27 07:50:39.562455
3 | user_3 | pwd_3 | 2018-12-27 07:50:39.562455
4 | user_4 | pwd_4 | 2018-12-27 07:50:39.562455
5 | user_5 | pwd_5 | 2018-12-27 07:50:39.562455
(5 rows)
test=# select * from public.t;
id | name | pwd | dt_created | pwd_salt | comment
----+--------+-------+----------------------------+----------+---------
1 | user_1 | pwd_1 | 2018-12-27 07:50:39.562455 | |
2 | user_2 | pwd_2 | 2018-12-27 07:50:39.562455 | |
3 | user_3 | pwd_3 | 2018-12-27 07:50:39.562455 | |
4 | user_4 | pwd_4 | 2018-12-27 07:50:39.562455 | |
5 | user_5 | pwd_5 | 2018-12-27 07:50:39.562455 | |
(5 rows)

As we can see, the application still sees the old schema, but does this work? What if someone updates the password of ID #3? Let’s check:

test=# UPDATE t SET password = 'new_pwd_3' WHERE id = 3;
UPDATE 1
test=# SELECT * FROM t;
id | name | password | date_created
----+--------+-----------+----------------------------
1 | user_1 | pwd_1 | 2018-12-27 07:50:39.562455
2 | user_2 | pwd_2 | 2018-12-27 07:50:39.562455
4 | user_4 | pwd_4 | 2018-12-27 07:50:39.562455
5 | user_5 | pwd_5 | 2018-12-27 07:50:39.562455
3 | user_3 | new_pwd_3 | 2018-12-27 07:50:39.562455
(5 rows)
test=# SELECT * FROM public.t;
id | name | pwd | dt_created | pwd_salt | comment
----+--------+-----------+----------------------------+----------+---------
1 | user_1 | pwd_1 | 2018-12-27 07:50:39.562455 | |
2 | user_2 | pwd_2 | 2018-12-27 07:50:39.562455 | |
4 | user_4 | pwd_4 | 2018-12-27 07:50:39.562455 | |
5 | user_5 | pwd_5 | 2018-12-27 07:50:39.562455 | |
3 | user_3 | new_pwd_3 | 2018-12-27 07:50:39.562455 | |
(5 rows)

As we can see, the updatable view worked like a charm! The new and old application codebases can coexist and work together while we roll out our upgrades. There are some restrictions, as explained in the documentation, like having only one table or view in the FROM clause, but for its simplicity, updatable views do a great job. What about more complex cases where we need to split or join tables? We will discuss those in future articles and show how we can solve them with both TRIGGERS and the PostgreSQL Rule System.
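One practical follow-up: once the old codebase has been fully retired and nothing sets its search_path to the versioned schema anymore, the compatibility layer can simply be dropped. A sketch using the names from the example above:

-- Remove the backwards-compatibility layer once the 1.x codebase is gone
DROP VIEW v_10.t;
DROP SCHEMA v_10;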

References

[1] https://www.postgresql.org/docs/current/sql-createview.html


Photo by Egor Kamelev from Pexels

Dec
17
2018
--

Amazon RDS Aurora Serverless – The Basics


When I attended AWS re:Invent 2018, I saw there was a lot of attention from both customers and the AWS team on Amazon RDS Aurora Serverless. So I decided to take a deeper look at this technology and write a series of blog posts on this topic.

In this first post of the series, you will learn about Amazon Aurora Serverless basics and use cases. In later posts, I will share benchmark results and more in-depth findings.

What Amazon Aurora Serverless Is

A great source of information on this topic is How Amazon Aurora Serverless Works in the official AWS documentation. There, you learn what a Serverless deployment means, as opposed to a provisioned deployment. Instead of specifying an instance size, you specify the minimum and maximum number of “Aurora Capacity Units” you would like to have:

[Screenshots: choosing the MySQL version on Aurora, the Aurora Serverless setup screen, and the capacity settings]

Once you set up such an instance, it will automatically scale between its minimum and maximum capacity points. You will also be able to scale it manually if you like.

One of the most interesting Aurora Serverless properties, in my opinion, is its ability to pause itself if it stays idle for a specified period of time.

[Screenshot: the pause capacity setting on Amazon Aurora]

This feature can save a lot of money for test/dev environments where the load can be intermittent. Be careful, though, about using it for production-size databases, as waking up is far from instant. I’ve seen cases of it taking over 30 seconds in my experiments.

Another thing which may surprise you about Amazon Aurora Serverless, at the time of this writing, is that it is not very well coordinated with other Amazon RDS Aurora products: it is only available as a MySQL 5.6-based edition, it is not compatible with the recent parallel query innovations, and it comes with a list of other significant limitations. I’m sure Amazon will resolve these in due course, but for now you need to be aware of them.

A simple way to think about it is as follows: Amazon Aurora Serverless is a way to deploy Amazon Aurora so it scales automatically with load; can automatically pause when there is no load; and resume automatically when requests come in.

What Amazon Aurora Serverless is not

When I think about Serverless computing, I think about elastic scalability across multiple servers and resource-usage-based pricing. DynamoDB, another database which Amazon advertises as Serverless, fits those criteria, while Amazon Aurora Serverless does not.

With Amazon Aurora Serverless, for better or for worse, you’re still living in the “classical” instance world. Aurora Capacity Units (ACUs) are pretty much CPU and memory capacity. You still need to understand how many database connections you are allowed to have. You still need to monitor your CPU usage on the instance to understand when auto scaling will happen.

Amazon Aurora Serverless also does not have any magic to scale you beyond single-instance performance, which you can get with provisioned Amazon Aurora.

Summary

I’m excited about the new possibilities Amazon Aurora Serverless offers. As long as you do not expect magic and understand this is one of the newest products in the Amazon Aurora family, you surely should give it a try for applications which fit.

If you’re hungry for more information about Amazon Aurora Serverless and can’t wait for the next articles in this series, this article by Jeremy Daly contains a lot of great information.


Photo by Emily Hon on Unsplash

Nov
15
2018
--

How Not to do MySQL High Availability: Geographic Node Distribution with Galera-Based Replication Misuse


Let’s talk about MySQL high availability (HA) and synchronous replication once more.

It’s part of a longer series on some high availability reference architecture solutions over geographically distributed areas.

Part 1: Reference Architecture(s) for High Availability Solutions in Geographic Distributed Scenarios: Why Should I Care?

Part 2: MySQL High Availability On-Premises: A Geographically Distributed Scenario

The Problem

A question I often get from customers is: how do I achieve high availability if I need to spread my data across different, distant locations? Can I use Percona XtraDB Cluster?

Percona XtraDB Cluster (PXC), MariaDB Cluster, and MySQL-Galera are very stable and well-known solutions to improve MySQL high availability using an approach based on a multi-master, data-centric, synchronous data replication model. This means that each data node composing the cluster MUST see the same data at a given moment in time.

Information/transactions must be stored and visible synchronously on all the nodes at a given time. This is defined as a tightly coupled database cluster. This level of consistency comes with a price, which is that nodes must physically reside close to each other and cannot be geographically diverse.

This is by design (in all synchronous replication mechanisms), and it has had to be clarified over and over throughout the years. Despite that, we still see installations that span geographic locations, including AWS Regions.

We still see some solutions breaking the golden rule of proximity, and trying to break the rules of physics as well. The problem/mistake is not different for solutions based on-premises or in the cloud (for whatever cloud provider).

Recently I had to design a couple of customer solutions based on remote geographic locations. In both cases, the customer was misled by an incorrect understanding of how the synchronous solution works, and by a lack of understanding of the network layer. I decided I needed to cover this topic again, as I have done previously in Galera geographic replication and Effective way to check network connection in a geographically distributed environment.

What Happens When I Put Things on the Network?

Well, let’s start with the basics.

While light travels at 300 million meters per second, the propagation of the electric fields or electric signaling is slower than that.

The real speed depends on the medium used to transmit the signal, but it can be said that it normally spans from 0% to 99% of light-speed (depending on the transmission medium).

This means that in optimal conditions the signal travels at approximately 299.72Km per millisecond, in good/mid condition about half that at 149.86Km per millisecond, and in bad conditions it could be 3Km per millisecond or less.

To help you understand, the distance between Rome (Italy) and Mountain View (California) is about 10,062Km. At light-speed it will take 33.54ms. In good conditions (90% of light-speed) the signal will take 37.26ms to reach Mountain View, and in less optimal conditions it can easily double to 74.53 ms.
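To make the arithmetic explicit, the one-way propagation time is simply the distance divided by the effective signal speed; using the figures above:

$$t = \frac{d}{f \cdot c} = \frac{10{,}062\ \text{km}}{0.9 \times 299.72\ \text{km/ms}} \approx 37.3\ \text{ms}$$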

Keep in mind this is the electric field propagation speed: optimal conditions with no interruption, re-routing and so on. Reality will bring all the kind of interruptions, repeaters and routing.

All the physics above works as a baseline. On top of this, each human construct adds functionalities, flexibility and (unfortunately) overhead – leading to longer times and slower speeds.

The final speed will be different than the simple propagation of the electric fields. It will include the transmission time of complex signaling using the ICMP protocol, or even higher delays with the use of a very complex protocol like TCP/IP, which includes handshaking, packet rerouting, re-sending and so on. On top of that, when sending things over the internet, we need to realize that it is very improbable we will be the only user sending data over that physical channel. As such, whatever we have “on the road” will need to face bandwidth limitations, traffic congestion and so on.

I have described the difference between the protocols (ICMP vs. TCP/IP) here, clarifying how the TCP/IP scenario is very different from using a protocol like ICMP, or from the theoretical approach.

What it all means is that we cannot trust the theoretical performance. We must move to a more empirical approach. But we must understand the right empirical approach or we will be misled.

An Example

I recently worked on a case where a customer had two data centers (DC) at a distance of approximately 400Km, connected with “fiber channel”. Server1 and Server2 were hosted in the same DC, while Server3 was in the secondary DC.

Their ping to Server3, with the default packet size, was ~3ms. Not bad at all, right?

We decided to perform some serious tests, running multiple sets of tests with netperf for many days collecting data. We also used the data to perform additional fine tuning on the TCP/IP layer AND at the network provider.

The results produced a common (for me) scenario (not so common for them):

[Graph: average transactions/sec at different dataset sizes against Server2 and Server3, before and after the network tuning]

The red line is the first set of tests BEFORE we optimized. The yellow line is the results after we optimized.

The graph above reports the average number of transactions/sec we could run against different dataset dimensions, changing the destination server. The full round trip was included in the calculation.

It is interesting to note that while the absolute numbers were better in the second (yellow) tests, this was true only for a limited dataset dimension. The larger the dataset, the higher the impact. This makes sense if you understand how the TCP/IP stack works (the article I mentioned above explains it).

But what surprised them were the numbers. Keeping aside the extreme cases and focusing instead on the intermediate case, we saw that shifting from a 48K dataset dimension to 512K hugely dropped the performance. The drop in executed transactions was from 2299 to 219 on Server2 (same DC) and from 1472 to 167 on Server3 (different DC).

Also, note that Server3 managed ~35% fewer transactions than Server2 from the start, given the latency. Latency moved from a more than decent 2.61ms to 27.39ms for Server2, and from 4.27ms to 37.25ms for Server3.

37ms latency is not very high. If that had been the top limit, it would have worked.

But it was not.

In the presence of the optimized channel, with fiber and so on, when the tests were hitting heavy traffic, the congestion was such that it compromised the data transmitted. Latency hit >200ms for Server3. Note those were spikes, but if you are in the presence of a tightly coupled database cluster, those events can become failures in applying the data and can create a lot of instability.

Let me quickly recap the situation for Server3:

We had two datacenters.

  • The connection between the two was with fiber
  • Distance ~400 km, but we MUST consider the distance to go and come back, because in real communication we have not only the send but also the receive packets.
  • Theoretical time at light-speed =2.66ms (2 ways)
  • Ping = 3.10ms (signal traveling at ~80% of the light speed) as if the signal had traveled ~930Km (full roundtrip 800 Km)
  • TCP/IP best at 48K = 4.27ms (~62% light speed) as if the signal had traveled ~1,281km
  • TCP/IP best at 512K =37.25ms (~2.6% light speed) as if the signal had traveled ~11,175km

Given the above, we have from ~20%–~40% up to ~97% loss from the theoretical transmission rate. Keep in mind that when moving from a simple signal to a heavier, concurrent transmission, we also have to deal with bandwidth limitations. This adds additional cost. All in only 400km of distance.
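The “as if the signal had traveled” figures in the list above are simply the measured time multiplied by the light-speed propagation rate, which makes the loss easy to visualize:

$$d_{\text{equiv}} = t \times 299.72\ \text{km/ms}, \qquad \text{e.g. } 4.27\ \text{ms} \times 299.72\ \text{km/ms} \approx 1{,}280\ \text{km}$$

against the ~800 km the packets actually had to cover for the round trip.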

This is not all. Within the 400km we were also dealing with data congestion, and in some cases the tests failed to provide the level of accuracy we required due to transmission failures and too many packet retries.

For comparison, consider Server2, which is in the same DC as Server1. Let’s see:

  • Ping = 0.027ms that is as if the signal had traveled ~11km light-speed
  • TCP/IP best at 48K = 2.61ms as if traveled for ~783km
  • TCP/IP best at 512K =27.39ms as if traveled for ~8,217km
  • We had performance loss, but the congestion issue and accuracy failures did not happen.

You might say, “But this is just a single case, Marco, you cannot generalize from this behavior!”

You would be right IF that were true (but it is not).

The fact is, I have done this level of checking many times and in many different environments, on-premises or in the cloud. Actually, in the cloud (AWS), I saw even more instability. The behavior stays the same. Please test it yourself (it is not difficult to use netperf). Just do the right tests, with RTT and multiple requests (see the sample test at the end of the article).

Anyhow, what I know from the tests is that when working INSIDE a DC, even with some significant overhead due to the TCP/IP stack (and maybe wrong cabling), I do not encounter the same congestion or bandwidth limits I have when dealing with an external DC.

This allows me to have more predictable behavior and tune the cluster accordingly: tuning that I cannot do to cover the transmission to Server3, because of unpredictable packet behavior and spikes. >200ms is too high and can cause delivery failures.

If we apply this knowledge to the virtually synchronous replication we have with Galera (Percona XtraDB Cluster), we can see that we are hitting the problems well explained in Jay’s article Is Synchronous Replication right for your app? There, he explains Callaghan’s Law: [In a Galera cluster] a given row can’t be modified more than once per RTT.
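A rough, back-of-the-envelope application of Callaghan’s Law to the numbers measured above shows the ceiling it imposes:

$$\text{modifications of a given row per second} \le \frac{1}{\text{RTT}} \;\Rightarrow\; \frac{1}{37.25\ \text{ms}} \approx 27/\text{s}, \qquad \frac{1}{200\ \text{ms}} = 5/\text{s}$$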

On top of that, when talking about geographically disperse solutions, we have TCP/IP magnifying the effect at the writeset transmission/latency level. This causes nodes NOT residing on the same physically contiguous network to delay all the certification/commit phases by some amount of time X.

When X is predictable, it may range between 80% and 3% of the light speed for the given distance. But you can’t predict the transmission time of a set of data split into several datagrams and sent over the internet using TCP/IP. So we cannot use the X range as a trustworthy measure.

The effect is unpredictable delay, and this is read as a network issue by Galera. The node can be evicted from the cluster, which is exactly what happens, and what we experience, when dealing with some “BAD” unspecified network issue. This means that whenever we need to use a solution based on a tightly coupled database cluster (like PXC), we cannot locate our nodes at a distance where the largest RTT is longer than our shortest desired commit period.

If our application must apply the data within a maximum of 200ms in one of its functions, and our min RTT is 2ms but our max RTT is 250ms, we cannot use this solution. Period. To be clear, locating a node in another geographic location, and as such using the internet to transmit/receive data, is by default a NO GO, given the unpredictability of that network link.

I doubt that nowadays we have many applications that can wait an unpredictable period to commit their data. The only case when having a node geographically distributed is acceptable is if you accept commits happening in undefined periods of time and with possible failures.

What Is the Right Thing To Do?

The right solution is easier than the wrong one, and there are already tools in place to make it work efficiently. Say you need to define your HA solution between the East and West Coast, or between Paris and Frankfurt. First of all, identify the real capacity of your network in each DC. Then build a tightly coupled database cluster in location A and another tightly coupled database cluster in the other location B. Then link them using ASYNCHRONOUS replication.

Finally, use a tool like Replication Manager for Percona XtraDB Cluster to automatically manage asynchronous replication failover between nodes. On top of all of that use a tool like ProxySQL to manage the application requests.

The full architecture is described here.

Conclusions

The myth of using ANY solution based on a tightly coupled database cluster across distributed geographic locations is just that: a myth. It is conceptually wrong and practically dangerous. It MIGHT work when you set it up, it MIGHT work when you test it, it MIGHT even work for some time in production.

By definition, it will break, and it will break when it is least convenient. It will break in an unpredictable moment, but because of a predictable reason. You did the wrong thing by following a myth.

Whenever you need to distribute your data over different geographic locations, and you cannot rely on a single physical channel (fiber) to connect the two locations, use asynchronous replication, period!

References

https://github.com/y-trudeau/Mysql-tools/tree/master/PXC

http://www.tusacentral.net/joomla/index.php/mysql-blogs/164-effective-way-to-check-the-network-connection-when-in-need-of-a-geographic-distribution-replication-.html

https://www.percona.com/blog/2013/05/14/is-synchronous-replication-right-for-your-app/

Sample test

#!/bin/bash
# Send all output (stdout and stderr) to a timestamped log file
test_log=/tmp/results_$(date +'%Y-%m-%d_%H_%M_%S').txt
exec 9>>"$test_log"
exec 2>&9
exec 1>&9
echo "$(date +'%Y-%m-%d_%H_%M_%S')" >&9
# Run the tests against each destination server (10.0.0.11, .12, .13)
for ip in 11 12 13; do
  echo "  ==== Processing server 10.0.0.$ip === "
  # Repeat for increasing payload/record sizes
  for size in 1 48 512 1024 4096; do
    echo " --- PING ---"
    # 5 pings with the "do not fragment" flag and the given payload size (bytes)
    ping -M do -c 5 10.0.0.$ip -s $size
    echo "  ---- Record Size $size ---- "
    # netperf TCP request/response (TCP_RR) test, 20 seconds, against the netserver
    # on port 3307: ${size}K requests with 48K responses and 1M socket buffers
    netperf -H 10.0.0.$ip -4 -p 3307 -I 95,10 -i 3,3 -j -a 4096 -A 4096 -P 1 -v 2 -l 20 -t TCP_RR -- -b 5 -r ${size}K,48K -s 1M -S 1M
    echo "  ---- ================= ---- "
  done
  echo "  ==== ----------------- === "
done

 

Nov
15
2018
--

MySQL High Availability On-Premises: A Geographically Distributed Scenario

MySQL High Availability. Shutterstock.com

In this article, we’ll look at an example of an on-premises, geographically distributed MySQL high availability solution. It’s part of a longer series on some high availability reference architecture solutions over geographically distributed areas.

Part 1: Reference Architecture(s) for High Availability Solutions in Geographic Distributed Scenarios: Why Should I Care?

Percona Consulting’s main aim is to identify simple solutions to complex problems. We try to focus on identifying the right tool, a more efficient solution, and what can be done to make our customers’ lives easier. We believe in doing the work once, doing it well, and having more time afterward for other aspects of life.

In our journey, we often receive requests for help – some simple, some complicated.  

Scenario

The company “ACME Inc.” is moving its whole business from a monolithic application to a distributed application, split into services. Each different service deals with the requests independently from each other. Some services follow the tightly-bounded transactional model, and others work/answer asynchronously. Each service can access the data storage layer independently.

In this context, ACME Inc. identified the need to distribute the application services over wide geographic regions, focusing on each region achieving scale independently.

The identified regions are:

  • North America
  • Europe
  • China

ACME Inc. is also aware of the fact that different legislation applies in each region. As such, each region requires independent information handling for sales policies, sales campaigns, customers, orders, billing and localized catalogs, but will share the global catalog and some historical aggregated data. While most of the application services will work by feeding and reading local distributed caches, the basic data related to the catalog, sales and billing is based on an RDBMS.

Historical data is instead migrated to a “Big Data” platform, and aggregated data is processed and pushed to a DWH solution at HQ. The application components are developed using multiple programming languages, depending on the service.

The RDBMS identified by ACME Inc., in collaboration with the local authorities, was MySQL-oriented. There were several alternative solutions, like:

  • PostgreSQL
  • Oracle DB
  • MS SQL server

We excluded closed-source RDBMSs given that some countries imposed a specific audit plugin. This plugin was only available for the mentioned platforms. The cost of parallel development and subsequent maintenance in case of RDBMS diversification was too high. As such all the regions must use the same major RDBMS component.

We excluded PostgreSQL given that, compared to it, MySQL adoption and utilization were higher, and MySQL had a well-defined code producer. Finally, the Business Continuity team of ACME Inc. had defined an ITSC (Information Technology Service Continuity) plan that defined the RPO (Recovery Point Objective), the RTO (Recovery Time Objective) and system redundancy.

That’s it. To fulfill the ITSC plan, each region must have the critical system redundantly replicated in the same region, but not in close proximity.

Talking About the Components

This is a not-so-uncommon scenario, and it also presents a lot of complexity if you try to address it with one solution. But let’s analyze it and see how we can simplify the approach while still meeting the needs and requirements of ACME Inc.

When using MySQL-based solutions, the answer to “what should we use?” is use what best fits your business needs. The “nines” availability reference table for the MySQL world (most RDBMSs) can be summarized below:

90.000% (36 days): MySQL Replication
99.900% (8 hours): Linux Heartbeat with DRBD (Obsolete DRBD)
99.900% (8 hours): RHCS with Shared Storage (Active/Passive)
99.990% (52 minutes): MHA/Orchestrator with at least three nodes
99.990% (52 minutes): DRBD and Replication (Obsolete DRBD)
99.995% (26 minutes): Multi-Master (Galera replication), 3 node minimum
99.999% (5 minutes): MySQL Cluster

An expert will tell you that it doesn’t always make sense to go for the most “nines” in the list. This is because each solution comes with a tradeoff: the more high availability (HA) you get, the higher the complexity of the solution and of managing it.

For instance, the approach used in MySQL Cluster (NDB) makes this solution not suitable for generic utilization. It requires proper analysis of the application needs, data utilization and archiving before being selected. It also requires in-depth knowledge to properly manage the cluster, as it is more complex than other similar solutions.

This indirectly makes a solution based on MySQL+Galera replication, the one with the next-highest HA level, the better choice, since it stays close to default, generalized MySQL/InnoDB utilization.

This is why, over the last six years, MySQL+Galera replication has become the most used solution for platforms looking for very high HA without needing to diverge from the standard MySQL/InnoDB approach. You can read more about Galera replication at: http://galeracluster.com/products/

Read more about Percona XtraDB Cluster.

There are several distributions implementing Galera replication, the main ones being Percona XtraDB Cluster, MariaDB Cluster/Server, and Codership’s Galera Cluster for MySQL.

*Note that MariaDB Cluster/Server and all related solutions coming from MariaDB have significantly diverged from the MySQL mainstream. This often means that once migrated to MariaDB, your database will not be compatible with other MySQL solutions. In short, you are locked in to MariaDB. It is recommended that you carefully evaluate the move to MariaDB before making that move.

Choosing the Components

RDBMS

Our advice is to use Percona XtraDB Cluster (PXC), because at the moment it is one of the most flexible, reliable, and compatible solutions. PXC is composed of three main components: Percona Server for MySQL, the Galera replication library (wsrep API), and Percona XtraBackup (used for node provisioning).

The cluster is normally composed of three nodes or more. Each node can be used as a Master, but the preferred and recommended way is to use one node as a Writer and the others as Readers.

Application-wise, accessing the right node can be challenging since this means you need to be aware of which node is the writer, which is the reader, and be able to shift from one to the other if necessary.

Proxy

To simplify this process, it helps to have an additional component that works as a “proxy” connecting the application layer to the desired node(s). The most popular solutions are:

  • HAProxy
  • ProxySQL

There are several important differences between the two. But in summary, ProxySQL is a Layer 7 proxy and is MySQL protocol aware. So, while HAProxy just passes the connection through as a forward proxy (Layer 4), ProxySQL is aware of what is going through it and acts as a reverse proxy.

With ProxySQL it is possible to decide, based on several parameters, where to send traffic (read/write split and more), what must be blocked, and whether an incoming SQL command should be rewritten, as illustrated in the sketch below. A lot of information is available on the ProxySQL website https://github.com/sysown/proxysql/wiki and on the Percona Database Performance Blog.
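As an illustration of that read/write split, here is a minimal sketch of statements run against the ProxySQL admin interface; the hostgroup numbers and IP addresses are hypothetical and would need to match your own topology:

-- Register one PXC node as the writer (hostgroup 10) and two as readers (hostgroup 20)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, '10.0.0.11', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, '10.0.0.12', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, '10.0.0.13', 3306);

-- Keep SELECT ... FOR UPDATE on the writer, send plain SELECTs to the readers;
-- everything else falls through to the default (writer) hostgroup
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
  VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
  VALUES (2, 1, '^SELECT', 20, 1);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;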

Backup/Restore

No RDBMS platform is safe without a well-tested procedure for backup and recovery. The Percona XtraDB Cluster package distribution comes with Percona XtraBackup as the default method for node provisioning. A good backup and restore (B/R) policy starts from ACME’s ITSC plan: have full and incremental backups that fully cover the RPO, and a good recovery procedure that keeps the recovery time inside the RTO whenever possible.

There are several tools that allow you to plan and execute backup/restore procedures, some coming from vendors and others open source and community-oriented. In keeping with a fully open source and community-oriented approach, we in Consulting normally suggest using: https://github.com/dotmanila/pyxbackup.

Pyxbackup is a wrapper around XtraBackup that helps simplify the B/R operations, including the preparation of a full and incremental set. This helps significantly reduce the recovery time.  

Disaster Recovery

Another very important aspect of the ITSC plan is the capacity of the system to survive major disasters. The disaster recovery (DR) solution must be able to act as the main production environment. Therefore, it must be designed and scaled with the same resources as the main production site. It must be geographically separated, normally by hundreds of kilometers or more. It must be completely independent of the main site. And it must be kept as much in sync with the main production site as possible.

While the first three “musts” are easy to understand, the fourth one is often the object of misunderstanding.

The concept of being as much in sync with the production site as possible creates confusion when designing HA solutions with Galera replication involved. The most common misunderstanding is the misuse of the Galera replication layer: mainly, the conceptual confusion between a tightly coupled database cluster and a loosely coupled database cluster.

Any solution based on Galera replication is a tightly coupled database cluster, because the whole idea is to be data-centric, synchronously distributed and consistent. The price is that this solution cannot be geographically distributed.

Solutions like standard MySQL replication are instead loosely coupled database clusters, and they are designed to be asynchronous. Given that, the nodes connected by it are completely independent in processing/applying transactions, and the solution fits perfectly into ANY geographically distributed replication scenario. The price is that data on the receiving end might not be up to date with the source at that specific instant.

The point is that for the DR site the ONLY valid solution is the asynchronous link (loosely coupled database cluster), because by design and requirement the two sites must be separated by a significant number of kilometers. For better understanding about why synchronous replication cannot work in a geographically distributed scenario, see “Misuse of Geographic Node distribution with Galera-based replication“.

In our scenario, the use of Percona XtraDB Cluster helps to create a more robust asynchronous solution. This is because each tightly coupled database cluster, no matter if source or destination, will be seen by the other tightly coupled database cluster as a single entity.

What it means is that we can shift from one node to another inside the two clusters, still confident we will have the same data available and the same asynchronous stream passing from one source to the other.

To ensure this procedure is fully automated, we add the last block to our architecture: Replication Manager for Percona XtraDB Cluster (https://github.com/y-trudeau/Mysql-tools/tree/master/PXC). RMfP is another open source tool that simplifies and automates failover inside each PXC cluster, such that our asynchronous solution doesn’t suffer if the node currently acting as Master fails.

How to Link the Components

Summarizing all the different components of our solution:

  • Application stack
    • Load balancer
    • Application nodes by service
    • Distributed caching
    • Data access service
  • Database stack
    • Data proxy (ProxySQL)
    • RDBMS (Percona XtraDB Cluster)
    • Backup/Restore
      • Xtrabackup
      • Pyxbackup
      • Custom scripts
    • DR
      • Replication Manager for Percona XtraDB Cluster
  • Monitoring
    • PMM (not covered here; see <link> for detailed information)

[Diagram: the full two-site architecture, with load balancers, application stacks, ProxySQL, the PXC clusters, and asynchronous replication linking the sites]

In the solution above, we have two locations separated by several kilometers. On top of them, the load balancer(s)/DNS resolution redirects the incoming traffic to the active site. Each site hosts a full application stack, and applications connect to local ProxySQL.

ProxySQL has read/write enabled to optimize the platform utilization, and is configured to shift writes from one PXC node to another in case of node failure. Asynchronous replication connects the two locations and transmits data from master to slave.

Note that with this solution, it is possible to have multiple geographically distributed sites.

Backups are taken at each site independently, and recovery tests are performed. RMfP oversees and modifies the replication channels in the case of a node failure.

Finally, Percona Monitoring and Management (PMM) is in place to perform in-depth monitoring of the whole database platform.

Conclusions

We always look for the most efficient, manageable, user-friendly combination of products, because we believe in providing and supporting the community with simple but efficient solutions. What we have presented here is the most robust and stable high availability solution in the MySQL space (except for MySQL NDB, which we have excluded).

It is designed to provide maximum service continuity, with limited bonding between the platforms/sites. It is also a well-tested solution that has been adopted and adapted in many different scenarios where performance and real HA are a must. I have preferred to keep this discussion at a high level, given that the details of the implementation have already been discussed elsewhere (see the reference section for more reading).

Still, Percona XtraDB Cluster (as any other solution implementing Galera replication) might not fit the final use. Given that, it is important to understand where it does and doesn’t fit. This article is a good summary with examples: Is Synchronous Replication right for your app?.

Check out the next article on How Not to do MySQL High Availability.

References

https://www.percona.com/blog/2016/06/07/choosing-mysql-high-availability-solutions/

https://dev.mysql.com/doc/mysql-ha-scalability/en/ha-overview.html

https://www.percona.com/blog/2014/11/17/typical-misconceptions-on-galera-for-mysql/

http://galeracluster.com/documentation-webpages/limitations.html

http://tusacentral.net/joomla/index.php/mysql-blogs/170-geographic-replication-and-quorum-calculation-in-mysqlgalera.html

http://tusacentral.net/joomla/index.php/mysql-blogs/167-geographic-replication-with-mysql-and-galera.html

http://tusacentral.net/joomla/index.php/mysql-blogs/164-effective-way-to-check-the-network-connection-when-in-need-of-a-geographic-distribution-replication-.html

http://tusacentral.net/joomla/index.php/mysql-blogs/183-proxysql-percona-cluster-galera-integration.html

https://github.com/sysown/proxysql/wiki

 

Nov
15
2018
--

Reference Architecture(s) for High Availability Solutions in Geographic Distributed Scenarios: Why Should I Care?

High Availability Solutions. Shutterstock.com

In this series of blog posts, I’m going to look at some high availability reference architecture solutions over geographically distributed areas.

The Problem

Nowadays, when businesses plan a new service or application, it is very common for them to worry about ensuring a very high level of availability. 

It doesn’t matter if we are talking about an online shop, online banking or the internal services of a large organization. We know users are going to expect access to services 24x7x365. They also expect to access data consistently and instantaneously. If we fail to meet their expectations, then they move to another provider and we lose money. Simple as that.

The other important aspect of providing online services and applications is that the amount of data produced, analyzed and stored is growing every day. We’ve moved from the few gigabytes of yesterday to terabytes today. Who knows what number of petabytes we need tomorrow?

What was once covered by a single LAMP stack today can require dozens of Ls and As, different letters instead of P (like J, R, Py, or G), and M. Our beloved MySQL, which used to be “enough” to cover our needs 12 years ago, no longer fits all the needs of many modern applications.

It is very common to have an application using different types of “storage” at different levels and for different aspects of its activities. We can use a key-value store to cache in-flight operations, and a relational, fully ACID database for the “valuable” core data (the kind of data that must be consistent and durable). Large data gets stored in an eventually consistent column store, and long-term data in some “big data” approach.

On top of all this are reporting mechanisms that collect elements from each data store to provide the required, comprehensive data picture. The situation is very diversified and complex, and the number of possible variables is high. The ways we can combine them are so vast that nowadays developers have no limits, and they often come up with creative solutions.

This is where we as architects can help: we can clarify how each tool can be used for the right job. We, at Percona, have the strong belief that we must seek simplicity in the complexity and embrace the KISS approach. This starts with the initial identification of the right tool for the job.

Let’s start by looking at some good practices in the following examples:

  • It is not a good idea to use key-value storage if you need to define the relationship between entities and rules between them.
  • Avoid using an eventually consistent storage when you have to save monetary information about customer payments.
  • It’s not a best practice to use a relational database to store HTML caching, page-tracking information, or game info in real time.

Use the right tool for the right job. Some tools scale writes better and keep an eventually consistent approach. Some others are designed to store an unbelievable amount of data, but cannot handle relations. As a result, they might take a long time when processing a typical OLTP request – if they can at all. Each tool has a different design and goal, each one scales differently, and each one has its way of handling and improving availability.

It is a crucial part of the architectural phase of your project not to mix the cards. Keep things clean and build the right architecture for each component. Then combine them in a way that harmonizes the final result. Each block should be optimized so that a complex issue is solved with simple answers.

How far are we from the old LAMP single stack? Ages. It is like turning your head and looking at our ancestors building the first tents. Tents are still a valid solution if you want to go camping. But only for fun, not for everyday life.

There is too often confusion around what a relational database should do and how it should do it. A relational database should not replace every other component of the wide architecture, and vice versa. They must coexist and work together with other options. Each one should maximize its characteristics and minimize its limitations.

In this series, we will focus on RDBMSs, and we will present a few possible reference architectures for the relational database layer. I will illustrate solutions that improve service availability, keeping the focus on each tool’s design and on the relational data approach with respect to the ACID paradigm.

This means employing the simple rules of:

  • Atomicity -> All operations, part of the same transaction, are concluded successfully or not applied at all.
  • Consistency -> Any data written must be valid/validated against the defined rules and combination thereof.
  • Isolation -> Guarantees that all transactions will occur in isolation. No transaction affects any other transaction.
  • Durability -> Durability means that, once a transaction is committed, it will remain in the system even if there’s a system crash immediately following the transaction. Transaction changes must be stored permanently.

We will discuss the solution involving the most common open source RDBMSs, covering on-premises and in the cloud:

  • MySQL
  • PostgreSQL
  • MongoDB

The scenario will be common to all solutions, but the way we implement each solution will answer different needs. The first example is MySQL high availability on-premises: MySQL High Availability On-Premises.
