Apr
23
2017
--

Percona XtraDB Cluster Transaction Replay Anomaly

In this blog post, we’ll look at a transaction replay anomaly in Percona XtraDB Cluster.

Introduction

Percona XtraDB Cluster/Galera replays a transaction if its data is non-conflicting but the transaction happens to hold conflicting locks.

Anomaly

Let’s understand this with an example:

  • Let’s assume a two-node cluster (node-1 and node-2)
  • Base table “t” is created as follows:
create database test;
use test;
create table t (i int, c char(20), primary key pk(i)) engine=innodb;
insert into t values (1, 'abc'), (2, 'abc'), (4, 'abc');
mysql> select * from t;
+---+------+
| i | c    |
+---+------+
| 1 | abc  |
| 2 | abc  |
| 4 | abc  |
+---+------+

  • node-2 starts a transaction (trx-2):
trx-2: update t set c = 'pqr';

  • node-2 creates a write-set and is just about to replicate it. At the same time, node-1 executes the following transaction (trx-1), and is first to add it to the group-channel (before node-2 adds trx-2):
trx-1: insert into t values (3, 'a');

  • trx-1 is replicated to node-2, which proceeds with the apply action. Since there is a lock conflict (but no certification conflict), node-2’s local transaction (trx-2) is aborted and scheduled for replay.
  • trx-1 causes the addition of (3, ‘a’), and then the node-2 transaction (trx-2) is REPLAYed.
  • REPLAY is done using the pre-created write-set that only modifies existing entries (1,2,4).

End-result:

mysql> select * from t;
+---+------+
| i | c    |
+---+------+
| 1 | pqr  |
| 2 | pqr  |
| 3 | a    |
| 4 | pqr  |
+---+------+

  • At first, nothing looks wrong. Look closely, however: the REPLAYed transaction “UPDATE t set c = ‘pqr’” was the last to commit, yet its effect is not fully visible, since row (3, ‘a’) still has ‘a’ instead of ‘pqr’. The binlog events confirm that the replayed UPDATE committed last:
| mysql-bin.000003 | 792 | Gtid | 2 | 857 | SET @@SESSION.GTID_NEXT= '6706fa1f-e3df-ee18-6621-c4e0bae533bd:4' |
| mysql-bin.000003 | 857 | Query | 2 | 925 | BEGIN |
| mysql-bin.000003 | 925 | Table_map | 2 | 972 | table_id: 219 (test.t) |
| mysql-bin.000003 | 972 | Write_rows | 2 | 1014 | table_id: 219 flags: STMT_END_F |
| mysql-bin.000003 | 1014 | Xid | 2 | 1045 | COMMIT /* xid=4 */ |
| mysql-bin.000003 | 1045 | Gtid | 3 | 1110 | SET @@SESSION.GTID_NEXT= '6706fa1f-e3df-ee18-6621-c4e0bae533bd:5' |
| mysql-bin.000003 | 1110 | Query | 3 | 1187 | BEGIN |
| mysql-bin.000003 | 1187 | Table_map | 3 | 1234 | table_id: 219 (test.t) |
| mysql-bin.000003 | 1234 | Update_rows | 3 | 1324 | table_id: 219 flags: STMT_END_F |
| mysql-bin.000003 | 1324 | Xid | 3 | 1355 | COMMIT /* xid=5 */ |
+------------------+------+----------------+-----------+-------------+---------------------------------------------------------------------------------+
21 rows in set (0.00 sec)

  • We have used a simple char string, but if there were a constraint here (say, that c must have the value X after the UPDATE completes), then the constraint would be violated even though the application reports the UPDATE as a success. The quick check after this list makes the mismatch easy to spot.
  • It is interesting to note what happens on node-1:
    • node-1 applies the local transaction (trx-1) and then gets the replicated write-set from node-2 (trx-2) that has changes only for rows (1, 2, 4). Thereby, data consistency is not compromised.
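
On node-2, the mismatch between commit order and visible effect is easy to demonstrate with a simple query (derived directly from the end-result above):

mysql> select * from t where c != 'pqr';
+---+------+
| i | c    |
+---+------+
| 3 | a    |
+---+------+
1 row in set (0.00 sec)

The row inserted by trx-1 escapes the logically-last UPDATE, because the replayed write-set was created before that row existed.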
Apr
22
2017
--

BEWARE: Increasing fc_limit can affect SELECT latency

In this blog post, we’ll look at how increasing the fc_limit can affect SELECT latency.

Introduction

Recent Percona XtraDB Cluster optimizations have exposed fc_limit contention. It was always there, but was never exposed as the Commit Monitor contention was more significant. As it happens with any optimization, once we solve the bigger contention issues, smaller contention issues start popping up. We have seen this pattern in InnoDB, and Percona XtraDB Cluster is no exception. In fact, it is good because it tells us that we are on the right track.

If you haven’t yet checked the performance blogs, then please visit here and here.

What is FC_LIMIT?

Percona XtraDB Cluster has the concept of Flow Control. If any member of the cluster (not counting garbd) is unable to match the apply speed to the replicated write-set arrival speed, a queue builds up. If this queue crosses a threshold (dictated by gcs.fc_limit), flow control kicks in. Flow control causes members of the cluster to temporarily halt or slow down so that the slower node can catch up.

The user can, of course, disable this by setting wsrep_desync=1 on the slower node (as shown below), but make sure you understand the effect of doing so. Unless you have a good reason, you should avoid setting it.
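
For completeness, desyncing a node is a one-liner; a sketch of what it looks like (use with care, as discussed above):

mysql> set global wsrep_desync=ON;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'wsrep_desync';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_desync  | ON    |
+---------------+-------+
1 row in set (0.00 sec)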

mysql> show status like 'wsrep_flow_control_interval';
+-----------------------------+------------+
| Variable_name               | Value      |
+-----------------------------+------------+
| wsrep_flow_control_interval | [ 16, 16 ] |
+-----------------------------+------------+
1 row in set (0.01 sec)

Increasing fc_limit

Until recently, the default fc_limit was 16 (starting with Percona XtraDB Cluster 5.7.17-29.20, the default is 100). This worked well enough in the past, since Percona XtraDB Cluster did not scale well and rarely hit the limit of 16. With the new optimizations, Percona XtraDB Cluster nodes can process more write-sets in a given time period, and can thereby replicate more write-sets (anywhere in the range of three to ten times more). The replicating/slave nodes also apply at a higher speed but, depending on the number of slave threads, it is easy to start hitting this limit.

So what is the solution?

  • Increase fc_limit from 16 to something really big, say 1600 (a sketch of how follows).
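
For reference, the limit can be changed at runtime (as demonstrated later in this post) or made persistent in the configuration file. A minimal sketch, with a value you should tune to your own workload:

[mysqld]
wsrep_provider_options="gcs.fc_limit=100"

Provider options not listed in wsrep_provider_options keep their defaults, so setting only gcs.fc_limit here is safe.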

Is this correct?

YES and NO.

Why YES?

  • If you don’t care about the freshness of data on the replicated nodes, then increasing the limit to a higher value is not an issue. Say setting it to 10K means that the replicating node is holding 10K write-sets to apply, and a SELECT fired during this time will not see the changes from these 10K write-sets.
  • But if you insist on having fresh data, then Percona XtraDB Cluster has a solution for this (set wsrep_sync_wait=7).
  • Setting wsrep_sync_wait places the SELECT request in a queue that is serviced only after the write-sets already replicated (as of the point when the SELECT was fired) have been applied. If the queue has 8K write-sets, the SELECT is placed at position 8K+1 and is serviced only when all those 8K write-sets are done. This dramatically increases SELECT latency and can set off monitoring alarms (see the one-liner below).
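
For reference, enabling the freshness check for the current session is a single statement (7 enables it for all statement types, as the note at the end of this post explains):

mysql> set session wsrep_sync_wait=7;
Query OK, 0 rows affected (0.00 sec)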

Why NO?

  • For the reasons mentioned above, we feel it is not a good idea to increase fc_limit beyond a certain point unless you do not care about data freshness (and therefore have no need to set wsrep_sync_wait).
  • We did a small experiment with the latest Percona XtraDB Cluster release to understand the effects.
- Started a two-node cluster.
- Fired a 64-thread workload on node-1 of the cluster.
- node-2 acts as a replicating slave without any active workload.
- Set wsrep_sync_wait=7 on node-2 to ensure data freshness.
Using default fc_limit (= 16)
-----------------------------
mysql> select sum(k) from sbtest1 where id > 5000 and id < 50000;
+-------------+
| sum(k)      |
+-------------+
| 22499552612 |
+-------------+
1 row in set (0.03 sec)
Increasing it from 16 -> 1600
-----------------------------
mysql> set global wsrep_provider_options="gcs.fc_limit=1600";
Query OK, 0 rows affected (0.00 sec)
mysql> select sum(k) from sbtest1 where id > 5000 and id < 50000;
+-------------+
| sum(k)      |
+-------------+
| 22499552612 |
+-------------+
1 row in set (0.46 sec)
That is a whopping 15x increase in SELECT latency.
Increasing it even further (1600 -> 25000)
-------------------------------------------
mysql> set global wsrep_provider_options="gcs.fc_limit=25000";
Query OK, 0 rows affected (0.00 sec)
mysql> select sum(k) from sbtest1 where id > 5000 and id < 50000;
+-------------+
| sum(k)      |
+-------------+
| 22499552612 |
+-------------+
1 row in set (7.07 sec)

Note: wsrep_sync_wait=7 will enforce the check for all statements (INSERT/UPDATE/DELETE/SELECT). We highlighted the SELECT example, as that is the most concerning at first glance, but latency for the other statements also increases for the same reasons mentioned above.

Conclusion

Let’s conclude with the following observation:

  • Avoid increasing fc_limit to an insanely high value as it can affect SELECT latency (if you are running a SELECT session with wsrep_sync_wait=7 for data freshness).
Apr
22
2017
--

Better Than Linear Scaling

In this blog, we’ll look at how to achieve better-than-linear scaling.

Scalability is the capability of a system, network or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth. For example, we consider a system scalable if it is capable of increasing its total output under an increased load when resources (typically hardware) are added: https://en.wikipedia.org/wiki/Scalability.

It is often accepted as fact that systems (in particular databases) can’t scale better than linearly. By this I mean that when you double resources, the expected performance at best doubles (and is often less than doubled).

We can attribute this assumption to Amdahl’s law (https://en.wikipedia.org/wiki/Amdahl%27s_law), and later to the Universal Scalability Law (http://www.perfdynamics.com/Manifesto/USLscalability.html). Both of these laws hold that it is impossible to achieve better-than-linear scalability. To be precise, this is practically correct for single-server systems when the added resources are only CPU units.

Multi-nodes systems

However, I think database systems should no longer be seen as single-server systems. MongoDB and Cassandra have had multi-node auto-sharding capabilities for a long time. We are about to see the rise of strongly-consistent SQL-based multi-node systems. And even MySQL is frequently deployed with manual sharding across multiple nodes.

Products like Vitess (http://vitess.io/) propose auto-sharding for MySQL, and with ProxySQL (which I will use in my experiment) you can set up a basic sharding scheme.

I describe multi-node setups because, in this environment, it is possible to achieve much better than linear scalability, as I will show below.

Why is this important?

Understanding the scalability of multi-node systems is important for resource planning and for estimating the potential performance gain from adding more nodes. This is especially interesting for cloud deployments.

How is it possible?

I’ve written about how the size of available memory (cache) affects performance. When we add nodes to the deployment, we effectively increase not only CPU cores but also the memory that comes with each node (and extra IO capacity). So with an increasing node count, we also increase the available memory (and cache). As we can see from these graphs, the effect of extra memory can be non-linear (and actually better than linear). Exploiting this fact, we can achieve better-than-linear scaling in a sharded setup. I am going to show an experimental setup of how to achieve this.

Experimental setup

To show the sharded setup we will use ProxySQL in front of N MySQL servers (shards). We also will use sysbench with 60 tables (4 million rows each, uniform distribution).

  • For one shard, that shard contains all 60 tables
  • For two shards, each shard contains 30 tables
  • For three shards, each shard contains 20 tables
  • For six shards, each shard contains ten tables

So schematically, it looks like this:

One shard: [diagram: all 60 tables on a single server]

Two shards: [diagram: 30 tables per shard]

Six shards: [diagram: 10 tables per shard]

We want to measure how the performance (for both throughput and latency) changes when we go from 1 to 2, to 3, to 4, to 5 and to 6 shards.

For the single shard, I used a Google Cloud instance with eight virtual CPUs and 16GB of RAM, where 10GB is allocated for the innodb_buffer_pool_size.

The database size (for all 60 tables) is about 51GB for the data, and 7GB for indexes.
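
A rough back-of-the-envelope view of where the super-linear gain comes from (my own arithmetic, using the sizes above): each added shard brings another 10GB of buffer pool against the same 51GB of data, so the cached fraction grows quickly:

1 shard : 10GB / 51GB ≈ 20% of data cached
3 shards: 30GB / 51GB ≈ 59% of data cached
5 shards: 50GB / 51GB ≈ 98% of data cached (essentially fully in memory)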

For this we will use a sysbench read-only uniform workload, with ProxySQL performing the query routing. We define the sharding with ProxySQL query rules:

mysql -u admin -padmin -h 127.0.0.1 -P6032 -e "DELETE FROM mysql_query_rules"
shards=$1
for i in {1..60}
do
  hg=$(( $i % $shards + 1 ))
  # Route queries that reference table sbtest$i to hostgroup $hg.
  # '\b' is a regex word boundary, so sbtest1 does not also match sbtest10.
  mysql -u admin -padmin -h 127.0.0.1 -P6032 -e "INSERT INTO mysql_query_rules (rule_id,active,username,match_pattern,destination_hostgroup,apply) VALUES ($i,1,'root','sbtest${i}\b',$hg,1);"
done
mysql -u admin -padmin -h 127.0.0.1 -P6032 -e "LOAD MYSQL QUERY RULES TO RUNTIME;"

Command line for sysbench 1.0.4:
sysbench oltp_read_only.lua --mysql-socket=/tmp/proxysql.sock --mysql-user=root --mysql-password=test --tables=60 --table-size=4000000 --threads=60 --report-interval=10 --time=900 --rand-type=pareto run

The results

Nodes   Throughput   Speedup vs. 1 node   Latency, ms
1              245                 1.00        244.88
2              682                 2.78         87.95
3             1659                 6.77         36.16
4             2748                11.22         21.83
5             3384                13.81         17.72
6             3514                14.34         17.07

As we can see, the performance improves by a factor much better than just linearly.

With five nodes, the improvement is 13.81 times compared to the single node.

The 6th node does not add much benefit, as by that point the data practically fits in memory (with five nodes, the total cache size is 50GB, compared to the 51GB data size).

Factors that affect multi-node scaling

How can we model/predict the performance gain? There are multiple factors to take into account: the size of the active working set, the available memory size and (importantly) the distribution of access to the working set (uniform distribution is the best-case scenario, while access concentrated on a single row is the opposite corner case, where speedup is impossible). We also need to keep network speed in mind: if we come close to using all available network bandwidth, it will be impossible to get significant improvement.

Conclusion

In multi-node, auto-scaling, auto-sharding distributed systems, the traditional scalability models do not provide much help. We need to have a better framework to understand how multiple nodes affect performance.

Apr
21
2017
--

Percona Monitoring and Management 1.1.3 is Now Available

Percona announces the release of Percona Monitoring and Management 1.1.3 on April 21, 2017.

For installation instructions, see the Deployment Guide.

This release includes several new graphs in dashboards related to InnoDB and MongoDB operation, as well as smaller fixes and improvements.

New in PMM Server

  • PMM-649: Added the InnoDB Page Splits and InnoDB Page Reorgs graphs to the MySQL InnoDB Metrics Advanced dashboard.
  • Added the following graphs to the MongoDB ReplSet dashboard:
    • Oplog Getmore Time
    • Oplog Operations
    • Oplog Processing Time
    • Oplog Buffered Operations
    • Oplog Buffer Capacity
  • Added descriptions for graphs in the following dashboards:
    • MongoDB Overview
    • MongoDB ReplSet
    • PMM Demo

Changes in PMM Client

  • PMM-491: Improved pmm-admin error messages.
  • PMM-523: Added the --verbose option for pmm-admin add.
  • PMM-592: Added the --force option for pmm-admin stop.
  • PMM-702: Added the db.serverStatus().metrics.repl.executor stats to mongodb_exporter. These new stats will be used for graphs in future releases.
  • PMM-731: Added real-time checks to pmm-admin check-network output.
  • The following commands no longer require connection to PMM Server:
    • pmm-admin start --all
    • pmm-admin stop --all
    • pmm-admin restart --all
    • pmm-admin show-passwords

    NOTE: If you want to start, stop, or restart a specific service, connection to PMM Server is still required.

About Percona Monitoring and Management

Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.

Apr
21
2017
--

Percona Server for MySQL in Docker Swarm with Secrets

This quick post demonstrates using Percona Server for MySQL in Docker Swarm with some new authentication provisioning practices.

Some small changes to the startup script for the Percona-Server container image allow us to specify a file containing the password value to set as our root user’s secret. “Why do we need this functionality,” I hear you cry? When we use an environment variable, it’s not terribly hard to discover the value someone has set as their database root password. Environment variables are not well suited for sensitive data. We preach against leaving important passwords in easy-to-reach places, so moving towards something more secure whilst retaining usability is desirable. I’ll detail the current methods, the problems, and finish off with Docker Secrets, which, in my opinion, is the right direction to be heading.

Environment Variables

I’ll elaborate on the main reason why we would want to change the default behavior. In the documentation for using the MySQL/Percona and MariaDB containers, we are invited to start containers with an environment variable that controls what the instance’s root password is set to upon startup. Let’s demonstrate with the latest official Percona-Server image from Percona’s repository of images on the Docker Hub registry:

moore@chom:~$ docker pull percona/percona-server:latest
latest: Pulling from percona/percona-server
e12c678537ae: Already exists
65ab4b835640: Pull complete
f63269a127d1: Pull complete
757a4fef28b8: Pull complete
b0cb547a5105: Pull complete
4214179ba9ea: Pull complete
155dafd2fd9c: Pull complete
848020b1da10: Pull complete
771687fe7e8b: Pull complete
Digest: sha256:f3197cac76cccd40c3525891ce16b0e9f6d650ccef76e993ed7a22654dc05b73
Status: Downloaded newer image for percona/percona-server:latest

Then start a container:

moore@chom:~$ docker run -d \
--name percona-server-1 \
-e MYSQL_ROOT_PASSWORD='secret' \
percona/percona-server
d08f299a872f1408c142b58bc2ce8e59004acfdb26dca93d71f5e9367b4f2a57
moore@chom:~$ docker ps
CONTAINER ID        IMAGE                            COMMAND             CREATED             STATUS              PORTS               NAMES
d08f299a872f        percona/percona-server           "/entrypoint.sh "   32 seconds ago      Up 31 seconds       3306/tcp            percona-server-1

Looks good, eh? Let’s inspect this container a little closer to reveal why this method is flawed:

moore@chom:~$ docker inspect --format '{{ index (index .Config.Env) 0}}' percona-server-1
MYSQL_ROOT_PASSWORD=secret

*facepalm*

We don’t want the root password exposed here, not really. If we wanted to use this method in docker-compose files, we would also be storing passwords inline, which isn’t considered a secure practice.

Environment File

Another approach is to use an environment file. This is simply a file that we can provide to docker run or docker-compose in order to instantiate the environment variables within the container. It’s a construct for convenience. So, just to illustrate that we have the same problem, the next example uses the mechanism of an environment file for our database container:

moore@chom:~$ echo 'MYSQL_ROOT_PASSWORD=secret' > /tmp/ps.env
moore@chom:~$ docker run -d --name percona-server-2 --env-file=/tmp/ps.env percona/percona-server
d5105d044673bd5912e0e29c2f56fa37c5f174d9d2a4811ceaba284092837c84
moore@chom:~$ docker inspect --format '{{ index (index .Config.Env) 0}}' percona-server-2
MYSQL_ROOT_PASSWORD=secret
NOTE: Shortly after starting, this container failed because we didn't provide MySQL root password options.

While we’re not specifying it in our docker run command or our docker-compose.yml file, the password value remains on our filesystem within the environment file. Again, not ideal.

Password File

Using a password file obscures the password from the inspect output. Let’s roll through the steps we would use to leverage this new option. With our new Percona-Server image, we’re going to start a container, but first let’s create an arbitrary file containing our desired password:

moore@chom:~$ docker:cloud> echo "secret" > /tmp/mysql_pwd_file

Now start a container where we’re going to bind mount the file, and use our new environment variable to point to it:

moore@chom:~$ docker run -v /tmp/mysql_pwd_file:/tmp/mysqlpwd --name percona-secret \
-e MYSQL_ROOT_PASSWORD_FILE=/tmp/mysqlpwd percona/percona-server:latest

With the same inspect command, let’s show that there’s no snooping on our password value:

moore@chom:~$ docker inspect --format '{{ index (index .Config.Env) 0}}' percona-secret
MYSQL_ROOT_PASSWORD_FILE=/tmp/mysqlpwd

We are revealing the path where our password was read from within the container. For those eagle-eyed readers, this file was just a bind mounted file in the docker run command, and it’s still on the host’s filesystem.

moore@chom:~$ cat /tmp/mysql_pwd_file
secret
moore@chom:~$ docker exec percona-secret cat /tmp/mysqlpwd
secret

Not perfect, because we need to have that file available on all of our Docker hosts, but it works and we’re closer to a more robust solution.

Docker Secrets

The main reason for the new environment variable is to leverage the Docker Secrets feature. Available since Docker version 1.13 (17.03 is now GA), Docker Secrets is currently only available within the Docker Swarm workflow. If you’re not already working with Docker Swarm mode, I can’t recommend it enough. It’s part of Docker Engine, simple to get started with, and since 1.13 it is compatible with docker-compose files. You don’t need a cluster of hardware; it’s entirely valid to use Docker Swarm on a single node. This allows you to test on your local environment with ease.

I won’t waste pixels explaining what’s already well documented in official channels, but in summary: Docker secrets is a new feature that allows us to keep sensitive information out of source code and configuration files. Secrets are stored in the Raft log which is encrypted and replicated throughout the Docker Swarm cluster. The protection and distribution come for free out of the box, which is a fantastic feature if you ask me.

So, let’s create a Docker Secret. Please note that I’ve moved to my Docker Swarm installation for this next part:

moore@chom:~$ docker:cloud> docker info | egrep -i 'swarm|version'
Server Version: 17.03.0-ce
Swarm: active

Operating as a swarm manager, we have the ability to create a new secret to serve as our root user’s password:

moore@chom:~$ docker:cloud> echo "{secret_string}" | docker secret create mysql_root_password -
ugd8dx0kae9hbyt4opbolukgi

We can list all of our existing secrets:

moore@chom:~$ docker:cloud> docker secret ls
ID                          NAME                  CREATED                  UPDATED
ugd8dx0kae9hbyt4opbolukgi   mysql_root_password   Less than a second ago   Less than a second ago

Now that our secret has been created, it’s obscured from us. We are unable to see its value:

moore@chom:~$ docker secret inspect mysql_root_password
[
    {
        "ID": "ugd8dx0kae9hbyt4opbolukgi",
        "Version": {
            "Index": 905780
        },
        "CreatedAt": "2017-04-11T23:33:08.118037434Z",
        "UpdatedAt": "2017-04-11T23:33:08.118037434Z",
        "Spec": {
            "Name": "mysql_root_password"
        }
    }
]

Now we can use our secret to set our authentication for the MySQL instance by doing the following:

moore@chom:~$ docker service create \
--name percona-secret \
--secret mysql_root_password \
-e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password \
percona/percona-server:latest

You can see that instead of docker run, I’ve issued the swarm equivalent, docker service create, which is going to start a new Percona-Server container in the scope of my Swarm workflow. I’m also using the --secret option to tell docker to mount my secret in the container, where it ends up as a file under the path /run/secrets/{secret_name}. The final point here: I’m passing MYSQL_ROOT_PASSWORD_FILE=/path/to/secret as an environment variable to let the startup script know where to find the file with my secret value for the root password. Once the startup routine has completed and the container has started successfully, I can connect to my container to test that the password was set correctly:

moore@chom:~$ docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
397bdf9b75f9        percona/percona-server   "/entrypoint.sh "   46 seconds ago      Up 44 seconds       3306/tcp            percona-secret.1.9xvbneset9363dr5xv4fqqxua
moore@chom:~$ docker exec -ti 397bdf9b75f9 bash
mysql@397bdf9b75f9:/$ cat /run/secrets/mysql_root_password
{secret_string}
mysql@397bdf9b75f9:/$ mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.17-11 Percona Server (GPL), Release '11', Revision 'f60191c'
Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

The secret can be shared around any container where it’s necessary, simply by telling Docker to use the secret when instantiating a container. For example, if I wanted to start an application container such as a WordPress instance, I can use a secret object to easily share credentials to the data source safely and consistently.
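
As an illustration, a hypothetical WordPress service could consume the same secret (a sketch, assuming the official wordpress image and its _FILE environment variable convention; the service name and variable values here are examples only):

moore@chom:~$ docker service create \
--name wordpress-app \
--secret mysql_root_password \
-e WORDPRESS_DB_HOST=percona-secret \
-e WORDPRESS_DB_USER=root \
-e WORDPRESS_DB_PASSWORD_FILE=/run/secrets/mysql_root_password \
wordpress:latest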

This method is totally viable for other forms of sensitive data. For example, I can generate SSL certificates and use Docker secrets to add them to my containers for encrypted replication or forcing secure logins from remote clients. I’m still thinking of all the possible use cases for this option and no doubt will have some more to share with you in the near future.

Please share your comments, suggestions and corrections in the comments below. Thank you for reading.

Apr
21
2017
--

Simplified Percona XtraDB Cluster SSL Configuration

Percona XtraDB Cluster SST Traffic Encryption

In this blog post, we’ll look at a feature recently added to Percona XtraDB Cluster 5.7.16 that makes it easier to configure Percona XtraDB Cluster SSL for all related communications. It uses mode “encrypt=4”, and configures SSL for both IST/Galera communications and SST communications using the same SSL files. “encrypt=4” is a new encryption mode added in Percona XtraDB Cluster 5.7.16 (we’ll cover it in a later blog post).

If this option is used, it overrides all other Galera/SST SSL-related file options. This ensures that a consistent configuration is applied.
Using this option also means that the Galera/SST communications use the same keys as client connections (which you can verify, as sketched below).
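
Because client connections share the same certificates, a quick sanity check (my own addition, not part of the original walkthrough) is to confirm that a client session is actually encrypted; a non-empty cipher means TLS is in use (the exact cipher will vary):

mysql> SHOW STATUS LIKE 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+
1 row in set (0.00 sec)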

Example

This example shows how to start up a cluster using this option. We will use the default SSL files created by the bootstrap node. Basically, there are two steps:

  1. Set pxc_encrypt_cluster_traffic=ON on all nodes
  2. Ensure that all nodes share the same SSL files

Step 1: Configuration (on all nodes)

We enable the pxc_encrypt_cluster_traffic option in the configuration files on all nodes. The default value of this option is “OFF”, so we enable it here.

[mysqld]
 pxc_encrypt_cluster_traffic=ON

Step 2: Startup the bootstrap node

After initializing and starting up the bootstrap node, the datadir will contain the necessary data files. Here is some SSL-related log output:

[Note] Auto generated SSL certificates are placed in data directory.
 [Warning] CA certificate ca.pem is self signed.
 [Note] Auto generated RSA key files are placed in data directory.

The required files are ca.pem, server-cert.pem and server-key.pem, which are the Certificate Authority (CA) file, the server certificate and the server private key, respectively.

Step 3: Copy the SSL files to all other nodes

Galera views the cluster as a set of homogeneous nodes, so the same configuration is expected on all nodes. Therefore, we have to copy the CA file, the server’s certificate and the server’s private key. By default, MySQL names these: ca.pem, server-cert.pem, and server-key.pem, respectively.
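
A simple way to do this is with scp (a sketch; adjust the datadir path and target host to your environment, and make sure the files end up owned by the mysql user on the receiving node):

scp /var/lib/mysql/ca.pem /var/lib/mysql/server-cert.pem /var/lib/mysql/server-key.pem \
    root@node2:/var/lib/mysql/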

Step 4: Startup the other nodes

This is some log output showing that the SSL certificate files have been found. The other nodes should be using the files that were created on the bootstrap node.

[Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
[Note] Skipping generation of SSL certificates as certificate files are present in data directory.
[Warning] CA certificate ca.pem is self signed.
[Note] Skipping generation of RSA key pair as key files are present in data directory.

This is some log output (with log_error_verbosity=3) showing the SST reporting on the configuration used:

WSREP_SST: [DEBUG] pxc_encrypt_cluster_traffic is enabled, using PXC auto-ssl configuration
WSREP_SST: [DEBUG] with encrypt=4 ssl_ca=/my/data//ca.pem ssl_cert=/my/data//server-cert.pem ssl_key=/my/data//server-key.pem

Customization

The “ssl-ca”, “ssl-cert”, and “ssl-key” options in the “[mysqld]” section can be used to specify the location of the SSL files. If these are not specified, then the datadir is searched (using the default names of “ca.pem”, “server-cert.pem” and “server-key.pem”).

[mysqld]
 pxc_encrypt_cluster_traffic=ON
 ssl-ca=/path/to/ca.pem
 ssl-cert=/path/to/server-cert.pem
 ssl-key=/path/to/server-key.pem

If you want to implement this yourself, the equivalent configuration file options are:

[mysqld]
wsrep_provider_options="socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem"
[sst]
encrypt=4
ssl-ca=ca.pem
ssl-cert=server-cert.pem
ssl-key=server-key.pem

How it works

  1. Determine the location of the SSL files
    1. Use the values explicitly specified (via the “ssl-ca”, “ssl-cert” and “ssl-key” options in the “[mysqld]” section).
    2. If the SSL file options are not specified, look in the data directory for files named “ca.pem”, “server-cert.pem” and “server-key.pem” for the CA file, the server certificate, and the server key, respectively.
  2. Modify the configuration
    1. Override the values for socket.ssl_ca, socket.ssl_cert, and socket.ssl_key in wsrep_provider_options in the “[mysqld]” section.
    2. Set “encrypt=4” in the “[sst]” section.
    3. Override the values for ssl-ca, ssl-cert and ssl-key in the “[sst]” section.

This is not a dynamic setting; it is applied only at startup.

Apr
21
2017
--

How to Setup and Troubleshoot Percona PAM with LDAP for External Authentication

In this blog, we’ll look at how to set up and troubleshoot the Percona PAM authentication plugin.

We occasionally get requests from our support clients on how to get Percona Server for MySQL to authenticate with an external authentication service via LDAP or Active Directory. However, we normally do not have access to the client’s infrastructure to help troubleshoot these cases. To help them effectively, we need to set up a testbed to reproduce their issues and guide them on how to get authentication to work. Fortunately, we only need to install Samba to provide an external authentication service for both LDAP and AD.

In this article, I will show you how to (a) compile and install Samba, (b) create a domain environment with Samba, (c) add users and groups to this domain and (d) get Percona Server to use these accounts for authentication via LDAP. In my follow-up article, I will discuss how to get MySQL to authenticate credentials with Active Directory.

My testbed environment consists of two machines:

Samba PDC
OS: CentOS 7
IP Address: 172.16.0.10
Hostname: samba-10.example.com
Domain name: EXAMPLE.COM
DNS: 8.8.8.8(Google DNS), 8.8.4.4(Google DNS), 172.16.0.10(Samba)
Firewall: none

Percona Server 5.7 with LDAP authentication
OS: CentOS 7
IP Address: 172.16.0.20
Hostname: ps-ldap-20.example.com

and has several users and groups:

Domain Groups and Users
Support: jericho, jervin and vishal
DBA: sidd, paul and arunjith
Search: ldap

Compile and Install Samba

We will install an NTP client on the Samba PDC/samba-10.example.com machine because time synchronization is a requirement for domain authentication. We will also compile and install Samba from source because the Samba implementation in the official repository doesn’t include the Active Directory Domain Controller role. Hence, samba-tool is not included in the official repository. For our testbed, we need this tool because it makes it easier to provision a domain and manage users and groups. So, for CentOS 7, you can either build from source or use a trusted 3rd party build of Samba (as discussed in Samba’s wiki).

For more information, please read Setting up Samba as an Active Directory Domain Controller as well.

  1. Install, configure, and run the NTP client. Ensure that this client service runs when the server boots up:
[root@samba-10 ~]# yum -y install ntp
* * *
Installed:
  ntp.x86_64 0:4.2.6p5-25.el7.centos.1
Dependency Installed:
  autogen-libopts.x86_64 0:5.18-5.el7                     ntpdate.x86_64 0:4.2.6p5-25.el7.centos.1
[root@samba-10 ~]# ntpdate 0.centos.pool.ntp.org
 7 Apr 06:06:07 ntpdate[9788]: step time server 202.90.132.242 offset 0.807640 sec
[root@samba-10 ~]# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@samba-10 ~]# systemctl start ntpd.service

  2. Install compilers and library dependencies for compiling Samba:
[root@samba-10 ~]# yum -y install gcc perl python-devel gnutls-devel libacl-devel openldap-devel
* * *
Installed:
  gcc.x86_64 0:4.8.5-11.el7  gnutls-devel.x86_64 0:3.3.24-1.el7  libacl-devel.x86_64 0:2.2.51-12.el7  openldap-devel.x86_64 0:2.4.40-13.el7  perl.x86_64 4:5.16.3-291.el7  python-devel.x86_64 0:2.7.5-48.el7
Dependency Installed:
  cpp.x86_64 0:4.8.5-11.el7                            cyrus-sasl.x86_64 0:2.1.26-20.el7_2               cyrus-sasl-devel.x86_64 0:2.1.26-20.el7_2             glibc-devel.x86_64 0:2.17-157.el7_3.1
  glibc-headers.x86_64 0:2.17-157.el7_3.1              gmp-devel.x86_64 1:6.0.0-12.el7_1                 gnutls-c++.x86_64 0:3.3.24-1.el7                      gnutls-dane.x86_64 0:3.3.24-1.el7
  kernel-headers.x86_64 0:3.10.0-514.10.2.el7          ldns.x86_64 0:1.6.16-10.el7                       libattr-devel.x86_64 0:2.4.46-12.el7                  libevent.x86_64 0:2.0.21-4.el7
  libmpc.x86_64 0:1.0.1-3.el7                          libtasn1-devel.x86_64 0:3.8-3.el7                 mpfr.x86_64 0:3.1.1-4.el7                             nettle-devel.x86_64 0:2.7.1-8.el7
  p11-kit-devel.x86_64 0:0.20.7-3.el7                  perl-Carp.noarch 0:1.26-244.el7                   perl-Encode.x86_64 0:2.51-7.el7                       perl-Exporter.noarch 0:5.68-3.el7
  perl-File-Path.noarch 0:2.09-2.el7                   perl-File-Temp.noarch 0:0.23.01-3.el7             perl-Filter.x86_64 0:1.49-3.el7                       perl-Getopt-Long.noarch 0:2.40-2.el7
  perl-HTTP-Tiny.noarch 0:0.033-3.el7                  perl-PathTools.x86_64 0:3.40-5.el7                perl-Pod-Escapes.noarch 1:1.04-291.el7                perl-Pod-Perldoc.noarch 0:3.20-4.el7
  perl-Pod-Simple.noarch 1:3.28-4.el7                  perl-Pod-Usage.noarch 0:1.63-3.el7                perl-Scalar-List-Utils.x86_64 0:1.27-248.el7          perl-Socket.x86_64 0:2.010-4.el7
  perl-Storable.x86_64 0:2.45-3.el7                    perl-Text-ParseWords.noarch 0:3.29-4.el7          perl-Time-HiRes.x86_64 4:1.9725-3.el7                 perl-Time-Local.noarch 0:1.2300-2.el7
  perl-constant.noarch 0:1.27-2.el7                    perl-libs.x86_64 4:5.16.3-291.el7                 perl-macros.x86_64 4:5.16.3-291.el7                   perl-parent.noarch 1:0.225-244.el7
  perl-podlators.noarch 0:2.5.1-3.el7                  perl-threads.x86_64 0:1.87-4.el7                  perl-threads-shared.x86_64 0:1.43-6.el7               unbound-libs.x86_64 0:1.4.20-28.el7
  zlib-devel.x86_64 0:1.2.7-17.el7
Complete!

  3. Download, compile and install Samba:
[root@samba-10 ~]# yum -y install wget
* * *
[root@samba-10 ~]# wget https://www.samba.org/samba/ftp/samba-latest.tar.gz
* * *
2017-04-07 06:16:59 (337 KB/s) - 'samba-latest.tar.gz' saved [21097045/21097045]
[root@samba-10 ~]# tar xzf samba-latest.tar.gz
[root@samba-10 ~]# cd samba-4.6.2/
[root@samba-10 samba-4.6.2]# ./configure --prefix=/opt/samba
Checking for program gcc or cc           : /usr/bin/gcc
Checking for program cpp                 : /usr/bin/cpp
Checking for program ar                  : /usr/bin/ar
Checking for program ranlib              : /usr/bin/ranlib
* * *
Checking compiler for PIE support                                                               : yes
Checking compiler for full RELRO support                                                        : yes
Checking if toolchain accepts -fstack-protector                                                 : yes
'configure' finished successfully (39.119s)
[root@samba-10 samba-4.6.2]# make
WAF_MAKE=1 python ./buildtools/bin/waf build
Waf: Entering directory `/root/samba-4.6.2/bin'
symlink: tevent.py -> python/tevent.py
* * *
[3773/3775] Linking default/source3/modules/libvfs_module_acl_xattr.so
[3774/3775] Linking default/source3/modules/libvfs_module_shadow_copy.so
[3775/3775] Linking default/source3/modules/libvfs_module_dirsort.so
Waf: Leaving directory `/root/samba-4.6.2/bin'
'build' finished successfully (6m58.144s)
[root@samba-10 samba-4.6.2]# make install
WAF_MAKE=1 python ./buildtools/bin/waf install
Waf: Entering directory `/root/samba-4.6.2/bin'
* creating /opt/samba/etc
* creating /opt/samba/private
* * *
* installing bin/default/source3/nmbd/nmbd.inst as /opt/samba/sbin/nmbd
* installing bin/default/file_server/libservice_module_s3fs.inst.so as /opt/samba/lib/service/s3fs.so
Waf: Leaving directory `/root/samba-4.6.2/bin'
'install' finished successfully (1m44.377s)

Please take note that when I downloaded Samba, the latest version was 4.6.2. If you have a problem with compiling the latest version of Samba, try using version 4.6.2.

  4. Add the Samba executable paths to the PATH variable so we can call Samba binaries without specifying their absolute paths:
[root@samba-10 samba-4.6.2]# echo "PATH=/opt/samba/sbin:/opt/samba/bin:/usr/sbin:/usr/bin" >> /etc/environment
[root@samba-10 samba-4.6.2]# PATH=/opt/samba/sbin:/opt/samba/bin:/usr/sbin:/usr/bin
[root@samba-10 samba-4.6.2]# which samba-tool
/opt/samba/bin/samba-tool

  5. Set up a systemd unit for Samba and ensure that the service starts automatically on server boot:
[root@samba-10 samba-4.6.2]# echo "[Unit]
Description=Samba PDC
After=syslog.target network.target
[Service]
Type=forking
PIDFile=/opt/samba/var/run/samba.pid
ExecStart=/opt/samba/sbin/samba -D
ExecReload=/usr/bin/kill -HUP $MAINPID
ExecStop=/usr/bin/kill $MAINPID
[Install]
WantedBy=multi-user.target" > /etc/systemd/system/samba.service
[root@samba-10 samba-4.6.2]# systemctl enable samba.service
Created symlink from /etc/systemd/system/multi-user.target.wants/samba.service to /etc/systemd/system/samba.service.

  6. Remove the existing /etc/krb5.conf, because the existing configuration prevents us from provisioning a new domain:
[root@samba-10 samba-4.6.2]# rm -f /etc/krb5.conf
[root@samba-10 samba-4.6.2]# cd
[root@samba-10 ~]#

  7. Done.

Create a domain environment with Samba

  1. To set up a domain, all we need to do is run “samba-tool domain provision” and pass the following details:

Realm: EXAMPLE.COM
Domain: EXAMPLE
Server Role: dc(domain controller)
DNS backend: SAMBA_INTERNAL
DNS forwarder IP address: 8.8.8.8

You will also need to supply the Administrator password. This account is used to join a workstation or server to a domain:

[root@samba-10 ~]# samba-tool domain provision
Realm [EXAMPLE.ORG]: EXAMPLE.COM
 Domain [EXAMPLE]: EXAMPLE
 Server Role (dc, member, standalone) [dc]: dc
 DNS backend (SAMBA_INTERNAL, BIND9_FLATFILE, BIND9_DLZ, NONE) [SAMBA_INTERNAL]: SAMBA_INTERNAL
 DNS forwarder IP address (write 'none' to disable forwarding) [8.8.8.8]: 8.8.8.8
Administrator password:
Retype password:
Looking up IPv4 addresses
Looking up IPv6 addresses
No IPv6 address will be assigned
Setting up secrets.ldb
Setting up the registry
Setting up the privileges database
Setting up idmap db
Setting up SAM db
Setting up sam.ldb partitions and settings
Setting up sam.ldb rootDSE
Pre-loading the Samba 4 and AD schema
Adding DomainDN: DC=example,DC=com
Adding configuration container
Setting up sam.ldb schema
Setting up sam.ldb configuration data
Setting up display specifiers
Modifying display specifiers
Adding users container
Modifying users container
Adding computers container
Modifying computers container
Setting up sam.ldb data
Setting up well known security principals
Setting up sam.ldb users and groups
Setting up self join
Adding DNS accounts
Creating CN=MicrosoftDNS,CN=System,DC=example,DC=com
Creating DomainDnsZones and ForestDnsZones partitions
Populating DomainDnsZones and ForestDnsZones partitions
Setting up sam.ldb rootDSE marking as synchronized
Fixing provision GUIDs
A Kerberos configuration suitable for Samba AD has been generated at /opt/samba/private/krb5.conf
Once the above files are installed, your Samba4 server will be ready to use
Server Role:           active directory domain controller
Hostname:              samba-10
NetBIOS Domain:        EXAMPLE
DNS Domain:            example.com
DOMAIN SID:            S-1-5-21-1337223342-1741564684-602463608

Please take note that if you get the error below, it’s likely due to not removing the existing /etc/krb5.conf before using samba-tool:

ERROR(ldb): uncaught exception - operations error at ../source4/dsdb/samdb/ldb_modules/password_hash.c:2820
  File "/opt/samba/lib64/python2.7/site-packages/samba/netcmd/__init__.py", line 176, in _run
    return self.run(*args, **kwargs)
  File "/opt/samba/lib64/python2.7/site-packages/samba/netcmd/domain.py", line 471, in run
    nosync=ldap_backend_nosync, ldap_dryrun_mode=ldap_dryrun_mode)
  File "/opt/samba/lib64/python2.7/site-packages/samba/provision/__init__.py", line 2175, in provision
    skip_sysvolacl=skip_sysvolacl)
  File "/opt/samba/lib64/python2.7/site-packages/samba/provision/__init__.py", line 1787, in provision_fill
    next_rid=next_rid, dc_rid=dc_rid)
  File "/opt/samba/lib64/python2.7/site-packages/samba/provision/__init__.py", line 1447, in fill_samdb
    "KRBTGTPASS_B64": b64encode(krbtgtpass.encode('utf-16-le'))
  File "/opt/samba/lib64/python2.7/site-packages/samba/provision/common.py", line 55, in setup_add_ldif
    ldb.add_ldif(data, controls)
  File "/opt/samba/lib64/python2.7/site-packages/samba/__init__.py", line 225, in add_ldif
    self.add(msg, controls)

You could also get an error if you entered a simple password for the Administrator account.
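
If the password complexity policy is what tripped you up, you can either choose a more complex Administrator password or relax the domain policy after provisioning (an optional aside; samba-tool supports this):

[root@samba-10 ~]# samba-tool domain passwordsettings set --complexity=off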

  2. Create a symlink to the generated krb5.conf in /etc. This configuration is used to authenticate machines, accounts and services:
[root@samba-10 ~]# ln -s /opt/samba/private/krb5.conf /etc

  3. Start the Samba service:
[root@samba-10 ~]# systemctl start samba.service

  4. Check network ports to see if Samba is running:
[root@samba-10 ~]# yum -y install net-tools
* * *
[root@samba-10 ~]# netstat -tapn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:464             0.0.0.0:*               LISTEN      13296/samba
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      13302/samba
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      875/sshd
tcp        0      0 0.0.0.0:88              0.0.0.0:*               LISTEN      13296/samba
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1327/master
tcp        0      0 0.0.0.0:636             0.0.0.0:*               LISTEN      13294/samba
tcp        0      0 0.0.0.0:445             0.0.0.0:*               LISTEN      13307/smbd
tcp        0      0 0.0.0.0:1024            0.0.0.0:*               LISTEN      13291/samba
tcp        0      0 0.0.0.0:1025            0.0.0.0:*               LISTEN      13291/samba
tcp        0      0 0.0.0.0:3268            0.0.0.0:*               LISTEN      13294/samba
tcp        0      0 0.0.0.0:3269            0.0.0.0:*               LISTEN      13294/samba
tcp        0      0 0.0.0.0:389             0.0.0.0:*               LISTEN      13294/samba
tcp        0      0 0.0.0.0:135             0.0.0.0:*               LISTEN      13291/samba
tcp        0      0 0.0.0.0:139             0.0.0.0:*               LISTEN      13307/smbd

  5. Done.

Add users and groups to this domain

Now that Samba is running, we can add users and groups, and assign users to groups, with samba-tool.

  1. Add groups by running “samba-tool group add group_name”:
[root@samba-10 ~]# samba-tool group add support
Added group support
[root@samba-10 ~]# samba-tool group add dba
Added group dba
[root@samba-10 ~]# samba-tool group add search
Added group search

  2. Add users by running “samba-tool user create username”:
[root@samba-10 ~]# samba-tool user create jericho
New Password:
Retype Password:
User 'jericho' created successfully
[root@samba-10 ~]# samba-tool user create jervin
New Password:
Retype Password:
User 'jervin' created successfully
[root@samba-10 ~]# samba-tool user create vishal
New Password:
Retype Password:
User 'vishal' created successfully
[root@samba-10 ~]# samba-tool user create sidd
New Password:
Retype Password:
User 'sidd' created successfully
[root@samba-10 ~]# samba-tool user create paul
New Password:
Retype Password:
User 'paul' created successfully
[root@samba-10 ~]# samba-tool user create arunjith
New Password:
Retype Password:
User 'arunjith' created successfully
[root@samba-10 ~]# samba-tool user create ldap
New Password:
Retype Password:
User 'ldap' created successfully

  3. Add users to their corresponding groups with “samba-tool group addmembers group_name user,user2,usern”:
[root@samba-10 ~]# samba-tool group addmembers support jericho,jervin,vishal
Added members to group support
[root@samba-10 ~]# samba-tool group addmembers dba sidd,paul,arunjith
Added members to group dba
[root@samba-10 ~]# samba-tool group addmembers search ldap
Added members to group search

  4. Verify that users, groups and memberships exist with the commands “samba-tool user list”, “samba-tool group list” and “samba-tool group listmembers group_name”:
[root@samba-10 ~]# samba-tool user list
Administrator
arunjith
jericho
jervin
krbtgt
vishal
Guest
ldap
paul
sidd
[root@samba-10 ~]# samba-tool group list
Allowed RODC Password Replication Group
Enterprise Read-Only Domain Controllers
Denied RODC Password Replication Group
Pre-Windows 2000 Compatible Access
Windows Authorization Access Group
Certificate Service DCOM Access
Network Configuration Operators
Terminal Server License Servers
Incoming Forest Trust Builders
Read-Only Domain Controllers
Group Policy Creator Owners
Performance Monitor Users
Cryptographic Operators
Distributed COM Users
Performance Log Users
Remote Desktop Users
Account Operators
Event Log Readers
RAS and IAS Servers
Backup Operators
Domain Controllers
Server Operators
Enterprise Admins
Print Operators
Administrators
Domain Computers
Cert Publishers
DnsUpdateProxy
Domain Admins
Domain Guests
Schema Admins
Domain Users
Replicator
IIS_IUSRS
DnsAdmins
Guests
Users
support
search
dba
[root@samba-10 ~]# samba-tool group listmembers support
jervin
jericho
vishal
[root@samba-10 ~]# samba-tool group listmembers dba
arunjith
sidd
paul
[root@samba-10 ~]# samba-tool group listmembers search
ldap

For more information on using samba-tool, just run samba-tool --help.

  5. Done.

How to get Percona Server to use these accounts for authentication via LDAP

We will be using the machine ps-ldap-20.example.com to offer MySQL service with LDAP authentication via Percona PAM. If you’re not familiar with Percona PAM, please have a look at this before moving forward.

At this point, our Samba service is running with users, groups and memberships added, and we can query Samba via LDAP ports 389 and 636. We will configure the server to do LDAP lookups when searching for users and groups. This is necessary because we use the name service to validate group membership. We will then install Percona Server for MySQL and configure our PAM plugin to use nss-pam-ldapd to authenticate against LDAP. Finally, we will test LDAP authentication on Percona Server for MySQL using a regular user and a proxy user.

  1. Install nss-pam-ldapd and nscd. We will use these packages to query the LDAP server from our server:

[root@ps-20 ~]# yum -y install nss-pam-ldapd

  2. Configure nss-pam-ldapd by incorporating our Samba LDAP settings:

[root@ps-20 ~]# echo "uid nslcd
gid ldap
pagesize 1000
referrals off
idle_timelimit 800
filter passwd (&(objectClass=user)(objectClass=person)(!(objectClass=computer)))
map    passwd uid           sAMAccountName
map    passwd uidNumber     objectSid:S-1-5-21-1337223342-1741564684-602463608
map    passwd gidNumber     objectSid:S-1-5-21-1337223342-1741564684-602463608
map    passwd homeDirectory "/home/$cn"
map    passwd gecos         displayName
map    passwd loginShell    "/bin/bash"
filter group (|(objectClass=group)(objectClass=person))
map    group gidNumber      objectSid:S-1-5-21-1337223342-1741564684-602463608
uri ldaps://172.16.0.10
base dc=example,dc=com
tls_reqcert never
binddn cn=ldap,cn=Users,dc=example,dc=com
bindpw MyLdapPasswordDontCopyIt2017" > /etc/nslcd.conf

As you can see above, this config contains the LDAP settings, custom LDAP attribute mappings, and LDAP credentials. The value of objectSid was taken from the “DOMAIN SID” that was generated when I created the new domain, so be sure to use the value of “DOMAIN SID” generated on your end; otherwise, your LDAP queries will not match any records. If you’re authenticating against an existing Windows AD server, you can obtain the value of “DOMAIN SID” by running “Get-ADDomain”. Also, you can take a look at this link to learn more about other nslcd.conf configuration options.
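
Before relying on nslcd, it can be useful to verify the bind DN, password and base DN directly (an optional troubleshooting step of my own; it assumes the openldap-clients package is installed, and LDAPTLS_REQCERT=never mirrors the tls_reqcert never setting above):

[root@ps-20 ~]# yum -y install openldap-clients
[root@ps-20 ~]# LDAPTLS_REQCERT=never ldapsearch -H ldaps://172.16.0.10 \
    -D "cn=ldap,cn=Users,dc=example,dc=com" -w MyLdapPasswordDontCopyIt2017 \
    -b "dc=example,dc=com" "(sAMAccountName=jervin)" dn

A single dn line (CN=jervin,CN=Users,DC=example,DC=com) in the output means the credentials and base DN are correct.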

  3. Add LDAP lookups to the nsswitch service by editing /etc/nsswitch.conf:

Find:
passwd: files sss
shadow: files sss
group: files sss

Replace with:
passwd: files sss ldap
shadow: files sss ldap
group: files sss ldap

  4. Run nslcd in debug mode:
[root@ps-20 ~]# nslcd -d
nslcd: DEBUG: add_uri(ldaps://172.16.0.10)
nslcd: DEBUG: ldap_set_option(LDAP_OPT_X_TLS_REQUIRE_CERT,0)
nslcd: version 0.8.13 starting
nslcd: DEBUG: unlink() of /var/run/nslcd/socket failed (ignored): No such file or directory
nslcd: DEBUG: initgroups("nslcd",55) done
nslcd: DEBUG: setgid(55) done
nslcd: DEBUG: setuid(65) done
nslcd: accepting connections

  5. Test that LDAP lookups work by running “id <username>” and “getent passwd” in another terminal:
[root@ps-20 ~]# id jervin
uid=1107(jervin) gid=1107(jervin) groups=1107(jervin),1103(support)
[root@ps-20 ~]# id paul
uid=1110(paul) gid=1110(paul) groups=1110(paul),1104(dba)
[root@ps-20 ~]# getent passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
avahi-autoipd:x:170:170:Avahi IPv4LL Stack:/var/lib/avahi-autoipd:/sbin/nologin
systemd-bus-proxy:x:999:997:systemd Bus Proxy:/:/sbin/nologin
systemd-network:x:998:996:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
polkitd:x:997:995:User for polkitd:/:/sbin/nologin
tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
user:x:1000:1000:user:/home/user:/bin/bash
mysql:x:27:27:Percona Server:/var/lib/mysql:/bin/false
nscd:x:28:28:NSCD Daemon:/:/sbin/nologin
nslcd:x:65:55:LDAP Client User:/:/sbin/nologin
Administrator:*:500:500::/home/Administrator:/bin/bash
arunjith:*:1111:1111::/home/arunjith:/bin/bash
jericho:*:1106:1106::/home/jericho:/bin/bash
jervin:*:1107:1107::/home/jervin:/bin/bash
krbtgt:*:502:502::/home/krbtgt:/bin/bash
vishal:*:1108:1108::/home/vishal:/bin/bash
Guest:*:501:501::/home/Guest:/bin/bash
ldap:*:1112:1112::/home/ldap:/bin/bash
paul:*:1110:1110::/home/paul:/bin/bash
sidd:*:1109:1109::/home/sidd:/bin/bash

If you take a look at the nslcd terminal again, you will see that it’s trying to resolve the user and group identification with LDAP searches:

* * *
nslcd: [7b23c6] <passwd=1107> DEBUG: ldap_simple_bind_s("cn=ldap,cn=Users,dc=example,dc=com","***") (uri="ldaps://172.16.0.10")
nslcd: [7b23c6] <passwd=1107> DEBUG: ldap_result(): CN=jervin,CN=Users,DC=example,DC=com
nslcd: [7b23c6] <passwd=1107> DEBUG: ldap_result(): end of results (1 total)
nslcd: [3c9869] DEBUG: connection from pid=10468 uid=0 gid=0
nslcd: [3c9869] <passwd=1107> DEBUG: myldap_search(base="dc=example,dc=com", filter="(&(&(objectClass=user)(objectClass=person)(!(objectClass=computer)))(objectSid=?1?5?0?0?0?0?0?515?0?0?0ae68b44f?c2bce6778dde8...
* * *
nslcd: [5558ec] <passwd="paul"> DEBUG: myldap_search(base="dc=example,dc=com", filter="(&(&(objectClass=user)(objectClass=person)(!(objectClass=computer)))(sAMAccountName=paul))")
nslcd: [5558ec] <passwd="paul"> DEBUG: ldap_result(): CN=paul,CN=Users,DC=example,DC=com
nslcd: [5558ec] <passwd="paul"> DEBUG: ldap_result(): end of results (1 total)
* * *
nslcd: [e2a9e3] <passwd(all)> DEBUG: myldap_search(base="dc=example,dc=com", filter="(&(objectClass=user)(objectClass=person)(!(objectClass=computer)))")
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=Administrator,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=arunjith,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=jericho,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=jervin,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=krbtgt,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=vishal,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=Guest,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=ldap,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=paul,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): CN=sidd,CN=Users,DC=example,DC=com
nslcd: [e2a9e3] <passwd(all)> DEBUG: ldap_result(): end of results (10 total)

Now that we know nslcd is working, shut it down by pressing Ctrl-C.

  6. Run nslcd normally and make sure it starts up on boot:
[root@ps-20 ~]# systemctl start nslcd.service
[root@ps-20 ~]# systemctl enable nslcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/nslcd.service to /usr/lib/systemd/system/nslcd.service.

  7. Install and run Percona Server for MySQL 5.7 and make sure it runs when the server boots up:
[root@ps-20 ~]# rpm -Uvh https://www.percona.com/redir/downloads/percona-release/redhat/percona-release-0.1-4.noarch.rpm
Retrieving https://www.percona.com/redir/downloads/percona-release/redhat/percona-release-0.1-4.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:percona-release-0.1-4            ################################# [100%]
[root@ps-20 ~]# yum -y install Percona-Server-server-57
* * *
[root@ps-20 ~]# mysqld --initialize-insecure --user=mysql
[root@ps-20 ~]# systemctl start mysqld.service
[root@ps-20 ~]# systemctl enable mysqld.service
Created symlink from /etc/systemd/system/mysql.service to /usr/lib/systemd/system/mysqld.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/mysqld.service to /usr/lib/systemd/system/mysqld.service.

  1. Login to MySQL and change the root password:
[root@ps-20 ~]# mysql -uroot
mysql> SET PASSWORD=PASSWORD('MyNewAndImprovedPassword');
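
Note that the SET PASSWORD = PASSWORD('...') syntax is deprecated in MySQL 5.7; the ALTER USER form does the same job if you prefer it:

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewAndImprovedPassword';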

  1. Install the Percona PAM plugin:
mysql> delete from mysql.user where user='';
Query OK, 0 rows affected (0.00 sec)
mysql> INSTALL PLUGIN auth_pam SONAME 'auth_pam.so';
Query OK, 0 rows affected (0.01 sec)
mysql> INSTALL PLUGIN auth_pam_compat SONAME 'auth_pam_compat.so';
Query OK, 0 rows affected (0.00 sec)

  1. Configure Percona PAM to authenticate to LDAP by creating /etc/pam.d/mysqld with this content:
auth required pam_ldap.so
account required pam_ldap.so
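
Before wiring this into MySQL, you can validate the PAM stack on its own with pamtester (packaged in EPEL, so that repository may need to be enabled first; jervin is the same LDAP test user as above):

[root@ps-20 ~]# yum -y install pamtester
[root@ps-20 ~]# pamtester mysqld jervin authenticate
Password:
pamtester: successfully authenticated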

  1. Create a MySQL user that will authenticate via auth_pam:
mysql> CREATE USER jervin@'%' IDENTIFIED WITH auth_pam;
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON support.* TO jervin@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

  1. Login as this user and check grants:
[root@ps-20 ~]# mysql -u jervin
Password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 22
Server version: 5.7.17-13 Percona Server (GPL), Release 13, Revision fd33d43
Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> SHOW GRANTS;
+-----------------------------------------------------+
| Grants for jervin@%                                 |
+-----------------------------------------------------+
| GRANT USAGE ON *.* TO 'jervin'@'%'                  |
| GRANT ALL PRIVILEGES ON `support`.* TO 'jervin'@'%' |
+-----------------------------------------------------+
2 rows in set (0.00 sec)

It works! However, if you have 100 support users who share the same MySQL privileges, creating 100 individual MySQL accounts is tedious and hard to maintain. If group membership should determine MySQL privileges, set up proxy users instead, mapping each user to the privileges defined for its group. We will implement this for both dba and support users in the next step.

For now, delete the user we just created:

mysql> DROP USER jervin@'%';
Query OK, 0 rows affected (0.00 sec)

  1. Create proxy user and proxied accounts:
mysql> CREATE USER ''@'' IDENTIFIED WITH auth_pam as 'mysqld,support=support_users,dba=dba_users';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER support_users@'%' IDENTIFIED BY 'some_password';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER dba_users@'%' IDENTIFIED BY 'some_password';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON support.* TO support_users@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO dba_users@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT PROXY ON support_users@'%' TO ''@'';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT PROXY ON dba_users@'%' TO ''@'';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
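
If you ever need to audit which proxy grants are in place, the mysql.proxies_priv table records them (a quick check; your output may also include a default row for root):

mysql> SELECT Host, User, Proxied_host, Proxied_user FROM mysql.proxies_priv;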

To know more about setting up proxy users, see this article written by Stephane.

  1. Let’s try logging in as “jericho” and “paul” and see if they inherit the privileges of their group.
[root@ps-20 ~]# mysql -ujericho -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 25
Server version: 5.7.17-13 Percona Server (GPL), Release 13, Revision fd33d43
Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> SELECT user(), current_user(), @@proxy_user;
+-------------------+-----------------+--------------+
| user()            | current_user()  | @@proxy_user |
+-------------------+-----------------+--------------+
| jericho@localhost | support_users@% | ''@''        |
+-------------------+-----------------+--------------+
1 row in set (0.00 sec)
mysql> SHOW GRANTS;
+------------------------------------------------------------+
| Grants for support_users@%                                 |
+------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'support_users'@'%'                  |
| GRANT ALL PRIVILEGES ON `support`.* TO 'support_users'@'%' |
+------------------------------------------------------------+
2 rows in set (0.00 sec)
mysql> quit
Bye
[root@ps-20 ~]# mysql -upaul -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 27
Server version: 5.7.17-13 Percona Server (GPL), Release 13, Revision fd33d43
Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> SELECT user(), current_user(), @@proxy_user;
+----------------+----------------+--------------+
| user()         | current_user() | @@proxy_user |
+----------------+----------------+--------------+
| paul@localhost | dba_users@%    | ''@''        |
+----------------+----------------+--------------+
1 row in set (0.00 sec)
mysql> SHOW GRANTS;
+------------------------------------------------+
| Grants for dba_users@%                         |
+------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'dba_users'@'%' |
+------------------------------------------------+
1 row in set (0.00 sec)

As you can see, they did inherit the MySQL privileges of their groups.

  1. Done.

Conclusion

To be honest, setting up Percona PAM with LDAP can be challenging when you add this functionality to existing infrastructure. But hopefully, by setting it up in a lab environment from scratch and running some tests, you'll be confident enough to incorporate this feature into production environments.

Apr
20
2017
--

Tracking IST Progress in Percona XtraDB Cluster

In this blog post, we'll look at how Percona XtraDB Cluster uses IST.

Introduction

Percona XtraDB Cluster uses the concept of an Incremental State Transfer (IST). When a node of the cluster leaves the cluster for a short period of time, it can rejoin the cluster by getting the delta set of missing changes from any active node in the cluster.

This process of getting the delta set of changes is named as IST in Percona XtraDB Cluster.

Tracking IST Progress

The number of write-sets/changes that the joining node needs to catch up on when rejoining the cluster is dictated by:

  1. The duration the node was not present in the cluster
  2. The workload of the cluster during that time frame

This catch-up process can be time-consuming. Until this process is complete, the rejoining node is not ready to process any active workloads.

We believe that any process that is time-consuming should have a progress monitor attached to it. This is exactly what we have done.

In the latest release of Percona XtraDB Cluster 5.7.17-29.20, we added an IST progress monitor that is exposed through SHOW STATUS. It lets you monitor the percentage of write-sets that the rejoining node has applied.

Let’s see this in a working example:

  • Start a two-node cluster
  • Process some basic workloads, allow cluster replication
  • Shutdown node-2
  • Node-1 then continues to process more workloads (the workload fits the allocated gcache)
  • Restart Node-2, causing it to trigger an IST
mysql> show status like 'wsrep_ist_receive_status';
+--------------------------+--------------------------------------------------------+
| Variable_name            | Value                                                  |
+--------------------------+--------------------------------------------------------+
| wsrep_ist_receive_status | 3% complete, received seqno 1421771 of 1415410-1589676 |
+--------------------------+--------------------------------------------------------+
1 row in set (0.00 sec)
....
mysql> show status like 'wsrep_ist_receive_status';
+--------------------------+---------------------------------------------------------+
| Variable_name            | Value                                                   |
+--------------------------+---------------------------------------------------------+
| wsrep_ist_receive_status | 52% complete, received seqno 1506799 of 1415410-1589676 |
+--------------------------+---------------------------------------------------------+
1 row in set (0.00 sec)
....
mysql> show status like 'wsrep_ist_receive_status';
+--------------------------+---------------------------------------------------------+
| Variable_name            | Value                                                   |
+--------------------------+---------------------------------------------------------+
| wsrep_ist_receive_status | 97% complete, received seqno 1585923 of 1415410-1589676 |
+--------------------------+---------------------------------------------------------+
1 row in set (0.00 sec)
mysql> show status like 'wsrep_ist_receive_status';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| wsrep_ist_receive_status |       |
+--------------------------+-------+
1 row in set (0.00 sec)

As you can see, the wsrep_ist_receive_status monitoring string indicates the percentage completed, the most recently received write-set (seqno), and the range of write-sets the IST covers.

Once the IST activity is complete, the variable shows an empty string.
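
If a node has a long catch-up ahead of it, a simple polling loop on the joiner is enough to watch progress (a minimal sketch, assuming the local mysql client can authenticate without prompting):

[root@node-2 ~]# watch -n 5 "mysql -NBe \"SHOW STATUS LIKE 'wsrep_ist_receive_status'\""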

Closing Comments

I hope you enjoy this newly added feature. Percona Engineering would be happy to hear from you about more features that would help you make effective use of Percona XtraDB Cluster. We will do our best to include them in our future plans (based on feasibility).

Note: Special thanks to Kenn Takara and Roel Van de Paar for helping me edit this post.

Apr
20
2017
--

More Trackable Flow Control for Percona XtraDB Cluster

In this blog post, we'll discuss trackable flow control in Percona XtraDB Cluster.

Introduction

Percona XtraDB Cluster has a self-regulating mechanism called Flow Control. This mechanism helps to avoid a situation wherein the weakest/slowest member of the cluster falls significantly behind other members of the cluster.

When a member of a cluster is slow at applying write-sets (while simultaneously continuing to receive write-sets from the cluster group channel), then the incoming/receive queue grows in size. If this queue crosses a set threshold (gcs.fc_limit), the node emits a FLOW_CONTROL message asking other members to slow down or halt processing.
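
The threshold comes from the gcs.fc_limit provider option. Besides changing it dynamically (shown later in this post), you can set it persistently in my.cnf. One possible setting follows, to be tuned against your workload and gcache size; remember that wsrep_provider_options takes all provider settings in a single string:

[mysqld]
wsrep_provider_options="gcs.fc_limit=500;gcs.fc_factor=0.9"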

While this happens transparently for the end-user, it’s important for the cluster administrator to know whether a node is using flow-control. If it is, it can affect overall cluster productivity.

Finding if a node is in flow control

FLOW_CONTROL is not a persistent state. A node enters FLOW_CONTROL once the queue size exceeds the high-end watermark, and leaves it once the queue size falls back below the low-end watermark.

So how can one view the higher and lower watermarks?

Up until now, this was exposed only in the log files. However, starting with Percona XtraDB Cluster 5.7.17-29.20, this is now exposed and available through SHOW STATUS:

mysql> show status like 'wsrep_flow_control_interval';
+-----------------------------+------------+
| Variable_name               | Value      |
+-----------------------------+------------+
| wsrep_flow_control_interval | [ 12, 23 ] |
+-----------------------------+------------+
1 row in set (0.00 sec)
mysql> set global wsrep_provider_options="gcs.fc_limit=100";
Query OK, 0 rows affected (0.00 sec)
mysql> show status like 'wsrep_flow_control_interval';
+-----------------------------+-------------+
| Variable_name               | Value       |
+-----------------------------+-------------+
| wsrep_flow_control_interval | [ 71, 141 ] |
+-----------------------------+-------------+
1 row in set (0.00 sec)
mysql> set global wsrep_provider_options="gcs.fc_factor=0.75";
Query OK, 0 rows affected (0.00 sec)
mysql> show status like 'wsrep_flow_control_interval';
+-----------------------------+--------------+
| Variable_name               | Value        |
+-----------------------------+--------------+
| wsrep_flow_control_interval | [ 106, 141 ] |
+-----------------------------+--------------+
1 row in set (0.01 sec)

As you can see, the wsrep_flow_control_interval status variable emits a range representing the lower and higher watermark. A value set of (12, 23) means that if the incoming queue size is greater than 23, then FLOW_CONTROL is enabled. If the size falls below 12, FLOW_CONTROL is released.
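
The numbers above are consistent with the provider scaling gcs.fc_limit by the square root of the cluster size; a back-of-the-envelope check under that assumption, for this two-node cluster:

# gcs.fc_limit=100, N=2 nodes, gcs.fc_factor=0.5 (the default in this build)
high = fc_limit * sqrt(N) = 100 * 1.414 ≈ 141
low  = high * fc_factor   = 141 * 0.5   ≈ 71     --> [ 71, 141 ]
# after set global wsrep_provider_options="gcs.fc_factor=0.75"
low  = 141 * 0.75         ≈ 106                  --> [ 106, 141 ]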

Still, this does not show us if the node is using FLOW_CONTROL at any given moment.

To address this, we simultaneously introduced a wsrep_flow_control_status variable in the same release. This boolean status variable tells you whether the node is currently in FLOW_CONTROL: it reads ON while the node is employing flow control, and switches back to OFF once the node is out of it:

show status like '%flow%';
.....
| wsrep_flow_control_sent     | 28156        |
| wsrep_flow_control_recv     | 28156        |
| wsrep_flow_control_interval | [ 106, 141 ] |
| wsrep_flow_control_status   | ON           |

Finally, the wsrep_flow_control_sent/recv counters can be used to track FLOW_CONTROL activity. They show an aggregated count of how many times flow control kicked in. You can clear them using FLUSH STATUS.

mysql> show status like '%flow%';
.....
| wsrep_flow_control_sent     | 31491        |
| wsrep_flow_control_recv     | 31491        |
| wsrep_flow_control_interval | [ 106, 141 ] |
| wsrep_flow_control_status   | OFF          |
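
For scripted monitoring, a single statement can pull the flow control state, the counters, and the receive queue depth that the watermarks act on (a sketch; extend the variable list as needed):

mysql> SHOW STATUS WHERE Variable_name IN
    -> ('wsrep_flow_control_status', 'wsrep_flow_control_sent',
    ->  'wsrep_flow_control_recv', 'wsrep_local_recv_queue');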

Takeaway thoughts

We hope you enjoy the new features added to Percona XtraDB Cluster, including flow control tracking!

We welcome feedback from customers (via the usual Customer Support website) and, where possible, from the community (via Percona JIRA and Launchpad) on which features they most want to see. We recently re-engineered Percona XtraDB Cluster for much-improved performance (a request originating from the community), and the new IST progress meter and flow control monitoring were introduced to help solve customer-facing challenges.

If you have a feature you would like to see, let us know!

Note: Special thanks to Kenn Takara and Roel Van de Paar for helping me edit this post.

Apr
20
2017
--

Percona Live Featured Session with Casper Kejlberg-Rasmussen: Placing Databases @ Uber

Welcome to another post in the series of Percona Live featured session blogs! In these blogs, we'll highlight some of the session speakers that will be at this year's Percona Live conference. We'll also discuss how these sessions can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!

In this Percona Live featured session, we’ll meet Casper Kejlberg-Rasmussen, Software Developer at Uber. His session is Placing Databases @ Uber. Uber has many thousands of MySQL databases running inside of Docker containers on thousands of hosts. When deciding exactly which host a database should run on, it is important that you avoid hosts running databases of the same cluster as the one you are placing, and that you avoid placing all databases of a cluster on the same rack or in the same data center.

I had a chance to talk to Casper about Uber and database placement:

Percona: How did you get into database technology? What do you love about it?

Casper: My Ph.D. thesis was in the area of dynamic data structures. During my bachelor's, master's and Ph.D. studies, I took all the algorithms and data structures classes I could. So it was natural for me to also work with databases in my professional career. Databases are a prime example of a very useful dynamic data structure.

Percona: Your talk is called Placing Databases @ Uber. What do you mean by placing databases, and why is it important?

Casper: At Uber, the storage team manages all of Uber's storage offerings. Our main database technology is an in-house NoSQL database called Schemaless. Schemaless builds on top of MySQL (we specifically use our own fork of Percona's MySQL variant, available on GitHub). We have many thousands of databases that run inside of Docker containers. Whenever we need to create a new Schemaless instance for our internal customers, or we need to add capacity to an existing Schemaless instance, we need to place new Docker containers with Percona Server for MySQL running inside. For our Schemaless instances to be reliable, durable and highly available, we need to place databases in at least two different data centers. We want to avoid placing two databases of the same instance on the same rack or the same host. So care needs to be taken when deciding where to place the databases.

Percona: What are some of the conditions that affect where you place databases?

Casper: When we place a database we have to take into account the labels on the hosts we consider. These labels can be which data center or rack the host is part of, or what Clusto (Clusto is an internal hardware management tool we have) pools a host belongs to. This can be a testing, staging or production host, etc. A host also has “relations.” A relation is also a label, but instead of stating facts about the host, a relation states what other databases are running on the host. An example of a relation label is schemaless.instance.mezzanine, which indicates that the host is running a Schemaless database from the Mezzanine instance. Another example is schemaless.cluster.percona-cluster-mezzanine-us1-db01, which indicates that the database is a Schemaless database belonging to the cluster percona-cluster-mezzanine-us1-db01.

Percona: What do you want attendees to take away from your session? Why should they attend?

Casper: I want the attendees to remember that there are three steps when placing a database or any container:

  1. Filter out any host that fails the hard requirements (like not having enough resources) or fails the label or relation requirements (like having databases of the same instance as the one we want to place)
  2. Rank the hosts to select the best one, which can be having a host with the most free space left or having a low number of other databases on it
  3. As time passes and databases consume more resources, we want to relocate databases to other hosts where they fit better (a query-shaped sketch of steps 1 and 2 follows below).
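
Purely as an illustration of steps 1 and 2, the filter-then-rank idea maps naturally onto a query shape (hypothetical hosts and labels tables, not Uber's actual placement system):

mysql> SELECT h.host_id
    ->   FROM hosts h
    ->  WHERE h.free_disk_gb >= 500                   -- step 1: hard requirements
    ->    AND NOT EXISTS (SELECT 1 FROM labels l      -- step 1: relation filter
    ->                     WHERE l.host_id = h.host_id
    ->                       AND l.label = 'schemaless.instance.mezzanine')
    ->  ORDER BY h.db_count ASC, h.free_disk_gb DESC  -- step 2: rank hosts
    ->  LIMIT 1;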

People should attend my session to see how to get good database placements in a simple way.

Percona: What are you most looking forward to at Percona Live 2017?

Casper: I look forward to hearing about other NoSQL solutions, and hearing about different storage engines for MySQL systems. And of course meeting other exciting people from the database community!

Register for Percona Live Data Performance Conference 2017, and see Casper present Placing Databases @ Uber. Use the code FeaturedTalk and receive $100 off the current registration price!

Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community, as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara and the Santa Clara Convention Center.
