Aug 09, 2018

How to Change Settings for PMM Deployed via Docker

When deployed through Docker, Percona Monitoring and Management (PMM) uses environment variables for its configuration.

For example, if you want to adjust metrics resolution you can pass -e METRICS_RESOLUTION=Ns as an option to the docker run command:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=2s \
  percona/pmm-server:latest

You might think that to change a setting on an existing installation you could just stop the container with docker stop and then pass a new environment variable when starting it again with docker start. Unfortunately, this does not work: docker start does not support changing environment variables, at least not at the time of writing. I assume the idea is to keep containers immutable; if you want a container with different properties, such as environment variables, you should run a new container instead.
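
For example, if you stop the container and simply start it again, it comes back with the environment it was created with (a quick check, assuming the pmm-server container from the example above):

docker stop pmm-server
docker start pmm-server
docker exec pmm-server env | grep METRICS_RESOLUTION
# still shows METRICS_RESOLUTION=2s, the value the container was created with

So to apply a new setting, the container has to be replaced. Here's how.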

Stop and rename the old container, just in case you want to go back:

docker stop pmm-server
docker rename pmm-server pmm-server-old

Refresh the image to the latest version:

docker pull percona/pmm-server:latest

Do not miss this step! When you destroy and recreate the container, all the updates you have made through the PMM web interface will be lost. What's more, the software version will be reset to the one in the Docker image. Running an old PMM version with a data volume modified by a newer PMM version may cause unpredictable results, including data loss.
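
If you are not sure which image the old container was created from, you can check before recreating it (a quick sanity check against the renamed container from the first step):

docker inspect --format '{{.Config.Image}}' pmm-server-old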

Run a new container with the new settings, for example changing METRICS_RESOLUTION:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=5s \
  percona/pmm-server:latest

Once you're happy with the new container deployment, you can remove the old container:

docker rm pmm-server-old

That's it! You should now be running the latest PMM version with updated configuration settings.

Oct 12, 2016

Encrypt your --defaults-file

This blog post will look at how to use encryption to secure your database credentials.

In the recent blog post Use MySQL Shell Securely from Bash, there are some good examples of how you might avoid using a ~/.my.cnf file, but you still need to put the password on disk in the script. MySQL 5.6.6 and later introduced the --login-path option, which is a handy way to store per-connection entries and keep the credentials in an encrypted format. This is a great improvement, but as shown in Get MySQL Passwords in Plain Text from .mylogin.cnf, it is pretty easy to get that information back out.

Let’s fix this with gpg-agent, mkfifo and a few servings of Bash foo…

If you want to keep prying eyes away from your super secret database credentials, then you really need to encrypt them. Nowadays most people are familiar with GPG (GNU Privacy Guard), but for those of you who aren't, it is a free implementation of the OpenPGP standard that allows you to encrypt and sign your data and communications.

First steps…

Before we can go on to use GPG to encrypt our credentials, we need to get it working. GnuPG comes with almost every *nix operating system, but for this post we’ll be using Ubuntu 16.04 LTS and we’ll presume that it isn’t yet installed.

$ sudo apt-get install gnupg gnupg-agent pinentry-curses
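
It is worth checking which GnuPG version the packages gave you, since the output and agent behaviour differ a little between the 1.x and 2.x series (the transcripts below come from a 1.4.x installation):

$ gpg --version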

Once the packages are installed, there is a little configuration required to make things simpler. We’ll go with some minimal settings just to get you going. First of all, we’ll create our main key:

$ gpg --gen-key
gpg (GnuPG) 1.4.12; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (4096)
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (5y)
Key expires at Tue 05 Oct 2021 23:59:00 BST
Is this correct? (y/N) y
You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
"Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"
Real name: Ceri Williams
Email address: notmyrealaddress@somedomain.com
Comment: Encrypted credentials for MySQL
You selected this USER-ID:
"Ceri Williams (Encrypted credentials for MySQL) <notmyrealaddress@somedomain.com>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

After typing a password and gaining sufficient entropy you will have your first key! You can show your private keys as follows:

$ gpg --list-secret-keys
/home/ceri/.gnupg/secring.gpg
-----------------------------
sec 4096R/C38C02B0 2016-10-06 [expires: 2021-10-05]
uid Ceri Williams (Encrypted credentials for MySQL) <notmyrealaddress@somedomain.com>

We’ll now create our “gpg.conf” in which to keep a few settings. This sets the key that is used by default when encrypting, enables the gpg-agent and removes the copyright message.

$ cat <<EOF > ~/.gnupg/gpg.conf
default-key C38C02B0
use-agent
no-greeting
EOF

Now we’ll add a few settings for “gpg-agent” and allow the key to be saved for one day to reduce the number of times you need to enter a password. Also, as this post concentrates on command line programs, we’ve enabled the ncurses pinentry to specify the password when requested.

$ cat <<EOF > ~/.gnupg/gpg-agent.conf
pinentry-program /usr/bin/pinentry-curses
default-cache-ttl 86400
max-cache-ttl 86400
EOF
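
If gpg-agent was already running before you created this file, it will not pick up the new settings until it is reloaded or restarted. With a reasonably recent GnuPG, something like the following should do it (an assumption; the exact mechanism depends on your GnuPG version):

$ gpg-connect-agent reloadagent /bye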

You can find more information about setting up and using GPG in the GNU Privacy Handbook.

Encrypt your credentials

If all has gone well so far, you should be able to encrypt your first message. Here is a simple example to create armored (ASCII) output for a recipient with key “C38C02B0”:

$ echo hello | gpg -e --armor -r C38C02B0
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1
hQIMA/T3pqGixN5nAQ/+IxmmgoHNVY2IXp7OAQUZZtCw0ayZu/rFotsJBiQcNG4W
J9JZmG78fgPfyF2FD4oVsXDBW7yDzfDSxCcX7LL9z4p33bzUAYOwofRP9+8qJGq/
qob1SclNN4fdFc/PtI7XKYBFYcHlfFeTIH44w9GEGdZlyfDfej+qGTJX+UHrKTo3
DaE2qpb7GvohEnDPX5WM0Pts3cATi3PcH4C9OZ5dgYizmlPB58R2DZl1ioERy2jE
WSIhkZ8ZPW9ezWYDCtFbgFSpgynzYeFRVv1rel8cxZCSYgHOHrUgQM6WdtVFmEjL
ONaRiEA9IcXZXDXaeFezKr2F8PJyaVfmheZDdRTdw54e4R6kPunDeWtD2aCJE4EF
ztyWLgQZ0wNE8UY0PepSu5p0FAENk08xd9xNMCSiCuwmBAorafaO9Q8EnJjHS/w5
aKLJzNzad+8zKq3zgBxHGj1liHmx873Epz5izsH/lK9Jwy6H5qGVB71XuNuRMzNr
ghgHFWNX7Wy8wnBnV6MrenASgtCUY6cGdT7YpPe6pLr8Qj/3QRLdzHDlMi9gGxoS
26emhTi8sIUzQRtQxFKKXyZ43sldtRewHE/k4/ZRXz5N6ST2cSFAcsMyjScS4p2a
JvPvHt4xhn8uRhgiauqd7IqCCSWFrAR4J50AdARmVeucWsbRzIJIEnKW4G/XikvS
QQFOvcdalGWKMpH+mRBkHRjbOgGpB0GeRbuKzhdDvVT+EhhIOG8DphumgI0yDyTo
Ote5sANgTRpr0KunJPgz5pER
=HsSu
-----END PGP MESSAGE-----

Now that we have GPG working, we can secure our credentials and encrypt them to use later on. One of the default files MySQL reads is “~/.my.cnf”, which is where you can store your user credentials for easy command line access.

$ cat <<EOF | gpg --encrypt --armor -r C38C02B0 -o ~/.my.cnf.asc
[client]
user = ceri
password = mysecretpassword
[mysql]
skip-auto-rehash
prompt = "smysql \d> "
EOF
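
Before wiring the file into anything else, it is worth checking that it decrypts cleanly. The first decryption triggers pinentry-curses to ask for your passphrase; thanks to the cache TTL configured above, you will not be asked again for a day:

$ gpg --decrypt ~/.my.cnf.asc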

There you go, everything is nice and secure! But wait, how can anything use this?

Bash foo brings MySQL data to you

Most MySQL and Percona tools will accept the “--defaults-file” argument, which tells the program where to find its configuration. This will allow us to use our encrypted config.

The following script carries out these actions:

  1. Creates a temporary file on disk and then removes it
  2. Creates a FIFO (a socket-like communication channel that requires both ends to be connected)
  3. Decrypts the config to the FIFO in the background
  4. Launches the “mysql” client and reads from the FIFO

#!/bin/bash
set -e
declare -ra ARGS=( "${@}" )
declare -ri ARGV=${#ARGS[@]}
declare -r SEC_MYCNF=$(test -f ${1:-undef} && echo $_ || echo '.my.cnf.asc')
declare -r SEC_FIFO=$(mktemp)
declare -a PASSTHRU=( "${ARGS[@]}" )
test ${ARGV} -gt 0 &&
test -f "${ARGS[0]}" &&
PASSTHRU=( "${ARGS[@]:1}" )
set -u
function cleanup {
  test -e ${SEC_FIFO} && rm -f $_
  return $?
}
function decrypt {
  set +e
  $(which gpg) --batch --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
  test $? -eq 0 || $(which gpg) --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
  set -e
}
function exec_cmd {
  local -r cmd=${1}
  set +u
  ${cmd} --defaults-file=${SEC_FIFO} "${PASSTHRU[@]}"
  set -u
}
trap cleanup EXIT
test -e ${SEC_MYCNF} || exit 1
cleanup && mkfifo ${SEC_FIFO} && decrypt &
exec_cmd /usr/bin/mysql
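
If you save the script as smysql.sh (the name used in the examples below), remember to make it executable:

$ chmod +x smysql.sh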

You can use this script as you would normally with the “mysql” client, and pass your desired arguments. You can also optionally pass a specific encrypted config as the first argument:

$ ./smysql.sh .my.test.asc
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 56
Server version: 5.7.14-8 Percona Server (GPL), Release '8', Revision '1f84ccd'
Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
smysql (none)>

There we go: MySQL access via an encrypted “--defaults-file”, and as long as your key is unlocked in the agent you do not need to enter the password.

But wait . . . what about all of the other tools that you might want to use? Well, with a slight tweak you can make the script a little fancier and get other tools to use the config, too (tools such as mysqladmin, mysqldump, pt-show-grants, pt-table-checksum, etc.). The key part of the next script is the specification of accepted commands (“ALIASES”) and the use of symbolic links to alias the script:

#!/bin/bash
set -e
declare -ra ARGS=( "${@}" )
declare -ri ARGV=${#ARGS[@]}
declare -rA ALIASES=(
 [smysql]=mysql
 [smysqldump]=mysqldump
 [smysqladmin]=mysqladmin
 [spt-show-grants]=pt-show-grants
 [spt-table-checksum]=pt-table-checksum
 [spt-table-sync]=pt-table-sync
 [spt-query-digest]=pt-query-digest
)
declare -r PROGNAME=$(basename ${0})
declare -r SEC_MYCNF=$(test -f ${1:-undef} && echo $_ || echo '.my.gpg')
declare -r SEC_FIFO=$(mktemp)
declare -a PASSTHRU=( "${ARGS[@]}" )
test ${ARGV} -gt 0 &&
test -f "${ARGS[0]}" &&
 PASSTHRU=( "${ARGS[@]:1}" )
set -u
function cleanup {
 test -e ${SEC_FIFO} && rm -f $_
 return $?
}
function decrypt {
 set +e
 $(which gpg) --batch --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
 test $? -eq 0 || $(which gpg) --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
 set -e
}
function check_cmd {
 local k
 local cmd=${1}
 for k in "${!ALIASES[@]}"; do
 test "${cmd}" = ${k} &&
 test -x "$(which ${ALIASES[${k}]})" &&
 echo $_ && return 0
 done
 return 1
}
function exec_cmd {
 local -r cmd=${1}
 set +u
 ${cmd} --defaults-file=${SEC_FIFO} "${PASSTHRU[@]}"
 set -u
}
function usage {
 local realfn=$(realpath ${0})
 cat <<EOS | fold -sw 120
USAGE: $(basename ${0}) enc_file.gpg [--arg=val]
use a GPG-encrypted my.cnf (default: ${SEC_MYCNF})
currently supports:
${ALIASES[@]}
create a symlink to match the alias (real app prefixed with 's')
e.g.
sudo ln -s ${realfn} /usr/local/bin/smysql
sudo ln -s ${realfn} /usr/local/bin/spt-show-grants
EOS
}
trap cleanup EXIT ERR
test -e ${SEC_MYCNF} || { usage; exit 1; }
cmd=$(check_cmd ${PROGNAME})
test $? -eq 0 || { echo ${ALIASES[${PROGNAME}]} is not available; exit 3; }
cleanup && mkfifo ${SEC_FIFO} && decrypt &
exec_cmd ${cmd}

Now we can set up some symlinks so that the script can be called in a way that the correct application is chosen:

$ mkdir -p ~/bin
$ mv smysql.sh ~/bin
$ ln -s ~/bin/smysql.sh ~/bin/smysql
$ ln -s ~/bin/smysql.sh ~/bin/smysqladmin
$ ln -s ~/bin/smysql.sh ~/bin/spt-show-grants

Examples

With some symlinks now in place we can try out some of the tools that we have enabled:

$ ~/bin/smysql -Bsse 'select 1'
1
$ ~/bin/smysqladmin proc
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
| Id | User | Host      | db | Command | Time | State    | Info             | Rows_sent | Rows_examined |
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
| 58 | ceri | localhost |    | Query   | 0    | starting | show processlist | 0         | 0             |
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
$ ~/bin/spt-show-grants --only root@localhost | head -n3
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.7.14-8 at 2016-10-07 01:01:55
-- Grants for 'root'@'localhost'
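
The same pattern extends to the other aliases. For example, a dump through the encrypted config might look like this (“mydb” is a placeholder database name, and here the encrypted file is passed explicitly as the first argument):

$ ln -s ~/bin/smysql.sh ~/bin/smysqldump
$ ~/bin/smysqldump ~/.my.cnf.asc --single-transaction mydb > mydb.sql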

Enjoy some added security in your database environment, on your laptop and even on your Raspberry Pi!

Aug 20, 2013

Amazon RDS with MySQL 5.6 – Configuration Variables

One longstanding complaint I have heard for the past several years, and still hear today, is that Amazon's Relational Database Service (RDS) does not allow the same configuration flexibility as running MySQL on an EC2 instance. While true, this ignores the consistent work Amazon has done to provide access to the most important configuration variables needed to tune a MySQL instance (after all, how relevant is it for a customer to set bind_address on an RDS instance?).

Let’s take a look visually:

[Screenshot: MySQL vs. Amazon RDS configuration variable counts]

MySQL provides 523 options (35 of them NDB-specific, and so not relevant to RDS), while RDS exposes 283 via the web UI, 58 of which are immutable (things like basedir, datadir, and a variety of other variables).

So, what’s missing from the RDS configuration? The system variables can be roughly grouped into the following categories:

  • Audit Logs
  • Memcached Daemon
  • Binary Log Settings
  • Performance Schema
  • Relay Log Settings
  • Semi-Sync Replication
  • SSL
  • Thread Pool
  • Other

Let’s look at the relevance of these individually:

Audit Logs

The audit log plugin is a commercial extension that is not available in the MySQL Community Edition offered by Amazon, so it's not relevant here.

Memcached Daemon

RDS is designed for relational database access, not key-value store access. If you need Memcached functionality, check out Amazon's ElastiCache.

Binary Log Settings

Binary logging is enabled by default on RDS; you just lose the ability to:

  • Use the old version of binary logging (pre-5.6.6)
  • Specify where the binlogs are saved or their base name
  • Control the maximum binary log size

The ability to control the maximum binary log size would be helpful for some workloads, but it isn't something that is generally tuned in the majority of engagements I have been a part of.

Performance Schema

The fact that these configuration parameters are not available via the web UI is a little misleading: it is possible to enable or disable the Performance Schema and then control the collection via SQL as usual.
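
As a minimal sketch (assuming performance_schema is enabled for the instance, and using placeholder endpoint and user names), collection can be adjusted from any client session:

mysql -h <rds-endpoint> -u <master-user> -p -e "
  UPDATE performance_schema.setup_instruments
     SET ENABLED = 'YES', TIMED = 'YES'
   WHERE NAME LIKE 'statement/%';"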

Relay Log Settings

Like the Binary Log settings, there is not much that we would want to tune here. The standard settings are appropriate for general workloads.

Semi-Sync Replication

Amazon RDS has a proprietary failover solution and block-level replication across availability zones. It is not surprising that this functionality is not provided by default in the web UI, but it is certainly something that could be useful for a small cross-section of workloads.

SSL

For companies with strict security needs, the lack of SSL may be a deal breaker for using RDS. But, depending upon the security policies in place, it can be worked around by using Amazon's VPC with SSL. For many companies, though, this may not play a role in the decision process. I find it hard to believe that, with Amazon's resources, providing this is an insurmountable technical challenge. Perhaps we'll see it become available in future RDS releases.

Thread Pool

The thread pool plugin is a commercial extension that is not available in the Community Edition of MySQL, so it is not relevant to what RDS provides. There are, however, thread pool implementations in both Percona Server and MariaDB that Amazon may choose to port in the future.

Conclusion

Amazon still has a ways to go before it exposes the full set of configuration variables, but by and large the important ones are available to customers, with minor exceptions (I'm looking at you, innodb_log_file_size).

I’ll be talking about this topic in more detail, as well as a variety of other RDS 5.6-specific issues, in my upcoming Webinar on August 28 titled “Running MySQL 5.6 on Amazon RDS.”

