Aug
17
2018
--

Percona Server for MySQL 5.6.41-84.1 Is Now Available

Percona announces the release of Percona Server for MySQL 5.6.41-84.1 on August 17, 2018 (downloads are available here and from the Percona Software Repositories). This release merges changes of MySQL 5.6.41, including all the bug fixes in it. Percona Server for MySQL 5.6.41-84.1 is now the current GA release in the 5.6 series. All of Percona’s software is open-source and free.

Bugs Fixed
  • A simple SELECT query on a table with CHARSET=euckr COLLATE=euckr_bin could return different results each time it was executed. Bug fixed #4513 (upstream 91091).
  • Percona Server 5.6.39-83.1 could crash when altering an InnoDB table that has a full-text search index defined. Bug fixed #3851 (upstream 68987).
Other Bugs Fixed
  • #3325 “online upgrade GTID cause binlog damage in high write QPS situation”
  • #3976 “Errors in MTR tests main.variables-big, main.information_schema-big, innodb.innodb_bug14676111”
  • #4506 “Backport fixes from 8.0 for InnoDB memcached Plugin”

Find the release notes for Percona Server for MySQL 5.6.41-84.1 in our online documentation. Report bugs in the Jira bug tracker.

The post Percona Server for MySQL 5.6.41-84.1 Is Now Available appeared first on Percona Database Performance Blog.

Aug
17
2018
--

Percona Server for MySQL 5.5.61-38.13 Is Now Available

Percona announces the release of Percona Server for MySQL 5.5.61-38.13 on August 17, 2018 (downloads are available here and from the Percona Software Repositories). This release merges changes of MySQL 5.5.61, including all the bug fixes in it. Percona Server for MySQL 5.5.61-38.13 is now the current GA release in the 5.5 series. All of Percona’s software is open-source and free.

Bugs Fixed
  • The --innodb-optimize-keys option of the mysqldump utility fails when a column name is used as a prefix of a column name which has the AUTO_INCREMENT attribute. Bug fixed #4524.
Other Bugs Fixed
  • #4566 “stack-use-after-scope in reinit_io_cache()” (upstream 91603)
  • #4581 “stack-use-after-scope in _db_enter_() / mysql_select_db()” (upstream 91604)
  • #4600 “stack-use-after-scope in _db_enter_() / get_upgrade_info_file_name()” (upstream 91617)
  • #3976 “Errors in MTR tests main.variables-big, main.information_schema-big, innodb.innodb_bug14676111”

Find the release notes for Percona Server for MySQL 5.5.61-38.13 in our online documentation. Report bugs in the Jira bug tracker.

The post Percona Server for MySQL 5.5.61-38.13 Is Now Available appeared first on Percona Database Performance Blog.

Aug
17
2018
--

Replication from Percona Server for MySQL to PostgreSQL using pg_chameleon

Replication is one of the well-known features that allows us to build an identical copy of a database. It is supported in almost every RDBMS. The advantages of replication may be huge, especially HA (High Availability) and load balancing. But what if we need to build replication between two heterogeneous databases, like MySQL and PostgreSQL? Can we continuously replicate changes from a MySQL database to a PostgreSQL database? The answer to this question is pg_chameleon.

For replicating continuous changes, pg_chameleon uses the mysql-replication library to pull the row images from MySQL, which are transformed into a jsonb object. A pl/pgsql function in postgres decodes the jsonb and replays the changes into the postgres database. In order to set up this type of replication, your mysql binlog_format must be “ROW”.
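As a rough illustration of that pipeline (a simplified sketch, not pg_chameleon’s actual code; the function name and payload fields here are hypothetical), a row image pulled from the binlog can be serialized into a JSON document suitable for storage in a jsonb column and later replay:

```python
import json

def row_image_to_jsonb(schema, table, event_type, row):
    """Serialize a binlog row image into a JSON payload, similar in
    spirit to what pg_chameleon stores in postgres for later replay."""
    payload = {
        "schema": schema,          # source MySQL schema
        "table": table,            # source MySQL table
        "action": event_type,      # insert / update / delete
        "row": row,                # the row image itself
    }
    return json.dumps(payload, sort_keys=True)

# Example: an INSERT row image captured from the binlog
doc = row_image_to_jsonb(
    "sakila", "emp", "insert",
    {"id": 1, "first_name": "avinash", "last_name": "vallarapu"})
print(doc)
```

On the postgres side, a pl/pgsql function reads documents like this back, inspects the action, and issues the corresponding INSERT, UPDATE, or DELETE against the mapped schema.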

A few points you should know before setting up this tool:

  1. Tables that need to be replicated must have a primary key.
  2. Works for PostgreSQL versions > 9.5 and MySQL > 5.5.
  3. binlog_format must be ROW in order to set up this replication.
  4. Python version must be > 3.3.

When you initialize the replication, pg_chameleon pulls the data from MySQL using the CSV format in slices, to prevent memory overload. This data is flushed to postgres using the COPY command. If COPY fails, it tries INSERT, which may be slow. If INSERT fails, then the row is discarded.
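The fallback strategy described above can be sketched as follows. This is a minimal illustration of the logic, not pg_chameleon’s actual implementation; the stub functions in the usage example stand in for real COPY and INSERT calls against postgres:

```python
def load_slice(rows, copy_fn, insert_fn, discard_log):
    """Flush a slice of rows: try a bulk COPY first; on failure fall
    back to row-by-row INSERT; rows that still fail are discarded."""
    try:
        copy_fn(rows)                    # fast path: COPY the whole slice
        return len(rows)
    except Exception:
        loaded = 0
        for row in rows:
            try:
                insert_fn(row)           # slow path: one INSERT per row
                loaded += 1
            except Exception:
                discard_log.append(row)  # give up on this row only
        return loaded

# Usage with stubs: COPY rejects the slice, one row has a bad key
discarded = []
rows = [{"id": 1}, {"id": None}, {"id": 3}]

def copy_stub(rows):
    raise RuntimeError("COPY rejected the slice")

def insert_stub(row):
    if row["id"] is None:
        raise ValueError("NULL primary key")

print(load_slice(rows, copy_stub, insert_stub, discarded))  # prints 2
print(discarded)  # [{'id': None}]
```

The point of the two-phase approach is throughput: COPY loads an entire slice in one round trip, and the much slower per-row INSERT path is only taken for slices that COPY rejects.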

To replicate changes from mysql, pg_chameleon mimics the behavior of a mysql slave. It creates the schema in postgres, performs the initial data load, connects to the MySQL replication protocol, and stores the row images in a table in postgres. Then, the respective functions in postgres decode those rows and apply the changes. This is similar to storing relay logs in postgres tables and applying them to a postgres schema. You do not have to create a postgres schema using any DDLs. This tool automatically does that for the tables configured for replication. If you need to convert any specific types, you can specify this in the configuration file.

The following is just an exercise that you can experiment with, and implement if it completely satisfies your requirements. We performed these tests on CentOS Linux release 7.4.

Prepare the environment

Set up Percona Server for MySQL

Install Percona Server for MySQL 5.7 and add the appropriate parameters for replication.

In this exercise, I have installed Percona Server for MySQL 5.7 using the Percona YUM repository.

yum install http://www.percona.com/downloads/percona-release/redhat/0.1-6/percona-release-0.1-6.noarch.rpm
yum install Percona-Server-server-57
echo "mysql ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
usermod -s /bin/bash mysql
sudo su - mysql

pg_chameleon requires the following parameters to be set in your my.cnf file (the parameter file of your MySQL server). You may add the following parameters to /etc/my.cnf

binlog_format= ROW
binlog_row_image=FULL
log-bin = mysql-bin
server-id = 1

Now start your MySQL server after adding the above parameters to your my.cnf file.

$ service mysql start

Fetch the temporary root password from mysqld.log, and reset the root password using mysqladmin.

$ grep "temporary" /var/log/mysqld.log
$ mysqladmin -u root -p password 'Secret123!'

Now, connect to your MySQL instance and create sample schema/tables. I have also created an emp table for validation.

$ wget http://downloads.mysql.com/docs/sakila-db.tar.gz
$ tar -xzf sakila-db.tar.gz
$ mysql -uroot -pSecret123! < sakila-db/sakila-schema.sql
$ mysql -uroot -pSecret123! < sakila-db/sakila-data.sql
$ mysql -uroot -pSecret123! sakila -e "create table emp (id int PRIMARY KEY, first_name varchar(20), last_name varchar(20))"

Create a user for configuring replication using pg_chameleon, and give appropriate privileges to the user using the following steps.

$ mysql -uroot -p
create user 'usr_replica'@'%' identified by 'Secret123!';
GRANT ALL ON sakila.* TO 'usr_replica'@'%';
GRANT RELOAD, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'usr_replica'@'%';
FLUSH PRIVILEGES;

While creating the user in your mysql server ('usr_replica'@'%'), you may wish to replace % with the appropriate IP or hostname of the server on which pg_chameleon is running.

Set up PostgreSQL

Install PostgreSQL and start the database instance.

You may use the following steps to install PostgreSQL 10.x

yum install https://yum.postgresql.org/10/redhat/rhel-7.4-x86_64/pgdg-centos10-10-2.noarch.rpm
yum install postgresql10*
su - postgres
$ /usr/pgsql-10/bin/initdb
$ /usr/pgsql-10/bin/pg_ctl -D /var/lib/pgsql/10/data start

As seen in the following log, create a user in PostgreSQL that pg_chameleon can use to write changed data to PostgreSQL. Also create the target database.

postgres=# CREATE USER usr_replica WITH ENCRYPTED PASSWORD 'secret';
CREATE ROLE
postgres=# CREATE DATABASE db_replica WITH OWNER usr_replica;
CREATE DATABASE

Steps to install and setup replication using pg_chameleon

Step 1: In this exercise, I installed Python 3.6 and pg_chameleon 2.0.8 using the following steps. You may skip the Python install steps if you already have the desired Python release. We can create a virtual environment if the OS does not include Python 3.x by default.

yum install gcc openssl-devel bzip2-devel wget
cd /usr/src
wget https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tgz
tar xzf Python-3.6.6.tgz
cd Python-3.6.6
./configure --enable-optimizations
make altinstall
python3.6 -m venv venv
source venv/bin/activate
pip install pip --upgrade
pip install pg_chameleon

Step 2: This tool requires a configuration file to store the source/target server details, and a directory to store the logs. Use the following command to let pg_chameleon create the configuration file template and the respective directories for you.

$ chameleon set_configuration_files

The above command would produce the following output, which shows that it created some directories and a file in the location where you ran the command.

creating directory /var/lib/pgsql/.pg_chameleon
creating directory /var/lib/pgsql/.pg_chameleon/configuration/
creating directory /var/lib/pgsql/.pg_chameleon/logs/
creating directory /var/lib/pgsql/.pg_chameleon/pid/
copying configuration example in /var/lib/pgsql/.pg_chameleon/configuration//config-example.yml

Copy the sample configuration file to another file, let's say, default.yml

$ cd .pg_chameleon/configuration/
$ cp config-example.yml default.yml

Here is how my default.yml file looks after adding all the required parameters. In this file, we can optionally specify the data type conversions, the tables to be skipped from replication, and the DML events that need to be skipped for a selected list of tables.

---
#global settings
pid_dir: '~/.pg_chameleon/pid/'
log_dir: '~/.pg_chameleon/logs/'
log_dest: file
log_level: info
log_days_keep: 10
rollbar_key: ''
rollbar_env: ''
# type_override allows the user to override the default type conversion into a different one.
type_override:
  "tinyint(1)":
    override_to: boolean
    override_tables:
      - "*"
#postgres  destination connection
pg_conn:
  host: "localhost"
  port: "5432"
  user: "usr_replica"
  password: "secret"
  database: "db_replica"
  charset: "utf8"
sources:
  mysql:
    db_conn:
      host: "localhost"
      port: "3306"
      user: "usr_replica"
      password: "Secret123!"
      charset: 'utf8'
      connect_timeout: 10
    schema_mappings:
      sakila: sch_sakila
    limit_tables:
#      - delphis_mediterranea.foo
    skip_tables:
#      - delphis_mediterranea.bar
    grant_select_to:
      - usr_readonly
    lock_timeout: "120s"
    my_server_id: 100
    replica_batch_size: 10000
    replay_max_rows: 10000
    batch_retention: '1 day'
    copy_max_memory: "300M"
    copy_mode: 'file'
    out_dir: /tmp
    sleep_loop: 1
    on_error_replay: continue
    on_error_read: continue
    auto_maintenance: "disabled"
    gtid_enable: No
    type: mysql
    skip_events:
      insert:
#        - delphis_mediterranea.foo #skips inserts on the table delphis_mediterranea.foo
      delete:
#        - delphis_mediterranea #skips deletes on schema delphis_mediterranea
      update:

Step 3: Initialize the replica using this command:

$ chameleon create_replica_schema --debug

The above command creates a schema and nine tables in the PostgreSQL database that you specified in the .pg_chameleon/configuration/default.yml file. These tables are needed to manage replication from source to destination. The same can be observed in the following log.

db_replica=# \dn
  List of schemas
     Name      |    Owner
---------------+-------------
 public        | postgres
 sch_chameleon | target_user
(2 rows)
db_replica=# \dt sch_chameleon.t_*
              List of relations
    Schema     |       Name       | Type  |    Owner
---------------+------------------+-------+-------------
 sch_chameleon | t_batch_events   | table | target_user
 sch_chameleon | t_discarded_rows | table | target_user
 sch_chameleon | t_error_log      | table | target_user
 sch_chameleon | t_last_received  | table | target_user
 sch_chameleon | t_last_replayed  | table | target_user
 sch_chameleon | t_log_replica    | table | target_user
 sch_chameleon | t_replica_batch  | table | target_user
 sch_chameleon | t_replica_tables | table | target_user
 sch_chameleon | t_sources        | table | target_user
(9 rows)

Step 4: Add the source details to pg_chameleon using the following command. Provide the name of the source as specified in the configuration file. In this example, the source name is mysql and the target is the postgres database defined under pg_conn.

$ chameleon add_source --config default --source mysql --debug

Once you run the above command, you should see that the source details are added to the t_sources table.

db_replica=# select * from sch_chameleon.t_sources;
-[ RECORD 1 ]-------+----------------------------------------------
i_id_source | 1
t_source | mysql
jsb_schema_mappings | {"sakila": "sch_sakila"}
enm_status | ready
t_binlog_name |
i_binlog_position |
b_consistent | t
b_paused | f
b_maintenance | f
ts_last_maintenance |
enm_source_type | mysql
v_log_table | {t_log_replica_mysql_1,t_log_replica_mysql_2}
$ chameleon show_status --config default
Source id Source name Type Status Consistent Read lag Last read Replay lag Last replay
----------- ------------- ------ -------- ------------ ---------- ----------- ------------ -------------
1 mysql mysql ready Yes N/A N/A

Step 5: Initialize the replica/slave using the following command. Specify the source from which you are replicating the changes to the PostgreSQL database.

$ chameleon init_replica --config default --source mysql --debug

Initialization involves the following tasks on the MySQL server (source).

1. Flush the tables with read lock
2. Get the master’s coordinates
3. Copy the data
4. Release the locks

The above command creates the target schema in your postgres database automatically.
In the default.yml file, we mentioned the following schema_mappings.

schema_mappings:
  sakila: sch_sakila

So, it created the new schema sch_sakila in the target database db_replica.

db_replica=# \dn
  List of schemas
     Name      |    Owner
---------------+-------------
 public        | postgres
 sch_chameleon | usr_replica
 sch_sakila    | usr_replica
(3 rows)
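The schema mapping itself is just a dictionary lookup. As a trivial, hypothetical illustration (the helper function below is not part of pg_chameleon) of how a replicated table name gets qualified on the postgres side:

```python
# schema_mappings from default.yml: MySQL schema -> postgres schema
schema_mappings = {"sakila": "sch_sakila"}

def map_table(mysql_schema, table):
    """Return the postgres-qualified name for a replicated MySQL table."""
    return "%s.%s" % (schema_mappings[mysql_schema], table)

print(map_table("sakila", "emp"))  # prints sch_sakila.emp
```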

Step 6: Now, start replication using the following command.

$ chameleon start_replica --config default --source mysql

Step 7: Check replication status and any errors using the following commands.

$ chameleon show_status --config default
$ chameleon show_errors

This is how the status looks:

$ chameleon show_status --source mysql
Source id Source name Type Status Consistent Read lag Last read Replay lag Last replay
----------- ------------- ------ -------- ------------ ---------- ----------- ------------ -------------
1 mysql mysql running No N/A N/A
== Schema mappings ==
Origin schema Destination schema
--------------- --------------------
sakila sch_sakila
== Replica status ==
--------------------- ---
Tables not replicated 0
Tables replicated 17
All tables 17
Last maintenance N/A
Next maintenance N/A
Replayed rows
Replayed DDL
Skipped rows

Now, you should see that the changes are continuously getting replicated from MySQL to PostgreSQL.

Step 8: To validate, you may insert a record into the table in MySQL that we created for the purpose of validation, and check that it is replicated to postgres.

$ mysql -u root -pSecret123! -e "INSERT INTO sakila.emp VALUES (1,'avinash','vallarapu')"
mysql: [Warning] Using a password on the command line interface can be insecure.
$ psql -d db_replica -c "select * from sch_sakila.emp"
 id | first_name | last_name
----+------------+-----------
  1 | avinash    | vallarapu
(1 row)

In the above log, we see that the record that was inserted to the MySQL table was replicated to the PostgreSQL table.

You may also add multiple sources for replication to PostgreSQL (target).

Reference: http://www.pgchameleon.org/documents/

Please refer to the above documentation to find out about the many more options that are available with pg_chameleon.

The post Replication from Percona Server for MySQL to PostgreSQL using pg_chameleon appeared first on Percona Database Performance Blog.

Aug
17
2018
--

This Week in Data with Colin Charles 49: MongoDB Conference Opportunities and Serverless Aurora MySQL

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Beyond the MongoDB content that will be at Percona Live Europe 2018, there is also a bit of an agenda for MongoDB Europe 2018, happening on November 8 in London—a day after Percona Live in Frankfurt. I expect you’ll see a diverse set of MongoDB content at Percona Live.

The Percona Live Europe Call for Papers closes TODAY! (Friday August 17, 2018)

From Amazon, there have been some good MySQL changes. You now have access to time delayed replication as a strategy for your High Availability and disaster recovery. This works with versions 5.7.22, 5.6.40 and later. It is worth noting that this isn’t documented as working for MariaDB (yet?). It arrived in MariaDB Server in 10.2.3.

Another MySQL change from Amazon? Aurora Serverless MySQL is now generally available. You can build and run applications without thinking about instances: previously, the database function was not all that focused on serverless. This on-demand auto-scaling serverless Aurora should be fun to use. Only Aurora MySQL 5.6 is supported at the moment and also, be aware that this is not available in all regions yet (e.g. Singapore).

Releases

  • pgmetrics is described as an open-source, zero-dependency, single-binary tool that can collect a lot of information and statistics from a running PostgreSQL server and display it in easy-to-read text format or export it as JSON for scripting.
  • PostgreSQL 10.5, 9.6.10, 9.5.14, 9.4.19, 9.3.24, and 11 Beta 3 have been released; two fixed security vulnerabilities may inspire an upgrade.

Link List

Industry Updates

  • Martin Arrieta (LinkedIn) is now a Site Reliability Engineer at Fastly. Formerly of Pythian and Percona.
  • Ivan Zoratti (LinkedIn) is now Director of Product Management at Neo4j. He was previously on founding teams, was the CTO of MariaDB Corporation (then SkySQL), and is a long time MySQL veteran.

Upcoming Appearances

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.


The post This Week in Data with Colin Charles 49: MongoDB Conference Opportunities and Serverless Aurora MySQL appeared first on Percona Database Performance Blog.

Aug
14
2018
--

Open Source Database Community Blog: The Story So Far

Recently, we initiated a new project, the Open Source Database Community Blog. One way to think of this is as an online, year-round version of the Percona Live conferences. If you have a story to tell, an experience to share, or a lesson to be learned, send it along, as long as it’s related to open source database software, its management, or its application. That’s right. Not just Percona software. Any open source database software of any format.

Unlike Percona Live, though, we are not limited by time or space. All submissions are welcome as long as they follow some simple guidelines.

We have already had some excellent posts, and in case this is news to you, here’s a recap:

You can also read Jean-François’s personal blog (unsolicited, but greatly appreciated) on how the process of getting his post up and running went.

About the posts … and what’s in it for you

All of the writers are giving freely of their time and knowledge. So, if you would just like to read some alternative, independent viewpoints, try the blog. If you want to support the writers with constructive exchanges and comments, that would be great.

If you would like to go a little further and provide some feedback about what you’d like to see via our blog poll, that would be awesome. As a community blog, we want to make sure it hits community interests.

You can also drop me a line if there are things that I’ve missed from the poll.

Also, you should know that while I have the English covered, I am not a technical expert. We don’t want (nor would I get) Percona techs to edit or review these blog posts. That’s not the point of the blog!

So, would you consider being a technical editor maybe? Not for all posts, since many of the writers will want to ‘own’ their content. But there could be some new writers who’d appreciate some back up from a senior tech before going ‘live’ with their posts. Might that tech buddy be you?

There are some more ideas, and I have written more about our vision for the blog here.

If you are tempted to write for this, please get in touch; I would love to hear from you. You do not have to be an expert! Content suitable for all levels of experience is welcome.

The post Open Source Database Community Blog: The Story So Far appeared first on Percona Database Performance Blog.

Aug
13
2018
--

Webinar Tues 8/14: Amazon Migration Service: The Magic Wand to Migrate Away from Your Proprietary Environment to MySQL

Please join Percona’s Solution Engineer, Dimitri Vanoverbeke, as he presents Amazon Migration Service: The Magic Wand to Migrate Away from Your Proprietary Environment to MySQL on Tuesday, August 14th, 2018 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

In this talk, we will learn about the Amazon Migration Tool. The talk will cover the possibilities, potential pitfalls prior to migrating, and a high-level overview of its functionalities.

Register Now

The post Webinar Tues 8/14: Amazon Migration Service: The Magic Wand to Migrate Away from Your Proprietary Environment to MySQL appeared first on Percona Database Performance Blog.

Aug
10
2018
--

This Week in Data with Colin Charles 48: Coinbase Powered by MongoDB and Prometheus Graduates in the CNCF

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The call for submitting a talk to Percona Live Europe 2018 is closing today, and while there may be a short extension, have you already got your talk submitted? I suggest doing so ASAP!

I’m sure many of you have heard of cryptocurrencies, the blockchain, and so on. But how many of you realize that Coinbase, an application that handles cryptocurrency trades, matching book orders, and more, is powered by MongoDB? With the hype and growth in interest in late 2017, Coinbase has had to scale. They gave an excellent talk at MongoDB World, titled MongoDB & Crypto Mania (the video is worth a watch), and they’ve also written a blog post, How we’re scaling our platform for spikes in customer demand. They even went driver hacking (the Ruby driver for MongoDB)!

It is great to see a weekly review of happenings in the Vitess world.

PingCAP and TiDB have been to many Percona Live events to present, and recently hired Morgan Tocker. Morgan has migrated his blog from MySQL to TiDB. Read more about his experience in This blog, now Powered by WordPress + TiDB. It reminds me of the early days of Galera Cluster and showing how Drupal could be powered by it!

Releases

Link List

  • Sys Schema MySQL 5.7+ – blogger from Wipro, focusing on an introduction to the sys schema on MySQL (note: still not available in the MariaDB Server fork).
  • Prometheus Graduates in the CNCF, so is considered a mature project. Criteria for graduation is such that “projects must demonstrate thriving adoption, a documented, structured governance process, and a strong commitment to community sustainability and inclusivity.” Percona benefits from Prometheus in Percona Monitoring & Management (PMM), so we should celebrate this milestone!
  • Replicating from MySQL 8.0 to MySQL 5.7
  • A while ago in this column, we linked to Shlomi Noach’s excellent post on MySQL High Availability at GitHub. We were also introduced to GitHub Load Balancer (GLB), which they ran on top of HAProxy. However back then, GLB wasn’t open; now you can get GLB Director: GLB: GitHub’s open source load balancer. The project describes GLB Director as: “… a Layer 4 load balancer which scales a single IP address across a large number of physical machines while attempting to minimise connection disruption during any change in servers. GLB Director does not replace services like haproxy and nginx, but rather is a layer in front of these services (or any TCP service) that allows them to scale across multiple physical machines without requiring each machine to have unique IP addresses.”
  • F1 Query: Declarative Querying at Scale – a well-written paper.

Upcoming Appearances

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.


The post This Week in Data with Colin Charles 48: Coinbase Powered by MongoDB and Prometheus Graduates in the CNCF appeared first on Percona Database Performance Blog.

Aug
09
2018
--

Lock Down: Enforcing AppArmor with Percona XtraDB Cluster

Enforcing AppArmor with Percona XtraDB Cluster

Recently, I wrote a blog post showing how to enforce SELinux with Percona XtraDB Cluster (PXC). The Linux distributions derived from RedHat use SELinux. There is another major mandatory access control (MAC) system, AppArmor. Ubuntu, for example, installs AppArmor by default. If you are concerned about computer security and use PXC on Ubuntu, you should enforce AppArmor. This post will guide you through the steps of creating a profile for PXC and enabling it. If you don’t want to waste time, you can just grab my profile; it seems to work fine. Adapt it to your environment if you are using non-standard paths. Look at the section “Copy the profile” for how to install it. For the brave, let’s go!

Install the tools

In order to do anything with AppArmor, we need to install the tools. On Ubuntu 18.04, I did:

apt install apparmor-utils

The apparmor-utils package provides the tools we need to generate a skeleton profile and parse the system logs.

Create a skeleton profile

AppArmor is fairly different from SELinux. Instead of attaching security tags to resources, you specify what a given binary can access, and how, in a text file. Also, processes can inherit permissions from their parent. We will only create a profile for the mysqld_safe script and it will cover the mysqld process and the SST scripts as they are executed under it. You create the skeleton profile like this:

root@BlogApparmor2:~# aa-autodep /usr/bin/mysqld_safe
Writing updated profile for /usr/bin/mysqld_safe.

On Ubuntu 18.04, there seems to be a bug. I reported it and apparently I am not the only one with the issue. If you get a “KeyError” error with the above command, try:

root@BlogApparmor2:~# echo "#include <abstractions>" > /etc/apparmor.d/scripts
root@BlogApparmor2:~# aa-autodep /usr/bin/mysqld_safe

The aa-autodep command creates the profile “usr.bin.mysqld_safe” in the /etc/apparmor.d directory. The initial content is:

root@BlogApparmor2:~# cat /etc/apparmor.d/usr.bin.mysqld_safe
# Last Modified: Wed Jul 25 18:56:31 2018
#include <tunables/global>
/usr/bin/mysqld_safe flags=(complain) {
  #include <abstractions/base>
  #include <abstractions/bash>
  /bin/dash ix,
  /lib/x86_64-linux-gnu/ld-*.so mr,
  /usr/bin/mysqld_safe r,
}

I suggest you add, ahead of time, things you know are needed. In my case, I added:

/etc/mysql/** r,
/usr/bin/innobackupex mrix,
/usr/bin/wsrep_sst_xtrabackup-v2 mrix,
/usr/lib/galera3/* r,
/usr/lib/mysql/plugin/* r,
/usr/sbin/mysqld mrix,
/var/log/mysqld.log w,
owner /tmp/** rw,
owner /var/lib/mysql/** rwk,

This will save time on redundant questions later. Those entries are permissions granted to mysqld_safe. For example, “/etc/mysql/** r,” allows reading everything in /etc/mysql and its subdirectories. These lines need to go right after the “/usr/bin/mysqld_safe r,” line. Once done, parse and load the profile with:

root@BlogApparmor2:~# apparmor_parser -r /etc/apparmor.d/usr.bin.mysqld_safe

Get a well behaved SST script

If you read my previous blog post on SELinux, you may recall the wsrep_sst_xtrabackup-v2 script does not behave well, security-wise. The Percona developers have released a fixed version, but it may not be available yet in packaged form. In the meantime, you can download it from GitHub.

Start iterating

My initial thought was to put the profile in complain mode, generate activity and parse the logs with aa-logprof to get entries to add to the profile. Likely there is something I am doing wrong but in complain mode, aa-logprof detects nothing. In order to get something I had to enforce the profile with:

root@BlogApparmor2:~# aa-enforce /etc/apparmor.d/usr.bin.mysqld_safe

Then, I iterated many times—like more than 20—over the following sequence:

  1. rm -rf /var/lib/mysql/* # optional
  2. systemctl start mysql &
  3. tail -f /var/log/mysqld.log /var/log/kern.log
  4. systemctl stop mysql
  5. ps fax | egrep 'mysqld_safe|mysqld' | grep -v grep | awk '{print $1}' | xargs kill -9 # sometimes
  6. aa-logprof
  7. if something was not right, jump back to step 1

See the next section for how to run aa-logprof. Once that sequence worked well, I tried SST (joiner/donor) roles and IST.

Parse the logs with aa-logprof

Now, the interesting part begins, parsing the logs. Simply begin the process with:

root@BlogApparmor2:~#  aa-logprof

and answer the questions. Be careful, I made many mistakes before I got it right, remember I am more a DBA than a Sysadmin. For example, you’ll get questions like:

Profile:  /usr/sbin/mysqld
Path:     /etc/hosts.allow
New Mode: r
Severity: unknown
 [1 - #include <abstractions/lxc/container-base>]
  2 - #include <abstractions/lxc/start-container>
  3 - /etc/hosts.allow r,
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish

AppArmor asks you how it should provide read access to the /etc/hosts.allow file. If you answer right away with “A”, it will add “#include <abstractions/lxc/container-base>” to the profile. With all the dependencies pulled by the lxc-related includes, you basically end up allowing nearly everything. You must first press “3” to get:

Profile:  /usr/sbin/mysqld
Path:     /etc/hosts.allow
New Mode: r
Severity: unknown
  1 - #include <abstractions/lxc/container-base>
  2 - #include <abstractions/lxc/start-container>
 [3 - /etc/hosts.allow r,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish

Notice the “[ ]” have moved to the bottom entry and then, press “A”. You’ll also get questions like:

Profile:  /usr/bin/mysqld_safe
Execute:  /bin/sed
Severity: unknown
(I)nherit / (C)hild / (N)amed / (X) ix On / (D)eny / Abo(r)t / (F)inish

For such a question, my answer is “I” for inherit. After a while, you’ll get through all the questions and you’ll be asked to save the profile:

The following local profiles were changed. Would you like to save them?
 [1 - /usr/bin/mysqld_safe]
(S)ave Changes / Save Selec(t)ed Profile / [(V)iew Changes] / View Changes b/w (C)lean profiles / Abo(r)t
Writing updated profile for /usr/bin/mysqld_safe.

Revise the profile

Do not hesitate to edit the profile if you see, for example, many similar file entries which could be replaced by a “*” or “**”. If you manually modify the profile, you need to parse it to load your changes:

root@BlogApparmor2:~# apparmor_parser -r /etc/apparmor.d/usr.bin.mysqld_safe

Copy the profile

Once you have a server running with AppArmor enforced on PXC, simply copy the profile to the other servers and parse it. For example:

root@BlogApparmor3:~# cd /etc/apparmor.d
root@BlogApparmor3:/etc/apparmor.d# scp ubuntu@10.0.4.76:/etc/apparmor.d/usr.bin.mysqld_safe .
ubuntu@10.0.4.76's password:
usr.bin.mysqld_safe                                   100% 2285     3.0MB/s   00:00
root@BlogApparmor3:/etc/apparmor.d# aa-enforce usr.bin.mysqld_safe
Setting /etc/apparmor.d/usr.bin.mysqld_safe to enforce mode.

You can always verify if the profile is enforced with:

root@BlogApparmor3:/etc/apparmor.d# aa-status
apparmor module is loaded.
42 profiles are loaded.
20 profiles are in enforce mode.
 /sbin/dhclient
 ...
 /usr/bin/mysqld_safe
 ...
 man_groff

Once the profile is enforced, I strongly advise monitoring the log files on a regular basis to see if anything has been overlooked. Similarly, if you encounter strange or unexpected behavior with PXC, get into the habit of checking the logs first; it might save you a lot of frustrating work.
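A denial shows up in the kernel audit log as an apparmor="DENIED" entry, so a simple filter over syslog (or journalctl) catches anything the enforced profile blocks. A minimal sketch, with a sample log line (hypothetical values) standing in for the real log:

```shell
# In practice you would run something like:
#   grep 'apparmor="DENIED"' /var/log/syslog
# or: journalctl -k | grep 'apparmor="DENIED"'
# Sample audit line of the kind AppArmor emits on a denial (values invented):
line='audit: type=1400 apparmor="DENIED" operation="open" profile="/usr/bin/mysqld_safe" name="/etc/hosts.deny" pid=1234 comm="mysqld"'
echo "$line" | grep -o 'apparmor="DENIED"'
```

Any match is a candidate for another aa-logprof pass or a manual profile edit.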

Conclusion

As we have just seen, enabling AppArmor with PXC is not a difficult task; it just requires some patience. AppArmor is an essential component of a layered security approach. It achieves similar goals to the other well-known MAC framework, SELinux. With rising security concerns and the storage of sensitive data in databases, there are compelling reasons to enforce a MAC framework. I hope these two posts will help DBAs and sysadmins configure and enable MAC for PXC.

The post Lock Down: Enforcing AppArmor with Percona XtraDB Cluster appeared first on Percona Database Performance Blog.

Aug
09
2018
--

Webinar Thursday Aug 9: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM

Monitoring Amazon Aurora with PMM

Please join Autodesk’s Senior Database Engineer, Vineet Khanna, and Percona’s Sr. MySQL DBA, Tate McDaniel, as they present Migrating to Aurora and Monitoring with PMM on Thursday, August 9th, 2018, at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Amazon Web Services (AWS) Aurora is one of the most popular cloud-based RDBMS solutions. The main reason for Aurora’s success is that it’s based on the InnoDB storage engine.

In this session, we will talk about how you can efficiently plan for migration to Aurora using Terraform and Percona products and solutions. We will share our Terraform code for launching AWS Aurora clusters, look at tricks for checking data consistency, verify migration paths and effectively monitor the environment using PMM.

The topics in this session include:

  • Why AWS Aurora? What is the future of AWS Aurora?
  • Build Aurora infrastructure:
    • using Terraform (without data)
    • restore using Terraform & Percona XtraBackup (using an AWS S3 bucket)
  • Verify data consistency
  • Aurora migration:
    • 1:1 migration
    • many:1 migration using Percona Server multi-source replication
  • Show benchmarks and PMM dashboard
  • Demo

Register Now

 

Vineet Khanna, Senior Database Engineer, Autodesk

Vineet Khanna, Senior Database Engineer at Autodesk, has 10+ years of experience as a MySQL DBA. His main professional interests are managing complex database environments, improving database performance, and architecting high availability solutions for MySQL. He has handled database environments for organizations such as Chegg, Zendesk, and Adobe.

 

Tate McDaniel, Sr. MySQL DBA

Tate joined Percona in June 2017 as a Remote MySQL DBA. He holds a Bachelor’s degree in Information Systems and Decision Strategies from LSU. He has 10+ years of experience working with MySQL and operations management. His great love is application query tuning. In his off time, he races sailboats, travels the Caribbean by sailboat, and drives all over in an RV.

The post Webinar Thursday Aug 9: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM appeared first on Percona Database Performance Blog.

Aug
09
2018
--

How to Change Settings for PMM Deployed via Docker

change settings for PMM deployed docker

When deployed through Docker, Percona Monitoring and Management (PMM) uses environment variables for its configuration.

For example, if you want to adjust the metrics resolution, you can pass -e METRICS_RESOLUTION=Ns as an option to the docker run command:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=2s \
  percona/pmm-server:latest

You would think that to change a setting for an existing installation, you could just stop the container with docker stop and then pass the new environment variable when starting it again with docker start. Unfortunately, this does not work: docker start does not support changing environment variables, at least not at the time of writing. I assume the idea is to keep containers immutable; if you want a container with different properties, such as environment variables, you should run a new container instead. Here’s how.
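Before recreating the container, it can be useful to check which environment variables it was originally started with. docker inspect can extract them (pmm-server is the container name used in the commands below); its output is plain KEY=value lines, so they filter the same way as any text stream. A minimal sketch:

```shell
# With Docker available, the real command would be:
#   docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' pmm-server
# The output is plain KEY=value lines; filtering works the same on any such stream
# (sample values shown here so the pipeline is self-contained):
printf 'METRICS_RESOLUTION=2s\nPATH=/usr/local/bin\n' | grep '^METRICS_RESOLUTION='
```

This tells you exactly which -e options to carry over (or change) when you run the replacement container.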

Stop and rename the old container, just in case you want to go back

docker stop pmm-server
docker rename pmm-server pmm-server-old

Refresh the container with the latest version

docker pull percona/pmm-server:latest

Do not miss this step!  When you destroy and recreate the container, all the updates you have made through the PMM web interface will be lost. What’s more, the software version will be reset to the one in the Docker image. Running an old PMM version with a data volume modified by a new PMM version may cause unpredictable results. This could include data loss.

Run the container with the new settings, for example changing METRICS_RESOLUTION

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=5s \
  percona/pmm-server:latest

After you’re happy with the new container deployment, you can remove the old container

docker rm pmm-server-old

That’s it! You should now be running the latest PMM version with your updated configuration settings.

The post How to Change Settings for PMM Deployed via Docker appeared first on Percona Database Performance Blog.
