Mar 18, 2019

PostgreSQL Upgrade Using pg_dumpall


There are several approaches to consider when you need to upgrade PostgreSQL. In this blog post, we look at upgrading a PostgreSQL database using pg_dumpall. Since this tool can also be used to back up PostgreSQL clusters, it is a valid option for upgrading a cluster too. We consider the advantages and disadvantages of this approach, and show you the steps needed to achieve the upgrade.

This is the first post in our Upgrading or Migrating Your Legacy PostgreSQL to Newer PostgreSQL Versions series, in which we’ll be exploring different paths to accomplish a PostgreSQL upgrade or migration. The series will culminate with a practical webinar to be aired April 17th (you can register here).

We begin this journey with the most straightforward way to carry out a PostgreSQL upgrade or migration: rebuilding the entire database from a logical backup.

Defining the scope

Let’s define what we mean by upgrading or migrating PostgreSQL using pg_dumpall.

If you need to perform a PostgreSQL upgrade within the same database server, we’d call that an in-place upgrade, or just an upgrade. A procedure that involves migrating your PostgreSQL server from one server to another, combined with an upgrade from an older version (let’s say 9.3) to a newer version of PostgreSQL (say PG 11.2), can be considered a migration.

There are two ways to achieve this using logical backups:

  1. Using pg_dumpall
  2. Using pg_dumpall + pg_dump + pg_restore

We’ll be discussing the first option (pg_dumpall) here, and will leave the discussion of the second option for our next post.

pg_dumpall

pg_dumpall can be used to obtain a text-format dump of the whole database cluster, including all databases in the cluster. It is also the only method that can be used to back up globals such as users and roles in PostgreSQL.
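
If you only need those globals (for example, to combine them with per-database dumps), pg_dumpall can produce a globals-only dump with its -g flag. A minimal sketch, using the PostgreSQL 11 binaries from the examples below:

    -- Dump only global objects (roles and tablespaces), not the databases themselves.
    $ /usr/pgsql-11/bin/pg_dumpall -g > /tmp/globals.sql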

There are, of course, advantages and disadvantages in employing this approach to upgrading PostgreSQL by rebuilding the database cluster using pg_dumpall.

Advantages of using pg_dumpall for upgrading a PostgreSQL server:

  1. Works well for a tiny database cluster.
  2. Upgrade can be completed using just a few commands.
  3. Removes bloat from all the tables and shrinks the tables to their absolute sizes.

Disadvantages of using pg_dumpall for upgrading a PostgreSQL server:

  1. Not the best option for huge databases (several GBs or TBs), as it may involve longer downtime.
  2. Cannot run in parallel; the backup and restore each use a single process.
  3. Requires double the space on disk as it involves temporarily creating a copy of the database cluster for an in-place upgrade.

Let’s look at the steps involved in performing an upgrade using pg_dumpall:

  1. Install new PostgreSQL binaries in the target server (which could be the same one as the source database server if it is an in-place upgrade).

    -- For a RedHat family OS
    # yum install postgresql11*
    Or
    -- In an Ubuntu/Debian OS
    # apt install postgresql11
  2. Shut down all writes to the database server to avoid data loss or mismatch between the old and new versions after the upgrade.
  3. If you are doing an upgrade within the same server, create a cluster using the new binaries on a new data directory and start it using a port other than the source. For example, if the older version of PostgreSQL is running on port 5432, start the new cluster on port 5433. If you are upgrading and migrating the database to a different server, create a new cluster using the new binaries on the target server – the cluster need not run on a port other than the default, unless that’s your preference.

    $ /usr/pgsql-11/bin/initdb -D new_data_directory
    $ cd new_data_directory
    $ echo "port = 5433" >> postgresql.auto.conf
    $ /usr/pgsql-11/bin/pg_ctl -D new_data_directory start
  4. You might have a few extensions installed in the old version of the PostgreSQL cluster. Get the list of all the extensions created in the source database server and install them for the new version; you can exclude those you get with the contrib module by default. To see the list of extensions created and installed in your database server, you can run the following command (a combined sketch covering this step and step 7 follows the list).

    $ psql -d dbname -c "\dx"

    Please make sure to check all the databases in the cluster as the extensions you see in one database may not match the list of those created in another database.

  5. Prepare a postgresql.conf file for the new cluster. Prepare it carefully by reviewing the existing configuration file of the older version PostgreSQL server.
  6. Use pg_dumpall to take a cluster backup and restore it to the new cluster.

    -- Command to dump the whole cluster to a file.
    $ /usr/pgsql-11/bin/pg_dumpall > /tmp/dumpall.sql
    -- Command to restore the dump file to the new cluster (assuming it is running on port 5433 of the same server).
    $ /usr/pgsql-11/bin/psql -p 5433 -f /tmp/dumpall.sql

    Note that I have used pg_dumpall from the new binaries to take the backup.
    Another, easier, way is to use a pipe to avoid the time involved in creating a dump file. Just add a hostname if you are performing an upgrade and migration.

    $ pg_dumpall -p 5432 | psql -p 5433
    Or
    $ pg_dumpall -p 5432 -h source_server | psql -p 5433 -h target_server
  7. Run ANALYZE to update the statistics of each database on the new server (see the sketch after this list).
  8. Restart the database server using the same port as the source.
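
For reference, here is a minimal shell sketch covering steps 4 and 7 above: it lists the extensions installed in every database of the source cluster (running on port 5432) and then refreshes the optimizer statistics on the new cluster (running on port 5433). Adjust paths, ports and connection options to match your environment.

    -- Step 4: list the extensions created in every database of the source cluster.
    $ for db in $(psql -p 5432 -At -c "SELECT datname FROM pg_database WHERE datallowconn"); do
          echo "== $db =="
          psql -p 5432 -d "$db" -c "\dx"
      done
    -- Step 7: update the statistics of all databases in the new cluster.
    $ /usr/pgsql-11/bin/vacuumdb -p 5433 --all --analyze-in-stages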

Our next post in this series provides a similar way of upgrading your PostgreSQL server, while at the same time offering some flexibility to carry out changes like the ones described above. Stay tuned!


Image based on photo by Sergio Ortega on Unsplash

Mar 04, 2019

PostgreSQL Webinar Wed April 17 – Upgrading or Migrating Your Legacy PostgreSQL to Newer PostgreSQL Versions


A date for your diary. On Wednesday, April 17 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4), Percona’s PostgreSQL Support Technical Lead, Avinash Vallarapu, and Senior Support Engineers Fernando Laudares, Jobin Augustine and Nickolay Ihalainen will demonstrate the upgrade of a legacy version of PostgreSQL to a newer version, using built-in as well as open source tools. In the lead-up to the live webinar, we’ll be publishing a series of five blog posts that will help you to understand the solutions available to perform a PostgreSQL upgrade.

Register Now

Synopsis

Are you stuck with an application that is using an older version of PostgreSQL which is no longer supported? Are you looking for the methods available to upgrade a legacy PostgreSQL cluster (< PostgreSQL 9.3)? Are you searching for solutions that could upgrade your PostgreSQL with minimal downtime? Are you afraid that your application may not work with the latest PostgreSQL versions because it was built on a legacy version a few years ago? Do you want to confirm that you are doing your PostgreSQL upgrades the right way? Do you think that you need to buy an enterprise license to minimize the downtime involved in upgrades?

Then we suggest you register for our webinar, which should answer most of your questions about PostgreSQL upgrades.

This webinar starts with a list of solutions that are built into PostgreSQL to help us upgrade a legacy version of PostgreSQL with minimal downtime. The advantages of choosing such methods will also be discussed. You’ll also see a list of prerequisites for each solution, reducing the scope for possible mistakes. It’s important to minimize downtime when upgrading from an older version of PostgreSQL server. Therefore, we will present three open source solutions that will help us either minimize or completely avoid downtime.

Our presentation will show the full process of upgrading a set of PostgreSQL servers to the latest available version. Furthermore, we’ll show the pros and cons for each of the methods we employed.

The webinar programme

Topics covered in this webinar will include:

  1. PostgreSQL upgrade using pg_dump/pg_restore (with downtime)
  2. PostgreSQL upgrade using pg_dumpall (with downtime)
  3. Continuous replication from a legacy PostgreSQL version to a newer version using Slony.
  4. Replication between major PostgreSQL versions using Logical Replication.
  5. Fast upgrade of legacy PostgreSQL with minimum downtime.

In the 45-minute session, we’ll walk you through the methods and demonstrate those you may find useful in your database environment. We’ll see how simple and quick it is to perform the upgrade using our approach.

Register Now


Image adapted from Photo by Magda Ehlers from Pexels

Aug 20, 2018

Webinar Tues 8/21: MariaDB 10.3 vs. MySQL 8.0


Please join Percona’s Chief Evangelist, Colin Charles, on Tuesday, August 21st, 2018, as he presents MariaDB 10.3 vs. MySQL 8.0 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

Are they syntactically similar? Where do these two languages differ? Why would I use one over the other?

MariaDB 10.3 is on the path of gradually diverging from MySQL 8.0. One obvious example is the internal data dictionary currently under development for MySQL 8.0. This is a major change to the way metadata is stored and used within the server: MariaDB doesn’t have an equivalent feature. Implementing this feature could mark the end of datafile-level compatibility between MySQL and MariaDB.

There are also non-technical differences between MySQL 8.0 and MariaDB 10.3, including:

  • Licensing: MySQL offers their code as open-source under the GPL, and provides the option of non-GPL commercial distribution in the form of MySQL Enterprise. MariaDB can only use the GPL because their work is derived from the MySQL source code under the terms of that license.
  • Support services: Oracle provides technical support, training, certification and consulting for MySQL, while MariaDB has their own support services. Some people prefer working with smaller companies, as traditionally it affords them more leverage as a customer.
  • Community contributions: MariaDB touts the fact that they accept more community contributions than Oracle. Part of the reason for this disparity is that developers like to contribute features, bug fixes and other code without a lot of paperwork overhead (and they complain about the Oracle Contributor Agreement). However, MariaDB has its own MariaDB Contributor Agreement — which more or less serves the same purpose.

Colin will take a look at some of the differences between MariaDB 10.3 and MySQL 8.0 and help answer some of the common questions our Database Performance Experts get about the two databases.

Register Now


Aug 07, 2018

Replicating from MySQL 8.0 to MySQL 5.7


In this blog post, we’ll discuss how to set up replication from MySQL 8.0 to MySQL 5.7. There are some situations where this configuration might help. For example, in the case of a MySQL upgrade, it can be useful to have a master running the newer version of MySQL replicating to a slave on the older version, as a rollback plan. Another example is upgrading a master-master replication topology.

Officially, replication is only supported between consecutive major MySQL versions, and only from a lower version master to a higher version slave. Here is an example of a supported scenario:

5.7 master –> 8.0 slave

while the opposite is not supported:

8.0 master –> 5.7 slave

In this blog post, I’ll walk through how to overcome the initial problems and get replication working in this scenario. I’ll also show some errors that can halt the replication if a new feature from MySQL 8 is used.

Here is the initial set up that will be used to build the topology:

slave > select @@version;
+---------------+
| @@version     |
+---------------+
| 5.7.17-log    |
+---------------+
1 row in set (0.00 sec)
master > select @@version;
+-----------+
| @@version |
+-----------+
| 8.0.12    |
+-----------+
1 row in set (0.00 sec)

First, before executing the CHANGE MASTER command, you need to modify the collation on the master server. Otherwise the replication will run into this error:

slave > show slave status\G
                   Last_Errno: 22
                   Last_Error: Error 'Character set '#255' is not a compiled character set and is not specified in the '/opt/percona_server/5.7.17/share/charsets/Index.xml' file' on query. Default database: 'mysql8_1'. Query: 'create database mysql8_1'

This is because the default character set and collation have changed in MySQL 8. According to the documentation:

The default value of the character_set_server and character_set_database system variables has changed from latin1 to utf8mb4.

The default value of the collation_server and collation_database system variables has changed from latin1_swedish_ci to utf8mb4_0900_ai_ci.
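
You can check what each server is currently using before making the change. A small sketch, run on both the 8.0 master and the 5.7 slave:

# Show the current server-level and database-level character set and collation.
$ mysql -e "SELECT @@character_set_server, @@collation_server, @@character_set_database, @@collation_database;"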

Let’s change the collation and the character set to utf8 on MySQL 8 (it is possible to use any option that exists in both versions):

# master my.cnf
[client]
default-character-set=utf8
[mysqld]
character-set-server=utf8
collation-server=utf8_unicode_ci

You need to restart MySQL 8 to apply the changes. Next, after the restart, you have to create a replication user using mysql_native_password. This is because MySQL 8 changed the default authentication plugin to caching_sha2_password, which is not supported by MySQL 5.7. If you try to execute the CHANGE MASTER command with a user using the caching_sha2_password plugin, you will receive the error message below:

Last_IO_Errno: 2059
Last_IO_Error: error connecting to master 'root@127.0.0.1:19025' - retry-time: 60 retries: 1

To create a user using mysql_native_password:

master> CREATE USER 'replica_user'@'%' IDENTIFIED WITH mysql_native_password BY 'repli$cat';
master> GRANT REPLICATION SLAVE ON *.* TO 'replica_user'@'%';
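
If you want to double-check which plugin an account ended up with (for example, when reusing an account that already existed), you can query the grant tables on the master. A quick sketch, using the sandbox master from this post:

# Run on the master: the replication account must show mysql_native_password,
# since a 5.7 slave cannot authenticate against caching_sha2_password.
$ mysql -h 127.0.0.1 -P 19025 -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'replica_user';"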

Finally, we can proceed as usual to build the replication:

master > show master status\G
*************************** 1. row ***************************
File: mysql-bin.000007
Position: 155
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
slave > CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_USER='replica_user', MASTER_PASSWORD='repli$cat',MASTER_PORT=19025, MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=155; start slave;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
Query OK, 0 rows affected (0.00 sec)
# This procedure works with GTIDs too
slave > CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_USER='replica_user', MASTER_PASSWORD='repli$cat',MASTER_PORT=19025,MASTER_AUTO_POSITION = 1 ; start slave;

Checking the replication status:

slave > show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 127.0.0.1
Master_User: replica_user
Master_Port: 19025
Connect_Retry: 60
Master_Log_File: mysql-bin.000007
Read_Master_Log_Pos: 155
Relay_Log_File: mysql-relay.000002
Relay_Log_Pos: 321
Relay_Master_Log_File: mysql-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 155
Relay_Log_Space: 524
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 100
Master_UUID: 00019025-1111-1111-1111-111111111111
Master_Info_File: /home/vinicius.grippa/sandboxes/rsandbox_5_7_17/master/data/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.01 sec)

Executing a quick test to check if the replication is working:

master > create database vinnie;
Query OK, 1 row affected (0.06 sec)

slave > show databases like 'vinnie';
+-------------------+
| Database (vinnie) |
+-------------------+
| vinnie            |
+-------------------+
1 row in set (0.00 sec)

Caveats

Any attempt to use a new feature from MySQL 8, like roles, invisible indexes or caching_sha2_password, will make the replication stop with an error:

master > alter user replica_user identified with caching_sha2_password by 'sekret';
Query OK, 0 rows affected (0.01 sec)

slave > show slave status\G
               Last_SQL_Errno: 1396
               Last_SQL_Error: Error 'Operation ALTER USER failed for 'replica_user'@'%'' on query. Default database: ''. Query: 'ALTER USER 'replica_user'@'%' IDENTIFIED WITH 'caching_sha2_password' AS '$A$005$H	MEDi\"gQ
                        wR{/I/VjlgBIUB08h1jIk4fBzV8kU1J2RTqeqMq8Q2aox0''

Summary

Replicating from MySQL 8 to MySQL 5.7 is possible. In some scenarios (especially upgrades), this might be helpful, but it is not advisable to have a heterogeneous topology because it will be prone to errors and incompatibilities in some cases.



Nov 07, 2016

Updating Percona XtraDB Cluster from 5.6.24-72.2 to 5.6.32-25.17


This blog describes how to upgrade Percona XtraDB Cluster in place from 5.6.24-72.2 to 5.6.32-25.17.

This very hands-on blog is the result of questions coming in such as "can I perform an in-place upgrade for Percona XtraDB Cluster?". We have done these minor upgrades for Percona Managed Services customers running Percona XtraDB Cluster with lots of nodes, and I think it’s feasible to do it smoothly, provided we pay special attention to the specific points I’ll call out. The main concern is that if you have a big dataset, you should avoid SST (which consumes a lot of time if a node rebuild is needed).

Make sure you have all the steps very clear in order to avoid spending too much time when updating packages. The crucial point is the Galera GCache size. If you’re executing this while part of the cluster is online and writes cannot be avoided, first check whether the current GCache configuration is large enough to keep the nodes from falling back to SST while you shut down Percona Server on each node, update the packages, and finally bring Percona Server back online again.

A blog post written by Miguel Angel Nieto provides instructions on how to check the GCache file’s size and make sure it covers all the transactions for the time you need to take the node out. After increasing the size of the GCache, if the rejoining node finds all the missing transactions in the donor’s GCache, it uses IST. If not, it will need to use SST.

You can read more about the difference between IST and SST in the Galera API documentation.
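
As a quick way to see what you are working with, the currently configured gcache.size can be read out of wsrep_provider_options. A minimal sketch, run on any node; enlarging the cache is done through the same option in my.cnf and requires a node restart:

#: check the currently configured Galera cache size
[root@node01 ~]# mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G" | tr ';' '\n' | grep -i 'gcache.size'
#: to enlarge it, set something like the line below under [mysqld] and restart the node
#: wsrep_provider_options="gcache.size=2G"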

Little less talk, little more action…

At this point, we need to update the packages one cluster node at a time. The cluster needs to stay up. I’m going to use a cluster with three nodes. Node 01 is dedicated to writes, while nodes 02 and 03 are dedicated to scaling the cluster’s reads (all are running 5.6.24-72.2). Just for reference, it’s running on CentOS 6.5, and I’m going to use yum, but you can convert that to any other package manager depending on the Linux distro you’re running. This is the list of nodes and the packages we need to update:

#: servers are like below
(writes) node01::192.168.50.11:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)
(reads) node02::192.168.50.12:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)
(reads) node03::192.168.50.13:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)
#: packages currently installed
[vagrant@node02 ~]$ sudo rpm -qa | grep Percona
Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64

Before updating the packages above, make sure you update the XtraBackup package if you have configured the variable wsrep_sst_method as xtrabackup-v2; this avoids the error below:

WSREP_SST: [ERROR] FATAL: The innobackupex version is 2.3.4. Needs xtrabackup-2.3.5 or higher to perform SST (2016102620:47:15.307)
2016-10-26 20:47:15 5227 [ERROR] WSREP: Failed to read 'ready <addr>' from: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.50.12' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '5227'  ''

So, on all three nodes, update percona-xtrabackup to make sure we’re running the latest version:

[root@node02 vagrant]# yum update percona-xtrabackup
Loaded plugins: fastestmirror, versionlock
Determining fastest mirrors
...
--> Running transaction check
---> Package percona-xtrabackup.x86_64 0:2.3.4-1.el6 will be updated
---> Package percona-xtrabackup.x86_64 0:2.3.5-1.el6 will be an update

With that done, take one node at a time out of the cluster, update all the old binaries using yum update, and start mysqld back up online. You don’t need to run mysql_upgrade in this case. When you start mysqld with the newer binaries in place, depending on the size of the configured cache, it’s going to perform either an IST or an SST.

As you’re going to take the node out of rotation and out of the cluster, you don’t need to worry about configuring it as read_only. The best scenario is to do this in a maintenance window, when no one is writing data to the main node. That way you won’t need to worry about SST, as in most cases the dataset is too big (TB++) and the SST can take some hours (an overnight streaming in my experience).

Let’s take out node02 and update the packages:

#: let's take out node02 to update packages
[vagrant@node02 ~]$ sudo /etc/init.d/mysql stop
Shutting down MySQL (Percona XtraDB Cluster).... SUCCESS!
[vagrant@node02 ~]$ sudo yum update Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64 Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64
...
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-devel-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-devel-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.15-1.rhel6 will be updated
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.17-1.rhel6 will be an update
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
#: new packages in place after yum update - here, make sure you run yum clean all before yum update
[root@node02 ~]# rpm -qa | grep Percona
Percona-XtraDB-Cluster-shared-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.17-1.rhel6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-client-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.32-25.17.1.el6.x86_64

Now start node02, knowing that it’s going to join the cluster, but with updated packages:

[root@node02 vagrant]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)...State transfer in progress, setting sleep higher
.. SUCCESS!
#: here you can see that the state transfer was required due to different states from cluster and current node
#: this is gonna test the wsrep_sst_method to make sure it’s working well after updating percona-xtrabackup
#: to latest version available
2016-10-26 21:51:38 3426 [Note] WSREP: State transfer required:
 Group state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52613
 Local state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52611
2016-10-26 21:51:38 3426 [Note] WSREP: New cluster view: global state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52613, view# 2: Primary, number of nodes: 2, my index: 0, protocol version 3
2016-10-26 21:51:38 3426 [Warning] WSREP: Gap in state sequence. Need state transfer.
2016-10-26 21:51:38 3426 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.50.12' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '3426'  '' '
WSREP_SST: [INFO] Streaming with xbstream (20161026 21:51:39.023)
WSREP_SST: [INFO] Using socat as streamer (20161026 21:51:39.025)
WSREP_SST: [INFO] Evaluating timeout -s9 100 socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} )(20161026 21:51:39.100)
2016-10-26 21:51:39 3426 [Note] WSREP: Prepared SST request: xtrabackup-v2|192.168.50.12:4444/xtrabackup_sst//1
...
2016-10-26 21:51:39 3426 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 52613)
2016-10-26 21:51:39 3426 [Note] WSREP: Requesting state transfer: success, donor: 1
WSREP_SST: [INFO] Proceeding with SST (20161026 21:51:39.871)
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (2016102621:51:39.873)
WSREP_SST: [INFO] Cleaning the existing datadir and innodb-data/log directories (20161026 21:51:39.876)
...
WSREP_SST: [INFO] Moving the backup to /var/lib/mysql/ (20161026 21:51:55.826)
WSREP_SST: [INFO] Evaluating innobackupex --defaults-file=/etc/my.cnf  --defaults-group=mysqld --no-version-check  --datadir=/var/lib/mysql/ --move-back --force-non-empty-directories ${DATA} &>${DATA}/innobackup.move.log (2016102621:51:55.829)
WSREP_SST: [INFO] Move successful, removing /var/lib/mysql//.sst (20161026 21:51:55.859)
...
Version: '5.6.32-78.1-56'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release rel78.1,Revision 979409a, WSREP version 25.17, wsrep_25.17
2016-10-26 21:51:56 3426 [Note] WSREP: 0.0 (pxc01): State transfer from 1.0 (pxc01) complete.
2016-10-26 21:51:56 3426 [Note] WSREP: Shifting JOINER -> JOINED (TO: 52613)
2016-10-26 21:51:56 3426 [Note] WSREP: Member 0.0 (pxc01) synced with group.
2016-10-26 21:51:56 3426 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 52613)
2016-10-26 21:51:56 3426 [Note] WSREP: Synchronized with group, ready for connections
2016-10-26 21:51:56 3426 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.

As you can see above, node02 is back in the cluster. Additionally, it’s important to see that both the Percona Server packages and the Galera API packages were updated. When the node is up and part of the cluster, you should see a new API version in the output of a SHOW GLOBAL STATUS LIKE 'wsrep%' command:

#: node02, the one we just updated
[root@node02 mysql]# mysql -e "show global status like 'wsrep_provider_version'\G"
*************************** 1. row ***************************
Variable_name: wsrep_provider_version
        Value: 3.17(r447d194)
#: node01 not updated yet
[root@node01 mysql]# mysql -e "show global status like 'wsrep_provider_version'\G"
*************************** 1. row ***************************
Variable_name: wsrep_provider_version
        Value: 3.15(r5c765eb)

Summarizing the procedure until now, the cluster packages update plan is:
  1. Take nodes out of rotation one at a time
  2. Shut down mysqld on each node in order
  3. Update the below packages (or the ones corresponding to what you’re running):
[vagrant@node02 ~]$ sudo rpm -qa | grep Percona
Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64

  4. Update percona-xtrabackup on all the cluster’s nodes to avoid issues (as explained above):
WSREP_SST: [ERROR] FATAL: The innobackupex version is 2.3.4. Needs xtrabackup-2.3.5 or higher to perform SST (2016102620:47:15.307)
...
[root@node01 ~]# yum update percona-xtrabackup
...
[root@node02 ~]# xtrabackup --version
xtrabackup version 2.3.5 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 45cda89)

  5. Start mysqld back online to grab the cluster’s current state

After finishing each node’s package update, make sure you check the main node to see that all the nodes have rejoined the cluster. On node01, you can run the query below to return the main status variables. This checks the current status of node01 and the cluster size:

mysql> SELECT @@HOSTNAME AS HOST, NOW() AS `DATE`, VARIABLE_NAME, VARIABLE_VALUE FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME IN ('wsrep_cluster_state_uuid','wsrep_cluster_conf_id','wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment')\G
*************************** 1. row ***************************
 HOST: node01
 DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_LOCAL_STATE_COMMENT
VARIABLE_VALUE: Synced
*************************** 2. row ***************************
 HOST: node01
 DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_CONF_ID
VARIABLE_VALUE: 10
*************************** 3. row ***************************
 HOST: node01
 DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_SIZE
VARIABLE_VALUE: 3
*************************** 4. row ***************************
 HOST: node01
 DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_STATE_UUID
VARIABLE_VALUE: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 5. row ***************************
 HOST: node01
 DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_STATUS
VARIABLE_VALUE: Primary
5 rows in set (0.00 sec)

Check the other nodes as well:

#: node02
[root@node02 mysql]# mysql -e "show global status like 'wsrep_local_state%'\G"
*************************** 1. row ***************************
Variable_name: wsrep_local_state_uuid
Value: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 2. row ***************************
Variable_name: wsrep_local_state
Value: 4
*************************** 3. row ***************************
Variable_name: wsrep_local_state_comment
Value: Synced
#: node03
[root@node03 ~]# mysql -e "show global status like 'wsrep_local_state%'\G"
*************************** 1. row ***************************
Variable_name: wsrep_local_state_uuid
Value: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 2. row ***************************
Variable_name: wsrep_local_state
Value: 4
*************************** 3. row ***************************
Variable_name: wsrep_local_state_comment
Value: Synced

Cheers!

Jun 15, 2016

Installing MongoDB 3.2 and upgrading MongoDB replica set


In this post, we’ll examine a couple of ways of upgrading a MongoDB replica set.

With the release of MongoDB 3.2 comes a host of new features and improvements. One of these enhancements is improved replica sets. From MongoDB: "A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments."

Config servers are replica sets!

This is HUGE. It signals a significant advancement in backups, metadata stability and overall maturity. It is a very long-awaited feature that shows MongoDB is maturing. It means:

  • Mongos instances can retry a connection instead of erroring out
  • Unified and consistent backups!
  • Up to 50 secondaries
    • Removes bugs with mongos instances that are not near the config servers!

How do we activate all these new awesome features? Let’s do it!

Upgrading to 3.2

  • Replace binaries and restart one secondary at a time
  • Then primaries as well
  • Restart configs in reverse order
    • If configdb=con1, con2, con3
      • Restart con3, con2, and then finally con1 with 3.2
      • Do con1 as FAST as possible, while the balancer is also disabled
    • You no longer need to restart a mongos with --upgrade (as of 3.2)
    • Restart all mongos; this will reset ALL connections at some point (whether you do it all at once or space it out).

Upgrading the replset to the new protocol

This is by far the easiest upgrade bit, but DON’T do it until you know you’re stable on 3.2. Log into each primary and run:

>cfg = rs.conf();
 {
     "_id" : "r1",
     "version" : 2,
     "members" : [
         {
             "_id" : 0,
             "host" : "localhost:17001"
         },
         {
             "_id" : 1,
             "host" : "localhost:17002"
         },
         {
            "_id" : 2,
            "host" : "localhost:17003",
         }
     ]
}
>cfg.protocolVersion=1;
>rs.reconfig(cfg);
{
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1464947003, 1),
        "electionId" : ObjectId("7fffffff0000000000000018")
    }
}

Or:

>db.getSiblingDB('config').shards.forEach(function(shard){
    x = new Mongo(shard.host); /* Assumes no auth needed */
    conf = x.getDB("local").system.replset.findOne()
    conf.protocolVersion=1; conf.version++;
    x.getDB('admin').runCommand({ replSetReconfig: conf });
});
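
To confirm the change took effect on a given replica set, you can read the protocol version back from its primary. A small sketch, assuming the localhost ports used in the examples above:

$ mongo --quiet --port 17001 --eval 'rs.conf().protocolVersion'
# Expect it to report 1 once the reconfig has been applied on that replica set.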

The quick upgrade scripts

  • Upgrade_all_to_3.2.4.sh

Does what it says: kills every process and launches them on 3.2 binaries with no other changes.

  • Upgrade_replica_proto_version.sh

Simply runs the quick rs.reconfig() on each primary and adds the new settings that enable the new replication features.

Let’s upgrade the configs the right way!

This is not included as part of a normal upgrade so only do this AFTER you’re stable and don’t do it before upgrading the protocolVersion we just talked about. (I mean it! Disregard this advice and your life will seriously not be awesome!)

Upgrading to a Config ReplicaSet (the official way)

  1. Run rs.initiate on the first config in the list (must be 3.2.4+)
    • Must be a fully configured document with configsvr: true defined.
  2. Restart same config server adding
    • configsvrMode = sccc
    • replSet = <name used in rs.initiate()>
    • storageEngine = wiredTiger
  3. Start the new config servers for the other two nodes (should be a new dbpath and port)
  4. Add those nodes to the replSet and check their status
  5. Remove the second original config server from the running
  6. Restart the 1st node you set “sccc” on to not have that setting
  7. At this point, the 1st node will transition to removed if using MMAP.
  8. Restart a mongos with a new configdb line
    • --configdb <replSetName>/node1:port,node2:port,…
    • Only replset members should be listed
  9. Verify you can work and query through mongos
  10. Repeat on all mongos
  11. Remove the 1st node with rs.remove
  12. Shutdown final original config and enable balancer

There is also an easy way.

The easy way, with a small maintenance window, which lets you just restore a good backup and have a nice and simple rollback plan:

  1. Stop all mongos after backing up the config directory
  2. Run rs.initiate on first config server
  3. Stop the 2nd, then the 3rd, restarting them with an empty dbpath directory
  4. Check the rs.status now
  5. Stop the 1st config server and restart with an empty dbpath directory
  6. Check Status
  7. Restart all mongos, adding <replSetName>/ to the front of the configdb line.
  8. Done!

Oh look, there is a quick script we have made for you:

  • Upgrade_config_to_repliaset.sh
    • Kill all  config and mongos processes
    • Restart the first config server on non-standard port
    • Mongodump config database
    • Restart c1 as WiredTiger, clearing that data path
    • Import dump back into first config server
    • Restart on normal port
    • Initialize Replica Set
    • Restart second and third config server after clearing dbpath folder
    • After the initial sync, start all the mongos.
    • Done and script exits!

 
