Sep
30
2013
--

Join my Oct. 2 webinar: ‘Implementing MySQL and Hadoop for Big Data’

MySQL DBAs know that integrating MySQL and a big data solution can be challenging. That’s why I invite you to join me this Wednesday (Oct. 2) at 10 a.m. Pacific time for a free webinar in which I’ll walk you through how to implement a successful big data strategy with Apache Hadoop and MySQL. This webinar is specifically tailored for MySQL DBAs and developers (or anyone with previous MySQL experience) who want to learn how to use Apache Hadoop together with MySQL for Big Data.

The webinar is titled, “Implementing MySQL and Hadoop for Big Data,” and you can register here.

Storing Big Data in MySQL alone can be challenging:

  • A single MySQL instance may not scale enough to store hundreds of terabytes or even a petabyte of data.
  • “Sharding” MySQL is a common approach; however, it can be hard to implement.
  • Indexes for terabytes of data may be a problem (updating an index of that size can slow down inserts significantly).

Apache Hadoop together with MySQL can solve many big data challenges. In the webinar I will present:

  • An introduction to Apache Hadoop and its components, including HDFS, MapReduce, Hive/Impala, Flume, and Sqoop
  • Common applications for Apache Hadoop
  • How to integrate Hadoop and MySQL using Sqoop and the MySQL Applier for Hadoop (see the sketch after this list)
  • Clickstream log statistical analysis and other examples of big data implementations
  • ETL and ELT process with Hadoop and MySQL
  • Star Schema implementation example for Hadoop
  • Star Schema Benchmark results with Cloudera Impala and columnar storage.
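
To give a taste of the Sqoop integration mentioned in the list above, here is a minimal, illustrative sketch of importing a MySQL table into HDFS; the host, database, table, and target directory names are invented for the example:

# Pull the MySQL table "orders" from the "sales" database into HDFS
sqoop import \
  --connect jdbc:mysql://mysql-host/sales \
  --username bi_user -P \
  --table orders \
  --target-dir /user/hadoop/orders

Sqoop can also push data the other way, from HDFS back into MySQL, with "sqoop export".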

I look forward to the webinar and hope to see you there! If you have questions in advance, please ask them in the comments below.


Sep
30
2013
--

What’s your Percona story?

Percona’s development teams are always delighted to hear about the many ways that engineers and operations teams use our open source software.

Companies say they value the enterprise-grade features and high performance that Percona Server delivers for free and use the software to power critical business applications, hot new apps, and even a growing web host.

We’d love to hear from you. If you use Percona Server or our other software in a production environment, share a quick summary and we’ll list you on our site.

We’re looking forward to hearing from you.


Sep
28
2013
--

Best offer on Ocean of Dust

Happy Saturday everyone!

First off, thanks to all of you for your support since starting my author “career” almost a year ago. I really appreciate your kind words, reviews, comments and spreading the word.

For everyone who has been thinking about reading Ocean of Dust but hasn’t gotten around to it, it is now FREE on Kindle for 5 days only, starting today. You can’t get a better deal than that. Please tell your friends.

Thanks everyone!

Sep
27
2013
--

Experimental GIT Mirrors of Percona XtraBackup, Percona Server plus Oracle MySQL trees

I recently blogged about setting up an experimental Git mirror of the Oracle MySQL trees on GitHub. I’m now happy to announce that there are also mirrors for Percona XtraBackup and Percona Server.

I’ve also updated the Oracle MySQL GIT mirror to include MySQL 5.7 and the (now abandoned) MySQL 6.0. I include the abandoned 6.0 tree as it can provide useful archaeology as to where some code came from.

I’d love to hear about any positive/negative experiences using these mirrors. Hopefully I’ll shortly fix up the last two bits of metadata missing in the transfer: bzr branch names for each commit and the “bugs fixed” by each commit (what’s added with “bzr commit --fixes lp:1234”).


Sep
25
2013
--

Experimental Git mirror of Oracle MySQL trees

I’ve been working on setting up mirrors on GitHub of all our BZR branches. My first efforts at a suitable stage to share are mirrors of the Oracle MySQL trees. This is currently a snapshot of MySQL 5.1, 5.5 and 5.6 with all the tags preserved. I’ve managed to get Git to compact the repository down to a mere 177MB on disk for all of the history, which is rather impressive.

Go check it out: https://github.com/percona/mysql

This should be considered experimental and I may end up pushing up something better at some point soon – i.e. don’t rely on being able to merge later updates (think rebase rather than merge).
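
For instance, to poke around the mirror locally (a minimal sketch; which tags and branches you see is whatever the mirror currently carries):

$ git clone https://github.com/percona/mysql.git
$ cd mysql
$ git tag                  # list the preserved release tags
$ git log --oneline -5     # most recent commits on the default branch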


Sep
25
2013
--

Talking Drupal #016 – Drupal vs WordPress

Show Topics

  • Drupal and WordPress in the marketplace
  • Usage Stats – http://w3techs.com
  • Types of Websites 
  • Workflow of building a website
  • Add ons – modules vs plug-ins
  • Communities 
  • When to consider WordPress

Links 

  • http://w3techs.com
  • http://www.Drupal.org
  • http://wordpress.org/

Hosts

  • Stephen Cross – www.ParallaxInfoTech.com @stephencross
  • Jason Pamental – www.hwdesignco.com @jpamental
  • John Picozzi – www.RubicDesign.com @johnpicozzi 
  • Nic Laflin – www.nLightened.net @nicxvan
  • Adam Silver – kitchensinkwp.com @kitchensinkwp
  • Dustin Hartzler – YourWebsiteEngineer.com @DustinHartzler
Sep
24
2013
--

How to reclaim space in InnoDB when innodb_file_per_table is ON

When innodb_file_per_table is OFF, all data is stored in the ibdata files. If you drop some tables or delete some data, there is no way to reclaim that unused disk space other than the dump/reload method.

When innodb_file_per_table is ON, each table stores its data and indexes in its own tablespace file. However, the shared tablespace (ibdata1) can still grow; you can find more information about why it grows and what the solutions are here:

http://www.mysqlperformanceblog.com/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/

Following the recent blog post from Miguel Angel Nieto titled “Why is the ibdata1 file continuously growing in MySQL?”, and since this is a very common question for Percona Support, this post covers how to reclaim space when you are using innodb_file_per_table. I will also show you how to do it without causing performance or availability problems, with the help of Percona Toolkit.

When you remove rows, they are just marked as deleted on disk. The space they occupied can be reused by InnoDB when you insert or update more rows, but the tablespace file itself will never shrink. This is a very old MySQL bug: http://bugs.mysql.com/bug.php?id=1341
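
Before rebuilding a table, it can be useful to estimate how much space is actually reclaimable. A minimal sketch against the example mydb.test table used below (DATA_FREE is only a rough indicator of reusable space inside the tablespace):

SELECT table_name,
       ROUND(data_length / 1024 / 1024)  AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb,
       ROUND(data_free / 1024 / 1024)    AS free_mb
  FROM information_schema.TABLES
 WHERE table_schema = 'mydb' AND table_name = 'test';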

However, if you are using innodb_file_per_table, you can reclaim the space by running OPTIMIZE TABLE on that table. OPTIMIZE TABLE creates a new, identical, empty table and then copies the data row by row from the old table to the new one. In this process a new .ibd tablespace is created and the space is reclaimed.

mysql> select count(*) From test;
+----------+
| count(*) |
+----------+
|  3145728 |
+----------+
root@nil:/var/lib/mysql/mydb# ll -h
...
-rw-rw---- 1 mysql mysql 168M Sep 5 11:52 test.ibd
mysql> delete from test limit 2000000;
mysql> select count(*) From test;
+----------+
| count(*) |
+----------+
|  1145728 |
+----------+
root@nil:/var/lib/mysql/mydb# ll -h
...
-rw-rw---- 1 mysql mysql 168M Sep 5 11:52 test.ibd

You can see that even after deleting 2 million records, the test.ibd size was still 168M.

mysql> optimize table test;
+-----------+----------+----------+-------------------------------------------------------------------+
| Table     | Op       | Msg_type | Msg_text                                                          |
+-----------+----------+----------+-------------------------------------------------------------------+
| mydb.test | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| mydb.test | optimize | status   | OK                                                                |
+-----------+----------+----------+-------------------------------------------------------------------+
root@nil:/var/lib/mysql/mydb# ll -h
...
-rw-rw---- 1 mysql mysql 68M Sep 5 12:47 test.ibd

After OPTIMIZE TABLE, the space is reclaimed: as you can see, the test.ibd file size decreased from 168M to 68M.

I would like to mention that during this process the table is locked for writes, which can affect performance if you have a large table. If you don’t want to lock the table, you can use one of the best utilities from Percona, pt-online-schema-change, which can run an ALTER without locking the table. Running ALTER TABLE with ENGINE=InnoDB re-creates the table and reclaims the space.

(It’s always recommended to use the latest version of the pt-* utilities.)

mysql> select count(*) from test;
+----------+
| count(*) |
+----------+
|  2991456 |
+----------+
mysql> delete from test limit 2000000;
mysql> select count(*) from test;
+----------+
| count(*) |
+----------+
|   991456 |
+----------+
root@nil:/var/lib/mysql/mydb# ll -h
...
-rw-rw---- 1 mysql mysql 157M Sep 6 11:52 test.ibd
nilnandan@nil:~$ pt-online-schema-change --alter "ENGINE=InnoDB" D=mydb,t=test --execute
Operation, tries, wait:
copy_rows, 10, 0.25
create_triggers, 10, 1
drop_triggers, 10, 1
swap_tables, 10, 1
update_foreign_keys, 10, 1
Altering `mydb`.`test`...
Creating new table...
Created new table mydb._test_new OK.
Altering new table...
Altered `mydb`.`_test_new` OK.
2013-09-06T15:37:46 Creating triggers...
2013-09-06T15:37:47 Created triggers OK.
2013-09-06T15:37:47 Copying approximately 991800 rows...
Copying `mydb`.`test`: 96% 00:01 remain
2013-09-06T15:38:17 Copied rows OK.
2013-09-06T15:38:17 Swapping tables...
2013-09-06T15:38:18 Swapped original and new tables OK.
2013-09-06T15:38:18 Dropping old table...
2013-09-06T15:38:18 Dropped old table `mydb`.`_test_old` OK.
2013-09-06T15:38:18 Dropping triggers...
2013-09-06T15:38:18 Dropped triggers OK.
Successfully altered `mydb`.`test`.
root@nil:/var/lib/mysql/mydb# ll -h
...
-rw-rw---- 1 mysql mysql 56M Sep 6 15:38 test.ibd

Here again you can see that the test.ibd file size decreased, from 157M to 56M.

NOTE: Please make sure that you have ample free disk space before you run pt-online-schema-change, because it creates a temporary table that is roughly the size of the original table. If you run this on the primary node, the change will be replicated to your slaves.
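
A quick, illustrative way to check that headroom before starting, using the example datadir from above:

root@nil:/var/lib/mysql/mydb# ls -lh test.ibd        # size of the table to be rebuilt
root@nil:/var/lib/mysql/mydb# df -h /var/lib/mysql   # free space on the datadir filesystem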


Sep
23
2013
--

MySQL 5.6 Configuration Optimization Webinar, Sept. 25

This Wednesday in our next webinar I’ll share how to configure a better-performing MySQL 5.6 server. You’ll learn a practical approach to generating a sensible configuration file that sets what is needed and omits what is not.

Why dedicate an entire webinar to the new configuration settings within MySQL 5.6? Mainly because the default configuration files that come with MySQL 5.6 are not designed for high-volume production use, and I’ve seen many MySQL incidents caused by poor configuration. Hopefully my advice will save you the headache of tweaking the variables within MySQL’s configuration files in order to work within your organization’s unique business environment.
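
To give a flavor of “setting what is needed and omitting what is not,” here is a tiny, purely illustrative my.cnf fragment; the values shown (and even which variables matter at all) depend entirely on your hardware and workload:

[mysqld]
# Illustrative values only -- size these for your own server
innodb_buffer_pool_size = 8G
innodb_log_file_size    = 512M
innodb_flush_method     = O_DIRECT
max_connections         = 500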

And while I’ll be doing most of the talking, I do look forward to your questions during the webinar, titled “MySQL 5.6 Configuration Optimization.” It will begin at 10 a.m. Pacific time and run for about an hour. So please tune in on Wednesday, Sept. 25 at 10 a.m. You can register here to reserve your spot.

I welcome your questions both during the webinar and in the comments section below. See you Wednesday!


Sep
23
2013
--

Percona XtraDB Cluster: Setting up a simple cluster

Percona XtraDB Cluster (PXC) is different enough from async replication that it can be a bit of a puzzle how to do things the Galera way. This post will attempt to illustrate the basics of setting up a 2-node PXC cluster from scratch.

Requirements

Two servers (they could be VMs) that can talk to each other. I’m using CentOS for this post. Here’s a dirt-simple Vagrant setup to make this easy (on VirtualBox): https://github.com/jayjanssen/two_centos_nodes

These servers are talking over the 192.168.70.0/24 internal network for our example.

jayj@~/Src $ git clone https://github.com/jayjanssen/two_centos_nodes.git
jayj@~/Src $ cd two_centos_nodes
jayj@~/Src/two_centos_nodes $ vagrant up
 Bringing machine 'node1' up with 'virtualbox' provider...
 Bringing machine 'node2' up with 'virtualbox' provider...
 [node1] Importing base box 'centos-6_4-64_percona'...
 [node1] Matching MAC address for NAT networking...
 [node1] Setting the name of the VM...
 [node1] Clearing any previously set forwarded ports...
 [node1] Creating shared folders metadata...
 [node1] Clearing any previously set network interfaces...
 [node1] Preparing network interfaces based on configuration...
 [node1] Forwarding ports...
 [node1] -- 22 => 2222 (adapter 1)
 [node1] Booting VM...
 [node1] Waiting for machine to boot. This may take a few minutes...
 [node1] Machine booted and ready!
 [node1] Setting hostname...
 [node1] Configuring and enabling network interfaces...
 [node1] Mounting shared folders...
 [node1] -- /vagrant
 [node2] Importing base box 'centos-6_4-64_percona'...
 [node2] Matching MAC address for NAT networking...
 [node2] Setting the name of the VM...
 [node2] Clearing any previously set forwarded ports...
 [node2] Fixed port collision for 22 => 2222. Now on port 2200.
 [node2] Creating shared folders metadata...
 [node2] Clearing any previously set network interfaces...
 [node2] Preparing network interfaces based on configuration...
 [node2] Forwarding ports...
 [node2] -- 22 => 2200 (adapter 1)
 [node2] Booting VM...
 [node2] Waiting for machine to boot. This may take a few minutes...
 [node2] Machine booted and ready!
 [node2] Setting hostname...
 [node2] Configuring and enabling network interfaces...
 [node2] Mounting shared folders...
 [node2] -- /vagrant

Install the software

These steps should be repeated on both nodes:

jayj@~/Src/two_centos_nodes $ vagrant ssh node1
Last login: Tue Sep 10 14:15:50 2013 from 10.0.2.2
[root@node1 ~]# yum localinstall http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
Loaded plugins: downloadonly, fastestmirror, priorities
Setting up Local Package Process
percona-release-0.0-1.x86_64.rpm | 6.1 kB 00:00
Examining /var/tmp/yum-root-t61o64/percona-release-0.0-1.x86_64.rpm: percona-release-0.0-1.x86_64
Marking /var/tmp/yum-root-t61o64/percona-release-0.0-1.x86_64.rpm to be installed
Determining fastest mirrors
epel/metalink | 15 kB 00:00
* base: mirror.atlanticmetro.net
* epel: mirror.seas.harvard.edu
* extras: centos.mirror.netriplex.com
* updates: mirror.team-cymru.org
base | 3.7 kB 00:00
base/primary_db | 4.4 MB 00:01
epel | 4.2 kB 00:00
epel/primary_db | 5.5 MB 00:04
extras | 3.4 kB 00:00
extras/primary_db | 18 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 4.4 MB 00:03
Resolving Dependencies
--> Running transaction check
---> Package percona-release.x86_64 0:0.0-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================
Installing:
percona-release x86_64 0.0-1 /percona-release-0.0-1.x86_64 3.6 k
Transaction Summary
================================================================================================================================
Install 1 Package(s)
Total size: 3.6 k
Installed size: 3.6 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : percona-release-0.0-1.x86_64 1/1
Verifying : percona-release-0.0-1.x86_64 1/1
Installed:
percona-release.x86_64 0:0.0-1
Complete!
[root@node1 ~]# yum install -y Percona-XtraDB-Cluster-server
Loaded plugins: downloadonly, fastestmirror, priorities
Loading mirror speeds from cached hostfile
* base: mirror.atlanticmetro.net
* epel: mirror.seas.harvard.edu
* extras: centos.mirror.netriplex.com
* updates: mirror.team-cymru.org
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-server.x86_64 1:5.5.31-23.7.5.438.rhel6 will be installed
--> Processing Dependency: xtrabackup >= 1.9.0 for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-client for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: rsync for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-galera for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: nc for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-shared for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Processing Dependency: libaio.so.1()(64bit) for package: 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64
--> Running transaction check
---> Package Percona-XtraDB-Cluster-client.x86_64 1:5.5.31-23.7.5.438.rhel6 will be installed
---> Package Percona-XtraDB-Cluster-galera.x86_64 0:2.6-1.152.rhel6 will be installed
---> Package Percona-XtraDB-Cluster-shared.x86_64 1:5.5.31-23.7.5.438.rhel6 will be obsoleting
---> Package libaio.x86_64 0:0.3.107-10.el6 will be installed
---> Package mysql-libs.x86_64 0:5.1.69-1.el6_4 will be obsoleted
---> Package nc.x86_64 0:1.84-22.el6 will be installed
---> Package percona-xtrabackup.x86_64 0:2.1.4-656.rhel6 will be installed
--> Processing Dependency: perl(DBD::mysql) for package: percona-xtrabackup-2.1.4-656.rhel6.x86_64
--> Processing Dependency: perl(Time::HiRes) for package: percona-xtrabackup-2.1.4-656.rhel6.x86_64
---> Package rsync.x86_64 0:3.0.6-9.el6 will be installed
--> Running transaction check
---> Package perl-DBD-MySQL.x86_64 0:4.013-3.el6 will be installed
--> Processing Dependency: perl(DBI::Const::GetInfoType) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
--> Processing Dependency: perl(DBI) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
--> Processing Dependency: libmysqlclient.so.16(libmysqlclient_16)(64bit) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
--> Processing Dependency: libmysqlclient.so.16()(64bit) for package: perl-DBD-MySQL-4.013-3.el6.x86_64
---> Package perl-Time-HiRes.x86_64 4:1.9721-131.el6_4 will be installed
--> Running transaction check
---> Package Percona-Server-shared-compat.x86_64 0:5.5.33-rel31.1.566.rhel6 will be obsoleting
---> Package perl-DBI.x86_64 0:1.609-4.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================
Installing:
Percona-Server-shared-compat x86_64 5.5.33-rel31.1.566.rhel6 percona 3.4 M
replacing mysql-libs.x86_64 5.1.69-1.el6_4
Percona-XtraDB-Cluster-server x86_64 1:5.5.31-23.7.5.438.rhel6 percona 15 M
Percona-XtraDB-Cluster-shared x86_64 1:5.5.31-23.7.5.438.rhel6 percona 648 k
replacing mysql-libs.x86_64 5.1.69-1.el6_4
Installing for dependencies:
Percona-XtraDB-Cluster-client x86_64 1:5.5.31-23.7.5.438.rhel6 percona 6.3 M
Percona-XtraDB-Cluster-galera x86_64 2.6-1.152.rhel6 percona 1.1 M
libaio x86_64 0.3.107-10.el6 base 21 k
nc x86_64 1.84-22.el6 base 57 k
percona-xtrabackup x86_64 2.1.4-656.rhel6 percona 6.8 M
perl-DBD-MySQL x86_64 4.013-3.el6 base 134 k
perl-DBI x86_64 1.609-4.el6 base 705 k
perl-Time-HiRes x86_64 4:1.9721-131.el6_4 updates 47 k
rsync x86_64 3.0.6-9.el6 base 334 k
Transaction Summary
================================================================================================================================
Install 12 Package(s)
Total download size: 35 M
Downloading Packages:
(1/12): Percona-Server-shared-compat-5.5.33-rel31.1.566.rhel6.x86_64.rpm | 3.4 MB 00:04
(2/12): Percona-XtraDB-Cluster-client-5.5.31-23.7.5.438.rhel6.x86_64.rpm | 6.3 MB 00:03
(3/12): Percona-XtraDB-Cluster-galera-2.6-1.152.rhel6.x86_64.rpm | 1.1 MB 00:00
(4/12): Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64.rpm | 15 MB 00:04
(5/12): Percona-XtraDB-Cluster-shared-5.5.31-23.7.5.438.rhel6.x86_64.rpm | 648 kB 00:00
(6/12): libaio-0.3.107-10.el6.x86_64.rpm | 21 kB 00:00
(7/12): nc-1.84-22.el6.x86_64.rpm | 57 kB 00:00
(8/12): percona-xtrabackup-2.1.4-656.rhel6.x86_64.rpm | 6.8 MB 00:03
(9/12): perl-DBD-MySQL-4.013-3.el6.x86_64.rpm | 134 kB 00:00
(10/12): perl-DBI-1.609-4.el6.x86_64.rpm | 705 kB 00:00
(11/12): perl-Time-HiRes-1.9721-131.el6_4.x86_64.rpm | 47 kB 00:00
(12/12): rsync-3.0.6-9.el6.x86_64.rpm | 334 kB 00:00
--------------------------------------------------------------------------------------------------------------------------------
Total 1.0 MB/s | 35 MB 00:34
warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID cd2efd2a: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-percona
Importing GPG key 0xCD2EFD2A:
Userid : Percona MySQL Development Team <mysql-dev@percona.com>
Package: percona-release-0.0-1.x86_64 (@/percona-release-0.0-1.x86_64)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-percona
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libaio-0.3.107-10.el6.x86_64 1/13
Installing : 1:Percona-XtraDB-Cluster-shared-5.5.31-23.7.5.438.rhel6.x86_64 2/13
Installing : 1:Percona-XtraDB-Cluster-client-5.5.31-23.7.5.438.rhel6.x86_64 3/13
Installing : Percona-XtraDB-Cluster-galera-2.6-1.152.rhel6.x86_64 4/13
Installing : nc-1.84-22.el6.x86_64 5/13
Installing : 4:perl-Time-HiRes-1.9721-131.el6_4.x86_64 6/13
Installing : perl-DBI-1.609-4.el6.x86_64 7/13
Installing : rsync-3.0.6-9.el6.x86_64 8/13
Installing : Percona-Server-shared-compat-5.5.33-rel31.1.566.rhel6.x86_64 9/13
Installing : perl-DBD-MySQL-4.013-3.el6.x86_64 10/13
Installing : percona-xtrabackup-2.1.4-656.rhel6.x86_64 11/13
Installing : 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64 12/13
ls: cannot access /var/lib/mysql/*.err: No such file or directory
ls: cannot access /var/lib/mysql/*.err: No such file or directory
130917 14:43:16 [Note] WSREP: Read nil XID from storage engines, skipping position init
130917 14:43:16 [Note] WSREP: wsrep_load(): loading provider library 'none'
130917 14:43:16 [Note] WSREP: Service disconnected.
130917 14:43:17 [Note] WSREP: Some threads may fail to exit.
130917 14:43:17 [Note] WSREP: Read nil XID from storage engines, skipping position init
130917 14:43:17 [Note] WSREP: wsrep_load(): loading provider library 'none'
130917 14:43:18 [Note] WSREP: Service disconnected.
130917 14:43:19 [Note] WSREP: Some threads may fail to exit.
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h node1 password 'new-password'
Alternatively you can run:
/usr/bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the manual for more instructions.
Please report any problems with the /usr/bin/mysqlbug script!
Percona recommends that all production deployments be protected with a support
contract (http://www.percona.com/mysql-suppport/) to ensure the highest uptime,
be eligible for hot fixes, and boost your team's productivity.
/var/tmp/rpm-tmp.rVkEi4: line 95: x0: command not found
Percona XtraDB Cluster is distributed with several useful UDFs from Percona Toolkit.
Run the following commands to create these functions:
mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"
See http://code.google.com/p/maatkit/source/browse/trunk/udf for more details
Erasing : mysql-libs-5.1.69-1.el6_4.x86_64 13/13
Verifying : 1:Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel6.x86_64 1/13
Verifying : 1:Percona-XtraDB-Cluster-client-5.5.31-23.7.5.438.rhel6.x86_64 2/13
Verifying : perl-DBD-MySQL-4.013-3.el6.x86_64 3/13
Verifying : Percona-Server-shared-compat-5.5.33-rel31.1.566.rhel6.x86_64 4/13
Verifying : rsync-3.0.6-9.el6.x86_64 5/13
Verifying : perl-DBI-1.609-4.el6.x86_64 6/13
Verifying : percona-xtrabackup-2.1.4-656.rhel6.x86_64 7/13
Verifying : 1:Percona-XtraDB-Cluster-shared-5.5.31-23.7.5.438.rhel6.x86_64 8/13
Verifying : 4:perl-Time-HiRes-1.9721-131.el6_4.x86_64 9/13
Verifying : nc-1.84-22.el6.x86_64 10/13
Verifying : libaio-0.3.107-10.el6.x86_64 11/13
Verifying : Percona-XtraDB-Cluster-galera-2.6-1.152.rhel6.x86_64 12/13
Verifying : mysql-libs-5.1.69-1.el6_4.x86_64 13/13
Installed:
Percona-Server-shared-compat.x86_64 0:5.5.33-rel31.1.566.rhel6 Percona-XtraDB-Cluster-server.x86_64 1:5.5.31-23.7.5.438.rhel6
Percona-XtraDB-Cluster-shared.x86_64 1:5.5.31-23.7.5.438.rhel6
Dependency Installed:
Percona-XtraDB-Cluster-client.x86_64 1:5.5.31-23.7.5.438.rhel6 Percona-XtraDB-Cluster-galera.x86_64 0:2.6-1.152.rhel6
libaio.x86_64 0:0.3.107-10.el6 nc.x86_64 0:1.84-22.el6
percona-xtrabackup.x86_64 0:2.1.4-656.rhel6 perl-DBD-MySQL.x86_64 0:4.013-3.el6
perl-DBI.x86_64 0:1.609-4.el6 perl-Time-HiRes.x86_64 4:1.9721-131.el6_4
rsync.x86_64 0:3.0.6-9.el6
Replaced:
mysql-libs.x86_64 0:5.1.69-1.el6_4
Complete!

Disable iptables and SELinux

It is possible to run PXC with these enabled, but for simplicity here we just disable them (on both nodes!):

[root@node1 ~]# echo 0 > /selinux/enforce
[root@node1 ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
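
These runtime changes do not survive a reboot. If you want to keep them in place on these test VMs (an optional extra step, not part of the original walkthrough), something like this on both nodes would do it:

[root@node1 ~]# chkconfig iptables off
[root@node1 ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config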

Configure the cluster nodes

Create a my.cnf file on each node and put this into it:

[mysqld]
datadir = /var/lib/mysql
binlog_format = ROW
wsrep_cluster_name = twonode
wsrep_cluster_address = gcomm://192.168.70.2,192.168.70.3
wsrep_node_address = 192.168.70.2
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_sst_method = xtrabackup
wsrep_sst_auth = sst:secret
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2

Note that wsrep_node_address should be set to the proper address on each node. We only need this setting because in this environment we are not using the default NIC.
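
For example, node2’s my.cnf in this setup would be identical except for its own address:

wsrep_node_address = 192.168.70.3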

Bootstrap node1

Bootstrapping is simply starting up the first node in the cluster.  Any data on this node is taken as the source of truth for the other nodes.

[root@node1 ~]# service mysql bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster)Starting MySQL (Percona XtraDB Cluster).. SUCCESS!
[root@node1 mysql]# mysql -e "show global status like 'wsrep%'"
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 43ac4bea-1fa8-11e3-8496-070bd0e5c133 |
| wsrep_protocol_version     | 4                                    |
| wsrep_last_committed       | 0                                    |
| wsrep_replicated           | 0                                    |
| wsrep_replicated_bytes     | 0                                    |
| wsrep_received             | 2                                    |
| wsrep_received_bytes       | 133                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 0.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| wsrep_cert_index_size      | 0                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_incoming_addresses   | 192.168.70.2:3306                    |
| wsrep_cluster_conf_id      | 1                                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_state_uuid   | 43ac4bea-1fa8-11e3-8496-070bd0e5c133 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 0                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>    |
| wsrep_provider_version     | 2.6(r152)                            |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+

We can see the cluster is Primary, the size is 1, and our local state is Synced.  This is a one node cluster!

Prep for SST

SST is how new nodes (post-bootstrap) get a copy of the data when joining the cluster. It is in essence (and in reality) a full backup. We specified xtrabackup as our SST method and a username/password (sst:secret). We need to set up a GRANT on node1 so we can run XtraBackup against it to SST node2:

[root@node1 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.31 Percona XtraDB Cluster (GPL), wsrep_23.7.5.r3880
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';
Query OK, 0 rows affected (0.00 sec)

You should not need to re-issue this GRANT when adding more nodes to the cluster.

Start node2

Assuming you’ve installed the software and my.cnf on node2, it should be ready to start up:

[root@node2 ~]# service mysql start
Starting MySQL (Percona XtraDB Cluster).....SST in progress, setting sleep higher. SUCCESS!

If we check the status of the cluster again:

[root@node1 mysql]# mysql -e "show global status like 'wsrep%'"
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 43ac4bea-1fa8-11e3-8496-070bd0e5c133 |
| wsrep_protocol_version     | 4                                    |
| wsrep_last_committed       | 0                                    |
| wsrep_replicated           | 0                                    |
| wsrep_replicated_bytes     | 0                                    |
| wsrep_received             | 6                                    |
| wsrep_received_bytes       | 393                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 0.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| wsrep_cert_index_size      | 0                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_incoming_addresses   | 192.168.70.3:3306,192.168.70.2:3306  |
| wsrep_cluster_conf_id      | 2                                    |
| wsrep_cluster_size         | 2                                    |
| wsrep_cluster_state_uuid   | 43ac4bea-1fa8-11e3-8496-070bd0e5c133 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 1                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>    |
| wsrep_provider_version     | 2.6(r152)                            |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+

We can see that there are now 2 nodes in the cluster!
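
As a quick sanity check (not part of the original output; the demo database and table names are made up), anything written on one node should be readable on the other almost immediately:

[root@node1 ~]# mysql -e "CREATE DATABASE demo; CREATE TABLE demo.t (i INT PRIMARY KEY) ENGINE=InnoDB; INSERT INTO demo.t VALUES (1)"
[root@node2 ~]# mysql -e "SELECT i FROM demo.t"

The row inserted on node1 should come back from the SELECT on node2.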

The network connection is established over the default Galera port of 4567:

[root@node1 ~]# netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:4567                0.0.0.0:*                   LISTEN
tcp        0      0 10.0.2.15:22                10.0.2.2:61129              ESTABLISHED
tcp        0    108 192.168.70.2:4567           192.168.70.3:50170          ESTABLISHED
tcp        0      0 :::22                       :::*                        LISTEN

Summary

In these steps we:

  • Installed PXC server package and dependencies
  • Did the bare-minimum configuration to get it started
  • Bootstrapped the first node
  • Prepared for SST
  • Started the second node (SST was copied by netcat over port 4444)
  • Confirmed both nodes were in the cluster

The setup can certainly be more involved than this, but this gives a simple illustration of what it takes to get things rolling.

