Jun 30, 2014

Databricks Snags $33M In Series B And Debuts Cloud Platform For Processing Big Data

Databricks, the commercial entity created by the developers of the open source Apache Spark project, announced $33M in Series B funding today and the launch of a new cloud product, their first one as a company. There is little doubt that big data is a big deal these days and companies are popping up to help customers process the data. Databricks hopes to simplify the entire matter by moving it… Read More

Jun 30, 2014

CoreOS Raises $8M Series A Round, Launches Managed Linux As A Service

CoreOS, a Linux distribution optimized for very large server deployments, today announced that it has raised an $8 million Series A funding round led by Kleiner Perkins Caufield & Byers with follow-on investments by Sequoia Capital and Fuel Capital. This comes after a previous investment from Andreessen Horowitz and Sequoia Capital last year. Read More

Jun 30, 2014

Microsoft May Prioritize The Desktop In Windows 9

Microsoft is looking to revamp and retool its Windows 8 operating system with a blend of updates that will see the code better suit the form factors that it wants to service. However, could those updates undo its push to create the Windows Store as a favored place for developers to put applications? Read More

Jun 30, 2014

How to avoid even more of the common (but deadly) MySQL development mistakes

On July 16 I’ll be presenting my next webinar focusing on common mistakes committed by MySQL users.

How to Avoid Even More of the Common (but Deadly) MySQL Development Mistakes

“Why can’t I just save my data to a file?”

Getting an SQL database right seems complex, and for good reason. The variety of data-driven applications is practically limitless, and as project requirements change, we find ourselves taking shortcuts and adopting bad habits. But there are proven methods for developing and managing data in a scalable and reliable way. This talk shows you some of these methods, including:

  • How to optimize a database application with partitioning and sharding.
  • How to avoid the secret security vulnerability that you’re probably guilty of.
  • How to use optimizer hints effectively (a brief example follows this list).
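
As a taste of that last item, MySQL index hints let you override the optimizer’s index choice when you know the data better than it does. A minimal sketch, with purely illustrative table and index names:

mysql> SELECT * FROM orders FORCE INDEX (idx_customer_date)
    -> WHERE customer_id = 42 AND order_date >= '2014-01-01';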

At the end of this webinar, you’ll be more productive and confident as you develop database-driven applications.

Please register for this webinar and join me on July 16!

You might also like to view recordings of my past “deadly mistakes” webinars.

The post How to avoid even more of the common (but deadly) MySQL development mistakes appeared first on MySQL Performance Blog.

Jun 30, 2014

Get Paid: GoDaddy Links With PayPal, Dwolla, Stripe For A Mobile & Web Payment Service

As GoDaddy gears up for a $100 million IPO, the domain and web services company is adding on more features that will help it make more profitable revenues from its 12 million small-business customers. The latest of these puts GoDaddy further into the world of e-commerce. Today, it is launching “Get Paid,” a new online and mobile payments service created with existing… Read More

Jun 27, 2014

Failover with the MySQL Utilities – Part 1: mysqlrpladmin

MySQL Utilities are a set of tools provided by Oracle to perform many kinds of administrative tasks. When GTID replication is enabled, two tools can be used for slave promotion: mysqlrpladmin and mysqlfailover. We will review mysqlrpladmin (version 1.4.3) in this post.

Summary

  • mysqlrpladmin can perform manual failover/switchover when GTID-replication is enabled.
  • You need to have your servers configured with --master-info-repository = TABLE or to add the --rpl-user option for the tool to work properly.
  • The check for errant transactions is failing in the current GA version (1.4.3), so be extra careful when using the tool, or watch bug #73110 to see when a fix is committed.
  • There are some limitations, for instance the inability to pre-configure the list of slaves in a configuration file or the inability to check that the tool will work well without actually doing a failover or switchover.

Failover vs switchover

mysqlrpladmin can help you promote a slave to be the new master when the master goes down, and then automate replication reconfiguration after this slave promotion. There are two separate scenarios: unplanned promotion (failover) and planned promotion (switchover). The distinction is more than terminology: it affects how you have to run the tool.

Setup for this test

To test the tool, our setup will be a master with two slaves, all using GTID replication. mysqlrpladmin can show us the current replication topology with the health command:

$ mysqlrpladmin --master=root@localhost:13001 --discover-slaves-login=root health
# Discovering slaves for master at localhost:13001
# Discovering slave at localhost:13002
# Found slave: localhost:13002
# Discovering slave at localhost:13003
# Found slave: localhost:13003
# Checking privileges.
#
# Replication Topology Health:
+------------+--------+---------+--------+------------+---------+
| host       | port   | role    | state  | gtid_mode  | health  |
+------------+--------+---------+--------+------------+---------+
| localhost  | 13001  | MASTER  | UP     | ON         | OK      |
| localhost  | 13002  | SLAVE   | UP     | ON         | OK      |
| localhost  | 13003  | SLAVE   | UP     | ON         | OK      |
+------------+--------+---------+--------+------------+---------+
# ...done.

As you can see, we have to specify how to connect to the master (no surprise) but instead of listing all the slaves, we can let the tool discover them.

Simple failover scenario

What will the tool do when performing failover? Essentially we will give it the list of slaves and the list of candidates and it will:

  • Run a few sanity checks
  • Elect a candidate to be the new master
  • Make the candidate as up-to-date as possible by making it a slave of all the other slaves
  • Configure replication on all the other slaves to make them replicate from the new master

After killing the master with kill -9, let’s try a failover:

$ mysqlrpladmin --slaves=root:@localhost:13002,root:@localhost:13003 --candidates=root@localhost:13002 failover

This time, the master is down so the tool has no way to automatically discover the slaves. Thus we have to specify them with the --slaves option.

However we get an error:

# Checking privileges.
# Checking privileges on candidates.
ERROR: You must specify either the --rpl-user or set all slaves to use --master-info-repository=TABLE.

The error message is clear, but it would have been nice to get such details when running the health command (perhaps as a warning instead of an error). That would let you check beforehand that the tool can run smoothly, rather than discovering in the middle of an emergency that you have to dig through the documentation to find which option is missing.

Let’s choose to specify the replication user:

$ mysqlrpladmin --slaves=root:@localhost:13002,root:@localhost:13003 --candidates=root@localhost:13002 --rpl-user=repl:repl failover
# Checking privileges.
# Checking privileges on candidates.
# Performing failover.
# Candidate slave localhost:13002 will become the new master.
# Checking slaves status (before failover).
# Preparing candidate for failover.
# Creating replication user if it does not exist.
# Stopping slaves.
# Performing STOP on all slaves.
# Switching slaves to new master.
# Disconnecting new master as slave.
# Starting slaves.
# Performing START on all slaves.
# Checking slaves for errors.
# Failover complete.
#
# Replication Topology Health:
+------------+--------+---------+--------+------------+---------+
| host       | port   | role    | state  | gtid_mode  | health  |
+------------+--------+---------+--------+------------+---------+
| localhost  | 13002  | MASTER  | UP     | ON         | OK      |
| localhost  | 13003  | SLAVE   | UP     | ON         | OK      |
+------------+--------+---------+--------+------------+---------+
# ...done.

Simple switchover scenario

Let’s now restart the old master and configure it as a slave of the new master. By the way, this can be done with mysqlreplicate, another tool from the MySQL Utilities; a minimal invocation, assuming the same replication user as before, would look something like this:
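
$ mysqlreplicate --master=root@localhost:13002 --slave=root@localhost:13001 --rpl-user=repl:repl

If we want to promote the old master, we can run: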

$ mysqlrpladmin --master=root@localhost:13002 --new-master=root@localhost:13001 --discover-slaves-login=root --demote-master --rpl-user=repl:repl --quiet switchover
# Discovering slave at localhost:13001
# Found slave: localhost:13001
# Discovering slave at localhost:13003
# Found slave: localhost:13003
+------------+--------+---------+--------+------------+---------+
| host       | port   | role    | state  | gtid_mode  | health  |
+------------+--------+---------+--------+------------+---------+
| localhost  | 13001  | MASTER  | UP     | ON         | OK      |
| localhost  | 13002  | SLAVE   | UP     | ON         | OK      |
| localhost  | 13003  | SLAVE   | UP     | ON         | OK      |
+------------+--------+---------+--------+------------+---------+

Notice that the master is available in this case, so we can use the --discover-slaves-login option. Also notice that we can tune the verbosity of the tool with --quiet or --verbose, or even log the output to a file with --log.

We also used --demote-master to make the old master a slave of the new master. Without this option, the old master would be isolated from the other nodes.

Extension points

In general, performing a switchover/failover at the database level is one thing, but the other components of the application usually need to be told that something has changed for the application to keep working correctly.

This is where the extension points are handy: you can execute a script before switchover/failover with --exec-before and after switchover/failover with --exec-after.

For instance with these simple scripts:

# cat /usr/local/bin/check_before
#!/bin/bash
/usr/local/mysql5619/bin/mysql -uroot -S /tmp/node1.sock -Ee 'SHOW SLAVE STATUS' > /tmp/before
# cat /usr/local/bin/check_after
#!/bin/bash
/usr/local/mysql5619/bin/mysql -uroot -S /tmp/node1.sock -Ee 'SHOW SLAVE STATUS' > /tmp/after

We can execute:

$ mysqlrpladmin --master=root@localhost:13001 --new-master=root@localhost:13002 --discover-slaves-login=root --demote-master --rpl-user=repl:repl --quiet --exec-before=/usr/local/bin/check_before --exec-after=/usr/local/bin/check_after switchover

Looking at /tmp/before and /tmp/after, we can see that our scripts were executed:

# cat /tmp/before
# cat /tmp/after
*************************** 1. row ***************************
               Slave_IO_State: Queueing master event to the relay log
                  Master_Host: localhost
                  Master_User: repl
                  Master_Port: 13002
[...]

If the external script does not seem to work, using --verbose can be useful to diagnose the issue.

What about errant transactions?

We already mentioned that errant transactions can create lots of issues when a new master is promoted in a cluster running GTIDs. So the question is: how does mysqlrpladmin behave when there is an errant transaction?

Let’s create an errant transaction:

# On localhost:13003
mysql> CREATE DATABASE test2;
mysql> FLUSH LOGS;
mysql> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000001 |     69309 |
| mysql-bin.000002 |   1237667 |
| mysql-bin.000003 |       617 |
| mysql-bin.000004 |       231 |
+------------------+-----------+
mysql> PURGE BINARY LOGS TO 'mysql-bin.000004';

Now let’s try to promote localhost:13003 as the new master:

$ mysqlrpladmin --master=root@localhost:13001 --new-master=root@localhost:13003 --discover-slaves-login=root --demote-master --rpl-user=repl:repl --quiet switchover
[...]
+------------+--------+---------+--------+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| host       | port   | role    | state  | gtid_mode  | health                                                                                                                                                                                                                                                                                              |
+------------+--------+---------+--------+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| localhost  | 13003  | MASTER  | UP     | ON         | OK                                                                                                                                                                                                                                                                                                  |
| localhost  | 13001  | SLAVE   | UP     | ON         | IO thread is not running., Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Slave has 1 transactions behind master.  |
| localhost  | 13002  | SLAVE   | UP     | ON         | IO thread is not running., Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Slave has 1 transactions behind master.  |
+------------+--------+---------+--------+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Oops! Although the documentation suggests otherwise, the tool does not check for errant transactions. This is a major issue, as you cannot run failover/switchover reliably with GTID replication if errant transactions are not correctly detected.

The documentation says errant transactions should be checked, and a quick look at the code confirms a check was intended, but it does not work; the problem has been reported as bug #73110.
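
In the meantime, you can check for errant transactions by hand with MySQL 5.6’s GTID functions. A minimal sketch (you paste the master’s executed set into the placeholder yourself):

# On the master
mysql> SELECT @@GLOBAL.gtid_executed;
# On each slave; any non-empty result is an errant transaction
mysql> SELECT GTID_SUBTRACT(@@GLOBAL.gtid_executed, '<master gtid_executed>');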

Some limitations

Apart from the missing errant transaction check, I also noticed a few limitations:

  • You cannot use a configuration file listing all the slaves. This becomes tedious once you have a large number of slaves; in such a case, you should write a wrapper script around mysqlrpladmin to generate the right command for you (a minimal sketch follows this list).
  • The slave election process is either automatic or it relies on the order of the servers given in the --candidates option. This is not very sophisticated.
  • It would be useful to have a --dry-run mode that validates everything is configured correctly without actually failing/switching over. This is something MHA offers, for instance.
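
For the first point, a minimal wrapper sketch, assuming your slaves are listed one per line (user:pass@host:port) in a hypothetical /etc/mysql/slaves.list file:

#!/bin/bash
# Join the slave list with commas and fail over to the candidate
# passed as the first argument.
SLAVES=$(paste -sd, /etc/mysql/slaves.list)
mysqlrpladmin --slaves="$SLAVES" --candidates="$1" --rpl-user=repl:repl failover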

Conclusion

mysqlrpladmin is a very good tool to help you perform manual failover/switchover in a cluster using GTID replication. The main caveat at this point is the failing check for errant transactions, which requires a lot of care before executing the tool.

The post Failover with the MySQL Utilities – Part 1: mysqlrpladmin appeared first on MySQL Performance Blog.

Jun 27, 2014

Open Platforms Fuel Startup Ecosystems

Companies face an inherent tension between being open or proprietary, but we’ve seen, again and again, that open systems can act as catalysts for entirely new businesses built on top of a popular platform. Earlier this month, we encountered examples of each approach, as Tesla Motors opened up its patents and Netflix decided to shut down, or at least severely limit access to its API. (To… Read More

Jun 26, 2014

Percona Server with TokuDB (beta): Installation, configuration

My previous post was an introduction to the TokuDB storage engine, aimed at explaining the basics of its design and how it differentiates itself from InnoDB/XtraDB. This post is all about motivating you to give it a try and have a look for yourself. Percona Server does not officially support TokuDB as of today, though the development team is working hard on this and the first GA release of Percona Server with TokuDB is looming on the horizon. However, a beta version is available now. For the installation tests in this post I’ve used the latest version of Percona Server alongside the accompanying TokuDB complement, which was published last week.

Installing Percona Server with TokuDB on a sandbox

One of the tools Percona Support Engineers really love is Giuseppe Maxia’s MySQL Sandbox. It allows us to set up a sandbox running a MySQL instance of our choice and makes spinning up multiple instances for testing very easy. Whenever a customer reaches out to us with a problem happening on a particular version of MySQL or Percona Server that we can reproduce, we quickly spin up a new sandbox and test it ourselves, so it’s very handy. I’ll use one here to explore this beta version of Percona Server with TokuDB, but if you prefer you can install it the regular way using a package from our apt experimental or yum testing repositories.

We start by downloading the tarballs from here: TokuDB’s plugin is packaged in its own tarball, so there are two to download. Once you get them, let’s decompress both to create a unified working directory, then compress it again into the single tarball we’ll use as the source for our sandbox:

[nando@test01 ~]# tar zxvf Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz
[nando@test01 ~]# tar zxvf Percona-Server-5.6.17-rel66.0-608.TokuDB.Linux.x86_64.tar.gz
[nando@test01 ~]# tar cfa Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz Percona-Server-5.6.17-rel66.0-608.Linux.x86_64/

Before going ahead, verify whether transparent huge pages are enabled, as TokuDB won’t run if they are. See this documentation page for an explanation of what this is and how to disable it on Ubuntu. On my CentOS test server it was defined in a slightly different place, and I did the following to temporarily disable it:

[nando@test01]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
[nando@test01]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
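
Note that these echo commands do not survive a reboot. One way to make the change permanent, assuming your distribution executes /etc/rc.local at boot, is to append the same commands there:

[nando@test01]# echo 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled' >> /etc/rc.local
[nando@test01]# echo 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag' >> /etc/rc.local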

We’re now ready to create our sandbox. The following command should be enough (I’ve chosen to run Percona Server on port 5617; you can use any other available one):

[nando@test01 ~]# make_sandbox Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz -- --sandbox_directory=tokudb --sandbox_port=5617

If the creation process goes well you will see something like the following at the end:

.... sandbox server started
Your sandbox server was installed in $HOME/sandboxes/tokudb

You should now be able to access the MySQL console on the sandbox with the default credentials; if you cannot, check the log at $HOME/sandboxes/tokudb/data/msandbox.err:

[nando@test01 ~]# mysql --socket=/tmp/mysql_sandbox5617.sock -umsandbox -pmsandbox

Alternatively, you can make use of the “use” script located inside the sandbox directory, which employs the same credentials (configured in the client section of the configuration file my.sandbox.cnf):

[nando@test01 ~]# cd sandboxes/tokudb/
[nando@test01 tokudb]# ./use

The first thing to check is whether TokuDB is listed as an available storage engine:

mysql> SHOW ENGINES;
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                                    | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| (...)              | (...)   | (...)                                                                      | (...)        | (...)| (...)      |
| TokuDB             | YES     | Tokutek TokuDB Storage Engine with Fractal Tree(tm) Technology             | YES          | YES  | YES        |
| InnoDB             | DEFAULT | Percona-XtraDB, Supports transactions, row-level locking, and foreign keys | YES          | YES  | YES        |
| (...)              | (...)   | (...)                                                                      | NO           | (...)| (...)      |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+

If that’s not the case, you may need to load the plugin manually (I had to in my sandbox); you may not need to if you’re installing it from a package on a fresh setup:

mysql> INSTALL PLUGIN tokudb SONAME 'ha_tokudb.so';

TokuDB should now figure in the list of supported ENGINES but you still need to activate the related plugins:

mysql> INSTALL PLUGIN tokudb_file_map SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_fractal_tree_info SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_fractal_tree_block_map SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_trx SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_locks SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_lock_waits SONAME 'ha_tokudb.so';

Please note that the INSTALL PLUGIN action results in a permanent change and is thus required only once. No modifications to MySQL’s configuration file are needed for these plugins to load on subsequent server restarts.
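
You can see what INSTALL PLUGIN registered by querying the mysql.plugin table, which is where those permanent changes are recorded:

mysql> SELECT * FROM mysql.plugin;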

Now you should see not only the main TokuDB plugin but also the INFORMATION_SCHEMA add-ons:

mysql> SHOW PLUGINS;
+-------------------------------+----------+--------------------+--------------+---------+
| Name                          | Status   | Type               | Library      | License |
+-------------------------------+----------+--------------------+--------------+---------+
| (...)                         | (...)    | (...)              | (...)        | (...)   |
| TokuDB                        | ACTIVE   | STORAGE ENGINE     | ha_tokudb.so | GPL     |
| TokuDB_trx                    | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_locks                  | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_lock_waits             | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_file_map               | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_fractal_tree_info      | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_fractal_tree_block_map | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
+-------------------------------+----------+--------------------+--------------+---------+
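
These INFORMATION_SCHEMA plugins can be queried like regular tables. For example, TokuDB_file_map shows how TokuDB dictionaries map to files on disk, which helps make sense of the datadir listing further below:

mysql> SELECT * FROM information_schema.TokuDB_file_map;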

We are now ready to create our first TokuDB table. The only difference here is specifying TokuDB as the storage engine to use:

mysql> CREATE TABLE test.Numbers (id INT PRIMARY KEY, number VARCHAR(20)) ENGINE=TokuDB;

Note some unfamiliar files lying in the datadir; the details surrounding those are certainly good material for future posts:

[nando@test01]# ls ~/sandboxes/tokudb/data
auto.cnf                   _test_Numbers_main_3_2_19.tokudb
ibdata1                    _test_Numbers_status_3_1_19.tokudb
ib_logfile0                tokudb.directory
ib_logfile1                tokudb.environment
log000000000000.tokulog25  __tokudb_lock_dont_delete_me_data
msandbox.err               __tokudb_lock_dont_delete_me_environment
mysql                      __tokudb_lock_dont_delete_me_logs
mysql_sandbox5617.pid      __tokudb_lock_dont_delete_me_recovery
performance_schema         __tokudb_lock_dont_delete_me_temp
tc.log                     tokudb.rollback
test

Configuration: what’s really important

As Vadim noted long ago, “Tuning of TokuDB is much easier than InnoDB, there’re only a few parameters to change, and actually out-of-box things running pretty well”:

mysql> show variables like 'tokudb_%';
+---------------------------------+------------------+
| Variable_name                   | Value            |
+---------------------------------+------------------+
| tokudb_alter_print_error        | OFF              |
| tokudb_analyze_time             | 5                |
| tokudb_block_size               | 4194304          |
| tokudb_cache_size               | 522651648        |
| tokudb_check_jemalloc           | 1                |
| tokudb_checkpoint_lock          | OFF              |
| tokudb_checkpoint_on_flush_logs | OFF              |
| tokudb_checkpointing_period     | 60               |
| tokudb_cleaner_iterations       | 5                |
| tokudb_cleaner_period           | 1                |
| tokudb_commit_sync              | ON               |
| tokudb_create_index_online      | ON               |
| tokudb_data_dir                 |                  |
| tokudb_debug                    | 0                |
| tokudb_directio                 | OFF              |
| tokudb_disable_hot_alter        | OFF              |
| tokudb_disable_prefetching      | OFF              |
| tokudb_disable_slow_alter       | OFF              |
| tokudb_disable_slow_update      | OFF              |
| tokudb_disable_slow_upsert      | OFF              |
| tokudb_empty_scan               | rl               |
| tokudb_fs_reserve_percent       | 5                |
| tokudb_fsync_log_period         | 0                |
| tokudb_hide_default_row_format  | ON               |
| tokudb_init_flags               | 11403457         |
| tokudb_killed_time              | 4000             |
| tokudb_last_lock_timeout        |                  |
| tokudb_load_save_space          | ON               |
| tokudb_loader_memory_size       | 100000000        |
| tokudb_lock_timeout             | 4000             |
| tokudb_lock_timeout_debug       | 1                |
| tokudb_log_dir                  |                  |
| tokudb_max_lock_memory          | 65331456         |
| tokudb_pk_insert_mode           | 1                |
| tokudb_prelock_empty            | ON               |
| tokudb_read_block_size          | 65536            |
| tokudb_read_buf_size            | 131072           |
| tokudb_read_status_frequency    | 10000            |
| tokudb_row_format               | tokudb_zlib      |
| tokudb_tmp_dir                  |                  |
| tokudb_version                  | tokudb-7.1.7-rc7 |
| tokudb_write_status_frequency   | 1000             |
+---------------------------------+------------------+
42 rows in set (0.00 sec)

The most important of the tokudb_ variables is arguably tokudb_cache_size. The test server where I ran these tests (test01) has a little less than 1G of memory, and as you can see above, TokuDB is “reserving” half (50%) of it for itself. That’s the default behavior but, of course, you can change it. And you must change it if you are also going to have InnoDB tables on your server: you should not overcommit memory between the InnoDB and TokuDB engines. Shlomi Noach wrote a good post explaining the main TokuDB-specific variables and what they do. It’s definitely worth a read.
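
For instance, on a small server hosting both engines you might cap the two caches explicitly in the configuration file; the values below are purely illustrative, not recommendations:

[mysqld]
# Split memory between the engines instead of letting TokuDB take 50% by default
tokudb_cache_size = 256M
innodb_buffer_pool_size = 256M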

I hope you have fun testing Percona Server with TokuDB! If you run into any problems worth reporting, please let us know.

The post Percona Server with TokuDB (beta): Installation, configuration appeared first on MySQL Performance Blog.

Jun 26, 2014

Daily, A New App From Buffer, Uses Tinder Gestures For Finding And Sharing Content

In May, Buffer — the app that competes against the likes of SocialFlow and HootSuite as a way for individuals and businesses to schedule and share things on social networks — added a new content suggestion feature to its service, so that when users didn’t have something of their own to spread around, they could get ideas from elsewhere to keep the networking wheels turning. Read More

Jun 26, 2014

Zendesk Insights Shows It’s A Data-Driven World

Last month, I wrote an article called Actually, Every Company Is A Big Data Company in which I postulated that simply by the act of collecting data as an artifact of doing business, every company is in the Big Data business. Zendesk is putting this idea into practice, releasing a new feature today called Zendesk Insights that purports to give its customers a deeper understanding of… Read More
