Nov
24
2014
--

Salesforce Wants Desk.com App Hub To Be One-Stop Shop For SMBs

Salesforce.com announced an update today to Desk.com, its customer service product aimed at small businesses. It has created a Desk.com app store called the App Hub that includes a set of partner apps chosen specifically for SMBs, giving them a one-stop shop for the cloud applications they need, and it has created a way to integrate those apps directly into Desk.com. Among… Read More

Nov
24
2014
--

Schema changes in MySQL for OpenStack Trove users

People using OpenStack Trove instances can hit a common issue in the MySQL world: how to perform schema change operations while minimizing the impact on the database server? Let’s explore the options that can allow online schema changes.

Summary

With MySQL 5.5, pt-online-schema-change from Percona Toolkit is your best option for large tables, while regular ALTER TABLE statements are only acceptable for small tables. Also beware of metadata locks.

With MySQL 5.6, almost all types of schema changes can be done online. Metadata locks can also be an issue. pt-online-schema-change can still be worth using as it is also online on read replicas.

Regular ALTER TABLE with MySQL 5.5

If you are still using MySQL 5.5, almost all schema changes will require a table rebuild, and MySQL will set a write lock, so all writes to the table being modified will be blocked. As soon as the table gets large, or if you cannot afford a maintenance window, using ALTER TABLE becomes tricky.

The only exception is that secondary indexes can be added or removed without rebuilding the whole table. The table is still write locked during the operation but it is much faster.

You can spot this ‘fast index creation’ process by looking at SHOW PROCESSLIST (see ‘manage keys’ in the State field):

+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+
| Id | User | Host      | db     | Command | Time | State       | Info                                    | Rows_sent | Rows_examined | Rows_read |
+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+
|  1 | root | localhost | sbtest | Query   |    4 | manage keys | alter table sbtest2 add index idx_k (k) |         0 |             0 |         0 |
+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+

Fast index creation is helpful for small or even medium tables, but it is not fast enough for large tables.

A common workaround when you have a master-slave configuration is to perform the schema change on the slave first, promote the slave, and then perform the schema change on the old master. However, replication is not available at all if you are using Icehouse, and even with Juno, replicas are read-only. So this workaround is not usable with Trove.

pt-online-schema-change

A good alternative is to use pt-online-schema-change. In a nutshell the tool creates a new table with the desired structure, creates triggers on the original table, and copies rows in chunks from the original table to the new table.

Nice features of the tool include the ability to keep writing to the original table while the schema change is being performed (hence the name of the tool), monitoring of replication lag on slaves, and automatic throttling to keep lag under a reasonable threshold.

Let’s assume you want to add a new field called ‘city’ in the ‘user’ table. Instead of running:

mysql> ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '';

you will run:

# pt-online-schema-change --progress=percentage,10 --alter="ADD city VARCHAR(50) NOT NULL DEFAULT ''" h=localhost,u=root,D=mydb,t=user --execute

All ALTER TABLE statements can be converted to an equivalent pt-online-schema-change command.
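
If you want to be extra careful, the tool can also validate a change before touching any data. Here is a minimal sketch, assuming the same ‘user’ table (idx_city is a hypothetical index name and the 5s lag threshold is arbitrary): --dry-run creates and alters the new table but copies no rows, while --max-lag is the threshold used for the throttling mentioned above:

# pt-online-schema-change --dry-run --alter="ADD INDEX idx_city (city)" h=localhost,u=root,D=mydb,t=user
# pt-online-schema-change --execute --max-lag=5 --alter="ADD INDEX idx_city (city)" h=localhost,u=root,D=mydb,t=user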

Is there any tradeoff? The first one is that pt-online-schema-change will be slower than a plain ALTER TABLE. This is expected as the tool has extra code to throttle itself if it adds too much load.

How much slower is it? As always, it depends. On a lightly loaded server the difference will be minimal: for instance, I measured 3min 24s for ALTER TABLE and 3min 30s for pt-online-schema-change on a test table. However, on a heavily loaded server it can easily be 5x slower.

The second tradeoff is that pt-online-schema-change adds triggers on the original table to capture data changes. So if you already have triggers on the table, you cannot use the tool (this restriction will be lifted in MySQL 5.7, which allows multiple triggers per table).
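
Before reaching for the tool, it is worth checking whether the table already has triggers. A quick way (the LIKE pattern matches the table name):

mysql> SHOW TRIGGERS FROM mydb LIKE 'user';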

Metadata Locks

Metadata locks were introduced in MySQL 5.5 for better transaction isolation.

But one side effect is: if you start a schema change operation on a table while another transaction is reading or writing on the same table, ALTER TABLE will be stuck in the Waiting for metadata lock state.

Another negative side effect of metadata locks is that in some situations ALTER TABLE will also block all queries to the table that is being altered, even reads. In the worst cases, this may create a query pileup situation that will freeze the whole server. For more on this topic, have a look at this post.

What about pt-online-schema-change and metadata locks? It is of course hit by this issue/feature. However, the good thing is that the tool sets the metadata lock timeout to 60s instead of the default of 1 year. So after 60s, pt-online-schema-change will simply retry the operation that failed because of metadata locks.
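
You can borrow the same trick for a plain ALTER TABLE by lowering the metadata lock timeout in your session before running it. A minimal sketch, reusing the ‘city’ example from above (60s mirrors the tool's setting):

mysql> SET SESSION lock_wait_timeout = 60;
mysql> ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '';

If the lock cannot be acquired within 60s, the statement aborts with a ‘Lock wait timeout exceeded’ error instead of sitting in the queue and blocking other queries.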

MySQL 5.6: Online Schema Changes?

Metadata locks also exist in MySQL 5.6, so the same issues described above can happen.

However the good news with MySQL 5.6 is that most schema changes can be done online. Now the question is: should you use pt-online-schema-change or a regular online ALTER TABLE statement?

Both have pros and cons:

  • ALTER TABLE is easy to use, while becoming confident with pt-online-schema-change requires some time.
  • There is no way for ALTER TABLE to know if it is overloading a replica, while pt-online-schema-change monitors replication lag and throttles itself if needed.
  • ALTER TABLE is only online for the master, not for replicas, while pt-online-schema-change is online for all servers.

The last point can be a bit confusing: why would an online ALTER TABLE on the master not be online on the slave as well? The truth is that unless you are using multi-threaded replication, writes on slaves are serialized. So while the ALTER TABLE is running, the slave will not process any other writes, which is not much different from an offline ALTER TABLE.

So if you are running a single server (all Icehouse users, for instance), ALTER TABLE is probably the right choice. But if you have read replicas, and especially if the application cannot tolerate replication lag, pt-online-schema-change is definitely a better choice.
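
If you go with a plain ALTER TABLE on MySQL 5.6, you can also state your expectations explicitly so that the server fails fast instead of silently falling back to a blocking table copy. A sketch, reusing the hypothetical ‘city’ column from above (both clauses are optional):

mysql> ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '', ALGORITHM=INPLACE, LOCK=NONE;

If the requested algorithm or lock level is not supported for the operation, the statement returns an error immediately rather than running as a blocking copy.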

Conclusion

Performing schema changes is becoming easier with newer versions of MySQL. However, potential issues exist with any version, so always be careful when you need to run ALTER TABLE.

The post Schema changes in MySQL for OpenStack Trove users appeared first on MySQL Performance Blog.

Nov
22
2014
--

Debunking The 7 Myths of Marketing In The Enterprise

Over the last few months enterprise giants Oracle and Salesforce have put on shows featuring the likes of Hillary Clinton and Bruno Mars, reminding all of us in the enterprise world that marketing matters. While most enterprise startups will never reach these levels of budget and glamour, this time of the year does serve as a reminder that enterprise marketing matters. Read More

Nov
21
2014
--

Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process

I’m happy to announce that Google has added Click-to-deploy functionality for Percona XtraDB Cluster (PXC) on Google Cloud Compute nodes. This gives you the ability to rapidly spin up a cluster for experimentation, performance testing, or even production use on Google Compute Engine virtual machines.

What is Percona XtraDB Cluster?

Percona XtraDB Cluster is a virtually synchronous cluster of MySQL/InnoDB nodes. Unlike conventional MySQL asynchronous replication, which has a specific network topology of master and slaves, PXC's nodes have no specific topology. Functionally, this means that there are no masters and slaves, so you can read and write on any node.

Further, any failure in the cluster does not require any re-arranging of the replication topology. Instead, clients just reconnect to another node and continue reading and writing.

We have a ton of material about Percona XtraDB Cluster in previous posts, in the PXC manual, and in various webinars. If you want a concentrated one-hour overview of Percona XtraDB Cluster, I'd recommend watching this webinar.

How do I use Click-to-deploy?

Simply visit Google Cloud's solutions page here: https://cloud.google.com/solutions/percona to get started. You are given a simple setup wizard that allows you to choose the size and quantity of nodes you want, the disk storage type and volume, etc. Once you ‘Deploy Cluster’, your instances will launch and form a cluster automatically with sane default tunings. After that, it's all up to you what you want to do.

Seeing it in action

Once your instances launch, you can add an SSH key to access them (you can also install the Google Cloud SDK). This, handily, creates a new account based on the username in the SSH public key text. Once I add the key, I can simply ssh to the public IP of the instance and I have my own account:

jayj@~ [500]$ ssh [public ip]
Linux percona-mysql-niyr 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
jayj@percona-mysql-niyr:~$

Once there I installed sysbench 0.5 from the Percona apt repo:

jayj@percona-mysql-niyr:~$ sudo -i
root@percona-mysql-niyr:~# apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
root@percona-mysql-niyr:~# echo "deb http://repo.percona.com/apt wheezy main" > /etc/apt/sources.list.d/percona.list
root@percona-mysql-niyr:~# apt-get update; apt-get install sysbench -y
...
Setting up sysbench (0.5-3.wheezy) ...
root@percona-mysql-niyr:~# logout
jayj@percona-mysql-niyr:~$

Now we can load some data and run a quick test:

jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --oltp-tables-count=16 --oltp-table-size=1000000 --oltp-auto-inc=off --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test run
sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Random number generator seed is 0 and will be ignored
Threads started!
thread prepare4
Creating table 'sbtest5'...
thread prepare2
Creating table 'sbtest3'...
thread prepare6
Creating table 'sbtest7'...
thread prepare0
Creating table 'sbtest1'...
thread prepare7
Creating table 'sbtest8'...
thread prepare1
Creating table 'sbtest2'...
thread prepare5
Creating table 'sbtest6'...
thread prepare3
Creating table 'sbtest4'...
Inserting 100000 records into 'sbtest5'
Inserting 100000 records into 'sbtest3'
Inserting 100000 records into 'sbtest7'
Inserting 100000 records into 'sbtest1'
Inserting 100000 records into 'sbtest8'
Inserting 100000 records into 'sbtest2'
Inserting 100000 records into 'sbtest6'
Inserting 100000 records into 'sbtest4'
Creating table 'sbtest13'...
Creating table 'sbtest15'...
Inserting 100000 records into 'sbtest13'
Inserting 100000 records into 'sbtest15'
Creating table 'sbtest11'...
Creating table 'sbtest12'...
Inserting 100000 records into 'sbtest11'
Creating table 'sbtest16'...
Inserting 100000 records into 'sbtest12'
Creating table 'sbtest10'...
Creating table 'sbtest14'...
Creating table 'sbtest9'...
Inserting 100000 records into 'sbtest16'
Inserting 100000 records into 'sbtest9'
Inserting 100000 records into 'sbtest10'
Inserting 100000 records into 'sbtest14'
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           608
        other:                           32
        total:                           640
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 608    (11.33 per sec.)
    other operations:                    32     (0.60 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
General statistics:
    total time:                          53.6613s
    total number of events:              10000
    total time taken by event execution: 0.0042s
    response time:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.02ms
         approx.  95 percentile:               0.00ms
Threads fairness:
    events (avg/stddev):           1250.0000/3307.19
    execution time (avg/stddev):   0.0005/0.00
jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/update_index.lua --oltp-tables-count=16 --oltp-table-size=1000000 --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test --max-requests=0 --max-time=10 --report-interval=1 --oltp-auto-inc=off --rand-init=on --rand-type=uniform run
sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Report intermediate results every 1 second(s)
Initializing random number generator from timer.
Random number generator seed is 0 and will be ignored
Threads started!
[   1s] threads: 8, tps: 0.00, reads: 0.00, writes: 1396.43, response time: 10.85ms (95%), errors: 0.00, reconnects:  0.00
[   2s] threads: 8, tps: 0.00, reads: 0.00, writes: 1314.13, response time: 14.91ms (95%), errors: 0.00, reconnects:  0.00
[   3s] threads: 8, tps: 0.00, reads: 0.00, writes: 1382.87, response time: 12.34ms (95%), errors: 0.00, reconnects:  0.00
[   4s] threads: 8, tps: 0.00, reads: 0.00, writes: 949.09, response time: 12.88ms (95%), errors: 0.00, reconnects:  0.00
[   5s] threads: 8, tps: 0.00, reads: 0.00, writes: 1312.01, response time: 11.27ms (95%), errors: 0.00, reconnects:  0.00
[   6s] threads: 8, tps: 0.00, reads: 0.00, writes: 898.92, response time: 11.64ms (95%), errors: 0.00, reconnects:  0.00
[   7s] threads: 8, tps: 0.00, reads: 0.00, writes: 1541.71, response time: 10.59ms (95%), errors: 0.00, reconnects:  0.00
[   8s] threads: 8, tps: 0.00, reads: 0.00, writes: 1551.35, response time: 11.48ms (95%), errors: 0.00, reconnects:  0.00
[   9s] threads: 8, tps: 0.00, reads: 0.00, writes: 923.07, response time: 10.40ms (95%), errors: 0.00, reconnects:  0.00
[  10s] threads: 8, tps: 0.00, reads: 0.00, writes: 1273.99, response time: 11.01ms (95%), errors: 0.00, reconnects:  0.00
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           12551
        other:                           0
        total:                           12551
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 12551  (1254.65 per sec.)
    other operations:                    0      (0.00 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
General statistics:
    total time:                          10.0036s
    total number of events:              12551
    total time taken by event execution: 79.9602s
    response time:
         min:                                  1.13ms
         avg:                                  6.37ms
         max:                                389.52ms
         approx.  95 percentile:              11.68ms
Threads fairness:
    events (avg/stddev):           1568.8750/4.81
    execution time (avg/stddev):   9.9950/0.00

So, we see ~1200 tps on an update test in our little cluster, not too bad!

What it is not

This wizard is a handy way to get a cluster set up for some experimentation or testing. However, it is not a managed service: after you launch it, you're responsible for maintenance, tuning, etc. You could use it for production, but you may want some further fine-tuning, operational procedures, etc. All of this is absolutely something Percona can help you with.

The post Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process appeared first on MySQL Performance Blog.

Nov
20
2014
--

Microsoft Promises Managed, Web-Based App Stores For The Enterprise With Windows 10

Microsoft previously announced that Windows 10, its next operating system, will contain a single app store that will work across device classes. Don’t make the Lord of the Rings joke. The company also previously promised that organizations will be able to curate their own app experience for their users. Today, it announced that a “Web-based” store will be an option for… Read More

Nov
20
2014
--

Sys Schema for MySQL 5.6 and MySQL 5.7

Performance Schema (P_S) has been available since MySQL 5.5, more than 4 years ago. It is still rare to see production servers with P_S enabled, even with MySQL 5.6, where the feature is enabled by default. There have been several complaints, for example about the overhead (a fix for which is a work in progress) and about the ease of use: 52 tables to query and 31 configuration variables are enough to scare people.

There is a solution for the second problem, the usability. Its name is “sys schema”. It is a collection of views, functions and procedures that make P_S more user-friendly.

Installation

If you are a MySQL Workbench user, the installation is pretty easy because sys schema is already bundled: you just need to install it on the server. Click on “Performance – Performance Reports” and there you will find the “Install Helper” button that will install sys schema.


If you don’t use MySQL Workbench you need to download sys_56.sql or sys_57.sql (depending on whether you use 5.6 or 5.7) from the GitHub repository. Then, just import the SQL file as usual:

mysql -u root -p < ./sys_56.sql

Usage

After the import, you will have a new “sys” schema with some very descriptive table names. Let’s see an example. Do you want to know which tables are using most of the InnoDB buffer memory? Easy:

mysql> select * from sys.innodb_buffer_stats_by_table;
+---------------+--------------------+-----------+-----------+-------+--------------+-----------+-------------+
| object_schema | object_name        | allocated | data      | pages | pages_hashed | pages_old | rows_cached |
+---------------+--------------------+-----------+-----------+-------+--------------+-----------+-------------+
| test          | t                  | 63.61 MiB | 58.06 MiB |  4071 |         4071 |      4071 |     2101222 |
| InnoDB System | SYS_FOREIGN        | 32.00 KiB | 0 bytes   |     2 |            2 |         2 |           0 |
| InnoDB System | SYS_TABLES         | 32.00 KiB | 1.11 KiB  |     2 |            2 |         2 |          10 |
| InnoDB System | SYS_COLUMNS        | 16.00 KiB | 4.68 KiB  |     1 |            1 |         1 |          71 |
| InnoDB System | SYS_DATAFILES      | 16.00 KiB | 324 bytes |     1 |            1 |         1 |           6 |
| InnoDB System | SYS_FIELDS         | 16.00 KiB | 722 bytes |     1 |            1 |         1 |          17 |
| InnoDB System | SYS_INDEXES        | 16.00 KiB | 836 bytes |     1 |            1 |         1 |          12 |
| InnoDB System | SYS_TABLESPACES    | 16.00 KiB | 318 bytes |     1 |            1 |         1 |           6 |
| mysql         | innodb_index_stats | 16.00 KiB | 274 bytes |     1 |            1 |         1 |           3 |
| mysql         | innodb_table_stats | 16.00 KiB | 53 bytes  |     1 |            1 |         1 |           1 |
+---------------+--------------------+-----------+-----------+-------+--------------+-----------+-------------+

Pretty easy and useful, right? You can also find out which database is using the most memory in the buffer pool by querying innodb_buffer_stats_by_schema.
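
The query is symmetrical to the one above; a sketch (the output will of course depend on your server's workload):

mysql> select * from sys.innodb_buffer_stats_by_schema;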

For each view there is a similar one whose name starts with x$. For example, you have user_summary_by_file_io and x$user_summary_by_file_io. The difference is that the first view has human-readable output values while the second one has the raw values. Let’s see an example:

mysql> select * from sys.user_summary_by_file_io;
+------------+-------+------------+
| user       | ios   | io_latency |
+------------+-------+------------+
| root       | 19514 | 2.87 s     |
| background |  5916 | 1.91 s     |
+------------+-------+------------+
2 rows in set (0.00 sec)
mysql> select * from sys.x$user_summary_by_file_io;
+------------+-------+---------------+
| user       | ios   | io_latency    |
+------------+-------+---------------+
| root       | 19514 | 2871847094292 |
| background |  5916 | 1905079715132 |
+------------+-------+---------------+

For humans, at least for me, it is easier to read seconds rather than picoseconds :)
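
If you ever need to convert a raw value from an x$ view yourself, sys schema also ships helper functions for that. For instance, format_time() turns picoseconds into the human-readable form:

mysql> SELECT sys.format_time(2871847094292);

This should give back the 2.87 s shown by user_summary_by_file_io above.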

There are multiple views with very descriptive names:

– io_by_thread_by_latency
– schema_unused_indexes
– statements_with_temp_tables
– statements_with_errors_or_warnings
– user_summary_by_statement_type
– waits_by_host_by_latency

There are a lot more, and they are explained with examples in the project’s README file.
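
For instance, finding indexes that have not been used since the last server restart is a single query away; a sketch:

mysql> select * from sys.schema_unused_indexes;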

Configuration

On the MySQL side nothing special is needed. Just enable performance_schema:

performance_schema="on"

sys schema also provides some procedures to enable/disable some features. For example:

– ps_setup_enable_background_threads
– ps_setup_enable_consumers
– ps_setup_enable_instrument
and so on…

We also have the same procedures with “disable”. After you have made the changes, you can save them by calling ps_setup_save() and reload them later by calling ps_setup_reload_saved(). If you want to reset the configuration to the default values, just call ps_setup_reset_to_default().

For example, we can check that some consumers are disabled. It is possible to enable them and save the configuration:

mysql> CALL sys.ps_setup_show_disabled_consumers();
+--------------------------------+
| disabled_consumers             |
+--------------------------------+
| events_stages_current          |
| events_stages_history          |
| events_stages_history_long     |
| events_statements_history      |
| events_statements_history_long |
| events_waits_current           |
| events_waits_history           |
| events_waits_history_long      |
+--------------------------------+
mysql> CALL sys.ps_setup_enable_consumers('events');
+---------------------+
| summary             |
+---------------------+
| Enabled 8 consumers |
+---------------------+
mysql> CALL sys.ps_setup_show_disabled_consumers();
Empty set (0.00 sec)
mysql> CALL sys.ps_setup_save(TRUE);

Conclusion

Performance Schema is very useful when we want to know what is happening inside the database. It is getting more features with each new GA release and will probably become the single point of information in the near future. Now, thanks to sys schema, it is also easy to use.

The post Sys Schema for MySQL 5.6 and MySQL 5.7 appeared first on MySQL Performance Blog.

Nov
20
2014
--

Deep Identity Raises $470,000 To Help Companies Manage Access On Their Networks

network Deep Identity, a Singapore-based startup that helps companies manage access to their internal networks and services, has raised US$470,000 (SG$589,000) to expand its services into the cloud and open a presence in Europe. Read More

Nov
19
2014
--

Searching For Truths In Big, Enormous, Massive Data

We have so much data, mountains and mountains of data. In fact, there’s so much, the name ‘Big Data’ doesn’t even do it justice. Maybe we should call it humongous data or at least enormous data. Yet for all this data, it seems we are lost in those piles of data, no closer to the answers that are supposed to solve our most fundamental business and social problems.… Read More

Nov
19
2014
--

Talking Drupal #74 Running a Drupal Camp

Topics

  • NewdCamp overall results?
  • Team?
  • Handling the money?
  • Selecting a Venue?
  • Sponsors?
  • Marketing?
  • Website?
  • Things we forgot?

Hosts

  • Stephen Cross – www.ParallaxInfoTech.com @stephencross
  • Jason Pamental – www.hwdesignco.com @jpamental
  • John Picozzi – www.oomphinc.com @johnpicozzi
  • Nic Laflin – www.nLightened.net @nicxvan

Guests

  • Ellen Diamond – www.hwdesignco.com @ellendiamond
  • Susan Krushinsky – www.HomesBySusanK.com @HomesBySusanK
