Apr 29, 2016

Microsoft’s new tool for building line-of-business apps is now in public preview

Microsoft PowerApps allows anybody to build basic business apps without having to touch any code. These apps can run on the web and on mobile (through the PowerApps apps for iOS and Android). Microsoft first announced a private preview of this project last November but starting today, it’s open for anybody who wants to give it a try. Building apps in PowerApps is mostly a… Read More

Apr 29, 2016

IBM raises its blockchain game with secure cloud services and Docker integration

Bitcoin may still be the most famous application of blockchain technology, but the distributed, encrypted database architecture is now being applied to a range of other services, from different kinds of (non-Bitcoin) financial transactions to anything else that requires secure tracking. And today, IBM laid out its claim for some of that new business: the IT giant announced a new set of… Read More

Apr 28, 2016

Amazon’s Web Services are shining in its latest earnings beat

Amazon posted its first-quarter earnings today, and boy did they not disappoint: it beat what analysts were expecting on nearly all fronts, and the stock is up more than 12% after its huge beat. The big one here, in particular, is Amazon Web Services. AWS has become a go-to for most businesses, so it’s not surprising that it’s seeing that segment continue to grow steadily. The… Read More

Apr 28, 2016

Brought to you by the letters A and I: Sesame Workshop, IBM developing edtech for pre-schoolers

If you think Tickle Me Elmo is freaky already, wait until those talking monsters can beat you at Jeopardy!. Sesame Workshop, the educational nonprofit behind Sesame Street and its iconic characters, this week announced a partnership with IBM Watson to develop edtech for pre-school children. Sesame Workshop is no newcomer to tech. The organization previously set up a venture fund in… Read More

Apr 28, 2016

New Infosys AI tool could transform the way companies maintain complex systems

Like so many organizations today, Infosys, the Indian consulting giant, is a company in transition. For years it has made a good living helping customers manage legacy tools, but CEO Vishal Sikka, who took over 21 months ago, saw a shifting landscape and began implementing new systems immediately. One of those changes involved developing a new artificial intelligence system they have… Read More

Apr 28, 2016

Oracle buys Textura for $663M to build up its construction IT business

Oracle today announced that it is making an acquisition to (literally and figuratively) build out its Primavera project management and billing business: it is buying Textura, a provider of cloud-based contract and payment management solutions specifically for the construction industry. The all-cash deal, Oracle said, is worth $663 million, net of Textura’s existing cash. Oracle said… Read More

Apr 27, 2016

Upgrading to MySQL 5.7, focusing on temporal types

In this post, we’ll discuss how MySQL 5.7 handles the old temporal types during an upgrade.

MySQL changed the temporal types in MySQL 5.6.4 and introduced a new feature: fractional seconds, down to microsecond resolution, for the TIME, TIMESTAMP and DATETIME types. Obviously, this meant a change in the storage format, but why is this important?
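To make the new precision concrete, here is a minimal sketch (the table and column names are invented for illustration) of declaring and using fractional seconds:

mysql test -e "
  CREATE TABLE fractional_demo (created_at DATETIME(6), took TIME(3));  -- up to 6 fractional digits
  INSERT INTO fractional_demo VALUES (NOW(6), '00:00:01.250');
  SELECT * FROM fractional_demo;"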

Are they converted automatically to the new format?

If we had tables in MySQL 5.5 that used TIME, TIMESTAMP or DATETIME, are these fields going to be converted to the new format when upgrading to 5.6? The answer is “NO.” Even if we run mysql_upgrade, it does not warn us about the old format, and if we check the MySQL error log, we cannot find anything regarding this. Newly created tables, however, will use the new format, so we end up with two different formats of temporal fields.

How can we find these tables?

The following query gives us a summary of the different formats in use:

SELECT CASE isc.mtype
 WHEN '6' THEN 'OLD'
 WHEN '3' THEN 'NEW'
 END FORMAT,
 count(*) TOTAL
FROM information_schema.tables AS t
INNER JOIN information_schema.columns AS c ON c.table_schema = t.table_schema
AND c.table_name = t.table_name
LEFT OUTER JOIN information_schema.innodb_sys_tables AS ist ON ist.name = concat(t.table_schema,'/',t.table_name)
LEFT OUTER JOIN information_schema.innodb_sys_columns AS isc ON isc.table_id = ist.table_id
AND isc.name = c.column_name
WHERE c.column_type IN ('time','timestamp','datetime')
 AND t.table_schema NOT IN ('mysql','information_schema','performance_schema')
 AND t.table_type = 'base table'
 AND (t.engine = 'innodb')
GROUP BY isc.mtype;

+--------+-------+
| FORMAT | TOTAL |
+--------+-------+
| NEW    | 1     |
| OLD    | 9     |
+--------+-------+

Or we can use the show_old_temporals variable, which will highlight the old formats in the output of show create table.
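For example, a sketch of enabling it for the current session and inspecting the mytbl table whose definition is shown below (the schema name and privileges are assumptions):

mysql test -e "SET SESSION show_old_temporals = ON; SHOW CREATE TABLE mytbl\G"  # the flag is OFF by default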

CREATE TABLE `mytbl` (
  `ts` timestamp /* 5.5 binary format */ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `dt` datetime /* 5.5 binary format */ DEFAULT NULL,
  `t` time /* 5.5 binary format */ DEFAULT NULL
) DEFAULT CHARSET=latin1

MySQL can handle both formats, but with the old format you cannot use microseconds, and the old DATETIME takes more space on disk.

Can I upgrade to MySQL 5.7?

Of course you can! But when mysql_upgrade runs, it is going to convert the old fields into the new format by default. This basically means an alter table on every single table that contains one of the three types.

Depending on the number and size of the tables, this could take hours, so you may need to do some planning.
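To get a rough idea of how much data would have to be rewritten, something like the following can help with that planning (a sketch only: it lists every InnoDB table with a temporal column, which the mtype query above can narrow down to the old-format ones):

mysql -e "
  SELECT DISTINCT t.table_schema, t.table_name,
         ROUND((t.data_length + t.index_length)/1024/1024) AS size_mb
  FROM information_schema.tables AS t
  JOIN information_schema.columns AS c
    ON c.table_schema = t.table_schema AND c.table_name = t.table_name
  WHERE c.column_type IN ('time','timestamp','datetime')
    AND t.engine = 'InnoDB'
    AND t.table_schema NOT IN ('mysql','information_schema','performance_schema');"

When mysql_upgrade does run the conversion, the rebuild looks like this: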

....
test.t1
error : Table rebuild required. Please do "ALTER TABLE `t1` FORCE" or dump/reload to fix it!
test.t2
error : Table rebuild required. Please do "ALTER TABLE `t2` FORCE" or dump/reload to fix it!
test.t3
error : Table rebuild required. Please do "ALTER TABLE `t3` FORCE" or dump/reload to fix it!
Repairing tables
mysql.proxies_priv OK
`test`.`t1`
Running : ALTER TABLE `test`.`t1` FORCE
status : OK
`test`.`t2`
Running : ALTER TABLE `test`.`t2` FORCE
status : OK
`test`.`t3`
Running : ALTER TABLE `test`.`t3` FORCE
status : OK
Upgrade process completed successfully.
Checking if update is needed.

Can we avoid this at upgrade?

We can run alter table or use pt-online-schema-change (to avoid locking) before an upgrade, but even without these preparations we can still avoid compatibility issues.
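For example, pre-converting one of the tables flagged earlier could look like this (a sketch; the table name and DSN come from the test examples above and would need adjusting):

# straight rebuild; converts the temporal columns in place but locks the table while it runs
mysql -e "ALTER TABLE test.t1 FORCE;"
# the same rebuild online, with Percona Toolkit
pt-online-schema-change --alter "ENGINE=InnoDB" D=test,t=t1 --execute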

My colleague Daniel Guzman Burgos pointed out that mysql_upgrade has an option called upgrade-system-tables. This will only upgrade the system tables, and nothing else.
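A sketch of that invocation (the credentials are placeholders):

# upgrade only the mysql schema; user tables, and their old temporal columns, are left untouched
mysql_upgrade --upgrade-system-tables -u root -p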

Can we still write these fields?

The following query returns the schema and the table names that still use the old formats.

SELECT CASE isc.mtype
           WHEN '6' THEN 'OLD'
           WHEN '3' THEN 'NEW'
       END FORMAT,
       t.table_schema,
       t.table_name
FROM information_schema.tables AS t
INNER JOIN information_schema.columns AS c ON c.table_schema = t.table_schema
AND c.table_name = t.table_name
LEFT OUTER JOIN information_schema.innodb_sys_tables AS ist ON ist.name = concat(t.table_schema,'/',t.table_name)
LEFT OUTER JOIN information_schema.innodb_sys_columns AS isc ON isc.table_id = ist.table_id
AND isc.name = c.column_name
WHERE c.column_type IN ('time','timestamp','datetime')
    AND t.table_schema NOT IN ('mysql','information_schema','performance_schema')
    AND t.table_type = 'base table'
    AND (t.engine = 'innodb');

+--------+--------------+------------+
| FORMAT | table_schema | table_name |
+--------+--------------+------------+
| OLD    | test         | t          |
| OLD    | test         | t          |
| OLD    | test         | t          |
| NEW    | sys          | sys_config |
+--------+--------------+------------+
4 rows in set (0.03 sec)
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.11-4  |
+-----------+
1 row in set (0.00 sec)

As we can see, we’re using 5.7 and table “test.t” still has the old format.

The schema:

CREATE TABLE `t` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `t1` time DEFAULT NULL,
  `t2` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `t3` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=latin1

mysql> select * from t;
+----+----------+---------------------+---------------------+
| id |    t1    |       t2            |        t3           |
+----+----------+---------------------+---------------------+
| 1  | 20:28:00 | 2016-04-09 01:41:58 | 2016-04-23 22:22:01 |
| 2  | 20:28:00 | 2016-04-09 01:41:59 | 2016-04-23 22:22:02 |
| 3  | 20:28:00 | 2016-04-09 01:42:01 | 2016-04-23 22:22:03 |
| 4  | 20:28:00 | 2016-04-09 01:42:03 | 2016-04-23 22:22:04 |
| 5  | 20:28:00 | 2016-04-09 01:42:08 | 2016-04-23 22:22:05 |
+----+----------+---------------------+---------------------+

Let’s try to insert a new row:

mysql> insert into `t` (t1,t3) values ('20:28','2016:04:23 22:22:06');
Query OK, 1 row affected (0.01 sec)
mysql> select * from t;
+----+----------+---------------------+---------------------+
| id |    t1    |         t2          |          t3         |
+----+----------+---------------------+---------------------+
| 1  | 20:28:00 | 2016-04-09 01:41:58 | 2016-04-23 22:22:01 |
| 2  | 20:28:00 | 2016-04-09 01:41:59 | 2016-04-23 22:22:02 |
| 3  | 20:28:00 | 2016-04-09 01:42:01 | 2016-04-23 22:22:03 |
| 4  | 20:28:00 | 2016-04-09 01:42:03 | 2016-04-23 22:22:04 |
| 5  | 20:28:00 | 2016-04-09 01:42:08 | 2016-04-23 22:22:05 |
| 6  | 20:28:00 | 2016-04-09 01:56:38 | 2016-04-23 22:22:06 |
+----+----------+---------------------+---------------------+
6 rows in set (0.00 sec)

It was inserted without a problem, and we can’t see any related info/warnings in the error log.

Does replication work?

In many scenarios, when you are upgrading a replica set, the slaves are upgraded first. But will replication work? The short answer is “yes.” I configured row-based replication between MySQL 5.6 and 5.7. The 5.6 instance was the master, and it had all the temporal types in the old format. On 5.7, I had both new and old formats.

I replicated from old format to old format, and from old format to new format, and both are working.
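A quick way to sanity-check such a setup (a sketch; the hostnames are illustrative) is to confirm row-based logging on the master and healthy replication threads on the 5.7 slave:

# on the 5.6 master: confirm row-based binary logging
mysql -h master56 -e "SHOW VARIABLES LIKE 'binlog_format';"
# on the 5.7 slave: check the IO/SQL threads and the lag
mysql -h slave57 -e "SHOW SLAVE STATUS\G" | grep -E 'Running:|Seconds_Behind'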

Conclusion

Before upgrading to MySQL 5.7, tables should be altered to use the new format. If that isn’t done, the upgrade is still possible without altering all the tables; the drawbacks are that you cannot use microseconds and the old format takes more space on disk. If you have to upgrade to 5.7 first, you can still change the format later using alter table or pt-online-schema-change.

 

Apr 27, 2016

Eden on-demand office management service launches in Oakland

Eden, the on-demand tech support company turned office support service, has today announced the launch of the platform in Oakland. That means that Eden’s service now covers all of the Bay Area, from Oakland to San Francisco to the Peninsula, including Palo Alto and Mountain View. Eden launched almost a year ago out of Y Combinator accelerator. Back then, Eden offered on-demand tech help… Read More

Apr 27, 2016

Genetics startup Twist Bioscience is working with Microsoft to store the world’s data in DNA

Twist Bioscience, a startup making and using synthetic DNA to store digital data, just struck a contract with Microsoft and the University of Washington to encode vast amounts of information on synthetic genes. Big data means business, and a company able to gather a lot of it is very valuable to investors and stockholders. But that data needs to be stored somewhere and can cost a lot for… Read More

Apr 26, 2016

How We Made MySQL Great Again, or Upgrading MySQL with Orchestrator

In this blog post, we’ll discuss upgrading MySQL with Orchestrator.

I recently had a client, Life360, that wanted to upgrade from Percona Server 5.5 to Percona Server 5.6 and implement GTID in their high-transaction environment. They had co-masters and multiple read slaves.

Orchestrator made this job much easier for us. My colleague, Tibi, recently posted about Orchestrator here and here.

Daniel from Life360 saw Orchestrator and was very interested. So here is how he set up Orchestrator, in his own words:

I did some initial testing with the vagrant boxes provided in the Orchestrator repo, to see how to configure the agents and get the Orchestrator server to do what we want.

I then moved on to installing the Orchestrator server, with the Orchestrator backend on RDS, and deploying the clients on the slaves and masters of our MySQL instances in the Amazon VPC.

Once the server setup was done, the clients were auto-detected through CNAME discovery of the masters, and the agents talked to the server (it took a while, as CNAMEs weren’t working as expected, but that’s fixed in the new server version).

We were pretty amazed at the number of actions you can do through Orchestrator itself, such as: moving slaves to a different master through drag and drop, enabling GTID on a node with the push of a button, setting up GTID-based failover, taking LVM snapshots using Orchestrator Agent, etc.
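Most of those actions are also exposed through the orchestrator command-line client. As a rough sketch (the hostnames are illustrative, not Life360's), the reparenting and read-only toggles look something like this:

# reparent a slave under a different master
orchestrator -c move-below -i db-slave-3:3306 -d db-master-2:3306
# print the replication tree as Orchestrator sees it
orchestrator -c topology -i db-master-1:3306
# flip read_only on an instance
orchestrator -c set-read-only -i db-master-1:3306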

We went ahead and tested the master change via drag and drop, and after a few successful attempts, we even brought it back to where it was initially. After those tests, we were pretty confident that we could leverage Orchestrator as one of our main tools to assist in the coming upgrade.

Here is a screenshot of the initial setup:

[Screenshot: the initial replication topology in Orchestrator]

Manjot: Once Daniel had Orchestrator set up, he wanted to leverage it to help with the MySQL upgrade. We set out to create a plan that worked within his constraints and still kept best practices in mind.

First, we installed Percona Server 5.6 fresh on our dedicated backup slave. That first 5.6 slave was created with MyDumper to achieve forward compatibility and not have any legacy tablespaces. Since MyDumper was already installed with the Percona Backup Service that Life360 has, this was fairly easy to accomplish.

The MyDumper slave rebuild works in the following way:

To take a mydumper backup:

  1. Go to your desired backups directory
  2. Install mydumper (sudo apt-get install mydumper)
  3. mydumper -t 8 -L mydumper.log --compress

To restore:

  1. Make sure MyDumper is installed: sudo apt-get install mydumper
  2. Copy the MyDumper backups over to a backups dir
  3. Export your BACKUP_DIR as env var
  4. Run this to restore with MyLoader (from https://gist.github.com/Dnile/4658b338d4a101cbe2eeb5080ebddf8e):
    #!/usr/bin/env sh
    # restore a mydumper backup with myloader: schemas first, then data
    export DESTHOST=127.0.0.1
    export BACKUP_DIR=${BACKUP_DIR:-/vol_mysql/backups}   # honours the variable exported in step 3
    cd "$BACKUP_DIR"
    # start a throw-away instance without grant tables so the restore needs no privileges
    mysqld --skip-grant-tables &
    # recreate each schema and load its pre- and post-schema dumps
    for i in `ls -1 *-schema.dump.gz | cut -d'-' -f1`; do
      mysql -h $DESTHOST -e "CREATE DATABASE IF NOT EXISTS $i"
      zcat $i-schema.dump.gz | mysql -h $DESTHOST $i
      zcat $i-schema-post.dump.gz | mysql -h $DESTHOST $i
    done
    /usr/bin/myloader --host=$DESTHOST --directory=$BACKUP_DIR --enable-binlog --threads=10 --queries-per-transaction=20 -v 3
    chown -R mysql:mysql /var/lib/mysql/

Once the first 5.6 slave was caught up, we used Xtrabackup to back up 5.6 and then restored it to each slave, cycling them out of the read slave pool one at a time.
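A sketch of that kind of slave rebuild with Xtrabackup (hostnames and paths are illustrative; in practice you would also seed the replication coordinates recorded by the backup):

# on the upgraded 5.6 slave: stream a backup straight to the host being rebuilt
innobackupex --stream=xbstream /tmp | ssh db-slave-2 "xbstream -x -C /var/lib/mysql"
# then, on db-slave-2: apply the redo log and fix ownership before starting MySQL
innobackupex --apply-log /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql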

Once all the slaves were upgraded, we created a new 5.6 master and had it replicate off our primary 5.5 master.

Then we moved all of the slaves to replicate off the new 5.6 master.

Life360 had long cron jobs that ran on the second 5.5 master. We moved the cron applications to write to the primary 5.5 master, and locked all tables. We then stopped replication on the second co-master. Daniel stopped MySQL and decommissioned it.

We then moved all application writes to the new 5.6 master. While Orchestrator can use external scripts to move IPs, we used a manual process here to change application DSNs and HAProxy configuration.

On the 5.5 master that remained, we used Orchestrator to set it to read only.

[Screenshot: the remaining 5.5 master set to read-only in Orchestrator]

Daniel says this didn’t do a whole lot to get rid of connections that were still open on this server.

On the new master, we used the stop slave and reset slave buttons in the Orchestrator panel so it would no longer slave from the old master.
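For reference, the SQL equivalent of those two buttons, run on the new 5.6 master, would be something like:

# stop replication and discard the old master's coordinates
mysql -e "STOP SLAVE; RESET SLAVE ALL;"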

Once some of the thousands of connections had moved to the new master, we stopped MySQL on the 5.5 master, which took care of the rest and the application “gracefully” reconnected to the new 5.6 master.

There was some write downtime, as some connections did not drop off until they were forced to because php-fpm refused to let go. There is also always a high volume of transactions in this environment.

At this point our topology looks like this (ignore the globe icons for now):

[Screenshot: the post-upgrade replication topology in Orchestrator]

But, as always, Daniel wanted MOAR. It was time for GTID. While we could have done this during the upgrade, Life360 wanted to manage risk and not make too many production changes at one time.

We followed Percona’s guide, Online GTID Deployment, but used Orchestrator to shuffle the old and new masters and toggle read_only on and off. This made our job a lot easier and faster, and saved us from any downtime.

The globes in the topology screenshot above show that the slaves are now using GTID replication.

Orchestrator makes upgrades and changes much easier than before; just use caution and understand what it is doing in the background.

 
