Jun 30, 2016

Zenefits halves its previous valuation to $2B to head off investor lawsuits

Zenefits is executing a change in its current ownership structure that will increase the overall ownership of the company for late-stage investors; it’s a move that revalues the company’s Series C round at $2 billion and looks to placate investor concerns over the company’s regulatory investigations. As part of accepting the new ownership changes, the investors… Read More

Jun 30, 2016

Rescuing a crashed pt-online-schema-change with pt-archiver

This article discusses how to salvage a crashed pt-online-schema-change by leveraging pt-archiver and executing queries to ensure that the data gets accurately migrated. I will show you how to continue the data copy process, and how to safely close out the pt-online-schema-change via manual operations such as RENAME TABLE and DROP TRIGGER commands. The normal process to recover from a crashed pt-online-schema-change is to drop the triggers on your original table and drop the new table created by the script. Then you would restart pt-online-schema-change. In this case, that wasn’t possible.
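For reference, that normal cleanup is only a few statements. Here is a sketch using the default names pt-online-schema-change generated for the table in this story (they appear again later in the post):

DROP TRIGGER IF EXISTS pt_osc_website_largetable_ins;
DROP TRIGGER IF EXISTS pt_osc_website_largetable_upd;
DROP TRIGGER IF EXISTS pt_osc_website_largetable_del;
DROP TABLE IF EXISTS __largetable_new;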

A customer recently needed to add a primary key column to a very busy table (with around 200 million rows). The table only had a unique key on one column (called our_id below). The customer had concerns about slave lag, and wanted to ensure there was little or no lag. This, as well as the fact that you can’t add a primary key as an online DDL in MySQL and Percona Server 5.6, meant the obvious answer was using pt-online-schema-change.

Due to the sensitivity of their environment, they could only afford one short window for the initial metadata locks, and needed to manually do the drop swap that pt-online-schema-change normally does automatically. This is where --no-drop-triggers and --no-swap-tables come in. The triggers will theoretically run indefinitely to keep the new and old tables in sync once pt-online-schema-change is complete. We crafted the following command:

pt-online-schema-change \
--execute \
--alter-foreign-keys-method=auto \
--max-load Threads_running=30 \
--critical-load Threads_running=55 \
--check-slave-lag mysql-slave1,mysql-slave2,mysql-slave3 \
--max-lag=10 \
--chunk-time=0.5 \
--set-vars=lock_wait_timeout=1 \
--tries="create_triggers:10:2,drop_triggers:10:2" \
--no-drop-new-table \
--no-drop-triggers \
--no-swap-tables \
--chunk-index "our_id" \
--alter "ADD newcol BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST" \
--nocheck-plan \
D=website,t=largetable

You can find the specifics of the other flags, and why we used them, in the Percona Toolkit Manual.

Once we ran the command, the customer got concerned because their monitoring tools weren’t showing any work being done. This is by design: pt-online-schema-change throttles itself to avoid hurting your running environment. The customer ran strace -p against the process to verify it was working. This wasn’t a great choice, as it crashed pt-online-schema-change.

At this point, we knew that the application (and management) would not allow us to take new metadata locks to create triggers on the table, as we had passed our metadata lock window.

So how do we recover?

First, let’s start with a clean slate. We issued the following commands to create a new table, where __largetable_new is the table created by pt-online-schema-change:

CREATE TABLE mynewlargetable LIKE __largetable_new;
RENAME TABLE __largetable_new TO __largetable_old, mynewlargetable TO __largetable_new;
DROP TABLE __largetable_old;

Now the triggers on the original table, largetable, are updating the new, empty table that has our new schema.
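To make this concrete, the triggers pt-online-schema-change creates are roughly of the following form. This is only a sketch: the real triggers enumerate every column of the table, and payload here is a hypothetical stand-in for the remaining columns.

CREATE TRIGGER pt_osc_website_largetable_ins AFTER INSERT ON largetable
FOR EACH ROW REPLACE INTO __largetable_new (our_id, payload) VALUES (NEW.our_id, NEW.payload);

CREATE TRIGGER pt_osc_website_largetable_upd AFTER UPDATE ON largetable
FOR EACH ROW REPLACE INTO __largetable_new (our_id, payload) VALUES (NEW.our_id, NEW.payload);

CREATE TRIGGER pt_osc_website_largetable_del AFTER DELETE ON largetable
FOR EACH ROW DELETE IGNORE FROM __largetable_new WHERE __largetable_new.our_id <=> OLD.our_id;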

Now let’s address the issue of actually moving the data that’s already in largetable to __largetable_new. This is where pt-archiver comes in. We crafted the following command:

pt-archiver \
--max-lag=10 \
--source D=website,t=largetable,i=our_id \
--dest D=website,t=__largetable_new \
--where "1=1" \
--no-check-charset \
--no-delete \
--no-check-columns \
--txn-size=500 \
--limit=500 \
--ignore \
--statistics

We used pt-archiver to slowly and non-destructively copy records to the new table, keyed on our_id, with WHERE 1=1 (all records). We then periodically checked the MySQL data directory over the course of a day with ls -l to compare table sizes.
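Assuming innodb_file_per_table is enabled and a default data directory, that spot check is a one-liner:

ls -l /var/lib/mysql/website/largetable.ibd /var/lib/mysql/website/__largetable_new.ibd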

Once the table files were close to the same size, we ran counts on the tables. We noticed something interesting: the new table had thousands more records than the original table.
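The counts themselves were plain COUNT(*) queries:

SELECT COUNT(*) FROM largetable;
SELECT COUNT(*) FROM __largetable_new;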

This concerned us. We wondered if our “hack” was a mistake. At this point we ran some verification queries:

SELECT MIN(our_id) FROM __largetable_new;
SELECT MAX(our_id) FROM __largetable_new;
SELECT MIN(our_id) FROM largetable;
SELECT MAX(our_id) FROM largetable;

We learned that the new table contained older records that no longer existed in the live table. This means that pt-archiver and the DELETE trigger may have missed each other (i.e., pt-archiver was already inside a transaction, but hadn’t written its records to the new table until after the DELETE trigger had already fired).

We verified with more queries:

SELECT COUNT(*) FROM largetable l WHERE NOT EXISTS (SELECT our_id FROM __largetable_new n WHERE n.our_id=l.our_id);

The count was zero: no records in the original table were missing from the new one.

SELECT COUNT(*) FROM __largetable_new n WHERE NOT EXISTS (SELECT our_id FROM largetable l WHERE n.our_id=l.our_id);

Our result showed 4,000 extra records in the new table: rows that had since been deleted from the original table. We ran other queries based on their data to verify this as well.

This wasn’t a huge issue for our application, and it could have been easily dealt with using a simple DELETE query based on the unique index (i.e., if it doesn’t exist in the original table, delete it from the new one).
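We didn’t need it here, but that cleanup could look roughly like this (a sketch; it should run before the swap, while the triggers still keep the tables in sync, and on a table this size you would want to batch it):

DELETE n
FROM __largetable_new n
LEFT JOIN largetable l ON n.our_id = l.our_id
WHERE l.our_id IS NULL;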

Now to complete the pt-online-schema-change actions. All we need to do is the atomic rename or drop swap. This should be done as soon as possible to avoid running in a degraded state, where all writes to the old table are duplicated on the new one.

RENAME TABLE largetable TO __largetable_old , __largetable_new TO largetable;

Then drop the triggers for safety:

DROP TRIGGER pt_osc_website_largetable_ins;
DROP TRIGGER pt_osc_website_largetable_upd;
DROP TRIGGER pt_osc_website_largetable_del;

At this point it is safer to wait for the old table to clear out of the buffer pool before dropping it, just to ensure there is no impact on the server (maybe a week to be safe). You can check information_schema for a more accurate reading on this:

SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME = '`website`.`__largetable_old`';
+----------+
| count(*) |
+----------+
|   279175 |
+----------+
1 row in set (8.94 sec)

Once this count reaches 0, you can issue:

DROP TABLE __largetable_old;

Jun 30, 2016

Aircall launches mobile apps for its cloud phone system for teams

Aircall just launched its mobile apps on iOS and Android out of beta. The company announced the first beta of its mobile app at TechCrunch Disrupt in San Francisco. The startup is bringing all the core features of the service to its mobile app. Read More

Jun 30, 2016

IBM and Cisco team up on enterprise collaboration to stave off rivals like Slack and Microsoft

Earlier this month, IBM and Cisco announced they would work together to integrate IBM’s Watson artificial intelligence technology into edge routers from Cisco, and today the two IT giants are deepening their partnership again, as they aim for a bigger piece of the enterprise collaboration market being chased by the likes of fast-growing, popular upstarts like Slack, large… Read More

Jun 30, 2016

SmartRecruiters raises $30 million for hiring software

Because managing a large pool of job applicants can be cumbersome, SmartRecruiters thinks its software has the right tools to keep you organized in your candidate search. The team counts clients like Square, Atlassian and Equinox gyms, who use SmartRecruiters to manage job postings and communicate about prospective employees. Now SmartRecruiters is arming itself with a $30 million funding… Read More

Jun 29, 2016

Talking Drupal #123 – Paragraphs

In episode #123 we talk about the red-hot module, Paragraphs. www.talkingdrupal.com/123

TOPICS:

  • What is Paragraphs
  • Why use it?
  • Features
  • Use Cases
  • Supporting Modules

MODULES:

RESOURCES:

HOSTS:

  • Stephen Cross – www.ParallaxInfoTech.com @stephencross
  • John Picozzi – www.oomphinc.com @johnpicozzi
  • Nic Laflin – www.nLightened.net @nicxvan

Jun 29, 2016

2016 MySQL User Group Leaders Summit

In this post, I’ll share my experience attending the annual MySQL User Group Leaders Summit in Bucharest, Romania.

The MySQL User Group Leaders Summit gathers together as many of the global MySQL user group leaders as possible. At the summit, we discuss how we can better serve our local communities. This year, the focus was primarily on cloud technologies.

As the Azerbaijan MySQL User Group leader, I felt a keen responsibility to go. I wanted to represent our group and learn as much as possible to take back with me. Mingling and having conversations with other group leaders gives me more ideas about how to spread the MySQL word!

The Conference

I attended three MySQL presentations:

  • Guided tour on the MySQL source code. In this session, we reviewed the layout of the MySQL code base, roughly following the query execution path. We also covered how to extend MySQL with both built-in and pluggable add-ons.
  • How profiling SQL works in MySQL. This session gave an overview of the performance monitoring tools in MySQL: performance counters, performance schema and SYS schema. It also covered some of the details in analyzing MySQL performance with performance_schema.
  • What’s New in MySQL 5.7 Security. This session presented an overview of the new MySQL Server security-related features, as well as the MySQL 5.6 Enterprise edition tools. This session detailed the shifting big picture of secure deployments, along with all of the security-related MySQL changes.

I thought the conference was very well organized, with uniformly great discussions. We also took part in some city activities and plenty of personal interactions. I even got to see Le Fred!

I learned a lot from the informative sessions I attended. The source code session showed me the general layout of the MySQL code base, including the most important directories, functions and classes. The profiling session covered the recent improvements to MySQL’s profiling instrumentation, and reviewed some useful tools and metrics you can use to get information from the server. The security session covered improved defaults, tablespace encryption and authentication plugins.
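As a tiny taste of the instrumentation discussed, the sys schema that ships with MySQL 5.7 can summarize statement activity in a single query (a sketch of my own, not taken from the session):

SELECT query, exec_count, total_latency
FROM sys.statement_analysis
ORDER BY total_latency DESC
LIMIT 5;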

In conclusion, my time was well spent. Meeting and communicating with other MySQL user group leaders gives me insight into the MySQL community. I highly recommend getting involved in your local user group and attending get-togethers like the MySQL User Group Leaders Summit whenever you can find the time.

Below you can see some of the pics from the trip. Enjoy!

[Photos: MySQL User Group Leaders Summit]

Jun 29, 2016

Percona Server for MongoDB 3.2.7-1.1 is now available

Percona announces the release of Percona Server for MongoDB 3.2.7-1.1 on June 29, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB 3.2.7-1.1 is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. Based on MongoDB 3.2.7, it extends MongoDB with MongoRocks and PerconaFT storage engines, as well as enterprise-grade features like external authentication and audit logging at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

Note:

The PerconaFT storage engine has been deprecated and will not be available in future releases.


This release includes all changes from MongoDB 3.2.7 as well as the following:

  • Fixed the software version incorrectly reported by the --version option.
  • Added recommended ulimit values for the mongod process.

The release notes are available in the official documentation.

Jun 29, 2016

Box Shuttle helps ferry legacy file stores to cloud

Box announced Box Shuttle today, a new service that combines software and consulting to help customers move large numbers of legacy files (millions or even hundreds of millions) to the Box service. Previously, companies with files stored in network file shares or legacy content management systems like Microsoft SharePoint, EMC Documentum or OpenText were on their own when it came… Read More
