Jul
31
2014
--

Paris OpenStack Summit Voting – Percona Submits 16 MySQL Talks

MySQL plays a critical role in OpenStack. It serves as the host database supporting most components, such as Nova, Glance, and Keystone, and it is the most mature guest database in Trove. Many OpenStack operators use Percona open source software, including the MySQL drop-in compatible Percona Server and the Galera-based Percona XtraDB Cluster, as well as tools such as Percona XtraBackup and Percona Toolkit. We see a need in the community to understand how to improve MySQL performance in OpenStack. As a result, Percona submitted 16 presentations for the Paris OpenStack Summit.

Paris OpenStack Summit presentations are chosen by OpenStack member voting. Please vote for our talks by clicking the titles below that interest you. You must be an OpenStack Foundation member to vote. If you aren’t a member, sign up here – it’s free and only takes a minute. The deadline to vote is Wednesday, August 6, 2014!

Paris OpenStack Summit MySQL Talks Submitted by Percona

OpenStack Operations

MySQL Database Operations in the OpenStack World
Speaker: Stéphane Combaudon

MySQL High Availability Options for OpenStack
Speaker: Stéphane Combaudon

Host and Guest Database Backup and Recovery for OpenStack Ops
Speakers: George Lorch, David Busby

Benchmarking the Different Cinder Storage Backends
Speaker: Peter Boros

MySQL and OpenStack Deep Dive
Speakers: Peter Boros, Jay Pipes (Mirantis)

Trove Performance Tuning for MySQL
Speaker: Alexander Rubin

Schema Management: Versioning and Automation with Puppet and MySQL Utilities
Speaker: Frederic Descamps

Deploying Databases for OpenStack
Speakers: Matt Griffin, Jay Pipes (Mirantis), Amrith Kumar (Tesora), Vinay Joosery (Severalnines)

Related Open Source Software Projects

Introduction to Percona XtraDB Cluster
Speaker: Kenny Gryp

Percona Server Features for OpenStack and Trove Ops
Speakers: George Lorch, Vipul Sabhaya (HP Cloud)

Products, Tools & Services

ClusterControl: Efficient and reliable MySQL Management, Monitoring, and Troubleshooting for OpenStack HA
Speakers: Peter Boros, Vinay Joosery (Severalnines)

Advanced MySQL Performance Monitoring for OpenStack Ops
Speaker: Daniel Nichter

Targeting Apps for OpenStack Clouds

Oars in the Cloud: Virtualization-aware Galera instances
Speaker: Raghavendra Prabhu

ACIDic Clusters: Review of contemporary ACID-compliant databases with synchronous replication
Speaker: Raghavendra Prabhu

Cloud Security

Security: It’s more than just your database you should worry about
Speaker: David Busby

Planning Your OpenStack Project

Infrastructure at Scale
Speaker: Michael Coburn

The Paris OpenStack Summit will offer developers, operators, and service providers valuable insights into OpenStack. The Design Summit sessions will be filled with lively discussions driving OpenStack development, including sessions defining the future of Trove, the DBaaS (database as a service) component near and dear to Percona’s heart. There will also be many valuable presentations in the main Paris OpenStack Summit conference about operating OpenStack, utilizing the latest features, complementary software and services, and real-world case studies.

Thank you for your support. We’re looking forward to seeing many Percona software users at the Paris OpenStack Summit in November.

The post Paris OpenStack Summit Voting – Percona Submits 16 MySQL Talks appeared first on MySQL Performance Blog.

Jul
31
2014
--

MariaDB: Selective binary logs events

In the first post in a series on MariaDB features we find interesting, we begin with selectively skipping replication of binlog events. This feature is available in MariaDB 5.5 and 10.

By default, when using MySQL’s standard replication, all events are logged in the binary log and those binary log events are replicated to all slaves (though it’s possible to filter out some schemas). With this feature, it’s also possible to prevent some events from being replicated to the slave(s) even though they are written to the binary log. Having those events in the binary logs is still useful for point-in-time recovery.

Indeed, usually when we need to not replicate an event, we set sql_log_bin = 0 and the event is bypassed entirely: neither written to the binlog nor replicated to the slave(s).

So with this new feature, it’s possible to just set a session variable to tag events that will still be written to the binary log but skipped on demand on some slaves.

And it’s really easy to use. On the master you do:

set skip_replication=1;

and on the slave(s) with replicate_events_marked_for_skip='FILTER_ON_MASTER' or 'FILTER_ON_SLAVE', the events skipped on the master won’t be replicated (a short combined example follows the list of valid values below).

The valid values for replicate_events_marked_for_skip are:

  • REPLICATE (default) : skipped events are replicated on the slave
  • FILTER_ON_SLAVE : events so marked will be skipped on the slave and not replicated
  • FILTER_ON_MASTER : the filtering is done on the master, so the slave won’t even receive the event, which also saves network bandwidth
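
To make the workflow concrete, here is a minimal sketch of both sides. The mydb.history table and the DELETE statement are hypothetical examples, and changing replicate_events_marked_for_skip normally requires the slave threads to be stopped first:

-- On the master, in the session issuing the statements you want to skip
SET SESSION skip_replication = 1;
DELETE FROM mydb.history WHERE created < '2014-01-01';
SET SESSION skip_replication = 0;

-- On the slave that should ignore the marked events
STOP SLAVE;
SET GLOBAL replicate_events_marked_for_skip = 'FILTER_ON_MASTER';
START SLAVE;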

That’s a cool feature, but when can it be really useful?

Use case:

For archiving this can be very interesting. Indeed, most of the time when people archive data, they use something like pt-archiver, which deletes the data and copies the removed rows to an archive server.

Thanks to this feature, instead of having an archiving server to which we copy the deleted data, it’s possible to have a slave where we simply don’t delete the data. This is much faster (smarter?) and keeps the archiving server always up to date. Of course, in this case sql_log_bin = 0 would also have worked (if we ignore point-in-time recovery).
But what about a Galera Cluster? That’s where this feature is really cool: if we had used sql_log_bin = 0 on a Galera Cluster node, all the other nodes would have ignored the delete and the result would have been inconsistency between the nodes.

So if you use an asynchronous slave as an archiving server for a Galera Cluster, this feature is pretty much mandatory.

As illustrated below, you can have a MariaDB Galera Cluster node joining a Percona XtraDB Cluster that will be used to delete historical data using pt-archiver:

[Diagram: archiving setup]

pt-archiver is started with --set-vars "skip_replication=1"

The post MariaDB: Selective binary logs events appeared first on MySQL Performance Blog.

Jul
31
2014
--

LinkedIn Beats The Street In Q2 On Sales Of $534M, EPS Of $0.51

With social networks Facebook and Twitter handily beating analyst estimates for Q2 earnings, LinkedIn today reported its Q2 results and showed that rising tides are lifting its boat, too. Revenue for the second quarter was $534 million and its EPS (non-GAAP diluted) was $0.51 as the company also raised its guidance for Q3 and the full year. The company’s stock is up by around 8% in… Read More

Jul
31
2014
--

Timely Turns Your Calendar Into A Time Tracker

One of the most important things a small business manager can do is make sure that everyone is taking on a fair share of the workload. If some are taking on a bigger share of the work than others, it can lead to hurt feelings and inefficiency. Timely is an app that hopes to make that task easier to manage by thinking about how much time everyone is spending on work in total. Read More

Jul
31
2014
--

ScienceLogic Launches App Store To Share Customized Monitoring Apps

ScienceLogic announced a new App Store today that enables customers and third parties to create and share customized apps for the ScienceLogic platform. ScienceLogic is a cloud service that lets IT pros monitor every aspect of their IT infrastructure whether it’s in the cloud or on premises, getting right down to the data center infrastructure management layer if needed.… Read More

Jul
31
2014
--

Talking Drupal #057 HTML Email

 

  • Uses of Email

    • Transaction/Service E-Mails – one email to a single person for a particular purpose, e.g. password reset, shipping confirmation…

    • Bulk Email – sent to a group of people, e.g. newsletters, forum subscriptions…

    • User communication – personal contact or something like Basecamp

    • These uses impact the type of service and tools you will use

  • Getting Started With HTML Emails

    • Drupal out of the box will only send plain text emails

    • Need an SMTP server setup on your web server or use this:

      • SMTP Authentication Support Module

  • Structure of HTML email

  • HTML E-Mails From Drupal (Creating and Sending)

    • Simple News (Module)

      • Creates email content

    • Mime Mail

      • Enhances Drupal’s ability to send HTML emails

      • Allows for email attachments

    • Mandrill / Mail System

      • (service by Mailchimp)

      • Can replace the need for an SMTP server

      • Has statistics for e-mails sent

      • Can set up rules (within the Mandrill service) to change e-mail attributes before sending

  • HTML E-mails From Outside of Drupal (List Signup Only)

    • Mailchimp

      • Allows Drupal visitors to sign up for emails

      • List Management for users

      • E-commerce and webform integration

      • Service is easy to use

      • Allows RSS Auto Send

      • Use site content to build newsletters

    • Constant Contact

      • Allows Drupal visitors to sign up for emails

      • Limited List Management

      • Some E-commerce and Webform integration

    • Campaign Monitor

      • Great reseller program

      • Easy template generation

    • Zurb Foundation

      • Responsive templates
        http://zurb.com/playground/responsive-email-templates

 

Module of the week

Table Field – https://www.drupal.org/project/tablefield

This module allows you to attach tabular data to a node in Drupal 6 or any entity in Drupal 7. The input form allows the user to specify the number of rows/columns and allows entry into each table cell using text fields. Tables can be defined globally or on a per-node/per-entity basis, so every node can have multiple tables of arbitrary size. Enter data by hand or by CSV upload. Table data can also be downloaded as CSV files by your users if you so choose. Tables are multi-value and revision capable.

Resources

Simple News – https://www.drupal.org/project/simplenews

Mime Mail – https://www.drupal.org/project/mimemail

SMTP Authentication Support Module – https://www.drupal.org/project/smtp

Mandrill – https://www.drupal.org/project/mandrill

Mail Tester – http://www.mail-tester.com

Mail System – https://www.drupal.org/project/mailsystem

Mailchimp – https://www.drupal.org/project/mailchimp

Constant Contact – https://www.drupal.org/project/constant_contact

Campaign Monitor – https://www.drupal.org/project/campaignmonitor

Responsive Email Design from Campaign Monitor  – http://bit.ly/rt-rdemailcm

Web Fonts in Email from Campaign Monitor – http://bit.ly/rt-emailcm

Responsive Email from MailChimp – http://bit.ly/rt-rdemailmc

Typography in Email from MailChimp – http://bit.ly/rt-emailmc

Tutorial to Understand HTML Email (Hand crafted) – http://webdesign.tutsplus.com/articles/build-an-html-email-template-from-scratch--webdesign-12770

 

Jul
31
2014
--

Percona Server 5.1.73-14.12 is now available


Percona is glad to announce the release of Percona Server 5.1.73-14.12 on July 31st, 2014 (downloads are available here and from the Percona Software Repositories). Based on MySQL 5.1.73, including all the bug fixes in it, Percona Server 5.1.73-14.12 is now the current stable release in the 5.1 series. All of Percona’s software is open source and free. All the details of the release can be found in the 5.1.73-14.12 milestone at Launchpad.

NOTE: Packages for Debian 7.0 (wheezy), Ubuntu 13.10 (saucy) and 14.04 (trusty) are not available for this release due to conflict with newer packages available in those releases.

Bugs Fixed:

  • Percona Server couldn’t be built with Bison 3.0. Bug fixed #1262439, upstream #71250.
  • Ignoring Query Cache Comments feature could cause server crash. Bug fixed #705688.
  • Database administrator password could be seen in plain text when debconf-get-selections was executed. Bug fixed #1018291.
  • If XtraDB variable innodb_dict_size was set, the server could attempt to remove a used index from the in-memory InnoDB data dictionary, resulting in a server crash. Bugs fixed #1250018 and #758788.
  • Ported a fix from MySQL 5.5 for upstream bug #71315 that could cause a server crash if a malformed GROUP_CONCAT function call was followed by another GROUP_CONCAT call. Bug fixed #1266980.
  • MTR tests from binary tarball didn’t work out of the box. Bug fixed #1158036.
  • InnoDB did not handle the cases of asynchronous and synchronous I/O requests completing partially or being interrupted. Bug fixed #1262500 (upstream #54430).
  • Percona Server version was reported incorrectly in Debian/Ubuntu packages. Bug fixed #1319670.
  • Percona Server source files were referencing Maatkit instead of Percona Toolkit. Bug fixed #1174779.
  • The XtraDB version number in univ.i was incorrect. Bug fixed #1277383.

Other bug fixes: #1272732, #1167486, and #1314568.

Release notes for Percona Server 5.1.73-14.12 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.1.73-14.12 is now available appeared first on MySQL Performance Blog.

Jul
31
2014
--

PagerDuty Lands $27.2M In Series B To Simplify IT Incident Management

PagerDuty, a 5-year old IT incidents management startup, announced $27.2M in Series B funding today led by Bessemer Venture Partners with help from early round investors Andreessen Horowitz and Baseline Metal. The Series B money brings the total money raised to date to $39.8M. As part of the deal, Trevor Oelschig, a partner at Bessemer Venture Partners, will join PagerDuty’s board… Read More

Jul
30
2014
--

Examining the TokuDB MySQL storage engine file structure

As we know, different storage engines in MySQL have different file structures. Every table in MySQL 5.6 must have a .frm file in the database directory matching the table name. But where the rest of the data resides depends on the storage engine.

For MyISAM we have .MYI and .MYD files in the database directory (unless special settings are in place); for InnoDB the data might be stored in the shared tablespace (typically ibdata1 in the data directory) or in file-per-table mode (or, better said, file per partition), producing a single file with the .ibd extension for each table/partition. TokuDB as of this version (7.1.7) has its own innovative approach to storing the table contents.

I created a table in the test database with the following structure:

CREATE TABLE `mytable` (
  `id` int(10) unsigned NOT NULL,
  `c` varchar(15) NOT NULL,
  `d` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `c` (`c`),
  KEY `d` (`d`,`c`),
  KEY `d_2` (`d`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1

No files appear in the “test” database directory besides mytable.frm; however, a few files are created in the data directory root:

-rwxrwx--x  1 mysql mysql       40960 Jul 29 21:01 _test_mytable_key_c_22f19b0_3_19_B_0.tokudb
-rwxrwx--x  1 mysql mysql       16896 Jul 29 21:02 _test_mytable_key_d_2_22f223b_3_19_B_0.tokudb
-rwxrwx--x  1 mysql mysql       16896 Jul 29 21:01 _test_mytable_key_d_22f1c9a_3_19_B_0.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:01 _test_mytable_main_22f1818_2_19.tokudb
-rwxrwx--x  1 mysql mysql       65536 Jul 29 21:02 _test_mytable_status_22f1818_1_19.tokudb

As you can see, the table is represented by a series of files – the “status” file, the “main” file which contains the clustered fractal tree index (primary key), plus each secondary index stored in its own file. Note how the files are named – they include the database name, the table name, and the key name (the name you give to the key, not the columns involved). This is followed by something like “22f1818_1_19”, which I assume is some kind of internal TokuDB object identifier.

Note also that (at least on my system) the files are created with the executable bit set. I see no reason for this, and it is probably just a minor bug.

Another minor bug (or intended design limitation?) seems to be that TokuDB might lose the actual table name in its file names when you alter the table. For example, after I altered the table to drop one of the keys and add another one named “superkey”, I see the “mytable” name replaced with “sql_390c_247”, which looks very much like the temporary table that was used to rebuild the table (a sketch of such an ALTER follows the listing):

-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:15 _test_sql_390c_247_key_c_22f6f7d_3_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:15 _test_sql_390c_247_key_d_22f6f7d_5_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:15 _test_sql_390c_247_key_superkey_22f6f7d_4_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:14 _test_sql_390c_247_main_22f6f7d_2_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:14 _test_sql_390c_247_status_22f6f7d_1_19.tokudb
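
For illustration, an ALTER of roughly this shape triggers the rebuild described above; the exact columns of superkey are my guess and not taken from the original table definition:

-- Hypothetical ALTER: drop the d_2 key and add a new key named "superkey".
-- Any ALTER that rebuilds the table has the same file-renaming effect.
ALTER TABLE test.mytable DROP KEY d_2, ADD KEY superkey (d, c);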

I like the approach of storing different indexes in different files, as this makes it much easier to drop an index and potentially allows placing indexes on different storage if that is desired for some reason. However, putting all tables in the database root is a bad idea – having a substantial number of tables, each with even a few indexes producing several files apiece, makes it inconvenient to work with the data directory (which often contains other files – log files, binary logs, etc.), plus it might push file systems to their limits, or at least their performance limits, when dealing with huge numbers of files in a single directory.

Many also like having files in the database directory as it allows you, in basic configurations, to use simple Unix tools such as du to see how much space a given database physically takes.

As with InnoDB and MyISAM, TokuDB creates a separate set of files for each partition, placing them in the same directory. Having the same table partitioned by HASH on the primary key with 4 partitions (a sketch of the partitioning statement follows the listing), I observe:

-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_key_c_22f9f35_3_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_key_d_22f9f35_5_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_key_superkey_22f9f35_4_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_main_22f9f35_2_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_status_22f9f35_1_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_key_c_22f9f8e_3_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_key_d_22f9f8e_5_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_key_superkey_22f9f8e_4_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_main_22f9f8e_2_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_status_22f9f8e_1_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_key_c_22f9fe1_3_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_key_d_22f9fe1_5_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_key_superkey_22f9fe1_4_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_main_22f9fe1_2_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_status_22f9fe1_1_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_key_c_22f9ffb_3_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_key_d_22f9ffb_5_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_key_superkey_22f9ffb_4_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_main_22f9ffb_2_19.tokudb
-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_status_22f9ffb_1_19.tokudb

As you can see, “P_p0” through “P_p3” suffixes are added to each of the files.
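
For reference, a statement of roughly this shape produces the per-partition layout above; a minimal sketch, assuming we simply partition the existing table by HASH on its primary key column:

-- Hypothetical statement: split the table into 4 HASH partitions on the
-- primary key, yielding one set of TokuDB files per partition (P_p0..P_p3).
ALTER TABLE test.mytable PARTITION BY HASH (id) PARTITIONS 4;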

What other files exist beyond those which come from TokuDB tables?

There are a few system files:

-rwxrwx--x  1 mysql mysql       32768 Jul 29 21:16 tokudb.directory
-rwxrwx--x  1 mysql mysql       16384 Jul 17 19:09 tokudb.environment
-rwxrwx--x  1 mysql mysql     1048576 Jul 29 21:22 tokudb.rollback

As their names suggest, tokudb.directory contains metadata about the tables and indexes in the system, tokudb.rollback contains the data needed for transaction rollback, and tokudb.environment contains some kind of information about the environment. I did not dig into this – I just see that the environment file has not changed since I started the instance, unlike the other two, which change when the table structure is changed (tokudb.directory) or when the database is modified (tokudb.rollback).

Next you will see:

-rw-------  1 mysql mysql           0 Jul 17 19:09 __tokudb_lock_dont_delete_me_data
-rw-------  1 mysql mysql           0 Jul 17 19:09 __tokudb_lock_dont_delete_me_environment
-rw-------  1 mysql mysql           0 Jul 17 19:09 __tokudb_lock_dont_delete_me_logs
-rw-------  1 mysql mysql           0 Jul 17 19:09 __tokudb_lock_dont_delete_me_recovery
-rw-------  1 mysql mysql           0 Jul 17 19:09 __tokudb_lock_dont_delete_me_temp

I see TokuDB really tries to lock the database instance, preventing concurrent access to the directory, with multiple files. I think InnoDB does this in a cleaner way by placing the lock on the system tablespace, which does not require extra files… though the TokuDB team might have some specific reason to do it this way. Maybe Oracle holds a software patent on preventing concurrent database operation by locking a file?

Finally there is the transaction log file:

-rwx------  1 mysql mysql    37499084 Jul 29 21:34 log000000002593.tokulog25

TokuDB transaction log files are not pre-allocated like InnoDB’s; they look more similar to MySQL binary logs – they have sequentially increasing file numbers that increment as new files are created, and the file itself grows as new data is written to the log. As I understand it, log rotation happens during a checkpoint, and you would typically see only one log file.
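
If you want to check how checkpointing is configured on your own instance, the checkpoint-related server variables are one place to look; a minimal sketch, assuming the tokudb_checkpoint* variable names shipped with TokuDB 7.x:

-- Show checkpoint-related settings; tokudb_checkpointing_period (seconds
-- between checkpoints) controls how often a checkpoint, and therefore a log
-- rotation as described above, takes place.
SHOW GLOBAL VARIABLES LIKE 'tokudb_checkpoint%';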

There is a lot more for me to learn when it comes to the TokuDB file layout and the purpose of individual files, yet I hope this provides you with a good basic overview of the TokuDB MySQL storage engine.

Further Reading: Rich Prohaska from TokuDB team was kind enough to share some more good reading with me, for those extra curious: TokuDB Files and File Descriptors, Separate Database Directories Design

The post Examining the TokuDB MySQL storage engine file structure appeared first on MySQL Performance Blog.
