Oct
31
2016
--

Percona Server for MongoDB 3.2.10-3.0 is now available


Percona announces the release of Percona Server for MongoDB 3.2.10-3.0 on October 31, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB 3.2.10-3.0 is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. It extends MongoDB with the MongoRocks, Percona Memory Engine, and PerconaFT storage engines, as well as enterprise-grade features like external authentication and audit logging at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

Note:

We deprecated the PerconaFT storage engine. It will not be available in future releases.


This release is based on MongoDB 3.2.10 and includes the following additional new features and improvements:

New Features

  • Universal Hot Backup
This release introduces an integrated hot backup system for the default WiredTiger engine and the alternative MongoRocks engine. It creates a physical data backup on a running server without performance degradation.
  • Profiling Rate Limit
Rate limiting lets you sample logged queries, decreasing the impact of profiling on database performance.

Bug Fixes

  • Fixed crash when running out of WiredTiger cache under Percona Memory Engine.

The release notes are available in the official documentation.

 

Oct
31
2016
--

Webinar Wednesday November 2: MongoDB Backups, All Grown Up!


Please join us on Wednesday, November 2, 2016 at 10:00 am PDT / 1:00 pm EDT (UTC-7) for the webinar MongoDB Backups, All Grown Up!, featuring David Murphy, Percona's MongoDB Practice Manager.
 

It has been a long road to stable and dependable backups in the MongoDB space. This webinar covers the current types of backups and their limitations when it comes to sharding. From there, we will move into why you can't get a consistent backup from a single node, and how you can take consistent backups of both sharded and unsharded clusters.

The webinar also covers the “mongodb_consistent_backup.py” tool and the features it offers to the open source community: how to use it, what it looks like, and why it’s our preferred backup methodology.

Click here to register for the webinar MongoDB Backups, All Grown Up!

David Murphy, MongoDB Practice Manager

David joined Percona in October 2015 as Practice Manager for MongoDB. Prior to that, David joined the ObjectRocket by Rackspace team as the Lead DBA in September 2013. With the growth involved with any recently acquired startup, David’s role covered a wide range of duties: evangelism, research, runbook development, knowledge-base design, consulting, technical account management, mentoring and much more.

Prior to the world of MongoDB, David was a MySQL and NoSQL architect at Electronic Arts. There, he worked with some of the largest titles in the world, like FIFA, SimCity, and Battlefield, handling tuning, design, and technology choices. David maintains an active interest in speaking about databases and exploring new technologies.

Oct
31
2016
--

Microsoft open sources its next-gen cloud hardware design

Microsoft today open sourced its next-gen hyperscale cloud hardware design and contributed it to the Open Compute Project (OCP). Microsoft joined the OCP, which also includes Facebook, Google, Intel, IBM, Rackspace and many other cloud vendors, back in 2014. Over the last two years, it has already contributed a number of server, networking and data center designs. With this new contribution,… Read More

Oct
28
2016
--

Blog Series: MySQL Configuration Management


MySQL configuration management remains a hot topic, as I’ve noticed on numerous occasions during my conversations with customers.

I thought it might be a good idea to start a blog series that goes into more detail on some of the different options, and which modules you might use for managing your MySQL database infrastructure.

Configuration management has been around since well before the start of my professional career. I originally began by integrating an infrastructure with my colleagues using Puppet.

Why is configuration management important?
  • Reproducibility: It gives us the ability to provision any environment in an automated way, confident that the new environment will contain the same configuration.
  • Fast restoration: Thanks to reproducibility, you can quickly provision machines in case of disaster. This lets you focus on restoring your actual data instead of worrying about the deployment and configuration of your machines.
  • An integral part of continuous deployment: Everyone loves to talk about continuous deployment, but deploying changes rapidly and automatically after automated regression testing requires a configuration management solution.
  • Compliance and security: Solutions like Puppet and Chef maintain and enforce configuration parameters on your infrastructure. This can sound bothersome at first, but it’s essential for maintaining a well-configured environment.
  • A documented environment: Although reading someone’s Puppet code can potentially harm you beyond insanity, it provides you with the real truth about your infrastructure.
  • Efficiency and manageability: Configuration management can automate repetitive tasks (for example, user grants, database creation, configuration variables) as well as security updates, service restarts, and so on. This means less work and faster rollouts.
Which players are active in this field?

The most popular open source solutions are Puppet, Chef, Ansible, and CFEngine (among others). In this series, we will go deeper into the first three of them.

Let’s first start by giving you a quick, high-level introduction.

Puppet

Puppet uses a declarative language to describe the desired state of an environment, and works on a client/server principle: the Puppet client reads the catalog describing the expected state from the server and enforces those changes locally.

By default, Puppet has four essential components:

  • Puppet Server: A Java virtual machine offering Puppet’s core services.
  • Puppet Agent: A client library that requests configuration catalog info from the puppet-server.
  • Hiera: A key-value lookup database, which can store and modify values for specific hosts.
  • Facter: An application that keeps an inventory of the local node variables.
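Facter is also handy on its own: running it from a shell prints the node’s full fact inventory, and you can query a single fact by name. A quick sketch (the output shown assumes a Red Hat-family box):

$ facter osfamily
RedHat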

How can you integrate Puppet in your MySQL infrastructure?

Modules such as the Puppet Forge MySQL module (which the next post in this series covers in detail) allow you and your team to create users and databases, and to install and configure MySQL.

Probably my old “code from hell” module is still somewhere out there.
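Under the hood, such a module drives the same work you would otherwise do by hand. A minimal sketch of the kind of statements it automates (the database, user and host names here are hypothetical):

mysql> CREATE DATABASE app_db;
mysql> CREATE USER 'app_user'@'10.0.0.%' IDENTIFIED BY 'changeme';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'10.0.0.%';

The value of configuration management is that you describe this once, declaratively, and it is enforced on every run.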

Chef

Chef also uses a declarative language (like Puppet), in this case based on Ruby, which lets you write cookbooks for the technologies you want to integrate. Chef is likewise a client/server solution: the clients are the Chef nodes, while the server manages the cookbooks, catalogs and recipes.

In short, Chef consists of:

  • Chef server: Manages the cookbooks and the catalog.
  • Chef clients (nodes): The actual systems requesting the catalog information from the Chef server.
  • Workstations: Systems configured to run the Chef command-line tools that synchronize with a Chef repository or the Chef server. You could also describe a workstation as a Chef development and tooling environment.

How can you integrate Chef in your MySQL infrastructure? The community MySQL cookbooks (for example, the mysql cookbook on the Chef Supermarket) are the usual starting point.

Ansible

Ansible originated with something different in mind. System engineers typically wrote their own management scripts, which can be troublesome and hard to maintain. Why not use something simple, automated and standardized instead? Ansible fills this gap, and simplifies the management of its target hosts.

Ansible works by connecting to your nodes (over SSH by default) and pushing Ansible modules out to them. These modules describe the desired state of the node, and are executed to bring the node to that state.

This approach differs from Puppet and Chef, which are primarily client/server solutions.

Ansible also provides pre-made modules for MySQL, such as mysql_db and mysql_user for managing databases and users, and mysql_variables for managing server variables.

Conclusion and Next Steps

Choose your poison (or magical medicine, you pick the wording); every solution has its perks.

Keep in mind that in some situations, running a complicated Puppet or Chef infrastructure could be overkill. In those cases, a solution like Ansible might be a quick and easily integrated answer for you.

The next blog post will go over the Puppet Forge MySQL module, so stay tuned!

Oct
28
2016
--

New TokuDB and PerconaFT database file management feature in Percona Server 5.6.33-79.0 and Percona Server 5.7.15-9


This blog post discusses a new TokuDB and PerconaFT database file management feature in two Percona Server releases.

By now you have hopefully read through Peter’s post and my two prior posts on the TokuDB/PerconaFT file set. If you have not, it is probably a good idea to run through them now before we get into the details of this new feature.

We introduced a new server option beginning in Percona Server 5.6.33-79.0 and Percona Server 5.7.15-9, called tokudb_dir_per_db, that addresses two shortcomings within the current TokuDB implementation:

  • Data files are not renamed on disk when a table or index is renamed.
  • Data files cannot be grouped together within a directory that represents a single database.

The new option is disabled by default in 5.6.33-79.0, but will be enabled by default beginning in 5.7.15-9.


New table renaming functionality

When you rename a TokuDB table via SQL, the data files on disk keep their original names. Only the mapping in the PerconaFT directory file is changed to map the new dictionary name to the original internal file names. This makes it difficult to quickly match database/table/index names to their actual files on disk, requiring you to use INFORMATION_SCHEMA.TOKUDB_FILE_MAP to cross-reference.
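For example, you can pull the current mapping with a query like this (the same technique is used throughout the examples below):

mysql> SELECT dictionary_name, internal_file_name FROM INFORMATION_SCHEMA.TOKUDB_FILE_MAP;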

When tokudb_dir_per_db is enabled, this is no longer the case. When you rename a table, the mapping in the PerconaFT directory file will be updated, and the files will be renamed on disk to reflect the new table name. This was a lot more difficult to implement than we originally thought due to the non-transactional nature of the underlying file systems. We had to “invent” a transactional file system rename functionality within PerconaFT to ensure that a crash during a file system mv src dst was recoverable.


New directory layout functionality

Many users have had issues with managing the huge volume of individual files used by TokuDB and PerconaFT. We are beginning to take some steps to help improve the manageability of these files, and potentially even reduce the number of files present.

When you enable tokudb_dir_per_db, all new tables and indices are placed within their corresponding database directory within the tokudb_data_dir or system datadir. Existing table files will not be automatically relocated to their corresponding database directory.

You can easily move a table’s data files into the new scheme and proper database directory with a few steps:

mysql> SET GLOBAL tokudb_dir_per_db=true;
mysql> RENAME TABLE <table> TO <tmp_table>;
mysql> RENAME TABLE <tmp_table> TO <table>;

Two renames are needed because MySQL will not allow you to rename a table to itself. The first rename renames the table to the temporary name and mv‘s the table’s files into the owning database directory. The second rename sets the table name back to the original name. Tables can, of course, also be renamed/moved across databases, and will be placed correctly into the corresponding database directory.
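A cross-database move is a single statement; a minimal sketch (db1 and db2 are hypothetical databases that both already exist):

mysql> RENAME TABLE db1.t1 TO db2.t1;

After this statement, the data files for t1 live under the db2 directory.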

You must be careful with renaming tables! If you have used any tricks to create symlinks of the database directories on different storage volumes, the mv is not a simple directory mv on the same volume, but a physical copy across volumes. This can take quite some time and prevents access to the table being moved during the copy.

NOTE: If you have tokudb_data_dir set to something other than the system datadir, TokuDB creates a directory matching the name of the database. Upon dropping the database, this directory remains behind.


Example:

While running Percona Server 5.7.15-9 with tokudb_dir_per_db=false to illustrate the old behavior, create a table t1, show the file map, and list the data directory:

mysql> SET GLOBAL tokudb_dir_per_db=false;
Query OK, 0 rows affected (0.00 sec)
mysql> DROP DATABASE IF EXISTS test; CREATE DATABASE test; USE test;
Query OK, 0 rows affected (0.00 sec)
Query OK, 1 row affected (0.00 sec)
Database changed
mysql> CREATE TABLE t1(a INT PRIMARY KEY, b INT, c VARCHAR(200), KEY kab(a, b)) ENGINE=TOKUDB;
Query OK, 0 rows affected (0.07 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.TOKUDB_FILE_MAP;
*************************** 1. row ***************************
      dictionary_name: ./test/t1-key-kab
   internal_file_name: ./_test_t1_key_kab_de_3_1d.tokudb
         table_schema: test
           table_name: t1
table_dictionary_name: key-kab
*************************** 2. row ***************************
      dictionary_name: ./test/t1-main
   internal_file_name: ./_test_t1_main_de_2_1d.tokudb
         table_schema: test
           table_name: t1
table_dictionary_name: main
*************************** 3. row ***************************
      dictionary_name: ./test/t1-status
   internal_file_name: ./_test_t1_status_de_1_1d.tokudb
         table_schema: test
           table_name: t1
table_dictionary_name: status
3 rows in set (0.00 sec)

$ ls -1 data/*.tokudb
data/_test_t1_key_kab_de_3_1d.tokudb
data/_test_t1_main_de_2_1d.tokudb
data/_test_t1_status_de_1_1d.tokudb

We see the data files for our table t1 as the three files _test_t1_*.

Rename t1 to all_the_kings_horses, show the file map again, and do another listing of the data directory:

mysql> RENAME TABLE t1 TO all_the_kings_horses;
Query OK, 0 rows affected (0.01 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.TOKUDB_FILE_MAP;
*************************** 1. row ***************************
      dictionary_name: ./test/all_the_kings_horses-key-kab
   internal_file_name: ./_test_t1_key_kab_de_3_1d.tokudb
         table_schema: test
           table_name: all_the_kings_horses
table_dictionary_name: key-kab
*************************** 2. row ***************************
      dictionary_name: ./test/all_the_kings_horses-main
   internal_file_name: ./_test_t1_main_de_2_1d.tokudb
         table_schema: test
           table_name: all_the_kings_horses
table_dictionary_name: main
*************************** 3. row ***************************
      dictionary_name: ./test/all_the_kings_horses-status
   internal_file_name: ./_test_t1_status_de_1_1d.tokudb
         table_schema: test
           table_name: all_the_kings_horses
table_dictionary_name: status
3 rows in set (0.00 sec)

$ ls -1 data/*.tokudb
data/_test_t1_key_kab_de_3_1d.tokudb
data/_test_t1_main_de_2_1d.tokudb
data/_test_t1_status_de_1_1d.tokudb

The file names remained the same as the original table, but the file map has changed to reflect the new table/dictionary names.

Let us inject a little confusion by adding another index to the table:

mysql> alter table all_the_kings_horses add index kac(a,c);
Query OK, 0 rows affected (0.08 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql> SELECT * FROM INFORMATION_SCHEMA.TOKUDB_FILE_MAP;
*************************** 1. row ***************************
      dictionary_name: ./test/all_the_kings_horses-key-kab
   internal_file_name: ./_test_t1_key_kab_de_3_1d.tokudb
         table_schema: test
           table_name: all_the_kings_horses
table_dictionary_name: key-kab
*************************** 2. row ***************************
      dictionary_name: ./test/all_the_kings_horses-key-kac
   internal_file_name: ./_test_all_the_kings_horses_key_kac_e3_3_1d_B_0.tokudb
         table_schema: test
           table_name: all_the_kings_horses
table_dictionary_name: key-kac
*************************** 3. row ***************************
      dictionary_name: ./test/all_the_kings_horses-main
   internal_file_name: ./_test_t1_main_de_2_1d.tokudb
         table_schema: test
           table_name: all_the_kings_horses
table_dictionary_name: main
*************************** 4. row ***************************
      dictionary_name: ./test/all_the_kings_horses-status
   internal_file_name: ./_test_t1_status_de_1_1d.tokudb
         table_schema: test
           table_name: all_the_kings_horses
table_dictionary_name: status
4 rows in set (0.00 sec)

$ ls -1 data/*.tokudb
data/_test_all_the_kings_horses_key_kac_e3_3_1d_B_0.tokudb
data/_test_t1_key_kab_de_3_1d.tokudb
data/_test_t1_main_de_2_1d.tokudb
data/_test_t1_status_de_1_1d.tokudb

The file for the new index kac was created with the current table name, not the original.

Now we move on to the new behavior. First make sure that tokudb_dir_per_db=true, then rename the table again, show the file map, and do another directory listing:

mysql> SET GLOBAL tokudb_dir_per_db=true;
Query OK, 0 rows affected (0.00 sec)
mysql> RENAME TABLE all_the_kings_horses TO all_the_kings_men;
Query OK, 0 rows affected (0.02 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.TOKUDB_FILE_MAP;
*************************** 1. row ***************************
      dictionary_name: ./test/all_the_kings_men-key-kab
   internal_file_name: ./test/all_the_kings_men_key_kab_ea_2_1d.tokudb
         table_schema: test
           table_name: all_the_kings_men
table_dictionary_name: key-kab
*************************** 2. row ***************************
      dictionary_name: ./test/all_the_kings_men-key-kac
   internal_file_name: ./test/all_the_kings_men_key_kac_ea_3_1d.tokudb
         table_schema: test
           table_name: all_the_kings_men
table_dictionary_name: key-kac
*************************** 3. row ***************************
      dictionary_name: ./test/all_the_kings_men-main
   internal_file_name: ./test/all_the_kings_men_main_ea_4_1d.tokudb
         table_schema: test
           table_name: all_the_kings_men
table_dictionary_name: main
*************************** 4. row ***************************
      dictionary_name: ./test/all_the_kings_men-status
   internal_file_name: ./test/all_the_kings_men_status_ea_5_1d.tokudb
         table_schema: test
           table_name: all_the_kings_men
table_dictionary_name: status
4 rows in set (0.00 sec)

$ ls -1 data/test/*.tokudb
data/test/all_the_kings_men_key_kab_ea_2_1d.tokudb
data/test/all_the_kings_men_key_kac_ea_3_1d.tokudb
data/test/all_the_kings_men_main_ea_4_1d.tokudb
data/test/all_the_kings_men_status_ea_5_1d.tokudb

The database files have now been renamed to properly match the name of the database, table and keys, and they have been moved into the data/test directory.

Now let us watch all that action with an alternate tokudb_data_dir. Rather than showing how to move files around as mentioned in the previous blog posts, we will just reset our TokuDB installation and start the server with a different tokudb_data_dir that is a sibling to the server datadir, called tokudb_data.
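Since tokudb_data_dir can only be set at server startup, the relevant option looks like this (the paths are hypothetical placeholders for your own layout):

$ mysqld --datadir=/path/to/data --tokudb_data_dir=/path/to/tokudb_data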

mysql> SET GLOBAL tokudb_dir_per_db=true;
Query OK, 0 rows affected (0.00 sec)
mysql> DROP DATABASE IF EXISTS test; CREATE DATABASE test; USE test;
Query OK, 0 rows affected, 1 warning (0.00 sec)
Query OK, 1 row affected (0.00 sec)
Database changed
mysql> CREATE TABLE t1(a INT PRIMARY KEY, b INT, c VARCHAR(200), KEY kab(a, b)) ENGINE=TOKUDB;
Query OK, 0 rows affected (0.15 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.TOKUDB_FILE_MAP;
*************************** 1. row ***************************
      dictionary_name: ./test/t1-key-kab
   internal_file_name: /ssd/toku/DB-295/percona-server-install-5.7/tokudb_data/test/t1_key_kab_d9_3_1d.tokudb
         table_schema: test
           table_name: t1
table_dictionary_name: key-kab
*************************** 2. row ***************************
      dictionary_name: ./test/t1-main
   internal_file_name: /ssd/toku/DB-295/percona-server-install-5.7/tokudb_data/test/t1_main_d9_2_1d.tokudb
         table_schema: test
           table_name: t1
table_dictionary_name: main
*************************** 3. row ***************************
      dictionary_name: ./test/t1-status
   internal_file_name: /ssd/toku/DB-295/percona-server-install-5.7/tokudb_data/test/t1_status_d9_1_1d.tokudb
         table_schema: test
           table_name: t1
table_dictionary_name: status
3 rows in set (0.00 sec)

$ ls -1 tokudb_data/
test
__tokudb_lock_dont_delete_me_data
__tokudb_lock_dont_delete_me_temp
$ ls -1 tokudb_data/test/
t1_key_kab_d9_3_1d.tokudb
t1_main_d9_2_1d.tokudb
t1_status_d9_1_1d.tokudb

This shows TokuDB now putting everything in the directory we specified as tokudb_data_dir, and following the tokudb_dir_per_db paradigm by creating the directory called test before creating the table.

What happens when we drop that database?

mysql> drop database test;
Query OK, 1 row affected (0.02 sec)

$ ls -1 tokudb_data/
test
__tokudb_lock_dont_delete_me_data
__tokudb_lock_dont_delete_me_temp
$ ls -1 tokudb_data/test/

All of the table’s files have been removed but, as mentioned above, the test directory still exists and needs to be removed manually.
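Cleaning up the leftover directory is a single command once the database has been dropped (safe here because the directory is empty):

$ rmdir tokudb_data/test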


Thanks for reading! We hope that you have found this blog series useful and look forward to hearing your experience with this new feature.

In the future we will investigate implementing the CREATE TABLE … DATA|INDEX DIRECTORY=… feature for TokuDB, which builds on top of this work.

We need to give a shout out to Vladislav Lesin, who took the lead on this issue and fought several battles in his attempt to ensure that this is fully crash safe and recoverable.

Oct
28
2016
--

NFV and the Tebow thing

The game-changing nature of virtualization in the server world has led many in IT to assume it would automatically translate to networking. The idea inspired the network functions virtualization architecture that has gripped some circles of the industry with great exuberance. But success in one domain does not ensure success in another. Read More

Oct
27
2016
--

Thoughts on MySQL 8.0 Invisible Indexes


MySQL 8.0 has a new feature called “invisible indexes,” which allow you to quickly enable/disable indexes from being used by the MySQL Optimizer.

I wanted to share some of my first experiences and thoughts about this new feature.

Why is it good for us?

There are a couple of use cases. One of them is when you want to drop an index, but would like to know the effect beforehand. You can make it invisible to the optimizer, which is a quick metadata change. Once you are sure there is no performance degradation, you can then drop the index.
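A minimal sketch of that workflow, reusing the idx_1 index from the examples later in this post:

mysql> ALTER TABLE t1 ALTER INDEX idx_1 INVISIBLE;
-- observe query performance for a representative period ...
mysql> ALTER TABLE t1 DROP INDEX idx_1;
-- or, if performance regressed, flip it straight back:
mysql> ALTER TABLE t1 ALTER INDEX idx_1 VISIBLE;

Both ALTER INDEX statements are metadata-only changes, so toggling visibility back and forth is cheap; the DROP is the only step that requires rebuilding the index to undo.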

The main point is that the invisible index is unavailable for use by the optimizer, but it is still present and kept up-to-date by write operations. The optimizer won’t use it, even if we try to “FORCE INDEX”. I think we should be able to force it somehow, though. There might be scenarios where this could be useful:

  • We can create a new invisible index, but if we want to test it we have to make it visible. That means all the queries are going to be able to use it, which could have an immediate effect on the application. I don’t think this is the best approach if we just want to test it. You should always test on staging, but not everybody has the same data size or real life data on their staging servers. Forcing invisible indexes could be useful.
  • You have many indexes, but are not sure which one is not in use. You can change one index to invisible to see if there is any performance degradation. If yes, you can change it back immediately.
  • You could have a special case where only one query can use that index. In that case, an invisible index could be a great solution.

How can you create invisible indexes?

There are two options. We can create a table with an invisible index like this:

CREATE TABLE `t1` (
`i` int(11) DEFAULT NULL,
`j` int(11) DEFAULT NULL,
`k` int(11) DEFAULT NULL,
KEY `i_idx` (`i`),
KEY `idx_1` (`i`,`j`,`k`) INVISIBLE
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Or we can use alter table and change the index to be invisible:

ALTER TABLE t1 ALTER INDEX idx_1 INVISIBLE;
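To check which indexes are currently visible, MySQL 8.0 exposes an IS_VISIBLE column in INFORMATION_SCHEMA.STATISTICS (this sketch assumes the table lives in the test schema):

mysql> SELECT DISTINCT INDEX_NAME, IS_VISIBLE
    ->   FROM INFORMATION_SCHEMA.STATISTICS
    ->  WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 't1';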

Using invisible indexes

If we want to remove an index now, we can change it to invisible. But what about queries that use “FORCE/USE INDEX”? Are they going to throw an error? If you force an index that does not exist, you get an error. You don’t get an error with invisible indexes. The optimizer doesn’t use the index, but it knows that it exists.

mysql> show create table t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`i` int(11) DEFAULT NULL,
`j` int(11) DEFAULT NULL,
`k` int(11) DEFAULT NULL,
KEY `i_idx` (`i`),
KEY `idx_1` (`i`,`j`,`k`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
mysql> explain select * from t1 force index(idx_1) where i=1 and j=4;
+----+-------------+-------+------------+------+---------------+-------+---------+-------------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key   | key_len | ref         | rows | filtered | Extra       |
+----+-------------+-------+------------+------+---------------+-------+---------+-------------+------+----------+-------------+
|  1 | SIMPLE      | t1    | NULL       | ref  | idx_1         | idx_1 | 10      | const,const |    2 |   100.00 | Using index |
+----+-------------+-------+------------+------+---------------+-------+---------+-------------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)
mysql> alter table t1 alter index idx_1 invisible;
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql> explain select * from t1 force index(idx_1) where i=1 and j=4;
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | t1    | NULL       | ALL  | NULL          | NULL | NULL    | NULL |   16 |     6.25 | Using where |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.01 sec)
mysql> explain select * from t1 where i=1 and j=4;
+----+-------------+-------+------------+------+---------------+-------+---------+-------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key   | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-------+------------+------+---------------+-------+---------+-------+------+----------+-------------+
|  1 | SIMPLE      | t1    | NULL       | ref  | i_idx         | i_idx | 5       | const |    2 |    10.00 | Using where |
+----+-------------+-------+------------+------+---------------+-------+---------+-------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

As you can see, if we use “FORCE INDEX” with an invisible index, MySQL performs a full table scan because (from mysql.com):

The FORCE INDEX hint acts like USE INDEX (index_list), with the addition that a table scan is assumed to be very expensive. In other words, a table scan is used only if there is no way to use one of the named indexes to find rows in the table.

MySQL won’t throw any errors, because the index exists; it is just not visible. Even if there is another usable index, MySQL is going to perform a full table scan, which on a large table could cause serious performance issues. Since MySQL doesn’t throw any errors during query execution, it should at least log a warning in the error log.

Conclusion

I think invisible indexes are a great new feature that could be useful for many customers. We should be able to use an invisible index if necessary, and be able to log queries that try to use invisible indexes.

You can read more about invisible indexes in this blog post, and in the MySQL Documentation.

Oct
27
2016
--

HPE gears its updated Helion OpenStack distribution toward telcos

HPE is in an interesting position. Now that it has shut down its Helion public cloud business, its main focus is on its private and managed cloud services, which center around the open source OpenStack cloud platform that’s pretty much the de facto standard for building private clouds now. HPE launched version 4.0 of its Helion OpenStack platform for enterprises this week. With… Read More

Oct
27
2016
--

Samsung acquires Tachyon to extend enterprise mobile device management

Samsung’s mobile operation has taken a big hit of late, with profits plunging 30% in the last quarter in the wake of the Note 7 recall. But today, the company is announcing an acquisition of a wireless enterprise startup that speaks to how it wants to grow other areas of its business to offset setbacks like this, as well as the larger overall slowdown in handset sales globally. Samsung… Read More

Oct
26
2016
--

A maturing OpenStack looks to the future

OpenStack, the open source cloud computing platform that allows enterprises to essentially run their own version of AWS in their data centers, was founded by NASA and Rackspace in 2010. Today, it’s being used by the likes of Comcast, PayPal, Volkswagen, CERN, AT&T, China Mobile and Her Majesty’s Revenue and Customs in the U.K. The OpenStack Foundation’s bi-annual… Read More
