Apr 23, 2018 --

Percona Live 2018 Featured Talk: Data Integrity at Scale with Alexis Guajardo

Welcome to another interview blog for the rapidly approaching Percona Live 2018. Each post in this series highlights a Percona Live 2018 featured talk at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post highlights Alexis Guajardo, Senior Software Engineer at Google.com. His session is titled Data Integrity at Scale. Keeping data safe is the top responsibility of anyone running a database. In this session, he dives into Cloud SQL’s storage architecture to demonstrate how they check data down to the disk level:

Percona: Who are you, and how did you get into databases? What was your path to your current responsibilities?

Alexis: I am a Software Engineer on the Cloud SQL team with Google Cloud. I got into databases by using FileMaker. However, the world of database technology has changed many times over since then.

Percona: Your session is titled “Data Integrity at Scale”. Has the importance of data integrity increased over time? Why?

Alexis: Data integrity has always been vital to databases and data in general. The most common method is using checksum validation to ensure data integrity. The challenge that we faced at Cloud SQL on Google Cloud was how to do this for two very popular open source database solutions, and how to do it at scale. The story for MySQL was a bit more straightforward because of innochecksum. PostgreSQL required our team to create a utility, which is open sourced. The complicated aspect of data corruption is that sometimes it is dormant and discovered at the most inopportune time. What we have instituted are frequent checks for corruption of the entire data set, so if there is a software bug or other issue, we can mitigate it as soon as possible.

Percona: How does scaling affect the ability to maintain data integrity?

Alexis: There is a benefit to working on a team that provides a public cloud. Since Google Cloud is not bound by most of the restrictions that an individual or company would be, we can allocate resources to do data integrity verifications without restriction. If I were to implement a similar system at a smaller company, most likely there would be cost and resource restrictions. However, data integrity is a feature that Google Cloud provides.

Percona: What are three things a DBA should know about ensuring data integrity?

Alexis: I think that the three things can be simplified down to three words: verify your backups.

Even if someone does not use Cloud SQL, it is vital to take backups, maintain them and verify them. Having terabytes of backups, but without verification, leaves open the possibility that a software bug or hardware issue somehow corrupted a backup.

Percona: Why should people attend your talk? What do you hope people will take away from it? 

Alexis: I would say the main reason to attend my talk is to discover more about Cloud SQL. As a DBA or developer, having a managed database-as-a-service solution takes away a lot of the minutiae. But there are still the tasks of improving queries and creating applications. However, having reliable and verified backups is vital. With the addition of high availability and the ability to scale up easily, Cloud SQL’s managed database solution makes life much easier.

Percona: What are you looking forward to at Percona Live (besides your talk)?

Alexis: The many talks about Vitess look very interesting. It is also an open source Google technology, and it will be interesting to see how many companies have adopted it and how they have benefited from its use.

Want to find out more about this Percona Live 2018 featured talk, and data integrity at scale? Register for Percona Live 2018, and see Alexis’ session, Data Integrity at Scale. Register now to get the best price! Use the discount code SeeMeSpeakPL18 for 10% off.

Percona Live Open Source Database Conference 2018 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference will be April 23-25, 2018 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Apr 23, 2018 --

This Week In Data with Colin Charles 36: Percona Live 2018

Percona Live Santa Clara 2018! Last week’s column may have somehow not made it to Planet MySQL, so please don’t miss the good links at: This Week in Data with Colin Charles 35: Percona Live 18 final countdown and a roundup of recent news.

Back to Percona Live – I expect people are still going to be registering, right down to the wire! I highly recommend you also register for the community dinner. It routinely sells out and people tend to complain about not being able to join in the fun, so reserve your spot early. Please also be present on Monday, which is not just tutorial day: during the welcoming reception, the most excellent community awards will be handed out. In addition, if you don’t find a tutorial you’re interested in (or didn’t get a ticket that includes tutorials!), why not check out the China Track? It is something new and unique that showcases the technology coming out of China.

The biggest news this week? On Thursday, April 19, 2018, MySQL 8.0 became Generally Available with the 8.0.11 release. The release notes are a must read, as is the upgrade guide (this time around, you really want to read it!). Some more digestible links: What’s New in MySQL 8.0? (Generally Available), MySQL 8.0: New Features in Replication, MySQL 8.0 – Announcing GA of the MySQL Document Store. As a bonus, the Hacker News thread is also well worth a read. Don’t forget that all the connectors also got a nice version bump.

The PostgreSQL website has been redesigned – check out PostgreSQL.org.

More open source databases are always a good thing, and it’s great to see Apple open sourcing FoundationDB. Being corporate-backed open source, I have great hopes for what the project can become. The requisite Hacker News thread is also well worth a read.

Releases

  • PostgreSQL 10.3, 9.6.8, 9.5.12, 9.4.17, and 9.3.22 released
  • MariaDB 10.3.6 is another release candidate, with more changes for sql_mode=oracle, changes to the INFORMATION_SCHEMA tables around system versioning, and more. Particularly interesting is the contributor list, which lists a total of 34 contributors. Five come from the MariaDB Foundation (including Monty), which is 14%; 17 come from the MariaDB Corporation (including Monty again), which is 50%; two come from Tempesta, one from IBM, six from Codership (over 17%!), and four are independent. So nearly 62% of contributions come from the Corporation/Foundation in total.
  • SysbenchRocks, a repository of Sysbench benchmarks, libraries and extensions.

Link List

Upcoming appearances

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Apr 19, 2018 --

Congratulations to Our Friends at Oracle with the MySQL 8.0 GA Release!

It is a great day for the whole MySQL community: MySQL 8.0 was just released as GA!

Geir Høydalsvik has a great summary in his “What’s New in MySQL 8.0” blog post. You can also find additional information about MySQL 8.0 Replication and the MySQL 8.0 Document Store that is worth reading.

If you can’t wait to upgrade to MySQL 8.0, please make sure to read the Upgrading to MySQL 8.0 section in the manual, and pay particular attention to changes to Connection Authentication. It requires special handling for most applications.
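
If your application or connector cannot yet handle the new caching_sha2_password default, one common workaround (my illustration, not part of the original post; the account name and password are placeholders) is to fall back to the previous authentication plugin, either server-wide or per account:

# /etc/my.cnf: make new accounts default to the pre-8.0 plugin
[mysqld]
default_authentication_plugin=mysql_native_password

-- or keep the new default and switch only specific accounts
ALTER USER 'app'@'%' IDENTIFIED WITH mysql_native_password BY 'app_password';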

Also keep in mind that while MySQL 8.0 passed through an extensive QA process, this is the first GA release. It is not yet as mature and polished as MySQL 5.7. If you’re just now starting application development, however, you should definitely start with MySQL 8.0 — by the time you launch your application, 8.0 will be good. 

All of us at Percona – and me personally – are very excited about this release. You can learn more details about what we expect from it in our Why We’re Excited about MySQL 8.0 webinar recording.    

We also wrote extensively about MySQL 8.0 on our blog. Below are some posts on various features, as well as thoughts on the various RCs, that you might want to review:

The best way to learn about MySQL 8.0, though, is to attend the Percona Live Open Source Database Conference 2018, taking place in Santa Clara, CA next week. We have an outstanding selection of MySQL 8.0 focused talks both from the MySQL Engineering team and the community at large (myself included):

You can still get tickets to the conference. Come by and learn about MySQL 8.0. If you can’t make it, please check back later for slides.

Done reading? Go ahead and download MySQL 8.0 and check it out!

Apr 18, 2018 --

Webinar Thursday, April 19, 2018: Running MongoDB in Production, Part 1

Please join Percona’s Senior Technical Operations Architect, Tim Vaillancourt, as he presents Running MongoDB in Production, Part 1 on Thursday, April 19, 2018, at 10:00 am PDT (UTC-7) / 1:00 pm EDT (UTC-4).

Are you a seasoned MySQL DBA who needs to add MongoDB to your skills? Are you used to managing a small environment that runs well, but want to know what you might not know yet? This webinar will help you run MongoDB in production environments.

MongoDB works well, but when it has issues, the number one question is “where should I go to solve a problem?”

This tutorial will cover:

  • Backups
    – Logical vs. binary-level backups
    – Sharding and replica set backup strategies
  • Security
    – Filesystem and network security
    – Operational security
    – External authentication features of Percona Server for MongoDB
    – Securing connections with SSL and MongoDB authorization
    – Encryption at rest
    – New security features in 3.6
  • Monitoring
    – Monitoring strategy
    – Important metrics to monitor in MongoDB and Linux
    – Percona Monitoring and Management

Register for the webinar now.

Part 2 of this series will take place on Thursday, April 26, 2018, at 10:00 am PDT (UTC-7) / 1:00 pm EDT (UTC-4). Register for the second part of this series here.

Timothy Vaillancourt, Senior Technical Operations Architect

Tim joined Percona in 2016 as Sr. Technical Operations Architect for MongoDB, with the goal to make the operations of MongoDB as smooth as possible. With experience operating infrastructures in industries such as government, online marketing/publishing, SaaS and gaming combined with experience tuning systems from the hard disk all the way up to the end-user, Tim has spent time in nearly every area of the modern IT stack with many lessons learned. Tim is based in Amsterdam, NL and enjoys traveling, coding and music.

Prior to Percona, Tim was the Lead MySQL DBA of Electronic Arts’ DICE studio, helping some of the largest games in the world (the “Battlefield” series, the “Mirror’s Edge” series, “Star Wars: Battlefront”) launch and operate smoothly while also leading the automation of MongoDB deployments for EA systems. Before the role of DBA at EA’s DICE studio, Tim served as a subject matter expert in NoSQL databases, queues and search on the Online Operations team at EA SPORTS. Before moving to the gaming industry, Tim served as a Database/Systems Admin operating a large MySQL-based SaaS infrastructure at AbeBooks/Amazon Inc.

Apr 13, 2018 --

MongoDB Replica Set Tag Sets

In this blog post, we will look at MongoDB replica set tag sets, which enable you to use customized write concern and read preferences for replica set members.

This blog post will cover most of the questions that come to mind before using tag sets in a production environment.

  • What scenarios are these helpful for?
  • Do these tag sets work with all read preference modes?
  • If we’re already using maxStalenessSeconds along with read preferences, can we still use tag sets?
  • How can one configure tag sets in a replica set?
  • Do these tags work identically for custom read preferences and write concerns?

Now let’s answer all these questions one by one.

What scenarios are these helpful for?

You can use tags:

  • If replica set members have different configurations and queries need to be redirected to specific secondaries according to their purpose. For example, production queries can be redirected to higher-configuration members for faster execution, while queries used for internal reporting purposes can be redirected to lower-configuration secondaries. This helps improve per-node resource utilization.
  • When you use custom read preferences, but reads are routed to a secondary that resides in another data center. You can use tag sets to make sure that specific reads are routed to specific secondary nodes within the data center, making reads more optimized and cost-effective.
  • If you want to use custom write concerns with tag sets to acknowledge that writes have propagated to the secondary nodes per your requirements.

Do these tag sets work with all read preference modes?

Yes, these tag sets work with all read preference modes except “primary”. The “primary” read preference mode doesn’t allow you to add any tag sets while querying.

replicaTest:PRIMARY> db.tagTest.find().readPref('primary', [{"specs" : "low","purpose" : "general"}])
Error: error: {
	"ok" : 0,
	"errmsg" : "Only empty tags are allowed with primary read preference",
	"code" : 2,
	"codeName" : "BadValue"
}

If we’re already using maxStalenessSeconds along with read preferences, can tag sets still be used?

Yes, you can use tag sets with a maxStalenessSeconds value. In that case, priority is given to staleness first, then tags, to get the most recent data from the secondary member.
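
As an illustration (my own sketch, not from the original post), a driver connection string that combines a staleness bound with the tag set used later in this post could look like this; the host names come from the test replica set below and the 120-second bound is just an example:

mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=replicaTest&readPreference=secondary&maxStalenessSeconds=120&readPreferenceTags=specs:low,purpose:general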

How can one configure tag sets in a replica set?

You can configure tags by adding a parameter in the replica set configuration. Consider this test case with a five-member replica set:

"members" : [
		{
			"_id" : 0,
			"name" : "host1:27017",
			"stateStr" : "PRIMARY",
		},
		{
			"_id" : 1,
			"name" : "host2:27017",
			"stateStr" : "SECONDARY",
		},
		{
			"_id" : 2,
			"name" : "host3:27017",
			"stateStr" : "SECONDARY",
		},
		{
			"_id" : 3,
			"name" : "host4:27017",
			"stateStr" : "SECONDARY",
		},
		{
			"_id" : 4,
			"name" : "host5:27017",
			"stateStr" : "SECONDARY",
         }
		]

For our test case, the “specs” tag describes the member’s hardware specification and the “purpose” tag describes the application’s query requirement, so that queries can be routed to specific members in an optimized manner.

You must associate tags to each member by adding it to the replica set configuration:

cfg=rs.conf()
cfg.members[0].tags={"specs":"high","purpose":"analytics"}
cfg.members[1].tags={"specs":"high"}
cfg.members[2].tags={"specs":"low","purpose":"general"}
cfg.members[3].tags={"specs":"high","purpose":"analytics"}
cfg.members[4].tags={"specs":"low"}
rs.reconfig(cfg)

After adding tags, you can validate these changes by checking the replica set configuration:

rs.conf()
	"members" : [
		{
			"_id" : 0,
			"host" : "host1:27017",
			"tags" : {
				"specs" : "high",
				"purpose" : "analytics"
			},
		},
		{
			"_id" : 1,
			"host" : "host2:27017",
			"tags" : {
				"specs" : "high"
			},
		},
		{
			"_id" : 2,
			"host" : "host3:27017",
			"tags" : {
				"specs" : "low",
				"purpose" : "general"
			},
		},
		{
			"_id" : 3,
			"host" : "host4:27017",
			"tags" : {
				"specs" : "high",
				"purpose" : "analytics"
			},
		},
		{
			"_id" : 4,
			"host" : "host5:27017",
			"tags" : {
				"specs" : "low"
			},
		}
	]

Now, we are done with the tag-set configuration.

Do these tags work identically for custom read preferences and write concerns?

No, custom read preferences and write concerns consider tag sets in different ways.

Read preferences route read operations to specific members according to the tag values assigned to them, but write concerns use tag values only to check whether the values are unique. They do not consider the tag values themselves when selecting replica set members.

Let us see how to use tag sets with write concerns. As per our test case, we have two unique tag values (i.e., “analytics” and “general”) defined as:

cfg=rs.conf()
cfg.settings={ getLastErrorModes: {writeNode:{"purpose": 2}}}
rs.reconfig(cfg)

You can validate these changes by checking the replica set configuration:

rs.conf()
	"settings" : {
			"getLastErrorModes" : {
			"writeNode" : {
				"purpose" : 2
			}
		},
	}

Now let’s try to insert a sample document in the collection named “tagTest” with this write concern:

db.tagTest.insert({name:"tom",tech:"nosql",status:"active"},{writeConcern:{w:"writeNode"}})
WriteResult({ "nInserted" : 1 })

Here, the write concern “writeNode” means the client gets a write acknowledgment from two nodes with unique tag set values. If the value set in the configuration exceeds the count of unique values, then it leads to an error at the time of the write:

cfg.settings={ getLastErrorModes: {writeNode:{"purpose": 4}}}
rs.reconfig(cfg)
db.tagTest.insert({name:"tom",tech:"nosql",status:"active"},{writeConcern:{w:"writeNode"}})
WriteResult({
	"nInserted" : 1,
	"writeConcernError" : {
		"code" : 100,
		"codeName" : "CannotSatisfyWriteConcern",
		"errmsg" : "Not enough nodes match write concern mode "writeNode""
	}
}

You can perform read and write operations with tag sets like this:

db.tagTest.find({name:"tom"}).readPref("secondary",[{"specs":"low","purpose":"general"}])
db.tagTest.insert({name:"john",tech:"rdbms",status:"active"},{writeConcern:{w:"writeNode"}})

I hope this helps you understand how to configure MongoDB replica set tag sets, how read preferences and write concerns handle them, and where you can use them.

Apr 5, 2018 --

Percona Live Europe 2018 – Save the Date!

We’ve been searching for a great venue for Percona Live Europe 2018, and I am thrilled to announce we’ll be hosting it in Frankfurt, Germany! Please block November 5-7, 2018 on your calendar now and plan to join us at the Radisson Blu Frankfurt for the premier open source database conference.

We’re in the final days of organizing Percona Live 2018 in Santa Clara. You can still purchase tickets for an amazing lineup of keynote speakers, tutorials and sessions. We have ten tracks, including MySQL, MongoDB, Cloud, PostgreSQL, Containers and Automation, Monitoring and Ops, and Database Security. Major areas of focus at the conference will include:

  • Database operations and automation at scale, featuring speakers from Facebook, Slack, GitHub and more
  • Databases in the cloud – how database-as-a-service (DBaaS) is changing the DB landscape, featuring speakers from AWS, Microsoft, Alibaba and more
  • Security and compliance – how GDPR and other government regulations are changing the way we manage databases, featuring speakers from Fastly, Facebook, Pythian, Percona and more
  • Bridging the gap between developers and DBAs – finding common ground, featuring speakers from Square, Oracle, Percona and more

The Call for Papers for Percona Live Europe will open soon. We look forward to seeing you in Santa Clara!

Apr 5, 2018 --

Managing MongoDB Bulk Deletes and Inserts with Minimal Impact to Production Environments

In this blog post, we’ll look at how to manage MongoDB bulk deletes and inserts with little impact on production traffic.

If you are like me, there is no end to the demands placed on you as a DBA. One of the biggest is when we want to load X% more data into the database, during peak traffic no less. I refer to this as MongoDB bulk deletes and inserts. As a DBA, my first reaction is “no, do this during off-peak hours.” However, the business person in me asks: what if this is because clients are loading a customer, product, or email list into the system to work with during business hours? That puts it in another light, does it not?

This raises the question: how can we change data in the database as fast as possible while giving the production system some breathing room? In this blog, I wanted to give you some nice scripts that you can load into your MongoDB shell to really simplify the process.

First, we will cover an iterative delete function that can be stopped and restarted at any time. Next, I will talk about smart updating with similarly planned overhead. Lastly, I want to talk about more advanced forms of health checking when you want to do something a bit smarter than where this basic series of scripts stop.

Bulk Deleting with a Plan

In this code, you can see there are a couple of ways to manage these deletes. Specifically, you can see how to call this from anywhere (deleteFromCollection). I’ve also shown how to extend the shell so you can call (db.collection.deleteBulk). This avoids the need to provide the namespace, as it can discover that from the context of the function.

The idea behind this function is pretty straightforward: you provide it with a find pattern for what you want to delete. This could be { } if you don’t want to restrict it, but in that case you should use .drop() instead. After that, it expects a batch size, which is the number of document IDs to delete in a single go. There is a trade-off between smaller batches with more iterations and larger batches with fewer iterations. Keep in mind this means there are 1,000 oplog entries per batch (as in my examples). You should consider this carefully and watch your oplog range as a result. You could improve this to allow the script to check that size, but it requires more permissions (we’ll leave that discussion for another time). Finally, between batches, the function sleeps for pauseMS milliseconds.

If you find that the overhead is too much for you, simply kill the shell running this and it will stop. You can then reduce the batch size, increase the pause, or both, to make the system handle the change better. Sadly, this is not an exact science, as different people consider different levels of impact “acceptable” for so many writes. We will talk about this more in a bit:

function parseNS(ns){
    //Expects we are forcing people to not violate the rules and not doing "foodb.foocollection.month.day.year" if they do they need to use an array.
    if (ns instanceof Array){
        database =  ns[0];
        collection = ns[1];
    }
    else{
        tNS =  ns.split(".");
        if (tNS.length > 2){
            print('ERROR: NS had more than 1 period in it, please pass as an [ "dbname","coll.name.with.dots"] !');
            return false;
        }
        database = tNS[0];
        collection = tNS[1];
    }
    return {database: database,collection: collection};
}
DBCollection.prototype.deleteBulk = function( query, batchSize, pauseMS){
    //Parse and check namespaces
    ns = this.getFullName();
    srcNS={
        database:   ns.split(".")[0],
        collection: ns.split(".").slice(1,ns.length).join("."),
    };
    var db = this._db;
    var batchBucket = new Array();
    var totalToProcess = db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).find(query,{_id:1}).count();
    if (totalToProcess < batchSize){ batchSize = totalToProcess; }
    currentCount = 0;
    print("Processed "+currentCount+"/"+totalToProcess+"...");
    db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).find(query).addOption(DBQuery.Option.noTimeout).forEach(function(doc){
        batchBucket.push(doc._id);
        if ( batchBucket.length >= batchSize){
            printjson(db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).remove({_id : { "$in" : batchBucket}}));
            currentCount += batchBucket.length;
            batchBucket = [];
            sleep (pauseMS);
            print("Processed "+currentCount+"/"+totalToProcess+"...");
        }
    })
    print("Completed");
}
function deleteFromCollection( sourceNS, query, batchSize, pauseMS){
    //Parse and check namespaces
    srcNS = parseNS(sourceNS);
    if (srcNS == false) { return false; }
    batchBucket = new Array();
    totalToProcess = db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).find(query,{_id:1}).count();
    if (totalToProcess < batchSize){ batchSize = totalToProcess};
    currentCount = 0;
    print("Processed "+currentCount+"/"+totalToProcess+"...");
    db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).find(query).addOption(DBQuery.Option.noTimeout).forEach(function(doc){
        batchBucket.push(doc._id);
        if ( batchBucket.length >= batchSize){
            db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).remove({_id : { "$in" : batchBucket}});
            currentCount += batchBucket.length;
            batchBucket = [];
            sleep (pauseMS);
            print("Processed "+currentCount+"/"+totalToProcess+"...");
        }
    })
    print("Completed");
}
/** Example Usage:
    deleteFromCollection("foo.bar",{"type":"archive"},1000,20);
  or
    db.bar.deleteBulk({type:"archive"},1000,20);
**/

Inserting & Updating with a Plan

Not to be outdone by the deletes, MongoDB updates and inserts lend themselves to the same logic. For inserts, only small changes are needed: build batches of documents and pass .insert(batchBucket) to the shell. Using sleep between batches gives other reads and actions in the system some breathing room. I find we don’t need this for modern MongoDB using WiredTiger, but your mileage may vary based on workload. Also, you might want to figure out a way to tell the script how to handle a document that already exists. In the case of data loading, you could wrap the script with a check for errors other than a duplicate key. Please note that it’s very easy to duplicate data if you do not have a unique index and MongoDB is auto-assigning its own _id field.
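
Here is a minimal sketch of the insert variant (my own illustration, not code from the original post; insertBulk and loadedDocs are hypothetical names) that batches documents and pauses between insert calls:

function insertBulk(targetColl, docs, batchSize, pauseMS){
    var batchBucket = [];
    var currentCount = 0;
    docs.forEach(function(doc){
        batchBucket.push(doc);
        if (batchBucket.length >= batchSize){
            printjson(targetColl.insert(batchBucket));   // insert the whole batch in one call
            currentCount += batchBucket.length;
            batchBucket = [];
            sleep(pauseMS);                              // give production traffic some breathing room
            print("Processed "+currentCount+"/"+docs.length+"...");
        }
    });
    if (batchBucket.length > 0){ printjson(targetColl.insert(batchBucket)); }   // flush the last partial batch
    print("Completed");
}
/** Example Usage:
    insertBulk(db.bar, loadedDocs, 1000, 20);
**/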

Updates are a tad trickier, as they can be expensive if the query portion of the code is not indexed. I’ve provided you with an example, however. You should consider the query time when planning batches and pauses: the more the update relies on a table scan, the smaller the batch you should consider. The reasoning here is that we want to avoid restarting and causing a new table scan as much as possible. A future improvement might be to also support reads from a secondary, doing the update itself on the primary by the _id field, to ensure a pin-pointed update query.

DBCollection.prototype.updateBulk = function( query, changeObject, batchSize, pauseMS){
    //Parse and check namespaces
    ns = this.getFullName();
    srcNS={
        database:   ns.split(".")[0],
        collection: ns.split(".").slice(1,ns.length).join("."),
    };
    var db = this._db;
    var batchBucket = new Array();
    var totalToProcess = db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).find(query,{_id:1}).count();
    if (totalToProcess < batchSize){ batchSize = totalToProcess; }
    currentCount = 0;
    print("Processed "+currentCount+"/"+totalToProcess+"...");
    db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).find(query).addOption(DBQuery.Option.noTimeout).forEach(function(doc){
        batchBucket.push(doc._id);
        if ( batchBucket.length >= batchSize){
            var bulk = db.getSiblingDB(srcNS.database).getCollection(srcNS.collection).initializeUnorderedBulkOp();
            batchBucket.forEach(function(id){
                bulk.find({_id:id}).update(changeObject);   // batchBucket holds _id values pushed above
            })
            printjson(bulk.execute());
            currentCount += batchBucket.length;
            batchBucket = [];
            sleep (pauseMS);
            print("Processed "+currentCount+"/"+totalToProcess+"...");
        }
    })
    print("Completed");
}
/** Example Usage:
    db.bar.updateBulk({type:"archive"},{$set:{archiveDate: ISODate()}},1000,20);
**/

In each iteration, the update prints out the bulk result, including any failures. You can extend this example code to either write the failures to a file or try to automatically fix any issues as appropriate. My goal here is to provide you with a starter function to build on. As with the earlier example, this assumes the JS shell, but you can follow the logic in the programming language of your choice if you would rather use Python, Golang or Java.

If you got nothing else from this blog on MongoDB bulk deletes and inserts, I hope you learned a good deal more about writing functions in the shell. Hopefully, you learned how to use programming to add pauses to the bulk operations you need to do. Taking this forward, you could be inventive: use a query that measures latency to trigger pauses (a canary query), or even measure things like the oplog to ensure you’re not adversely impacting HA and replication. There is no right answer, but this is a great start towards more operationally safe ways to do the bigger actions DBAs are asked to do from time to time.
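
As a rough illustration of the canary-query idea mentioned above (my own sketch, not part of the original scripts; the 50 ms threshold and 4x back-off are arbitrary), you could time a cheap indexed read between batches and stretch the pause when the server looks busy:

function adaptivePause(coll, basePauseMS){
    var start = new Date();
    coll.findOne({}, {_id: 1});                                     // cheap canary read
    var latencyMS = new Date() - start;
    var pause = (latencyMS > 50) ? basePauseMS * 4 : basePauseMS;   // back off if the canary is slow
    sleep(pause);
}
/** Example Usage (inside the batch loop, in place of sleep(pauseMS)):
    adaptivePause(db.getSiblingDB(srcNS.database).getCollection(srcNS.collection), pauseMS);
**/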

Apr 4, 2018 --

Pattern Matching Queries vs. Full-Text Indexes

In this blog post, we’ll compare the performance of pattern matching queries vs. full-text indexes.

In my previous blog post, I looked for a solution for how we can search for only a part of an email address, and how we can make queries faster where the condition is email LIKE '%n.pierre%'. I showed two possible ways that could work. Of course, they had some pros and cons as well, but they were more efficient and faster than a LIKE '%n.pierre%'.

But you could also ask why I would bother with this: just add a FULLTEXT index, and everybody is happy! Are you sure about that? I’m not. Let’s investigate and test a bit. (We have some nice blog posts that explain how FULLTEXT indexes work: Post 1, Post 2, Post 3.)

Let’s see if it works in our case where we were looking for email addresses. Here is the table:

CREATE TABLE `email` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `email` varchar(120) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `idx_email` (`email`)
) ENGINE=InnoDB AUTO_INCREMENT=318465 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

Add the default full-text index:

ALTER TABLE email ADD FULLTEXT KEY (email);

It took only five seconds for 320K email addresses.

Let’s run a search:

SELECT id, email FROM email where MATCH(email) AGAINST ('n.pierre' IN NATURAL LANGUAGE MODE);
+--------+--------------------------------+
| id     | email                          |
+--------+--------------------------------+
|   2940 | pierre.west@example.org        |
|  10775 | pierre.beier@example.org       |
|  24267 | schroeder.pierre@example.org   |
|  26285 | bode.pierre@example.org        |
|  27104 | pierre.franecki@example.org    |
|  31792 | pierre.jaskolski@example.com   |
|  39369 | kuphal.pierre@example.org      |
|  58625 | olson.pierre@example.org       |
|  59526 | larkin.pierre@example.net      |
|  64718 | boyle.pierre@example.com       |
|  72033 | pierre.wolf@example.net        |
|  90587 | anderson.pierre@example.org    |
| 108806 | fadel.pierre@example.org       |
| 113897 | jacobs.pierre@example.com      |
| 118579 | hudson.pierre@example.com      |
| 118798 | pierre.wuckert@example.org     |
| 118937 | green.pierre@example.net       |
| 125451 | hauck.pierre@example.net       |
| 133352 | friesen.pierre@example.net     |
| 134594 | windler.pierre@example.com     |
| 135406 | dietrich.pierre@example.org    |
| 190451 | daugherty.pierre@example.org   |
...

Immediately, we have issues with the results. It returns 43 rows, but there are only 11 rows with the string n.pierre. Why? It is because of the . (period) character. The manual says:

The built-in FULLTEXT parser determines where words start and end by looking for certain delimiter characters; for example,   (space), , (comma), and . (period).

The parser believes that a . starts a new word, so it is going to search for pierre instead of n.pierre. That’s not good news, as many email addresses contain a . character. What can we do? The manual says:

It is possible to write a plugin that replaces the built-in full-text parser. For details, see Section 28.2, “The MySQL Plugin API”. For example parser plugin source code, see the plugin/fulltext directory of a MySQL source distribution.

If you are willing to write your own plugin in C/C++, you can try that route. Until then, it is going to give us back a lot of irrelevant matches.

We can order the results by relevancy:

SELECT id,email,MATCH(email) AGAINST ('n.pierre' IN NATURAL LANGUAGE MODE)
 AS score FROM email where MATCH(email) AGAINST
('n.pierre' IN NATURAL LANGUAGE MODE) ORDER BY 3 desc limit 10;
+-------+------------------------------+-------------------+
| id    | email                        | score             |
+-------+------------------------------+-------------------+
|  2940 | pierre.west@example.org      | 14.96491813659668 |
| 10775 | pierre.beier@example.org     | 14.96491813659668 |
| 24267 | schroeder.pierre@example.org | 14.96491813659668 |
| 26285 | bode.pierre@example.org      | 14.96491813659668 |
| 27104 | pierre.franecki@example.org  | 14.96491813659668 |
| 31792 | pierre.jaskolski@example.com | 14.96491813659668 |
| 39369 | kuphal.pierre@example.org    | 14.96491813659668 |
| 58625 | olson.pierre@example.org     | 14.96491813659668 |
| 59526 | larkin.pierre@example.net    | 14.96491813659668 |
| 64718 | boyle.pierre@example.com     | 14.96491813659668 |
+-------+------------------------------+-------------------+

This does not guarantee we get back the lines that we are looking for, however. I tried to change innodb_ft_min_token_size as well, but it did not affect the results.
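
For completeness (my note, not from the original post): innodb_ft_min_token_size is not a dynamic variable, so a new value only takes effect after a server restart followed by a rebuild of the FULLTEXT index, roughly like this:

# /etc/my.cnf
[mysqld]
innodb_ft_min_token_size=2

-- after restarting mysqld, rebuild the FULLTEXT index so the new token size is used
-- (the auto-generated index name may differ; check SHOW CREATE TABLE email)
ALTER TABLE email DROP INDEX email;
ALTER TABLE email ADD FULLTEXT KEY (email);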

Let’s see what happens when I search for williamson pierre. Two separate words. I know there is only one email address with these names.

SELECT id,email,MATCH(email) AGAINST
('williamson.pierre' IN NATURAL LANGUAGE MODE) AS score
FROM email where MATCH(email) AGAINST
('williamson.pierre' IN NATURAL LANGUAGE MODE) ORDER BY 3 desc limit 50;
+--------+---------------------------------+-------------------+
| id     | email                           | score             |
+--------+---------------------------------+-------------------+
| 238396 | williamson.pierre@example.net   | 24.08820343017578 |
|   2940 | pierre.west@example.org         | 14.96491813659668 |
|  10775 | pierre.beier@example.org        | 14.96491813659668 |
|  24267 | schroeder.pierre@example.org    | 14.96491813659668 |
|  26285 | bode.pierre@example.org         | 14.96491813659668 |
|  27104 | pierre.franecki@example.org     | 14.96491813659668 |
|  31792 | pierre.jaskolski@example.com    | 14.96491813659668 |
|  39369 | kuphal.pierre@example.org       | 14.96491813659668 |
|  58625 | olson.pierre@example.org        | 14.96491813659668 |
...

The first result is the one we are looking for, but we still got another 49 addresses. How can the application decide which email address is relevant and which is not? I am still not happy.

Are there any other options without writing our own plugin?

Can I somehow tell the parser to use n.pierre as one word? The manual says:

A phrase that is enclosed within double quote (") characters matches only rows that contain the phrase literally, as it was typed. The full-text engine splits the phrase into words and performs a search in the FULLTEXT index for the words. Nonword characters need not be matched exactly: Phrase searching requires only that matches contain exactly the same words as the phrase and in the same order. For example, "test phrase" matches "test, phrase".

I can use double quotes, but it will still split at the . character, and the results are the same. I did not find a solution other than writing your own plugin. If someone knows a solution, please write a comment.
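
For reference, the phrase-search attempt looks like this (my own sketch; phrase searching with double quotes requires BOOLEAN MODE), and the period is still treated as a word delimiter:

SELECT id, email FROM email
WHERE MATCH(email) AGAINST ('"n.pierre"' IN BOOLEAN MODE);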

With Parser Ngram

The built-in MySQL full-text parser uses delimiters between words, but we can create an Ngram-based full-text index.

mysql> alter table  email ADD FULLTEXT KEY (email) WITH PARSER ngram;
Query OK, 0 rows affected (20.10 sec)
Records: 0  Duplicates: 0  Warnings: 0

Before that, I changed the ngram_token_size to 3.

mysql> SELECT id,email,MATCH(email) AGAINST ('n.pierre' IN NATURAL LANGUAGE MODE) AS score FROM email where MATCH(email) AGAINST ('n.pierre' IN NATURAL LANGUAGE MODE) ORDER BY 3 desc;
+--------+----------------------------------+--------------------+
| id     | email                            | score              |
+--------+----------------------------------+--------------------+
|  58625 | olson.pierre@example.org         |  16.56794548034668 |
|  59526 | larkin.pierre@example.net        |  16.56794548034668 |
|  90587 | anderson.pierre@example.org      |  16.56794548034668 |
| 118579 | hudson.pierre@example.com        |  16.56794548034668 |
| 118937 | green.pierre@example.net         |  16.56794548034668 |
| 133352 | friesen.pierre@example.net       |  16.56794548034668 |
| 200608 | wilkinson.pierre@example.org     |  16.56794548034668 |
| 237928 | johnson.pierre@example.org       |  16.56794548034668 |
| 238396 | williamson.pierre@example.net    |  16.56794548034668 |
| 278384 | monahan.pierre@example.net       |  16.56794548034668 |
| 306718 | rohan.pierre@example.com         |  16.56794548034668 |
| 226737 | warren.pfeffer@example.net       | 12.156486511230469 |
|  74278 | stiedemann.perry@example.net     |  11.52701187133789 |
|  75234 | bogan.perry@example.org          |  11.52701187133789 |
...
4697 rows in set (0.03 sec)

Finally, we are getting somewhere. But it gives back 4697 rows. How can the application decide which results are relevant? Should we just use the score?

Subselect?

I dropped the Ngram FULLTEXT index and created a normal one because that gives me back only 43 results instead of 4697. I thought a full-text search might be good to narrow down the results from a million to a few thousand, and then we can run a select based on that. Example:

mysql> Select e2.id,e2.email from
(SELECT id,email FROM email where MATCH(email)
AGAINST ('n.pierre' IN NATURAL LANGUAGE MODE))
as e2 where e2.email like '%n.pierre%';
+--------+-------------------------------+
| id     | email                         |
+--------+-------------------------------+
|  58625 | olson.pierre@example.org      |
|  59526 | larkin.pierre@example.net     |
|  90587 | anderson.pierre@example.org   |
| 118579 | hudson.pierre@example.com     |
| 118937 | green.pierre@example.net      |
| 133352 | friesen.pierre@example.net    |
| 200608 | wilkinson.pierre@example.org  |
| 237928 | johnson.pierre@example.org    |
| 238396 | williamson.pierre@example.net |
| 278384 | monahan.pierre@example.net    |
| 306718 | rohan.pierre@example.com      |
+--------+-------------------------------+
11 rows in set (0.00 sec)

Wow, this can work and it looks quite fast as well. BUT (there is always a but), if I run the following query (searching for ierre):

mysql> Select e2.id,e2.email from
(SELECT id,email FROM email where MATCH(email)
AGAINST ('ierre' IN NATURAL LANGUAGE MODE))
as e2 where e2.email like '%ierre%';
Empty set (0.00 sec)

It gives back nothing because the default full-text parser uses only full words! In our case, that is not very helpful. Let's switch back to Ngram and re-run the query:

mysql> Select e2.id,e2.email from
(SELECT id,email FROM email where MATCH(email)
AGAINST ('ierre' IN NATURAL LANGUAGE MODE))
as e2 where e2.email like '%ierre%';
+--------+--------------------------------+
| id     | email                          |
+--------+--------------------------------+
|   2940 | pierre.west@example.org        |
|  10775 | pierre.beier@example.org       |
|  16958 | pierre68@example.com           |
|  24267 | schroeder.pierre@example.org   |
...
65 rows in set (0.05 sec)
+-------------------------+----------+
| Status                  | Duration |
+-------------------------+----------+
| starting                | 0.000072 |
| checking permissions    | 0.000006 |
| Opening tables          | 0.000014 |
| init                    | 0.000027 |
| System lock             | 0.000007 |
| optimizing              | 0.000006 |
| statistics              | 0.000013 |
| preparing               | 0.000006 |
| FULLTEXT initialization | 0.006384 |
| executing               | 0.000012 |
| Sending data            | 0.020735 |
| end                     | 0.000014 |
| query end               | 0.000014 |
| closing tables          | 0.000013 |
| freeing items           | 0.001383 |
| cleaning up             | 0.000024 |
+-------------------------+----------+

It gives us back 65 rows, and it takes between 0.02-0.05s because the subquery results in many rows.

With my "shorting method":

select e.email from email as e right join email_tib as t
on t.email_id=e.id where t.email_parts like "ierre%";
+--------------------------------+
| email                          |
+--------------------------------+
| anderson.pierre@example.org    |
| bode.pierre@example.org        |
| bode.pierre@example.org        |
| boyle.pierre@example.com       |
| bradtke.pierre@example.org     |
| bradtke.pierre@example.org     |
...
65 rows in set (0.00 sec)
mysql> show profile;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000069 |
| checking permissions | 0.000011 |
| checking permissions | 0.000003 |
| Opening tables       | 0.000020 |
| init                 | 0.000021 |
| System lock          | 0.000008 |
| optimizing           | 0.000009 |
| statistics           | 0.000070 |
| preparing            | 0.000011 |
| executing            | 0.000001 |
| Sending data         | 0.000330 |
| end                  | 0.000002 |
| query end            | 0.000007 |
| closing tables       | 0.000005 |
| freeing items        | 0.000014 |
| cleaning up          | 0.000010 |
+----------------------+----------+

It reads and gives back exactly 65 rows and it takes 0.000s.

Conclusion

When it comes to pattern matching queries vs. full-text indexes, it looks like a full-text index can be helpful, and it is built in. Unfortunately, we do not have many metrics regarding full-text indexes. We cannot see how many rows were read, etc. I don’t want to draw any conclusions on which one is faster. I still have to run some tests with our favorite benchmark tool, sysbench, on a much bigger dataset.

I should mention that full-text indexes and my previous solutions won’t solve all the problems. In this and my other blog I was trying to find an answer to a specific problem, but there are cases where my solutions would not work that well.

Apr 2, 2018 --

Plot MySQL Data in Real Time Using Percona Monitoring and Management (PMM)

In this blog post, we’ll show that you can plot MySQL data in real time using Percona Monitoring and Management (PMM).

In my previous blog post, I showed how we could load any metrics or benchmarks into MySQL and visualize them with PMM. But that’s not all! We can visualize almost any kind of data from MySQL in real time. I am falling in love with the MySQL plugin for Grafana — it just makes things so easy and smooth.

This graph shows us the number of visitors to a website in real time (refreshing every 5 seconds).

We have the following table in MySQL:

CREATE TABLE `page_stats` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `visitors` int(11) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `time` (`time`)
) ENGINE=InnoDB AUTO_INCREMENT=9232 DEFAULT CHARSET=latin1

We store the number of visitors every second. I am not saying you have to update this table hundreds or thousands of times per second; it depends on how many visitors you have. You could, for example, use Redis to store and increment this counter, and save it into MySQL every second. Here are my metrics:

mysql> select * from page_stats order by id desc limit 10;
+------+---------------------+----------+
| id   | time                | visitors |
+------+---------------------+----------+
| 9446 | 2018-02-27 21:44:12 |      744 |
| 9445 | 2018-02-27 21:44:11 |      703 |
| 9444 | 2018-02-27 21:44:10 |      791 |
| 9443 | 2018-02-27 21:44:09 |      734 |
| 9442 | 2018-02-27 21:44:08 |      632 |
| 9441 | 2018-02-27 21:44:07 |      646 |
| 9440 | 2018-02-27 21:44:06 |      656 |
| 9439 | 2018-02-27 21:44:05 |      678 |
| 9438 | 2018-02-27 21:44:04 |      673 |
| 9437 | 2018-02-27 21:44:03 |      660 |
+------+---------------------+----------+

We can easily add my MySQL query to Grafana, and it will visualize it for us:
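
A per-second query for such a panel could look roughly like the following (my own sketch based on the page_stats table above; the $__timeFilter macro is explained next):

select
      UNIX_TIMESTAMP(`time`) as time_sec,
      visitors as value,
      'visitors' as metric
   from page_stats
   WHERE $__timeFilter(`time`)
   ORDER BY `time` ASC;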

You might ask “what is $__timeFilter?” I discussed that in the previous post, but let me copy the manual here as well:

Time series:
- return column named time_sec (UTC in seconds), use UNIX_TIMESTAMP(column)
- return column named value for the time point value
- return column named metric to represent the series name
Table:
- return any set of columns
Macros:
- $__time(column) -> UNIX_TIMESTAMP(column) as time_sec
- $__timeFilter(column) ->  UNIX_TIMESTAMP(time_date_time) > 1492750877 AND UNIX_TIMESTAMP(time_date_time) < 1492750877
- $__unixEpochFilter(column) ->  time_unix_epoch > 1492750877 AND time_unix_epoch < 1492750877
- $__timeGroup(column,'5m') -> (extract(epoch from "dateColumn")/extract(epoch from '5m'::interval))::int
Or build your own conditionals using these macros which just return the values:
- $__timeFrom() ->  FROM_UNIXTIME(1492750877)
- $__timeTo() ->  FROM_UNIXTIME(1492750877)
- $__unixEpochFrom() ->  1492750877
- $__unixEpochTo() ->  1492750877

What can I visualize?

It’s true! Basically, if you can write a query, you can graph it. For example, let’s count all the visitors in every minute. Here is the query:

select
      UNIX_TIMESTAMP(ps.time) as time_sec,
      sum(visitors) as value,
      'visitors' as metric
   from
   page_stats as ps
   WHERE $__timeFilter(time)
   GROUP BY DATE_FORMAT(`time`, '%Y-%m-%d %H:%i')
    ORDER BY ps.time ASC;

And it gives us the following graph:

See, it’s easy!

Conclusion

There is no more excuse for not visualizing your data! Percona Monitoring and Management lets you plot MySQL data in real time. You do not have to move it anywhere or change anything! Just grant PMM read access, and you can start creating your own graphs!

Apr 2, 2018 --

MongoDB Data at Rest Encryption Using eCryptFS

In this post, we’ll look at MongoDB data at rest encryption using eCryptFS, and how to deploy a MongoDB server using encrypted data files.

When dealing with data, a good security policy should enforce the use of non-trivial passwords, the use of encrypted connections and, hopefully, encrypted files on the disks.

Only the MongoDB Enterprise edition has an “engine encryption” feature. The Community edition and Percona Server for MongoDB don’t (yet). This is why I’m going to introduce a useful way to achieve data encryption at rest for MongoDB, using a simple but effective tool: eCryptFS.

eCryptFS is an enterprise-class stacked cryptographic filesystem for Linux. You can use it to encrypt partitions or even any folder that doesn’t use a partition of its own, no matter the underlying filesystem or partition type. For more information about this tool, visit the official website: http://ecryptfs.org/.

I’m using Ubuntu 16.04 and I have Percona Server for MongoDB already installed on the system. The data directory (dbpath) is in /var/lib/mongodb.

Preparation of the encrypted directory

First, let’s stop mongod if it’s running:

sudo service mongod stop

Install eCryptFS:

sudo apt-get install ecryptfs-utils

Create two new directories:

sudo mkdir /datastore
sudo mkdir /var/lib/mongo-encrypted

We’ll use the /datastore directory as the folder into which we copy all of mongod’s files and have them automatically encrypted. It’s also useful for testing later that everything is working correctly. The folder /var/lib/mongo-encrypted is the mount point we’ll use as the new data directory for mongod.

Mount the encrypted directory

Now it’s time to use eCryptFS to mount the /datastore folder and define it as encrypted. Launch the command as follows, choose a passphrase and respond to all the questions with the default proposed value. In a real case, choose the answers that best fit for you, and a complex passphrase:

root@psmdb1:~# sudo mount -t ecryptfs /datastore /var/lib/mongo-encrypted
Passphrase:
Select cipher:
1) aes: blocksize = 16; min keysize = 16; max keysize = 32
2) blowfish: blocksize = 8; min keysize = 16; max keysize = 56
3) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24
4) twofish: blocksize = 16; min keysize = 16; max keysize = 32
5) cast6: blocksize = 16; min keysize = 16; max keysize = 32
6) cast5: blocksize = 8; min keysize = 5; max keysize = 16
Selection [aes]:
Select key bytes:
1) 16
2) 32
3) 24
Selection [16]:
Enable plaintext passthrough (y/n) [n]:
Enable filename encryption (y/n) [n]:
Attempting to mount with the following options:
 ecryptfs_unlink_sigs
 ecryptfs_key_bytes=16
 ecryptfs_cipher=aes
 ecryptfs_sig=f946e4b85fd84010
Mounted eCryptfs

If you see Mounted eCryptfs as the last line, everything went well. Now you have the folder /datastore encrypted. Any file you create or copy into this folder is automatically encrypted by eCryptFS. Also, you have mounted the encrypted folder into the path /var/lib/mongo-encrypted.

For the sake of security, you can verify with the mount command that the directory is correctly mounted. You should see something similar to the following:

root@psmdb1:~# sudo mount | grep crypt
/datastore on /var/lib/mongo-encrypted type ecryptfs (rw,relatime,ecryptfs_sig=f946e4b85fd84010,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs)

Copy mongo files

sudo cp -r /var/lib/mongodb/* /var/lib/mongo-encrypted

We copy all the files from the existing mongod data directory into the new path.

Since we are working as root (or we used sudo -s at the beginning), we need to change the ownership of the files to the mongod user, the default user for the database server. Otherwise, mongod won’t start:

sudo chown -R mongod:mongod /var/lib/mongo-encrypted/

Modify mongo configuration

Before starting mongod, we have to change the configuration in /etc/mongod.conf to instruct the server to use the new folder. Change the line with dbpath as follows and save the file:

dbpath=/var/lib/mongo-encrypted
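
If your mongod.conf uses the YAML format instead of the older INI-style options shown above (my note, not from the original post), the equivalent setting is:

storage:
  dbPath: /var/lib/mongo-encrypted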

Launch mongod and verify

So, it’s time to start mongod, connect with the mongo shell and verify that it’s working as usual:

root@psmdb1:~# sudo service mongod start

The server works correctly and is unaware of the encrypted files because eCryptFS itself takes care of encryption and decryption activities at a lower level. There’s a little price to pay in terms of performance, as in every system that uses encryption, but we won’t worry about that since our first goal is security. In any case, eCryptFS has a small footprint.

Now, let’s verify the files directly.

Since the encrypted folder is mounted and automatically managed by eCryptFS, we can see the content of the files. Let’s have a look:

root@psmdb1:~# cat /var/lib/mongo-encrypted/mongod.lock
6965

But if we look at the same file in /datastore, we see weird characters:

root@psmdb1:~# cat /datastore/mongod.lock
?0???k?"3DUfw`?Pp?Ku?????b?_CONSOLE?F?_?@??[?'?b??^??fZ?7

As expected.

Make encrypted dbpath persistent

Finally, if you want to automatically mount the encrypted directory at startup, add the following line into /etc/fstab:

/datastore /var/lib/mongo-encrypted ecryptfs defaults 0 0

Create the file .ecryptfsrc in the /root directory with the following lines:

key=passphrase:passphrase_passwd_file=/root/passphrase.txt
ecryptfs_sig=f946e4b85fd84010
ecryptfs_cipher=aes
ecryptfs_key_bytes=16
ecryptfs_passthrough=n
ecryptfs_enable_filename_crypto=n

You can find the value of the variable ecryptfs_sig in the file /root/.ecryptfs/sig-cache.txt.

Create the file /root/passphrase.txt containing your secret passphrase. The format is as follows:

passphrase_passwd=mypassphrase

Now you can reboot the system and have the encrypted directory mounted at startup.

Tip: it is not a good idea to have a plain text file on your server with your passphrase. For a better level of security, you can place this file on a USB key (for example) that you mount at startup, or you can use some sort of wallet tool to protect your passphrase.

Conclusion

Security is more and more a “must have” that customers request of anyone managing their data. This how-to guide shows that achieving MongoDB data at rest encryption is not so complicated.
