Jul
17
2017
--

Percona Server for MongoDB 3.2.14-3.4 is Now Available


Percona announces the release of Percona Server for MongoDB 3.2.14-3.4 on July 17, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database that supports the MongoDB v3.2 protocol and drivers. It extends MongoDB with MongoRocks, Percona Memory Engine, and PerconaFT storage engine, as well as enterprise-grade features like External Authentication, Audit Logging, Profiling Rate Limiting, and Hot Backup at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: This release deprecates the PerconaFT storage engine. It will not be available in future releases.

This release is based on MongoDB 3.2.14 and includes the following additional changes:

New Features

Bugs Fixed

  • #PSMDB-67: Fixed mongod service status messages.

Percona Server for MongoDB 3.2.14-3.4 release notes are available in the official documentation.

Jul
12
2017
--

Gh-ost benchmark against pt-online-schema-change performance


In this blog post, I will run a gh-ost benchmark against the performance of pt-online-schema-change.

When gh-ost came out, I was very excited. As MySQL ROW replication became commonplace, you could use it to track changes instead of triggers. This practice is cleaner and safer compared to Percona Toolkit’s pt-online-schema-change. Since gh-ost doesn’t need triggers, I assumed it would generate lower overhead and work faster. I frequently called it “pt-online-schema-change on steroids” in my talks. Finally, I’ve found some time to check my theoretical claims with some benchmarks.

DISCLAIMER: These benchmarks correspond to one specific ALTER TABLE on a table with one specific structure, run on one specific hardware configuration. I have not set up a broad set of tests. If you have other results – please comment!

Benchmark Setup Details

  • pt-online-schema-change from Percona Toolkit 3.0.3
  • gh-ost 1.0.36
  • Percona Server 5.7.18 on Ubuntu 16.04 LTS
  • Hardware: 28 CPU cores / 56 threads, 128GB memory, Samsung 960 Pro 512GB SSD
  • Sysbench 1.0.7

Prepare the table by running:

sysbench --threads=40 --rate=0 --report-interval=1 --percentile=99 --events=0 --time=0 --db-ps-mode=auto --mysql-user=sbtest --mysql-password=sbtest  /usr/share/sysbench/oltp_read_write.lua --table_size=10000000 prepare

The table size is about 3GB, fitting completely in the InnoDB buffer pool.
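For reference (not part of the original benchmark), one way to verify that the table fits in the buffer pool is to compare its size from information_schema against innodb_buffer_pool_size:

mysql> SELECT ROUND((data_length + index_length)/1024/1024/1024, 2) AS size_gb FROM information_schema.tables WHERE table_schema='sbtest' AND table_name='sbtest1';
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';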

Run the benchmark in “full ACID” mode with:

  • sync_binlog=1
  • innodb_flush_log_at_trx_commit=1
  • innodb_doublewrite=1

This is important as this workload is heavily commit-bound, and extensively relies on group commit.
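In my.cnf terms, these settings correspond to a snippet like this (a sketch showing only the three durability settings; the rest of the server configuration is omitted):

[mysqld]
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
innodb_doublewrite = 1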

This is the pt-online-schema-change command to alter table:

time pt-online-schema-change --execute --alter "ADD COLUMN c1 INT" D=sbtest,t=sbtest1

This is the gh-ost command to alter table:

time ./gh-ost  --user="sbtest" --password="sbtest" --host=localhost --allow-on-master --database="sbtest" --table="sbtest1"  --alter="ADD COLUMN c1 INT" --execute

Test Details

For each test, the old sysbench table was dropped and a new one prepared. I tested the alter table in three different cases:

  • When nothing else was running (“Idle Load”)
  • When the system handled about 4% of the load it can handle at full capacity (“Light Background Load”)
  • When the system handled about 40% of the possible load, with sysbench injecting about 1000 transactions/sec (“Heavy Background Load”)

I measured the alter table completion times for all cases, as well as the overhead generated by the alter (in other words, how much peak throughput is reduced by running alter table through the tools).

Idle Load

[Chart: gh-ost benchmark 1 – alter table completion time, Idle Load]

For the Idle Load test, pt-online-schema-change completed nearly twice as fast as gh-ost. This was a big surprise for me. I haven’t looked into the reasons or details yet, though I can see most of the CPU usage for gh-ost is on the MySQL server side. Perhaps the differences relate to the SQL used to perform non-blocking alter tables.

Light Background Load

I generated the Light Background Load by running the sysbench command below. It corresponds to roughly a 4% load, as the system can handle some 2500 transactions/sec at this concurrency under full load. Adjust the --rate value to scale it for your system.

time sysbench --threads=40 --rate=100 --report-interval=1 --percentile=99 --events=0 --time=0 --db-ps-mode=auto --mysql-user=sbtest --mysql-password=sbtest  /usr/share/sysbench/oltp_read_write.lua --table_size=10000000 run

[Chart: gh-ost benchmark 2 – alter table completion time, Light Background Load]

The numbers changed (as expected), but pt-online-schema-change is still approximately twice as fast as gh-ost.

What is really interesting in this case is how a relatively light background load affects the process completion time. It took both pt-online-schema-change and gh-ost about 2.7 times longer to finish!

Heavy Background Load

I generated the Heavy Background Load by running the sysbench command below. It corresponds to roughly a 40% load, as the system can handle some 2500 transactions/sec at this concurrency under full load. Adjust the --rate value to scale it for your system.

time sysbench --threads=40 --rate=1000 --report-interval=1 --percentile=99 --events=0 --time=0 --db-ps-mode=auto --mysql-user=sbtest --mysql-password=sbtest  /usr/share/sysbench/oltp_read_write.lua --table_size=10000000 run

[Chart: gh-ost benchmark 3 – alter table completion time, Heavy Background Load]

What happened in this case? When the load gets higher, gh-ost can’t keep up with binary log processing, and just never finishes at all. While this may be surprising at first, it makes sense if you think more about how these tools work. pt-online-schema-change uses triggers, and while they have a lot of limitations and overhead they can execute in parallel. gh-ost, on the other hand, processes the binary log in a single thread and might not be able to keep up.   

MySQL 5.6 did not have parallel replication that can apply writes to the same table in parallel. For that version, the gh-ost limitation probably isn’t as big a deal, as such a heavy load would also cause replication lag. MySQL 5.7 has logical-clock parallel replication, which makes it much easier to replicate workloads that are too heavy for gh-ost to handle.
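For reference, this is roughly how a MySQL 5.7 replica would be configured for logical-clock parallel replication (a sketch; the worker count is illustrative and should be tuned per workload):

slave_parallel_type = LOGICAL_CLOCK
slave_parallel_workers = 8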

I should note that the workload being simulated in this benchmark is a rather extreme case. The table being altered by gh-ost here is at the same time handling a background load so high it can’t be replicated in a single thread.

Future versions of gh-ost could improve this issue by applying binlog events in parallel, similar to what MySQL replicas do.

An excerpt from the gh-ost log shows how it is totally backed up trying to apply the binary log:

root@rocky:/tmp# time ./gh-ost  --user="sbtest" --password="sbtest" --host=localhost --allow-on-master --database="sbtest" --table="sbtest1"  --alter="ADD COLUMN c1 INT" --execute
2017/06/25 19:16:05 binlogsyncer.go:75: [info] create BinlogSyncer with config &{99999 mysql localhost 3306 sbtest sbtest  false false <nil>}
2017/06/25 19:16:05 binlogsyncer.go:241: [info] begin to sync binlog from position (rocky-bin.000018, 640881773)
2017/06/25 19:16:05 binlogsyncer.go:134: [info] register slave for master server localhost:3306
2017/06/25 19:16:05 binlogsyncer.go:568: [info] rotate to (rocky-bin.000018, 640881773)
2017-06-25 19:16:05 ERROR parsing time "" as "2006-01-02T15:04:05.999999999Z07:00": cannot parse "" as "2006"
# Migrating `sbtest`.`sbtest1`; Ghost table is `sbtest`.`_sbtest1_gho`
# Migrating rocky:3306; inspecting rocky:3306; executing on rocky
# Migration started at Sun Jun 25 19:16:05 -0400 2017
# chunk-size: 1000; max-lag-millis: 1500ms; max-load: ; critical-load: ; nice-ratio: 0.000000
# throttle-additional-flag-file: /tmp/gh-ost.throttle
# Serving on unix socket: /tmp/gh-ost.sbtest.sbtest1.sock
Copy: 0/9872432 0.0%; Applied: 0; Backlog: 0/100; Time: 0s(total), 0s(copy); streamer: rocky-bin.000018:641578191; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 0; Backlog: 100/100; Time: 1s(total), 1s(copy); streamer: rocky-bin.000018:641626699; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 640; Backlog: 100/100; Time: 2s(total), 2s(copy); streamer: rocky-bin.000018:641896215; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 1310; Backlog: 100/100; Time: 3s(total), 3s(copy); streamer: rocky-bin.000018:642178659; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 1920; Backlog: 100/100; Time: 4s(total), 4s(copy); streamer: rocky-bin.000018:642436043; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 2600; Backlog: 100/100; Time: 5s(total), 5s(copy); streamer: rocky-bin.000018:642722777; State:
...
Copy: 0/9872432 0.0%; Applied: 120240; Backlog: 100/100; Time: 3m0s(total), 3m0s(copy); streamer: rocky-bin.000018:694142377; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 140330; Backlog: 100/100; Time: 3m30s(total), 3m30s(copy); streamer: rocky-bin.000018:702948219; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 160450; Backlog: 100/100; Time: 4m0s(total), 4m0s(copy); streamer: rocky-bin.000018:711775662; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 180600; Backlog: 100/100; Time: 4m30s(total), 4m30s(copy); streamer: rocky-bin.000018:720626338; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 200770; Backlog: 100/100; Time: 5m0s(total), 5m0s(copy); streamer: rocky-bin.000018:729509960; State: migrating; ETA: N/A

Online Schema Change Performance Impact

For this test I started the alter table, waited 60 seconds and then ran sysbench at full speed for five minutes. Then I measured how much the performance was impacted by running the tool:

sysbench --threads=40 --rate=0 --report-interval=1 --percentile=99 --events=0 --time=300 --db-ps-mode=auto --mysql-user=sbtest --mysql-password=sbtest  /usr/share/sysbench/oltp_read_write.lua --table_size=10000000 run

[Chart: gh-ost benchmark 4 – sysbench throughput impact while the alter table runs]

As we can see, gh-ost has negligible overhead in this case. pt-online-schema-change, on the other hand, saw performance reduced by 12%. It is worth noting, though, that pt-online-schema-change still makes progress in this case (though slowly), while gh-ost would never complete.

If anything, I was surprised at how little impact the pt-online-schema-change run had on sysbench performance.

It’s important to note that in this case we only measured the overhead for the “copy” stage of the online schema change. Another thing you should worry about is the impact on performance during “table rotation” (which I have not measured).

Summary

While gh-ost introduces a number of design advantages and gives better results in some situations, I wouldn’t call it always superior to the tried-and-true pt-online-schema-change. At least in some cases, pt-online-schema-change offers better performance than gh-ost and completes a schema change when gh-ost is unable to keep up. Consider trying out both tools to see which works best in your situation.

Jul
06
2017
--

ClickHouse: One Year!


In this blog, we’ll look at ClickHouse on its one-year anniversary.

It’s been a year already since the Yandex team released ClickHouse as open source software. I’ve had an interest in this project from the very start, as I didn’t think there was an open source analytical database that could compete with industry leaders like Vertica (for example).

This was an exciting year for ClickHouse early adopters. Let’s look at what the project has accomplished so far.

ClickHouse initially generated interest due to the Yandex name – the most popular search engine in Russia. It wasn’t long before jaw-dropping responses popped up: guys, this thing is crazy fast! Many early adopters who tried ClickHouse were really impressed.

Fast doesn’t mean convenient, though. That was the main community concern with ClickHouse over the past months. Developed as an internal project for an internal customer (Yandex.Metrica), ClickHouse had a lot of limitations for general community use. It took several months before Yandex could restructure the team and mindset, establish proper communication with the community, and start addressing external needs. There are still a lot of things that need to be done. The public roadmap is not easily available, for example, but the wheels are pointed in the right direction. The ClickHouse team has added a lot of the features people were screaming for, and more are in ClickHouse’s future plans.

The Yandex developers are actively attending and speaking at international conferences, and they speak even more frequently in Russia (no big surprise).

We were very excited by Yandex’s ClickHouse performance claims at Percona, and could not resist running our own benchmarks.

ClickHouse did very well in these benchmarks. There are many other benchmarks by other groups as well, including a benchmark against Amazon Redshift by Altinity.

The first ClickHouse production deployments outside of Yandex started in October-November 2016. Today, Yandex reports that dozens of companies around the world are using ClickHouse in production, with the biggest installations operating with up to several petabytes of data. Hundreds of other enterprises are deploying pilot installations or actively evaluating the software.

There are also interesting reports from Cloudflare (How Cloudflare analyzes 1M DNS queries per second) and from Carto (Geospatial processing with ClickHouse).

There are also various community projects around ClickHouse worth mentioning.

Percona is also working to adapt ClickHouse to our projects. We are using ClickHouse to handle Query Analytics and as a long-term data store for Metrics inside a new version (under development) of Percona Monitoring and Management.

I will also be speaking about ClickHouse at BIG DATA DAY LA 2017 on August 5th, 2017. You are welcome to attend if you are in Los Angeles that day!

ClickHouse has the potential to become one of the leading open source analytical DBMSs – much like MySQL and PostgreSQL are leaders for OLTP workloads. We will see in the next couple of years whether that happens. Congratulations to the Yandex team on their one-year milestone!

Jun
30
2017
--

Industry Recognition of Percona: Great Products, Great Team


Percona’s focus on customer success pushes us to deliver the very best products, services and support. Three recent industry awards reflect our continued success in achieving these goals.

We recently garnered a Bronze Stevie Award for “Company of the Year – Computer Software – Medium.” Stevies are some of the most coveted awards in the tech industry. Each year, more than 10,000 organizations in more than 60 nations compete for Stevies. It’s a huge honor to come away with a win.

Percona was also included in Database Trends and Applications’ (DBTA) annual DBTA 100 list of the companies that matter most in data. The directory was compiled by the editorial staff of DBTA and spotlights the companies that deal with evolving market demands through innovation in software, services, and hardware.

Zippia, a new website dedicated to helping recent grads with their career choices, ranked Percona in the top 10 of the “20 Best Companies To Work For In Raleigh, NC.” To compile the ranking, Zippia chose companies in a 25-mile radius that consistently get high reviews from career resource websites.

Finally, Percona co-founder and CEO Peter Zaitsev was honored as one of the Top Rated CEOs in Raleigh, North Carolina, by Owler. Owler looked at 167,000 leaders across 50 cities and 25 industries worldwide and identified the top 1,000. This is a huge honor for Peter!

Driven by Peter’s vision, all of us at Percona remain committed to building an excellent organization that is the champion of unbiased open source database solutions. Our continued success, growth and recognition from organizations like the above lead us to believe that the best is yet to come. Stay tuned!

Jun
29
2017
--

How Does Percona Software Compare: the Value of Percona Software and Services


In this blog post, I’ll discuss my experience as a Solutions Engineer in explaining the value of Percona software and services to customers.

The inspiration for this blog post was a recent conversation with a prospective client. They were exploring open source software solutions and professional services for the first time. This is a fairly common and recurring conversation with individuals and enterprises exploring open source for the first time. They generally find Percona through recommendations from colleagues, or organic Google searches. One of the most common questions Percona gets when engaging at this level is, “How does Percona’s software compare with << Insert Closed-source Enterprise DB Here >>?”

We’re glad you asked. Percona’s position on this question is to explain the value of Percona in the open source space. Percona does not push a particular flavor of open source software. We recommend an appropriate solution given each of our client’s unique business and application requirements. A client coming from this space is likely to have a lengthy list of compliance, performance and scalability requirements. We love these conversations. Our focus is always on providing our customers with the best services, support, and open-source technology solutions possible.

If there were a silver bullet for every database, Percona would not have been as successful over the past 10 (almost 11!) years. We know that no single option – MongoDB, MariaDB, MySQL Enterprise, MySQL Community, Percona Server for MySQL, etc. – is the right answer 100% of the time. Our competitors in this space will most likely disagree with this assessment – because they have an incentive to sell their own software. The quality of all of these offerings is extremely high, and our customers use each of them to great success. Using the wrong database software for a particular use case is like using a butter knife to cut a steak. You can probably make it work, but it will be a frustrating and unrewarding experience.

Usually, there are more efficient alternatives.

There is a better question to ask instead of software vs. software, which quickly turns into an apples vs. oranges conversation: “Why should we partner with Percona as our business adopts more open-source software?”

Percona is invested in ensuring you are making the right decisions based on your needs. Choosing the wrong database solution or architecture can be catastrophic. A recent Ponemon study (https://planetaklimata.com.ua/instr/Liebert_Hiross/Cost_of_Data_Center_Outages_2016_Eng.pdf) estimates the average cost of downtime can be up to $8,800 per minute. Not only is downtime costly, but re-architecting your environment due to an errant choice in database solutions can be costly as well. Uber experienced these pains when using Postgres as their database solution and wrote an interesting blog post about their decision to switch back to MySQL (https://eng.uber.com/mysql-migration/). Postgres has, in turn, responded to Uber recently (http://thebuild.com/presentations/uber-perconalive-2017.pdf). The point of bringing up this high-profile switch is not to take sides, but to highlight the breadth of knowledge necessary to navigate the open source database environment. Given its complexity, the necessity of a trusted business partner with unbiased recommendations should be apparent.

This is where Percona’s experience and expertise in the open source database space shine. Percona’s only investors are its customers, and our only customers are those using our services. Because of this, our core values are intertwined with customer success and satisfaction. This strategy trickles down through support, to professional services, and all the way to the engineering team, where efforts are focused on developing software and features that provide the most value to customers as a whole.

Focusing on a business partner in the open source space is a much more important and worthwhile effort than focusing on a software provider. Percona has hands-on, practical experience with a varied array of open-source technologies in the MySQL, MariaDB, and MongoDB ecosystems. We’ve tracked our ability to improve the efficiency of our customers’ databases, and see a 30% efficiency improvement on average, with some cases seeing a 10x boost in performance (https://www.percona.com/blog/2016/07/20/value-database-support/). Open source software has proven itself time and again to be the choice of billion-dollar industries. Again, there is not one universal choice among companies. There are success stories that run the gamut of open source software solutions.

To summarize, there’s a reason we redirect the software vs. software question. It’s not because we don’t like talking about how great our software is (it is great, and holds its own with other open source offerings). It’s because the question does not highlight Percona’s value. Percona doesn’t profit from its software. Supporting the open-source ecosystem is our goal. We are heavily invested in providing our customers with the right open source software solution for their situation, regardless of whether it is ours or not. Our experience and expertise in this space make us the ideal partner for businesses adopting more open source technology in place of enterprise-licensed models.

Jun
16
2017
--

Peter Zaitsev’s Speaking Schedule: Percona University Belgium / PG Day / Meetups


This blog shows Peter Zaitsev’s speaking schedule for this summer.

Summer 2017 Speaking Engagements

This week I spoke at the DB Tech Showcase OSS conference in Japan and am now heading to Europe. I have a busy schedule in June and early July, but there are events and places where we can cross paths and have a quick conversation. Let’s meet at these events if you need anything from Percona (or me personally). 

Below is a full list of places I’ll be at this summer:

Amsterdam, Netherlands

On June 20 I am speaking at the In-Memory Computing Summit 2017 with Denis Magda (Product Manager, Gridgain Systems). Our talk “Accelerate MySQL® for Demanding OLAP and OLTP Use Cases with Apache® Ignite™” starts at 2:35 pm.

On the same day in Amsterdam, Denis and I will speak at the local MySQL User Group meetup. I will share some how-tos for MySQL monitoring with Percona Monitoring and Management (PMM), along with a PMM demo.

Ghent, Belgium

On June 22 we are organizing a Percona University event in Ghent, Belgium, which is a widely known tech hub in the region. I will give several talks there on MySQL, MongoDB and PMM monitoring. Dimitri Vanoverbeke from Percona will discuss MySQL in the Cloud. We have also invited guest speakers: Frederic Descamps from Oracle, and Julien Pivotto from Inuits.

Percona University technical events are 100% free to attend, and so far we are getting very positive attendee feedback on them. To check the full agenda for the Belgium edition, and to register, please use this link.

St. Petersburg, Russia

Percona is sponsoring PG Day’17 Russia, the PostgreSQL conference. This year they added a track on open source databases (and I was happy to be their Committee member for the OSDB track). The conference starts on July 5, and on that day I will give a tutorial on InnoDB Architecture and Performance Optimization. Sveta Smirnova will also present a tutorial on MySQL Performance Troubleshooting.

On July 6-7, you can expect four more talks given by Perconians at PG Day. We invite you to stop by our booth (“Percona”) and ask us any tough questions you might have.

Moscow, Russia

On July 11 I will speak at a Moscow MySQL User Group meetup at the Mail.Ru Group office. While we’re still locking down the agenda, we always have a great selection of speakers at the MMUG meetups. Make sure you don’t miss this gathering!

Thank you, and I hope to see many of you at these events.

Jun
06
2017
--

MySQL Encryption at Rest – Part 1 (LUKS)


In this first of a series of blog posts, we’ll look at MySQL encryption at rest.

At Percona, we work with a number of clients that require strong security measures for PCI, HIPAA and PHI compliance, where data managed by MySQL needs to be encrypted “at rest.” As with all things open source, there are several options for meeting the MySQL encryption at rest requirement. In this three-part series, we cover several popular options for encrypting data and present the various pros and cons of each solution. You may want to evaluate which parts of these tutorials work best for your situation before using them in production.

Part one of this series is implementing disk-level encryption using crypt+LUKS.

In MySQL 5.7, InnoDB has built-in encryption features. This solution has some cons, however. Specifically, InnoDB tablespace encryption doesn’t cover undo logs, redo logs or the main ibdata1 tablespace. Additionally, binary-logs and slow-query-logs are not covered under InnoDB encryption.
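For comparison, enabling InnoDB tablespace encryption in MySQL 5.7 looks roughly like this (a sketch; Part Two covers it in detail, and the table definition here is purely illustrative):

# my.cnf – the keyring plugin must load before InnoDB
early-plugin-load = keyring_file.so
keyring_file_data = /var/lib/mysql-keyring/keyring

mysql> CREATE TABLE t1 (id INT PRIMARY KEY) ENCRYPTION='Y';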

Using crypt+LUKS, we can encrypt everything (data + logs) under one umbrella – provided that all files reside on the same disk. If you separate the various logs onto different partitions, you will have to repeat the tutorial below for each partition.

LUKS Tutorial

The Linux Unified Key Setup (LUKS) is the current standard for disk encryption. In the examples below, the block device /dev/sda4 on CentOS 7 is encrypted using a generated key, and then mounted as the default MySQL data directory at /var/lib/mysql.

WARNING! Loss of the key means complete loss of data! Be sure to have a backup of the key.

Install the necessary utilities:

# yum install cryptsetup

Creating, Formatting and Mounting an Encrypted Disk

The cryptsetup command initializes the volume and sets an initial key/passphrase. Please note that the key is not recoverable, so do not forget it. Take the time now to decide where you will securely store a copy of this key. LastPass Secure Notes are a good option, as they allow file attachments. This enhances our backup later on.

Create a passphrase for encryption. Choose something with high entropy (i.e., lots of randomness). Here are two options (pick one; note that md5 in the second example is the BSD utility – on most Linux distributions, substitute md5sum):

# openssl rand -base64 32
# date | md5 | rev | head -c 24 | md5 | tail -c 32

Next, we need to initialize and format our partition for use with LUKS. Any mounted points using this block device must be unmounted beforehand.

WARNING! This command will delete ALL DATA ON THE DEVICE! BE SURE TO COMPLETE ANY BACKUPS BEFORE YOU RUN THIS!

# cryptsetup -c aes-xts-plain -v luksFormat /dev/sda4

You will be prompted for a passphrase. Provide the phrase you generated above. After you provide a passphrase, you now need to “open” the encrypted disk and provide a device mapper name (i.e., an alias). It can be anything, but for our purposes, we will call it “mysqldata”:

# cryptsetup luksOpen /dev/sda4 mysqldata

You will be prompted for the passphrase you used above. On success, you should see the device show up:

# ls /dev/mapper/
lrwxrwxrwx  1 root root      7 Jun  2 11:50 mysqldata -> ../dm-0

You can now format this encrypted block device and create a filesystem:

# mkfs.ext4 /dev/mapper/mysqldata

Now you can mount the encrypted block device you just formatted:

# mount /dev/mapper/mysqldata /var/lib/mysql

Unfortunately, you cannot add this device to /etc/fstab to automount on a server reboot, since the key is needed to “open” the device. Please keep in mind that if your server ever reboots, MySQL will not start, since the data directory is unavailable until the device is opened and mounted (we will look at how to make this work using scripts in Part Two of this series).
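Until then, the manual steps after a reboot look like this (a sketch using the commands from this tutorial; the MySQL service name may differ on your system):

# cryptsetup luksOpen /dev/sda4 mysqldata
# mount /dev/mapper/mysqldata /var/lib/mysql
# systemctl start mysqld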

Creating a Backup of Encryption Information

The header of a LUKS block device contains information regarding the current encryption key(s). Should this ever get damaged, or if you need to recover because you forgot the new passphrase, you can restore this header information:

# cryptsetup luksHeaderBackup --header-backup-file ${HOSTNAME}_`date +%Y%m%d`_header.dat /dev/sda4

Go ahead and make a SHA1 of this file now to verify that it doesn’t get corrupted later on in storage:

# sha1sum ${HOSTNAME}_`date +%Y%m%d`_header.dat

GZip the header file. Store the SHA1 and the .gz file in a secure location (for example, attach it to the secure note created above). Now you have a backup of the key you used and a backup of the header which uses that key.
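Should the header ever need to be recovered, the matching restore command looks like this (shown with a placeholder file name; note that it overwrites the device’s current header, so verify the backup file first):

# cryptsetup luksHeaderRestore /dev/sda4 --header-backup-file <your_header_backup>.dat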

Unmounting and Closing a Disk

If you know you will be storing a disk, or just want to make sure the contents are not visible (i.e., mounted), you can unmount and “close” the encrypted device:

# umount /var/lib/mysql/
# cryptsetup luksClose mysqldata

In order to mount this device again, you must “open” it and provide one of the keys.

Rotating Keys (Adding / Removing Keys)

Various compliance and enforcement rules dictate how often you need to rotate keys. You cannot rotate or change a key in place. LUKS supports up to eight keys per device. You must first add a new key to any slot (other than the slot currently occupied by the key you are trying to remove), and then remove the older key.

Take a look at the existing header information:

# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4
Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha1
Payload offset: 4096
MK bits: 256
MK digest: 81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt: ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations: 44500
UUID: 92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6
Key Slot 0: ENABLED
Iterations: 181059
Salt: 9c a9 f6 12 d2 a4 2a 3d a4 08 b2 32 b0 b4 20 3b
69 13 8d 36 99 47 42 9c d5 41 35 8c b3 d0 ff 0e
Key material offset: 8
AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Here we can see a key is currently occupying “Key Slot 0”. We can add a key to any DISABLED key slot. Let’s use slot #1:

# cryptsetup luksAddKey --key-slot 1 -v /dev/sda4
Enter any passphrase:
Key slot 0 unlocked.
Enter new passphrase for key slot:
Verify passphrase:
Command successful.

LUKS asks for “any” passphrase to authenticate us. Had there been keys in other slots, we could have used any one of them. As only one is currently saved, we have to use it. We can then add a new passphrase for slot 1.

Now that we have saved the new key in slot 1, we can remove the key in slot 0.

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
No key available with this passphrase.

In the example above, the existing passphrase stored in slot 0 was used. This is not allowed. You cannot provide the passphrase for the same slot you are attempting to remove.

Repeat this command and provide the passphrase for slot 1, which was added above. We are now able to remove the passphrase stored in slot 0:

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4
Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha1
Payload offset: 4096
MK bits: 256
MK digest: 81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt: ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations: 44500
UUID: 92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6
Key Slot 0: DISABLED
Key Slot 1: ENABLED
Iterations: 229712
Salt: 5d 71 b2 3a 58 d7 f8 6a 36 4f 32 d1 23 1a df df
cd 2b 68 ee 18 f7 90 cf 58 32 37 b9 02 e1 42 d6
Key material offset: 264
AF stripes: 4000
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

After you change the passphrase, it’s a good idea to repeat the header dump steps we performed above and store the new passphrase in your vault.

Conclusion

Congratulations, you have now learned how to encrypt and mount a partition using LUKS! You can now use this mounted device just like any other. You can also restore a backup and start MySQL.

In Part Two, we will cover using InnoDB tablespace encryption.

Jun
05
2017
--

Webinar June 7, 2017: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling


Join Percona’s CEO and Founder Peter Zaitsev as he presents MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling on Wednesday, June 7, 2017, at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Businesses are moving many of the systems and processes they once owned to offsite “service” models: Platform as a Service (PaaS), Software as a Service (SaaS), Infrastructure as a Service (IaaS), etc. These services are usually referred to as being “in the cloud” – meaning that the infrastructure and management of the service in question are not maintained by the enterprise using the service.

When it comes to database environments and infrastructure, more and more enterprises are moving to MySQL in the cloud to manage this vital part of their business organization. We often refer to database services provided in the cloud as Database as a Service (DBaaS). The next question after deciding to move your database to the cloud is “How do I plan properly so as to avoid a disaster?”

Before moving to the cloud, it is important to carefully define your database needs, plan for the migration and understand what putting a solution into production entails. This webinar discusses the following subjects on moving to the cloud:

  • Public and private cloud
  • Migration to the cloud
  • Best practices
  • High availability
  • Scaling

Register for the webinar here.

Peter Zaitsev, Percona CEO and Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone often tap Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

May
24
2017
--

Percona Software and Roadmap Update with CEO Peter Zaitsev: Q2 2017


This blog post is a summary of the Percona Software and Roadmap Update – Q2 2017 webinar given by Peter Zaitsev on May 4, 2017. This webinar reflects changes and updates since the last update (Q1 2017).

A full recording of this webinar, along with the presentation slide deck, can be found here.

Percona Software

Below are the latest and upcoming features in Percona’s software. All of Percona’s software is 100% free and open source, with no restricted “Enterprise” version. Percona doesn’t restrict users with open core or “open source, eventually” (BSL) licenses.

Percona Server for MySQL 5.7

Latest Improvements

Features About To Be Released 

  • Integration of TokuDB and Performance Schema
  • MyRocks integration in Percona Server
  • Starting to look towards MySQL 8

Percona XtraBackup 2.4

Latest Improvements

Percona Toolkit

Latest Improvements

Percona Server for MongoDB 3.4

Latest Improvements

Percona XtraDB Cluster 5.7

Latest Improvements

Performance Improvement Benchmarks

Below, you can see the benchmarks for improvements to Percona XtraDB Cluster 5.7 performance. You can read about the improvements and benchmark tests in more detail here and here.

[Chart: Percona XtraDB Cluster 5.7 performance benchmark results]

Percona XtraDB Cluster 5.7 Integrated with ProxySQL 1.3

Percona Monitoring and Management

New in Percona Monitoring and Management

Advanced MariaDB Dashboards in PMM (Links go to PMM Demo)


Improved MongoDB Dashboards in PMM (Links go to PMM Demo)


Check out the PMM Demo

Thanks for tuning in to the Percona Software and Roadmap Update – Q2 2017.


May
15
2017
--

Percona Server for MongoDB 3.2.13-3.3 is Now Available


Percona announces the release of Percona Server for MongoDB 3.2.13-3.3 on May 15, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. It extends MongoDB with MongoRocks, Percona Memory Engine, and PerconaFT storage engine, as well as enterprise-grade features like External Authentication, Audit Logging, Profiling Rate Limiting, and Hot Backup at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: We deprecated the PerconaFT storage engine. It will not be available in future releases.

This release is based on MongoDB 3.2.13 and includes the following additional changes:

  • #PSMDB-127: Fixed cleanup of deleted documents and indexes for MongoRocks. When you upgrade to this release, deferred compaction may occur and cause database size to decrease significantly.
  • #PSMDB-133: Added the wiredTigerCheckpointSizeMB variable, set to 1000 in the configuration template for WiredTiger. Valid values are 32 to 2048 (2GB), with the latter being default.
  • #PSMDB-138: Implemented SERVER-23418 for MongoRocks.

Percona Server for MongoDB 3.2.13-3.3 release notes are available in the official documentation.
