Jun
12
2019
--

How to Report Bugs, Improvements, New Feature Requests for Percona Products

Clear and structured bug, improvement, and new feature request reports are always helpful for validating and fixing issues. In some cases, we receive reports with incomplete information, which can delay verification of the issue. The most effective way to avoid this is to include complete information about the issue when filing the report.

In this post we will discuss:

  • The best ways to report an issue for Percona products
  • Including a “how to reproduce” test case to verify the issue
  • The purpose of bug/improvement/new feature verification

https://jira.percona.com is the central place to report a bug/improvement/new feature request for all Percona products.

Let’s first discuss a few important entries which you should update when reporting an issue.

Project: The product name for which you wish to report an issue.

Issue Type: Provides options for the type of request. Example: Bug/Improvement/New Feature Request.

Summary: A one-line summary of the issue, which will serve as the title.

Affects Version/s: The version number of the Percona software for which you are reporting an issue.

Description: Describe your issue in detail here. The description should be clear and concise.

Bug report:

  • Describe the actual issue.
  • Add a test case/steps to reproduce the issue if possible (see the example sketch after this list).
  • For a crash bug, provide the my.cnf file and the error log as additional information, along with the details mentioned in this blog post.
  • In some cases, the supporting file for bug reports such as a coredump, sqldump, or error log is prohibitively large. For these cases, we have an SFTP server where these files can be uploaded.
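
As an illustration (not from the original guidelines), a minimal “how to reproduce” section could look like the sketch below. The table, data, and query are hypothetical; the point is to include the exact statements, the expected result, and the actual result so the issue can be replayed on demand.

-- Environment: Percona Server for MySQL 5.7.x (example), default my.cnf unless noted
CREATE TABLE t1 (id INT PRIMARY KEY, val VARCHAR(20)) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1, 'a'), (2, 'b');

-- Statement that triggers the problem (hypothetical)
SELECT id, val FROM t1 WHERE id IN (1, 2) ORDER BY val;

-- Expected result: two rows ordered by val
-- Actual result: <paste the wrong output, error message, or stack trace here>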

Documentation bug:

  • Provide the documentation link, describe what is wrong, and suggest how the documentation could be improved.

Improvement/New Feature Request:

  • For new features, describe the need and use case. Include answers to questions such as “What problem will it solve?” and “How will it benefit users?”
  • For improvements, describe what is problematic about the current behavior and what improvement you expect in the particular product feature.

Note: When reporting an issue, be sure to remove/replace sensitive information such as IP addresses, usernames, passwords, etc. from the report description and attached files.

Upstream Bugs:

Percona Server for MySQL and Percona Server for MongoDB are patched versions of their upstream codebases, so it is possible that a particular issue originated upstream. For these products, it is helpful if the reporter also checks whether the same issue exists upstream. If it does, use the following URLs to report it for the respective product.

MySQL: https://bugs.mysql.com
MongoDB: https://jira.mongodb.org

If you are a Percona customer, please file a support request to let us know how the bug affects you.

Purpose of Bug/Improvement/New Feature Request verification

  • Gather the required information from the reporter and identify whether the reported issue is a valid bug/improvement/new feature.
  • For bugs, create a reproducible test case. To be addressed effectively, a bug must be repeatable on demand.

Any incorrect assumptions can break other parts of the code while fixing a particular bug; this is why the verification process is important to identify the exact problem. Another benefit of having a reproducible test case is that it can then be used to verify the fix.

While feature requests and improvements are about ideas on how to improve Percona products, they still need to be verified. We need to ensure that behavior reported as a new feature or improvement:

  • Is not a bug
  • Is not implemented yet
  • For new feature verification, we also check whether there is an existing, different way to achieve the same result.

Once a bug, improvement, or new feature request is validated, its status is set to “Open” and it moves forward for implementation.

Apr
26
2018
--

The Evolution of the DBA in an “As-A-Service” World


The requirements for managing and running a database in a modern enterprise have evolved over the past ten years. Those in charge of running enterprise databases have seen their focus shift from ensuring access and availability to architecture, design and scalability responsibilities. Web-first companies pioneered the change by charging site reliability engineers (SREs) or multi-faceted DBAs with the task of ensuring that the company’s main revenue engine not only stayed up, but could scale to wherever the business needed to go. This is a far cry from the classic enterprise DBA’s top responsibilities: keep it up, keep it backed up, and react to issues as they present themselves.

Today, enterprises look for new revenue models to keep up with a shifting technology paradigm driven by the cloud. The requirements and needs for managing their database environments are changing along with this shift. In the SaaS world, application outages mean lost revenue. Worse, it leads to customer churn and gives your competitors an opening. To keep revenue flowing, every part of a SaaS company’s critical infrastructure needs to be planned out: redundancy should be built-in, and a future-proof architecture should be built to accommodate scale.

The more issues you can design out before launch, the less chance of a catastrophic outage later on. This means as a SaaS provider you want your DBAs and database engineers architecting a database that avoids problems at scale, and you want them working with your developers to write better, more efficient database calls. The database infrastructure is designed and automated to work at scale, while taking into account efficient use of resources for meeting today’s requirements.

When companies move to the cloud, the cloud provider takes care of much of the operational automation and many of the mundane day-to-day tasks (for example, using database as a service (DBaaS) options such as Amazon RDS and Aurora). But this does not eliminate the need for database expertise: it moves the function closer to the design and development side of the application. Someone needs to not only design and tune the database to support the application, but also understand how to build the modular pieces available in the cloud into a cohesive, scalable unit that meets the needs of the application and the company. This means there are much higher impacts and clearer ROIs realized from efficient database expertise.

Cloud DBA vs. Classic DBA

Over the years at Percona, we have seen this shift as well. Currently, more than 50% of the support tickets our customers open are related to application design issues, query performance or database infrastructure design. This is a far cry from five years ago when these represented less than 20% of our overall caseload. This makes sense, however, when you think about the maturity of our database products and the technological advances that impact the database. A more stable MySQL and MongoDB, coupled with advances in either homegrown automation or cloud-based infrastructure, reduce the likelihood of common crashing bugs and “Core Database Software” related bugs. Instead, outages and issues are increasingly caused by design decisions, bad code or unplanned-for edge cases. In order to keep up, DBAs need to evolve to move upstream to have the greatest impact.

At Percona, we recognize the changing requirements of modern database deployments. In fact, we have been providing database expertise since the beginning of the SaaS and cloud era. We recognize the needs of clients that choose to run on a DBaaS platform are slightly different than those managing their own full-stack database deployments.

That’s why we created a brand new tier of support focused on DBaaS platforms. These services allow you to rely on your cloud provider for operational break-fix support, while augmenting that with proven world-class expertise focused on the design, development, and tuning of the database itself (which cloud providers typically don’t address).

We also launched a DBaaS-focused version of our Percona DBA service. The Percona DBA service focuses on designing, setting up, and proactively improving your DBaaS cloud environment to ensure you get the most out of your investment. 

Contact us for more details on our new support and managed service options that can help optimize your cloud database environments, and make them run as efficiently as possible with the applications that drive your business.


Sep
26
2017
--

Pepper Turns to Percona to Ensure a Great Customer Experience at Pepper.com

Pepper.com, the world’s largest community deal platform, has selected Percona to manage its open source database performance.

Pepper.com’s around-the-clock community seeks and finds the best offers in fashion, electronics, traveling and much more. With 500 million page views, over 25 million users and over 65,000 user-submitted deals per month across communities in America, Europe and Asia, Pepper has quickly risen to be the largest community deal platform worldwide.

When Pepper.com’s primary MySQL database administrator left the company, Pepper decided to shift to a managed service to maintain uptime and responsiveness. Having previously attended Percona Live Europe, the premier European open source database conference, as well as being avid readers of the Percona Database Performance Blog, the Pepper team turned to Percona for open source database remote managed service expertise.

“Guaranteeing database performance is key to making sure our web applications are responsive and up-to-date,” said Pavel Genov, Head of Software Development at Pepper.com. “Percona Care Ultimate helps us to achieve these objectives.”

Pepper was already using Percona Server for MySQL. Following a Percona Database Performance Audit to review the Pepper.com environment, architecture and setup, Percona XtraBackup was deployed to provide online non-blocking, tightly compressed, highly secure backups.

Check out the case study on Pepper.com and Percona’s engagement to improve and manage Pepper’s database environment.

Jun
29
2017
--

How Does Percona Software Compare: the Value of Percona Software and Services

In this blog post, I’ll discuss my experience as a Solutions Engineer in explaining the value of Percona software and services to customers.

The inspiration for this blog post was a recent conversation with a prospective client. They were exploring open source software solutions and professional services for the first time. This is a fairly common and recurring conversation with individuals and enterprises exploring open source for the first time. They generally find Percona through recommendations from colleagues, or organic Google searches. One of the most common questions Percona gets when engaging at this level is, “How does Percona’s software compare with << Insert Closed-source Enterprise DB Here >>?”

We’re glad you asked. Percona’s position on this question is to explain the value of Percona in the open source space. Percona does not push a particular flavor of open source software. We recommend an appropriate solution given each of our client’s unique business and application requirements. A client coming from this space is likely to have a lengthy list of compliance, performance and scalability requirements. We love these conversations. Our focus is always on providing our customers with the best services, support, and open-source technology solutions possible.

If there were a silver bullet for every database, Percona would not have been as successful over the past 10 (almost 11!) years. We know that none of MongoDB, MariaDB, MySQL Enterprise, MySQL Community, Percona Server for MySQL, etc., is the right answer 100% of the time. Our competitors in this space will most likely disagree with this assessment – because they have an incentive to sell their own software. The quality of all of these offerings is extremely high, and our customers use each of them to great success. Using the wrong database software for a particular use case is like using a butter knife to cut a steak. You can probably make it work, but it will be a frustrating and unrewarding experience.

Usually, there are more efficient alternatives.

There is a better question to ask instead of software vs. software, which basically turns into an apples vs. oranges conversation. It’s: “Why should we partner with Percona as our business adopts more open-source software?” This is where Percona’s experience and expertise in the open-source database space shine.

Percona is invested in ensuring you are making the right decisions based on your needs. Choosing the wrong database solution or architecture can be catastrophic. A recent Ponemon study (https://planetaklimata.com.ua/instr/Liebert_Hiross/Cost_of_Data_Center_Outages_2016_Eng.pdf) estimates the average cost of downtime can be up to $8,800 per minute. Not only is downtime costly, but re-architecting your environment due to an errant choice in database solutions can be costly as well. Uber experienced these pains when using Postgres as their database solution and wrote an interesting blog post about their decision to switch back to MySQL (https://eng.uber.com/mysql-migration/). Postgres has, in turn, responded to Uber recently (http://thebuild.com/presentations/uber-perconalive-2017.pdf). The point of bringing up this high-profile switch is not to take sides, but to highlight the breadth of knowledge necessary to navigate the open source database environment. Given its complexity, the necessity of a trusted business partner with unbiased recommendations should be apparent.

Percona’s only investors are its customers, and our only customers are those using our services. Because of this, our core values are intertwined with customer success and satisfaction. This strategy trickles down through support, to professional services, and all the way to the engineering team, where efforts are focused on developing software and features that provide the most value to customers as a whole.

Focusing on a business partner in the open source space is a much more important and worthwhile effort than focusing on a software provider. Percona has hands-on, practical experience with a wide array of open-source technologies in the MySQL, MariaDB, and MongoDB ecosystems. We’ve tracked our ability to improve our customers’ database efficiency and see a 30% improvement on average, with some cases seeing a 10x boost in database performance (https://www.percona.com/blog/2016/07/20/value-database-support/). Open source software has proven itself time and again to be the choice of billion-dollar industries. Again, there is not one universal choice among companies. There are success stories that run the gamut of open source software solutions.

To summarize, there’s a reason we redirect the software vs. software question. It’s not because we don’t like talking about how great our software is (it is great, and holds its own with other open source offerings). It’s because the question does not highlight Percona’s value. Percona doesn’t profit from its software. Supporting the open-source ecosystem is our goal. We are heavily invested in providing our customers with the right open source software solution for their situation, regardless of whether it is ours or not. Our experience and expertise in this space make us the ideal partner for businesses adopting more open source technology in place of enterprise-licensed models.

May
01
2017
--

Webinar Thursday 5/4/2017: Percona Software News and Roadmap Update Q2 2017

Come and listen to Percona CEO Peter Zaitsev on Thursday, May 4, 2017 at 11:00 am (PST) / 2:00 pm (EST) as he discusses Percona’s software news and roadmap, including Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

During this webinar, Peter will talk about newly released features in Percona software, show a few quick demos and share with you highlights from the Percona open source software roadmap.

Peter will also talk about new developments in Percona commercial services, and finish with a Q&A.

You can register for the webinar here.

Peter Zaitsev, CEO of Percona

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High-Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High-Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Data Performance Blog. Fortune and DZone also tapped Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of percona.com’s most popular downloads.

Jan
10
2017
--

Webinar Thursday, January 12: Percona Software News and Roadmap Update for Q1 2017

Please join Percona CEO Peter Zaitsev for a webinar on Thursday, January 12, 2017 at 11 am PST / 2 pm EST (UTC-8) for a discussion on the Percona Software News and Roadmap Update for Q1 2017.

In this webinar, Peter will discuss what’s new in Percona open source software. This will include Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

During this webinar Peter will talk about newly released features in Percona software, show a few quick demos and share with you highlights from the Percona open source software roadmap.

Peter will also talk about new developments in Percona commercial services and finish with a Q&A.

Register for the Percona Software News and Roadmap Update webinar here.

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Data Performance Blog. He has also been tapped as a contributor to Fortune and DZone, and his recent ebook Practical MySQL Performance Optimization is one of Percona’s most popular downloads.

 

Dec
01
2016
--

Database Daily Ops Series: GTID Replication and Binary Logs Purge

This blog continues the ongoing series on daily operations and GTID replication.

In this blog, I’m going to investigate why the error below has been appearing in a particular environment I’ve been working with over the last few days:

Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log:
'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the
master has purged binary logs containing GTIDs that the slave requires.'

The error message is accurate and explains what is going on. But sometimes it can be a bit tricky to solve this issue: you need additional information, discovered after some tests and reading. We try to keep Managed Services scripted, in the sense that our advice and best practices are repeatable and consistent. However, some additional practices can be added depending on the customer’s situation.

Some time ago, one of our customers’ database servers presented the above message. At that point, we could see the binary log files in a compressed form on the master (gzipped). Of course, MySQL can’t identify a compressed file with a .gz extension as a binary log. We uncompressed the file, but replication still presented the same problem, even after we made sure the UUID of the current master and the transaction IDs were there. Clearly, I needed to investigate what was going on.

After some research, I re-read the following passage from the MySQL manual:

When the server starts, the global value of gtid_purged (previously called gtid_lost) is initialized to the set of GTIDs contained in the Previous_gtid_log_event of the oldest binary log. When a binary log is purged, gtid_purged is re-read from the binary log that has now become the oldest one.

=> https://dev.mysql.com/doc/refman/5.6/en/replication-options-gtids.html#sysvar_gtid_purged
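
As a quick way to see this in practice (not part of the original quote), you can inspect these values directly on the master with standard MySQL 5.6 statements:

mysql> SELECT @@GLOBAL.gtid_purged;   -- which GTIDs the server considers already purged
mysql> SELECT @@GLOBAL.gtid_executed; -- everything this server has executed
mysql> SHOW BINARY LOGS;              -- the binary logs still registered in the index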

That made me think: if something compresses binlogs on the master without purging them the way the GTID mechanism expects, the server won’t be able to re-read the existing GTIDs on disk. When the slave replication threads restart, or the DBA issues commands like RESET SLAVE and RESET MASTER (to clean out the accumulated GTID sets in Executed_Gtid_Set from SHOW SLAVE STATUS, for example), this error can occur. But if I compress the file:

  • Will the slave get lost and not find all the needed GTIDs on the master after a reset slave/reset master?
  • If I purge the logs correctly, using PURGE BINARY LOGS, will the slave be OK when restarting replication threads?

Test 1: Compressing the oldest binary log file on master, restarting slave threads

I would like to test this very methodically. We’ll create one GTID per binary log, and then I will compress the oldest binary log file in order to make it unavailable for the slaves. I’m working with three virtual machines, one master and two slaves. On the second slave, I’m going to run the following sequence: stop slave, reset slave, reset master, start slave, and then, check the results. Let’s see what happens.
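
Producing one GTID per binary log can be done with something along these lines (a sketch only; the binlog listings below show the CREATE DATABASE statements that were actually used):

tool01 [(none)]:> create database wb01;
tool01 [(none)]:> flush binary logs;   -- rotate to a new binary log file
tool01 [(none)]:> create database wb02;
tool01 [(none)]:> flush binary logs;
tool01 [(none)]:> create database wb03;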

On master (tool01):

tool01 [(none)]:> show master logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000001 |       341 |
| mysqld-bin.000002 |       381 |
| mysqld-bin.000003 |       333 |
+-------------------+-----------+
3 rows in set (0.00 sec)
tool01 [(none)]:> show binlog events in 'mysqld-bin.000001';
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysqld-bin.000001 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             |
| mysqld-bin.000001 | 120 | Previous_gtids |         1 |         151 |                                                                   |
| mysqld-bin.000001 | 151 | Gtid           |         1 |         199 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:1' |
| mysqld-bin.000001 | 199 | Query          |         1 |         293 | create database wb01                                              |
| mysqld-bin.000001 | 293 | Rotate         |         1 |         341 | mysqld-bin.000002;pos=4                                           |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
5 rows in set (0.00 sec)
tool01 [(none)]:> show binlog events in 'mysqld-bin.000002';
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysqld-bin.000002 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             |
| mysqld-bin.000002 | 120 | Previous_gtids |         1 |         191 | 4fbe2d57-5843-11e6-9268-0800274fb806:1                            |
| mysqld-bin.000002 | 191 | Gtid           |         1 |         239 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:2' |
| mysqld-bin.000002 | 239 | Query          |         1 |         333 | create database wb02                                              |
| mysqld-bin.000002 | 333 | Rotate         |         1 |         381 | mysqld-bin.000003;pos=4                                           |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
5 rows in set (0.00 sec)
tool01 [(none)]:> show binlog events in 'mysqld-bin.000003';
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysqld-bin.000003 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             |
| mysqld-bin.000003 | 120 | Previous_gtids |         1 |         191 | 4fbe2d57-5843-11e6-9268-0800274fb806:1-2                          |
| mysqld-bin.000003 | 191 | Gtid           |         1 |         239 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:3' |
| mysqld-bin.000003 | 239 | Query          |         1 |         333 | create database wb03                                              |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
4 rows in set (0.00 sec)

Here we see that each existing binary log file contains just one transaction. That makes it easy to compress the oldest binary log and thereby make part of the existing GTIDs unavailable. When the slave connects to a master, it first sends its Executed_Gtid_Set, and the master then sends all the missing IDs to the slave. As Stephane Combaudon explained, we will force that to happen! Both slave database servers are currently in the same position:

tool02 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000003
          Read_Master_Log_Pos: 333
               Relay_Log_File: mysqld-relay-bin.000006
                Relay_Log_Pos: 545
        Relay_Master_Log_File: mysqld-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            ...
           Retrieved_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            
tool03 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000003
          Read_Master_Log_Pos: 333
               Relay_Log_File: mysqld-relay-bin.000008
                Relay_Log_Pos: 451
        Relay_Master_Log_File: mysqld-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
           Retrieved_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3

Now, we’ll compress the oldest binary log on master:

[root@tool01 mysql]# ls -lh | grep mysqld-bin.
-rw-rw---- 1 mysql mysql  262 Nov 11 13:55 mysqld-bin.000001.gz #: this is the file containing 4fbe2d57-5843-11e6-9268-0800274fb806:1
-rw-rw---- 1 mysql mysql  381 Nov 11 13:55 mysqld-bin.000002
-rw-rw---- 1 mysql mysql  333 Nov 11 13:55 mysqld-bin.000003
-rw-rw---- 1 mysql mysql   60 Nov 11 13:55 mysqld-bin.index
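
For context, a housekeeping job along these lines (purely hypothetical) is the kind of thing that compresses binlogs behind MySQL’s back, leaving the file listed in the binary log index while making its GTIDs unreadable:

#!/bin/bash
# Hypothetical cron job: compress binary logs older than one day.
# This bypasses PURGE BINARY LOGS, so the file stays listed in mysqld-bin.index
# while MySQL can no longer read the GTIDs it contains.
find /var/lib/mysql -name 'mysqld-bin.[0-9]*' -mtime +1 -exec gzip {} \;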

On tool03, the database server we will use for this test, we will execute the replication reload:

tool03 [(none)]:> stop slave; reset slave; reset master; start slave;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.03 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
tool03 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File:
          Read_Master_Log_Pos: 4
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 4
        Relay_Master_Log_File:
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 151
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 4fbe2d57-5843-11e6-9268-0800274fb806
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp: 161111 14:47:13
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 1
1 row in set (0.00 sec)

Bingo! We broke the replication stream on the slave. Now we know that the missing GTID on the master was due to the compressed file, so it couldn’t be passed along to the connecting slave during their negotiation. Additionally, @@GTID_PURGED was not reloaded the way the online manual describes. The test is done, and we confirmed the theory (if you have additional comments, leave them at the end of the blog).

Test 2: Purge the oldest file on master and reload replication on slave

Let’s make it as straightforward as possible. The purge should be done with the PURGE BINARY LOGS command, because the binary log index file is part of the purge operation as well (the file name entry must be removed from the index together with the log file on disk). I’m going to execute the same sequence as before, but this time purge the file with that command.

tool01 [(none)]:> show master logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000001 |       341 |
| mysqld-bin.000002 |       381 |
| mysqld-bin.000003 |       333 |
+-------------------+-----------+
3 rows in set (0.00 sec)
tool01 [(none)]:> purge binary logs to 'mysqld-bin.000002';
Query OK, 0 rows affected (0.01 sec)
tool01 [(none)]:> show master logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000002 |       381 |
| mysqld-bin.000003 |       333 |
+-------------------+-----------+
2 rows in set (0.00 sec)

Now, we’ll execute the commands to check how it goes:

tool03 [(none)]:> stop slave; reset slave; reset master; start slave;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
tool03 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File:
          Read_Master_Log_Pos: 4
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 4
        Relay_Master_Log_File:
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 151
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 4fbe2d57-5843-11e6-9268-0800274fb806
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp: 161111 16:35:02
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 1
1 row in set (0.00 sec)

The GTID in the purged file is needed by the slave. In both cases, we can set @@GTID_PURGED to the transaction we know was purged, as below, and move forward with replication:

tool03 [(none)]:> stop slave; set global gtid_purged='4fbe2d57-5843-11e6-9268-0800274fb806:1';
Query OK, 0 rows affected, 1 warning (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
tool03 [(none)]:> start slave;
Query OK, 0 rows affected (0.01 sec)

The above adjusts @@GTID_PURGED so that the slave only requests GTIDs that still exist on the master: everything up to and including the purged transaction is marked as already applied, so the slave starts replication from the oldest existing GTID. In our scenario, the replica restarts replication from 4fbe2d57-5843-11e6-9268-0800274fb806:2, which lives in binary log file mysqld-bin.000002. Replication is fixed, as its threads can resume processing the data stream coming from the master.

You will need to execute additional checksum and sync steps for the set of transactions that were skipped when setting a new value for @@GTID_PURGED. If replication continues to break after restarting, I advise you to rebuild the slave (possibly the subject of a future blog).
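
For the checksum and sync step, one common approach (a sketch only; host names, credentials and database names are placeholders) uses pt-table-checksum and pt-table-sync from Percona Toolkit:

# Run on the master; checksums propagate to the slaves through replication.
pt-table-checksum h=tool01,u=checksum_user,p=secret \
  --databases=wb01,wb02,wb03 --replicate=percona.checksums

# Afterwards, inspect percona.checksums on the slave for differing chunks, and
# use pt-table-sync to repair them (review the --print output before applying).
pt-table-sync --replicate=percona.checksums h=tool01,u=checksum_user,p=secret --print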

Good explanations about this can be found in the bugs below, reported by the Facebook team and by Laurynas Biveinis, the Percona Server lead (who clarified the issue):

  • MySQL Bugs: #72635: Data inconsistencies when master has truncated binary log with GTID after crash;
  • MySQL Bugs: #73032: Setting gtid_purged may break auto_position and thus slaves;

Conclusion

Be careful when purging binary logs or doing anything to them manually, because @@GTID_PURGED needs to be updated automatically when binary logs are purged. That seems to happen reliably only when expire_logs_days is set to purge binary logs. Even then, be careful when trusting this variable: it doesn’t consider fractions of days, and depending on the number of writes on a database server, disks can fill up in minutes. This blog showed that both housekeeping scripts and the PURGE BINARY LOGS command were able to make it happen.
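
As a reference, binary log retention is normally configured in my.cnf; a minimal sketch (values are illustrative only, not recommendations) looks like this:

[mysqld]
# Let MySQL purge binary logs older than 7 days itself,
# which also keeps gtid_purged updated.
expire_logs_days = 7
# Rotate binary logs once they reach 256MB.
max_binlog_size  = 256M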

Nov
10
2016
--

Database Daily Ops Series: GTID Replication

This post discusses ways of fixing broken GTID replication.

This blog series is all about the daily stories we have in Managed Services, dealing with customers’ environments (mostly when we need to quickly restore a service level within the SLA time).

One of the issues we encounter daily is broken replication using the GTID protocol. While there are a lot of blogs written about this subject, I would like to highlight GTID replication operations and the ways you can deal with broken replication.

Most of the time we face scenarios far more complex than the one I’m about to present as an example, but the main goal of this blog is to quickly highlight the tools that can be used to fix issues and resume replication.

After reading this blog, you might ask yourself “Now, we know how to fix replication, but what about consistency?” The next blog will be entirely focused on that matter, data consistency!

Little less talk, little more action…

Replication is broken, and the SHOW SLAVE STATUS command output looks like below:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.12
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000005
          Read_Master_Log_Pos: 879
               Relay_Log_File: mysqld-relay-bin.000009
                Relay_Log_Pos: 736
        Relay_Master_Log_File: mysqld-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 1062
                   Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into wb.t1 set i=1'
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 634
              Relay_Log_Space: 1155
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 1062
               Last_SQL_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into wb.t1 set i=1'
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 3
                  Master_UUID: 46fdb7ad-5852-11e6-92c9-0800274fb806
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State:
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp: 161108 16:47:53
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-4,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-3,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3,
81a567a8-5852-11e6-92cb-0800274fb806:1
                Auto_Position: 1
1 row in set (0.00 sec)

When a slave configured to replicate using the GTID protocol breaks, pay attention to the SHOW SLAVE STATUS command output. You will find Retrieved_Gtid_Set and Executed_Gtid_Set among the listed columns. You can see that the last global transaction ID retrieved from the current master was not executed: it appears in Retrieved_Gtid_Set but not in Executed_Gtid_Set.

That means the slave has retrieved a transaction that, for some reason, it couldn’t execute. That’s the global transaction ID you need if you want to inject a fake transaction and get replication resumed. The fake transaction you inject takes the place of the one whose SQL cannot be executed due to the error shown in Last_Error from SHOW SLAVE STATUS.

Let’s analyze it:
#: replication is broken due to error 1062, when the primary key of a particular table is violated
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: ''. Query: 'insert into wb.t1 set i=1'
 
#: you can identify which global transaction ID is causing the problem and breaking the replication stream
           Retrieved_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-4,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-3,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3,
81a567a8-5852-11e6-92cb-0800274fb806:1

As shown above, we can see clearly that the transaction causing issues is global transaction number 4, coming from the master with UUID 46fdb7ad-5852-11e6-92c9-0800274fb806. You can use SHOW RELAYLOG EVENTS to verify that it is this transaction’s query that is causing problems:

mysql> show relaylog events in 'mysqld-relay-bin.000009' from 736\G
*************************** 1. row ***************************
   Log_name: mysqld-relay-bin.000009
        Pos: 736
 Event_type: Gtid
  Server_id: 3
End_log_pos: 682
       Info: SET @@SESSION.GTID_NEXT= '46fdb7ad-5852-11e6-92c9-0800274fb806:4'
*************************** 2. row ***************************
   Log_name: mysqld-relay-bin.000009
        Pos: 784
 Event_type: Query
  Server_id: 3
End_log_pos: 755
       Info: BEGIN
*************************** 3. row ***************************
   Log_name: mysqld-relay-bin.000009
        Pos: 857
 Event_type: Query
  Server_id: 3
End_log_pos: 848
       Info: insert into wb.t1 set i=1
*************************** 4. row ***************************
   Log_name: mysqld-relay-bin.000009
        Pos: 950
 Event_type: Xid
  Server_id: 3
End_log_pos: 879
       Info: COMMIT /* xid=66 */
4 rows in set (0.00 sec)

Before fixing and resuming the replication stream, we need to check why that INSERT query breaks replication. Let’s SELECT data and check the structure of table wb.t1:

mysql> select * from wb.t1;
+---+
| i |
+---+
| 1 |
+---+
1 row in set (0.00 sec)
 
mysql> show create table wb.t1;
+-------+-----------------------------------------------------+
| Table | Create Table                                                                                                       |
+-------+-----------------------------------------------------+
| t1    | CREATE TABLE `t1` (
  `i` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`i`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+-------+-----------------------------------------------------+
1 row in set (0.01 sec)

It’s clear that something went wrong somewhere other than the replication stream itself: the conflicting row already exists on the slave. It’s time to fix and resume replication, and catch up with the master’s data. The following tools can be used to fix broken GTID replication:

  • pt-slave-restart
  • mysqlslavetrx
  • inject a fake/empty transaction

pt-slave-restart

One of the easiest ways of resuming replication on slaves when replication is broken is using pt-slave-restart, which is part of Percona Toolkit. Once you find the above facts (mainly the master UUID of the problematic global transaction ID that broke slave replication), you can move forward using pt-slave-restart with the GTID flag --master-uuid. This passes the master’s UUID and skips all global transactions breaking replication on a specific slave server, as you can see below:

[root@dbops02 ~]# pt-slave-restart --master-uuid 46fdb7ad-5852-11e6-92c9-0800274fb806 --host=localhost -u root
2016-11-08T17:24:09 h=localhost,u=root mysqld-relay-bin.000009         736 1062
2016-11-08T17:24:25 h=localhost,u=root mysqld-relay-bin.000010         491 1062
2016-11-08T17:24:34 h=localhost,u=root mysqld-relay-bin.000010         736 1062
2016-11-08T17:24:35 h=localhost,u=root mysqld-relay-bin.000010         981 1062
2016-11-08T17:24:36 h=localhost,u=root mysqld-relay-bin.000010        1226 1062

With the resources provided by pt-slave-restart, together with the above info, replication should resume. If you don’t have the Percona Toolkit package set up on your servers, it’s easiest to add the Percona repository to your servers and then install the package with the package manager on Debian-based and RedHat-based systems, as sketched below.
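
For reference, once the Percona repository is configured, installation is typically just the package manager command below (repository setup itself varies by distribution, so treat this as a sketch):

# Debian/Ubuntu
sudo apt-get update && sudo apt-get install percona-toolkit

# RHEL/CentOS
sudo yum install percona-toolkit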

mysqlslavetrx

To use mysqlslavetrx (which is part of MySQL Utilities, developed by Oracle), I recommend you read the article written by Daniel Guzman, and install MySQL Utilities on your database servers. Using it to skip problematic transactions by injecting fake ones is pretty straightforward as well.

So, find the below on the slave side:

         Retrieved_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-13,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-8,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3,
81a567a8-5852-11e6-92cb-0800274fb806:1

Then use GTID_SUBTRACT, passing the set from Retrieved_Gtid_Set as the first parameter and the subset from Executed_Gtid_Set as the second. Use the UUID of the direct master in the function to find the global transaction IDs that were retrieved but not executed on the slave:

#: this function is pretty handy: it shows exactly which GTIDs the slave is lacking that the
#: master has (and vice-versa) - this helps when using mysqlslavetrx to bring
#: master and slave to a consistent view of the same binary log contents
mysql> SELECT GTID_SUBTRACT('46fdb7ad-5852-11e6-92c9-0800274fb806:1-13','46fdb7ad-5852-11e6-92c9-0800274fb806:1-8') gap;
*************************** 1. row ***************************
gap: 46fdb7ad-5852-11e6-92c9-0800274fb806:9-13
1 row in set (0.00 sec)

Now we can use mysqlslavetrx to inject empty transactions on the slave to resume replication, as below:

 [root@dbops02 mysql-utilities-1.6.2]# mysqlslavetrx --gtid-set=46fdb7ad-5852-11e6-92c9-0800274fb806:9-13 --verbose --slaves=wb@localhost:3306
WARNING: Using a password on the command line interface can be insecure.
#
# GTID set to be skipped for each server:
# - localhost@3306: 46fdb7ad-5852-11e6-92c9-0800274fb806:9-13
#
# Injecting empty transactions for 'localhost:3306'...
# - 46fdb7ad-5852-11e6-92c9-0800274fb806:9
# - 46fdb7ad-5852-11e6-92c9-0800274fb806:10
# - 46fdb7ad-5852-11e6-92c9-0800274fb806:11
# - 46fdb7ad-5852-11e6-92c9-0800274fb806:12
# - 46fdb7ad-5852-11e6-92c9-0800274fb806:13
#
#...done.
#

When you get back to the MySQL client on the slave, you’ll see that Retrieved_Gtid_Set and Executed_Gtid_Set from SHOW SLAVE STATUS now point to the same position:

           Retrieved_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-13,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-13,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3,
81a567a8-5852-11e6-92cb-0800274fb806:1

Afterward, make sure you start the slave (mysqlslavetrx won’t restart replication the way the previous tool does); a minimal sketch of that step follows below.
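
Using standard MySQL statements:

mysql> START SLAVE;
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW SLAVE STATUS\G
-- confirm Slave_IO_Running: Yes and Slave_SQL_Running: Yes,
-- and that Executed_Gtid_Set keeps advancing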

Inject a Fake Transaction

Fake transactions are also called empty transactions. If a global transaction is breaking a slave, you can inject empty transactions that don’t affect data in order to resume replication and carry on processing the data streaming from the master. Make sure this isn’t going to affect future slaves, especially if this server becomes a new master after a failover/switchover process. You can get more information about errant transactions here and here.

mysql> stop slave;
Query OK, 0 rows affected (0.01 sec)
 
mysql> set gtid_next='46fdb7ad-5852-11e6-92c9-0800274fb806:14';
Query OK, 0 rows affected (0.00 sec)
 
mysql> begin; commit;
Query OK, 0 rows affected (0.00 sec)
 
Query OK, 0 rows affected (0.00 sec)
 
mysql> set gtid_next=automatic;
Query OK, 0 rows affected (0.00 sec)

Now, when you check retrieved and executed out of SHOW SLAVE STATUS, you can see the below:

           Retrieved_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-14,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-14,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3,
81a567a8-5852-11e6-92cb-0800274fb806:1

It’s time to start slave (and be happy)!

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.12
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000005
          Read_Master_Log_Pos: 3329
               Relay_Log_File: mysqld-relay-bin.000011
                Relay_Log_Pos: 491
        Relay_Master_Log_File: mysqld-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 3329
              Relay_Log_Space: 3486
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 3
                  Master_UUID: 46fdb7ad-5852-11e6-92c9-0800274fb806
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-14,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 46fdb7ad-5852-11e6-92c9-0800274fb806:1-14,
4fbe2d57-5843-11e6-9268-0800274fb806:1-3,
81a567a8-5852-11e6-92cb-0800274fb806:1
                Auto_Position: 1
1 row in set (0.00 sec)

Cheers!

Mar
24
2016
--

Want to be a superhero? Join the Database Performance Team!


Admit it, you’ve always wanted to fight danger, win battles and save the day! Who doesn’t? Do you want to be a superhero? Percona can show you how!

We don’t have any radioactive spiders or billionaire gadgets, but we do have our own team of superheroes dedicated to protecting your database performance: The Database Performance Team!

The Database Performance Team comprises our services experts, who work tirelessly every day to guarantee the performance of your database. Percona’s database services are some of our most valuable customer resources – besides the software itself. Whether it’s support, consulting, technical account managers, or remote DBAs, our team is made up of superheroes that make sure your database is running at peak performance.

We want you to join us in the fight against poor performance. Join our Database Performance Team crew as part of the Database Street Team!

We’ll be introducing the members of our super group in the coming weeks. As we introduce the Database Performance Team (the “characters” below), we want you! We’ll be offering “missions” for you to complete: challenges, puzzles, or actions that get you prizes for success!

Your first mission: guess the identities of our secret team before we reveal them!

  • Mystery Character 1. Hint: Funny, friendly, quick-witted, supporting, fast and courteous – but still able to get the job done with amazing competence. Who am I?
  • Mystery Character 2. Hint: Computer-like smarts, instant recall, a counselor, able to understand a problem and the solution quickly. Who am I?
  • Mystery Character 3. Hint: Technical, with clairvoyant foresight, with the knowledge and statistics to account for all issues, manages problems before they happen. Who am I?
  • Mystery Character 4. Hint: Remotely all-seeing, a director, good at multi-tasking, adapts-on-the-fly, cool in a crisis. Who am I?
  • Mystery Character 5. Hint: Insanely strong, can’t be stopped, hard to knock down, the product of rigorous testing, unlimited endurance. Who am I?

Follow @Percona on Twitter and use the hashtag #DatabasePerformanceTeam to cast your guess on who any mystery character is. Correctly guess any of their names or roles, and the lucky winner gets their choice of our mystery T-shirt in either men’s or women’s style.

Stay tuned, as we reveal the identities of the Database Performance Team over the coming weeks! Respond with your guess in the comments below.

Join the ranks of the Database Street Team! Fun games, cool prizes – more info is coming soon!

Some facts:*

Gartner has estimated the average cost of downtime at $5,000 per minute!

Join The Database Performance Team today!

*Source: Global Cloud-based Database Market 2014-2018

Jan
12
2016
--

Play the Percona Powerball Pool!!

The Only Sure Thing is Percona Powerball Pool

Everyone is talking about the upcoming Powerball lottery draw. 1.4 BILLION dollars!! And growing! Millions of people are imagining what they would do IF they win. It’s the stuff of dreams.

That is literally true. The chances of winning the Powerball Lottery are 1 in 292.2 million. Roughly speaking, picking the right combination of numbers is like flipping a coin and getting heads 28 times in a row. You’re more likely to get struck by lightning (twice) or bitten by a shark.

Sorry.

You know what is a sure thing? Percona’s ability to optimize your database performance and increase application performance. Our Support and Percona Care consultants will give you a 1 in 1 chance of making your database run better, solving your data performance issues, and improving the performance of your applications.

However, in the spirit of the moment, Percona has bought 10 sets of Powerball numbers and posted them on Facebook, Twitter and LinkedIn. It’s the Percona Powerball Pool! Like and share any of those posts, and you are qualified for one (1) equal share of the winnings! Use #perconapowerball when you share.

Here are the numbers:

[Image: Percona’s 10 sets of Powerball numbers]

We at Percona can’t promise a huge Powerball windfall (in fact, as data experts we’re pretty sure you won’t win!), but we can promise that our consultants are experts at helping you with your full LAMP stack environments. Anything affecting your data performance – on that we can guarantee you a win!

Full rules are here.
