Jul
06
2018
--

Another Day, Another Data Leak

In the last few days, there has been information released about yet another alleged data leak, placing in jeopardy “…[the] personal information on hundreds of millions of American adults, as well as millions of businesses.” In this case, the “victim” was Exactis, for whom data collection and data security are core business functions.

Some takeaways from Exactis

Please excuse the pun! In security, we have few chances to chuckle. In fact, as a Security Architect, I sigh deeply when I read about this kind of issue. Firstly, it’s preventable. Secondly, I worry that if an organization like Exactis is not getting it right, what chance does the rest of the world have?

As the Wired article notes, the tool https://shodan.io/ can be revealing and is well worth a look. For example, you can see that there are still MANY Elasticsearch systems exposed to the public internet. Why not use Shodan to check what everyone else in the world can see openly on your systems?
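
If you have the shodan command-line client installed (it ships with the shodan Python package), a query along these lines lists hosts answering on the default Elasticsearch port. The filter syntax shown here is an assumption on my part, so treat this as a sketch:

# Hypothetical query: publicly reachable Elasticsearch instances.
shodan init YOUR_API_KEY
shodan search --fields ip_str,port,org "product:elastic port:9200"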

Securing databases

Databases in themselves do not need to be at risk, as long as you take the necessary precautions. We discussed this in a blog post that I co-authored last year.

As far as I can discern, had the setup behind this latest alleged gaffe made use of iptables or a similar firewall feature, the breach could not have occurred.

With immaculate timing, my colleague Marco Tusa wrote a post last month on how to set up iptables for Percona XtraDB Cluster, and if you are not sure if or how that applies to your setup, it is definitely worth a read. In fact, you can access all of our security blog posts if you would like some more pointers.
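
For a flavor of what that looks like, here is a minimal sketch (the source address is illustrative, and a real deployment needs a complete, persistent ruleset):

# Accept MySQL traffic (port 3306) only from a known application host, drop everything else.
iptables -A INPUT -p tcp -s 10.0.0.5 --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP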

Of course, security does not stop with iptables. Application developers should already be familiar with the need to avoid SQL injection, and there is a decent SQL injection prevention cheat sheet here, offered by The Open Web Application Security Project (OWASP). Even if you don’t fully understand the technical details, a cheat sheet like this might help you to ask the right questions for your application.
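
To illustrate the idea at the database level (the table and column names are made up), server-side prepared statements keep user input out of the statement text, so a crafted value cannot rewrite the query:

mysql> PREPARE stmt FROM 'SELECT id, name FROM users WHERE email = ?';
mysql> SET @email = 'input@example.com';
mysql> EXECUTE stmt USING @email;
mysql> DEALLOCATE PREPARE stmt;

Most application languages expose the same mechanism through their database drivers, which is normally where you would use it.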

MySQL resources

For a more in-depth look at MySQL security, I have two talks up on YouTube. The first of these is a twenty-minute presentation on hardening MySQL and the second on web application security and why you really should review yours. You could also check out our recorded webinar Security and Encryption in the MySQL world presented by Dimitri Vanoverbeke.
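
As a taste of what hardening can mean in practice, here is a minimal sketch using MySQL 5.7+ syntax (the account name and schema are illustrative):

mysql> DROP USER IF EXISTS ''@'localhost';                           -- remove anonymous accounts
mysql> ALTER USER 'app'@'10.0.0.%' REQUIRE SSL;                      -- force TLS for application connections
mysql> GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'app'@'10.0.0.%';  -- least privilege, no global grants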

MongoDB resources

Of course, security challenges are not unique to SQL databases. If you are a MongoDB user, this webinar MongoDB Security: Making things secure by default might be of interest to you. Or perhaps this one on using LDAP Authentication with MongoDB? Adamo Tonete presents both of these webinars.
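
If you want a starting point before watching those, the single most important step is not running MongoDB wide open to the world: enable authentication and bind it to a private interface (the address below is illustrative):

# Require authentication and listen only on an internal address.
mongod --auth --bind_ip 10.0.0.12 --port 27017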

For a more widely applicable view, you could try Colin Charles’ recent webinar too.

There are always consequences

As Exactis are no doubt discovering, managing the fallout from such a breach is a challenge. If you are not sure where you stand on security, or what you can do to improve your situation, then audit services such as those we offer could prove to be a valuable investment.

Finally, some of you will be lucky enough to have someone dedicated to IT security in your organizations. Next time you see them, instead of avoiding their steely stare, why not invite them for a coffee* and a chat? It could be enlightening!

*Beer or scotch is almost always accepted too…

Jun
14
2018
--

What is the Top Cause of Application Downtime Today?

I frequently talk to our customer base about what keeps them up at night. While the answers vary widely, they tend to fall into one of two categories. The first is the conditioned fear of some monster lurking behind the scenes that could pounce at any time. The second, of course, is the actual monster of downtime on a critical system. Ask most tech folks and they will tell you outages seem to happen only late at night or early in the morning. And those outages really do keep them up.

Entire companies and product lines have been built around providing those in the IT world with some ability to sleep at night. Modern enterprises have spent millions to mitigate the risk and prevent their businesses from having a really bad day because of an outage. Cloud providers are attuned to the downtime dilemma and spend lots of time, money, and effort to build in redundancy and make “High Availability” (HA) as easy as possible. The frequency of “hardware” or server issues continues to dwindle.

Where does the downtime issue start?

In my discussions, most companies I have talked to say their number one cause of outages and customer interruptions is ultimately related to the deployment of new or upgraded code. Often I hear that the operations team has little or no involvement with an application until it’s put into production. It is a bit ironic that this is also the area where companies tend to drastically under-invest. They opt instead to invest in ways to “Scale Out or Up”, or perhaps in how to survive asteroids hitting two out of three of their data centers.

Failing over broken or slow code from one server to another does not fix it. Adding more servers to distribute the load can mitigate a problem, but can also escalate the cost dramatically. In most cases, the solutions they apply don’t address the primary cause of the problems.

While there are some fantastic tools out there that can help with getting better visibility into code-level issues — such as New Relic, AppDynamics and others — the real problem is that these often end up being used to diagnose issues after they have appeared in production. Most companies carry out some amount of testing before releasing code, but typically it is a fraction of what they should be doing. Working for a company that specializes in open source databases, we get a lot of calls on issues that have prevented companies’ end users from using critical applications. Many of these problems are fixable before they cost revenue and reputation.

I think it’s time technology companies rethink their QA, testing, and pre-deployment requirements. How much time, effort, and money could we save if we caught these “monsters” before they made it into production?

Not to mention how much better our operations team will sleep…

Nov
13
2017
--

Percona Live Open Source Database Conference 2018 Call for Papers Is Now Open!

We are pleased to announce the opening of the call for papers for the Percona Live Open Source Database Conference 2018 in Santa Clara, CA. It will be open from now until December 22, 2017.

Our theme is “Championing Open Source Databases,” with topics of MySQL, MongoDB and other open source databases, including PostgreSQL, time series databases and RocksDB. Session tracks include Developers, Operations and Business/Case Studies.

We’re looking forward to your submissions! We want proposals that cover the many aspects and current trends of using open source databases, including design practices, application development, performance optimization, HA and clustering, cloud, containers and new technologies, as well as new and interesting ways to monitor and manage database environments.

Describe the technical and business values of moving to or using open source databases. How did you convince your company to make the move? Was there tangible ROI? Share your case studies, best practices and technical knowledge with an engaged audience of open source peers.

Possible topics include:

  • Application development. How are you building applications using open source databases to power the data layers? What languages, frameworks and data models help you to build applications that your customers love? Are you using MySQL, MongoDB, PostgreSQL, time series or other databases?  
  • Database performance. What database issues have you encountered while meeting new application and new workload demands? How did they affect the user experience? How did you address them? Are you using WiredTiger or a new storage engine like RocksDB? Have you moved to an in-memory engine? Let us know about the solutions you have found to make sure your applications can get data to users and customers.
  • DBaaS and PaaS. Are you using a Database as a Service (DBaaS) in the public cloud, or have you rolled out your own? Are you on AWS, Google Cloud, Microsoft Azure or RackSpace/ObjectRocket? Are you using a database in a Platform as a Service (PaaS) environment? Tell us how it’s going.
  • High availability. Are your applications a crucial part of your business model? Do they need to be available at all times, no matter what? What database challenges have you come across that impacted uptime, and how did you create a high availability environment to address them?
  • Scalability. Has scaling your business affected database performance, user experience or the bottom line? How are you addressing the database environment workload as your business scales? Let us know what technologies you used to solve issues.
  • Distributed databases. Are you moving toward a distributed model? Why? What is your plan for replication and sharding?
  • Observability and monitoring. How do we design open source database deployment with observability in mind? Are you using Elasticsearch or some other analysis tool? What tools are you using to monitor data? Grafana? Prometheus? Percona Monitoring and Management? How do you visualize application performance trends for maximum impact?
  • Container solutions. Do you use Docker, Kubernetes or other containers in your database environment? What are the best practices for using open source databases with containers and orchestration? Has it worked out for you? Did you run into challenges and how did you solve them?
  • Security. What security and compliance challenges are you facing and how are you solving them?
  • Migrating to open source databases. Did you recently migrate applications from proprietary to open source databases? How did it work out? What challenges did you face, and what obstacles did you overcome? What were the rewards?
  • What the future holds. What do you see as the “next big thing”? What new and exciting features just released? What’s in your next release? What new technologies will affect the database landscape? AI? Machine learning? Blockchain databases? Let us know what you see coming.

The Percona Live Open Source Database Conference 2018 Call for Papers is open until December 22, 2017. We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions. Share your open source database experiences with peers and professionals in the open source community by presenting a:

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. Encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

Speaking at Percona Live is a great way to build your personal and company brands. If selected, you will receive a complimentary full conference pass!

Submit your talks now.

Tips for Submitting to Percona Live

Include presentation details, but be concise. Clearly state:

  • Purpose of the talk (problem, solution, action format, etc.)
  • Covered technologies
  • Target audience
  • Audience takeaway

Keep proposals free of sales pitches. The Committee is looking for case studies and in-depth technical talks, not ones that sound like a commercial.

Be original! Make your presentation stand out by submitting a proposal that focuses on real-world scenarios, relevant examples, and knowledge transfer.

Submit your proposals as soon as you can – the call for papers is open until December 22, 2017.

Sep
22
2017
--

How to Deal with XA Transactions Recovery

For most people (including me until recently), database XA transactions are a fuzzy concept. In over eight years with Percona, I had never had to deal with XA transactions. Then, a few weeks ago, two customers came to me with XA transaction issues. That deserves a post.

XA 101

What are XA transactions? XA transactions are useful when you need to coordinate a transaction between different systems. The simplest example is just two storage engines within MySQL. Basically, an XA transaction follows this sequence (a concrete example follows the list):

  1. XA START
  2. Some SQL statements
  3. XA END
  4. XA PREPARE
  5. XA COMMIT or ROLLBACK
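
Spelled out with a concrete xid (the same one used later in this post), it looks like this; the table is illustrative:

mysql> XA START X'01020304','bqual',1234;
mysql> INSERT INTO t1 VALUES (1), (2);
mysql> XA END X'01020304','bqual',1234;
mysql> XA PREPARE X'01020304','bqual',1234;
mysql> XA COMMIT X'01020304','bqual',1234;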

Once prepared, the XA transaction survives a MySQL crash. Upon restart, you’ll see something like this in the MySQL error log:

2017-08-23T14:53:54.189068Z 0 [Note] Starting crash recovery...
2017-08-23T14:53:54.189204Z 0 [Note] InnoDB: Starting recovery for XA transactions...
2017-08-23T14:53:54.189225Z 0 [Note] InnoDB: Transaction 45093 in prepared state after recovery
2017-08-23T14:53:54.189244Z 0 [Note] InnoDB: Transaction contains changes to 2 rows
2017-08-23T14:53:54.189257Z 0 [Note] InnoDB: 1 transactions in prepared state after recovery
2017-08-23T14:53:54.189267Z 0 [Note] Found 1 prepared transaction(s) in InnoDB
2017-08-23T14:53:54.189312Z 0 [Warning] Found 1 prepared XA transactions
2017-08-23T14:53:54.189329Z 0 [Note] Crash recovery finished.
2017-08-23T14:53:54.189472Z 0 [Note] InnoDB: Starting recovery for XA transactions...
2017-08-23T14:53:54.189489Z 0 [Note] InnoDB: Transaction 45093 in prepared state after recovery
2017-08-23T14:53:54.189501Z 0 [Note] InnoDB: Transaction contains changes to 2 rows
2017-08-23T14:53:54.189520Z 0 [Note] InnoDB: 1 transactions in prepared state after recovery
2017-08-23T14:53:54.189529Z 0 [Note] Found 1 prepared transaction(s) in InnoDB
2017-08-23T14:53:54.189539Z 0 [Warning] Found 1 prepared XA transactions

The command xa recover shows you an output like this:

mysql> xa recover;
+----------+--------------+--------------+-----------+
| formatID | gtrid_length | bqual_length | data      |
+----------+--------------+--------------+-----------+
|     1234 |            4 |            5 |  bqual |
+----------+--------------+--------------+-----------+
1 row in set (0.00 sec)

There is some binary data that can’t be shown in HTML. The XA Xid is made of three fields: gtrid (global trx id), bqual (branch qualifier) and formatID. Java applications use all three fields. For my example above, I used “X'01020304','bqual',1234”. You can trust Java application servers to be creative with Xid values. With MySQL 5.7, you can output the data part in hex with convert xid:

mysql> xa recover convert xid;
+----------+--------------+--------------+----------------------+
| formatID | gtrid_length | bqual_length | data                 |
+----------+--------------+--------------+----------------------+
|     1234 |            4 |            5 | 0x01020304627175616C |
+----------+--------------+--------------+----------------------+
1 row in set (0.01 sec)

The Problem

If you do nothing, the prepared transaction stays there forever and holds locks and a read view open. As a consequence, the history list grows without bound along with your ibdata1 file, where the undo entries are kept. If you have slaves, they all have the prepared transaction too (at least with 5.7). No fun.

As a consequence, if you are using XA transactions, you MUST check whether there are prepared transactions pending after the server or mysqld restarts. If you find such transactions, you need to commit or roll them back, depending on what is involved.

But how do you commit these XA transactions? The problem here is the output of xa recover. As it is, the output is unusable if there is a bqual field or a non-default formatID field:

mysql> xa commit 0x01020304627175616C;
ERROR 1397 (XAE04): XAER_NOTA: Unknown XID

The Fix

Looking back at the xa recover convert xid output above, the gtrid_length and bqual_length are provided. Using these values, you can extract the parts of the data field, which gives us:

  • gtrid = 0x01020304
  • bqual = 0x627175616C

And, of course, the formatID is 1234. Altogether, we have:

mysql> xa commit 0x01020304,0x627175616C,1234;
Query OK, 0 rows affected (0.15 sec)

Which finally works! On 5.6 the convert xid option is not available. You have to be a bit more creative:

root@master57:/var/lib/mysql# mysql -r -e 'xa recover\G' | hexdump -C
00000000  2a 2a 2a 2a 2a 2a 2a 2a  2a 2a 2a 2a 2a 2a 2a 2a  |****************|
00000010  2a 2a 2a 2a 2a 2a 2a 2a  2a 2a 2a 20 31 2e 20 72  |*********** 1. r|
00000020  6f 77 20 2a 2a 2a 2a 2a  2a 2a 2a 2a 2a 2a 2a 2a  |ow *************|
00000030  2a 2a 2a 2a 2a 2a 2a 2a  2a 2a 2a 2a 2a 2a 0a 20  |**************. |
00000040  20 20 20 66 6f 72 6d 61  74 49 44 3a 20 31 32 33  |   formatID: 123|
00000050  34 0a 67 74 72 69 64 5f  6c 65 6e 67 74 68 3a 20  |4.gtrid_length: |
00000060  34 0a 62 71 75 61 6c 5f  6c 65 6e 67 74 68 3a 20  |4.bqual_length: |
00000070  35 0a 20 20 20 20 20 20  20 20 64 61 74 61 3a 20  |5.        data: |
00000080  01 02 03 04 62 71 75 61  6c 0a                    |....bqual.|
0000008a

But there is a limitation in 5.6: you can only XA commit/rollback transactions that belong to your own session. That means after a crash you are out of luck. To get rid of these orphaned transactions, you need to promote a slave or perform a logical dump and restore. The best plan is to avoid using XA transactions with 5.6.

I submitted this bug to Percona Server for MySQL in order to get a usable output out of xa recover convert xid. If you think this is important, vote for it!

Sep
22
2017
--

This Week in Data with Colin Charles #7: Percona Live Europe and Open Source Summit North America

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Europe Dublin

Are you affected by the Ryanair flight cancellations? Have you made alternative arrangements? Have you registered for the community dinner? Even speakers have to register, as it is a separate ticket! There will be fun lightning talks in addition to food and drink.

You are, of course, already registered for Percona Live Europe Dublin, right? See you there! Don’t forget to pack a brolly, or a rain jacket (if this week’s weather is anything to go by).

Open Source Summit North America

Last week, a lot of open source folk were in Los Angeles, California for the annual Open Source Summit North America (formerly known as LinuxCon). I’ve been to many as a speaker, and have always loved going to the event (so save the date, in 2018 it is August 29-31 in Vancouver, British Columbia, Canada).

What were the major themes this year? Containerization. Everyone (large and small) seems to be moving workloads into containers. Containers and stateful applications, along with thoughts on performance, make things all the more interesting. This is a big deal for us in the MySQL/MongoDB/other open source database space. Technologies to watch include Docker/Moby, Kubernetes, and Mesos. People are frankly already deploying on these technologies, and it looks like the on-ramp is coming. Videos to watch:

The cloud is still a big deal. Yes, people are all customers of Amazon Web Services. Sure they are looking at Microsoft Azure. Google Cloud Platform is – from my informal survey – the third most popular. In many instances, I had conversations about Oracle Cloud, and it looks like there is a huge push behind this (but not too many users that I’ve seen yet). So it’s still a bet on the future as it continues to be developed by engineers. A mention of Rackspace Cloud (which offers all the MySQL variants in the cloud) is good, but many large-scale shops haven’t thought about it.

There were also some “fun” keynotes:

I wish more events had this kind of diverse keynotes.

From a speaker standpoint, I enjoyed the speaker/sponsor dinner party (a great time to catch up with friends and meet new ones), as well as the t-shirt and speaker gift (wooden board). I had a great time at the attendee expo hall reception and the party at Paramount Studios (lots of fun catered things, like In-N-Out burgers!).

Releases

  • ProxySQL 1.4.3. Hot on the heels of 1.4.2 comes 1.4.3, nicknamed “The ClickHouse release.” Clients can connect to ProxySQL, and it will query a ClickHouse backend. Should be exciting for ClickHouse users. Don’t forget the SQLite support, too!
  • Percona XtraDB Cluster 5.6.37-26.21
  • MariaDB ColumnStore 1.1.0 Beta. Do you use ColumnStore? Or do you use ClickHouse? There’s a new beta that might be worth trying.
  • MySQL 8.0.3 Release Candidate. Download this on your way to Percona Live Europe Dublin! Try it. There are many talks about it, including a keynote. You’ll find things like histograms, more improvements around the optimizer, JSON and GIS improvements, security improvements, resource groups (which seem very interesting), data dictionary changes and a whole lot more!

Link List

  • CallidusCloud Acquires OrientDB, the Leading Multi-Model Database Technology
  • Database provider MongoDB has filed to go public. Bound to happen, and some highlights according to TechCrunch: “The company brought in $101.4 million in revenue in the most recent year ending January 31, and around $68 million in the first six months ending July 31 this year. In that same period, MongoDB burned through $86.7 million in the year ending January 31 and $45.8 million in the first six months ending July 31. MongoDB’s revenue is growing, and while its losses seem to be stable, they aren’t shrinking either. There have been over 30 million downloads of MongoDB community, and the link also has a nice cap table pie chart.”

Upcoming appearances

Percona’s website keeps track of community events, so check that out and see where to listen to a Perconian speak. My upcoming appearances are:

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Jul
18
2017
--

Backups and Disaster Recovery

In this post, we’ll look at strategies for backups and disaster recovery.

Note: I am giving a talk on Backups and Disaster Recovery Best Practices on July 27th.

When discussing disaster recovery, it’s important to take your business’ continuity plan into consideration. Backup and recovery processes are a critical part of any application infrastructure.

A well-tested backup and recovery system can be the difference between a minor outage and the end of your business.

You will want to take three things into consideration when planning your disaster recovery strategy: recovery time objective, recovery point objective and risk mitigation.

Recovery time objective (RTO) is how long it takes to restore your backups. Recovery point objective (RPO) is the point in time to which you want to recover (in other words, how much data you can afford to lose after recovery). Finally, you need to understand what risks you are trying to mitigate. Risks to your data include (but are not limited to) bad actors, data corruption, user error, host failure and data center failure.

Recommended Backup Strategies

We recommend that you use both physical (Percona XtraBackup, RDS/LVM Snapshots, MySQL Enterprise Backup) and logical backups (mysqldump, mydumper, mysqlpump). Logical backups protect against the loss of single data points, while physical backups protect against total data loss or host failure.

The best practice is running Percona XtraBackup nightly, followed by mysqldump (or in 5.7+, mysqlpump). Percona XtraBackup enables you to quickly restore a server, and mysqldump enables you to quickly restore data points. These address recovery time objectives.
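
A minimal sketch of that nightly routine might look like this (the paths are illustrative, and credentials are assumed to come from an option file):

# Physical backup of the whole server with Percona XtraBackup.
xtrabackup --backup --target-dir=/backups/physical/$(date +%F)
# Logical backup, for restoring individual data points.
mysqldump --single-transaction --all-databases > /backups/logical/$(date +%F).sql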

For point-in-time recovery, it is recommended that you download binlogs on a regular basis (once an hour, for example).

Another option is binlog streaming. You can find more information on binlog streaming in our blog: Backing up binary log files with mysqlbinlog.
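
As a sketch of that approach, mysqlbinlog (5.6+) can act as a live binlog streamer; the host and file names here are illustrative:

# Continuously pull binary logs from the master, starting from the given file.
mysqlbinlog --read-from-remote-server --host=master.example.com --raw --stop-never mysql-bin.000001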

There is also a whitepaper that is the basis of my webinar here: MySQL Backup and Recovery Best Practices.

Delayed Slave

One way to save on operational overhead is to create a 24-hour delayed slave. This takes the place of the logical backup (mysqldump) as well as the binlog streaming. You want to ensure that you stop the delayed slave immediately following any issue, so that the bad change does not reach this “backup” as well.

A delayed slave is created in 5.6 and above with:

CHANGE MASTER TO MASTER_DELAY = N;
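
MASTER_DELAY is expressed in seconds, so for the 24-hour delay discussed above:

CHANGE MASTER TO MASTER_DELAY = 86400;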

After a disaster, you would issue:

STOP SLAVE;

Then, in order to get a point-in-time, you can use:

START SLAVE UNTIL MASTER_LOG_FILE = 'log_name', MASTER_LOG_POS = log_pos;

Restore

It is a good idea to test your backups at least once a quarter. Backups do not exist unless you know you can restore them. There are some recent high-profile cases where developers dropped tables or schemas, or data was corrupted in production, and in one case none of the five different backup methods in place proved usable for a restore.

The best case scenario is an automated restore test that runs after your backup, and gives you information on how long it takes to restore (RTO) and how much data you can restore (RPO).

For more details on backups and disaster recovery, come to my webinar.

Dec
02
2016
--

Business Continuity and MySQL Backups

This blog post discusses the business continuity plan around MySQL backups, and how organizations should think about them.

During the years I’ve worked in IT, I’ve learned that backups are sometimes treated as a purely conceptual subject in organizations. Many companies have them, but don’t document the associated business continuity plan for them. I experienced this the hard way many, many years ago, somewhere around the time when MySQL 5.0 was still widely used.

In most organizations, there are a couple of business continuity subjects that should be described internally: for example, what is the recovery time objective, and what is the recovery point objective? Let’s go a bit deeper into both concepts:

Recovery Point Objective

A recovery point objective describes the maximum amount of data, measured in time, that can be lost during a major incident, such as recovery from a massive data center failure. One question you should ask before such a situation arises is: how much lost information is tolerable?

If you have a recovery point objective of over a day, your daily backup routines might cover it. However, if your recovery point objective is more stringent, you might need additional tooling such as binary log streaming or incremental backups.

Recovery Time Objective

This second term and concept is also essential in building a business continuity plan. Your environment has to remain active to generate traffic and, potentially, revenue.

What are the requirements promised to your customers? Are there any SLAs agreed with the customer, or is it best effort? If it’s best effort, what would be the tipping point at which your users start using an alternative service from a competitor? These are all factors to consider while determining your RTO.

In Short

If the recovery point objective and recovery time objective are stringent, additional costs might be required when buying hardware, or perhaps having a secondary data center becomes mandatory. However, it’s a cost/value discussion: what makes your company lose revenue, and what is acceptable during a crisis?

Based on your business continuity requirements, you can start building your DR plans. Make sure your business continuity requirements drive the DR plan, and not vice versa.

What tools do you have at your disposal to create sensible MySQL backups?

Logical backups

mysqldump. Remember mysqldump, the original backup tool included with MySQL? The good thing about mysqldump is that you can actually read, and even edit, the output of the backup before potentially restoring data, which can prove interesting during development work.

mysqldump’s biggest negative is that it’s not scalable, nor fast for backing up large amounts of data. Additionally, restoring data is even slower as you must replay the complete dataset on your new MySQL database servers (rebuild indexes, large IO, etc.).

mysqldump’s advantages include the convenience and flexibility of viewing or even editing the output before restoring. It gives you the ability to clone databases for development, and produce slight variations of an existing database for testing.

mydumper. This tool is comparable to mysqldump; however, it works in parallel, which provides significant benefits in both backup and restoration time.
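
A hedged usage sketch (thread counts and paths are illustrative; mydumper’s companion tool myloader performs the parallel restore):

# Parallel logical backup...
mydumper --threads=8 --outputdir=/backups/dump
# ...and parallel restore.
myloader --threads=8 --directory=/backups/dump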

Binary backups

Binary backups refer to copies made of the entire MySQL dataset. Binary backups are typically faster than logical backups, especially on larger datasets. Several tools come to mind in these cases.

Percona XtraBackup. An open source binary backup solution for InnoDB. The good thing about XtraBackup is that it is non-locking when using MySQL with the InnoDB storage engine.

MySQL Enterprise Backup. An InnoDB hot backup solution that is included in the subscription level of MySQL Enterprise.

These tools can offer you incremental and daily backups, however they still don’t give you point-in-time recovery. If your recovery point objective is very strict, you may need to store your binary logs externally as backups and replay them on your restored database. Keep in mind that this potentially impacts your recovery time objective.
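
Replaying externally stored binary logs on top of a restored backup is conceptually simple (the file names and stop time are illustrative):

# Apply archived binary logs up to a chosen point in time.
mysqlbinlog --stop-datetime="2016-12-01 12:00:00" mysql-bin.000042 mysql-bin.000043 | mysql -u root -p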

Delayed Slaves

This concept is not a backup, but this technology might help you to recover your database and limit the recovery time significantly.

Conclusion

We’ve discussed having a business continuity requirement list, and some potential tools that might assist you in covering it (at least on the MySQL level). One of the last items, and an important one, is actual testing. The number of companies that need to recover data and then discover that their backups are corrupted is far too high.

Make sure your organization tests its backups regularly. Are you sure they work properly? Also make sure that you perform regression tests for new code, for example against a restored copy of the backups.

If you know you can trust your backups, you might sleep better at night! ;-)
