Oct 17, 2023

Using CLONE INSTANCE With DATA DIRECTORY on Replicas With Data


This blog post discusses using the CLONE INSTANCE command with the safety option DATA DIRECTORY when you do not have enough disk space to store two datasets.

In my previous blog post on the CLONE INSTANCE command, The MySQL Clone Plugin Is Not Your Backup, I mentioned that the DATA DIRECTORY option helps you avoid having to re-initialize the replica and its clone-related settings from scratch if the clone operation fails.

MySQL Clone plugin simplifies new replica provisioning but does not simplify server restore after failure unless you are ready to re-install your MySQL instance from scratch.

However, when you clone a replica that already has a huge dataset, you may not have enough space for two datasets: the one copied from the source server and the data already present on the replica.

Since you decided to clone a replica from another server, you have agreed to lose your current data. The only purpose of the DATA DIRECTORY option here is to keep clone-related privileges and settings untouched in case of failure. You may use one of the following strategies to perform the clone operation safely.

Start from scratch

To do this, stop your current server, remove the data directory, initialize it again, connect, and set up clone-related privileges and options. This way, you will have a new instance with a small data directory, so you can use the DATA DIRECTORY option without fear of exceeding the available disk space.

Keep your existing MySQL schema

If you do not want to re-install your instance, you can remove user data from it instead.

  • List all non-system databases with the query:
    SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA
    WHERE SCHEMA_NAME NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');
  • Remove them one by one. You can use the following stored procedure to do it. Note the DELIMITER change, which is required so the procedure body can contain semicolons, and the backticks around the database name, which protect against names requiring quoting:
    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE done INT DEFAULT FALSE;
      DECLARE dbname VARCHAR(64);
      DECLARE c1 CURSOR FOR SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
    
      OPEN c1;
    
      drop_loop: LOOP
        FETCH c1 INTO dbname;
        IF done THEN
          LEAVE drop_loop;
        END IF;
        SET @temp = CONCAT('DROP DATABASE `', dbname, '`');
        PREPARE stmt FROM @temp;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
      END LOOP;
    
      CLOSE c1;
    END//
    DELIMITER ;
    
    CALL p1();

Note! If you store your InnoDB data in the shared tablespace (innodb_file_per_table=0), the ibdata1 file will not shrink, and you won’t be able to free disk space this way.

Cloning the instance

Once you have freed disk space by removing data manually, you can use the CLONE INSTANCE command with the DATA DIRECTORY option:

CLONE INSTANCE FROM 'clone_user'@'source_host':3306 IDENTIFIED BY 'password' DATA DIRECTORY = '/path/to/custom_dir';

In case of a successful clone, you need to finalize it with one extra step: stop your MySQL instance and replace the content of the data directory with the content of the directory you used for the clone operation. After that, start the server.
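The finalization step above amounts to swapping the contents of two directories while the server is stopped. A minimal sketch of that swap, using temporary directories in place of the real MySQL data directory and clone directory (the paths and file names here are simulated, not real server files):

```python
import shutil
import tempfile
from pathlib import Path

# Simulated stand-ins for the real paths, e.g. /var/lib/mysql and /path/to/custom_dir
datadir = Path(tempfile.mkdtemp())
clone_dir = Path(tempfile.mkdtemp())

(datadir / "ibdata1").write_text("old data")      # data to be discarded
(clone_dir / "ibdata1").write_text("cloned data")  # data produced by CLONE INSTANCE

# 1. With the server stopped, discard the old data directory contents.
for item in datadir.iterdir():
    shutil.rmtree(item) if item.is_dir() else item.unlink()

# 2. Move the cloned files into the data directory, then start the server.
for item in clone_dir.iterdir():
    shutil.move(str(item), str(datadir / item.name))

print((datadir / "ibdata1").read_text())  # -> cloned data
```

In practice you would also preserve ownership and permissions (the files must belong to the mysql user) before starting the server.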

In case of a clone operation failure, remove cloned data, fix errors, and try again.

Conclusion

A clone operation may fail and force you to perform extra steps, up to re-initializing the MySQL instance on the replica. To avoid this, use the DATA DIRECTORY option, and clean your existing data before cloning if you do not have enough disk space to store two copies of the data.

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

 

Try Percona Distribution for MySQL today!

Sep 26, 2023

The MySQL Clone Plugin Is Not Your Backup


This blog post discusses the limitations of the MySQL Clone plugin.

The MySQL clone plugin significantly simplifies the process of replica provisioning. All you need to do is:

  • Ensure that the source server has binary logs enabled
  • Grant appropriate permissions
  • Execute the CLONE INSTANCE command on the recipient

This works very smoothly when you provision a new replica that does not yet have any data.

Due to its simplicity, you may want to use the clone plugin instead of a backup to restore a server that has suffered data inconsistency or corruption, e.g., after a crash.

However, if you have data on your replica, you need to consider how you will recover if the CLONE INSTANCE command fails with an error.

By default, the CLONE INSTANCE command works as follows:

  • Checks prerequisites on the replica
  • Wipes out the data directory
  • Copies data from the source server to the replica
  • Finalizes data on the replica

Let’s discuss these steps in detail.

Checking prerequisites on the replica

This is a safe operation that stops the clone if any of the prerequisites listed at https://dev.mysql.com/doc/refman/8.0/en/clone-plugin-remote.html#clone-remote-prerequisites are not met. Data and user settings are left intact if failure happens at this stage.

However, not every option affecting a future clone operation can be verified. Debugging options, such as innodb_force_recovery, can cause the clone operation to fail at later steps, as reported in MySQL Bug #109035. Unfortunately, even if Oracle adds this option to the prerequisites check, it would not eliminate all risk of option-incompatibility failures during the CLONE INSTANCE operation.

Wiping the data directory

By default, after the prerequisites are checked, the clone plugin wipes out the content of the data directory. As a result, any subsequent failure during the CLONE INSTANCE operation will leave your server unusable.

This means that you may have to re-install the MySQL server and re-apply all clone-related settings in order to repeat the CLONE INSTANCE operation.

To avoid the negative effects of possible failures, use the DATA DIRECTORY clause of the CLONE INSTANCE command. It adds an extra step to the finalization stage but will save you in case of failure. Note that you must have enough disk space to store your current replica data together with the data copied from the source server.

Copying data from the source server to the replica

This stage is the riskiest because MySQL has limited control over the data transfer. If something bad happens to your network while MySQL copies data, the clone operation stops with an error. You need to be prepared for this: if you decide not to use the DATA DIRECTORY option, be ready to re-install the replica MySQL server from scratch.

Note: some options control how the data transfer is retried in case of network failure. The copy operation resumes if the network recovers within clone_donor_timeout_after_network_failure minutes. However, this option does not protect you from all possible disaster scenarios.
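The retry-within-a-window behavior described above can be sketched as a small loop: keep retrying the transfer until it succeeds or the timeout window elapses. This is a simplified model, not the plugin's actual implementation; the function and the simulated transfer are hypothetical:

```python
import time

def copy_with_retry(transfer, timeout_minutes=5, poll_seconds=0.01):
    """Retry a failing transfer until it succeeds or the retry window
    (analogous to clone_donor_timeout_after_network_failure) elapses."""
    deadline = time.monotonic() + timeout_minutes * 60
    while True:
        try:
            return transfer()
        except ConnectionError:
            if time.monotonic() >= deadline:
                raise  # window exhausted: the clone operation fails
            time.sleep(poll_seconds)

# Simulated transfer that fails twice, then the "network" recovers.
attempts = {"n": 0}
def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("network down")
    return "copy complete"

print(copy_with_retry(flaky_transfer))  # -> copy complete
```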

Finalizing data on the replica

If the clone operation completes successfully, the replica is automatically restarted. Otherwise, an error is reported.

Recovering from error if data is wiped out

After the error is reported, you need to fix it before restarting the clone operation. If the data is already wiped, try fixing the error without restarting the replica while your privileges are still in the memory cache. If fixing the error requires a replica restart, as in the case of a mistakenly set innodb_force_recovery option, your only choice is to re-initialize the MySQL instance on the replica from scratch, as you would for any new MySQL installation.

Recovering from error if you used option DATA DIRECTORY

In this case, just remove the data in the directory specified by the DATA DIRECTORY option, fix the error, and repeat the CLONE INSTANCE command. You can safely restart the replica without the risk of losing your clone-related privileges and settings.

Conclusion

The MySQL clone plugin simplifies new replica provisioning but does not simplify server restore after failure unless you are ready to re-install your MySQL instance from scratch. Using the MySQL clone plugin on servers with data is not easier than using traditional backup tools, such as Percona XtraBackup.

MySQL clone plugin resources

Provisioning Replication With Clone Plugin

MySQL 8.0.17 Clone Plugin: How to Create a Slave from Scratch

The MySQL Clone Wars: Plugin vs. Percona XtraBackup

Aug 30, 2023

Backup and Recovery for Databases: What You Should Know

People used to say, “Coal is king,” and for decades, it was. Coal powered just about everything, but mismanagement and a lack of stewardship left some messes. 

These days, “Data is king,” and a lot more. Data powers everything, and unlike coal and coal combustion, data and databases aren’t going away. So staying with our analogy, and learning from historical lessons, organizations must be responsible stewards of data — for the sake of customers, stakeholders, and the business itself. 

Any organization that uses personal and other sensitive data must have a firm, proven plan for business continuity in the event of a disaster or cyberattack. Losing access to or control of data for an extended period of time will disrupt operations, lead to financial losses, and damage an organization’s reputation. Recovering from a tarnished reputation can be costly and time-consuming.

Data stewardship also means protecting people’s privacy, safeguarding against breaches, and adhering to regulations and standards such as the European Union’s General Data Protection Regulation (GDPR), the United States’ Sarbanes-Oxley Act (SOX), and the Payment Card Industry Data Security Standard (PCI DSS).  

In this blog, we’ll focus on the elements of database backup and disaster recovery, and we’ll introduce proven solutions for maintaining business continuity, even amid otherwise dire circumstances.

Why backup and recovery preparedness is so important

Such a royal introduction demands background, so let’s get after it: Any data loss or unexpected downtime hurts an organization. Some losses can be crippling, even a business death knell. So it’s important to know (and not overlook) the trouble spots. Some are hidden, but some are in plain sight. In general terms, here are potential trouble spots:

  • Hardware failure: Manufacturing defects, wear and tear, physical damage, and other factors can cause hardware to fail. Power surges, outages, and harsh conditions (i.e., heat) can damage hardware components and prompt data loss. 
  • Software failure: Software applications can become vulnerable, or they can crash altogether. Without data backup mechanisms, there can be data loss or system downtime. Even worse, entire operating systems can crash, also resulting in data loss.
  • Human mistakes: Incorrect configuration is an all-too-common cause of hardware and software failure. Similarly, accidental deletion is a culprit.

In all three instances, failure to regularly back up data can result in significant data loss in the event of a disaster. Solid backup procedures must be in place.

Backup types and strategies

It all begins with choosing a strategy, and that depends on factors such as the use and importance of data to your business, your recovery time objectives (RTO), and your budget. Depending on what is needed, here are some common database backup types, strategies, and considerations:

Full backup vs. incremental backup: Best suited for smaller databases or those that don’t incur a lot of changes, a full backup includes a complete restore point. Though it can put safety concerns at ease, a full backup can be time-consuming and expensive. An incremental backup, which is faster and requires less storage than a full backup, captures changes made since the previous backup. It’s suitable for databases with moderate change rates. (For a more detailed description, read Full vs. Incremental vs. Differential Backups: Comparing Backup Types.)
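The relationship between the two backup types can be shown with a toy model: a database as a dictionary of rows, a full backup as a complete copy, and an incremental backup as only the rows changed or added since the previous backup. This is an illustrative sketch, not how any particular backup tool stores data:

```python
def full_backup(db: dict) -> dict:
    """A full backup copies every row."""
    return dict(db)

def incremental_backup(db: dict, last_backup: dict) -> dict:
    """An incremental backup keeps only rows changed or added
    since the previous backup."""
    return {k: v for k, v in db.items() if last_backup.get(k) != v}

db = {"a": 1, "b": 2}
full = full_backup(db)
db["b"] = 3          # one row changes after the full backup
db["c"] = 4          # one row is added

incr = incremental_backup(db, full)
print(incr)                      # -> {'b': 3, 'c': 4}  (small: only the changes)
print({**full, **incr} == db)    # restore = full backup + increment -> True
```

The last line shows why incremental backups are cheaper to take but costlier to restore: you need the full backup plus every increment, applied in order.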

Hot backups vs. cold backups: A hot backup — used to minimize downtime and ensure data availability for critical applications — allows you to create a copy of a database while the system is still actively serving user requests and processing transactions. In a cold backup, the database is taken offline. This is advisable only in certain scenarios, such as maintenance during low-use hours.

Choosing the right backup frequency: The appropriate frequency, of course, will vary from one organization to another. If you can’t afford to lose much data, you’ll need more frequent backups, possibly even continuous data protection solutions. The rate at which data changes within your database is a significant factor as well. To incorporate the latest changes, databases with high write activity might require more frequent backups.

Offsite backup: This involves physically storing backups in a secure location somewhere other than the primary data center. It’s more expensive, but offsite backup protects against site-wide disasters.

With the right backup strategy, you’ll be better able to achieve the aforementioned business continuity. Still, disasters might happen, so let’s also examine recovery.

Disaster recovery strategies and testing

One might think disaster begets loss, but with the right planning, that loss can be so minimal it’s more “blip” than “bad.” With that planning in place, you can look at it this way instead: Backup begets disaster recovery.

Disaster recovery (DR) strategies are essential to ensuring the integrity, availability, and reliability of data, particularly in the event of unexpected failures or errors. Such strategies help restore a database to a consistent and usable state. A disaster recovery plan can be as simple as the use of a backup and recovery procedure, or it can be complex, depending on the RTO and the recovery point objective (RPO). Key elements of comprehensive database recovery plans include point-in-time recovery, high availability and failover, replication, and others (we’ll break them down in a bit). 

Whatever plan you arrive at, you must test it. There should be no shortcuts in the testing, which should include:

  • Backup testing: Test to ensure that data can indeed be recovered from backups. This includes both data and log backups.
  • Failover testing: If you have a high availability setup or a secondary data center, test to ensure the failover process switches over to the backup database server.
  • Application testing: Test to make sure applications function correctly after recovery.
  • Data consistency: Verify that data remains consistent between the primary and secondary systems after recovery.

Backup and recovery tools

Now that we’ve covered some strategies, it’s a good time to look at some tools for putting plans in motion. There’s too much out there to provide a comprehensive list, so we’ll mention just a few high-profile options. These obviously are not detailed descriptions; they’re just introductions:

MySQL Enterprise Backup: This software supports hot backups and incremental backups.

Oracle Recovery Manager (RMAN): This enables users of Oracle databases to perform full, incremental, and differential backups. This tool also provides point-in-time recovery.

SQL Server Management Studio: This software includes backup and recovery tools for use with Microsoft SQL Server.

MongoDB Atlas backup: This hosted cloud service offers continuous backups or snapshots for point-in-time recovery, as well as incremental backups.

MongoDB Cloud Manager: This hosted service, which uses a graphical user interface, supports backup and restoration of replica sets and sharded clusters.

MongoDB Ops Manager: Available with Enterprise Advanced subscriptions, this is an on-premise tool that provides backup software and features much like those of Cloud Manager.

IBM Data Studio: This provides graphics-based and command-line tools for backup and recovery within IBM Db2 databases.

Commvault: This data management and protection software includes features for backup and recovery. It’s used to help ensure the functionality of hardware, software, and applications. 

All of the options above have enterprise-grade attributes, but not one of them is truly open source. So let’s discuss some options that are open source.

Open source Percona solutions for backup and disaster recovery

When you read reviews of what’s out there for backup and disaster recovery solutions, you tend to see glowing words about software with big-name recognition. But you don’t see a warning that the software is proprietary (most often, it is); you just see hints, like a “starting price.” And you certainly don’t see a warning that vendor lock-in might be lurking.

Amid the options, you see Percona at or near the top of lists and reviews. But with Percona, there are differences. There’s no hint of proprietary lock-in because it doesn’t exist. The software is truly open source. Percona solutions, fully supported and enterprise-grade, include:

Percona Backup for MongoDB: Percona Backup for MongoDB is a distributed and low-impact solution for consistent backups of MongoDB clusters, including sharding support. It enables you to make logical, physical, incremental, and selective backups and restores. Plus, point-in-time recovery functionality allows you to recover your database to a specific timestamp.

Percona XtraBackup: This is a free, complete online backup solution for all versions of Percona Server for MySQL, MySQL, and MariaDB. Percona XtraBackup performs online non-blocking, tightly compressed, highly secure backups on transactional systems. Percona XtraBackup is the world’s only open source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases. 

Percona Distribution for PostgreSQL: This production-ready PostgreSQL distribution includes pgBackRest, an open source backup and restore solution that enables full backup, incremental backup, and differential backup in PostgreSQL databases. The toolset also supports high availability and disaster recovery via Patroni, pg_bouncer, and HA proxy. 

Essential elements of database backup and recovery

Now, let’s introduce a couple elements of an ideal backup and recovery solution. (There are more coming in the next section, but these two are big, huge in fact.) 

Database replication

Data redundancy is a cornerstone of disaster recovery strategy, and to achieve it, you must have replication mechanisms in place. Depending on the use case, that can mean synchronous replication or asynchronous replication. 

In synchronous replication, data is written to the primary database. The data then is copied to one or more replica databases. The primary database waits for acknowledgment from the replica(s) before advancing the transaction to the application. This means that the data in the primary and replica databases is always in sync. (View a Percona whitepaper that shows synchronous replication in action.)

In asynchronous replication, data is written to the primary database, but the acknowledgment to the application occurs before the data is replicated to secondary databases. This results in a delay between the time data is written to the primary and when it appears in the replica(s). Real-time data consistency is not guaranteed.
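The difference in acknowledgment order is the whole story, and a tiny simulation makes it concrete. Databases are modeled as plain dictionaries and the async replication channel as a queue; all names here are illustrative:

```python
def write_sync(primary, replica, key, value):
    """Synchronous: replicate first, acknowledge after, so the
    replica is never behind at acknowledgment time."""
    primary[key] = value
    replica[key] = value
    return "ack"

def write_async(primary, replica, key, value, replication_queue):
    """Asynchronous: acknowledge immediately; replication happens later,
    so the replica may lag behind."""
    primary[key] = value
    replication_queue.append((key, value))
    return "ack"

p, r, queue = {}, {}, []
write_sync(p, r, "x", 1)
assert p == r                     # always in sync after the ack

write_async(p, r, "y", 2, queue)
print(r.get("y"))                 # -> None  (replica lags at ack time)
for k, v in queue:                # later, the replication queue drains
    r[k] = v
print(p == r)                     # -> True  (eventually consistent)
```

The lag window between the async acknowledgment and the queue draining is exactly the data you stand to lose if the primary dies in between.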

Point-in-time recovery

With point-in-time recovery, a database is restored to a specific moment in time rather than the time of the most recent backup. PITR is especially essential in situations when data integrity and consistency cannot be compromised. Financial systems and critical business applications are especially dependent on PITR. In addition to protecting data accuracy and limiting data loss, PITR can help with auditing and compliance requirements by providing a record of changes to the database.
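Mechanically, PITR means restoring the last base backup and then replaying logged changes up to the target timestamp. A minimal sketch under that model (timestamps and the log format are invented for illustration):

```python
def point_in_time_restore(base_backup, log, target_ts):
    """Restore the base backup, then replay logged changes up to
    and including target_ts."""
    db = dict(base_backup)
    for ts, key, value in sorted(log):
        if ts > target_ts:
            break          # stop before changes past the target moment
        db[key] = value
    return db

backup = {"balance": 100}
# Change log after the backup; the write at ts=3 is the one we want to undo.
log = [(1, "balance", 120), (2, "balance", 90), (3, "balance", 0)]

# Recover to just before the bad write at ts=3:
print(point_in_time_restore(backup, log, target_ts=2))  # -> {'balance': 90}
```

In MySQL terms, the base backup would come from a tool such as Percona XtraBackup and the log replay from the binary log; the principle is the same.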

Common components of DR and HA architectures

By definition, there are differences between high availability (HA) and disaster recovery (DR). High availability is focused on preventing downtime and ensuring that the database remains available; disaster recovery is focused on recovering from a catastrophic event and minimizing negative effects on the business. High availability typically involves redundant hardware, software, applications, and network components that can quickly take over if the primary component fails; disaster recovery typically involves regular backups, replication to a secondary site, and a clear recovery plan with steps to be taken in the event of a disaster. 

Although the emphasis and configurations may vary depending on whether the focus is on high availability or disaster recovery, there are shared components. Some of them include:

  • Redundant hardware: Both HA and DR use redundant hardware components, including servers and storage devices. Redundancy ensures that if one component fails, there is another to take its place. This helps minimize downtime.
  • Clustering: In HA, clustering helps ensure that there are redundant database servers; if one fails, another can take over. This minimizes downtime during disasters or hardware failures. For disaster recovery, clustering can be used to maintain a synchronized copy of the database in a different location. Database replication, log shipping, or synchronous data mirroring can be used for DR purposes.
  • Load balancing: Load balancers distribute traffic evenly across multiple servers or data centers. In HA, load balancers help ensure that no single server is overwhelmed. In DR, load balancers route traffic to the secondary data center when a failover occurs.
  • Backup systems: Both HA and DR architectures have backup systems in place. HA setups tend to use backup servers within the same data center. DR setups have backup data centers in different locations.
  • Monitoring and alerting: Continuous monitoring of system health and performance is essential for both HA and DR. Automated alerts are set up to notify administrators of any issues that require attention.

What’s at stake and what to do about it

Now that we’ve taken a deeper dive into the components of a backup and disaster recovery solution, as well as a look at high availability, let’s expand on what backup and disaster recovery solutions are used for. We’ll also talk about what can happen if you don’t have a solid plan in place, if you take shortcuts, or if you turn it all over to the wrong vendor.

Data archiving and retention matter

We’ve discussed the essential nature of backup and recovery in business continuity. Relatedly, by separating historical data from operational data, archiving helps you manage data growth, maintain compliance, and optimize backup processes. Likewise, establishing clear retention policies for both backups and archives is crucial to balancing data recovery needs with data management efficiency and compliance requirements.

And the essential nature of compliance can’t be emphasized enough. Failure to adhere to legal requirements can result in monetary and even criminal penalties, reputational damage, and loss of data integrity. Here are some of those regulations:

General Data Protection Regulation (GDPR): In addition to stipulating that organizations must have a lawful basis for processing personal data, this regulation includes guidelines for data retention and erasure policies.

Sarbanes-Oxley Act: SOX, a U.S. federal law, requires companies to have database disaster recovery and business continuity plans in place. The purpose is to ensure the availability and integrity of financial data amid unexpected events.

Payment Card Industry Data Security Standard: The PCI DSS mandates that organizations must regularly back up data, including critical payment card data, so that data availability and integrity are maintained. The PCI DSS also prescribes steps for responding to data breaches and other security incidents, including how to restore services and data after a disaster. 

California Consumer Privacy Act: Similar to GDPR, CCPA also includes mandates concerning data retention and erasure policies.

Regional regulations: Depending on your location and who your customers are, your organization might have to adhere to privacy, archiving, and retention mandates as spelled out in HIPAA (healthcare), FERPA (education), PIPEDA (Canada), and other regulations.

Examining cloud-based backup solutions

Benefits related to accessibility, automation, scalability, and security might inspire you to go with a cloud service for database backup and disaster recovery. If this is the direction you choose, carefully consider the long-term costs, data security concerns, and potential vendor lock-in. Find a partner that will answer any questions about such concerns — and be as certain as possible that vendor lock-in is not on the horizon.

While cloud services can be cost-effective initially, long-term costs can escalate if your needs for data storage grow significantly. Relatedly, transferring large amounts of data into or out of the cloud can be expensive, especially for companies with limited bandwidth. Further, there are data compliance and security concerns that could matter a lot more to you than to the vendor. Whereas the vendor might make promises, the responsibility and potential repercussions are all on you. 

Safe and reliable cloud services exist, but read the fine print and ask those questions.

Never scrimp on security measures

Here are some of the best practices for helping ensure that data remains secure during backup and recovery:

  • Encryption (which also could have appeared in the Essential Elements section) is a must-have component of database backup and recovery strategies. Using algorithms and keys to make data unreadable, encryption helps safeguard information during backup and recovery. Even if backup files fall into the wrong hands, you’re safe. Encryption also helps you adhere to strict data-protection regulations (GDPR, Sarbanes-Oxley Act, PCI DSS, CCPA, HIPAA, etc.).
  • Access control is the process of managing who can access a database and what actions they can perform. In backup and recovery, access control helps prevent unauthorized access to sensitive backup files and database recovery mechanisms.
  • Backup authorization and recovery authorization are the processes of determining whether a user or process has the permissions to perform a specific action. Authorization helps ensure that only authorized entities can initiate backup and recovery operations.
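The authorization check described in the last bullet reduces to a grant lookup: an action proceeds only if the requesting role has been granted it. The role table below is hypothetical; a real deployment would rely on the database's own privilege system (for example, MySQL's BACKUP_ADMIN privilege):

```python
# Hypothetical grant table mapping roles to permitted actions.
GRANTS = {
    "backup_operator": {"backup"},
    "dba": {"backup", "restore"},
    "analyst": set(),
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the role has been granted it."""
    return action in GRANTS.get(role, set())

print(authorize("backup_operator", "backup"))   # -> True
print(authorize("backup_operator", "restore"))  # -> False
print(authorize("intruder", "restore"))         # -> False (unknown role)
```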

Be proactive — set up monitoring and alerting

The coal miners of yesteryear carried caged canaries deep into mine tunnels. If carbon monoxide or other dangerous gasses were present, the gasses would kill the canary, signaling the miners to leave the tunnels immediately.

Thankfully, with database backup and disaster recovery, watching for warning signs can be a lot more scientific — and foolproof. Instead of relying on a wing and a prayer, an effective monitoring and alerting solution can rely on:

Thresholds: Thresholds for various metrics, such as backup completion time, replication lag, and resource availability, are defined. When those thresholds are reached, alerts are triggered.
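A threshold check is simple to express in code: compare each metric against its limit and emit an alert for every crossing. The threshold values and metric names below are made-up examples, not recommendations:

```python
# Hypothetical thresholds for the metrics mentioned above.
THRESHOLDS = {
    "backup_completion_minutes": 60,
    "replication_lag_seconds": 30,
    "free_storage_percent": 20,   # alert when free space drops BELOW this
}

def check_metrics(metrics):
    """Return an alert for every metric that crosses its threshold."""
    alerts = []
    if metrics["backup_completion_minutes"] > THRESHOLDS["backup_completion_minutes"]:
        alerts.append("backup is taking too long")
    if metrics["replication_lag_seconds"] > THRESHOLDS["replication_lag_seconds"]:
        alerts.append("replica is lagging")
    if metrics["free_storage_percent"] < THRESHOLDS["free_storage_percent"]:
        alerts.append("low storage: trigger automated storage increase")
    return alerts

print(check_metrics({"backup_completion_minutes": 75,
                     "replication_lag_seconds": 5,
                     "free_storage_percent": 12}))
# -> ['backup is taking too long', 'low storage: trigger automated storage increase']
```

The returned alerts would then feed the notification channels and automated responses described below.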

Notification channels for real-time alerts: With the proper configuration, the appropriate personnel are contacted promptly via automated email, text, chat, and other channels when the previously mentioned thresholds are hit. Such notification should include an escalation process in which a different support avenue or person is contacted if the alert is not acknowledged or resolved within a predetermined amount of time.

Automated storage increase: There should be automation — tied into the alerts — in which storage space is increased when it reaches a predefined threshold. This will help prevent backup failures.

Detailed logs: It’s important to maintain logs of all monitoring activities and alerts. Then, you have the information to generate reports for identifying trends and areas of improvement. 

With the right monitoring system in place, you can avoid losses. You also can spot critical performance issues faster, understand the root cause of incidents better, and troubleshoot them more efficiently moving forward.

Recovering from different scenarios

In database backup and disaster recovery planning, clear steps for dealing with hardware failure must be in place. Those steps should include:

  • Identification: The first step in recovering from a hardware failure is identifying the affected hardware component. This could be a hard drive, a server, or even an entire data center. Monitoring tools and systems can help detect such failures and trigger automated alerts described in the previous section.
  • Isolation and remediation: Once the failure is identified, IT staff or automated systems should work to isolate the affected hardware and restore it to a functional state as soon as possible. This may involve taking a server offline, rerouting network traffic, or replacing a failed disk drive.
  • Restoration: With the hardware issue resolved, the next step is to restore the database services. This involves restarting database servers, restoring network connections, and ensuring that the database management system is operational. The previously mentioned automated failover mechanisms and load balancers can help minimize downtime during this phase.
  • Recovery of data from backups: This might be a full backup or a combination of full and incremental backups. Use backup software to restore the database to its clean state at the time of the last good backup. This might involve copying data from backup storage to the production database server.

Perhaps we should call this the All Things Ominous Section because now we’ll look at restoring data after a cyberattack. (Of course, the dead canary was pretty dark.) 

But really, this section is about bouncing back. With the right preparedness, a cyberattack doesn’t have to be that death knell to your database system and business. In fact, you can beat down an incursion and come back stronger. So let’s look at planning, preparedness, and a systematic approach to minimizing downtime and data loss while ensuring the security and integrity of your systems and information. Here are key steps in overcoming a cyberattack:

Preparation: Conduct regular backups that ensure you have recent, clean copies of your data to restore. Keep backup copies in offsite or isolated locations, perhaps in the cloud. This safeguards data from physical damage or compromise in the event of an attack.

Documentation: Maintain thorough documentation of your database configurations, schemas, and data structures. This documentation will be invaluable during the restoration process.

Response plan: Develop a clear incident response plan that outlines roles, responsibilities, and steps to take in the event of a cyberattack.

Detection and isolation: As soon as you detect an attack, quickly identify the scope and nature. Determine which databases or systems are affected. Then, quarantine or disconnect affected systems from the network to prevent the spread of malware and further data corruption.

Damage assessment: Evaluate the extent of data loss or corruption. This assessment will help determine the appropriate restoration strategy.

Culprit identification: So that you can patch vulnerabilities and prevent future attacks, determine how the attack happened.

Data restoration: Use your latest clean backup to restore the affected database. Ensure the backup is from a time before the attack occurred. In cases where the attack compromised data, you might have to perform an incremental restoration, applying incremental backups to bring the data up to date.

Security updates and auditing: Immediately patch and update the database system and associated software to address vulnerabilities that were exploited. To prevent future attacks, implement intrusion detection systems (IDS) and access controls.

Data consistency and integrity

You must maintain data accuracy before, during, and after a disaster or attack. By doing so, your organization can recover quickly and reliably. In addition to replication, monitoring and alerts, encryption, auditing, and other activities already mentioned, here are some other best practices for maintaining data accuracy and addressing data corruption:

  • Perform regular backup testing, full and incremental, to verify data integrity.
  • Perform automated backup verification to check the integrity of backup files and ensure they are not corrupted.
  • Implement version control for database backups, which gives you a history of changes and lets you restore to a specific point in time if necessary.
  • Always apply the latest security patches and updates to your database management system and backup software to prevent vulnerabilities.
  • Use IDS and security information and event management (SIEM) tools to monitor network activity and detect suspicious activity.
  • Develop and regularly update a comprehensive disaster recovery plan that outlines roles, responsibilities, and procedures for data recovery in various scenarios.
  • Consider having outside experts assess your data protection and recovery plans and provide recommendations for improvement.
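
The automated backup verification mentioned above can be as simple as recording and re-checking checksums of the backup files. Below is a minimal shell sketch under the assumption that backups are plain files in a directory (the `*.tar.gz` layout and function name are illustrative, not a Percona tool):

```shell
# Record a SHA-256 checksum for each new backup file, then re-verify all
# recorded checksums on every run; a non-zero exit signals corruption.
# The *.tar.gz layout is an assumption for illustration only.
verify_backups() (
  cd "$1" || return 1
  for f in *.tar.gz; do
    [ -e "$f" ] || break                      # no backups present yet
    [ -f "$f.sha256" ] || sha256sum "$f" > "$f.sha256"
  done
  sha256sum --check --quiet ./*.sha256        # fails if any file changed
)
```

Run from cron after every backup; a failing exit code can feed your alerting so silent backup corruption is caught long before a restore is needed.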

Scaling backup and recovery processes

Large and enterprise databases can hold massive amounts of data. Though it might seem obvious, it's important that you aren't caught off-guard, and that means having significant storage capacity and efficient data transfer mechanisms in place.

Enterprises also typically require frequent backups, ranging from daily to hourly, depending on their recovery point objectives (RPOs). Automated backup processes are essential in ensuring data consistency and minimizing downtime. Techniques like online backups and snapshot-based backups can help ensure databases remain accessible during the backup process.

Relatedly, transferring large database backups over the network can strain available bandwidth. So, enterprises might need dedicated high-speed connections or WAN optimization solutions to mitigate network congestion during backup operations. To reduce storage and bandwidth requirements, compression and deduplication techniques are often applied to the backup data. This involves identifying redundant data and storing only the unique blocks.
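
As a rough illustration of compressing a backup before transfer, the sketch below tars and gzips a backup directory (gzip stands in for whatever codec your stack supports; the helper name and paths are hypothetical):

```shell
# Compress a backup directory into a single archive before shipping it
# over the network; redundant row data typically compresses very well.
compress_backup() {
  local src="$1" out="$2"
  tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
}
```

For repetitive database exports, the compressed archive is often a small fraction of the original size, directly reducing both storage and bandwidth costs.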

Planning for business continuity

In real estate, they say “location, location, location.” In database management, we say: Proactivity. Proactivity. Proactivity. (OK, not so catchy and maybe we don’t say it in those words, but we think it.) And here, we’ll say it a fourth time in relation to protecting data and avoiding downtime: Proactivity. 

Any business or organization that relies on data (meaning just about all of them) must be proactive if they’re to maintain business continuity amid a disaster, power outage, cyberattack, or other event that could threaten data processes.

We covered a lot about components, tools, and best technical practices, so here we’ll key in on the actual planning parts that should be included in a business continuity plan. Since replication and redundancy are such big elements of database management, let’s stick with the redundancy theme and call these proactive activities:

Risk assessment: Identify potential risks and threats that could negatively affect your database systems. They might include natural disasters (hurricanes, tornadoes, earthquakes, flooding, blizzards, etc.), cyberattacks (data breaches, malicious bugs, ransomware, etc.), human error, and hardware failure. 

Impact assessment: Evaluate how each identified risk or threat could negatively affect your database systems.

Recovery objectives: Establish a recovery time objective (RTO) to determine how quickly your organization must recover its database systems after a disruption (the maximum allowable downtime). Set a recovery point objective (RPO) to determine the maximum amount of data loss your organization can tolerate; this determines how frequently you should back up your data.
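
For example, an RPO of one hour implies taking a backup at least every hour. A hedged crontab sketch (the tool, schedule, and paths are illustrative assumptions, not a prescription):

```shell
# Illustrative crontab: weekly full backup (Sunday 02:00) plus hourly
# incrementals to meet a one-hour RPO; adjust tool and paths to your stack.
0 2 * * 0 xtrabackup --backup --target-dir=/backups/full
0 * * * * xtrabackup --backup --incremental-basedir=/backups/full --target-dir=/backups/inc-$(date +\%H)
```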

Disaster recovery plan (DRP): All the components in this section are part of the DRP, which outlines the steps to be taken in the event of a disaster or cyberattack. The DRP should include roles and responsibilities, communication procedures, and recovery procedures. You should test the DRP regularly through simulations to ensure it works effectively.

Communication plan: Develop a communication plan for keeping employees, customers, and stakeholders informed during a disaster or cyberattack.

Financial plan: Allocate budget resources for disaster recovery and business continuity initiatives to ensure they are adequately funded.

Additional tools and outside expertise: A business continuity plan isn't something you just throw together, and you might not have the tools and expertise on staff to get it done. Consider ready-to-go backup software and whether you can design, implement, and maintain the business continuity plan on your own. If not, consider outside help, but beware of proprietary licensing and the pitfalls of vendor lock-in.

Keep learning

Now that you’ve got the basics down, level up your knowledge with our on-demand presentation: The Many Ways To Copy Your Database. In it, Nicolai Plum of Booking.com discusses the best ways to copy your database – from logical data dump and file copying through native cloning and backup tools to advanced scale-out techniques.

 

Watch The Many Ways To Copy Your Database

Aug
23
2023
--

The Deprecation of qpress/QuickLZ Compression Algorithm


To reduce the backup size, save storage space, and speed up the backup and restore process, you can compress a backup with Percona XtraBackup. The XtraBackup --compress option makes XtraBackup compress all output data, including the transaction log file and metadata files, with one of the supported compression algorithms. To decompress all files in a backup made with the --compress option, use the --decompress option.

Version changes

  • With Percona XtraBackup 8.0.34-29, qpress/QuickLZ is no longer supported for compress operations. The Zstandard (ZSTD) compression algorithm is moved to General Availability. With this version, ZSTD becomes the default compression algorithm for the --compress option. The alternative compression algorithm is LZ4.

    To compress files using the ZSTD compression algorithm, use the --compress option:

    xtrabackup --backup --compress --target-dir=/data/backup

    To compress files using the LZ4 compression algorithm, set the --compress option to LZ4:

    xtrabackup --backup --compress=lz4 --target-dir=/data/backup

    To decompress all files in a backup, use the --decompress option:

    xtrabackup --decompress --target-dir=/data/compressed/

    To decompress backups taken by older versions of Percona XtraBackup that used a QuickLZ compression algorithm, the --decompress option still supports qpress for backward compatibility.

  • Up to Percona XtraBackup 8.0.33-28, the --compress option uses a QuickLZ compression algorithm by default. When using --compress, the resulting files have the qpress (*.qp) archive format. 

    To compress files using the QuickLZ compression algorithm, use the --compress option:

    xtrabackup --backup --compress --target-dir=/data/backup

    Every *.qp file produced by XtraBackup is a one-file qpress archive. You can extract the contents of these files with the --decompress option that supports the qpress file archiver.

  • Starting with Percona XtraBackup 8.0.31-24, the use of qpress/QuickLZ to compress backups is deprecated. Percona recommends using either LZ4 or ZSTD compression algorithms. 
  • Percona XtraBackup 8.0.30-23 adds ZSTD compression algorithm in tech preview. ZSTD is a fast lossless compression algorithm that targets real-time compression scenarios and better compression ratios. 

    To compress files using the ZSTD compression algorithm, set the --compress option to zstd.

    xtrabackup --backup --compress=zstd --target-dir=/data/backup

    The --compress=zstd option produces *.zst files. You can extract the contents of these files with the --decompress option.

    Also, you can specify the ZSTD compression level with the --compress-zstd-level(=#) option as follows:

    xtrabackup --backup --compress --compress-zstd-level=1 --target-dir=/data/backup

 

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows.

 

Download Percona XtraBackup

Jun
06
2023
--

Full vs. Incremental vs. Differential Backups: Comparing Backup Types


This blog was originally published in January 2012 and was updated in June 2023.

In today’s data-driven world, the protection and reliability of our valuable information is vital. The potential impact of data loss or corruption on individuals and organizations cannot be understated. Therefore, establishing a strong backup strategy is essential to ensure data security and minimize potential disruptions.

When it comes to backups, there are multiple types available, each with its own advantages and limitations. In this blog post, we will explore the realm of backups and conduct a comparative analysis of three backup types: full, incremental, and differential backups.

Looking for a MongoDB backup solution?

Get started today with Percona Backup for MongoDB.

What is a full backup?

A full backup is a comprehensive data backup strategy that involves creating a complete, exact copy of all data and files in a database. Depending on an organization’s specific requirements, legal or otherwise, full backups are typically performed periodically, such as daily, weekly, or monthly.

Full backups are most suitable for critical systems or sensitive data that cannot afford any loss or downtime, allowing for a complete recovery of data in the event of data corruption, hacking, system failures, or natural disasters. In addition, they are used for restoring or moving data to a new system.

Full backups also provide a baseline for subsequent incremental or differential backups, which we will discuss a little later.

In order to implement effective full backups of your database, you must follow some best practices. This includes establishing a regular backup schedule, regular testing to ensure data integrity, storing backups securely and offsite/in multiple locations, and documenting backup procedures.

Advantages of full backups

Full backups provide several benefits in terms of safeguarding data and facilitating recovery processes, as they encompass thorough data replication and enable complete system restoration during critical situations.

  • Comprehensive Data Protection: A big advantage of full backups is that they capture every bit of information, including the operating system, applications, databases, and user files, ensuring that all critical data is replicated and protected against loss or corruption. In the event of system failures or catastrophic events, the ability to restore the entire system from a full backup ensures minimal data loss.
  • Simplicity and Ease of Restoration: Since full backups capture a complete copy of all data and files, it simplifies the recovery and restoration of a database. This allows for faster and more convenient restoration, minimizing downtime and allowing organizations to resume normal operations quickly.
  • Independent Backup Copies: The self-contained nature of full backups, each capturing a complete copy of all data and files, allows each backup set to be accessed and restored independently. This provides flexibility in data recovery and allows organizations to effectively retrieve specific versions or data sets as needed.
  • Fast Recovery: Restoring a system from a full backup is generally faster when compared to other backup types, allowing for minimal downtime and fast resumption of operations.

Disadvantages of full backups

While full backups offer numerous advantages, it is important to consider their potential disadvantages in order to make informed decisions regarding backup strategies.

  • Increased Storage Requirements: One of the disadvantages of full backups is the increased storage requirements due to the fact that they capture a complete copy of all data and files. As the amount of data grows, organizations may need to allocate additional storage resources to accommodate the larger backup sets, leading to higher costs.
  • Longer Backup Time: Full backups have longer backup times due to the comprehensive nature of capturing all data and files, which can be particularly time-consuming when dealing with large datasets.
  • Network Bandwidth Utilization: This is another potential disadvantage when it comes to full backups. Since they capture all of the data and files, they require significant network resources to transfer backup sets over the network, resulting in increased network congestion and potential performance issues.
  • Frequent Backup Cycles: An aspect to consider with full backups is the frequency of necessary backup cycles. Full backups can be resource-intensive and time-consuming, and organizations may need to allocate more time and storage resources to accommodate regular full backups.

Learn how to restore a single InnoDB table from a full backup after accidentally dropping it in this blog.

What is an incremental backup?

Incremental backups capture and store only any changes made to the data since the last backup (full or incremental). This results in smaller backup sets, reducing storage requirements, time to take the backup, and network usage, as compared to full backups. These backups are particularly suitable for those environments where data changes frequently.

To implement incremental backups effectively, it is important to establish a baseline with a full backup that serves as the initial reference for subsequent incremental backups. To ensure data integrity and recoverability, DBAs should test incremental backups regularly, verifying the integrity of backup sets and performing test restores.

When deciding where to store incremental backups, consider offsite storage for protection against data loss or damage. Regular scheduling of incremental backups is essential to capture all data changes and maintain a reliable backup chain.
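
To make the backup chain idea concrete, here is a toy shell simulation using diff/patch: each "incremental" stores only the changes since the previous backup, and a restore must replay the full backup plus every incremental, in order. This is a conceptual sketch with hypothetical helper names, not how database incrementals are actually implemented.

```shell
# Toy model of an incremental backup chain: full copy once, then
# store only diffs against the previous state; restore replays them all.
take_full() {
  cp "$1" "$2/full"
  cp "$1" "$2/.last"                          # remember last backed-up state
}
take_incremental() {
  diff -u "$2/.last" "$1" > "$2/inc-$3" || true   # changes since last backup
  cp "$1" "$2/.last"
}
restore() {
  cp "$1/full" "$2"
  for inc in "$1"/inc-*; do                   # every incremental, in order
    patch -s "$2" "$inc"
  done
}
```

Losing any single `inc-*` file makes everything after it unrecoverable, which is exactly the "dependency on full backup" and "longer backup chain" drawback discussed below in practice.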

Advantages of incremental backups

In this section, we will explore the benefits and advantages of utilizing incremental backups for data protection which makes them a valuable database backup solution.

  • Reduced Backup Time: This is a big advantage of incremental backups. Since only the changes made since the last backup are captured, the backup process is far faster as compared to full backups.
  • Lower Storage Requirements: Incremental backups help lower storage needs by only storing the changes made to data since the last backup rather than saving the entire dataset. This optimizes storage space and allows for more frequent backups.
  • Faster Recovery: Faster data recovery is a key advantage of incremental backups. Because this process allows for selective restores of only specific changes since the last backup, it reduces the time organizations need for data recovery.
  • Bandwidth Optimization: Because a smaller amount of data needs to be transferred during an incremental backup, it can reduce network resources and congestion. For issues that may arise during peak network usage, this can prove to be valuable in keeping applications performing efficiently.

Disadvantages of incremental backups

Although incremental backups do provide several advantages over other backup strategies, it is essential to be aware of their potential drawbacks to make well-informed decisions that work for your requirements.

  • Longer Restore Time: The restore process for incremental backups may take longer because it could involve having to sequentially apply multiple incremental backups to restore any missing or corrupted data that spans several backup versions.
  • Increased Complexity: Implementing incremental backups can introduce increased complexity compared to other backup strategies, as DBAs always need to manage and maintain the backup chain. This requires careful tracking and organization to ensure data integrity and availability at all times.
  • Dependency on Full Backup: The restoration of data from an incremental backup strategy still relies on the availability of an initial full backup, as well as all subsequent incremental backups.
  • Longer Backup Chain: Incremental backups create a longer backup chain as each backup captures the changes since the last backup, including all incremental backups. Because each of these backup sets must be managed and stored, the length of the backup chain grows over time, requiring adequate storage capacity that may increase costs.

Want to learn more about incremental backups in MySQL using page tracking? Check out this blog!

What is a differential backup?

A differential backup backs up and keeps all the changes made since the last full backup was taken. Unlike the incremental backups discussed above, which capture only the changes made since the last backup, differential backups result in larger backup sets over time.

Differential backups are ideal when organizations must find a balance between backup size and restoration time. This strategy is optimal for quick data recovery, as the process involves applying the latest full backup followed by the most recent differential backup. It also simplifies data restoration compared to incremental backups, which require multiple backup sets to be used.

In order to optimally use differential backups, it’s vital to establish a regular backup schedule, test and verify data integrity constantly, store all backups securely and maintain up-to-date documentation of backup procedures.
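
The key contrast with incrementals can be shown with a toy diff/patch simulation (conceptual only; the helper names are hypothetical): each differential diffs against the full backup, so restoring needs just the full backup and the latest differential.

```shell
# Toy model of differential backups: every differential is taken against
# the full backup, so only the most recent one is needed to restore.
take_full()         { cp "$1" "$2/full"; }
take_differential() { diff -u "$2/full" "$1" > "$2/diff-$3" || true; }
restore_latest() {
  cp "$1/full" "$2"
  latest=$(ls "$1"/diff-* | sort | tail -n 1)   # most recent differential
  patch -s "$2" "$latest"
}
```

Note that each successive `diff-*` file contains all changes since the full backup, so the files grow over time; that is the trade-off for the simpler two-step restore.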

Advantages of differential backups

Differential backups provide significant benefits in terms of restore efficiency and storage space optimization, making them a valuable choice for organizations looking for a backup strategy.

  • Efficient Restore Process: Differential backups offer an efficient restore process by requiring only the latest full backup and the most recent differential backup to restore the data. This results in a quicker data restoration than incremental backups, which involve multiple backup files or sets.
  • Storage Space Optimization: Differential backups optimize storage space usage by capturing and storing only the changes made since the last full backup, eliminating the need for multiple backup files to restore data. With one differential backup containing all changes since the full backup, storage requirements are reduced while still ensuring efficient data recovery.
  • Simplified Data Recovery: The data recovery process for differential backups is simplified and streamlined as it requires only the latest full and differential backup, making it more efficient than incremental backups.
  • Faster Backup Times: A differential backup captures all cumulative changes since the last full backup in a single operation, so it completes much faster than a full backup, though each run re-copies everything changed since that full backup.

Disadvantages of differential backups

In this section, we examine the potential disadvantages and drawbacks of utilizing differential backups for data protection.

  • Increasing Backup Size Over Time: One drawback of differential backups is they often increase in size over time. Since each differential backup captures all changes made since the last full backup, the backup files gradually become larger than incremental backups.
  • Longer Restore Times: Restoring from a differential backup takes longer than restoring from a full backup alone, because it involves applying the latest full backup and then the most recent differential backup.
  • Higher Storage Requirements: Due to the accumulation of changes since the last full backup, differential backups can require higher storage requirements because of larger backup files over time.
  • Potential Data Redundancy: Differential backups may result in data redundancy, as each subsequent backup includes all changes since the last full backup, leading to the possibility of the same data being backed up several times.

Choosing between incremental vs. differential backups

When choosing between incremental and differential backups, you need to understand what you are trying to accomplish with your backup strategy. Consider factors such as the size of your data, how long backups take, the efficiency of your restore process, your storage requirements, and the downtime you can tolerate while restoring data.

One backup strategy may be better when trying to minimize data size, and another may be better when your focus has to be on the ease and quickness of data recovery. With that in mind, let’s take a look at a few factors that should help you decide what strategy would work best for your needs and requirements.

Performance Considerations:

When considering the performance abilities of these two backup strategies, incremental backups are generally faster — but the restoration process can be slower, as previously discussed. Alternatively, differential backups generally take longer, but data restores are often faster. Consider which is more important to match your specific needs.

Storage Requirements:

When evaluating the storage needs of incremental vs. differential backups, keep these factors in mind. Incremental backups generally have reduced storage requirements since only the changes since the last backup are captured. Differential backups, on the other hand, grow in size as each backup captures all changes since the last full backup. Storage costs can rise quickly, so be sure to consider your options when making your choice.

Recovery Time:

Incremental backups tend to need more time to restore than differential backups, as they require sequentially applying multiple backup files to restore the data. Differential backups, in contrast, make for faster restores as they only require the latest full backup and differential backup for the restore.

Choosing the right backup type for your needs

Having the proper backup strategy in place for your organization is vital to minimize the potential business disruption of data loss, security breaches, or corruption. And with multiple types of backups available — full, incremental, and differential — it’s important to know how they work, their advantages and disadvantages, and their use cases.

And depending on how much data you need to back up, the storage requirements for that data, how long the data takes to back up, as well as the restore efficiency of each method, it’s important to choose the appropriate strategy for your specific needs.

Percona XtraBackup is a free, open source, and complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows. Percona XtraBackup is capable of taking incremental backups, and when you specify the base backup as the previous incremental or full backup, you can easily make differential backups.
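
As an illustration of that last point, here is a hedged command sketch (the paths are examples, and the commands need a running MySQL server, so treat this as a recipe rather than something to paste): an incremental chain bases each backup on the previous one, while a differential simply keeps basing on the full backup.

```shell
# Full backup, then incrementals chained off each other:
xtrabackup --backup --target-dir=/backups/full
xtrabackup --backup --incremental-basedir=/backups/full --target-dir=/backups/inc1
xtrabackup --backup --incremental-basedir=/backups/inc1 --target-dir=/backups/inc2

# Differential-style: always use the full backup as the base:
xtrabackup --backup --incremental-basedir=/backups/full --target-dir=/backups/diff1
```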

Discover why Percona XtraBackup has been downloaded 3,500,000 times and counting.

 

Contact us to learn more!

Jan
24
2023
--

Backup Databases on Kubernetes With VolumeSnapshots


Databases on Kubernetes continue their rising trend. We see the growing adoption of our Percona Kubernetes Operators and the demand to migrate workloads to the cloud-native platform. Our Operators provide built-in backup and restore capabilities, but some users are still looking for old-fashioned ways, like storage-level snapshots (i.e., AWS EBS Snapshots).

In this blog post, you will learn:

  1. How to back up and restore from storage snapshots using Percona Operators
  2. What the risks and limitations are of such backups

Overview

Volume Snapshots went GA in Kubernetes 1.20. Both your storage and your Container Storage Interface (CSI) driver must support snapshots. All major cloud providers support them but might require some steps to enable them. For example, for GKE, you must create a VolumeSnapshotClass resource first.

At a high level, snapshotting on Kubernetes works as follows: just as a PersistentVolume represents the real storage volume, a VolumeSnapshot is the Kubernetes resource that represents the volume snapshot in the cloud.

Getting ready for backups

First, we need to be sure that VolumeSnapshots are supported. For the major clouds, check your provider's CSI snapshot documentation.

Once you have the CSI configured and a VolumeSnapshotClass in place, proceed to create a backup.

Take the backup

Identify the PersistentVolumeClaims (PVCs) that you want to snapshot. For example, for my MongoDB cluster, I have six PVCs: three replica set nodes and three config server nodes.

$ kubectl get pvc
NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongod-data-my-cluster-name-cfg-0   Bound    pvc-c9fb5afa-1fc9-41f9-88f3-4ed457f88e58   3Gi        RWO            standard-rwo   78m
mongod-data-my-cluster-name-cfg-1   Bound    pvc-b9253264-f79f-4fd0-8496-1d88105d84e5   3Gi        RWO            standard-rwo   77m
mongod-data-my-cluster-name-cfg-2   Bound    pvc-5d462005-4015-47ad-9269-c205b7a3dfcb   3Gi        RWO            standard-rwo   76m
mongod-data-my-cluster-name-rs0-0   Bound    pvc-410acf85-36ad-4bfc-a838-f311f9dfd40b   3Gi        RWO            standard-rwo   78m
mongod-data-my-cluster-name-rs0-1   Bound    pvc-a621dd8a-a671-4a35-bb3b-3f386550c101   3Gi        RWO            standard-rwo   77m
mongod-data-my-cluster-name-rs0-2   Bound    pvc-484bb835-0e2d-4a40-b5a3-1ba340ec0567   3Gi        RWO            standard-rwo   76m

Each PVC will have its own VolumeSnapshot. Example for mongod-data-my-cluster-name-cfg-0:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongod-data-my-cluster-name-cfg-0-snap
spec:
  volumeSnapshotClassName: gke-snapshotclass
  source:
    persistentVolumeClaimName: mongod-data-my-cluster-name-cfg-0

I have listed all my VolumeSnapshots objects in one YAML manifest here.

$ kubectl apply -f https://raw.githubusercontent.com/spron-in/blog-data/master/volume-snapshots/mongo-volumesnapshots.yaml
volumesnapshot.snapshot.storage.k8s.io/mongod-data-my-cluster-name-cfg-0-snap created
volumesnapshot.snapshot.storage.k8s.io/mongod-data-my-cluster-name-cfg-1-snap created
volumesnapshot.snapshot.storage.k8s.io/mongod-data-my-cluster-name-cfg-2-snap created
volumesnapshot.snapshot.storage.k8s.io/mongod-data-my-cluster-name-rs0-0-snap created
volumesnapshot.snapshot.storage.k8s.io/mongod-data-my-cluster-name-rs0-1-snap created
volumesnapshot.snapshot.storage.k8s.io/mongod-data-my-cluster-name-rs0-2-snap created

VolumeSnapshotContent is created and bound to every VolumeSnapshot resource. Its status tells you the name of the snapshot in the cloud and whether the snapshot is ready:

$ kubectl get volumesnapshotcontent snapcontent-0e67c3b5-551f-495b-b775-09d026ea3c8f -o yaml
…
status:
  creationTime: 1673260161919000000
  readyToUse: true
  restoreSize: 3221225472
  snapshotHandle: projects/percona-project/global/snapshots/snapshot-0e67c3b5-551f-495b-b775-09d026ea3c8f

  • snapshot-0e67c3b5-551f-495b-b775-09d026ea3c8f is the snapshot I have in GCP for the volume.
  • readyToUse: true – indicates that the snapshot is ready
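
Instead of eyeballing the YAML, you can script the readiness check. A small sketch, assuming your kubectl context points at the right namespace (the loop and field path reflect the VolumeSnapshot API shown above):

```shell
# Poll every VolumeSnapshot in the namespace until readyToUse is true.
for snap in $(kubectl get volumesnapshot -o name); do
  until [ "$(kubectl get "$snap" -o jsonpath='{.status.readyToUse}')" = "true" ]; do
    echo "waiting for $snap"; sleep 5
  done
done
```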

Restore

The restoration process, in a nutshell, looks as follows:

  1. Create persistent volumes using the snapshots. The volume names must match the naming convention that the Operator uses.
  2. Provision the cluster.

As with any restore, the cluster must have its Secrets in place: TLS certificates and users.

You can use this restoration process to clone existing clusters as well, just make sure you change the cluster, PVCs, and Secret names.

Create persistent volumes from the snapshots. It is the same as creating a regular PersistentVolumeClaim, but with a dataSource section that points to the snapshot:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongod-data-my-cluster-name-rs0-0
spec:
  dataSource:
    name: mongod-data-my-cluster-name-rs0-0-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: standard-rwo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

$ kubectl apply -f https://raw.githubusercontent.com/spron-in/blog-data/master/volume-snapshots/mongo-pvc-restore.yaml
persistentvolumeclaim/mongod-data-my-cluster-name-cfg-0 created
persistentvolumeclaim/mongod-data-my-cluster-name-cfg-1 created
persistentvolumeclaim/mongod-data-my-cluster-name-cfg-2 created
persistentvolumeclaim/mongod-data-my-cluster-name-rs0-0 created
persistentvolumeclaim/mongod-data-my-cluster-name-rs0-1 created
persistentvolumeclaim/mongod-data-my-cluster-name-rs0-2 created

Once done, spin up the cluster as usual. The volumes you created earlier will be used automatically. Restoration is done.

Risks and limitations

Storage support

Both storage and the storage plugin in Kubernetes must support volume snapshots. This limits the choices. Apart from public clouds, there are open source solutions like Ceph (rook.io for k8s) that can provide snapshotting capabilities.

Point-in-time recovery

Point-in-time recovery (PITR) allows you to reduce your Recovery Point Objective (RPO) by restoring or rolling back the database to a specific transaction or time.

Volume snapshots in the clouds store data in increments. The first snapshot holds all the data, and the following ones only store the changes. This significantly reduces your cloud bill. But snapshots cannot provide you with the same RPO as native database mechanisms.

Data consistency and corruption

Snapshots are not data-aware. When a snapshot is taken, numerous transactions and data modifications can happen. For example, heavy write activity and simultaneous compound index creation in MongoDB might lead to snapshot corruption. The biggest problem is that you will learn about data corruption during restoration.

Locking or freezing a filesystem before the snapshot would help to avoid such issues. Solutions like Velero or Veeam make the first steps towards data awareness and can create consistent snapshots by automating file system freezes or stopping replication.

Percona Services teams use various tools to automate the snapshot creation safely. Please contact us here to ensure data safety.

Cost

Public clouds store snapshots on cheap object storage but charge you extra for convenience. For example, an AWS EBS snapshot is priced at $0.05/GB, whereas S3 is only $0.023/GB. That is more than a 2x difference, which for giant data sets might significantly increase your bill.

Time to recover

It is not a risk or limitation but a common misconception I often see: recovery from snapshots takes only a few seconds. It does not. When you create an EBS volume from the snapshot, it takes a few seconds. But in reality, the volume you just created does not have any data. You can read more about the internals of EBS snapshots in this nice blog post.

Conclusion

Volume Snapshots on Kubernetes can be used for databases but come with certain limitations and risks. Data safety and consistency are the most important factors when choosing a backup solution. For Percona Operators, we strongly recommend using the built-in backup solutions, which guarantee data consistency and minimize your Recovery Time and Recovery Point Objectives.

Learn More About Percona Kubernetes Operators

Nov
25
2022
--

Percona Operator for MongoDB Backup and Restore on S3-Compatible Storage – Backblaze

Percona Operator for MongoDB Backblaze

One of the main features that I like about the Percona Operator for MongoDB is the integration with the Percona Backup for MongoDB (PBM) tool and the ability to back up/restore the database without manual intervention. The Operator allows backing up the DB to S3-compatible cloud storage, so you can use AWS, Azure, etc.

One of our customers asked about integrating Backblaze with the Operator for backup and restore purposes. I checked and found that Backblaze B2 is S3-compatible and provides a free account with 10GB of cloud storage, so I jumped into testing it with our Operator. I also saw in our forum that a few users are using Backblaze cloud storage. So I am making this blog post for everyone who wants to test or use Backblaze S3-compatible cloud storage with our Operator and PBM.

S3-compatible storage configuration

The Operator supports backup to S3-compatible storage. The steps for backing up to AWS or Azure Blob are given here, so you can try those as well. In this blog, let me focus on configuring B2 cloud storage as the backup location and restoring from it to another deployment.

Let’s configure the Percona Server for MongoDB (PSMDB) Sharded Cluster using the Operator (with minimal config as explained here). I have used PSMDB operator v1.12.0 and PBM 1.8.1 for the test below. You can sign up for the free account here – https://www.backblaze.com/b2/cloud-storage-b.html. Then log in to your account. You can first create a key pair to access the storage from your operator as follows in the “App Keys” tab:

Add Application Key GUI

 

Then you can create a bucket with your desired name and note down the S3-compatible storage’s details: the bucket name (shown in the picture below) and the endpointUrl the backup files will be sent to. The endpointUrl can be obtained from the provider, and the region is embedded in the prefix of the endpointUrl hostname.

Create Bucket
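Since the region is embedded in the endpoint hostname, you can extract it with a one-liner. A quick sketch, assuming the us-west-004 endpoint used in the backup configuration later in this post (substitute your own endpoint value):

```shell
# Derive the region from the Backblaze S3 endpoint hostname
# (the endpoint value here is an example; use your own)
ENDPOINT="https://s3.us-west-004.backblazeb2.com/"
REGION=$(printf '%s' "$ENDPOINT" | sed -E 's|^https://s3\.([^.]+)\.backblazeb2\.com/?$|\1|')
echo "$REGION"   # → us-west-004
```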

 

Deploy the cluster

Now let’s download the Operator from GitHub (I used v1.12.0) and configure the files for deploying the MongoDB sharded cluster. Here, I am using cr-minimal.yaml to deploy a very minimal setup: a single-member replica set for the shard, the config DB, and a mongos.

#using an alias for the kubectl command
$ alias "k=kubectl"
$ cd percona-server-mongodb-operator

# Add a backup section in the cr file as shown below. Use the appropriate values from your setup
$ cat deploy/cr-minimal.yaml
apiVersion: psmdb.percona.com/v1-12-0
kind: PerconaServerMongoDB
metadata:
  name: minimal-cluster
spec:
  crVersion: 1.12.0
  image: percona/percona-server-mongodb:5.0.7-6
  allowUnsafeConfigurations: true
  upgradeOptions:
    apply: 5.0-recommended
    schedule: "0 2 * * *"
  secrets:
    users: minimal-cluster
  replsets:
  - name: rs0
    size: 1
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  sharding:
    enabled: true
    configsvrReplSet:
      size: 1
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    mongos:
      size: 1

  backup:
    enabled: true
    image: percona/percona-backup-mongodb:1.8.1
    serviceAccountName: percona-server-mongodb-operator
    pitr:
      enabled: false
      compressionType: gzip
      compressionLevel: 6
    storages:
      s3-us-west:
        type: s3
        s3:
          bucket: psmdbbackupBlaze
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-004
          endpointUrl: https://s3.us-west-004.backblazeb2.com/
#          prefix: ""
#          uploadPartSize: 10485760
#          maxUploadParts: 10000
#          storageClass: STANDARD
#          insecureSkipTLSVerify: false

 

The backup-s3.yaml contains the key details to access the B2 cloud storage. Encode the Key ID and Access Key (retrieved from Backblaze as mentioned here) as follows to use inside the backup-s3.yaml file. The secret name my-cluster-name-backup-s3 should be unique; it is used to refer to this secret from the other yaml files:

# First use base64 to encode your keyid and access key
# (-n avoids encoding a trailing newline into the secret value):
$ echo -n "key-sample" | base64 --wrap=0
XXXX==
$ echo -n "access-key-sample" | base64 --wrap=0
XXXXYYZZ==

$ cat deploy/backup-s3.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-name-backup-s3
type: Opaque
data:
 AWS_ACCESS_KEY_ID: XXXX==
 AWS_SECRET_ACCESS_KEY: XXXXYYZZ==
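One pitfall worth noting: echo without -n appends a newline, and that newline gets encoded into the secret value, producing credentials that fail authentication. A quick round-trip check, assuming GNU coreutils base64 (the key value below is a made-up placeholder):

```shell
# Hypothetical keyID value -- replace with your own
KEY_ID="0041234567890abcdef000001"
ENCODED=$(printf '%s' "$KEY_ID" | base64 --wrap=0)   # same effect as: echo -n ... | base64 --wrap=0
DECODED=$(printf '%s' "$ENCODED" | base64 -d)        # decode to double-check the value
WITH_NEWLINE=$(echo "$KEY_ID" | base64 --wrap=0)     # plain echo encodes a trailing "\n" too
echo "$ENCODED"
```

If WITH_NEWLINE and ENCODED differ (they do), the secret built without -n would be wrong.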

 

Then deploy the cluster as mentioned below and deploy backup-s3.yaml as well.

$ k apply -f ./deploy/bundle.yaml
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com created
role.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created

$ k apply -f ./deploy/cr-minimal.yaml
perconaservermongodb.psmdb.percona.com/minimal-cluster created

$ k apply -f ./deploy/backup-s3.yaml 
secret/my-cluster-name-backup-s3 created

After starting the Operator and applying the yaml files, the setup looks like the below:

$ k get pods
NAME                                               READY   STATUS    RESTARTS   AGE
minimal-cluster-cfg-0                              2/2     Running   0          39m
minimal-cluster-mongos-0                           1/1     Running   0          70m
minimal-cluster-rs0-0                              2/2     Running   0          38m
percona-server-mongodb-operator-665cd69f9b-44tq5   1/1     Running   0          74m

$ k get svc
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
kubernetes               ClusterIP   10.96.0.1     <none>        443/TCP     76m
minimal-cluster-cfg      ClusterIP   None          <none>        27017/TCP   72m
minimal-cluster-mongos   ClusterIP   10.100.7.70   <none>        27017/TCP   72m
minimal-cluster-rs0      ClusterIP   None          <none>        27017/TCP   72m

 

Backup

After deploying the cluster, the DB is ready for backup anytime. In addition to scheduled backups, you can create a backup-custom.yaml file to take a backup whenever you need it (you will need to provide a unique backup name each time, or a new backup will not start). Our backup yaml file looks like the one below:

$ cat deploy/backup/backup-custom.yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  finalizers:
  - delete-backup
  name: backup1
spec:
  clusterName: minimal-cluster
  storageName: s3-us-west
#  compressionType: gzip
#  compressionLevel: 6

Now load some data into the database and then start the backup:

$ k apply -f deploy/backup/backup-custom.yaml 
perconaservermongodbbackup.psmdb.percona.com/backup1 configured

The backup progress looks like the below:

$ k get perconaservermongodbbackup.psmdb.percona.com
NAME      CLUSTER           STORAGE      DESTINATION            STATUS      COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:21:58Z   requested               43s
$ k get perconaservermongodbbackup.psmdb.percona.com
NAME      CLUSTER           STORAGE      DESTINATION            STATUS      COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:22:19Z   requested               46s
$ k get perconaservermongodbbackup.psmdb.percona.com
NAME      CLUSTER           STORAGE      DESTINATION            STATUS    COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:22:19Z   running               49s

Here, if you have any issues with the backup, you can view the backup logs from the backup agent sidecar as follows:

$ k logs pod/minimal-cluster-rs0-0 -c backup-agent

To start another backup, edit backup-custom.yaml and change the backup name (here, name: backup2), then apply it:

$ k apply -f deploy/backup/backup-custom.yaml 
perconaservermongodbbackup.psmdb.percona.com/backup2 configured 

Monitor the backup process (you can use the -w option to watch the progress continuously). It should show the status as ready:

$ k get perconaservermongodbbackup.psmdb.percona.com -w
NAME      CLUSTER           STORAGE      DESTINATION            STATUS   COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:22:19Z   ready    12m         14m
backup2   minimal-cluster   s3-us-west                                               8s
backup2   minimal-cluster   s3-us-west   2022-09-08T03:35:56Z   requested               21s
backup2   minimal-cluster   s3-us-west   2022-09-08T03:35:56Z   running                 26s
backup2   minimal-cluster   s3-us-west   2022-09-08T03:35:56Z   ready       0s          41s

From the bucket on Backblaze, the backup files are listed as they were sent from the backup:

Backup Files in B2 cloud Storage

 

Restore

You can restore the cluster from the backup into another similar deployment or into the same cluster. List the backups and restore one of them as follows. The restore-custom.yaml configuration has the backup information needed for the restore. If you are restoring into another deployment, you can instead include the backupSource section (commented out below for your reference), from which the restore process finds the source of the backup. In that case, make sure you also create the secret my-cluster-name-backup-s3 before restoring so the backup can be accessed.

$ cat deploy/backup/restore-custom.yaml 
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore2
spec:
  clusterName: minimal-cluster
  backupName: backup2
#  pitr:
#    type: date
#    date: YYYY-MM-DD HH:MM:SS
#  backupSource:
#    destination: s3://S3-BACKUP-BUCKET-NAME-HERE/BACKUP-DESTINATION
#    s3:
#      credentialsSecret: my-cluster-name-backup-s3
#      region: us-west-004
#      bucket: S3-BACKUP-BUCKET-NAME-HERE
#      endpointUrl: https://s3.us-west-004.backblazeb2.com/
#      prefix: ""
#    azure:
#      credentialsSecret: SECRET-NAME
#      prefix: PREFIX-NAME
#      container: CONTAINER-NAME

Listing the backup:

$ k get psmdb-backup
NAME      CLUSTER           STORAGE      DESTINATION            STATUS   COMPLETED   AGE
backup1   minimal-cluster   s3-us-west   2022-09-08T03:22:19Z   ready    3h5m        3h6m
backup2   minimal-cluster   s3-us-west   2022-09-08T03:35:56Z   ready    171m        172m
backup3   minimal-cluster   s3-us-west   2022-09-08T04:16:39Z   ready    130m        131m

 

To verify the restore process, I write some data into the collection vinodh.testData after the backup and before the restore, so the newly inserted document shouldn’t be there after the restore:

# Using mongosh from the mongo container to see the data
# Listing data from collection vinodh.testData
$ kubectl run -i --rm --tty mongo-client --image=mongo:5.0.7 --restart=Never -- bash -c "mongosh --host=10.96.30.92 --username=root --password=password --authenticationDatabase=admin --eval \"db.getSiblingDB('vinodh').testData.find()\" --quiet "
If you don't see a command prompt, try pressing enter.
[ { _id: ObjectId("631956cc70e60e9ed3ecf76d"), id: 1 } ]
pod "mongo-client" deleted

Inserting a document into it:

$ kubectl run -i --rm --tty mongo-client --image=mongo:5.0.7 --restart=Never -- bash -c "mongosh --host=10.96.30.92 --username=root --password=password --authenticationDatabase=admin --eval \"db.getSiblingDB('vinodh').testData.insert({id:2})\" --quiet "
If you don't see a command prompt, try pressing enter.
DeprecationWarning: Collection.insert() is deprecated. Use insertOne, insertMany, or bulkWrite.
{
  acknowledged: true,
  insertedIds: { '0': ObjectId("631980fe07180f860bd22534") }
}
pod "mongo-client" delete

Listing it again to verify:

$ kubectl run -i --rm --tty mongo-client --image=mongo:5.0.7 --restart=Never -- bash -c "mongosh --host=10.96.30.92 --username=root --password=password --authenticationDatabase=admin --eval \"db.getSiblingDB('vinodh').testData.find()\" --quiet "
If you don't see a command prompt, try pressing enter.
[
  { _id: ObjectId("631956cc70e60e9ed3ecf76d"), id: 1 },
  { _id: ObjectId("631980fe07180f860bd22534"), id: 2 }
]
pod "mongo-client" deleted

Running restore as follows:

$ k apply -f deploy/backup/restore-custom.yaml
perconaservermongodbrestore.psmdb.percona.com/restore2 created

Now check the data again in the vinodh.testData collection and verify whether the restore was done properly. The output below proves that the collection was restored from the backup, as it lists only the record from the backup:

$ kubectl run -i --rm --tty mongo-client --image=mongo:5.0.7 --restart=Never -- bash -c "mongosh --host=minimal-cluster-mongos --username=root --password=password --authenticationDatabase=admin --eval \"db.getSiblingDB('vinodh').testData.find()\" --quiet "
If you don't see a command prompt, try pressing enter.
[ { _id: ObjectId("631956cc70e60e9ed3ecf76d"), id: 1 } ]

 

Hope this helps you! Now you can try the same from your end and use Backblaze in production if it suits your requirements. I haven’t tested the network performance yet. If you have used Backblaze or similar S3-compatible storage for backups, you can share your experience with us in the comments.

The Percona Kubernetes Operators automate the creation, alteration, or deletion of members in your Percona Distribution for MySQL, MongoDB, or PostgreSQL environment.

Learn More About Percona Kubernetes Operators

May
27
2022
--

Physical Backup Support in Percona Backup for MongoDB

Physical Backup Support in Percona Backup for MongoDB

Percona Backup for MongoDB (PBM) is a backup utility custom-built by Percona to help solve the needs of customers and users who don’t want to pay for proprietary software like MongoDB Enterprise and Ops Manager but want a fully-supported open-source backup tool that can perform cluster-wide consistent backups in MongoDB.

Version 1.7.0 of PBM was released in April 2022 and comes with a technical preview of physical backup functionality. This functionality enables users to benefit from the reduced recovery time. 

With logical backups, you extract the data from the database and store it in some binary or text format; for recovery, you write all this data back to the database. For huge data sets, this is a time-consuming operation and might take hours or even days. Physical backups take all the files which belong to the database from the disk itself, and recovery is just putting these files back. Such recovery is much faster as it does not depend on database performance at all.

In this blog post, you will learn about the architecture of the physical backups feature in PBM, see how fast it is compared to logical backups, and try it out yourself.

Tech Peek

Architecture Review

In general, physical backup means a copy of a database’s physical files. In the case of Percona Server for MongoDB (PSMDB), these are the WiredTiger *.wt B-tree, config, metadata, and journal files you can usually find in /data/db. The trick is to copy all those files without stopping the cluster and interrupting running operations, while being sure that the data in the files is consistent and no data will change during copying. Another challenge is to achieve consistency in a sharded cluster; in other words, how to be sure that we will be able to restore data to the same cluster time across all shards.

PBM’s physical backups are based on the backupCursors feature of PSMDB. This implies that to use this feature, you should use Percona Server for MongoDB.

Backup

On each replica set, PBM uses $backupCursor to retrieve a list of files that need to be copied to achieve a backup. Having that list, the next step is to ensure cluster-wide consistency. For that, each replica set posts the cluster time of the latest observed operation. The backup leader picks the most recent one. This will be the common backup timestamp (recovery timestamp), saved as last_write_ts in the backup metadata. After agreeing on the backup time, the pbm-agent on each replica set opens a $backupCursorExtend. The cursor will return its result only after the node reaches the given timestamp. Thus the returned list of logs (journals) will contain the “common backup timestamp”. At that point, we have a list of all files that have to be in the backup, so each node copies them to the storage, saves metadata, closes the cursors, and calls it a backup. Here is a blog post explaining Backup Cursors in great detail.

Of course, PBM does a lot more behind the scenes starting from electing appropriate nodes for the backup, coordinating operations across the cluster, logging, error handling, and many more. But all these subjects are for other posts.

Backup’s Recovery Timestamp

Restoring any backup, PBM returns the cluster to some particular point in time. Here we’re talking about time not in terms of wall time but of MongoDB’s cluster-wide logical clock, so that point in time is consistent across all nodes and replica sets in a cluster. For both logical and physical backups, that time is reflected in the complete section of the pbm list or pbm status outputs. E.g.:

    2022-04-19T15:36:14Z 22.29GB <physical> [complete: 2022-04-19T15:36:16]
    2022-04-19T14:48:40Z 10.03GB <logical> [complete: 2022-04-19T14:58:38]

This time is not the time when a backup has finished, but the time at which cluster state was captured (hence the time the cluster will be returned to after the restore). In PBM’s logical backups, the recovery timestamp tends to be closer to the backup finish. To define it, PBM has to wait until the snapshot on all replica sets finishes. And then it starts oplog capturing from the backup start up to that time. Doing physical backups, PBM would pick a recovery timestamp right after a backup start. Holding the backup cursor open guarantees the checkpoint data won’t change during the backup, and PBM can define complete-time right ahead.

Restore

There are a few considerations for restoration.

First of all, files in the backup may contain operations beyond the target time (commonBackupTimestamp). To deal with that, PBM uses a special function of the replication subsystem’s startup process to set the limit of the oplog being restored. It’s done by setting the oplogTruncateAfterPoint value in the local DB’s replset.oplogTruncateAfterPoint collection.

Along with setting oplogTruncateAfterPoint, the database needs some other changes and clean-up before starting. This requires a series of restarts of PSMDB in standalone mode, which in turn brings some hassle to the PBM operation. To communicate and coordinate its work across all agents, PBM relies on PSMDB itself. But once a cluster is taken down, PBM has to switch to communication via storage. Also, during standalone runs, PBM is unable to store its logs in the database. Hence, at some point during restore, pbm-agent logs are available only on the agents’ stderr, and pbm logs won’t have access to them. We’re planning to solve this problem by the physical backups GA.

Also, we had to decide on the restore strategy within a replica set. One way is to restore one node, then delete all data on the rest and let PSMDB replication do the job. Although it’s a bit easier, it means that until the InitialSync finishes, the cluster will be of little use. Besides, logical replication at this stage almost negates all the speed benefits (more on that later) the physical restore brings to the table. So we went with the restoration of each node in a replica set, making sure that after the cluster starts, no node will spot any difference and start a ReSync.

As with PBM’s logical backups, the physical ones currently can be restored only to a cluster with the same topology, meaning the replica set names in the backup and the target cluster should match. This won’t be an issue for logical backups starting from the next PBM version, and later the feature will be extended to physical backups as well. Along with that, the number of replica sets in the cluster can be greater than in the backup, but not vice versa, meaning all data in the backup should be restored.

Performance Review

We used the following setup:

  • Cluster: 3-node replica set. Each mongod+pbm-agent on Digital Ocean droplet: 16GB, 8vCPU (CPU optimized).
  • Storage: nyc3.digitaloceanspaces.com
  • Data: randomly generated, ~1MB documents

physical backup MongoDB

In general, a logical backup should be more beneficial on small databases (a few hundred megabytes), since at such a scale the extra overhead that physical files carry on top of the user data still makes a difference. Basically, reading/writing only user data during a logical backup means less data needs to be transferred over the network. But as the database grows, the overhead of logical reads (selects) and especially writes (inserts) becomes a bottleneck for logical backups. As for physical backups, the speed is almost always bounded only by the network bandwidth to/from the remote storage. In our tests, restoration time from physical backups has a linear dependency on the dataset size, whereas logical restoration time grows non-linearly: the more data you have, the longer it takes to replay all the data and rebuild indexes. For example, for a 600GB dataset, a physical restore took 5x less time compared to a logical one.

But at a small DB size, the difference is negligible – a couple of minutes. So the main benefit of logical backups lies beyond performance: it’s flexibility. Logical backups allow partial backup/restore of the database (on the roadmap for PBM); you can choose particular databases and/or collections to work with. As physical backups work directly with the database storage-engine files, they operate in an all-or-nothing frame.

Hands-on

PBM Configuration

In order to start using PBM with PSMDB or MongoDB, install all the necessary packages according to the installation instructions. Please note that starting from version 1.7.0, the user running the pbm-agent process should also have read/write access to the PSMDB data directory for the purpose of performing operations with data files during physical backup or restore.

Considering this design, starting from 1.7.0 the default user for pbm-agent is changed from pbm to mongod. So unless PSMDB runs under a user other than mongod, no extra actions are required. Otherwise, please carefully re-check your configuration and provide the necessary permissions to ensure proper PBM functioning.

In addition, keep in mind that to use PBM physical backups, you should run Percona Server for MongoDB version 4.2.15-16 or 4.4.6-8 and higher – this is where hot backups and backup cursors were introduced.

Creating a Backup

With the new PBM version, you can specify what type of backup you wish to make: physical or logical. By default when no type is selected, PBM makes a logical backup.

> pbm backup
Starting backup '2022-04-20T11:12:53Z'....
Backup '2022-04-20T11:12:53Z' to remote store 's3://https://storage.googleapis.com/pbm-bucket' has started

> pbm backup -t physical
Starting backup '2022-04-20T12:34:06Z'....
Backup '2022-04-20T12:34:06Z' to remote store 's3://https://storage.googleapis.com/pbm-bucket' has started

> pbm status -s cluster -s backups
Cluster:
========
rs0:
  - rs0/mongo1.perconatest.com:27017: pbm-agent v1.7.0 OK
  - rs0/mongo2.perconatest.com:27017: pbm-agent v1.7.0 OK
  - rs0/mongo3.perconatest.com:27017: pbm-agent v1.7.0 OK
Backups:
========
S3 us-east-1 s3://https://storage.googleapis.com/pbm-bucket
  Snapshots:
    2022-04-20T12:34:06Z 797.38KB <physical> [complete: 2022-04-20T12:34:09]
    2022-04-20T11:12:53Z 13.66KB <logical> [complete: 2022-04-20T11:12:58]

Point-in-Time Recovery

Point-in-Time Recovery is currently supported only for logical backups. This means a logical backup snapshot is required for pbm-agent to start periodically saving consecutive slices of the oplog. You can still make a physical backup while PITR is enabled; it won’t break or change the oplog-saving process.

The restoration process to the specific point in time will also use a respective logical backup snapshot and oplog slices which will be replayed on top of the backup.

Checking the Logs

During physical backup, PBM logs are available via the pbm logs command, as they are for all other operations.

> pbm logs -e backup/2022-04-20T12:34:06Z
2022-04-20T12:34:07Z I [rs0/mongo2.perconatest.com:27017] [backup/2022-04-20T12:34:06Z] backup started
2022-04-20T12:34:12Z I [rs0/mongo2.perconatest.com:27017] [backup/2022-04-20T12:34:06Z] uploading files
2022-04-20T12:34:54Z I [rs0/mongo2.perconatest.com:27017] [backup/2022-04-20T12:34:06Z] uploading done
2022-04-20T12:34:56Z I [rs0/mongo2.perconatest.com:27017] [backup/2022-04-20T12:34:06Z] backup finished

As for restore, the pbm logs command doesn’t provide information about restores from a physical backup. This is caused by peculiarities of the restore procedure and will be improved in upcoming PBM versions. However, pbm-agent still saves its log locally, so it’s possible to check information about the restore process on each node:

> sudo journalctl -u pbm-agent.service | grep restore
pbm-agent[12560]: 2022-04-20T19:37:56.000+0000 I [restore/2022-04-20T12:34:06Z] restore started
.......
pbm-agent[12560]: 2022-04-20T19:38:22.000+0000 I [restore/2022-04-20T12:34:06Z] copying backup data
.......
pbm-agent[12560]: 2022-04-20T19:38:39.000+0000 I [restore/2022-04-20T12:34:06Z] preparing data
.......
pbm-agent[12560]: 2022-04-20T19:39:12.000+0000 I [restore/2022-04-20T12:34:06Z] restore finished <nil>
pbm-agent[12560]: 2022-04-20T19:39:12.000+0000 I [restore/2022-04-20T12:34:06Z] restore finished successfully

Restoring from a Backup

The restore process from a physical backup is similar to a logical one but requires several extra steps after the restore is finished by PBM.

> pbm restore 2022-04-20T12:34:06Z
Starting restore from '2022-04-20T12:34:06Z'.....Restore of the snapshot from '2022-04-20T12:34:06Z' has started. Leader: mongo1.perconatest.com:27017/rs0

After starting the restore process, pbm CLI returns the leader node ID, so it’s possible to track the restore progress by checking the logs of the pbm-agent leader. In addition, the status is written to a metadata file created on the remote storage. The status file is created in the root of the storage path and has the format .pbm.restore/<restore_timestamp>.json. As an option, it’s also possible to pass the -w flag during restore, which will block the current shell session and wait for the restore to finish:

> pbm restore 2022-04-20T12:34:06Z -w
Starting restore from '2022-04-20T12:34:06Z'....Started physical restore. Leader: mongo2.perconatest.com:27017/rs0
Waiting to finish...........................Restore successfully finished!

After the restore is complete, it’s required to perform the following steps:

  • Restart all mongod (and mongos if present) nodes
  • Restart all pbm-agents
  • Run the following command to resync the backup list with the storage:

    $ pbm config --force-resync

Conclusion

MongoDB allows users to store enormous amounts of data. Especially if we talk about sharded clusters, where users are not limited by a single storage volume size limit. Database administrators often have to implement various home-grown solutions to ensure timely backups and restores of such big clusters. The usual approach is a storage-level snapshot. Such solutions do not guarantee data consistency and provide false confidence that data is safe.

Percona Backup for MongoDB with physical backup and restore capabilities enables users to back up and restore data fast, and at the same time comes with data-consistency guarantees.

Physical Backup functionality is in the Technical Preview stage. We encourage you to read more about it in our documentation and try it out. In case you face any issues, feel free to contact us on the forum or raise a JIRA issue.

Mar
11
2022
--

PostgreSQL 101 for Non-Postgres DBAs (Simple Backup and Restore)

PostgreSQL 101 Backup and Restore

It’s no surprise that PostgreSQL is becoming the de facto go-to database for many. Just a few of the many reasons include advanced technology, scalability, and ways to save money. With that said, we see many experienced DBAs being tasked with migrating existing databases from Oracle, MySQL, SQL Server, and others to Postgres. Although, fundamentally speaking, a good DBA should have conceptual knowledge and an understanding of database fundamentals, translating your existing way of performing daily tasks differs from one technology to the next. With that in mind, this blog is addressed to those experienced DBAs who have a well-known and proven set of routines in their old technology and want to know how to perform them in Postgres.

Postgres offers several utilities for performing both physical and logical backups and restores. We will talk about these and how to use them here.

For the purpose of this mini-tutorial, we are assuming all tasks will be performed by the user “postgres”, which has superuser privileges on the database unless otherwise noted.

I Want To …..

 

Logical Backups

Logical backups are processed with native tools such as pg_dump and pg_dumpall. These tools should be included in the default bin directory for postgres installation such as /usr/pgsql-11/bin. If your path is not set, you may want to include the bin directory in your path.

There are many options that can be used when running these tools to customize your data dumps. So, we will cover a few scenarios in this blog.

 

Physical Backups

Physical backups are processed with native tools such as pg_basebackup. Again, these tools should be included in the default bin directory for postgres installation such as /usr/pgsql-11/bin. If your path is not set, you may want to include the bin directory in your path.

You can also use system tools for physical backups such as tar or other archiving tools at your disposal.

 

Prerequisite for Remote Backups

The source database server has to allow a remote connection for the user performing the task. Remember, we are assuming for our examples that the user is postgres. 

 

  1. Create an entry in the pg_hba.conf file similar to the following under the IPv4 connections section.

host    all        postgres        0.0.0.0/0               md5

      2. Edit your postgresql.conf file or whatever file you may be loading for runtime configs and change the parameter listen_addresses to the following:

listen_addresses = '*'

Once the above changes are made, reload your configuration file or restart postgres. 

The above examples are pretty open. For security, you will most likely restrict the IP address in the pg_hba.conf file to a more specific IP or subnet.

In our example, we are allowing postgres to connect from anywhere with password authentication, thus the 0.0.0.0/0 and md5. You could change the 0.0.0.0/0 to the address of the other database server, like 192.168.1.2/32. We also specify the user postgres with the -U option since it is the user we opened up in the pg_hba.conf file.

If the user running the commands has different credentials on source/target servers you will need to save the password to .pgpass or set the environment variable PGPASSWORD so you are not prompted for the password whenever it is needed.
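The .pgpass approach can be sketched as follows. Each line has the form hostname:port:database:username:password, the file must not be group- or world-readable, and libpq also honors the PGPASSFILE environment variable if you want the file somewhere other than ~/.pgpass. The host and password values below are placeholders:

```shell
# Point libpq at a password file (here a temp file for illustration;
# omit PGPASSFILE entirely to use the default ~/.pgpass)
export PGPASSFILE="$(mktemp)"
# Format: hostname:port:database:username:password ("*" matches any database)
printf '%s\n' '192.168.1.1:5432:*:postgres:secret' > "$PGPASSFILE"
chmod 600 "$PGPASSFILE"   # libpq ignores the file if it is group/world accessible
# pg_dumpall -h 192.168.1.1 -p 5432 -U postgres > savedfile.sql   # now runs without a prompt
```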

I want to dump my entire database, including users and credentials to a file.

This is quite a simple task to perform if you have the correct privileges and configuration settings along with the storage needed depending on your database size.

Performing the Data Dump Locally

If you have only one instance of postgres running on your server, have a minimal/default configuration in the pg_hba.conf file, and your path includes the postgres bin directory, all you need to do as user postgres is:

pg_dumpall > savedfile.sql

The above works well for small databases where you have space on the local server and just want a quick and simple dump of your database.

If you are running multiple instances on the local server and want to dump a specific instance all you do is …

pg_dumpall -p port > savedfile.sql

Replace the port above with the port number the instance you wish to dump is running on.

Performing the Data Dump Remotely

Although this is pretty much the same as on a local server, there are a few things you need to have configured in order to execute this data dump remotely. Make sure your prerequisites are addressed.

Now from our remote client or server, we can run the following commands as long as the postgres tools are installed.

pg_dumpall -h host -p port -U postgres > savedfile.sql

Replace the host above with the address of the source DB and port with the port number it is running on.

There are other flags and options you can use; see the pg_dumpall documentation for the full list of usage options.

I want to dump a specific database only.

Performing the Data Dump Locally

Similar to the other commands with a slight variation

pg_dump -d dbname > savedfile.sql

Like in other scenarios, the above works well for small databases where you have space on the local server and just want a quick and simple dump of your database.

If you are running multiple instances on the local server and want to dump from a specific instance all you do is …

pg_dump -p port -d dbname > savedfile.sql

I want to dump a specific database and specific table or tables only.

On a local server

Similar to the other commands with a slight variation

pg_dump -d dbname -t tablename > savedfile.sql

Like in other scenarios, the above works well for small databases where you have space on the local server and just want a quick and simple dump of your database.

If you are running multiple instances on the local server and want to dump from a specific instance all you do is …

pg_dump -p port -d dbname -t tablename > savedfile.sql

If you want more than one table, list their names or patterns like so …

pg_dump -d dbname -t table1 -t table2 -t table3 > savedfile.sql
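When the table list grows, the -t flags can be generated from a plain list; this sketch only echoes the resulting command instead of running it (database and table names are placeholders):

```shell
# Build one -t flag per table name, then print the pg_dump command we would run.
TABLES="table1 table2 table3"
FLAGS=""
for t in $TABLES; do
  FLAGS="$FLAGS -t $t"
done
echo "pg_dump -d dbname$FLAGS > savedfile.sql"
# → pg_dump -d dbname -t table1 -t table2 -t table3 > savedfile.sql
```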

From a remote server

Just like in previous examples, specify the connection options with -h host -p port

I only want to dump the users and credentials to restore them somewhere else.

This is just as simple as the above data dumps. However, keep in mind that this will not get you what you need if your instance is an RDS instance: Amazon really locks down what you can do as a privileged user on RDS, even as the postgres user.

From a local server

pg_dumpall -g > users.sql

From a remote server or client (saves the file locally):

pg_dumpall -g -h host -p port -U postgres > users.sql

You can edit the above dump file and remove any user you do not wish to apply when you restore the file to a different server.

Restoring a Logical Dump

Restoring the newly created backup is a simple task. There are several ways to accomplish this, and we will go over a few of them just to get you going. Keep in mind there is also a pg_restore utility, which we will not address in this blog; pg_restore lets you get more creative with your dumps and imports.

Again, we assume all actions here are executed as user postgres.

Restoring a pg_dumpall to a local server from a saved file.

psql postgres -f savedfile.sql

Restoring a pg_dumpall to a remote server from a saved file.

psql -h host -p port postgres -f savedfile.sql

Restoring a pg_dumpall to a remote server from the source server.

pg_dumpall | psql -h host -p port postgres

Restoring from a pg_dumpall from a remote server to a remote server.

pg_dumpall -h src_host -p src_port | psql -h target_host -p target_port postgres

Restoring a pg_dump of a specific database from a saved file.

psql dbname -f savedfile.sql

Restoring a pg_dump of a specific database to a remote server from a saved file.

psql -h host -p port dbname -f savedfile.sql

Restoring a pg_dump with a different owner on the target.

Sometimes you don’t have access to the users and credentials on a source database or want them to be different on your target/restored database. Follow these steps to achieve this.

  1. Perform your pg_dump command as noted previously but add the --no-owner option.
  2. Perform the restore as noted above but run the commands as the new owner. 

pg_dump -d database --no-owner > savedfile.sql

psql -U newowner dbname -f savedfile.sql

Remember for remote servers as noted in the other examples, use the -h host -p port and any other connection string option needed.

If the user’s credentials are different and you are prompted for passwords,  read the prerequisites section of this blog.

Let’s Get Physical with pg_basebackup

A common way of performing physical backups in Postgres is with the use of pg_basebackup. This tool allows us to generate a physical backup with the necessary WAL files needed to restore or stand up a stand-alone instance.

There are many flags and options for this tool, including compression, but for the sake of this blog we will focus on basic use of the native pg_basebackup tool with minimal options.

NOTE: Typically, one specifies the destination path for the physical backup. This is noted with the -D option of pg_basebackup.

Saving the backup to destination path

pg_basebackup -D /destination/path -Pv --checkpoint=fast

Sending the backup as tar files to the directory path specified

pg_basebackup -D /destination/path -Pv --checkpoint=fast -F t

The above will generate two tar files: a base.tar and a pg_wal.tar.

Create a Physical Backup From a Remote Instance

Make sure you have set up the prerequisites as explained in the prerequisites section above.

The only difference between remote and local execution is that, for remote, we specify the source server with -h remote_host and the port postgres is running on with -p remote_port:

pg_basebackup -h host -p port -D /destination/path -Pv --checkpoint=fast

If the user executing pg_basebackup is not trusted directly by the source server, add the additional option -U username. For example:

pg_basebackup -U postgres -h host -p port -D /destination/path -Pv --checkpoint=fast

Stand up a Local Instance of Postgres using pg_basebackup

Tar file method

If you executed pg_basebackup with the tar file option, it generated two tar files: a base.tar and a pg_wal.tar.

Extract the base.tar into the new data directory. If you do not have other WAL files to restore, extract the pg_wal.tar and place the WAL segment files in the pg_wal directory.
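The extraction steps might be sketched as follows; throwaway empty archives are created here so the commands can run anywhere, but in practice you would use the real base.tar and pg_wal.tar produced by pg_basebackup -F t:

```shell
# Stand-in archives (replace with the real pg_basebackup output).
BACKUP="$(mktemp -d)"; DATADIR="$(mktemp -d)"
tar -cf "$BACKUP/base.tar"   -T /dev/null
tar -cf "$BACKUP/pg_wal.tar" -T /dev/null

# Unpack the base backup into the new data directory, then the WAL
# segments into its pg_wal subdirectory.
tar -xf "$BACKUP/base.tar" -C "$DATADIR"
mkdir -p "$DATADIR/pg_wal"
tar -xf "$BACKUP/pg_wal.tar" -C "$DATADIR/pg_wal"
```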

Directory method

Make sure the directory where the new cluster will be located exists with the proper permissions and storage capacity. Remember, this will consume the same amount of space as the source database.

Define where the target database will reside.

  • mkdir -p /destination/path
  • chmod 700 /destination/path
  • chown postgres:postgres  /destination/path

As user postgres, run the following command assuming pg_basebackup is in your path.

Source database is local

pg_basebackup -D /destination/path -Pv --checkpoint=fast -X stream

Source database is on a remote server

pg_basebackup -h host -p port -D /destination/path -Pv --checkpoint=fast -X stream

What does the above do?

  1. Assumes postgres is running on the localhost using the default port of 5432 and that the user executing it has the necessary privileges to do so.
  2. Initiates a pg_basebackup of the current, running instance of postgres.
  3. Saves the copy to the path specified after the -D option.
  4. Optionally, the -Pv shows the progress and verbose output of the process.
  5. Performs a fast checkpoint rather than spreading it out, which makes the backup start sooner.
  6. Streams the WAL changes that happen on the running cluster and saves them in the new cluster. This allows starting the new cluster without additional WALs.

The above applies whether the source database is remote or not.

Starting the separate instance of postgres

When pg_basebackup completes, to start up the new local instance, go into the new data directory /destination/path and modify the postgresql.conf file (or whatever file your port was previously defined in).

  • Set the port to a number not in use, such as 5433, i.e., port = 5433
  • Modify any memory parameters necessary
  • Make sure, if archiving is enabled, it archives to a different location than the original cluster.
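In the copied cluster’s postgresql.conf, those edits might look like this (values are illustrative, and the archive settings only matter if archiving is enabled):

```
port = 5433                                        # any unused port
archive_command = 'cp %p /archive/new_cluster/%f'  # a different location than the source cluster
```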

You can then proceed to start the new instance of postgres as follows:

pg_ctl -D /destination/path -o "-p 5433" start

You should now be able to connect to the new cluster, with the same credentials as the source cluster, with:

psql -p 5433

Stand up a remote cluster

This process is pretty much identical to the local cluster process above. The only difference is you will specify a host and credentials.

From the remote target host 

pg_basebackup -h source_server -p port -U username -D /destination/path -Pv --checkpoint=fast -X stream

As you can see, we are simply adding a connection string to the original command we ran for the local copy. This will generate the backup on the remote host and save it to the local destination path.

Once the copy is placed on the target host, if necessary, change your port and archive location if archiving is enabled as mentioned above.

Last words

The above examples are meant to get you started with basic backups and restores. They do not cover more advanced options such as archiving of WAL files, point-in-time recovery, etc. These will be addressed in a future blog or can be found by searching online. Furthermore, using backups to stand up replicas will also be addressed in future blog posts.

Jul
07
2021
--

Migrating Ownership of Your Stored Routines, Views, and Triggers in MySQL


“It would be nice to have an option, that would allow to suppress the DEFINER statement in the CREATE VIEW statements generated by mysqldump. This would help when transferring data structures between databases with different security models.”

TLDR;

Use mysqlpump with the option --skip-definer instead of mysqldump.

The Story

This was requested as MySQL Bug #24680 on Nov 29, 2006. The feature request received broad community support: even though we cannot see the number of people who voted for it, the number of comments is impressive.

The request is very reasonable: mysqldump is widely used during application development, and it is a very common practice to migrate database structures between developers’ machines and to production servers.

Imagine a situation where developer Sveta creates a database and adds a few objects with DEFINER clauses. We will use only one for this post, but in reality she could have dozens.

mysql> CREATE VIEW large_tables AS SELECT * FROM information_schema.tables WHERE DATA_LENGTH > 100000000;
Query OK, 0 rows affected (0,01 sec)

Once you create a view, the default DEFINER is the user who created it:

mysql> SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME FROM information_schema.views WHERE TABLE_NAME='large_tables';
+---------+--------------+--------------+
| DEFINER | TABLE_SCHEMA | TABLE_NAME   |
+---------+--------------+--------------+
| sveta@% | definers     | large_tables |
+---------+--------------+--------------+
1 row in set (0,01 sec)

And this causes issues when another user tries to import such a view into a different server:

mysql> CREATE USER production;
Query OK, 0 rows affected (0,01 sec)

mysql> GRANT ALL ON definers.* TO production@'%';
Query OK, 0 rows affected (0,01 sec)

mysql> GRANT SESSION_VARIABLES_ADMIN ON *.* TO production@'%';
Query OK, 0 rows affected (0,01 sec)

$ mysqldump -usveta definers | mysql -uproduction production
ERROR 1227 (42000) at line 61: Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation

Here is the content of line 61:

$ mysqldump -usveta definers | head -n 62 | tail
/*!50001 DROP VIEW IF EXISTS `large_tables`*/;
/*!50001 SET @saved_cs_client          = @@character_set_client */;
/*!50001 SET @saved_cs_results         = @@character_set_results */;
/*!50001 SET @saved_col_connection     = @@collation_connection */;
/*!50001 SET character_set_client      = utf8mb4 */;
/*!50001 SET character_set_results     = utf8mb4 */;
/*!50001 SET collation_connection      = utf8mb4_0900_ai_ci */;
/*!50001 CREATE ALGORITHM=UNDEFINED */
/*!50013 DEFINER=`sveta`@`%` SQL SECURITY DEFINER */

So this is a CREATE VIEW operation that failed during import.

Unfortunately, mysqldump still does not have an option that allows migrating definers.
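A workaround you will often see in the wild (not an official mysqldump option) is to strip the DEFINER clause from the dump stream with sed before piping it to the target server. The pattern below is only a sketch and assumes the default backtick-quoted user@host form:

```shell
# Typical usage would be:
#   mysqldump -usveta definers | sed -e 's/DEFINER=`[^`]*`@`[^`]*` //' | mysql -uproduction production
# Demonstrated here on the failing line from the dump:
LINE='/*!50013 DEFINER=`sveta`@`%` SQL SECURITY DEFINER */'
echo "$LINE" | sed -e 's/DEFINER=`[^`]*`@`[^`]*` //'
# → /*!50013 SQL SECURITY DEFINER */
```

Text munging of dump files can be fragile, so verify the result before loading it into production.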

But since August 2015 and MySQL 5.7.8, we have had a solution that, unfortunately, was overlooked in favor of the famous mysqldump tool.

Version 5.7.8, and all versions released after it, comes with a new dump tool, mysqlpump, which has the option --skip-definer and allows migrating database objects without any issue:

$ mysqlpump -h127.0.0.1 -P3306 -usveta --skip-definer definers | mysql -h127.0.0.1 -P13000 -uproduction definers
Dump completed in 17

$ mysql -h127.0.0.1 -P13000 -uproduction definers -e "SHOW FULL TABLES"
+--------------------+------------+
| Tables_in_definers | Table_type |
+--------------------+------------+
| large_tables       | VIEW       |
+--------------------+------------+

$ mysql -h127.0.0.1 -P13000 -uproduction definers -e "SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME FROM information_schema.views WHERE TABLE_NAME='large_tables';"
+--------------+--------------+--------------+
| DEFINER      | TABLE_SCHEMA | TABLE_NAME   |
+--------------+--------------+--------------+
| production@% | definers     | large_tables |
+--------------+--------------+--------------+

Note that mysqlpump automatically adds CREATE DATABASE into the dump along with the full path to the database objects, e.g.:

CREATE ALGORITHM=UNDEFINED VIEW `definers`.`large_tables` AS select …

Therefore this method cannot be used to migrate view, routine, or trigger definitions between different databases on the same server.

For more information about mysqlpump, and why you should switch to this tool from mysqldump, read this blog post: The mysqlpump Utility.
