Oct
13
2021
--

Migrating MongoDB to Kubernetes

This blog post is the last in a series of articles on migrating databases to Kubernetes with Percona Operators. The two previous posts can be found here:

As you might have guessed already, this time we are going to cover the migration of MongoDB to Kubernetes. In the 1.10.0 release of Percona Distribution for MongoDB Operator, we introduced a new feature (in tech preview) that enables users to execute such migrations through regular MongoDB replication capabilities. We have already shown how it can be used to provide cross-regional disaster recovery for MongoDB; we encourage you to read that post.

The Goal

There are two ways to migrate the database:

  1. Take the backup and restore it.
    – This option is the simplest one, but unfortunately comes with downtime. The bigger the database, the longer the recovery time is.
  2. Replicate the data to the new site and switch the application once replicas are in sync.
    – This allows the user to perform the migration and switch the application with either zero or little downtime.

This blog post is a walkthrough on how to migrate a MongoDB replica set to Kubernetes using these replication capabilities.

MongoDB replica set to Kubernetes

  1. We have a MongoDB cluster somewhere (the Source). It can be on-prem or on some virtual machine. For demo purposes, I’m going to use a standalone replica set node. The migration procedure for a replica set with multiple nodes or for a sharded cluster is almost identical.
  2. We have a Kubernetes cluster with Percona Operator (the Target). The Operator deploys 3 standalone MongoDB nodes in unmanaged mode (we will talk about it below).
  3. Each node is exposed so that the nodes on the Source can reach them.
  4. We are going to replicate the data to Target nodes by adding them into the replica set.

As always, all scripts and configuration files for this blog post are publicly available in this GitHub repository.

Prerequisites

  • MongoDB cluster – either on-prem or VM. It is important to be able to configure mongod to some extent and add external nodes to the replica set.
  • Kubernetes cluster for the Target.
  • kubectl to deploy and manage the Operator and database on Kubernetes.

Prepare the Source

This section explains what preparations must be made on the Source to set up the replication.

Expose

All nodes in the replica set must form a mesh and be able to reach each other. The communication between the nodes can go through the public internet or some VPN. For demonstration purposes, we are going to expose the Source to the public internet by editing mongod.conf:

net:
  bindIp: <PUBLIC IP>

If you have multiple replica sets, you need to expose all nodes of each of them, including the config servers.

TLS

We take security seriously at Percona, which is why by default our Operator deploys MongoDB clusters with encryption enabled. I have prepared a script that generates self-signed certificates and keys with the openssl tool. If you already have a Certificate Authority (CA) in use in your organization, generate the certificates and have them signed by your CA.

The list of alternative names can be found either in this document or in this openssl configuration file. Note the DNS.20 entry:

DNS.20      = *.mongo.spron.in

I’m going to use this wildcard entry to set up the replication between the nodes. The script also generates an ssl-secret.yaml file, which we are going to use on the Target side.
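
For reference, what the script does boils down to a handful of openssl calls roughly like the ones below (a condensed sketch: subject names, validity period, and the ssl.conf file name are illustrative, and the real script drives openssl with the configuration file mentioned above to set all the alternative names):

# Self-signed CA
$ openssl genrsa -out ca.key 4096
$ openssl req -x509 -new -nodes -key ca.key -days 365 -subj "/CN=mongodb-ca" -out ca.pem
# Server key and CSR with the wildcard common name
$ openssl req -newkey rsa:4096 -nodes -keyout mongod.key -subj "/CN=*.mongo.spron.in" -out mongod.csr
# Sign the CSR with the CA, pulling the subject alternative names from the openssl config
$ openssl x509 -req -in mongod.csr -CA ca.pem -CAkey ca.key -CAcreateserial -days 365 -extfile ssl.conf -extensions v3_req -out mongod.crt
# mongod expects the key and certificate concatenated into a single PEM file
$ cat mongod.key mongod.crt > mongod.pem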

You need to upload the CA certificate and the server certificate with its private key to every Source replica set node and then reference them in mongod.conf:

# network interfaces
net:
  ...
  tls:
    mode: preferTLS
    CAFile: /path/to/ca.pem
    certificateKeyFile: /path/to/mongod.pem

security:
  clusterAuthMode: x509
  authorization: enabled

Note that I also set clusterAuthMode to x509, which enforces x509 authentication. Test this carefully in a non-production environment first, as it might break your existing replication.

Create System Users

Our Operator needs system users to manage the cluster and perform health checks. Usernames and passwords for the system users should be the same on the Source and the Target. This script is going to generate the user-secret.yaml to use on the Target and the mongo shell code to add the users on the Source (it is an example, do not use it in production).

Connect to the primary node on the Source and execute mongo shell commands generated by the script.
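
For illustration only, the generated commands look roughly like the following for each system user (the username, password, and role below are placeholders; use the script output so that the credentials match the user-secret.yaml applied on the Target):

use admin
db.createUser({
  user: "clusterAdmin",                      // placeholder, must match the Secret on the Target
  pwd: "clusterAdminPasswordChangeMe",       // placeholder
  roles: [ { role: "clusterAdmin", db: "admin" } ]
})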

Prepare the Target

Apply Users and TLS secrets

The system users’ credentials and TLS certificates must be the same on both sides. The scripts we used above generate Secret object manifests to use on the Target. Apply them:

$ kubectl apply -f ssl-secret.yaml
$ kubectl apply -f user-secret.yaml

Deploy the Operator and MongoDB Nodes

Please follow one of the installation guides to deploy the Operator. Usually, it is a one-step operation through kubectl:

$ kubectl apply -f operator.yaml

MongoDB nodes are deployed with a custom resource manifest – cr.yaml. It contains the following important configuration items:

spec:
  unmanaged: true

This flag instructs the Operator to deploy the nodes in unmanaged mode, meaning they are not configured to form a cluster. The Operator also does not generate TLS certificates or system users.

spec:
…
  updateStrategy: Never

Disable the Smart Update feature as the cluster is unmanaged.

spec:
…
  secrets:
    users: my-new-cluster-secrets
    ssl: my-custom-ssl
    sslInternal: my-custom-ssl-internal

This section points to the Secret objects that we created in the previous step.

spec:
…
  replsets:
  - name: rs0
    size: 3
    expose:
      enabled: true
      exposeType: LoadBalancer

Remember that the nodes need to be exposed and reachable. To do this, we create a Service per Pod. In our case, it is a LoadBalancer object, but it can be any other Service type.

spec:
...
  backup:
    enabled: false

If the cluster and nodes are unmanaged, the Operator should not be taking backups. 

Deploy unmanaged nodes with the following command:

$ kubectl apply -f cr.yaml

Once the nodes are up and running, also check the Services. We will need the IP addresses of the new replicas to add them to the replica set on the Source later.

$ kubectl get services
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)           AGE
…
my-new-cluster-rs0-0    LoadBalancer   10.3.252.134   35.223.104.224   27017:31332/TCP   2m11s
my-new-cluster-rs0-1    LoadBalancer   10.3.244.166   34.134.210.223   27017:32647/TCP   81s
my-new-cluster-rs0-2    LoadBalancer   10.3.248.119   34.135.233.58    27017:32534/TCP   45s

Configure Domains

X509 authentication is strict and requires that the certificate’s common name or alternative name match the domain name of the node. As you remember, we had the wildcard *.mongo.spron.in included in our certificate. It can be any domain that you use, but make sure a certificate is issued for this domain.

I’m going to create A-records pointing to the public IP addresses of the MongoDB nodes:

k8s-1.mongo.spron.in -> 35.223.104.224
k8s-2.mongo.spron.in -> 34.134.210.223
k8s-3.mongo.spron.in -> 34.135.233.58
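
Before adding these nodes to the replica set, it is worth checking that the new records resolve and that the exposed nodes answer over TLS. A quick sanity check could look like this (a sketch: ca.pem is the CA generated earlier, and older mongo shells use --ssl/--sslCAFile instead of the --tls flags):

$ dig +short k8s-1.mongo.spron.in
$ mongo "mongodb://k8s-1.mongo.spron.in:27017/admin" --tls --tlsCAFile ca.pem --eval "db.adminCommand({ ping: 1 })"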

Replicate the Data to the Target

It is time to add our nodes in the Kubernetes cluster to the replica set. Log in to the mongo shell on the Source and execute the following:

rs.add({ host: "k8s-1.mongo.spron.in", priority: 0, votes: 0 })
rs.add({ host: "k8s-2.mongo.spron.in", priority: 0, votes: 0 })
rs.add({ host: "k8s-3.mongo.spron.in", priority: 0, votes: 0 })

If everything is done correctly, these nodes are going to be added as secondaries. You can check the status with the rs.status() command.

Cutover

Check that the newly added nodes are synchronized. The more data you have, the longer the synchronization process is going to take. To understand whether the nodes are synchronized, compare the values of optime and optimeDate of the Primary node with the values for the Secondary nodes in the rs.status() output:

{
        "_id" : 0,
        "name" : "147.182.213.59:27017",
        "stateStr" : "PRIMARY",
...
        "optime" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDate" : ISODate("2021-10-08T12:43:50Z"),
...
},
{
        "_id" : 1,
        "name" : "k8s-1.mongo.spron.in:27017",
        "stateStr" : "SECONDARY",
...
        "optime" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDurable" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDate" : ISODate("2021-10-08T12:43:50Z"),
...
},
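
If you prefer not to compare the timestamps by eye, a small mongo shell snippet can print the lag of every member relative to the primary (a rough sketch based on the rs.status() fields shown above):

var s = rs.status();
var primary = s.members.filter(function (m) { return m.stateStr === "PRIMARY"; })[0];
s.members.forEach(function (m) {
  print(m.name + " lags the primary by " + (primary.optimeDate - m.optimeDate) / 1000 + " seconds");
});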

When nodes are synchronized, we are ready to perform the cutover. Please ensure that your application is configured properly to minimize downtime during the cutover.

The cutover is going to have two steps:

  1. One of the nodes on the Target becomes the primary.
  2. The Operator starts managing the cluster, and the nodes on the Source are no longer part of the replica set.

Switch the Primary

Connect with the mongo shell to the primary on the Source side and make one of the nodes on the Target the primary. This can be done by changing the replica set configuration:

cfg = rs.config()
cfg.members[1].priority = 2
cfg.members[1].votes = 1
rs.reconfig(cfg)

We enable voting and set the priority to two on one of the nodes in the Kubernetes cluster. The member id can be different for you, so please look carefully at the output of the rs.config() command.

Start Managing the Cluster

Once the primary is running in Kubernetes, we are going to tell the Operator to start managing the cluster. Change spec.unmanaged to false in the Custom Resource with the patch command:

$ kubectl patch psmdb my-cluster-name --type=merge -p '{"spec":{"unmanaged": false}}'

You can also do this by changing cr.yaml and applying it. That is it: you now have a cluster in Kubernetes which is managed by the Operator.
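
To confirm that the Operator has taken over, you can watch the custom resource until it reports a ready state (the exact status columns depend on the Operator version):

$ kubectl get psmdb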

Conclusion

You truly start to appreciate Operators once you get used to them. When I was writing this blog post, I found it extremely annoying to deploy and configure a single MongoDB node on a Linux box; I don’t even want to think about a whole cluster. Operators abstract Kubernetes primitives and database configuration and provide you with a fully operational database service instead of a bunch of nodes. Migrating MongoDB to Kubernetes is a challenging task, but it is much simpler with the Operator. And once you are on Kubernetes, the Operator takes care of all day-2 operations as well.

We encourage you to try out our operator. See our GitHub repository and check out the documentation.

Found a bug or have a feature idea? Feel free to submit it in JIRA.

For general questions, please raise the topic in the community forum.

Are you a developer looking to contribute? Please read our CONTRIBUTING.md and send a pull request.

Percona Distribution for MongoDB Operator

The Percona Distribution for MongoDB Operator simplifies running Percona Server for MongoDB on Kubernetes and provides automation for day-1 and day-2 operations. It’s based on the Kubernetes API and enables highly available environments. Regardless of where it is used, the Operator creates a member that is identical to other members created with the same Operator. This provides an assured level of stability to easily build test environments or deploy a repeatable, consistent database environment that meets Percona expert-recommended best practices.


Oct
15
2019
--

Amazon migrates more than 100 consumer services from Oracle to AWS databases

AWS and Oracle love to take shots at each other, but as much as Amazon has knocked Oracle over the years, it was forced to admit that it was in fact a customer. Today in a company blog post, the company announced it was shedding Oracle for AWS databases, and had effectively turned off its final Oracle database.

The move involved 75 petabytes of internal data stored in nearly 7,500 Oracle databases, according to the company. “I am happy to report that this database migration effort is now complete. Amazon’s Consumer business just turned off its final Oracle database (some third-party applications are tightly bound to Oracle and were not migrated),” AWS’s Jeff Barr wrote in the company blog post announcing the migration.

Over the last several years, the company has been working to move off of Oracle databases, but it’s not an easy task to move projects at Amazon’s scale. Barr wrote there were lots of reasons the company wanted to make the move. “Over the years we realized that we were spending too much time managing and scaling thousands of legacy Oracle databases. Instead of focusing on high-value differentiated work, our database administrators (DBAs) spent a lot of time simply keeping the lights on while transaction rates climbed and the overall amount of stored data mounted,” he wrote.

More than 100 consumer services have been moved to AWS databases, including customer-facing tools like Alexa, Amazon Prime and Twitch, among others. It also moved internal tools like AdTech, its fulfillment system, external payments and ordering. These are not minor matters. They are the heart and soul of Amazon’s operations.

Each team moved the Oracle database to an AWS database service like Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS) and Amazon Redshift. Each group was allowed to choose the service they wanted, based on its individual needs and requirements.

Oracle declined to comment on this story.

 

Mar
18
2019
--

PostgreSQL Upgrade Using pg_dumpall


There are several approaches to assess when you need to upgrade PostgreSQL. In this blog post, we look at the option of upgrading a PostgreSQL database using pg_dumpall. As this tool can also be used to back up PostgreSQL clusters, it is a valid option for upgrading a cluster too. We consider the advantages and disadvantages of this approach, and show you the steps needed to achieve the upgrade.

This is the first post of our Upgrading or Migrating Your Legacy PostgreSQL to Newer PostgreSQL Versions series, where we’ll be exploring different paths to accomplish a PostgreSQL upgrade or migration. The series will culminate with a practical webinar to be aired April 17th (you can register here).

We begin this journey by providing you the most straightforward way to carry on with a PostgreSQL upgrade or migration: by rebuilding the entire database from a logical backup.

Defining the scope

Let’s define what we mean by upgrading or migrating PostgreSQL using pg_dumpall.

If you need to perform a PostgreSQL upgrade within the same database server, we’d call that an in-place upgrade or just an upgrade. Whereas a procedure that involves moving your PostgreSQL server from one server to another, combined with an upgrade from an older version (let’s say 9.3) to a newer PostgreSQL version (say PG 11.2), can be considered a migration.

There are two ways to achieve this requirement using logical backups:

  1. Using pg_dumpall
  2. Using pg_dumpall + pg_dump + pg_restore

We’ll be discussing the first option (pg_dumpall) here, and will leave the discussion of the second option for our next post.

pg_dumpall

pg_dumpall can be used to obtain a text-format dump of the whole database cluster, which includes all databases in the cluster. It is also the only method that can be used to back up globals such as users and roles in PostgreSQL.
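
For example, if you only need those globals (roles and tablespace definitions) rather than the full cluster, pg_dumpall can dump just them:

$ pg_dumpall --globals-only > globals.sql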

There are, of course, advantages and disadvantages in employing this approach to upgrading PostgreSQL by rebuilding the database cluster using pg_dumpall.

Advantages of using pg_dumpall for upgrading a PostgreSQL server:

  1. Works well for a tiny database cluster.
  2. Upgrade can be completed using just a few commands.
  3. Removes bloat from all the tables and shrinks the tables to their absolute sizes.

Disadvantages of using pg_dumpall for upgrading a PostgreSQL server:

  1. Not the best option for databases that are huge in size (several GBs or TBs), as it might involve more downtime.
  2. Cannot use parallel mode. Backup/restore can use just one process.
  3. Requires double the space on disk as it involves temporarily creating a copy of the database cluster for an in-place upgrade.

Let’s look at the steps involved in performing an upgrade using pg_dumpall:

  1. Install new PostgreSQL binaries in the target server (which could be the same one as the source database server if it is an in-place upgrade).

    -- For a RedHat family OS
    # yum install postgresql11*
    Or
    -- In an Ubuntu/Debian OS
    # apt install postgresql-11
  2. Shutdown all the writes to the database server to avoid data loss/mismatch between the old and new version after upgrade.
  3. If you are doing an upgrade within the same server, create a cluster using the new binaries on a new data directory and start it using a port other than the source. For example, if the older version PostgreSQL is running on port 5432, start the new cluster on port 5433. If you are upgrading and migrating the database to a different server, create a new cluster using new binaries on the target server – the cluster may not need to run on a different port other than the default, unless that’s your preference.

    $ /usr/pgsql-11/bin/initdb -D new_data_directory
    $ cd new_data_directory
    $ echo "port = 5433" >> postgresql.auto.conf
    $ /usr/pgsql-11/bin/pg_ctl -D new_data_directory start
  4. You might have a few extensions installed in the old PostgreSQL cluster. Get the list of all the extensions created in the source database server and install them for the new version. You can exclude those you get with the contrib module by default. To see the list of extensions created and installed in your database server, you can run the following command.

    $ psql -d dbname -c "\dx"

    Please make sure to check all the databases in the cluster as the extensions you see in one database may not match the list of those created in another database.

  5. Prepare a postgresql.conf file for the new cluster. Carefully prepare this by looking at the existing configuration file of the older version postgres server.
  6. Use pg_dumpall to take a cluster backup and restore it to the new cluster.

    -- Command to dump the whole cluster to a file.
    $ /usr/pgsql-11/bin/pg_dumpall > /tmp/dumpall.sql
    -- Command to restore the dump file to the new cluster (assuming it is running on port 5433 of the same server).
    $ /usr/pgsql-11/bin/psql -p 5433 -f /tmp/dumpall.sql

    Note that I have used pg_dumpall from the new binaries to take the backup.
    Another, easier way is to use a pipe to avoid the time involved in creating a dump file. Just add a hostname if you are performing an upgrade and migration.

    $ pg_dumpall -p 5432 | psql -p 5433
    Or
    $ pg_dumpall -p 5432 -h source_server | psql -p 5433 -h target_server
  7. Run ANALYZE to update the statistics of each database on the new server (see the example right after this list).
  8. Restart the database server using the same port as the source.
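
For step 7, one convenient way to refresh the statistics of every database at once is vacuumdb in analyze-only mode (a sketch, assuming the new cluster is still listening on port 5433):

$ /usr/pgsql-11/bin/vacuumdb --all --analyze-only -p 5433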

Our next post in this series provides a similar way of upgrading your PostgreSQL server while at the same time providing some flexibility to carry on with changes like the ones described above. Stay tuned!


Image based on photo by Sergio Ortega on Unsplash

Aug
13
2018
--

Webinar Tues 8/14: Amazon Migration Service: The Magic Wand to Migrate Away from Your Proprietary Environment to MySQL


Please join Percona’s Solution Engineer, Dimitri Vanoverbeke, as he presents Amazon Migration Service: The Magic Wand to Migrate Away from Your Proprietary Environment to MySQL on Tuesday, August 14th, 2018 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

In this talk, we will learn about the Amazon Migration Tool. The talk will cover the possibilities, potential pitfalls prior to migrating, and a high-level overview of its functionalities.

Register Now

The post Webinar Tues 8/14: Amazon Migration Service: The Magic Wand to Migrate Away from Your Proprietary Environment to MySQL appeared first on Percona Database Performance Blog.

Aug
09
2018
--

Webinar Thursday Aug 9: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM


Please join Autodesk’s Senior Database Engineer, Vineet Khanna, and Percona’s Sr. MySQL DBA, Tate McDaniel, as they present Migrating to Aurora and Monitoring with PMM on Thursday, August 9th, 2018, at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Amazon Web Services (AWS) Aurora is one of the most popular cloud-based RDBMS solutions. The main reason for Aurora’s success is that it’s based on the InnoDB storage engine.

In this session, we will talk about how you can efficiently plan for migration to Aurora using Terraform and Percona products and solutions. We will share our Terraform code for launching AWS Aurora clusters, look at tricks for checking data consistency, verify migration paths and effectively monitor the environment using PMM.

The topics in this session include:

  • Why AWS Aurora? What is the future of AWS Aurora?
  • Build Aurora Infrastructure
  • Using Terraform (Without Data)
  • Restore Using Terraform & Percona XtraBackup (Using AWS S3 Bucket)
  • Verify data consistency
  • Aurora migration
  • 1:1 migration
  • Many:1 migration using Percona Server multi-source replication
  • Show benchmarks and PMM dashboard
  • Demo

Register Now

 

Vineet Khanna, Senior Database Engineer, Autodesk

Vineet Khanna, Senior Database Engineer at Autodesk, has 10+ years of experience as a MySQL DBA. His main professional interests are managing complex database environments, improving database performance, and architecting high availability solutions for MySQL. He has handled database environments of organizations like Chegg, Zendesk, and Adobe.

 

Tate McDaniel, Sr. MySQL DBA

Tate joined Percona in June 2017 as a Remote MySQL DBA. He holds a Bachelor’s degree in Information Systems and Decision Strategies from LSU. He has 10+ years of experience working with MySQL and operations management. His great love is application query tuning. In his off time, he races sailboats, travels the Caribbean by sailboat, and drives all over in an RV.

The post Webinar Thursday Aug 9: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM appeared first on Percona Database Performance Blog.

Jul
04
2018
--

How to Set Up Replication Between AWS Aurora and an External MySQL Instance


Amazon RDS Aurora (MySQL) provides its own low-latency replication. Nevertheless, there are cases where it can be beneficial to set up replication from Aurora to an external MySQL server, as Amazon RDS Aurora is based on MySQL and supports native MySQL replication. Here are some examples of when replicating from Amazon RDS Aurora to an external MySQL server can make good sense:

  • Replicating to another cloud or datacenter (for added redundancy)
  • Need to use an independent reporting slave
  • Need to have an additional physical backup
  • Need to use another MySQL flavor or fork
  • Need to failover to another cloud and back

In this blog post I will share simple step-by-step instructions on how to do it.

Steps to set up MySQL replication from AWS RDS Aurora to a MySQL server

  1. Enable binary logs in the cluster parameter group in Aurora (binlog_format = MIXED). This will require a restart.
  2. Create a snapshot and restore it (create a new instance from a snapshot). This is only needed to make a consistent copy with mysqldump. As Aurora does not allow “super” privileges, running mysqldump --master-data is not possible. The snapshot is the only way to get a consistent backup with the specific binary log position.

  3. Get the binary log information from the snapshot. In the console, look for the “Alarms and Recent Events” for the restored snapshot instance. We should see something like:
    Binlog position from crash recovery is mysql-bin-changelog.000708 31278857
  4. Install MySQL 5.6 (e.g. Percona Server 5.6) on a separate EC2 instance (for Aurora 5.6 – note that you should use MySQL 5.7 for Aurora 5.7). After MySQL is up and running, import the timezones:
    # mysql_tzinfo_to_sql /usr/share/zoneinfo/ | mysql

    Sample config:

    [mysqld]
    log-bin=log-bin
    log-slave-updates
    binlog-format=MIXED
    server-id=1000
    relay-log=relay-bin
    innodb_log_file_size=1G
    innodb_buffer_pool_size=2G
    innodb_flush_method=O_DIRECT
    innodb_flush_log_at_trx_commit=0 # as this is replication slave
  5. From now on we will make all backups from the restored snapshot. First get all users and import those to the new instance:
    pt-show-grants -h myhost...amazonaws.com -u percona > grants.sql

    # check that grants are valid and upload to MySQL

    mysql -f < grants.sql

    Make a backup of all schemas except for the “mysql” system tables, as Aurora uses a different format for those (make sure we connect to the snapshot):

    host="my-snapshot...amazonaws.com"
    databases=`mysql -u percona -h $host -NBe "select group_concat(schema_name separator ' ') from information_schema.schemata where schema_name not in ('mysql', 'information_schema', 'performance_schema')"`
    mysqldump --single-transaction -h $host -u percona \
        --triggers --routines \
        --databases $databases > all.sql
  6. Restore to the local database:
    mysql -h localhost < all.sql
  7. Restore users again (some users may fail to create where there are missing databases):
    mysql -f < grants.sql
  8. Download the RDS/Aurora SSL certificate:
    # cd /etc/ssl
    # wget 'https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem'
    # chown mysql.mysql rds-combined-ca-bundle.pem
  9. Configure MySQL replication. Take the values for the binary log name and position from #3 above. Please note: now we connect to the actual instance, not a snapshot:
    # mysql -h localhost
    ...
    mysql> CHANGE MASTER TO
    MASTER_HOST='dev01-aws-1...',
    MASTER_USER='awsreplication',
    MASTER_PASSWORD='<pass>',
    MASTER_LOG_FILE = 'mysql-bin-changelog.000708',
    MASTER_LOG_POS = 31278857,
    MASTER_SSL_CA = '/etc/ssl/rds-combined-ca-bundle.pem',
    MASTER_SSL_CAPATH = '',
    MASTER_SSL_VERIFY_SERVER_CERT=1;
    mysql> start slave;
  10. Verify that the slave is working. Optionally, configure a delayed replica by adding the MASTER_DELAY option to CHANGE MASTER TO (at any time) and specifying the slave delay in seconds; a short sketch follows below.
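
For illustration, a minimal check plus an optional one-hour delay could look like this (the delay value is just an example):

mysql> SHOW SLAVE STATUS\G
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_DELAY = 3600;
mysql> START SLAVE;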

I hope those steps will be helpful for setting up an external MySQL replica.

The post How to Set Up Replication Between AWS Aurora and an External MySQL Instance appeared first on Percona Database Performance Blog.

Aug
23
2017
--

Migrating Data from an Encrypted Amazon MySQL RDS Instance to an Encrypted Amazon Aurora Instance


In this blog post, we’ll discuss migrating data from an encrypted Amazon MySQL RDS instance to an encrypted Amazon Aurora instance.

One of my customers wanted to migrate from an encrypted MySQL RDS instance to an encrypted Aurora instance. They have a pretty large database, so using mysqldump or a similar tool was not suitable for them. They also wanted to set up replication between the old MySQL RDS and the new Aurora instance.

Spoiler: this is possible without any logical dump.

At first, I checked Amazon’s documentation on encryption and found nothing about this type of migration. What’s more, if I trust the documentation, it looks like they don’t support replication or migration between encrypted MySQL RDS and encrypted Aurora. All instructions are for either “MySQL RDS to MySQL RDS” or “Aurora to Aurora” setups. For example, the documentation says here:

You can create Read Replicas of both encrypted and unencrypted DB clusters. The Read Replica must be encrypted if the source DB cluster is encrypted.

When I tried to create an Aurora read replica of my encrypted MySQL RDS instance, however, the “Enable Encryption” select control was grayed out and I could not change “No” to “Yes”.

I had to find a workaround.

Another idea was creating an encrypted MySQL RDS replica and migrating it to Aurora. While creating an encrypted MySQL replica is certainly possible (actually, all replicas of encrypted instances must be encrypted), it was not possible to migrate it to any other instance using the standard “Migrate Latest Snapshot” option:

However, the documentation specified that Aurora and MySQL RDS use the same AWS KMS key. As a result, both kinds of encryption should be compatible (if not practically the same). Amazon also has the “AWS Database Migration Service“, which has this promising section in its FAQ:

Q. Can I replicate data from encrypted data sources?

Yes, AWS Database Migration Service can read and write from and to encrypted databases. AWS Database Migration Service connects to your database endpoints on the SQL interface layer. If you use the Transparent Data Encryption features of Oracle or SQL Server, AWS Database Migration Service will be able to extract decrypted data from such sources and replicate it to the target. The same applies to storage-level encryption. As long as AWS Database Migration Service has the correct credentials to the database source, it will be able to connect to the source and propagate data (in decrypted form) to the target. We recommend using encryption-at-rest on the target to maintain the confidentiality of your information. If you use application-level encryption, the data will be transmitted through AWS Database Migration Service as is, in encrypted format, and then inserted into the target database.

I decided to give it a try. And it worked!

The next step was to make this newly migrated, encrypted Aurora instance a read replica of the original MySQL RDS instance. This is easy in part with the help of a great how-to on migration by Adrian Cantrill. As suggested there, you only need to find the master’s binary log file and current position and supply them to the stored procedure mysql.rds_set_external_master, then start replication using the stored procedure mysql.rds_start_replication.
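
For illustration, the calls on the Aurora replica look roughly like this (the host, credentials, and binary log coordinates below are placeholders; use the real values from your source instance, and the final 0 disables SSL for the replication connection):

mysql> CALL mysql.rds_set_external_master ('source-mysql-rds...rds.amazonaws.com', 3306, 'repl_user', '<password>', 'mysql-bin-changelog.000001', 4, 0);
mysql> CALL mysql.rds_start_replication;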

Conclusion: While AWS Database Migration Service has limitations for both source and target databases, this solution allows you to migrate encrypted instances easily and securely.


Jun
05
2017
--

Webinar June 7, 2017: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling


Join Percona’s CEO and Founder Peter Zaitsev as he presents MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling on Wednesday, June 7, 2017, at 10 am PDT / 1:00 pm EDT (UTC-7).

Businesses are moving many of the systems and processes they once owned to offsite “service” models: Platform as a Service (PaaS), Software as a Service (SaaS), Infrastructure as a Service (IaaS), etc. These services are usually referred to as being “in the cloud” – meaning that the infrastructure and management of the service in question are not maintained by the enterprise using the service.

When it comes to database environment and infrastructure, more and more enterprises are moving to MySQL in the cloud to manage this vital part of their business organization. We often refer to database services provided in the cloud as Database as a Service (DBaaS). The next question after deciding to move your database to the cloud is “How do I plan properly so as to avoid a disaster?”

Before moving to the cloud, it is important to carefully define your database needs, plan for the migration and understand what putting a solution into production entails. This webinar discusses the following subjects on moving to the cloud:

  • Public and private cloud
  • Migration to the cloud
  • Best practices
  • High availability
  • Scaling

Register for the webinar here.

Peter Zaitsev, Percona CEO and Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone often tap Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

Jul
05
2016
--

LzLabs launches product to move mainframe COBOL code to Linux cloud

Somewhere in a world full of advanced technology that we write about regularly here on TechCrunch, there exists an ancient realm where mainframe computers are still running programs written in COBOL. This is a programming language, mind you, that was developed in the late 1950s and used widely in the ’60s and ’70s and even into the ’80s, but it’s never really gone away. Read More

Sep
22
2014
--

Should you migrate to Percona XtraDB Cluster?

Interest in Percona XtraDB Cluster / Galera has been high ever since we introduced the product in 2012.  I typically have a conversation about Galera and Percona XtraDB Cluster (PXC) at least once a week with a consulting customer who wants to know if it will be a good fit for their application.  Last week I gave a webinar entitled “Migrating to Percona XtraDB Cluster.”

I covered everything in the webinar that I feel is important for someone who is considering Galera to know, and I’d suggest anyone who wants a brief overview of PXC/Galera spend an hour watching the recording. There were many questions asked in the webinar, and I answered all of them regarding Percona XtraDB Cluster. Access to the webinar is free, along with a download of the accompanying slides.

* * *

What is Percona XtraDB Cluster? PXC is a replacement for conventional MySQL master/slave architectures to eliminate replication lag and achieve a highly-available masterless cluster of MySQL servers. Like all Percona software, PXC is open source and free.

The post Should you migrate to Percona XtraDB Cluster? appeared first on MySQL Performance Blog.
