Learn how to modify an AWS Database Migration Service job to continuously replicate data, and to view the ongoing sync using MySQL Workbench and pgAdmin database clients.
- [Instructor] Now that we've seen a one-time data migration, let's see how the Database Migration Service can achieve ongoing replication. Click into the DMS link, then go to Tasks. We've got the one-time migration here, which is done and can be kicked off again any time we like. However, we want to create a task that will do this on a continuing basis. We'll call this replication-task. It will use our existing replication instance and our existing MySQL and Postgres endpoints. Remember how I said those could be reused? We'll check Start task on create, but just in case it doesn't start on its own, we may have to kick it off ourselves.
Migration type is where we want to change things: we want to migrate existing data and replicate ongoing changes. We get a warning here that replicating from our MySQL database requires the MySQL binary log to be enabled and set to ROW format. This is the parameter change that we made earlier in the chapter, so we should be good. Scrolling down, we'll keep these defaults, and then comes the all-important table mappings. We'll make sure that the MySQL RDS demo schema is set to use a wildcard and pull all tables, and that the action is Include.
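If you want to double-check that warning before creating the task, you can query the relevant server variables directly from MySQL Workbench. A minimal sketch (these are standard MySQL variable names; the values shown are what DMS requires for change data capture):

```sql
-- Confirm binary logging is enabled and using ROW format,
-- as AWS DMS requires for ongoing replication (CDC).
SHOW VARIABLES LIKE 'log_bin';        -- expect: ON
SHOW VARIABLES LIKE 'binlog_format';  -- expect: ROW
```

On RDS you can't set these with `SET GLOBAL`; `binlog_format` is controlled through the DB parameter group, which is the change we made earlier.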
Add this selection rule and we're good to go. Click Create. The task is now creating. When it hits the ready state, if it doesn't start right away, we'll start it manually. Unlike the one-time migration, this replication task is actually starting on its own, and now it's running. So what should happen at this point is that the replication instance will watch for changes on the MySQL source and convey them over to the Postgres target. To see that in action, let's go to MySQL Workbench and make a change. Here I am, already connected to the MySQL database source.
Let's select all the employees just so we see what we've got. I'll highlight the query and use the lightning bolt to run it. Okay, so we've got one through 12 and number 99, which I added earlier. Let's look at pgAdmin and make sure it's the same. Okay, one through 12 and number 99. Let's go back to MySQL and add one more row: this is employee number 100. When we run our select again, we can see that the insert has taken place here, of course.
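The queries in this step look roughly like the following. This is a sketch against the demo schema: the `employees` table name comes from the course database, but the exact column names and the values for employee 100 are assumptions for illustration.

```sql
-- On the MySQL source: see the current rows.
SELECT * FROM employees;

-- Add one more row. Column names here are assumed from the demo schema.
INSERT INTO employees (id, first_name, last_name)
VALUES (100, 'Test', 'Employee');

-- Confirm the insert locally before checking the Postgres target.
SELECT * FROM employees WHERE id = 100;
```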
Now let's go look at the pgAdmin window and the Postgres target. We'll query all rows one more time. It looks like the replication actually didn't take place. Let's go find out why. I'll head back to the AWS console. Here we can see that the status of the replication task is Failed. Well, what happened? If we scroll down in the overview, not only do we see the status, but we see the last failure message: MySQL binary logging must use ROW format. I thought we changed that already. So let's head back to the RDS console and see what's going on.
Under MySQL, on the magnifying glass tab, we can look at the parameter group, and we can see that its status didn't change from pending to available or in-sync: it changed to pending-reboot. So this change actually needed a reboot, and RDS didn't go ahead and do it, so we can initiate our own. Click MySQL, Instance actions, and choose Reboot. Are you sure? Say yes, click Reboot. This will take a few moments, and when it's done, the MySQL instance will come back up with the new parameter set, and we can attempt the replication again.
Now that the MySQL instance has rebooted, let's head back to the DMS console. Click Database Migration Service and go to Tasks. We'll Start/Resume this failed task. We have an option to start or restart; in this case we'll choose Restart. We'll wait for the status to go back to Running. Now the status says Load complete, but if you look below, the actual status is Running. So we should be able to make a change and see it replicate over to Postgres. Let's head to MySQL Workbench.
And we'll try something a little more involved: we'll create another table. We'll just call it new table, give it a message value, and create it here. So now we have a new table. It won't have anything in it, but we can see that it exists on the MySQL side. Okay, let's look back at DMS. It still only says four tables loaded. But after a refresh, we see that the change has replicated, because now it says five tables.
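The DDL for this step might look like the following sketch. The table name `new_table` and a `message` column follow the transcript; the exact column types are assumptions.

```sql
-- On the MySQL source: create a brand-new table.
-- With ongoing replication running, DMS should pick up this DDL
-- and create the corresponding table on the Postgres target.
CREATE TABLE new_table (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  message VARCHAR(255)
);
```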
We can check that out in Postgres. Back in pgAdmin, if we refresh the table list, there it is: our new table. So now we've achieved continuous, ongoing replication from MySQL to Postgres, a heterogeneous migration and replication.
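On the target side, a couple of queries can confirm both changes came across. This assumes the replicated objects land in the schema pgAdmin shows; by default, DMS maps the source database name to a schema of the same name on Postgres, so you may need to qualify the table names with that schema.

```sql
-- On the Postgres target (pgAdmin): confirm the replicated objects.
SELECT * FROM new_table;         -- empty, but the table should now exist

SELECT COUNT(*) FROM employees;  -- should include replicated row 100
```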
Join AWS architect Brandon Rich and learn how to configure object storage solutions and lifecycle management in Simple Storage Service (S3), a web service offered by AWS, and migrate, back up, and replicate relational data in RDS. Find out how to leverage flexible network storage with Elastic File System (EFS), and use the new AWS Glue service to move and transform data. Plus, learn how Snowball can help you transfer truckloads of data in and out of the cloud.
- What is data management?
- AWS S3 basics
- S3 bucket creation
- S3 upload and logging
- S3 event notifications
- S3 data lifecycle configuration
- Working with Amazon Elastic Block Store volumes
- Creating and mounting an EFS
- Creating an AWS RDS instance
- RDS backup and recovery
- Moving data with AWS Database Migration Service
- Moving data with Data Pipeline and Glue