Customer wants to migrate data from DynamoDB to Aurora MySQL. This project provides guidance on setting up an AWS Glue job to load the data into the MySQL destination.
- Export the DynamoDB table to S3
- Use a Glue Crawler to crawl the data and generate a Glue table
- Perform the mapping logic in a Glue job
- Export the data to MySQL using a Glue Connection

- Export the DynamoDB table to S3 in JSON format, following the guide here.
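  If you prefer to script the export instead of using the console, a minimal boto3 sketch looks like the following; the table ARN, bucket, and prefix are placeholders, and point-in-time recovery must be enabled on the table:

  ```python
  import boto3

  dynamodb = boto3.client("dynamodb")

  # Export requires point-in-time recovery (PITR) to be enabled on the table.
  response = dynamodb.export_table_to_point_in_time(
      TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",  # placeholder ARN
      S3Bucket="my-export-bucket",   # placeholder bucket
      S3Prefix="dynamodb-export/",   # placeholder prefix
      ExportFormat="DYNAMODB_JSON",
  )
  print(response["ExportDescription"]["ExportStatus"])
  ```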
- Create a Glue Crawler that crawls the S3 folder containing the exported JSON file.
- Run the Crawler to generate a Glue table, and make a note of the table name; you will need to enter it in your job script later.
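  Creating and running the crawler can also be scripted; a sketch with hypothetical names follows (the IAM role needs Glue permissions plus read access to the export bucket):

  ```python
  import boto3

  glue = boto3.client("glue")

  glue.create_crawler(
      Name="dynamodb-export-crawler",  # placeholder name
      Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
      DatabaseName="glue-database",    # Glue database that will hold the generated table
      Targets={"S3Targets": [{"Path": "s3://my-export-bucket/dynamodb-export/"}]},
  )
  glue.start_crawler(Name="dynamodb-export-crawler")
  ```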
- Download the MySQL JDBC connector.
- Pick the MySQL connector .jar file (such as mysql-connector-java-8.0.19.jar) and upload it to your Amazon Simple Storage Service (Amazon S3) bucket.
- Make a note of that path, because you will use it in the AWS Glue job to establish the JDBC connection with the database.
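  For example, uploading the driver with boto3 (bucket and key are placeholders):

  ```python
  import boto3

  s3 = boto3.client("s3")

  # Keep the resulting s3:// path; the Glue job needs it to load the driver.
  s3.upload_file(
      "mysql-connector-java-8.0.19.jar",
      "my-glue-assets-bucket",
      "jars/mysql-connector-java-8.0.19.jar",
  )
  ```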
- Create a Connection in Glue for the MySQL connection.
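  If you want to create the connection programmatically rather than in the console, here is a sketch; every value below is a placeholder for your own Aurora MySQL endpoint, credentials, and VPC details:

  ```python
  import boto3

  glue = boto3.client("glue")

  glue.create_connection(
      ConnectionInput={
          "Name": "mysql-connection",
          "ConnectionType": "JDBC",
          "ConnectionProperties": {
              "JDBC_CONNECTION_URL": "jdbc:mysql://your.mysql.host.here:3306/YourDatabase",
              "USERNAME": "your-user-name",
              "PASSWORD": "your-password",
          },
          # Glue runs the job inside this subnet/security group to reach Aurora.
          "PhysicalConnectionRequirements": {
              "SubnetId": "subnet-0123456789abcdef0",
              "SecurityGroupIdList": ["sg-0123456789abcdef0"],
              "AvailabilityZone": "us-east-1a",
          },
      }
  )
  ```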
- Create a VPC endpoint for Amazon S3 so that Glue, which runs inside your VPC when a JDBC connection is attached, can reach the bucket.
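  A sketch of creating the S3 gateway endpoint with boto3 (the VPC ID, route table ID, and Region are placeholders):

  ```python
  import boto3

  ec2 = boto3.client("ec2")

  # A job that uses a JDBC connection runs inside your VPC, so it needs a
  # gateway endpoint to reach S3 without internet access.
  ec2.create_vpc_endpoint(
      VpcEndpointType="Gateway",
      VpcId="vpc-0123456789abcdef0",
      ServiceName="com.amazonaws.us-east-1.s3",
      RouteTableIds=["rtb-0123456789abcdef0"],
  )
  ```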
- Download the ETL script:

  ```
  git clone https://github.com/Simone319/dynamodb-to-aurora.git
  ```
- Change the parameters in `s3-to-mysql-script.py` to your own values:
  - `your.mysql.host.here`
  - `YourDatabase`
  - `your-MySQL-table`
  - `your-user-name`
  - `your-password`
  - `<path-to-connector>` (this is where you uploaded your MySQL connector .jar file in the previous step)
  - `glue-database`
  - `glue-table`
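  As an illustration only, the parameter block you edit might look like the following; the actual variable names in `s3-to-mysql-script.py` may differ:

  ```python
  # Hypothetical parameter block; match the names actually used in the script.
  mysql_host = "your.mysql.host.here"
  mysql_database = "YourDatabase"
  mysql_table = "your-MySQL-table"
  mysql_user = "your-user-name"
  mysql_password = "your-password"
  jdbc_driver_path = "s3://<path-to-connector>/mysql-connector-java-8.0.19.jar"
  glue_database = "glue-database"
  glue_table = "glue-table"
  ```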
- Finally, change the data mapping logic as needed.
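  If the script maps columns with Glue's `ApplyMapping` transform (a common pattern; the repo script may do this differently), a sketch with hypothetical column names:

  ```python
  from awsglue.context import GlueContext
  from awsglue.transforms import ApplyMapping
  from pyspark.context import SparkContext

  glue_context = GlueContext(SparkContext.getOrCreate())

  # Read the table generated by the crawler.
  source = glue_context.create_dynamic_frame.from_catalog(
      database="glue-database",
      table_name="glue-table",
  )

  # Each tuple is (source column, source type, target column, target type).
  # DynamoDB JSON exports nest attributes under "Item", hence the dotted paths.
  mapped = ApplyMapping.apply(
      frame=source,
      mappings=[
          ("Item.id.S", "string", "id", "string"),
          ("Item.created_at.N", "string", "created_at", "bigint"),
      ],
  )
  ```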
- Set up and run the Glue ETL job by uploading the script `s3-to-mysql-script.py`.
- In the Job details, expand Advanced properties and remember to add the Connection you just created.
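  If you prefer to automate this step, a boto3 sketch that creates the job with the connection attached and ships the JDBC driver (the job name, role, and S3 paths are placeholders):

  ```python
  import boto3

  glue = boto3.client("glue")

  glue.create_job(
      Name="s3-to-mysql-job",
      Role="arn:aws:iam::123456789012:role/GlueJobRole",  # placeholder role
      Command={
          "Name": "glueetl",
          "ScriptLocation": "s3://my-glue-assets-bucket/scripts/s3-to-mysql-script.py",
          "PythonVersion": "3",
      },
      # Equivalent to adding the Connection under Advanced properties in the console.
      Connections={"Connections": ["mysql-connection"]},
      GlueVersion="3.0",
      DefaultArguments={
          # Path to the MySQL connector .jar uploaded earlier.
          "--extra-jars": "s3://my-glue-assets-bucket/jars/mysql-connector-java-8.0.19.jar",
      },
  )
  glue.start_job_run(JobName="s3-to-mysql-job")
  ```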