MongoDB to Redshift

This page provides you with instructions on how to extract data from MongoDB’s backend and load it into Amazon Redshift. (If this manual process is a bit more involved than you’d prefer, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

Pulling Data Out of MongoDB

In order to get your MongoDB data into Amazon Redshift, you have to start by extracting it from your MongoDB database. This is extremely hard, and depending on how you have loaded data into MongoDB over time, it may be impossible to do completely.

The reason is that NoSQL databases don’t force any structure (i.e. specific columns) on the records you insert into a collection (the analogue of a table in a more traditional database system). Redshift, on the other hand, uses a traditional, rigid relational structure, so you will have to define a schema up front and fit your MongoDB data into it.

The good news is that, despite the flexibility of MongoDB, the vast majority of data that gets inserted into most collections is machine-generated and therefore is predictably structured. That means there is a high likelihood that at least some key fields will exist in every single record. You’ll need to create the Redshift table that maps as closely as possible to the fields that are important to you and consistently appear in the records in each collection you are attempting to replicate.

You can retrieve data from Mongo in a number of ways, but the most common is by running the find() command on a collection.

Sample MongoDB Code and Results

MongoDB stores and returns JSON-formatted data. Below is an example of the kind of find() command and response you might see when querying a products collection.


db.products.find( { qty: { $gt: 25 } }, { _id: 0, qty: 0 } )

{ "item" : "pencil", "type" : "no.2" }
{ "item" : "bottle", "type" : "blue" }
{ "item" : "paper" }

Inserting MongoDB Data into Redshift

Once you have identified all of the columns you will want to insert, you can use the CREATE TABLE statement in Redshift to create a table that can receive all of this data.
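
For example, sticking with the products collection from the sample above, a Redshift table covering its consistently present fields might look something like this (the column names and types are assumptions based on the sample documents, not a required schema):

CREATE TABLE products (
    item VARCHAR(255),   -- e.g. "pencil", "bottle", "paper"
    type VARCHAR(255),   -- not present on every document, so it may be NULL
    qty  INTEGER         -- the quantity field used in the earlier find() filter
);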

With a table built, it may seem like the easiest way to add your data (especially if there isn’t much of it) is to build INSERT statements that add data to your Redshift table row by row. If you have any experience with SQL, this will be your gut reaction. But beware! Redshift isn’t optimized for inserting data one row at a time. If you have any kind of high-volume data to load, you’re much better off loading it into Amazon S3 and then using the COPY command to load it into Redshift.
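
Here’s a rough sketch of that flow, assuming your extraction script has written the collection out as JSON files in an S3 bucket (the bucket path and IAM role below are placeholders, not real values):

COPY products
FROM 's3://your-bucket/mongodb/products/'
IAM_ROLE 'arn:aws:iam::123456789012:role/YourRedshiftCopyRole'
FORMAT AS JSON 'auto';

With JSON 'auto', Redshift matches the keys in each JSON object to the table’s column names, which works well when your export preserves the field names from MongoDB.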

Keeping Data Up-To-Date

So, now what? You’ve built a script that pulls data from MongoDB and loads it into Redshift, but what happens tomorrow when you have ten new transactions?

The key is to build your script in such a way that it can also identify incremental updates to your data. Thankfully, since you can add whatever fields you like to your MongoDB documents, you can create fields like created_at and modified_at to record when your data changes, which makes it easy to find() new and updated records.
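
For instance, assuming your application sets a modified_at timestamp on every insert and update (a convention you maintain yourself; MongoDB doesn’t manage it for you), each run of your script could pull only the documents that changed since the previous run:

// Timestamp of the last successful extraction, tracked by your script
var last_run = ISODate("2024-01-01T00:00:00Z");

// Fetch only documents created or modified since then
db.products.find( { modified_at: { $gt: last_run } } )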

Other Data Warehouse Options

Redshift is totally awesome, but sometimes you need to start smaller or optimize for different things. In this case, many people choose to get started with Postgres, an open source RDBMS that uses nearly identical SQL syntax to Redshift. If you’re interested in seeing the relevant steps for loading this data into Postgres, check out MongoDB to Postgres.

Easier and Faster Alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your MongoDB data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Amazon Redshift data warehouse.