Upgrading a production backend


I am using Serverless to deploy a production backend on AWS with many functions, a Cognito user pool, an API Gateway, and several DynamoDB tables.

My question is: What is the right way to do upgrades to the backend without losing all the data in the DynamoDB tables and with minimal downtime?

A few more details: in this case I added an LSI (local secondary index) to one of my DynamoDB tables, and now just running sls deploy won’t work.
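For context, the LSI change looks something like this in the resources section of serverless.yml (table, key, and index names here are illustrative). CloudFormation only allows LocalSecondaryIndexes to be defined at table creation time, so adding one to an existing table requires replacing the table, which is why a plain sls deploy fails:

```yaml
resources:
  Resources:
    MyTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: my-table
        AttributeDefinitions:
          - AttributeName: pk
            AttributeType: S
          - AttributeName: createdAt
            AttributeType: S
        KeySchema:
          - AttributeName: pk
            KeyType: HASH
          - AttributeName: createdAt
            KeyType: RANGE
        # LSIs can only be specified when the table is created, so adding
        # this block to an existing table forces a table replacement.
        LocalSecondaryIndexes:
          - IndexName: byCreatedAt
            KeySchema:
              - AttributeName: pk
                KeyType: HASH
              - AttributeName: createdAt
                KeyType: RANGE
            Projection:
              ProjectionType: ALL
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```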


I would avoid tying production databases to Serverless projects. I would suggest duplicating your existing Serverless project and removing the DynamoDB creation piece from your configuration. Then add whatever permissions are necessary to your new project’s role so it can connect to the existing DynamoDB tables. When you publish your new project, you can switch your custom domain in API Gateway so that it points at your new project. This should eliminate any downtime and preserve your data.

Until you can copy over your DynamoDB tables, I believe you’ll have to leave the old CloudFormation stack intact. You could remove all the functions from it and just leave the DynamoDB piece if you want to clean up the rest of the old components.

Also, you can’t sls deploy anymore because the current configuration and the CloudFormation stack are out of sync. You could try removing the LSI and that might fix this issue.

I agree with @jeremydaly that your CloudFormation stack is now out of sync, but I’m 99% sure you can’t fix it by removing the LSI, because adding or removing an LSI results in creating a new resource with a different ID.

Before you do anything make sure you have backed up the data in your tables. I would then remove the table from my serverless.yml and attempt an sls deploy. This should clean up your CloudFormation by removing the old table. You can then create the table again by adding it to your serverless.yml and deploying before restoring data.

I would disagree on moving the database setup outside of Serverless as I think there are too many advantages to having it inside.

One thing I strongly recommend is setting DeletionPolicy: "Retain" on your tables to prevent accidental deletion if they’re removed from the CloudFormation stack.
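In serverless.yml that looks like this (the table name is illustrative):

```yaml
resources:
  Resources:
    MyTable:
      Type: AWS::DynamoDB::Table
      # Retain keeps the table and its data even if the resource is
      # removed from the template or the whole stack is deleted.
      DeletionPolicy: Retain
      Properties:
        TableName: my-table
        # ...rest of the table definition
```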

@buggy, what are the advantages to keeping production database tables attached to a Serverless project? I would NEVER have Serverless create RDS instances for me.

Which is fine because I would never use RDS with Lambda because of the cold start delays. :stuck_out_tongue_closed_eyes:


  • Every environment automatically stays in sync
  • You can set up your handler events automatically (for example: DynamoDB Streams)
  • You can set up permissions automatically
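As an example of the second point, wiring a handler to a table’s stream is a few lines of serverless.yml (function and table names are illustrative; the table must have a StreamSpecification enabled):

```yaml
functions:
  processChanges:
    handler: handler.process
    events:
      - stream:
          type: dynamodb
          arn:
            Fn::GetAtt: [MyTable, StreamArn]
```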

It’s important to remember that DynamoDB is nowhere near as heavy to set up as RDS. With RDS you also need to set up a database, schema, users, etc. With DynamoDB you create the table and you’re done.

Thanks for your very helpful answers.

I agree that I want to keep the DynamoDB tables part of my Serverless project.

What I am looking for are guidelines or an algorithm for managing the complex update cases automatically (or semi-automatically).

Can anyone share how you manage such cases?

You’re probably going to end up creating a new table and copying data from the old table. Once the data has been copied then deploy the application to use the new table. If you’re dealing with frequently changing data in a large table then you might want to look at DynamoDB streams to help keep the new table in sync with the old table.
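A one-off copy can be sketched like this, assuming boto3 is available and the table names are placeholders for your real tables. The pagination logic is factored out so it can be exercised without AWS:

```python
def scan_all(scan_fn):
    """Drain a paginated Scan by following LastEvaluatedKey."""
    response = scan_fn()
    items = list(response["Items"])
    while "LastEvaluatedKey" in response:
        response = scan_fn(ExclusiveStartKey=response["LastEvaluatedKey"])
        items.extend(response["Items"])
    return items

def copy_table(source_name, dest_name):
    import boto3  # imported here so scan_all stays usable without AWS deps
    dynamodb = boto3.resource("dynamodb")
    source = dynamodb.Table(source_name)
    dest = dynamodb.Table(dest_name)

    items = scan_all(source.scan)

    # batch_writer groups puts into BatchWriteItem calls (25 items max per
    # request) and automatically retries unprocessed items.
    with dest.batch_writer() as writer:
        for item in items:
            writer.put_item(Item=item)
```

For a large, frequently changing table you would run this once for the bulk of the data and then replay the old table’s stream into the new table until cutover, as described above.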

Reusing RDS connections in Lambda significantly mitigates cold start problems: https://www.jeremydaly.com/reuse-database-connections-aws-lambda/
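The pattern from that article, translated to Python, is a minimal module-level cache: the connection is created outside the handler’s hot path, so it survives across warm invocations. The `factory` argument stands in for your real client setup (e.g. a lambda wrapping pymysql.connect with your host and credentials):

```python
# Cached at module level, so it persists for the lifetime of the
# Lambda container and is shared across warm invocations.
_connection = None

def get_connection(factory):
    """Create the DB connection once per container, then reuse it."""
    global _connection
    if _connection is None:
        _connection = factory()  # only pays the cost on a cold start
    return _connection
```

A real handler would call get_connection at the top of each invocation; only the first call per container actually opens a connection.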

Also, I agree that certain types of applications are perfect for a DynamoDB backend. Personally, I like the ability to tear down and recreate my Serverless stacks without worrying about the effect on my underlying data.