Serverless Deploy Best Practices Question

Hello All,

I’m looking for information on what you’ve found to be best practice for building and deploying Serverless microservices in a large-scale application environment. What I’d “like” to do is create a build script that stands up a clean stage of my entire application - one that includes some Cognito pieces, some DynamoDB pieces, some S3/CloudFront pieces, and some Serverless pieces.

What I’m curious about is how people have successfully grouped things like this into an automated process at both the macro and micro levels. For instance - do you create a bash script to deploy “the world”, and then smaller, more detailed scripts to deploy each piece at the micro level? Also, do you typically script the mapping of your API Gateway endpoints to your custom domains, or do you do that by hand in the console?

Thanks in advance for any input.

-D

One best practice is to use a different AWS account for each stage. If you don’t, it can get messy very quickly. You’ve previously been able to link these accounts for consolidated billing, and AWS Organizations will make that easier.
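Not from the original reply, but as a rough sketch of how that looks in practice: one named AWS CLI profile per stage account, which Serverless can then target with its --aws-profile option. The profile and stage names here are illustrative.

```bash
# Hypothetical profile names; each maps to a separate AWS account.
aws configure --profile myapp-dev    # prompts for the dev account's keys
aws configure --profile myapp-prod   # prompts for the prod account's keys

# Deploy each stage against its own account:
serverless deploy --stage dev  --aws-profile myapp-dev
serverless deploy --stage prod --aws-profile myapp-prod
```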


I don’t know if it’s best practice, but here’s what I’ve got:

  • CloudFormation YAML for almost everything not directly related to the Lambdas, parameterised on a stage variable (and other things): VPC, subnets, S3 endpoints, RDS, S3, CloudFront (for custom domains on both S3 buckets and API Gateway), IAM
  • a bash script for running the above YAML, extracting the stack Outputs, and injecting them into environment .json files for the Lambdas in Serverless
  • a bash script for installing/updating Python dependencies for the Serverless deployment
  • one bash script that takes a stage variable, runs all of the above, and finishes with serverless deploy --stage $STAGE (sketch below)
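To make that concrete, here’s a minimal sketch of what that wrapper script could look like. The stack, template, and file names (my-app-infra, infra.yml, env.$STAGE.json, ./vendored) are illustrative, not from my actual setup:

```bash
#!/usr/bin/env bash
# Hypothetical "deploy the world" wrapper, driven by a single stage variable.
set -euo pipefail

STAGE="${1:?usage: deploy.sh <stage>}"
STACK_NAME="my-app-infra-$STAGE"

# 1. Create/update the CloudFormation stack, parameterised on the stage.
aws cloudformation deploy \
  --template-file infra.yml \
  --stack-name "$STACK_NAME" \
  --parameter-overrides "Stage=$STAGE" \
  --capabilities CAPABILITY_NAMED_IAM

# 2. Extract the stack Outputs and flatten them into an environment .json
#    that the Lambdas read via serverless.
aws cloudformation describe-stacks \
  --stack-name "$STACK_NAME" \
  --query 'Stacks[0].Outputs' \
  --output json \
  | jq 'map({(.OutputKey): .OutputValue}) | add' \
  > "env.$STAGE.json"

# 3. Install/update Python dependencies into the directory Serverless bundles.
pip install -r requirements.txt -t ./vendored

# 4. Deploy the functions themselves.
serverless deploy --stage "$STAGE"
```

The nice thing about the single entry point is that CI and developers run the exact same path, and a new stage is just a new argument.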