Dry run with serverless framework stack

I need your input on deployment: how do you test your code before deploying it?

I'm looking for a dry-run option with sls so that I can see the differences between the old and the new code.

Any suggestions? Especially for production deployments, how do you gain the confidence to deploy updated code with sls deploy?


The first command I found is:

sls deploy --noDeploy --stage dev

With the --noDeploy option, I can validate the serverless.yml file.

But this is not enough; I need more checks.
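One more lightweight check at this level is printing the fully resolved configuration, which catches variable-resolution problems without deploying anything (a small sketch; assumes a Serverless Framework version that includes the print command):

# Print serverless.yml with all variables resolved, without deploying.
sls print --stage dev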

Hi Bill,

As with most methods of development, you need to maintain control of what gets taken live. With Serverless, if you have a production environment you deploy to, then you should never make manual changes to the deployed service; only add, remove or alter resources through the serverless.yml file for that service. That way, if you are the only developer deploying to that service, then when you deploy changes, the service will end up being exactly what you deployed.

If what you are after is some way to test your code locally before deploying, then I would recommend a setup involving local unit tests. Lambda functions are just basic functions that receive a specially structured event object as a parameter from some external trigger such as API Gateway, S3, SNS, SQS or a multitude of other sources. So in order to execute your code locally, you need some way to call your function with an event object that matches the event object of the trigger you expect to call your function.
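For example, a minimal sketch of executing a function locally against a saved event object (the function name and event file path are placeholders):

# Run the function locally with a hand-crafted event object.
sls invoke local --function hello --path test/events/apigateway-get.json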

The serverless-mocha-plugin makes this easier by combining mocha (a unit testing module) and your function together. You can then build the event object you need to send to the function and see that your code works as intended. I have found this AWS documentation has some great event object examples. Once you have a test event object, mocha lets you run your function repeatedly while you are developing to make local development easier.
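A rough sketch of that workflow, assuming serverless-mocha-plugin is installed and listed under plugins in serverless.yml (the function name is a placeholder):

# Scaffold a mocha test for an existing function.
sls create test --function hello

# Run the generated tests locally.
sls invoke test --function hello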

Now you can execute your functions locally, but they will still attempt to make calls to any AWS services you may be using. To help with this I have found the aws-sdk-mock module very useful. It allows you to intercept calls to AWS services like S3, DynamoDB, etc. and return your own custom responses, even errors. So now you can test how your code handles specific responses from these AWS services.

Let me know if this makes sense or you want further clarification.


I guess I should also add, if what you are looking for is a way to test your service on AWS before going live, this is doable by deploying to an alternate AWS account and testing there first.


Thanks @garethmcc

There are several levels of dry run:

  1. IaC (infrastructure as code)

I use Terraform with the plan option, so I can clearly see the resource changes before I apply them.

With this dry-run option (terraform plan), I can check API Gateway, Lambda (not the code details, only their settings), DynamoDB tables, SQS/SNS and other resources.

I only check the configuration changes at this stage. In production it shouldn't change frequently, but I still need to find a way to run a dry run first.
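For comparison, the Terraform workflow described above is roughly (the plan file name is arbitrary):

# Produce a reviewable execution plan without changing anything.
terraform plan -out=tfplan

# After approval, apply exactly the plan that was reviewed.
terraform apply tfplan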

With sls deploy, or sls deploy --noDeploy, I can't get similar output for reference.

Think about this scenario: I set up a CI/CD pipeline, and I normally configure these steps:

  • dry-run
  • approval
  • deploy (or apply the change)

With every deployment I need to go through the above steps. What should I put in the first step (dry run)?

I promote the deployment from dev to uat, then to production. Even if the changes are all fine in dev/uat, I still need a dry-run step before the production deployment.

  2. Unit tests (which mostly focus on Lambda function code).

I agree with running mocha/chai or other testing tools.

  3. More: I am still looking for what else should be included.

Regarding point #1, here is a second test I added.

I save the file .serverless/cloudformation-template-update-stack.json as an artifact when deploying the serverless stack on the master branch. I set this task in the CI/CD pipeline and run a test similar to the one below:

# After stack is deployed. 
aws s3 cp .serverless/cloudformation-template-update-stack.json s3://<bucket_name>/config/<environment>/cloudformation-template-update-stack.json

When the test runs, it generates the latest package file and compares it with the package file that is currently deployed:

# Before stack is deployed.
aws s3 cp s3://<bucket_name>/config/<environment>/cloudformation-template-update-stack.json  /tmp/cloudformation-template-update-stack.json 
diff -w .serverless/cloudformation-template-update-stack.json /tmp/cloudformation-template-update-stack.json

It is not perfect, but it does give me some readable detail on how the IaC changes.
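The "generate the latest package file" step can be done without deploying anything, which is what makes this usable as a pre-deployment dry run. A minimal sketch (the stage name is a placeholder):

# Compile the CloudFormation template into .serverless/ without deploying.
sls package --stage <environment>

# Then run the s3 cp / diff commands above against the freshly generated template.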

I am currently going through the AWS CDK (AWS Cloud Development Kit v2); it has a diff feature for comparing resource changes. I am checking whether I can use it alongside Serverless Framework deployments.

The CDK Toolkit is a command line tool for interacting with CDK apps. It enables developers to synthesize artifacts such as AWS CloudFormation templates, deploy stacks to development AWS accounts, and diff against a deployed stack to understand the impact of a code change.
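For reference, a minimal sketch of that diff workflow with the CDK CLI (the stack name is a placeholder):

# Synthesize the CloudFormation template from the CDK app.
cdk synth

# Show the differences between the app and the currently deployed stack.
cdk diff MyStack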

Regarding "an alternate AWS account and testing there first": of course, following AWS best practices, the code will be deployed to dev/uat/staging or other non-prod environments first, then to prod.

But before deploying to dev, I'd like to run some tests first.

I second this. In particular, what I really need here is to know exactly which CloudFormation changes are and are not going to be made before they are applied.