Serverless E2E Testing

Hi all!

I’ve read quite a few articles suggesting that spinning up a temporary stack is useful for running automated tests. We want to run a set of acceptance tests against this temporary stack as part of our E2E tests. Since API Gateway generates a different endpoint URL for each temporary stack we set up, what’s the best way to get this endpoint and pass it into our acceptance test scripts?

As an aside, how is everyone E2E testing their services? If you have multiple services that work together to perform a task, are you spinning them all up in a temporary environment like this? I’ve thought about mocking, but most of the integration complexity now lies in the AWS configuration…

Thanks for any help! 🙂


There are a couple of ways you can tackle the first issue:

  1. Write a script that fetches the newly created URL after the serverless deploy finishes, then writes it to a .env file for your tests (unit, acceptance) to read — see the sketch after this list.
  2. Assign a custom domain to each new stack; a handy convention is to derive the subdomain from your task ID, such as the JIRA issue key.
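
For option 1, here’s a rough sketch in Node (untested). It assumes the aws-sdk v2 is installed and that your stack exposes the endpoint as a CloudFormation output named ServiceEndpoint, which the Serverless Framework adds by default for API Gateway endpoints:

    // get-endpoint.js — rough sketch, not verified end to end.
    const fs = require('fs');
    const AWS = require('aws-sdk');

    const stackName = process.argv[2]; // e.g. "my-service-JIRA-123"
    const cloudformation = new AWS.CloudFormation({ region: process.env.AWS_REGION });

    cloudformation.describeStacks({ StackName: stackName }).promise()
      .then((res) => {
        const outputs = res.Stacks[0].Outputs || [];
        const endpoint = outputs.find((o) => o.OutputKey === 'ServiceEndpoint');
        if (!endpoint) throw new Error('ServiceEndpoint output not found');
        // Append the URL to .env so the acceptance tests can pick it up.
        fs.writeFileSync('.env', `API_ENDPOINT=${endpoint.OutputValue}\n`, { flag: 'a' });
        console.log(`Wrote API_ENDPOINT=${endpoint.OutputValue} to .env`);
      })
      .catch((err) => {
        console.error(err.message);
        process.exit(1);
      });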

Regarding temporary stacks for each new task (feature environments): not sure whether you’ve read Yan Cui’s posts on this, but if not, you should take a look.


I haven’t verified, but could you add the endpoint as an Output to your CloudFormation stack and then print/parse it via:

sls info --verbose
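
Again unverified, but something along these lines should work for scraping the endpoint out of that command’s output (assuming a "ServiceEndpoint: <url>" line under Stack Outputs, which is the Serverless Framework v1 behaviour):

    // Shell out to the CLI and scrape the endpoint from its output.
    const { execSync } = require('child_process');

    const info = execSync('sls info --verbose', { encoding: 'utf8' });
    const match = info.match(/ServiceEndpoint:\s*(\S+)/);
    if (!match) throw new Error('ServiceEndpoint not found in sls info output');
    console.log(match[1]); // the API Gateway base URL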

I’ve had good success with the serverless-stack-output plugin.

I output any properties I need in follow-up scripts. Configure the plugin to write them to ./${self:provider.stage}-stack.json, which I can then read in my script.
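
As a sketch, a follow-up script might look like this (assuming stage dev, so the plugin wrote ./dev-stack.json; the output key name is illustrative, so check your own stack outputs):

    // Read the outputs file written by serverless-stack-output.
    const outputs = require('./dev-stack.json');

    const endpoint = outputs.ServiceEndpoint; // hypothetical key name
    process.env.API_ENDPOINT = endpoint;      // expose it to the test run
    console.log(`Running acceptance tests against ${endpoint}`);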

Hope it helps.

I’m also interested in learning how others are performing end-to-end testing of their services. There are a lot of resources suggesting different approaches, from testing in the cloud to testing locally with mocks or tools like serverless-offline and localstack. There doesn’t seem to be a clear standard in practice, however, so I’m curious what methods people are using.

The limited work I’ve done has used a separate temporary environment for testing, because our services mostly involve Lambda, SQS, SNS, and API Gateway, which are quick to spin up and tear down. I imagine people using RDS need alternative approaches, since instantiating a new database instance takes significant time.

From my experience, local testing is difficult but not impossible, and it does not require bloated tools such as localstack that try to emulate AWS services. The thing to bear in mind about Lambda functions is that ultimately they are just functions: they take parameters and should produce an expected output for those parameters. This means we can use tools as simple as Mocha (or its equivalents in other runtimes) and write plain unit tests to execute code locally. That doesn’t mean you have to go down the rabbit hole and build out a full suite of unit tests (you can of course do so if you want), but a tool like this lets you repeatedly execute your function code with chosen parameters and tie that execution into a debugger.
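
For example, a bare-bones Mocha test along these lines (the handler path and event shape are illustrative, not from a real project):

    // test/handler.test.js — run with `npx mocha`, or attach a debugger
    // via `mocha --inspect-brk`.
    const assert = require('assert');
    const { handler } = require('../handler'); // hypothetical handler under test

    describe('my-function', () => {
      it('returns a 200 for a valid request', async () => {
        // Craft the event by hand — for API Gateway you can copy a sample
        // payload from the AWS docs or from a logged invocation.
        const event = { body: JSON.stringify({ name: 'test' }) };
        const result = await handler(event);
        assert.strictEqual(result.statusCode, 200);
      });
    });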

The only remaining issue is interaction with AWS services, and this is where mocking steps in. I know a lot of folks immediately hear “mocking” and think it’s bad practice. Mocking is ONLY bad practice if you use it to bypass the things you should actually be testing, such as your own classes. But why would you want to test that a putItem call to DynamoDB succeeds? What’s far more useful is testing what happens when that call fails. You can mock both success and failure of AWS services and test each path; you cannot test failure if you are tied to the real DynamoDB service, and forcing failures in a locally emulated version is no cakewalk either.
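
As a sketch of what that looks like in Node, using the aws-sdk-mock module (the handler name and status codes are illustrative assumptions):

    const assert = require('assert');
    const AWSMock = require('aws-sdk-mock');
    // Note: aws-sdk-mock only intercepts clients constructed *after* the mock
    // is registered, so the handler should create its DocumentClient inside
    // the function body rather than at module load.
    const { handler } = require('../handler'); // hypothetical handler under test

    describe('saving an item', () => {
      afterEach(() => AWSMock.restore('DynamoDB.DocumentClient'));

      it('succeeds when the put call succeeds', async () => {
        AWSMock.mock('DynamoDB.DocumentClient', 'put',
          (params, callback) => callback(null, {})); // simulate success
        const result = await handler({ body: '{"id":"1"}' });
        assert.strictEqual(result.statusCode, 200);
      });

      it('surfaces a failure when DynamoDB errors', async () => {
        AWSMock.mock('DynamoDB.DocumentClient', 'put',
          (params, callback) => callback(new Error('ProvisionedThroughputExceededException')));
        const result = await handler({ body: '{"id":"1"}' });
        assert.strictEqual(result.statusCode, 500);
      });
    });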

“But Gareth … that all sounds so painful to set up,” I hear you say. Not really. There are existing Node modules and Serverless plugins you can use, as well as a handy-dandy blog post I wrote a few months back to guide you through setting it all up. At the end you will have a way to repeatedly execute your function code with a debugger attached, and the best part is, it’s a unit test, so you can run it in CI/CD as well. To sum up the benefits:

  1. Your machine does not get bloated with emulated services.
  2. You no longer need to deploy code into the cloud just to discover you have a syntax error, spend 10 seconds fixing it, and another minute redeploying.
  3. You can now test offline, as mocking keeps everything local.
  4. Every service is entirely modular, and for a new developer, getting started with an existing service is just an npm install; their “testing environment” is all set up and ready to go.
  5. As a developer you are no longer relying on tools like serverless-offline, which only helps test HTTP endpoints; you are now free to explore the breadth and depth of events in AWS, making your app richer in every way.
  6. Probably a bunch of benefits I didn’t cover.

Hi Gareth. Thanks for the above!

I read through your article and really liked it. One thing I noticed: at the end you wrote, “The next step from here is to look at how we can incorporate this service we have been building in isolation into the rest of our application as a whole”. Do you have any further thoughts on this? I find the biggest issue with these smaller services is not so much the business logic but the configuration and maintaining the contracts between the different services.