Hi all, first post here.
I was wondering how people are testing their Lambdas; I’ve used Jest but I can’t seem to get it to mock the AWS-SDK properly. Has anyone else had similar issues?
Thanks in advance for any help
I build my handlers in a way that separates my internal functions (which I unit test, and easily) from the functions that require access to external resources (like the aws-sdk). On principle I try to avoid testing things I don’t own; it’s not always possible.
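To illustrate (a minimal sketch; the module names and applyDiscount logic are mine, not the poster’s): the pure logic lives in its own module with no aws-sdk import, and the handler is a thin glue layer around it:

// pricing.ts – pure logic, trivially unit-testable, no aws-sdk anywhere
export const applyDiscount = (price: number, percent: number): number =>
  Math.round(price * (1 - percent / 100) * 100) / 100;

// handler.ts – the only file that touches external resources
import { DynamoDB } from "aws-sdk";
import { applyDiscount } from "./pricing";

const client = new DynamoDB.DocumentClient();

export const handler = async (event: { id: string; percent: number }) => {
  const { Item } = await client
    .get({ TableName: "products", Key: { id: event.id } })
    .promise();
  return applyDiscount(Item!.price, event.percent);
};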
Hi, thanks for the response.
I’ve structured my functions similarly. However, this isn’t always possible and a need for mocking does occasionally arise, so I was wondering if anyone had a solution?
@hassankhan perhaps my answer here would help.
I managed to use aws-sdk-mock with Jest in the end (although I am using TypeScript as well):
Going to put my code up here in case it helps anyone in the future:
<details>
<summary> <code>DynamoDBAdapter.ts</code> </summary>
import { DynamoDB } from "aws-sdk";
import { map } from "lodash";

import ExternalError from "../errors/ExternalError";
import ResourceNotFoundError from "../errors/ResourceNotFoundError";
import ValidationError from "../errors/ValidationError";

export interface IDynamoDBItemKey {
  [name: string]: string;
}

export default class DynamoDBAdapter {

  protected client: DynamoDB.DocumentClient;

  constructor(client: DynamoDB.DocumentClient) {
    this.client = client;
  }

  public scan = (tableName: string, limit: number = 100) => {
    return this.client.scan({
      Limit     : limit,
      TableName : tableName,
    }).promise();
  }

  public query = (tableName: string, keys: IDynamoDBItemKey, limit: number = 100) => {
    const [ hashKey, rangeKey ] = Object.keys(keys);
    const hasRangedKey = Object.keys(keys).length === 2 && typeof rangeKey !== "undefined";
    const expressionAttributeValues: { [name: string]: string } = { ":hkey" : keys[hashKey] };
    let keyConditionExpression = `${hashKey} = :hkey`;

    if (hasRangedKey) {
      keyConditionExpression += ` and ${rangeKey} > :rkey`;
      expressionAttributeValues[":rkey"] = keys[rangeKey];
    }

    return this.client.query({
      ExpressionAttributeValues : expressionAttributeValues,
      KeyConditionExpression    : keyConditionExpression,
      Limit                     : limit,
      TableName                 : tableName,
    }).promise();
  }

  public batchGet = (tableName: string, keyName: string, keyValues: string[]) => {
    const formattedKeys = map(keyValues, (value) => { return { [keyName] : value }; });

    return this.client.batchGet({
      RequestItems : {
        [tableName] : {
          Keys : formattedKeys,
        },
      },
    }).promise();
  }

  public get = (tableName: string, key: IDynamoDBItemKey) => {
    return this.client.get({
      Key       : key,
      TableName : tableName,
    })
    .promise();
  }

  public put = <T>(tableName: string, item: T): Promise<T> => {
    return this.client.put({
      Item      : item,
      TableName : tableName,
    })
    .promise()
    .then(() => {
      return item;
    });
  }

  public delete = (tableName: string, key: IDynamoDBItemKey): Promise<void> => {
    return this.client.delete({
      Key       : key,
      TableName : tableName,
    }).promise()
    .then(() => {
      return;
    });
  }
}
</details>
<details>
<summary> <code>DynamoDBAdapter.spec.ts</code> </summary>
import AWS from "aws-sdk";
import MockAWS from "aws-sdk-mock";

import DynamoDBAdapter from "../../src/adapters/DynamoDBAdapter";

MockAWS.setSDKInstance(AWS);

describe("DynamoDBAdapter", () => {
  let adapter: DynamoDBAdapter;

  describe("#batchGet", () => {
    beforeAll(() => {
      MockAWS.mock("DynamoDB.DocumentClient", "batchGet", Promise.resolve().then(() => {
        return {
          Responses: {
            tableName: [
              { foo : "bar" },
              { foo : "baz" },
              { foo : "quux" },
            ],
          },
        };
      }));
      adapter = new DynamoDBAdapter(new AWS.DynamoDB.DocumentClient());
    });

    afterAll(() => {
      MockAWS.restore("DynamoDB.DocumentClient");
    });

    it("can retrieve a batch of items in DynamoDB", () => {
      return adapter.batchGet("tableName", "foo", [ "bar", "baz", "quux" ])
        .then((response) => {
          expect(response).toHaveProperty("Responses");
          expect(response.Responses).toHaveProperty("tableName");
          expect(response.Responses.tableName).toHaveLength(3);
        });
    });
  });

  describe("#get", () => {
    beforeAll(() => {
      MockAWS.mock("DynamoDB.DocumentClient", "get", Promise.resolve().then(() => {
        return {
          Item: {
            foo : "bar",
          },
        };
      }));
      adapter = new DynamoDBAdapter(new AWS.DynamoDB.DocumentClient());
    });

    afterAll(() => {
      MockAWS.restore("DynamoDB.DocumentClient");
    });

    it("can retrieve an item in DynamoDB", () => {
      return adapter.get("tableName", { foo: "bar" })
        .then((response) => {
          expect(response).toHaveProperty("Item");
          expect(response.Item).toHaveProperty("foo");
          expect(response.Item.foo).toEqual("bar");
        });
    });
  });

  describe("#put", () => {
    beforeAll(() => {
      MockAWS.mock("DynamoDB.DocumentClient", "put", Promise.resolve());
      adapter = new DynamoDBAdapter(new AWS.DynamoDB.DocumentClient());
    });

    afterAll(() => {
      MockAWS.restore("DynamoDB.DocumentClient");
    });

    it("can put an item in DynamoDB", () => {
      return adapter.put("tableName", { foo: "bar" })
        .then((response) => {
          expect(response).toEqual({ foo: "bar" });
        });
    });
  });

  describe("#delete", () => {
    beforeAll(() => {
      MockAWS.mock("DynamoDB.DocumentClient", "delete", Promise.resolve());
      adapter = new DynamoDBAdapter(new AWS.DynamoDB.DocumentClient());
    });

    afterAll(() => {
      MockAWS.restore("DynamoDB.DocumentClient");
    });

    it("can delete an item in DynamoDB", () => {
      return adapter.delete("tableName", { foo: "bar" })
        .then((response) => {
          expect(response).toBeUndefined();
        });
    });
  });
});
</details>
I have broken up my testing into two different techniques: unit and integration tests.
Unit tests are pretty straightforward: I break out code from my lambdas into independent modules and then invoke them from within unit tests like normal. Purely functional or limited mocking; no AWS service code allowed in these tests. However, this still leaves my actual lambda handler code untested without more complex mocking, so I test my handlers with integration tests instead of unit tests.
To do integration testing I rely heavily on Docker. To start with I create a docker-compose.yml file which launches all of the mock services my lambdas depend on:
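The original compose file isn’t reproduced in this post, so here is a minimal sketch of what it could look like, inferred from the services referenced elsewhere in the thread (a mock-api on port 5001, and the s3rver container shown verbatim further down); the mock-api command is hypothetical:

# docker-compose.yml (sketch; compose also creates the shared network
# that the lambci/lambda container later joins via --net)
version: '2'
services:
  mock-api:
    image: node:4.3
    volumes:
      - '$PWD:/usr/src/app'
    working_dir: /usr/src/app
    ports:
      - 5001:5001
    command: "node ./integration/mock-api.js"   # hypothetical mock HTTP server
  s3rver:
    image: node:4.3
    volumes:
      - '$PWD:/usr/src/app'
    working_dir: /usr/src/app
    ports:
      - 5000:5000
    command: "./node_modules/.bin/s3rver --directory ./integration/buckets --port 5000 --hostname 0.0.0.0"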
When I run integration tests I am using mocha, and in global before/after functions I am shell exec’ing docker-compose up and docker-compose down to launch those services. After they are done starting up I then run my integration tests.
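A sketch of those global hooks (assumed implementation; the post only describes them):

// test/global.js – root-level mocha hooks (sketch)
const { execSync } = require('child_process')

before(function () {
  this.timeout(120000) // give docker-compose time to pull and start services
  execSync('docker-compose up -d', { stdio: 'inherit' })
  // ...poll the mock services here until they accept connections...
})

after(function () {
  execSync('docker-compose down', { stdio: 'inherit' })
})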
My integration tests are a set of json files. Each file creates a single mocha test (e.g. it), which loads the file and invokes a lambda using the data in the file. They look a little bit like this:
{
  "app": "feeds",
  "handler": "handler.http2s3",
  "expectedResult": { "const": true },
  "event": {
    "bucket": "test",
    "key": "example/data-feed.json",
    "url": "http://mock-api:5001/example/data-feed.xml"
  },
  "routes": [
    {
      "method": "GET",
      "path": "/example/data-feed.xml",
      "status": 200,
      "body": "<?xml version=\"1.0\" encoding=\"utf-8\"?><examples><example><title>test</title></example></examples>"
    }
  ]
}
I have multiple groups of lambdas, which I call an “app”. Each one has its own serverless.yml file, in this case one called feeds. I then specify which handler I want to call, in this case one called http2s3. It then grabs the event object and invokes the lambda passing in that data as the event data.
To invoke the lambda I am using another docker container called lambci/lambda, which I put on the same virtual network as my mock services and which, when invoked, runs my code locally in a near-perfect simulation of a lambda container. The lambda container then emits the results of that invocation as json, which I can parse and compare to the expectedResult field using json schema validation.
The code to do this looks like this:
// Dependencies assumed by the snippet below (not shown in the original post);
// TEST_NETWORK and PREAMBLE are defined elsewhere in the test harness.
const fs = require('fs')
const { exec } = require('child_process')
const Ajv = require('ajv')
require('colors') // adds .gray/.red/.green/.blue to String.prototype

const ajv = new Ajv()

function invoke (data, callback) {
  let app = data.app
  let handler = data.handler
  let expected = data.expectedResult
  let validate = ajv.compile(expected)
  let event = JSON.stringify(data.event)
  let envs = fs.readFileSync('integration/.env', 'utf8')
    .split('\n')
    .map(line => line.replace(/#[^\n]*/g, '')) // remove # line comments
    .filter(line => line) // remove empty lines
    .map(line => `-e ${line}`)
    .join(' ')
  let cmd = `docker run --rm --net=${TEST_NETWORK} -v "$PWD/build/${app}":/var/task ${envs} lambci/lambda ${handler} '${event}'`
  exec(cmd, (err, stdout, stderr) => {
    if (err) return callback(err)

    // stdout: json serialized return value from the lambda
    let actual = JSON.parse(stdout)

    // Ajv schema documentation:
    // https://github.com/epoberezkin/ajv/blob/master/KEYWORDS.md#keywords-for-objects
    let valid = validate(actual)

    // stderr: console output coming from the lambda
    let message = ''
    message += `${PREAMBLE}${data.name.gray} ${'console'.gray}\n`
    message += `${PREAMBLE}${stderr.trim().replace(/\n/g, `\n${PREAMBLE}`)}\n`
    if (!valid) {
      let errorText = ajv.errorsText(validate.errors)
      message += `${PREAMBLE}${data.name.gray} ${'result'.gray} ${'(failure)'.red}\n`
      message += `${PREAMBLE}${errorText.blue.trim().replace(/\n/g, `\n${PREAMBLE}`)}\n`
      let errMessage = message
        .replace(/\n/g, '\n ')
        .replace(/λ/g, '')
        .trim()
      err = new Error(errMessage)
      err.stack = undefined
    } else {
      message += `${PREAMBLE}${data.name.gray} ${'result'.gray} ${'(success)'.green}\n`
    }
    process.stdout.write(message)
    callback(err)
  })
}
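For completeness, a sketch of the wiring that turns each json file into a mocha test (assumed; the post describes this but doesn’t show it — the directory layout and the name fallback are illustrative, and fs/invoke are reused from the snippet above):

const path = require('path')

const dir = 'integration/tests' // assumed location of the json test files

describe('integration', () => {
  for (const file of fs.readdirSync(dir).filter(f => f.endsWith('.json'))) {
    const data = JSON.parse(fs.readFileSync(path.join(dir, file), 'utf8'))
    data.name = data.name || file
    it(data.name, function (done) {
      this.timeout(30000) // each invocation takes ~3s in docker
      invoke(data, done)
    })
  }
})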
The other mock services, such as the mock-api, also look at the same test files as they start up and set up routes based on the routes in the test files. They then respond with the predefined payloads when the tests call them. I also have a .env file which allows me to override endpoints and other settings on aws-sdk clients to point to local mocks instead of aws services in the cloud.
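A sketch of what such a mock-api could look like (hypothetical — the post does not include it), building its routes from the same test files:

// integration/mock-api.js (hypothetical)
const express = require('express')
const fs = require('fs')
const path = require('path')

const app = express()
const dir = 'integration/tests' // same assumed location as the test loader

for (const file of fs.readdirSync(dir).filter(f => f.endsWith('.json'))) {
  const data = JSON.parse(fs.readFileSync(path.join(dir, file), 'utf8'))
  for (const route of data.routes || []) {
    // e.g. GET /example/data-feed.xml -> 200 + the canned XML body
    app[route.method.toLowerCase()](route.path, (req, res) => {
      res.status(route.status).send(route.body)
    })
  }
}

app.listen(5001, '0.0.0.0')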
For example, the above http2s3 test is configured in the .env file with these lines:
# S3
aws_s3_endpoint=http://s3rver:5000
aws_s3_s3BucketEndpoint=true
aws_s3_s3ForcePathStyle=true
And then in the docker-compose.yml file I am running an s3rver server:
s3rver:
  image: node:4.3
  volumes:
    - '$PWD:/usr/src/app'
  working_dir: /usr/src/app
  ports:
    - 5000:5000
  command: "./node_modules/.bin/s3rver --directory ./integration/buckets --port 5000 --hostname 0.0.0.0"
And then in my lambda I am loading configuration for s3 from environment variables using nconf:
// config.js
import nconf from 'nconf'

nconf
  .env('_')
  .defaults({
    aws: {
      s3: {
        apiVersion: '2006-03-01',
        region: 'us-east-1'
      }
    }
  })

export default nconf
Because the values in the .env file are being loaded into the docker container as environment variables (e.g. -e aws_s3_endpoint=http://s3rver:5000), nconf then loads them into a config object which I can use to configure the s3 client like so:
import config from './config'
import { S3 } from 'aws-sdk'
const s3 = new S3(config.get('aws').s3)
This allows me to test my handler without needing to connect to aws at all; everything runs locally. I don’t need any conditional code in the lambda that would behave differently when running locally vs. in production, and I don’t need to set up complex mocks per test, though I am now mocking entire services. I’ve found that most major services already have free docker containers for this purpose, and most other things can be mocked with a simple http server call and response.
The tests are not fast: about 3s per invocation plus approximately 10s for startup/shutdown of docker-compose. Because of this speed (or lack thereof) my goal is to write a single integration test per lambda and then rely on unit tests as much as possible.
Wow thank you for the detailed reply!!
I’ve done things a little differently. I have my Serverless projects and a shared “lib” module. The lib module is covered by unit tests, and any functionality relating to AWS services is mocked/stubbed out. The Serverless projects themselves are mostly integration tested by deploying a test stage that includes all CloudFormation resources and running a series of fetch() calls against it.
I have just under 80 tests for one Serverless project (~18 functions/30 endpoints), and running them against a deployed API finishes in just under a minute.
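For illustration, a minimal sketch of what one of those tests could look like (assuming Jest, node-fetch, and a base URL injected by the CI job after deploying the test stage — all names here are illustrative, not from the original post):

import fetch from "node-fetch";

// Assumed: the CI job exports the deployed test-stage URL.
const BASE_URL = process.env.API_BASE_URL;

describe("GET /users/:id", () => {
  it("returns the user as JSON", async () => {
    const res = await fetch(`${BASE_URL}/users/42`);
    expect(res.status).toBe(200);
    const body = await res.json();
    expect(body).toHaveProperty("id", "42");
  });
});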
This is really cool. Thanks for sharing.
It does seem like a complex setup - would you recommend it? Has it caught big/bad errors before deploy?
The tests are not fast: about 3s per invocation plus approximately 10s for startup/shutdown of docker-compose. Because of this speed (or lack thereof) my goal is to write a single integration test per lambda and then rely on unit tests as much as possible.
Would it not almost be faster to deploy to a test environment in AWS?
I’m not sure yet. I do like it mostly and am going to go down this path further for now, but I do wish it was faster. It’s definitely faster than a deploy and has the benefit of not needing to be online to run, so those are pluses.
It is kind of complex, though it may be less complex than mocking with something like sinon. Ideally amazon would release an entire local simulator that would simplify things, but lacking that it’s not too terribly bad with Docker.
Thanks for the extra info.
I just don’t see a local simulator ever happening. If AWS can’t even keep CloudFormation up to date with all the releases, how could they keep another thing up to date (especially one that’s just for testing purposes)?
If only they had a way to test those versions before deploying them… like a local simulator or something!
But yes, I agree it’s not likely. Each individual service releasing its own container version is an OK workaround.
@justinmchase although it’s an old topic, I was wondering if after a year your testing methodology has changed? Are you still using docker and mocks? From my experience testing only locally does not catch all the integration problems that we’ve encountered, so I actually run my tests in the cloud, although I pay for it in other areas. But in the end we managed to catch more problems with this solution.
If you follow the one-Lambda-one-function principle then I find it easy to write tests with Jest. I mock the external dependencies, call the handler, and check the return value plus the calls to the external dependencies.
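A sketch of that pattern (module and function names are made up for illustration), assuming the handler’s external calls are wrapped in a small module that Jest can replace:

import { handler } from "../src/handler";
import { publish } from "../src/notify";

// Replace every export of ../src/notify with a jest.fn()
jest.mock("../src/notify");

it("publishes a notification and returns ok", async () => {
  (publish as jest.Mock).mockResolvedValue(undefined);

  const result = await handler({ userId: "42" });

  // Check the return value...
  expect(result).toEqual({ ok: true });
  // ...plus the calls to the external dependency.
  expect(publish).toHaveBeenCalledWith("user-signed-up", { userId: "42" });
});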
@buggy I agree with you when we’re talking about testing your business logic, but in the end integration tests do not test your logic; they test that all of your components are well configured and behaving as expected.
I’ll take me as an example: in addition to the regular services that probably everybody uses, like S3, RDS or DynamoDB, I’m also using AWS Rekognition (image analysis) and Google’s Firestore as the persistence service for my mobile devices; it’s complicated to mock and orchestrate everything locally.
Mocking shouldn’t be complicated. By mocking my external dependencies and testing the handler as a whole I can test more than just business logic. I can also simulate error conditions in external services and be certain how my handler will respond.
I’m also using TypeScript and I take advantage of interfaces. For example: if Lambda A publishes a message to an SNS topic that Lambda B will read from, then I have a common interface that is used by both Lambda A & B. If I forget to put an attribute into the message in Lambda A, or try using one that doesn’t exist in Lambda B, then TypeScript will generate an error long before I even get to the test stage.
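Something along these lines (a sketch with illustrative names, not from the original post):

// shared/messages.ts – the contract both Lambdas compile against
export interface UserSignedUpMessage {
  userId: string;
  email: string;
  signedUpAt: string; // ISO-8601 timestamp
}

// lambdaA.ts – publisher
import { SNS } from "aws-sdk";
import { UserSignedUpMessage } from "./shared/messages";

const sns = new SNS();

export const publishSignup = (msg: UserSignedUpMessage) =>
  sns.publish({
    TopicArn : process.env.SIGNUP_TOPIC_ARN!, // assumed env var
    Message  : JSON.stringify(msg),
  }).promise();

// lambdaB.ts – consumer
import { SNSEvent } from "aws-lambda";
import { UserSignedUpMessage } from "./shared/messages";

export const handler = async (event: SNSEvent) => {
  for (const record of event.Records) {
    const msg: UserSignedUpMessage = JSON.parse(record.Sns.Message);
    // A typo like msg.userID fails to compile instead of failing in production.
    console.log(msg.userId);
  }
};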
This gives me a high level of confidence when I ship.
At the end of the day you still need to test in a real environment (Staging, QA, production) and you need monitoring in production because silly things happen like putting the wrong value into an environment variable.
@buggy how do you verify that the code you wrote for your mobile device plays nicely with an API gateway that you provisioned? You won’t convince me that mocking is going to give you that certainty. In the end a serverless system can and will contain components written in multiple languages, especially in big projects.
As a developer, seeing my code run and be tested for the first time in a “real” cloud environment only during CI is not a responsibility I’m willing to take. In a nutshell: for me, finding bugs in CI that I could have found during testing on my own in a real cloud environment is just wrong.
The original question was about testing your Lambda functions. What I’ve described is how I test my Lambda functions.
What you seem to be asking is how do you test your client? That’s not a serverless (architecture or framework) specific problem, as any client/server architecture is going to have it. In non-serverless environments it’s a little easier to solve because developers can easily run the backend locally, but in serverless environments simulators will only get you so far. There’s no easy solution for “how do I emulate <big expensive service> locally”. At some point you may need to have separate accounts your developers can use.
One solution that may help is the API gateway pattern. I’m doing this with a GraphQL API. My clients only interact with the backend through my GraphQL API; there is no direct access from the client to any other service. I can download the latest GraphQL schema in JSON format, generate TypeScript interfaces from it for the queries that I plan to run (validating them in the process), and then use those interfaces in my code. If the schema changes in a way that isn’t compatible with the client then either TypeScript or apollo-codegen will raise an error before tests are run.
It’s not 100% foolproof but it gives me enough confidence.
As a bonus it also makes it easy to switch service providers. If I need to switch external providers for some reason I only need to modify the gateway to use the new service provider. It’s quick and painless. If my client is accessing multiple service providers directly then I need to update the client, ship it and wait for everyone to upgrade before I can remove the old provider.
I’m not asking how to test my client; I’m asking how to test my system as a whole, which includes a client as well as other external services. Your solution works very well when you are the sole provider of all interfaces in the system, but the idea of serverless is consuming services from other providers (not only AWS), and unfortunately most of them (even the good ones) do not provide any consistent and testable interface.
I do agree with you that it’s not a problem specific to serverless systems, but it is amplified when moving to a serverless platform.
Regarding your comment “At some point you may need to have separate accounts your developers can use”: I’ll be honest, it’s not feasible with the current AWS offering; it’s quite expensive to open and provision certain services for all developers. I do not have a solution, but it’s a real problem.
In the end, what I wanted to emphasise is that concentrating only on Lambda testing is just dangerous.