Not sure if our use case should be used as a reference, but we are working on a small to medium sized REST API (around 15 resources with an average of 2 methods per resource, plus CORS). Our workflow/design may still reveal flaws and need changes down the road, but we are confident in the path.
This is our “third tour” with API Gateway + Lambdas. We were early adopters from day 1 and started with our own deployment, at first all by hand. In our “second tour” we wrote our own scripts in gulp; at that time there was no Serverless or Apex. Now we are confident that Serverless, being targeted at and supported by AWS, is the best choice, and we are rewriting everything with the Beta version. We have no intention of using other cloud providers like Google, IBM or Azure, so multi-provider support has no value for us.
Comments and suggestions are always welcome.
Our project relies heavily on other AWS services like IAM, S3, SQS, Kinesis, Cognito, and a lot more (really, really a lot!!). So our desire to integrate the Serverless project with CloudFormation is high, but I’m not sure how much this will pan out in reality (more about this at the end).
We tend to break our services into microservices, not nanoservices. The breakdown is done considering operational aspects (i.e., a balance around how easy it is to deploy a new full stage) instead of following the “academic definitions” of microservices.
Our API has a centralized authentication strategy (we have our own OAuth2 implementation), so we have a token endpoint that handles all authentication and issues a JWT containing action-based permissions plus context information, all signed and encrypted. Once clients get a token they call whichever resource endpoints they need, always sending this token. Each resource endpoint validates the JWT before further processing the request. The token carries information about which actions the client/user is allowed to perform, but it is up to each resource to decide whether to enforce it or not.
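For illustration, the per-resource validation boils down to something like the sketch below. This is not our exact code: the `jsonwebtoken` usage, env variable name and the `users:read` action are all illustrative, and the token encryption step is omitted.

```js
// Sketch only -- not our exact code. Each resource Lambda verifies the JWT
// from the Authorization header before doing any real work.
const jwt = require('jsonwebtoken');

module.exports.handler = (event, context, callback) => {
  const headers = event.headers || {};
  const token = (headers.Authorization || '').replace(/^Bearer /, '');

  let claims;
  try {
    // Signature check; in our real tokens the payload is also encrypted,
    // which is omitted here.
    claims = jwt.verify(token, process.env.JWT_PUBLIC_KEY);
  } catch (err) {
    return callback(null, { statusCode: 401, body: JSON.stringify({ error: 'invalid token' }) });
  }

  // claims.actions carries the action-based permissions; each resource
  // decides whether or not to enforce them ('users:read' is made up).
  if (claims.actions && claims.actions.indexOf('users:read') === -1) {
    return callback(null, { statusCode: 403, body: JSON.stringify({ error: 'not allowed' }) });
  }

  // ...actual resource logic goes here...
  callback(null, { statusCode: 200, body: JSON.stringify({ ok: true }) });
};
```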
Even though this process is common to every Lambda function, we do not implement it as an authorizer. Authorizers are only used when API Gateway integrates directly with other AWS services, bypassing the Lambda function. In that scenario we use authorizers to provision proper AWS credentials; otherwise everything else (in regards to IAM permissions) is declared directly on the Lambda execution roles.
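Where we do use one, it follows the standard custom-authorizer shape sketched below; the credential provisioning that is specific to our setup is left out and the names are illustrative.

```js
// Sketch of a custom (TOKEN) authorizer, used only where API Gateway
// integrates directly with another AWS service. Names are illustrative.
const jwt = require('jsonwebtoken');

module.exports.authorizer = (event, context, callback) => {
  const token = (event.authorizationToken || '').replace(/^Bearer /, '');

  let claims;
  try {
    claims = jwt.verify(token, process.env.JWT_PUBLIC_KEY);
  } catch (err) {
    return callback('Unauthorized'); // API Gateway turns this into a 401
  }

  // Allow the invocation and pass selected claims down as authorizer context.
  callback(null, {
    principalId: claims.sub,
    policyDocument: {
      Version: '2012-10-17',
      Statement: [{
        Action: 'execute-api:Invoke',
        Effect: 'Allow',
        Resource: event.methodArn
      }]
    },
    context: { actions: (claims.actions || []).join(',') } // values must be flat strings
  });
};
```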
Since we trace a lot of calls through CloudWatch, we pass along the request-id generated by API Gateway to other service brokers like SQS/SNS and so forth, so a call can be tracked across them.
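Concretely, it is something along these lines (queue URL and payload are made up; with a non-proxy integration we map `$context.requestId` into the event ourselves):

```js
// Sketch: forward the API Gateway request id when publishing to SQS so the
// same id shows up in CloudWatch for every service the call touches.
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

module.exports.handler = (event, context, callback) => {
  // With the Lambda proxy integration the id is on event.requestContext;
  // with a mapping template we pass $context.requestId into the event ourselves.
  const requestId = event.requestContext ? event.requestContext.requestId : event.requestId;

  sqs.sendMessage({
    QueueUrl: process.env.JOBS_QUEUE_URL, // hypothetical queue
    MessageBody: JSON.stringify({ action: 'do-something' }),
    MessageAttributes: {
      requestId: { DataType: 'String', StringValue: requestId }
    }
  }, (err) => {
    if (err) return callback(err);
    console.log('queued', requestId); // lands in CloudWatch Logs
    callback(null, { statusCode: 202, body: '' });
  });
};
```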
Our projects are all Node.js based. A single project here is usually composed of multiple Serverless functions with a single root serverless.yaml (easy to deploy, remember?). This is also possible because most of our functions use the same node modules, so there is very little gain in having each function handled by its own serverless.yaml.
Our project structure looks like this:
- project-dir/
  - serverless.yaml
  - functions/
    - users.js
    - user.js
  - lib/
    - our common helpers.js
  - test/
    - fixtures/
    - specs/
      - user.spec.js
      - users.spec.js
      - lib-helper.js
      - lib-other-helper.js
    - e2e/
      - case1.e2e.js
      - case2.e2e.js
  - migrations/
  - resources/
    - cloudformation.templates.json
We have functions for collection resources and “entity” resources (i.e., USERS and USER).
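For what it’s worth, each file under functions/ just exports plain handlers and leans on the shared code in lib/. Roughly like this sketch, where the helper names (`validateToken`, `listUsers`, `createUser`) are made up:

```js
// functions/users.js -- handlers for the USERS collection resource.
// 'helpers' stands in for our shared code under lib/ (names are illustrative).
const helpers = require('../lib/helpers');

// GET /users
module.exports.list = (event, context, callback) => {
  helpers.validateToken(event)              // hypothetical shared helper
    .then(() => helpers.listUsers())        // hypothetical shared helper
    .then((users) => callback(null, { statusCode: 200, body: JSON.stringify(users) }))
    .catch((err) => callback(null, {
      statusCode: err.statusCode || 500,
      body: JSON.stringify({ error: err.message })
    }));
};

// POST /users
module.exports.create = (event, context, callback) => {
  const payload = JSON.parse(event.body || '{}');
  helpers.validateToken(event)
    .then(() => helpers.createUser(payload))
    .then((user) => callback(null, { statusCode: 201, body: JSON.stringify(user) }))
    .catch((err) => callback(null, {
      statusCode: err.statusCode || 500,
      body: JSON.stringify({ error: err.message })
    }));
};
```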
We avoid creating too many directories, since in the past we ended up with a mess. Keeping it flat also helps us notice when we are creating overly complex solutions/dependencies. Simple is the new black.
We do BDD and E2E but not TDD. That’s due to the dependencies on AWS services, which in the end force us to develop against deployed stages most of the time. We do mock AWS services, but in the end it is still very limited.
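An e2e case for us is essentially a plain HTTP call against an already-deployed stage, run with mocha or similar. Something along these lines, where the URL, token variables and assertion are illustrative:

```js
// test/e2e/case1.e2e.js -- runs against a deployed stage, not against mocks.
// Base URL and token come from the environment; everything here is illustrative.
const https = require('https');
const url = require('url');
const assert = require('assert');

describe('GET /users', function () {
  this.timeout(10000); // deployed stages can be slow on cold starts

  it('returns 200 with a valid token', function (done) {
    const opts = url.parse(process.env.E2E_BASE_URL + '/users');
    opts.headers = { Authorization: 'Bearer ' + process.env.E2E_TOKEN };

    https.get(opts, (res) => {
      assert.strictEqual(res.statusCode, 200);
      res.resume(); // drain the response so the socket is released
      done();
    }).on('error', done);
  });
});
```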
We are still considering whether, in the future, the serverless.yaml will manage its AWS resources through CF templates. We are keen because it would enable us to create new stacks for whatever reason, but it is also risky because removing the wrong stage may throw away resources that cannot be easily recovered (like RDS, S3 content, ES clusters). We have also had some headaches with CF stack updates/rollbacks.
As of today, AWS services are managed manually through CloudFormation. That is, we use CF as a way to document and keep everything reproducible, but we apply the templates in CF by hand. We avoid updating CF stacks as much as possible, for the reasons mentioned above.
We use environment variables loaded at runtime to carry parameters and some sensitive information. Sensitive parameters become less and less common every day thanks to the use of IAM roles and credentials.
We are still struggling with dynamic debug levels. CloudWatch is sometimes a blessing and other times a real pain. We do not keep production in a verbose debug mode, and since the levels are controlled by a .env file, a re-deploy is required whenever we need to raise the verbosity. Far from ideal. Using a shared resource like an S3 file or a key-value service does not work because of the load time and failure risk it imposes. As I said, we haven’t found a good solution for this problem yet.
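What we have today boils down to something like this tiny helper (illustrative, not our exact code), which is exactly why raising the level means a redeploy:

```js
// lib/log.js (illustrative) -- the level is baked in from the .env file at
// deploy time, which is why bumping verbosity in production means redeploying.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };
const CURRENT = LEVELS[process.env.LOG_LEVEL] !== undefined
  ? LEVELS[process.env.LOG_LEVEL]
  : LEVELS.info;

function log(level, message, data) {
  if (LEVELS[level] > CURRENT) return; // filtered out at this verbosity
  // CloudWatch picks up anything written to stdout
  console.log(JSON.stringify({ level, message, data }));
}

module.exports = {
  error: (msg, data) => log('error', msg, data),
  warn: (msg, data) => log('warn', msg, data),
  info: (msg, data) => log('info', msg, data),
  debug: (msg, data) => log('debug', msg, data)
};
```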
Hope this information helps in any way.
Cheers,
Eric