Serverless Development Workflow

The Serverless Framework is quite different from traditional frameworks, both in the development experience and the production infrastructure. We’re paving the road for a conventional, serverless development workflow, and we’d like to get your insight on the best and most efficient development workflow.

Some brainstorming questions:

  • How do you kick-start your serverless projects? Boilerplates or from scratch? And what boilerplates would you like to see?
  • What unit testing tools are you using?
  • How do you do integration testing?
  • How often do you deploy during development?
  • How often do you check the logs of your lambdas? And how helpful are they?
  • How do you coordinate work within your team? Do you face any conflicts?
  • How do you share sensitive information within your team?
  • What do you think about Serverless error handling? How could it be improved?

These are just some questions to think about, but feel free to let us know about anything related to the serverless development process. We’re here to listen, so share your thoughts! :blush: :zap:

2 Likes

I really like the approach of Eric Elliott to unit testing.
Could you structure the default testing suite in Serverless to mimic this style?
Or even support various styles, depending on preference? This could be set up via the CLI during project setup.

@n1alloc interesting! Thanks for sharing. Is that what you use to test your own serverless applications?

We initially started from scratch. Even though we still don’t have a boilerplate, we do reuse a couple of our own code blocks (we’re still on v0.5.6). We make heavy use of the serverless-offline plugin during development.

We have also developed a couple of Serverless plugins that we use during development:

For DynamoDb

For Shared dependencies and dependency management

I would also love to see some functionality to debug Serverless projects offline. Maybe attaching the Chrome debugger tools would be nice.

1 Like

Do you rely on local development 100%? It’s a little tough to test everything locally, especially when you’re dealing with outside services (Mailchimp, Twilio, etc.). It seems that for practical applications you’re going to have to get back online sooner or later.

Not 100%, but most of the time. I do agree with you: when we need external services, we move out of offline mode. However, given our use cases, offline has been pretty useful for us so far.

1 Like

I started out testing 100% locally. I abandoned that fairly quickly as the support isn’t great. The important part of local testing is quick feedback loops. I don’t really care if my code is run on my machine or AWS, but I do care if it takes 5 minutes to test a small change or I have to dig through lots of CloudFormation logs to see an error.

I’ve gone from testing locally to testing only on AWS. I’m using TDD, writing tests with vows and lambda-wrapper to exercise the code before deploying and testing on AWS.

CloudWatch logs are very helpful once you find the right log stream. The API Gateway logs can be difficult to deal with: matching a request to its API Gateway log entries is hard, which makes testing and resolving issues frustrating.

I would echo @rehrumesh’s comments earlier. We wish to do more local development. This is especially useful in the debugging stage, since we can use code insight while the code is running in our own environments.
A pluggable mock system could help, using plugins for different AWS services as they become available. The goal is the following:

  • Local Development and unit tests with interactive debugging
  • CI for testing on AWS DEV Stage
  • Integration Testing on AWS QA stage
  • Beta for user acceptance
  • Production

At each stage we could weed out issues with the least amount of work. Digging through AWS log files is very inefficient at the moment.

2 Likes

I would also really like to develop locally for most things. I am working on a REST API to sit in front of DynamoDB, and I’m creating an SPA front-end application (based on Ember) that will utilize the API. Several Lambda scripts will be triggered based on events. I don’t need every event’s result to be accurately represented, only that I’m passing in what I expect and getting back what I expect. Whether an external service like MailChimp got the request, or whether a resulting file actually ended up in S3, is not really that important in the initial stages for my use case.

But being able to get the basic interaction and flow of data between the front end application to the API Gateway to DynamoDB in place locally would be really nice. And being able to see what’s happening in a particular place in my code by dropping a break point or debugger statement is much more helpful than sifting through logs after the fact.

Then, when I feel like I’ve achieved what I wanted: deploy to dev, test; then stage, test; then prod.

1 Like

@dehuszar If you are using Serverless v0.5.x, give the following library a try:
https://www.npmjs.com/package/serverless-dynamodb-local

This library will emulate DynamoDB locally.

@rehrumesh Thanks! Yeah, I had seen that as well. Since 1.x is a bit of a big-bang rewrite, I am stating the above more as a desire to ensure the new codebase can still accommodate the features provided by the library you mention, as well as the serve and offline plugins.

I will probably keep learning how to use and configure AWS by hand using their provided tools until the 1.x codebase gets a little more stabilized and regains some form of offline testing workflow.