I’m starting a project using Serverless and I need some guidance on how to set up CI/CD.
I’d rather spend a few hours setting it up from the start than integrate automated test deploys along the way.
A few considerations: the project is for the company, so it has to be in a private repo.
I have never used a CI tool before, but I have experience with TDD and testing frameworks.
I’d like to start with a free option, but a paid option could be possible in the future.
I’d love to hear how you are all automating your test/deploy cycles.
I’ve never created a CD pipeline for a Serverless project, but I have done plenty of them for RoR projects and one for a .Net project. Having a private repo shouldn’t be a problem and you shouldn’t have to pay for anything except hosting costs.
I suggest you go with TeamCity for your deployment pipeline since it’s free up to 25 projects and Bitbucket offers free private repo hosting. TeamCity, like most other CD platforms, allows you to break up your pipeline into build steps.
Usually, Step 1 would be building the project. For a Serverless project that means running npm install. The second step might be to run your tests, so npm test. And the third step might be to deploy the code to whatever your staging environment is, like dev or alpha.
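In concrete terms, those three steps boil down to roughly the following commands (a sketch only; the stage name dev and the global serverless install are just examples, and the details depend on your package.json and serverless.yml):

    # Step 1: build the project / install dependencies
    npm install
    npm install serverless -g
    # Step 2: run the test suite
    npm test
    # Step 3: deploy to your staging environment
    sls deploy --stage dev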
TeamCity also allows you to create sub-projects, which I used to separate my different environments. This isn’t the only way to separate environments, but it’s the way I chose. So you can set your staging environment to automatically pull down, build, and test your changes every time you push up code changes, but then have production run only when you log into TeamCity and actually click the Build Now button. You can also have production run only once you tag a release, or if you merge changes into a branch named production. It all depends on what works best for you.
Like I said before though, I’ve never set up a pipeline for a Serverless project, so I’m sure there are plenty of gotchas out there. I haven’t done one yet, so I can’t tell you what they are.
I haven’t tried it myself, but I’ve studied it a bit… it looks quite interesting:
LambCI: it’s a CI environment built on Lambda itself, so there are (nearly) no hosting costs.
If you find it useful and do some testing, I would be really interested in your findings!
I’m using Bitbucket.org with their Pipelines; it’s super quick to set up with a single configuration file. The file format is pretty simple and you can do just about anything you want, as each ‘build’ runs in a Docker container. An SLS deployment shouldn’t take more than 15-30 minutes to set up as long as you are only doing unit testing. Integration testing will likely involve more effort.
Currently you get 300 minutes a month free. I haven’t yet seen any long-term pricing on it.
+1 for Bitbucket Pipelines CI/CD. I have a simple yml file that specifies how to build and deploy for a dev version and a production version. You can then combine that with the environment variables in the Pipelines configuration and Serverless to ensure you do not store details such as web service, AWS, and database credentials in Git, which is always painful.
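To make that concrete, here is a hedged sketch of how a serverless.yml can pick those values up with the ${env:...} variable syntax (the variable names and the runtime line are just illustrative, not from our actual config):

    provider:
      name: aws
      runtime: nodejs4.3
      environment:
        DB_HOST: ${env:DB_HOST}          # injected from a Pipelines environment variable
        DB_PASSWORD: ${env:DB_PASSWORD}  # never committed to Git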
Here is an example of our yml deploy configuration (note this is our full deploy config; because the environment variables are set up within Pipelines, we don’t need to redact anything from the deploy script and there is no confidential data in Git):
image: node:4.6.0

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - npm install
            - npm install serverless -g
            - serverless config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY --stage prod
            - sls deploy --stage prod
    develop:
      - step:
          script: # Modify the commands below to build your repository.
            - npm install
            - npm install serverless -g
            - serverless config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY --stage dev
            - sls deploy --stage dev
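For what it’s worth, here is a hedged sketch of how a test step could be slotted into the develop branch of that same file before the deploy; the npm test command is an assumption on my part, since the config above deploys without running tests:

    develop:
      - step:
          script:
            - npm install
            - npm install serverless -g
            # assumed test script; a failing test stops the pipeline before anything is deployed
            - npm test
            - serverless config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY --stage dev
            - sls deploy --stage dev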
@matt-filion So Matt, if I understand correctly, you are suggesting that the CI server would start the SLS deployment to create a different stack, run the integration tests against that stack, and then destroy the newly created stack. Is that right? Any thoughts on this approach? Wouldn’t it be slow or costly?
@garethmcc I don’t see the testing step. Are you running tests?
Yes, you could create a temp stack or use something like serverless-offline to mock the functionality. Optionally, if you wanted to get super clever you could always do some sort of red/black deployment where you do your deploy, run your tests against the new deploy and then change whatever routes the traffic to point to the ‘new one’. Though this is probably only appropriate for production.
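As a rough illustration of the temp-stack idea in Bitbucket Pipelines terms (the ci- stage naming, the $BITBUCKET_BUILD_NUMBER variable, and the npm test command are my assumptions, not something I have running):

    - step:
        script:
          - npm install
          - npm install serverless -g
          # deploy a throwaway stack named after the build number
          - sls deploy --stage ci-$BITBUCKET_BUILD_NUMBER
          # run the integration tests against that temporary stage
          - npm test
          # tear the temporary stack down again
          - sls remove --stage ci-$BITBUCKET_BUILD_NUMBER

One caveat: if the tests fail, the remove step never runs, so failed builds would leave a stack behind unless you clean them up separately.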
For reference, here is the script I use for my deployments. In my case, I’m optimistic about my dev builds, in that breaks will be rare and, when they happen, I can fix them quickly. So I just push, then run newman; if something breaks, a new build goes out to resolve it.
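The script itself isn’t reproduced here, but the flow described amounts to something like the following (a hypothetical sketch rather than the actual script; the collection path is made up):

    # push the new code, then run the Postman collection against the deployed stage
    sls deploy --stage dev
    newman run postman/collection.json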
Same as @matt-filion, we run the build and deploy on the Pipelines CI server. We have also just recently added the yarn test command, added the serverless-optimizer plugin to reduce per-function size (which makes deployment faster and more streamlined), and made sure that all of our dev tools (such as mocha, chai, serverless-offline, etc.) are marked as dev-only dependencies so that the deployed Lambda doesn’t include them either.
Mocha works beautifully with Pipelines, as a failed test immediately stops the deployment, and Pipelines can then notify you of the failed deployment. We’ve already caught some very interesting integration bugs with this setup. Currently, every service has a Vagrant machine defined as part of the project purely for dev testing purposes. Every developer working on a service has to vagrant ssh into the machine for that service in order to run the tests accurately.
Bear in mind that with this setup, functions are never down. The Lambda only switches to the new version in S3 once it’s pushed. Any currently running functions continue to use the old version until they are finished. I believe, but may be wrong, that they also need to time out before the new version is used, so it even gives you a few minutes’ buffer, as Lambda doesn’t destroy functions immediately once they’re finished.