Splitting big serverless.yml?

Inside the serverless.yml file, there are lots of things:

  • backing services (which for me consist of RDS, Elasticsearch, third-party API keys, …)
  • networking config (the VPCs)
  • functions and events (the interesting part)

The backing services and networking do not change a lot. The functions change every time we add a new service to our platform (we have one Lambda per service).

Is there a way to store one yml file per service?

That is, we’d like the following structure:
project
|-serverless.yml
|-service1
| |-serverless-function.yml
|-service2
| |-serverless-function.yml
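Something close to that layout is possible with the Framework’s `${file(...)}` variable syntax; here is a minimal sketch (service names, handlers, and paths are made up for illustration):

```yaml
# Root serverless.yml: merge each service's functions in from its own file.
service: project

provider:
  name: aws
  runtime: nodejs18.x

functions:
  - ${file(service1/serverless-function.yml)}
  - ${file(service2/serverless-function.yml)}

# service1/serverless-function.yml would then contain just a functions map:
#
#   hello:
#     handler: service1/handler.hello
#     events:
#       - http:
#           path: service1/hello
#           method: get
```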

I’ve seen the post Serverless.yml splitting (per function): is that the currently recommended approach?

Are you sure it is good practice to manage VPC, ES, and RDS in the Serverless Framework?

Hi bill,

thanks for the interest in my question.

No, I am not sure it is good practice to manage everything in the Serverless Framework…

However, there are lots of articles and examples on the internet that do exactly that.
My two favorites include
http://blog.brianz.bz/post/accessing-vpc-resources-with-lambda/

My personal opinion is that sls deploy should deliver a ready-to-go stack from end to end, with different behavior depending on the stage.

For example, in dev I want to provision a t2.micro MySQL instance for RDS, whereas in prod I want to provision an Aurora cluster.
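That kind of per-stage switch can be expressed with nested variables. A hedged sketch (instance classes and logical IDs are my own choices, and a real Aurora setup would use AWS::RDS::DBCluster rather than a single DB instance):

```yaml
custom:
  db:
    dev:
      instanceClass: db.t2.micro
      engine: mysql
    prod:
      instanceClass: db.r5.large
      engine: aurora-mysql   # prod would really need an AWS::RDS::DBCluster

resources:
  Resources:
    Database:
      Type: AWS::RDS::DBInstance
      Properties:
        DBInstanceClass: ${self:custom.db.${self:provider.stage}.instanceClass}
        Engine: ${self:custom.db.${self:provider.stage}.engine}
```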

If I only needed serverless to configure my functions and events, then using CloudFormation alone would be sufficient, because that part does not change much between stages AND I don’t need the multi-provider support.

What is your opinion? Do you know of any resources that describe such good/bad practices? What are the drawbacks of managing all that in serverless.yml?

As a final note, please note that:

  • I have multiple serverless.yml files for different services
  • I configure the VPCs and subnets directly in AWS (I just reference them in serverless.yml)
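For reference, pointing Lambdas at pre-existing network resources looks roughly like this (the IDs below are placeholders, not real ones):

```yaml
provider:
  name: aws
  vpc:
    securityGroupIds:
      - sg-0123456789abcdef0        # placeholder ID
    subnetIds:
      - subnet-0123456789abcdef0    # placeholder IDs
      - subnet-0fedcba9876543210
```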


Hi bill,

after much digging here and there, I think that your point is

  1. create a stack of backing services (directly in CF or elsewhere but not in serverless)
  2. use serverless “only” for the Lambda part, referencing the stack backed by CF

That way

  1. sls deploy is much faster since it only deploys API Gateways and Lambdas (no more VPC, RDS, …)
  2. you can have cf-dev.yml and cf-prod.yml to handle the differences between stages, and use variables in serverless.yml

Is that correct ?

If so, could you please show me how I can reference another stack from my serverless.yml ?

For example, if I have a cf-dev.yml with a MySQL RDS and a cf-prod.yml with an Aurora RDS, how can I get those back as a single set of DB_HOST, DB_USER, DB_PASSWORD variables inside serverless.yml?

I am not sure what best practice is either. It seems your research reached the right conclusion. :grinning:

The author of the link you provided talks about Serverless Design Patterns and Best Practices, but only focuses on solutions built with CloudFormation templates.

If you or your team have strong knowledge of CloudFormation templates, it is not a bad idea to manage ALL the code in serverless.yml.

It will depend on the skill sets in your team.

I personally have solid knowledge and several years of project experience with HashiCorp Terraform, so in my serverless projects I only manage serverless-related resources with the Serverless Framework, and let Terraform take care of the rest of the infrastructure as code.

The reason is that CloudFormation YAML is hard to write and takes a lot of time to maintain, so I always try to avoid writing CFN. With the current features in the Serverless Framework, a lot of infrastructure resource types are missing, so most of the time, if the resource is not API Gateway, Lambda, or DynamoDB, I just copy the CloudFormation YAML directly into serverless.yml. There are Serverless plugins that can handle extra jobs easily, such as managing domain names, but I will only add these plugins if doing it with CFN would take too much time.

For layer management, of course, you need several stacks: VPC, databases/Redis/Memcache/Elasticsearch/etc., and application layers.

For application layers, you can manage one serverless.yml (or several serverless.yml files) with different custom options for different stages (such as dev/uat/prod). I discussed this in another ticket before: Manage variables for deploying a serverless project with different environments; take a look.

If you need to reference another sls stack’s outputs, please read this: https://serverless.com/framework/docs/providers/aws/guide/variables/#reference-cloudformation-outputs
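For the DB_HOST/DB_USER question upthread, the `${cf:stackName.outputKey}` syntax from that page looks roughly like this sketch (stack and output names are hypothetical, and a password is better kept in SSM or Secrets Manager than in a stack output):

```yaml
# The CloudFormation stack (cf-dev.yml / cf-prod.yml) declares outputs, e.g.:
#
#   Outputs:
#     DBHost:
#       Value: !GetAtt Database.Endpoint.Address
#
# serverless.yml then reads them for the active stage:
provider:
  environment:
    DB_HOST: ${cf:backing-${self:provider.stage}.DBHost}
    DB_USER: ${cf:backing-${self:provider.stage}.DBUser}
    # DB_PASSWORD is better fetched from SSM, e.g. ${ssm:/myapp/db-password}
```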

I wouldn’t manage anything infrastructure-related in Serverless other than for very tiny starter projects. Sls is great for managing APIs and Lambda functions, but you’re going to get a lot more mileage out of your infrastructure-as-code strategy by having a library of other CloudFormation templates that do everything else. We have master CF templates that build the basics such as VPCs and security groups, a different template for security-related items like the roles and policies used by serverless, and then we just reference those in the serverless templates by passing the ARNs.

We also easily got to the point of dividing up our serverless projects into multiple repos, each with its own serverless.yml, grouping API calls by category. (Any code you would share across repos, you’d put in custom npm packages or just reference by URL directly in package.json.)

Use serverless for what it’s good at; don’t make it responsible for your entire distributed system.


I like managing ALL my infrastructure in serverless.yml, as it allows me to create the base infrastructure and other resources based on serverless variables. I want to have separate databases, S3 buckets, perhaps even VPCs, depending on whether I’m deploying to dev, qa, or prod. I create a serverless.yml file that references a serverless-vars.yml file; that vars file has sections for each stage, [dev], [qa], [prod] and perhaps others. Each of those can specify customized resources.
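The vars-file pattern described above might look like this (the file layout and names below are my guesses, not the poster’s actual setup):

```yaml
# serverless-vars.yml (one section per stage; values are made up):
#
#   dev:
#     bucketName: myapp-dev-uploads
#   prod:
#     bucketName: myapp-prod-uploads
#
# serverless.yml then selects the section for the active stage:
custom:
  vars: ${file(serverless-vars.yml):${self:provider.stage}}

resources:
  Resources:
    UploadsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.vars.bucketName}
```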

For pre-serverless projects, we used CloudFormation but ended up wrapping it with Troposphere to get sane variable handling, using Python to parse .ini files. Doing everything in serverless with variables seems easier to me.


Thanks everyone for your constructive answers.

I see that this is definitely not a one-size-fits-all problem, and I will keep learning by doing.

At the moment, I manage infra in CFN and Lambdas with serverless.


Bit late to contribute here, but FWIW I’ve found it much easier to break down my serverless config using the undocumented serverless.js feature. Perhaps more specifically, I’ve found it much easier to share serverless config this way.

If I’m creating a DynamoDB table, I need a role statement, a table definition, and potentially a plugin like serverless-dynamodb-local. If you use serverless.js you can have an npm module that builds a serverless config object and merge that into your service config. I imagine some would frown on this, saying “this is config, not code”, but I found the YAML restrictions untenable with many distributed services. I imagine serverless.js would open similar doors for you in breaking down a large serverless configuration.

A bit too late here: for the sake of completeness, take a look at Nordstrom’s approach in their canonical “Hello, Retail!” app: https://github.com/Nordstrom/hello-retail
Services are in separate serverless.yml files, with end-to-end deployment wired up by a custom script. As for CF templates (just an opinion), I found serverless quite a convenient way of embedding/including them, as opposed to scripting the end-to-end orchestration outside with custom code or something else.

Thanks for sharing the article, BTW. My thoughts on project structure are in the “Project structure” section of this story: https://medium.com/@dzimine/exploring-serverless-with-python-stepfunctions-and-web-front-end-8e0bf7203d4b

Just thought it might be beneficial to some to share our solution.

We split our code into two stacks: a data stack (contains all resources) and an API stack (contains Lambda functions). We use a --stack parameter for deploy and use yaml-boost to separate our config nicely. The data stack only gets deployed occasionally when there are changes (manually); the API stack gets deployed automatically through CI.

So far this has worked really well for us (we started about three months back).
