The best practice for resource provisioning

Oftentimes I need to create various AWS resources (an S3 bucket, SNS topic, SQS queue, or IAM role) for a Lambda to consume. I tend to define all the required resources in the `resources` section.

I wonder if that is the correct way to do it, or should I use an automation tool like Ansible to provision the required resources before deploying my serverless service?

The downside of defining them in the `resources` section is that I need to follow CloudFormation syntax, which is not as concise as doing it with Ansible.

I don’t think there is one right or wrong answer to this.

So far I’ve tended to include extra resources in the yml with my lambda(s) if those components belong together.

Recently there have been occasions where we’ve deployed components separately: in one instance a Kinesis stream was shared between elements of a larger system, and on another occasion we had a DynamoDB table that we needed to ensure persisted irrespective of the wider service being removed.
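For the DynamoDB case, one way to make the table survive removal of the wider stack is CloudFormation’s `DeletionPolicy`. A minimal sketch, with illustrative resource and table names:

```yaml
resources:
  Resources:
    PersistentTable:                 # illustrative logical name
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain         # table is left in place on `serverless remove`
      Properties:
        TableName: persistent-table-${opt:stage, self:provider.stage}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```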

However, I’m not sure you need Ansible or another tool to do this. To date we’ve managed quite well with a separate serverless yml containing no handlers and just resources; this will create a separate CloudFormation stack for you.
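To illustrate, a resources-only serverless.yml might look something like this (service, region, and resource names are all illustrative):

```yaml
# shared-resources/serverless.yml: no functions block, just infrastructure.
service: shared-resources            # illustrative name

provider:
  name: aws
  region: eu-west-1                  # assumption; use your own region

resources:
  Resources:
    SharedStream:                    # e.g. a Kinesis stream shared across services
      Type: AWS::Kinesis::Stream
      Properties:
        Name: shared-stream-${opt:stage, 'dev'}
        ShardCount: 1
```

Running `serverless deploy` against this file creates its own CloudFormation stack, independent of the services that later consume the stream.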

Hope that helps!


Thanks for your reply.
So you deploy the resources sitting in a separate yml file first, before deploying the main serverless.yml?

Again, not sure it matters too much, so long as both the service and the resources are deployed before you try and use the service :slight_smile:

That said, if we are deploying our resources separately from our service then it is likely to be done first, as a one-off deployment outside of our CD pipeline.

I recommend making multiple YAML files. I have one to set up all of the static parts of my deployment, things that rarely change like CloudFront. A second sets up the IoT portion of my system, and a third covers all of the Lambdas used by the web app. My three files are designed so that they can be deployed independently.

Also, my deployments write out local config files which the other YAML files read. For example, the ID of a Cognito User Pool created by the static YML file is read by the Lambda YAML file.

We also have separate YAML files, and we use a deploy parameter to select the desired stack. We further use yaml-boost, since there are some limitations when it comes to merging YAML files in Serverless configs.

Thanks for your advice.

I wonder if you can show me a yml example of how you structure them and deploy them separately?


The key bit is writing an output file. Put the things that don’t change (CloudFront, S3 buckets, Cognito Pools, etc.) in one YML file. When it creates those static things, write the IDs out to a local file. In a second YAML file put your volatile stuff, like your API, and read in the IDs for the static items. I might only run my static YAML file once a month.

Here is how to read from a file. This reads the developer’s name so they can have a custom domain name.

    # Reading the developer name into an environment variable
    # (custom $<…> variable syntax, defined further down):
    WEBBUCKET: 'www.$<file(../config/developer.json):developer>'

    # ...and referencing that variable elsewhere in the config, e.g. in a bucket list:
        - $<self:provider.environment.WEBBUCKET>

More details on how to write variables to files…

    provider:
      variableSyntax: '\$<([ :a-zA-Z0-9._,\\-\\/\\(\\)]+?)>'
      environment:
        EVENT_TABLE: $<self:service>-$<opt:stage, self:provider.stage>-event
        OWNER_TABLE: $<self:service>-$<opt:stage, self:provider.stage>-owner
        USER_TABLE: $<self:service>-$<opt:stage, self:provider.stage>-user
        WEBBUCKET: 'www.$<file(../config/developer.json):developer>'
        VIDEOBUCKET: 'video.$<file(../config/developer.json):developer>'

    plugins:
      - serverless-stack-output

    custom:
      output:
        handler: scripts/output.handler # Same syntax as you already know
        file: ../config/stack.json # toml, yaml, yml, and json format is available

    # CloudFormation Outputs that the plugin writes to the file above.
    # The Outputs key names are illustrative; the Ref / Export pairs are from the post.
    resources:
      Outputs:
        VideoBucketName:
          Value:
            Ref: S3BucketVideo
          Export:
            Name: "S3BucketVideo::BucketName"
        WebBucketName:
          Value:
            Ref: S3BucketWeb
          Export:
            Name: "S3Bucket::BucketName"
        UserPoolId:
          Value:
            Ref: UserPool
          Export:
            Name: "UserPool::Id"
        UserPoolClientId:
          Value:
            Ref: UserPoolClient
          Export:
            Name: "UserPoolClient::Id"
        IdentityPoolId:
          Value:
            Ref: IdentityPool
          Export:
            Name: "IdentityPool::Id"
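To close the loop, the second (volatile) YAML file can then read those IDs back in. A hedged sketch, assuming the plugin wrote keys such as `UserPoolId` and `IdentityPoolId` to `../config/stack.json` and that the same custom `$<…>` variable syntax is configured:

```yaml
# Sketch: the Lambda/API serverless.yml reads IDs the static stack wrote out.
# The key names here are assumptions about the output file's contents.
provider:
  environment:
    USER_POOL_ID: $<file(../config/stack.json):UserPoolId>
    IDENTITY_POOL_ID: $<file(../config/stack.json):IdentityPoolId>
```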

Thank you so much for your examples.