Centralize S3 Deploy Bucket

Hi,

Let me start off by saying that I love the Serverless Framework, and I’m excited to see how it progresses alongside the SAM (Serverless Application Model) work that AWS is doing.

My question is: how can I configure the Serverless Framework to use sub-folders in one central S3 bucket instead of creating a new bucket every time? I don’t want a bunch of S3 buckets floating around, and given their names need to be globally unique, it doesn’t seem feasible long-term.

Configure the deploymentBucket property in the provider section of your serverless.yml.
example:

provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1
  deploymentBucket: my.${self:provider.region}.serverless.deploys

You can see it in the Serverless Docs here.

Does the deploymentBucket property refer to an existing bucket? My experiments show it does. I want it to be created as part of the deployment (to follow a good naming convention like Deamoner’s, yet simplify deployment). I tried to create the bucket in the resources section and reference it here, but that doesn’t seem to work. What is the recommended way? And shouldn’t the deploymentBucket directive create the bucket if one doesn’t exist?
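
For reference, this is roughly what I tried (just a sketch; the DeploymentBucket logical ID is my own placeholder name):

resources:
  Resources:
    # Hypothetical attempt: declare the deployment bucket inside the same stack
    DeploymentBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my.${self:provider.region}.serverless.deploys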

The deploymentBucket just defines a bucket (a parent folder, in effect) that serverless will then store its deployments inside of.
In my example above, it creates a bucket named my.us-east-1.serverless.deploys,
then creates its normal folder structure inside that bucket.
example:
/my.us-east-1.serverless.deploys/serverless/my-service/dev/1495754005038-2017-05-25T23:13:25.038Z/(zip & json files)

So it’s
deploymentBucket/serverless/<service-name>/<stage>/<uid datetime>/<files>
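
If it helps, you can see that layout with the AWS CLI (the bucket and service names below are from my example above, so substitute your own):

# List every deployment artifact serverless has stored for the dev stage
aws s3 ls s3://my.us-east-1.serverless.deploys/serverless/my-service/dev/ --recursive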

Hope that helps.

What happens if I set the deploymentBucket on a service that is already deployed? I’m currently using the default behavior but would like to consolidate deployments under a single parent bucket.

I just whipped up a “very” simple serverless project to test this out and found that
you must manually delete the contents of the original deployment bucket, or you’ll get a failed deploy because CloudFormation is trying to delete the bucket and it’s not empty.
Other than that it appears to work fine.
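
For anyone else trying this, what worked for me was roughly the following (the old bucket name is a placeholder for whatever serverless auto-created for you):

# Empty the auto-created bucket so CloudFormation can delete it
aws s3 rm s3://my-service-dev-serverlessdeploymentbucket-abc123 --recursive
# Then redeploy with deploymentBucket set
sls deploy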

Thanks @bfieber for confirming this. It seems that the Serverless Framework keeps state about my service: whenever I deploy, it computes a delta between the last deployed state and the new one. It is unclear to me where this state is saved and how it can be shared with my teammates.

  • Is it in the Serverless Platform? I’m logged into that with my GitHub account.
  • Is it in the shared AWS account? If so, is it the deployment bucket? – I hope the state is in the AWS account.

The reason I’m mentioning my confusion is that it relates to the state of the deployment bucket before and after the change. Source control will have the definition after the change, but not the previous state.

In my setup, I expect each developer on the team will have their own Serverless Platform credentials tied to their GitHub account, and their own AWS credentials mapping to the same AWS account but to different IAM users, all with appropriate access to manage the service.

If SF manages pre/post state, then why does it fail to delete the old bucket? Anyway, per your description I can change the deploymentBucket, delete the old bucket, and then run “sls deploy”, and everything should work with the same Lambda function / API Gateway / etc.

Thanks.

I was able to make the transition to use a deployment bucket, and I figured out how things work under the hood. The Serverless Framework defines a new CloudFormation template and submits it, and AWS handles the transition from the previous stack to the new one. So anyone who connects to the same account and attempts to update the service would update the same CloudFormation stack, and hence the same service.
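
If you want to verify that yourself, the stack follows the <service>-<stage> naming convention, so something like this works (the stack name here is assumed):

# Inspect the shared CloudFormation stack that holds the service’s state
aws cloudformation describe-stacks --stack-name my-service-dev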

@bfieber, thanks for sharing!
Actually, I have another related question in that regard.

As you mentioned, serverless creates deployment package files under: deploymentBucket/serverless/<service-name>/<stage>/<uid datetime>/

I was thinking we could leverage that <uid datetime> value as a version for CI/CD purposes. When a CF stack with the actual Lambda code is tested and confirmed, instead of making a new deployment we could ‘promote’ exactly the same package to another env (e.g. TEST -> PROD). So my question is whether this value is available at run-time during deployment, so we can hook into the deployment pipeline and persist it as a service version. And assuming we have it, is there any way (a plugin?) to deploy straight from that remote S3 folder? I know there is an option to create the actual deployment package with sls locally and deploy it as a next step, but I’m curious whether we can do the same with S3.
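
To make the local option concrete, here’s a sketch of the package-then-deploy flow I mentioned (the paths are placeholders I made up; as far as I know there’s no built-in way to deploy directly from an S3 folder):

# Build the deployment artifacts once and keep them as a versioned package
sls package --stage test --package ./artifacts/build-1234

# Later, deploy that exact package (stage-specific config was baked in at package time)
sls deploy --stage test --package ./artifacts/build-1234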

I’d appreciate your thoughts on that.
Thank you!

The problem I see with that strategy is that all the variable magic that happens in the sls package/deploy is stage/env specific. So any entries in your serverless.yml file such as:

provider:
  stage: ${opt:stage, self:custom.default_stage}

would still point to the stage/env that you originally built for (e.g. TEST),
so you would still need to sls package/deploy again to update those configuration properties to point to PROD.

That’s a very simple example, but it gets much more complicated if/when you start layering in references to other Resources or exported/imported CF variables.
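
For instance, something like this (a made-up cross-stack reference; other-service and TableName are placeholders) gets resolved against a stage-specific stack at package time:

custom:
  # Resolved when you run sls package, against that stage’s stack outputs
  tableName: ${cf:other-service-${self:provider.stage}.TableName}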

Personally, I commit the code & configuration variables to a code repository, but require an sls package/deploy any time I push code to any stage/environment.

@bfieber, valid point regarding the env-specific configuration pieces.

Actually, our intent is to introduce some sort of versioning per sls service, so we can always say what’s deployed on a particular environment. And this timestamp-based folder structure is pretty close conceptually to artifact-based deployments, which might not be applicable to the serverless deployment model.

Anyway, I really appreciate your feedback.
Thanks!
