How to configure Serverless to use an existing IAM role and S3 bucket for deployment?

We are facing two challenges with the Serverless deployment feature:

  1. Ideally, the development team will not have the admin privileges needed to create S3 buckets and IAM roles through Serverless, so the deployment fails.
  2. How can we configure the Lambda function to use an existing IAM role rather than creating a new role for each deployment?

There is a PR for Custom roles open:

You can set a preexisting bucket:

deploymentBucket: com.serverless.${self:provider.region}.deploys
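In context, that line goes under the provider block of serverless.yml. A minimal sketch (the service name, runtime, and region below are placeholders; any preexisting bucket your credentials can write to works):

```yaml
# serverless.yml (sketch; names and region are placeholders)
service: my-service

provider:
  name: aws
  runtime: nodejs4.3
  region: us-east-1
  # Reuse an existing bucket instead of letting Serverless create one:
  deploymentBucket: com.serverless.${self:provider.region}.deploys
```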

Should we follow a specific naming pattern, or can I put something like:
deploymentBucket: test-bucket

You can use any bucket name you want there. One thing you need to make sure of is that the bucket is in the same region as the Lambda function, so I would put the region in the bucket name, but it's really up to you.

Thank you, will try that out.

As per the AWS documentation, S3 does not need a region selection and is global.

For some reason, this does not work for me! I am using serverless 1.0.0-rc.2. My configuration is as below:

profile: ${file(…/deployment-env.yml):custom.${self:custom.myStage}.profile}
name: aws
runtime: nodejs4.3
iamRoleARN: ${file(…/deployment-env.yml):iamRoleARN}
deploymentBucket: ${file(…/deployment-env.yml):deploymentBucket}

The bucket name will be read from another YML file, as above. The relevant entry in that file is:
deploymentBucket: test-serverless-deployment

@deepu.sundar If you don't provide a region, it will create the bucket in us-east-1. If you want to deploy your functions somewhere else, make sure the bucket is there:
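For example, a bucket for a eu-west-1 deployment can be created ahead of time with the AWS CLI (the bucket name is a placeholder, and your credentials need s3:CreateBucket):

```shell
# Create the deployment bucket in the same region as the functions
# (placeholder name; S3 bucket names are globally unique).
aws s3 mb s3://my-service-eu-west-1-deploys --region eu-west-1

# Verify which region the bucket lives in:
aws s3api get-bucket-location --bucket my-service-eu-west-1-deploys
```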

I just ran into another strange behavior.
Based on the comments on the feature:

“After the deploy you will see the S3 resource created initially, then removed from the CF file and the bucket created and deleted”

Does this mean that an S3 bucket with a randomly generated name will be created anyway, even if you explicitly define deploymentBucket in serverless.yml?
I am asking because I found a lot of S3 buckets in the AWS console, but all of them except the specified one are empty.
This might be due to the ‘Retain’ policy I assigned earlier, but still: my expectation is that sls does not create a bucket when one is specified. Is this correct?

Thank you!

Your first link is dead. The new one is:


I seem to have run into an issue doing a serverless deploy when using deploymentBucket to specify a preexisting S3 bucket. Does this work in version 1.3.0?

I’ve added GitHub issue 2888, but I’m not sure if it’s a bug or something I’m doing wrong.

Here is a snippet from my serverless.yml:

name: aws
runtime: python2.7
stage: dev
region: us-west-2
deploymentBucket: my-cool-bucket
profile: sdrad-workflow-dev

Here is the error I get.

$ serverless deploy -v

Serverless Error ---------------------------------------

 self signed certificate in certificate chain

Get Support --------------------------------------------

Your Environment Information -----------------------------
OS: darwin
Node Version: 7.2.1
Serverless Version: 1.3.0
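That "self signed certificate in certificate chain" error is usually the Node.js AWS SDK rejecting a corporate proxy's self-signed certificate, not a deploymentBucket problem. One possible workaround (NODE_EXTRA_CA_CERTS is standard Node.js from 7.3.0 onward; the certificate path below is a placeholder) is to point Node at your organization's CA bundle before deploying:

```shell
# Trust the corporate CA (path is a placeholder for your CA bundle):
export NODE_EXTRA_CA_CERTS=/path/to/corporate-ca.pem
serverless deploy -v
```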

Hi, were you able to resolve your problem? Would you mind sharing the solution if so? Thank you.

In my case, the bucket I was originally trying to use for deployment required server-side encryption. The admins had a policy on the bucket that requires me to upload objects with SSE: AES256. Serverless only supports specifying a deployment bucket, with no options for how Serverless uploads to that bucket.

In that ticket I suggested we implement an object form for deploymentBucket, or a deploymentBucketOptions key, so we can specify S3 upload parameters like sse or sseKmsKeyId.

Probably not an issue for most people but on my end I don’t have much control over the bucket policies.
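For reference, a configuration along the lines the ticket asks for would look something like the sketch below. An object form of deploymentBucket with serverSideEncryption was later added to the framework, but verify the exact field names against the docs for the version you run:

```yaml
# serverless.yml sketch: deployment bucket with upload options
# (field names should be checked against your framework version).
provider:
  name: aws
  deploymentBucket:
    name: my-cool-bucket
    # Upload deployment artifacts with SSE, satisfying bucket
    # policies that reject unencrypted PutObject requests:
    serverSideEncryption: AES256
```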


I definitely feel your pain; this is my most recent blocker for using Serverless at work :frowning:

Has this been solved yet? It is effortless to select an existing role in the AWS web console.