Say I have a function triggered by an s3 bucket with a dynamic name:
functions:
  foo:
    handler: handler.foo
    events:
      - s3: my-bucket-${self:provider.stage}
And I would like to add cors configuration to this s3 bucket.
Within the resources property I can hardcode the name:
resources:
  Resources:
    S3BucketMyBucketDev:
      Type: AWS::S3::Bucket
      Properties:
        CorsConfiguration:
          CorsRules:
            - AllowedHeaders:
                - "*"
              AllowedMethods:
                - GET
              AllowedOrigins:
                - "*"
But if I want to make the name dynamic using the Properties.BucketName property, I get an error that the bucket already exists in the stack.
resources:
  Resources:
    S3BucketMyBucketDev:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-${self:provider.stage}
        CorsConfiguration:
          CorsRules:
            - AllowedHeaders:
                - "*"
              AllowedMethods:
                - GET
              AllowedOrigins:
                - "*"
Is this a current limitation or am I doing something wrong?
rowanu
November 2, 2016, 12:05am
Unfortunately it looks like you’ve hit a current limitation: Serverless automatically creates any buckets mentioned in your events configuration, which is why you’re getting a conflict with the bucket you’ve defined in your resources section. A quick check of the CFN docs makes it look like you can’t create CorsConfiguration outside of the Bucket resource.
I think you’re going to need to raise an issue for this if you want it to work. You basically want this but for S3.
I had the same issue and ended up following the guidance from @eahefnawy in issue #2967.
The answer is not to define an S3 event on the function (since Serverless then attempts to create a new S3 bucket), but to manually define the NotificationConfiguration in the S3 bucket resource, along with a corresponding Lambda permission resource. (This solution relies on the CloudFormation naming convention Serverless uses for Lambda functions.) In your case, it would look something like:
functions:
  foo:
    handler: handler.foo

resources:
  Resources:
    S3BucketMyBucketDev:
      DependsOn:
        - FooLambdaPermissionS3BucketMyBucketDevS3
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-${self:provider.stage}
        CorsConfiguration:
          CorsRules:
            - AllowedHeaders:
                - "*"
              AllowedMethods:
                - GET
              AllowedOrigins:
                - "*"
        NotificationConfiguration:
          LambdaConfigurations:
            - Event: "s3:ObjectCreated:*"
              Function:
                "Fn::GetAtt": [ FooLambdaFunction, Arn ]
    FooLambdaPermissionS3BucketMyBucketDevS3:
      DependsOn:
        - FooLambdaFunction
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          "Fn::GetAtt": [ FooLambdaFunction, Arn ]
        Action: "lambda:InvokeFunction"
        Principal: "s3.amazonaws.com"
        SourceArn: "arn:aws:s3:::my-bucket-${self:provider.stage}"
rowanu
January 13, 2017, 10:56pm
Nice. I like how that works out - the defaults are reasonable, but can be overridden if needed.
In case anybody struggles with this: I continued to receive the validation error because I was using the full bucket ARN in the SourceArn property for the permission. After I copied the example and concatenated it with my bucket name, everything worked.
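For anyone skimming, the relevant piece is the permission from the example above; the bucket name my-bucket-${self:provider.stage} is the example’s, so substitute your own:
FooLambdaPermissionS3BucketMyBucketDevS3:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName:
      "Fn::GetAtt": [ FooLambdaFunction, Arn ]
    Action: "lambda:InvokeFunction"
    Principal: "s3.amazonaws.com"
    # a plain ARN string built from the bucket name; this is the part that mattered for me
    SourceArn: "arn:aws:s3:::my-bucket-${self:provider.stage}"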
I tried @mfrankwork’s solution but am getting a cryptic error:
An error occurred: DiyBucket - Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: 6516C03FC6506837; S3 Extended Request ID: RD1gHlYOOA+DRL7J8vg1MIf6xnPkDQ28p+lAkZOzgJijzalp/z6i1u1CaXUslLIzrmZ6Y4glYFE=).
My complete serverless.yml:
service: vg-diy-bucket

plugins:
  - serverless-dotenv-plugin

provider:
  name: aws
  stage: ${{opt:stage, 'dev'}}
  region: ${{opt:region, 'us-east-1'}}
  # use ${{}} to access serverless variables
  # this is necessary because cloudformation uses ${} syntax
  variableSyntax: "\\${{([ ~:a-zA-Z0-9._\\'\",\\-\\/\\(\\)]+?)}}"
  runtime: nodejs8.10
  memorySize: 512

custom:
  userFolderName: alpha

functions:
  transformOnBucketUpload:
    handler: dist/transform.onBucketUpload
    reservedConcurrency: 100
    # events:
    #   - s3:
    #       bucket: vg-diy-bucket-dev-diybucket-14aa5jo8lpn0j # fixme
    #       event: s3:ObjectCreated:*
    #       # rules:
    #       #   - prefix: input/
    #       #   - suffix: .sketch
    # environment:
    #   sloppy: true
    #   outputBucketArn:
    #     Fn::GetAtt: [DiyBucket, Arn]
    # iamRoleStatements:
    #   - Effect: Allow
    #     Action:
    #       - s3:*
    #     Resource:
    #       - Fn::GetAtt: [DiyBucket, Arn]
    #       - Fn::Join: ['', [Fn::GetAtt: [DiyBucket, Arn], '/*']]

resources:
  Resources:
    # lambda permission for the function to be invoked by the s3 bucket
    ResizeLambdaPermissionPhotosS3:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          Fn::GetAtt: [TransformOnBucketUploadLambdaFunction, Arn]
        Principal: s3.amazonaws.com
        Action: lambda:*
        # SourceAccount:
        #   Ref: AWS::AccountId
        SourceArn:
          Fn::GetAtt: [DiyBucket, Arn]
    # the s3 bucket to store sketch files and receive applications
    DiyBucket:
      Type: AWS::S3::Bucket
      Properties:
        NotificationConfiguration:
          LambdaConfigurations:
            - Event: "s3:ObjectCreated:*"
              Function:
                Fn::GetAtt: [TransformOnBucketUploadLambdaFunction, Arn]
    # the admin user that can create bucket users
    DiyBucketAdminUser:
      Type: AWS::IAM::User
      Properties:
        LoginProfile:
          Password: ${{env:DIY_BUCKET_ADMIN_PASSWORD}}
          PasswordResetRequired: false
        Groups:
          - Ref: DiyBucketAdminUserGroup
    # the policy group that bucket admin users should be assigned to
    DiyBucketAdminUserGroup:
      Type: AWS::IAM::Group
      Properties:
        Policies:
          - PolicyName: DiyBucketAdminUserGroupPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                # required for user to use AWS S3 Console web ui
                - Sid: AllowGroupToSeeBucketListInTheConsole
                  Action:
                    - s3:ListAllMyBuckets
                    - s3:GetBucketLocation
                  Effect: Allow
                  Resource:
                    - arn:aws:s3:::*
                - Sid: AllowFullDiyBucketS3Access
                  Action:
                    - s3:*
                  Effect: Allow
                  Resource:
                    - Fn::GetAtt: [DiyBucket, Arn]
                    - Fn::Join: ['', [Fn::GetAtt: [DiyBucket, Arn], '/*']]
                - Sid: AllowFullIAMAccess
                  Action:
                    - iam:*
                  Effect: Allow
                  Resource: '*'
    # the policy group that bucket users should be assigned to
    # see https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
    DiyBucketUserGroup:
      Type: AWS::IAM::Group
      Properties:
        Policies:
          - PolicyName: DiyBucketUserGroupPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                # required for user to use AWS S3 Console web ui
                - Sid: AllowGroupToSeeBucketListInTheConsole
                  Action:
                    - s3:ListAllMyBuckets
                    - s3:GetBucketLocation
                  Effect: Allow
                  Resource:
                    - arn:aws:s3:::*
                - Sid: AllowRootAndHomeListingOfCompanyBucket
                  Action:
                    - s3:ListBucket
                  Effect: Allow
                  Resource:
                    - Fn::GetAtt: [DiyBucket, Arn]
                  Condition:
                    StringEquals:
                      s3:prefix:
                        - ''
                        - '${{self:custom.userFolderName}}/'
                      s3:delimiter:
                        - '/'
                - Sid: AllowListingOfUserFolder
                  Action:
                    - s3:ListBucket
                  Effect: Allow
                  Resource:
                    - Fn::GetAtt: [DiyBucket, Arn]
                  Condition:
                    StringLike:
                      s3:prefix:
                        - '${{self:custom.userFolderName}}/${aws:username}/*'
                        - '${{self:custom.userFolderName}}/${aws:username}'
                - Sid: AllowInputSketch
                  Action:
                    - s3:GetObject
                    - s3:PutObject
                    - s3:DeleteObject
                  Effect: Allow
                  Resource:
                    - Fn::Join: ['', [Fn::GetAtt: [DiyBucket, Arn], '/${{self:custom.userFolderName}}/${aws:username}/input/*.sketch']]
                - Sid: AllowDownloadOutput
                  Action:
                    - s3:GetObject
                    - s3:DeleteObject
                  Effect: Allow
                  Resource:
                    - Fn::Join: ['', [Fn::GetAtt: [DiyBucket, Arn], '/${{self:custom.userFolderName}}/${aws:username}/output/*']]
                - Sid: DenyInputOutputFolderDeletion
                  Effect: Deny
                  Action:
                    - s3:DeleteObject
                  Resource:
                    - Fn::Join: ['', [Fn::GetAtt: [DiyBucket, Arn], '/${{self:custom.userFolderName}}/${aws:username}/input/']]
                    - Fn::Join: ['', [Fn::GetAtt: [DiyBucket, Arn], '/${{self:custom.userFolderName}}/${aws:username}/output/']]
Can you please elaborate on this? I am still getting the error "Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument;"
My permission in YAML looks as follows:
rLambdaPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !GetAtt rLambdaFunction.Arn
    Action: lambda:InvokeFunction
    Principal: s3.amazonaws.com
    SourceAccount: !Sub ${AWS::AccountId}
    SourceArn: !Sub 'arn:aws:s3:::${pBucket}-${AWS::AccountId}-${AWS::Region}-us-east-1'
Thanks
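(For reference, the working example earlier in this thread also adds a DependsOn from the bucket to the permission, so the permission exists before S3 validates the notification. A minimal sketch of that applied here; rBucket is a placeholder for your actual bucket resource, and the event and function details are assumed from your snippet:)
rBucket:                       # placeholder logical ID for your bucket
  DependsOn:
    - rLambdaPermission        # create the permission before the notification is validated
  Type: AWS::S3::Bucket
  Properties:
    NotificationConfiguration:
      LambdaConfigurations:
        - Event: "s3:ObjectCreated:*"
          Function: !GetAtt rLambdaFunction.Arn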
I’ve used this and appreciated it, thanks!
As of Jan 2020 there is an easier way to do this, described here. Here’s how that would look:
functions:
  foo:
    handler: handler.foo
    events:
      - s3:
          bucket: s3bucketmybucketdev # must be lowercase
          event: s3:ObjectCreated:*

provider:
  s3:
    s3bucketmybucketdev:
      name: my-bucket-${self:provider.stage}
      corsConfiguration:
        CorsRules:
          - AllowedHeaders:
              - "*"
            AllowedMethods:
              - GET
            AllowedOrigins:
              - "*"
Hi @donpedro, I followed this method as per the doc, but it does not work. I can define an s3 event on the lambda and the framework creates a new S3 bucket along with the bucketPolicy I defined in resources, but nothing I added under provider.s3 is set on the S3 bucket.
Is there any special rule when naming/referencing the bucket in provider?
TomC
November 28, 2024, 8:30am
I’d double-check my indentation, and the compiled CloudFormation template: is the expected change there and just not being applied on deploy?