Hi,
for a project with a partner company, I store files in their S3 bucket (already used in production). I now have a requirement for a Lambda function that should be triggered whenever a new file is uploaded to that bucket.
To make installing the function in different environments seamless through my customer's CI, I wanted to use the Serverless Framework. The setup from scratch worked surprisingly frictionlessly, but I am now stuck: our source S3 bucket already exists in production, and when I run the deploy command against it I receive an error that the bucket already exists:
An error occurred: S3Bucket... - ... already exists.
I read that Serverless (or CloudFormation?) keeps track of the resources it has created itself, so that subsequent runs of the deploy command don't try to re-create them, just like a DB migration framework would not execute the same migration twice. In my experience these kinds of tools usually provide something like a --fake option to mark the state as in sync without actually applying anything.
Is there anything comparable in the serverless command options that I am not aware of? Or is there a best practice that you can recommend?
I’m not sure it’s a “best practice”, but we split our serverless files into “functions” and “infrastructure”. The infrastructure files are deployed once to build the infrastructure, and the function files are redeployed as the functions change.
Our function files reference our infrastructure files.
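To make that split concrete, here is a minimal sketch (service, function, and output names are all hypothetical); the function stack reads values out of the infrastructure stack's CloudFormation outputs via the Serverless `${cf:…}` variable:

```yaml
# infrastructure/serverless.yml -- deployed once (hypothetical names)
service: my-infrastructure
provider:
  name: aws
resources:
  Resources:
    UploadBucket:
      Type: AWS::S3::Bucket
  Outputs:
    UploadBucketName:
      Description: "Name of the shared upload bucket"
      Value:
        Ref: UploadBucket
```

```yaml
# functions/serverless.yml -- redeployed whenever the code changes
service: my-functions
provider:
  name: aws
functions:
  onUpload:
    handler: handler.run
    environment:
      # read the bucket name from the infrastructure stack's outputs
      UPLOAD_BUCKET: ${cf:my-infrastructure-dev.UploadBucketName}
```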
A better approach is to use SNS notifications. For example, I have:
./resources/notifications/transform.yml

```yaml
Resources:
  TransformTopic:
    Type: AWS::SNS::Topic
  TransformTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowUploadBucketToPushNotificationEffect
            Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: sns:Publish
            Resource: "*"
      Topics:
        - Ref: TransformTopic
Outputs:
  TransformTopicName:
    Description: "Transform SNS topic name"
    Value:
      Fn::GetAtt:
        - "TransformTopic"
        - "TopicName"
  TransformTopicArn:
    Description: "Transform SNS topic ARN"
    Value:
      Ref: TransformTopic
```
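One thing worth noting about the topic policy: `Resource: "*"` together with the `s3.amazonaws.com` principal lets any bucket publish to the topic. If you want to restrict it to your upload bucket, CloudFormation supports an `ArnLike` condition on `aws:SourceArn` (a sketch; the bucket ARN here is a placeholder):

```yaml
Statement:
  - Sid: AllowUploadBucketToPushNotificationEffect
    Effect: Allow
    Principal:
      Service: s3.amazonaws.com
    Action: sns:Publish
    Resource: "*"
    Condition:
      ArnLike:
        # placeholder -- restrict publishing to your upload bucket only
        aws:SourceArn: "arn:aws:s3:::my-upload-bucket"
```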
./resources/s3/upload.yml
```yaml
Resources:
  UploadBucket:
    DependsOn:
      - "UploadTopic"
      - "ProcessTopic"
      - "TransformTopic"
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: "Enabled"
      NotificationConfiguration:
        TopicConfigurations:
          - Event: s3:ObjectCreated:*
            Topic:
              Ref: UploadTopic
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: uploads/
          - Event: s3:ObjectCreated:*
            Topic:
              Ref: ProcessTopic
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: process/
          - Event: s3:ObjectCreated:*
            Topic:
              Ref: TransformTopic
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: original/transform
  UploadBucketPolicy:
    DependsOn: UploadBucket
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket:
        Ref: UploadBucket
      PolicyDocument:
        Statement:
          - Action:
              - s3:PutObject
            Effect: "Allow"
            Principal:
              "AWS":
                - "arn:aws:iam::#{AWS::AccountId}:root"
            Resource:
              - Fn::Join:
                  - ''
                  - - 'arn:aws:s3:::'
                    - Ref: UploadBucket
                    - '/*.csv'
              - Fn::Join:
                  - ''
                  - - 'arn:aws:s3:::'
                    - Ref: UploadBucket
                    - '/*.tsv'
              - Fn::Join:
                  - ''
                  - - 'arn:aws:s3:::'
                    - Ref: UploadBucket
                    - '/*.txt'
Outputs:
  UploadBucket:
    Value:
      Fn::GetAtt:
        - "UploadBucket"
        - "Arn"
```
This way I am able to attach events to an existing bucket, and my transform.yml is:
./resources/functions/transform.yml
```yaml
handler: bin/transform
description: >-
  `SNS` - `TransformTopic` trigger event
memorySize: 128
timeout: 120
package:
  individually: true
  include:
    - ./bin/transform
environment:
  DESTINATION_BUCKET:
    Ref: UploadBucket
events:
  - sns:
      arn:
        Ref: TransformTopic
      topicName: TransformTopicName
```
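For completeness, the split files above would be pulled into the main serverless.yml roughly like this (a sketch assuming the directory layout shown; the service name is hypothetical):

```yaml
service: transform-service   # hypothetical name

provider:
  name: aws

functions:
  # the function definition lives in its own file
  transform: ${file(./resources/functions/transform.yml)}

resources:
  # CloudFormation resource files are merged into the stack
  - ${file(./resources/notifications/transform.yml)}
  - ${file(./resources/s3/upload.yml)}
```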
Hope this helps.