I have two functions: one dumps some files into S3, and the other responds to s3:ObjectCreated:* and is supposed to perform some transformations on the data (and perhaps dump it into another bucket).
Right now any bucket used as an event source has to be created during deployment, unless a plugin is used. I wonder whether I should really use that plugin: my buckets are meant to serve as an archive, and I don't think it's a good idea to tie them to the lifetime of any function's deployment. I'm also not sure how tightly this is going to be coupled to the deployment. Can someone help me understand this better?
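For reference, this is roughly what my event configuration looks like (handler and bucket names are just placeholders):

```yaml
functions:
  transform:
    handler: handler.transform
    events:
      - s3:
          # Serverless creates this bucket as part of the deployment
          # because it is declared as an event source
          bucket: my-input-archive
          event: s3:ObjectCreated:*
```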
Also, I'm wondering whether this whole problem comes down to a limitation in CloudFormation/IAM, or whether it's somewhat deliberate. I haven't figured out why input buckets have to be created by the framework while it's up to me to create any output buckets. Am I missing something?
I suppose a better way around this problem might be to simply invoke the second function once the first has finished, as there is no other way for data to appear in the first S3 bucket.
You could also just add the bucket to your resources section.
I’m going to assume you’re dumping the output into another bucket (you only mention this as a “perhaps”) and that’s what you’re calling the output bucket.
You don't need to create the input bucket (the one that triggers the Lambda on s3:ObjectCreated events) because Serverless automatically creates buckets that are specified as an event source.
You need to create the output buckets because they’re not an event source for anything else.
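A minimal sketch of how the two could sit together in serverless.yml (function, bucket, and resource names are made up):

```yaml
functions:
  transform:
    handler: handler.transform
    events:
      - s3:
          # created automatically by Serverless because it's an event source
          bucket: my-input-archive
          event: s3:ObjectCreated:*

resources:
  Resources:
    # the output bucket isn't an event source for anything,
    # so you declare it yourself as a plain CloudFormation resource
    OutputArchiveBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-output-archive
```

Keep in mind that both buckets then live in the service's CloudFormation stack, so a `serverless remove` will try to delete them (the deletion fails if a bucket still contains objects), which is worth weighing if they're meant to be long-lived archives.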