When deploying a new AWS service, the deployment bucket (if defined) has to exist beforehand, while the S3 event bucket must NOT exist. One is the opposite of the other. Isn’t that weird?
Not really.
People who need to specify the deployment bucket tend to work in corporations with DevOps teams that create these resources for them. For everyone else, the name of the deployment bucket is meaningless, so they just let Serverless handle naming and creating the bucket.
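For illustration, here’s a minimal serverless.yml sketch showing both roles (the service, bucket, and handler names are placeholders, not taken from any real project). Serverless expects `my-deploy-bucket` to already exist, while it creates `my-uploads-bucket` itself as part of the stack:

```yaml
service: my-service            # placeholder

provider:
  name: aws
  runtime: nodejs18.x          # placeholder runtime
  # Must already exist -- typically provisioned for you by a DevOps team.
  deploymentBucket:
    name: my-deploy-bucket

functions:
  resize:
    handler: handler.resize
    events:
      # Serverless creates this bucket in the CloudFormation stack,
      # so it must NOT already exist.
      - s3:
          bucket: my-uploads-bucket
          event: s3:ObjectCreated:*
```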
While Serverless defines the S3 event as part of the function in the CloudFormation template, the relationship actually runs in the opposite direction: your S3 bucket is configured to trigger a Lambda in response to events. This means that if your S3 bucket isn’t part of the CloudFormation stack managed by Serverless, you can’t use the bucket as an event source for the function in Serverless. For S3 buckets set up outside of Serverless you can still trigger functions deployed by Serverless, but you need to configure the triggers on the bucket yourself, after you’ve deployed the functions via Serverless.
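Wiring up an existing bucket to an already-deployed function takes two steps: grant S3 permission to invoke the function, then attach the notification configuration to the bucket. A sketch with the AWS CLI (all names, ARNs, and the account ID are placeholders; this assumes credentials with the relevant S3 and Lambda permissions):

```shell
# 1. Allow the bucket to invoke the function.
aws lambda add-permission \
  --function-name my-service-dev-resize \
  --statement-id s3-invoke-resize \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::my-existing-bucket

# 2. Tell the bucket to send object-created events to the function.
aws s3api put-bucket-notification-configuration \
  --bucket my-existing-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-service-dev-resize",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```

Note the ordering: the permission has to exist before S3 will accept the notification configuration, which is why the functions must be deployed first.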
My natural inclination was to define the deployment bucket, which on reflection was a mistake. The idea was that my architecture would be more explicitly defined in the code.
You mentioned that I can reconfigure an existing bucket to trigger an existing lambda function. I guess my question then is, if I can do it, why can’t Serverless?
You can log into the console and change the settings on any resource in your account.
Serverless uses CloudFormation to deploy the stack so it’s limited to resources that it creates.
Having said that… I’m sure someone will now build a plugin for Serverless that allows you to do exactly this by making API calls after the CloudFormation stack has been deployed.
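If someone did build such a plugin, the skeleton might look roughly like this: hook into `after:deploy:deploy` and call the S3 API through the provider. Everything here is a hypothetical sketch, not a published plugin; the `custom.existingBucketEvents` config key and all names are invented for illustration:

```javascript
'use strict';

// Hypothetical plugin sketch: after the CloudFormation stack is deployed,
// attach an S3 event notification to a bucket that already exists
// outside the stack.
class ExistingBucketEvents {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;
    this.provider = serverless.getProvider('aws');
    this.hooks = {
      // Runs once the stack (and therefore the functions) is deployed.
      'after:deploy:deploy': this.attachNotifications.bind(this),
    };
  }

  async attachNotifications() {
    // Assumed config shape in serverless.yml:
    //   custom:
    //     existingBucketEvents:
    //       - bucket: my-existing-bucket
    //         functionArn: arn:aws:lambda:...:function:my-service-dev-resize
    const custom = this.serverless.service.custom || {};
    const events = custom.existingBucketEvents || [];
    for (const ev of events) {
      await this.provider.request('S3', 'putBucketNotificationConfiguration', {
        Bucket: ev.bucket,
        NotificationConfiguration: {
          LambdaFunctionConfigurations: [
            { LambdaFunctionArn: ev.functionArn, Events: ['s3:ObjectCreated:*'] },
          ],
        },
      });
    }
  }
}

module.exports = ExistingBucketEvents;
```

A real plugin would also need to call `lambda:AddPermission` so the bucket is allowed to invoke the function, but the shape above is the essence: plain API calls made after CloudFormation is done, outside the stack’s control.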