I apologize if this has been covered a lot in the past; I’ve spent a long time googling lately and I just can’t wrap my head around it for some reason. I’m actually struggling to find a succinct way of asking my question in the first place, so apologies ahead of time, and thanks for any thoughts / advice!
I work with a team that prefers to use the “alias” concept to separate dev, QA, and production environments. The terminology gets confusing between the Serverless Framework and AWS, but basically the idea is to use a single stage for all of the environments, and then use aliases to point different versions of a Lambda function at a given API Gateway stage.
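For context, here’s a minimal sketch of what I mean by that setup. The service and handler names are just illustrative, and this assumes the community serverless-aws-alias plugin, which adds an `--alias` option so each environment is an alias on the same underlying stage:

```yaml
# Illustrative only — single stage, one alias per environment,
# via the serverless-aws-alias plugin.
service: import-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: prod            # the single "real" stage

plugins:
  - serverless-aws-alias

functions:
  importProcessor:
    handler: handler.process

# Each environment is then deployed as an alias, e.g.:
#   serverless deploy --alias dev
#   serverless deploy --alias qa
```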
This seems to work OK, or at least it did, until I started dealing with Lambda entry points other than API Gateway. Now I’m using S3 to process files when they are uploaded, and the issue is that the concept of a “stage” doesn’t exist within event triggers, as far as I’m aware. In other words, when I set up an event trigger, I select a Lambda function, but I can’t select an “alias” or “stage” from a dropdown; it’s just a function name.
This leads me to wonder: what is the correct way of handling this? I’ve read that ideally there would be entirely separate AWS accounts for the different environments, to really isolate things, but for now that’s not likely to happen. I’m not sure what the best course of action is, given this will all live within one AWS account.
What I’m FAIRLY certain of (and would love confirmation of!) is that I’m going to need to create different functions (with different function names) for each environment to handle the S3 events. In other words, I’ll need `import-process-QA`, etc. This seems to be the only way I can (within one account) make sure that when I deploy an update to dev (for example) I’m not also updating QA / production (again, an implied question).
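If it helps, this is roughly what I imagine that per-environment approach would look like in a `serverless.yml` (names and paths are made up). Since the framework’s default function naming is `{service}-{stage}-{functionName}`, interpolating the stage means a dev deploy can’t touch the QA or prod copies:

```yaml
# Sketch only — stage-specific function and bucket names,
# so each environment gets its own S3 trigger.
service: import-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}

functions:
  importProcess:
    # deploys as import-service-dev-importProcess,
    # import-service-qa-importProcess, etc.
    handler: handler.process
    events:
      - s3:
          bucket: import-${self:provider.stage}   # hypothetical bucket name
          event: s3:ObjectCreated:*
```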
To take this further, although this is going to seem super simplistic, I’m just now starting to realize that the concepts of “stage” and “alias” are really only useful for API Gateway and Lambda. Outside of that (S3, DynamoDB), “stage” and “alias” don’t really mean anything. I mean, sure, I can add a tag to an S3 bucket or DynamoDB table to indicate its stage, but that doesn’t really DO anything (?).
To put this another way: if I have an S3 bucket called `import`, and I have a function called `import-processor` that is triggered on an S3 event, then if I deploy with `--stage=ACC`, it won’t matter in terms of those buckets and the event. There will only be one bucket, and the bucket will trigger the same event, so setting the stage to DEV or ACC in that instance effectively does nothing (?). Is that true?
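Just to make that concrete, here’s the kind of config I’m picturing (again, illustrative names). With a hard-coded bucket name, the stage never appears anywhere in the S3 wiring, so every `--stage` deploy seems to end up pointing at the same bucket:

```yaml
# Sketch only — a fixed bucket name shared by every stage.
functions:
  importProcessor:
    handler: handler.process
    events:
      - s3:
          bucket: import            # fixed name, no stage in it
          event: s3:ObjectCreated:*

# vs. interpolating the stage so each environment gets its own bucket:
#         bucket: import-${opt:stage}
```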
Thanks for reading and your thoughts!