Function names across AWS stages? Do AWS aliases make sense outside of the context of API Gateway?

Hello!

I apologize if this has been covered a lot in the past; I’ve spent a long time googling lately and I just can’t wrap my head around it for some reason. I’m also struggling to find a succinct way of asking my question in the first place, so apologies ahead of time, and thanks for any thoughts / advice!

I work with a team that prefers to use the “alias” concept to separate dev, QA, and production environments. The terminology gets confusing between the Serverless Framework and AWS, but basically the idea is to use a single stage for all of the environments and then use aliases to point different versions of a Lambda function at a given API Gateway stage.

This seems to work OK, or at least it did, until I started dealing with Lambda entry points other than API Gateway. Now I’m using S3 to process files when they are uploaded, and the issue is that the concept of a “stage” doesn’t exist within event triggers, as far as I’m aware. In other words, when I set up an event trigger, I select a Lambda function, but I can’t select an “alias” or “stage” from a dropdown; it’s just a function name.

This leads me to wonder: what is the correct way of handling this? I’ve read that ideally there would be entirely separate AWS accounts for the different environments, to really isolate things, but for now that’s not likely to happen. I’m not sure what the best course of action is, given that this will all live within one AWS account.

What I’m FAIRLY certain of (and would love confirmation of!) is that I’m going to need to create different functions (with different function names) for each environment to handle the S3 events. In other words, I’ll need import-process-DEV, import-process-QA, etc. This seems to be the only way I can (within one AWS account) make sure that when I deploy an update to dev (for example) I’m not also updating QA / production (again, an implied question).
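
To make that concrete, here’s roughly what I picture (a hypothetical serverless.yml sketch; the service and handler names are made up). If I understand the framework’s defaults correctly, deploying with different --stage values already gives each environment its own function name:

```yaml
# hypothetical serverless.yml -- service and handler names are placeholders
service: import-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}   # picked up from `sls deploy --stage <env>`

functions:
  importProcess:
    handler: handler.process
    # default naming: import-service-dev-importProcess,
    # import-service-qa-importProcess, etc., one per stage
```

So a deploy to dev would only ever touch the dev-named function, which I think is the isolation I’m after.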

To take this further, and although this is going to seem super simplistic, I’m just now starting to realize that the concepts of “stage” and “alias” are really only useful for API Gateway and Lambda. Outside of that (S3, DynamoDB), “stage” and “alias” don’t really mean anything. I mean, sure, I can add a tag to an S3 bucket or DynamoDB table to indicate its stage, but this doesn’t really DO anything (?).

To put this another way: if I have an S3 bucket called import and a function called import-processor that is triggered on an S3 event, then deploying with --stage=DEV or --stage=ACC won’t matter in terms of that bucket and the event. There will only be one bucket, and the bucket will fire the same event, so setting the stage to DEV or ACC in that instance effectively does nothing (?). Is that true?
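
Here’s a made-up fragment of what I mean (the bucket and handler names are hypothetical, and I’m assuming the bucket already exists):

```yaml
# hypothetical serverless.yml fragment -- names are made up
functions:
  importProcessor:
    handler: handler.process
    events:
      - s3:
          bucket: import            # hard-coded bucket name, no stage in it
          event: s3:ObjectCreated:*
          existing: true            # attach to the one bucket that already exists
# whichever --stage I deploy with, the trigger always points at the same
# `import` bucket; the stage never appears in the bucket or the event
```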

Thanks for reading and your thoughts!

API Gateway stages and Lambda aliases have their place, but it’s not “dev”, “staging”, and “production”. Tell the developers they need to deploy these to isolated stages. Ideally that would be different accounts, but you can do it inside a single account if you use Serverless stages and add the stage to the resource name.
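
Something along these lines (a rough sketch; the service, bucket, and table names are just examples):

```yaml
# one stack per stage, with the stage baked into every resource name
service: import-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}

functions:
  importProcessor:
    handler: handler.process
    events:
      - s3:
          bucket: import-${self:provider.stage}   # import-dev, import-qa, import-prod
          event: s3:ObjectCreated:*

resources:
  Resources:
    ImportTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: import-${self:provider.stage}  # separate table per stage
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```

`sls deploy --stage dev` and `sls deploy --stage prod` then create completely separate stacks, functions, buckets, and tables, so a deploy to dev can’t touch anything production uses.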

Every time I’ve seen someone using aliases for “dev”, “staging”, and “production”, it turns out “dev” is using the same resources (e.g. DynamoDB tables, SNS topics, SQS queues, S3 buckets, etc.) as “production”. If you’re doing that, you might as well just develop on production, because it’s only a matter of time until someone accidentally destroys production when deploying to “dev”.

Thanks for the reply!

Am I correct in understanding that, most of the time, if stages are being used as you describe, the result would be that in API Gateway you would have multiple APIs, e.g. My Service DEV, My Service QA, etc.? And within each of those, there would be a single AWS API Gateway stage (DEV and QA, respectively)?

Each stage would have its own API Gateway, and each API Gateway would have a single stage with the name of the stage you’re deploying to.