Stage-specific parameters?

Hi there!

First post after observing from afar for a few months. I built, in Java, a poor substitute for what the Serverless Framework provides, for my own purposes, and I'm starting to play around here with the hope of reducing some of my own code burden. About half of my 30+ Lambda functions today get triggered by CloudWatch cron timers at different times of the week to scrape data from websites or call APIs; I then run some transformations on that data to build my analytics website for my users.

For my own Java framework I ran into the issue of stage-specific parameters, and I didn't see an obvious solution in the documentation here. You can clearly set what geography a deployment is targeted to with the stage and region settings in your serverless.yml file, but I was looking for something slightly different.
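
To be concrete, this is the sort of setting I mean (the values here are just placeholders):

provider:
  name: aws
  stage: dev         # deployment stage, e.g. dev or prod
  region: us-east-1  # the geography the deployment targets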

In my own framework, my functions load a config file whose location is based on the geography the function is executing in. That config file then drives the geography-specific locations of other resources (S3, SES, etc.). In the AWS Java API, the calls that help you discover what geography you are in only work on EC2, not on ECS or Lambda, so the workaround I created was to cheat by prefixing the geography code to the name of the function as it gets uploaded into Lambda. The Eclipse plug-in for AWS lets you change that on a per-deployment basis. It's not the cleanest solution to have the function check its own name as the first thing it does, but it has been functional for me.

So my question is, how might you approach something similar here with the Serverless Framework? Is there support for environment- or stage-specific variable passing I haven’t found yet?

Hey @nerdguru,

Serverless is definitely capable of this. Check out the docs on variables: https://serverless.com/framework/docs/providers/aws/guide/variables/

Here's an example where I'm setting my CORS origins per stage (the custom block it reads from is sketched after the snippet):

  myLambdaFunction:
    handler: functions/myLambdaFunction/handler.handler
    events:
      - http:
          path: "v1/agents"
          method: get
          authorizer: authorization
          integration: lambda
          cors:
            origins:
              - ${self:custom.stages.${opt:stage, self:provider.stage}.vars.URL-Access-Control-Allow-Origin}
            headers:
              - Content-Type
              - X-Amz-Date
              - Authorization
              - X-Api-Key
              - X-Amz-Security-Token
            allowCredentials: true
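
That variable reference assumes a matching custom block elsewhere in serverless.yml. Mine looks roughly like this (the stage names and URL values here are just examples):

custom:
  stages:
    dev:
      vars:
        URL-Access-Control-Allow-Origin: "http://localhost:8080"
    prod:
      vars:
        URL-Access-Control-Allow-Origin: "https://app.example.com"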

hope this helps… it’s very powerful.

If you want to reference these variables inside your actual Lambda code, you can use the serverless-plugin-write-env-vars plugin.
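
If I remember its setup right, you list the variables you want under custom (the writeEnvVars key and the variable names below are from memory and just examples, so double-check the plugin's README) and it writes them out to a dotenv-compatible .env file that gets packaged with your functions:

plugins:
  - serverless-plugin-write-env-vars

custom:
  writeEnvVars:
    STAGE: ${opt:stage, self:provider.stage}
    ALLOWED_ORIGIN: ${self:custom.stages.${opt:stage, self:provider.stage}.vars.URL-Access-Control-Allow-Origin}

In the handler you then load the .env file (e.g. with the dotenv module) and read the values off process.env.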

cheers,
Stretch

Thanks so much @str3tch for the quick turnaround and the pointers. That gives me something to play with, and if I run into issues I'll post again.

As a follow-up, here's how I mimicked what I did on my own in Java, now using the Serverless Framework and Node in my handler. I hadn't realized the stage was already part of the function name, so I split on that and use it to find the right bucket (stage + baseBucketName) and object (function name + .json), which holds config information that tells my function what to do:

// Inside the handler, so `context` is the Lambda context object;
// baseBucketName is defined elsewhere in my module
var AWS = require('aws-sdk');

// Deployed function names look like "service-stage-functionName"
var nameParts = context.functionName.split('-');
var bucketName = nameParts[1] + baseBucketName; // stage-specific bucket
var objectName = nameParts[2] + ".json";        // per-function config object
console.log("Bucket name: " + bucketName + " objectName: " + objectName);

var s3 = new AWS.S3();
var params = {
  Bucket: bucketName,
  Key: objectName
};

s3.getObject(params, function(err, data) {
  if (err) {
    console.log(err, err.stack); // an error occurred
  } else {
    // successful response: parse the stage-specific config
    var configs = JSON.parse(data.Body.toString('utf-8'));
  }
});
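
One thing I had to remember that isn't shown in the snippet: the function's role needs permission to read that bucket. In serverless.yml that can be granted with something along these lines (the bucket ARN is just a placeholder):

provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource: "arn:aws:s3:::*-mybasebucket/*"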

Now I can pass different parameters into my function by editing the config .json file, without having to redeploy.
