Can I access outputs from custom resources as variables in serverless.yml?

I’m creating an AWS elasticsearch domain as a custom resource and I want to be able to pass the domain endpoint into my lambda functions.

Is there a way to access a property of a generated resource as a variable in serverless.yml?

I’m guessing not, as the endpoint isn’t generated until the CloudFormation deployment is done, by which time I’d expect the value can no longer be made available to the serverless-plugin-write-env-vars plugin that I’m using to set up the env vars for the lambda functions.

I’m hoping someone has come up with a way to do this…


I just signed up to ask exactly this. I’ve rolled my own deployment tools, which use a CF custom resource to grab my lambda’s zip from S3, inject a simple JSON file containing references to other CF resources, and then replace the original zip. The lambda resource DependsOn this pre-processing step. Note that it’s possible to save these references as a separate S3 config file, but the latency is much better when the file is injected directly into the lambda zip.

I’d like to replicate this capability in serverless in order to complete the transition from my own tools to serverless. It’s not clear to me whether I can add the required DependsOn to the current function declaration, whether I need to create a plugin, or whether there is some other technique which is clearly better. I do a similar thing to inject KMS-encrypted secrets.


Yeah, this has been discussed many times in the Gitter.

I don’t think it’s possible to refer to CloudFormation Outputs in serverless.yml, for the reason @viz mentioned - there’s a circular dependency: you can’t deploy your service without a defined serverless.yml, and you don’t know your outputs until you’ve deployed…

The way to deal with this would be to create two stacks/services: one with the dependent resources (exposed via CFN Outputs), and another one that refers to them using cross-stack references.
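
Roughly, a minimal sketch of that pattern (the resource, export, and function names here are placeholders for illustration, not from an actual setup):

# Service A: owns the Elasticsearch domain and exports its endpoint
resources:
  Resources:
    SearchDomain:
      Type: AWS::Elasticsearch::Domain
  Outputs:
    SearchEndpoint:
      Value:
        Fn::GetAtt: [SearchDomain, DomainEndpoint]
      Export:
        Name: ${self:provider.stage}-search-endpoint

# Service B: imports the exported value at deploy time
functions:
  hello:
    handler: handler.hello
    environment:
      ES_ENDPOINT:
        Fn::ImportValue: ${self:provider.stage}-search-endpoint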

As @rowanu mentioned, this is not possible yet, but it’s a super important feature for us. To really make this work, though, we need some way of passing those variables to your lambda functions inside your CloudFormation template. Basically AWS needs to implement this feature so we can use it to expose that data to you (we can’t really do anything about it ourselves, because we don’t know the data beforehand and would always have to make this a two-step process).

Hopefully this is something we can provide soon, we’re definitely telling AWS that this is necessary.

@flomotlik I have a solution for this now. Basically I have a custom CloudFormation resource which grabs the zip file and injects a file containing the template references I need and then replaces the original zip in S3. The lambda function DependsOn this pre-processing step.

The code for my solution is below. In this case, I’m deploying “LambdaFunction” and I want to inject a reference to “ABucket” into that function. I’ve created a custom resource “LambdaSettings” to specify the settings I want to inject. The other lambda, “SettingsFunction”, simply grabs the zip from S3, injects the settings and replaces the original zip. The only prior knowledge required is the zip file location. Obviously I’ve omitted the permission and role resources, etc. I inject a JSON file below, but a settings.py would be better.

"SettingsFunction": {
    "Type" : "AWS::Lambda::Function",
    "Properties" : {
        "Code": {
            "ZipFile": {"Fn::Join": ["\n", [
                "import boto3",
                "import cfnresponse",
                "import io",
                "import json",
                "import zipfile",

                "def handler(event, context):",
                " bucket = event['ResourceProperties']['Bucket']",
                " key = event['ResourceProperties']['Key']",

                " client = boto3.client('s3')",

                " if event['RequestType'] == 'Delete':",
                "  return cfnresponse.send(event, context, 'SUCCESS', {})",

                " buff = io.BytesIO()",
                " client.download_fileobj(bucket, key, buff)",

                " with zipfile.ZipFile(buff, 'a') as zip_file:",
                "  info = zipfile.ZipInfo('settings.json')",
                "  info.external_attr = 0777 << 16L",
                "  zip_file.writestr(info, json.dumps(event['ResourceProperties']['Settings']))",
                "  zip_file.close()",

                " buff.seek(0)",
                " client.upload_fileobj(buff, bucket, key)",

                " return cfnresponse.send(event, context, 'SUCCESS', {})"
            ]]}
        },
        "Handler": "index.handler",
        "Runtime": "python2.7",
        "Timeout": "300"
    },
},

"ABucket": {
    "Type": "AWS::S3::Bucket",
},

"LambdaSettings" : {
    "Type": "Custom::Settings",
    "Properties": {
        "ServiceToken" : {"Fn::Join": [":", [
            "arn:aws:lambda",
            {"Ref": "AWS::Region"},
            {"Ref": "AWS::AccountId"},
            "function",
            {"Ref": "SettingsFunction"}
        ]]},
        "Bucket": deploy_bucket,
        "Key": deploy_key,
        "Settings": {
            "ABucket": {"Ref": "ABucket"}
        }
    }
},

"LambdaFunction": {
    "Type" : "AWS::Lambda::Function",
    "DependsOn": {"Ref": "LambdaSettings"},
    "Propertes": {
        "Code": {
            'S3Bucket': deploy_bucket,
            'S3Key': deploy_key
        }
    }
}

I would like to extend serverless to do exactly this so I can ditch my custom deployment tools and switch to serverless. It’s not clear to me whether I need to extend serverless itself or whether I can implement this feature via a plugin.

@andrew.beck awesome stuff. It’s certainly possible to implement this as a plugin (basically everything in Serverless is a plugin; all of our AWS deployment integrations, for example, are no different from any plugin you could write).

But we can’t merge this into master, because we’re working on a solution with AWS that should make this much easier. We currently have to wait until AWS is ready to release something in this direction, though. This is why we’d love to see this as an option, but we can’t include it in Serverless directly.

If that’s fine with you, please go ahead and let us know when the plugin is ready; we’d be happy to include it in our README list until we have something else in place that makes this easier.

For anyone looking for a way to access their queues: you can use the AWS SDK in your function body to retrieve queues by QueueNamePrefix. Given that you can define the queue name in the serverless.yml resources section, this makes for a fairly straightforward workaround for this issue.

 var AWS = require('aws-sdk');
 var sqs = new AWS.SQS();

 var params = {
  QueueNamePrefix: "slss-ms-orchestrator"
 };
 sqs.listQueues(params, function(err, data) {
   if (err) console.log(err, err.stack); // an error occurred
   else     console.log(data);           // successful response
   /*
   data = {
    QueueUrls: [
       "https://queue.amazonaws.com/80398EXAMPLE/SLSSMSOrchestratorQueue"
    ]
   }
   */
 });

resources:
  Resources:
    debounceStateVerifierQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: slss-ms-orchestrator-debounce-queue
        Tags:
          -
            Key: environment
            Value: multi-staging

In the meantime I hope AWS and the Serverless team continue to work together.


Guys, has Amazon released a native solution, or should I use @andrew.beck’s one? @flomotlik maybe you know?

No, I don’t think so. I’ve also spent a few hours digging into the question, and this thread is one of the most helpful, together with this GitHub discussion on a similar topic.

The solution of @andrew.beck seems a bit hard to maintain; I’d personally go for splitting the service into two and try using cross-stack references, as @rowanu mentioned.

The original question of @viz is about Elasticsearch, just like my case. Deploying even the smallest instance of this self-managed service takes at least 10 minutes, which is already a good enough reason to split the CloudFormation resources into one service and the functionality into another.

Update after success; hopefully this will be useful for someone else or my future self.

Service 1, the elasticsearch one, creating a domain:

service: service1

custom:
  ES_DOMAIN: ${self:provider.stage}-projects

provider:
  name: aws
  runtime: nodejs6.10
  stage: ${opt:stage, file(config.json):stage, 'dev'}
  region: ${opt:region, file(config.json):region, 'eu-central-1'}

resources:
  Resources:
    ProjectsElasticSearchDomain:
      Type: AWS::Elasticsearch::Domain
      Properties:
        DomainName: ${self:custom.ES_DOMAIN}
        ElasticsearchVersion: 5.5
        EBSOptions:
          EBSEnabled: true
          VolumeType: gp2
          VolumeSize: 10
        ElasticsearchClusterConfig:
          InstanceType: t2.small.elasticsearch
          InstanceCount: 1
          DedicatedMasterEnabled: false
          ZoneAwarenessEnabled: false
        AccessPolicies:
          Version: '2012-10-17'
          Statement:
          # Public can query for information
          - Effect: Allow
            Principal: "*"
            Action:
             - "es:ESHttpHead"
             - "es:ESHttpGet"
            Resource: "arn:aws:es:${self:provider.region}:*:domain/${self:custom.ES_DOMAIN}/*"
          # Lambda can take actions on the ES domain
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
          # Admin access to Kibana from an IP
          # https://goo.gl/eiGgpD
          - Effect: Allow
            Principal: "*"
            Action: es:*
            Condition:
              IpAddress:
                aws:SourceIp:
                - THE_IP
            Resource: "arn:aws:es:${self:provider.region}:*:domain/${self:custom.ES_DOMAIN}/*"

  Outputs:
    ServiceEndpoint:
      Description: The API endpoint of the projects' elasticsearch domain.
      Value:
        Fn::GetAtt: ["ProjectsElasticSearchDomain", "DomainEndpoint"]
      Export:
        Name: "${self:provider.stage}:${self:service}:ServiceEndpoint"

The Outputs section is important, as neither the serverless framework nor serverless-stack-output will give you the endpoint after a successful deployment without it, at least at the moment.

Then, in Service 2, where the endpoint is to be fed in automatically:

service: service2

custom:
  index: projects

provider:
  name: aws
  runtime: nodejs6.10
  stage: ${opt:stage, file(config.json):stage, 'dev'}
  region: ${opt:region, file(config.json):region, 'eu-central-1'}
  iamRoleStatements:
    # https://goo.gl/U21zxP
    - Effect: "Allow"
      Action: "es:*"
      Resource: "arn:aws:es:${self:provider.region}:*:domain/*"

functions:
  onObjectCreated:
    handler: src/events/onObjectCreated.handler
    name: ${self:provider.stage}-${self:service}-onObjectCreated
    memorySize: 256
    environment:
      API:
        Fn::ImportValue: ${self:provider.stage}:service1:ServiceEndpoint
      INDEX: ${self:custom.index}
    events:
      - SOME_EVENT: http, sns, etc.

In the function code, then, it’s safe to pull the necessary API information:

const { API, INDEX } = process.env;

@kalinchernev how do you orchestrate this so that service 1 is deployed prior to service 2 on a sls deploy? Or do you use Terraform, etc.?

Personally, I need to create a custom CloudFormation resource backed by a Lambda, where the Lambda code has to be pulled from an S3 bucket, as per the AWS docs. Of course I don’t want to manually create an S3 bucket and upload the Lambda code; this has to be automated. I’m looking into creating a plugin to do this, but I have no idea how to have my serverless.yml reference the S3 bucket created by the plugin.

For the moment, a shell script:

#!/bin/sh

# Exit the script on any command with non 0 return code
set -ex

# Go to project root
cd "$(dirname "$0")"
cd ..

# Deploy first service
cd ./services/one
./node_modules/.bin/serverless deploy -v

# Deploy second service
cd ./services/two
./node_modules/.bin/serverless deploy -v

...

It might be that there are better ways I don’t know of, but that’s the current, very basic orchestration: serverless-stack-output is configured in each service so that services can rely on the endpoint provided by the previous one.
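
For reference, a minimal sketch of how serverless-stack-output can be wired into a service (the handler and file paths here are placeholders, not my actual setup):

plugins:
  - serverless-stack-output

custom:
  output:
    # Optional: a function invoked with the stack outputs after deployment
    handler: scripts/output.handler
    # Optional: file where the stack outputs are written (json, yaml or toml)
    file: .serverless/stack-output.json

The deploy script above can then read that file, or the next service can simply use Fn::ImportValue directly, as shown earlier, to pick up the endpoint.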