Core encryption key management framework for serverless

I just noticed this project, and it would make sense to add something similar to the core of serverless. It might be possible to hide all of this in a cross-platform way.

The basic idea is that a command in your yml file would generate keypairs for each stage and store the private keys in encrypted storage (such as SSM). The twist here is that the keypairs are generated at the cloud provider and the private key is placed into SSM without any human ever being able to look at it.

A few reasons to want keys like this…

  1. RSA private key for an IoT CA
  2. VAPID server encryption key
  3. EC2 private key
  4. OAuth server secrets

sls would have a companion lambda function it uses to generate these keys via a custom CloudFormation resource. In the linked GitHub repo, the lambda function can generate several different key types.

Then an admin account can upload lambdas that have the ability to read these private keys but no ability to print out their values. No human ever has to see the key. The ability to upload a lambda with this role permission can be locked down with MFA, and the keys are never stored locally where they could inadvertently leak.

I’ve started cobbling together the pieces I specifically need to implement my solution, but a general purpose solution would be much more useful. By making this work with production stages, it would help build a secure practice like this in from the start of development.

I’ve started hacking this together…

functions:
  KeysResource:
    role: digidevlambda
    handler: vapidkeys.handler
  
resources:
  Resources:

    MyKeysResource:
      Type: Custom::MyKeysResource
      Properties:
        TriggerRun: "1"  # increment this to force the VAPID Key generation function to run
        Stage: ${opt:stage, self:provider.stage}
        ServiceToken:
          'Fn::GetAtt': [KeysResourceLambdaFunction, Arn]
    digidevlambda:
      Type: AWS::IAM::Role
      Properties:
        Path: /
        RoleName: digidevlambda
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: digiDefaultPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow # note that these rights are given in the default policy and are required if you want logs out of your lambda(s)
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource: arn:aws:logs:${opt:region, self:provider.region}:*:log-group:/aws/lambda/*:*:*
                - Effect: Allow
                  Action:
                    - ssm:PutParameter
                  Resource: "*"  # could be scoped down, e.g. to arn:aws:ssm:*:*:parameter/VAPID/*

  Outputs:
    VAPIDPublicKey:
      Value:
        {"Fn::GetAtt": ["MyKeysResource", "VAPIDPublicKey"]}

And the lambda routine:

  const response = require('cfn-response');
  const webpush = require('web-push');
  const AWS = require('aws-sdk');
  const ssm = new AWS.SSM();

  exports.handler = function(event, context) {
    console.log(event);

    // Generate keys on both Create and Update; anything else (i.e. Delete)
    // just acknowledges so CloudFormation doesn't hang.
    if (event.RequestType === 'Create' || event.RequestType === 'Update') {
      const keys = webpush.generateVAPIDKeys();
      const stage = event.ResourceProperties.Stage;
      // Use a separate parameter object per call: mutating one shared object
      // between the two putParameter calls races with the in-flight request.
      const put = (name, value, description) => ssm.putParameter({
        Name: name,
        Type: 'SecureString',
        Value: value,
        Description: description,
        Overwrite: true
      }).promise();
      // Wait for both writes to finish before replying to CloudFormation;
      // replying first can end the invocation while the puts are in flight.
      Promise.all([
        put('/VAPID/' + stage + '/publicKey', keys.publicKey,
            'VAPID public key for server push'),
        put('/VAPID/' + stage + '/privateKey', keys.privateKey,
            'VAPID private key for server push')
      ]).then(function() {
        response.send(event, context, response.SUCCESS, {
          VAPIDPublicKey: keys.publicKey
        });
      }).catch(function(err) {
        console.log(err, err.stack);
        // Reply even on failure, otherwise CloudFormation waits an hour.
        response.send(event, context, response.FAILED, {});
      });
    } else {
      response.send(event, context, response.SUCCESS, {
        VAPIDPublicKey: ''
      });
    }
  };

One thing to watch out for: if you make a syntax error in the lambda function, it will exit without sending a reply to CloudFormation, and CloudFormation will hang for about an hour before it decides to give up. It will look like CloudFormation is stuck, but sooner or later it will exit. So don't do what I did and initially put this into a stack that takes a couple of hours to roll back and rebuild; debug it standalone first.
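For standalone debugging, a minimal fake custom-resource event can be fed to `sls invoke local` (a sketch; the `ResponseURL` below is a dummy, so the cfn-response PUT will log an error at the end, but the key-generation and SSM paths still run):

```json
{
  "RequestType": "Update",
  "ResponseURL": "https://example.com/dummy",
  "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/test/guid",
  "RequestId": "test-request-id",
  "LogicalResourceId": "MyKeysResource",
  "ResourceProperties": { "Stage": "dev", "TriggerRun": "1" }
}
```

Save it as `event.json` and run `sls invoke local --function KeysResource --path event.json` to exercise the handler without touching the stack.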

What is the purpose of doing all of this? The private key was generated on AWS instead of a local machine. Since it was never on a local machine, there is no way to accidentally expose it, and a lot of people have been burnt very badly by accidentally exposed private keys. The only way to see the private key is to log into my root AWS account, which is MFA protected, but you should never need to do that.

Another plus: SSM keeps a history of all of these parameters, so you have to work at it to lose your old key values. That should help prevent accidental deletion.
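For example (assuming the parameter names above), the old versions are visible from the CLI for anyone with `ssm:GetParameterHistory` rights:

```shell
# List previous versions of the dev-stage private key, without printing values.
aws ssm get-parameter-history \
  --name /VAPID/dev/privateKey \
  --query 'Parameters[].[Version,LastModifiedDate]'
```

Adding `--with-decryption` would also return the historical values themselves, which is exactly the operation you would lock behind the MFA-protected admin account.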