Storing database credentials securely



Hi folks,

I'm new here and have just recently picked up AWS Lambda and serverless. I have a question about storing database credentials securely in AWS Lambda (I have an infra key as well). I looked into a few resources, which suggested AWS KMS. I'd like to know the most common way to store this kind of information. I'm currently working in Python.

Thanks !


I too am finding it difficult to find a clear answer to this question. It seems that Amazon suggests using KMS and environment variables. Is this possible to set up with Serverless? If someone could post some simple instructions, or better yet a detailed blog post or video about this, I'm sure it would be quite popular and helpful.
Thanks for your help in advance!


Currently I think the easiest way is environment variables: you can set different values depending on environment/stage, and you can see relatively clearly in your serverless.yml where they're coming from.

Via Serverless Variables you can even store your secrets in environment variables on your deployment machine and have them loaded at deploy time. This means you don’t even need to write them to your serverless.yml.
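For example, a minimal sketch of that in serverless.yml (the variable name here is illustrative):

```yaml
provider:
  environment:
    # Resolved from the deployment machine's environment at deploy time,
    # so the actual value never has to appear in serverless.yml
    DB_PASSWORD: ${env:DB_PASSWORD}
```

You'd then export DB_PASSWORD in the shell that runs `serverless deploy`.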

There is an open feature request to allow you to select a KMS key, so that the environment variables are encrypted at deployment (i.e. you can’t see them via the console).


@jimjimovich This comes up so often I wrote a detailed post explaining exactly how to do it with environment variables.


Thanks! After reading your post, I was able to figure out the environment variables. I also figured out how to use the awscli to encrypt my passwords with KMS and then include the encrypted ciphertext as the environment variable and decrypt it with KMS at run time.
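For anyone else doing this in Python, here is a minimal sketch of that pattern (the env variable name and the helper are illustrative, not from the setup above): the ciphertext produced by `aws kms encrypt` sits base64-encoded in an environment variable and gets decrypted at runtime.

```python
import base64
import os

def decrypt_env_secret(name, kms_decrypt, environ=os.environ):
    """Base64-decode the ciphertext stored in env var `name`, then decrypt it.

    `kms_decrypt` is a callable shaped like boto3's kms.decrypt: it takes
    CiphertextBlob=<bytes> and returns a dict with a 'Plaintext' bytes value.
    In a real Lambda you would pass boto3.client('kms').decrypt here.
    """
    ciphertext = base64.b64decode(environ[name])
    response = kms_decrypt(CiphertextBlob=ciphertext)
    return response['Plaintext'].decode('utf-8')
```

In the handler you'd call it as `decrypt_env_secret('DB_PASSWORD_ENCRYPTED', boto3.client('kms').decrypt)`, caching the result outside the handler so warm invocations skip the KMS call.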


So here is my initial investigation.

The serverless example: this is still a security problem because it passes the unencrypted ENV variable through a CloudFormation template, so you can see the unencrypted value in the console. I'm not even sure there is a way with CloudFormation to "Add encryption helpers and use this key", so GitHub issue 2996 might not even be possible.

The plugin: gives you runtime decryption, but not via encrypted environment variables. You create a file (.serverless-secret.json) via the plugin hooks, and the ciphertext is decrypted at runtime by an injected module, slscrypt. This is the most secure and complete way of doing it, but unfortunately it doesn't use ENV variables and will cost you a call to the KMS API.

What would be awesome is if you could add ENV variables encrypted with the aws/lambda key to your CloudFormation template and have them automatically decrypted at runtime. This doesn't seem possible because you can't use the aws/lambda key to encrypt anything yourself.

JimJimovich might be able to give a bit more information on exactly how he went about using awscli to encrypt env vars.


Hey Jim could you enlighten us on how you did the KMS part? I already have my serverless functions deployed.


Hey Jim, this has been asked before but could you please elaborate on the part where you encrypt and decrypt it at runtime? I’m struggling a lot with this here :pensive:


I am using KMS, which is an AWS service. I have a file on my local system called secrets.json which contains the raw secrets and is not committed to GitHub. There is a corresponding file called secrets.encrypted which is committed to GitHub and simply contains the encrypted contents of secrets.json.

I then created some gulp scripts which I can run using my aws credentials like this:

$ AWS_PROFILE=staging npm run encrypt

This then encrypts the contents of secrets.json and puts it into a file called secrets.encrypted

I have the inverse as well:

$ AWS_PROFILE=staging npm run decrypt

Which decrypts the contents of secrets.encrypted and puts the decrypted contents into secrets.json.

I also made a gulp script which I run before a publish that diffs the decrypted contents of both files and makes sure they are the same, since secrets.json isn't committed to git and the two files can become unsynchronized or overwritten by a merge.

When publishing I publish the encrypted secrets.encrypted file. During runtime in my lambda it finds secrets.encrypted and then uses the KMS api to decrypt its contents:

import fs from 'fs'
import path from 'path'
import AWS from 'aws-sdk'

export default function getSecrets (callback) {
  let kms = new AWS.KMS()
  fs.readFile(path.join(__dirname, 'secrets.encrypted'), 'utf8', (err, encrypted) => {
    if (err) return callback(err)
    kms.decrypt({ CiphertextBlob: new Buffer(encrypted, 'base64') }, (err, data) => {
      if (err) return callback(err)
      try {
        let decrypted = data.Plaintext.toString('utf8')
        let secrets = JSON.parse(decrypted)
        callback(null, secrets)
      } catch (ex) {
        callback(ex)
      }
    })
  })
}
When decrypting you don't have to specify a key: the KMS API uses the current user's access rights to find the right key and attempt decryption. In my case I have a single key in my account that I use; I grant access rights to the users or roles that need it, and it has a well-known name. You then use that key to do the encryption.

KMS keys can be found under the IAM > Encryption Keys section in the AWS console.

From there just create a key and give it access, here is an example access policy which would let any user or role in your account use the key to encrypt / decrypt files:

{
  "Version": "2012-10-17",
  "Id": "example-key",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<ACCOUNT_ID>:root"
        ]
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow use of the key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "*"
    }
  ]
}

When you create a key this way it gets a Key ID and an ARN; you need to use that ARN for encryption (not decryption).
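As a rough sketch of that asymmetry in Python (the `kms` client is passed in, and in real use would be `boto3.client('kms')`; the ARN is a placeholder):

```python
KEY_ARN = 'arn:aws:kms:us-east-1:<ACCOUNT_ID>:key/<KEY_ID>'  # placeholder ARN

def encrypt_secret(kms, plaintext):
    # Encrypting requires naming the key by ID or ARN...
    resp = kms.encrypt(KeyId=KEY_ARN, Plaintext=plaintext.encode('utf-8'))
    return resp['CiphertextBlob']

def decrypt_secret(kms, ciphertext_blob):
    # ...but decrypting does not: the ciphertext carries enough metadata
    # for KMS to locate the key, provided the caller has access to it.
    resp = kms.decrypt(CiphertextBlob=ciphertext_blob)
    return resp['Plaintext'].decode('utf-8')
```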

Here is my entire encrypt / decrypt set of commands for Gulp 4:

// secrets.js
import fs from 'fs'
import gulp from 'gulp'
import { KMS } from 'aws-sdk'
import { account, force } from '../helpers/env'
import diff from 'variable-diff'

const arns = {
  prod: 'arn:aws:kms:us-east-1:abc123:key/c6f433fa-ec75-4214-866e-fbc225df5295',
  stag: 'arn:aws:kms:us-east-1:xyz987:key/abdf54ba-5f62-40f1-8e64-c82f5dfec4a4'
}

export function checkSecrets (callback) {
  getDecryptedContent((err, secrets) => {
    if (err) return callback(err)
    if (!secrets.decrypted) return callback() // no locally decrypted secrets, fine
    let result = diff(secrets.decrypted, secrets.encrypted)
    if (!result.changed) return callback() // Secrets are sync'd, no problem
    console.log('You have un-synchronized secrets in your decrypted secrets file.')
    console.log('Please manually merge and then run `npm run encrypt`')
    callback(new Error('Unsynchronized secrets'))
  })
}

function getDecryptedContent (callback) {
  let encryptedPath = `apps/secrets.${account}.encrypted`
  let decryptedPath = `apps/secrets.${account}.json`
  fs.readFile(encryptedPath, 'utf8', (err, encrypted) => {
    if (err && err.code !== 'ENOENT') return callback(err)
    if (err) encrypted = ''
    let kms = new KMS({ region: 'us-east-1' })
    kms.decrypt({ CiphertextBlob: new Buffer(encrypted, 'base64') }, (err, data) => {
      if (err && (err.code !== 'ValidationException' || encrypted)) return callback(err)
      let encryptedPlaintext = err ? 'null' : data.Plaintext.toString('utf8')
      fs.readFile(decryptedPath, 'utf8', (err, decryptedPlaintext) => {
        let encryptedObj = JSON.parse(encryptedPlaintext)
        let decryptedObj = err
          ? null
          : JSON.parse(decryptedPlaintext)
        callback(null, {
          encrypted: encryptedObj,
          decrypted: decryptedObj,
          encryptedPath,
          decryptedPath
        })
      })
    })
  })
}

function decrypt (callback) {
  getDecryptedContent((err, secrets) => {
    if (err) return callback(err)
    let result = diff(secrets.decrypted, secrets.encrypted)
    if (secrets.decrypted && result.changed && !force) {
      console.log('Encrypted secrets differ from unencrypted secrets, you must manually merge them or --force:')
      console.log(result.text)
      callback(new Error('Unsynchronized secrets'))
    } else {
      console.log('Writing unencrypted secrets file...')
      if (result.changed) console.log(result.text)
      // Write the decrypted contents of the encrypted file
      fs.writeFile(secrets.decryptedPath, JSON.stringify(secrets.encrypted, null, 2), callback)
    }
  })
}

export function encrypt (callback) {
  getDecryptedContent((err, secrets) => {
    if (err) return callback(err)
    if (!secrets.decrypted) return callback(new Error('No decrypted secrets to write.'))
    let result = diff(secrets.encrypted, secrets.decrypted)
    if (result.changed && !force) {
      console.log('Encrypted secrets differ from unencrypted secrets, you must manually merge them or --force:')
      console.log(result.text)
      callback(new Error('Unsynchronized secrets'))
    } else {
      console.log('Writing encrypted secrets file...')
      if (result.changed) console.log(result.text)
      let kms = new KMS({ region: 'us-east-1' })
      let params = {
        KeyId: arns[account],
        Plaintext: new Buffer(JSON.stringify(secrets.decrypted), 'utf8')
      }
      kms.encrypt(params, (err, data) => {
        if (err) return callback(err)
        let ciphertext = data.CiphertextBlob.toString('base64')
        fs.writeFile(secrets.encryptedPath, ciphertext, callback)
      })
    }
  })
}

gulp.task('encrypt', encrypt)
gulp.task('decrypt', decrypt)


Hey, thank you so much for the thorough explanation. Has definitely helped me a lot, it works now and the usage of gulp is very nice too! Appreciated! :v:


By the way, it may be easier to store secrets in the new AWS Secrets Manager. I tested it out today with Go in a lambda function and it works quite well.


@jimjimovich Seconding AWS Secrets Manager. I've used it quite a bit for DB credentials as well as API keys, and it works really well in Python. I've even combined it with environment variables, as @buggy suggests, to use different secrets for the different stages.

If anyone is looking for a quick example:

import boto3
import json
import psycopg2

# Setup our secrets manager
secrets_manager = boto3.client('secretsmanager')
rds_credentials = json.loads(
    secrets_manager.get_secret_value(SecretId='<SECRET NAME>')['SecretString']
)
username = rds_credentials['username']
password = rds_credentials['password']

# Setup our Postgres connection (created once, reused across warm invocations)
connection_parameters = {
    'host': 'localhost',
    'database': 'postgres',
    'user': username,
    'password': password
}
conn = psycopg2.connect(**connection_parameters)
conn.autocommit = True

def handler(event, context):
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT * FROM ...")
            rows = cursor.fetchall()
        return rows
    except psycopg2.Error as e:
        raise


provider:
  name: aws
  runtime: python3.7
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "secretsmanager:GetSecretValue"
      Resource: "<SECRET MANAGER ARN>"

I also wrote a more detailed post on setting up the secrets manager here if anyone is still having issues:


Hey guys, would you be concerned about the extra resources required to retrieve the secrets from KMS? Essentially, every time a Lambda is executed it has to decrypt the secret, adding CPU / memory / execution time. Or does this only happen while the function is 'not warm'? Ideas?
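Not a full answer, but the common mitigation is to fetch and decrypt at module scope (outside the handler), so the KMS / Secrets Manager call only happens on a cold start and warm invocations reuse the cached value. A rough sketch with a TTL, where `fetch` stands in for whatever actually calls KMS or Secrets Manager (the helper is illustrative, not a library API):

```python
import time

_cache = {}

def get_secret(name, fetch, ttl=300, now=time.time):
    """Return a cached secret, re-fetching only after `ttl` seconds.

    `fetch` is whatever actually calls KMS / Secrets Manager; it only
    runs on a cold start or after the cache entry expires.
    """
    entry = _cache.get(name)
    if entry and now() - entry[1] < ttl:
        return entry[0]
    value = fetch(name)
    _cache[name] = (value, now())
    return value
```

So the decryption cost is paid roughly once per container, not once per invocation.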