Lessons Learned — A Year Of Going “Fully Serverless” In Production

Sharing some of the lessons my company learned over the last year of using serverless. The discussion already has some interesting tips and questions, and I wanted to get more feedback from the community.

The post is on Medium: https://hackernoon.com/lessons-learned-a-year-of-going-fully-serverless-in-production-3d7e0d72213f

Looks great.

The only part I'd recommend improving is the section on how to manage secrets.

Maybe AWS Secrets Manager was too new when this post was being prepared, but I would recommend using it over Parameter Store.

Plus, AWS Secrets Manager has direct SDK support and built-in secrets rotation, which is a perfect match for Lambda.
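
For reference, here's a minimal sketch of reading a secret with the Go SDK's Secrets Manager client (the secret name prod/api/privateKey is made up, and the function's execution role would need secretsmanager:GetSecretValue on it):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/secretsmanager"
)

func main() {
	sess := session.Must(session.NewSession())
	sm := secretsmanager.New(sess)

	// Fetch the current version of the secret.
	out, err := sm.GetSecretValue(&secretsmanager.GetSecretValueInput{
		SecretId: aws.String("prod/api/privateKey"), // hypothetical secret name
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.SecretString))
}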

Thanks for the feedback. Indeed, Secrets Manager came out just recently and we haven't migrated to it yet. It looks good; some people say it's a bit expensive compared to other services offered by AWS.

Cold start time can be greatly reduced by initializing your services in parallel. I've switched over to Go for my functions: Go is compiled and significantly faster than Node, and it has goroutines for parallel execution. I used X-Ray to look at my cold starts and noticed that each service waits 50-200 ms for a response when it starts, so a big way to lower your cold start time is to do those waits in parallel. Example in Go below…

BTW, I also only use the minimum-sized Lambda containers, 128 MB. Again, X-Ray helped me determine what was going on. My functions spend 80-90% of their time waiting for AWS to respond. It costs 4x as much to wait inside a 512 MB instance as it does to wait in a 128 MB one. Sure, my code runs a little slower at 128 MB (a few milliseconds), but the performance difference is minor and my bills are 1/4 as much. If a remote AWS API call is going to take 80 ms to respond, it takes 80 ms no matter what size container you are in.

package main

// Excerpt: the Lambda handler and main() are omitted.

import (
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/iot"
	"github.com/aws/aws-sdk-go/service/iotdataplane"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/ssm"
)

// Package-level clients are created once at cold start and reused
// across invocations of the same warm container.
var (
	dbs        *dynamodb.DynamoDB
	iots       *iot.IoT
	iotdatas   *iotdataplane.IoTDataPlane
	s3s        *s3.S3
	ssms       *ssm.SSM
	privateKey *string
)

func initDB(sess *session.Session, c chan bool) {
	// Create DynamoDB client
	dbs = dynamodb.New(sess)
	//xray.AWS(dbs.Client)
	c <- true
}

func initIOT(sess *session.Session, c chan bool) {
	iots = iot.New(sess)
	//xray.AWS(iots.Client)
	c <- true
}

func initIOTDataplane(sess *session.Session, c chan bool) {
	iotdatas = iotdataplane.New(sess, &aws.Config{
		Endpoint: aws.String(os.Getenv("IOT_ENDPOINT")),
	})
	//xray.AWS(iotdatas.Client)
	c <- true
}

func initS3(sess *session.Session, c chan bool) {
	s3s = s3.New(sess)
	c <- true
}

func initSSM(sess *session.Session, c chan bool) {
	ssms = ssm.New(sess)
	// Fetch the decrypted parameter once at cold start.
	parameter, err := ssms.GetParameter(&ssm.GetParameterInput{
		Name:           aws.String("/VAPID/dev/privateKey"),
		WithDecryption: aws.Bool(true),
	})
	if err == nil {
		privateKey = parameter.Parameter.Value
	}
	c <- true
}

func init() {
	//xray.Configure(xray.Config{
	//	LogLevel:       "info", // default
	//	ServiceVersion: "1.2.3",
	//})

	sess := session.Must(session.NewSession())

	c := make(chan bool)
	go initDB(sess, c)           // 1
	go initS3(sess, c)           // 2
	go initIOT(sess, c)          // 3
	go initIOTDataplane(sess, c) // 4
	go initSSM(sess, c)          // 5

	// Wait for all five initializers to finish.
	for i := 0; i < 5; i++ {
		<-c
	}
}
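
To make the 4x claim above concrete, here's the rough arithmetic as a standalone snippet (the per-GB-second price is assumed to be the standard on-demand rate; check the current pricing page):

package main

import "fmt"

func main() {
	// Assumed on-demand Lambda price per GB-second.
	const pricePerGBSecond = 0.0000166667
	const waitSeconds = 0.080 // an 80 ms wait on a remote AWS API call

	cost128 := waitSeconds * (128.0 / 1024.0) * pricePerGBSecond
	cost512 := waitSeconds * (512.0 / 1024.0) * pricePerGBSecond

	fmt.Printf("128 MB: $%.10f per wait\n", cost128)
	fmt.Printf("512 MB: $%.10f per wait (%.0fx)\n", cost512, cost512/cost128)
}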

You only have to use AWS Secrets Manager if you want to control the encryption keys. By default, each AWS account comes with one AWS-generated KMS key that is free to use. So when you use Systems Manager Parameter Store, pick default encryption and it will use this free key. AFAIK this key is identical to the Secrets Manager offering, but AWS holds the private key for it instead of you.

I am not paranoid enough yet to worry about trusted AWS employees with access to the private keys getting into my stuff.
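
As a small sketch of that default-key behaviour: when you write a SecureString parameter and omit KeyId, it's encrypted with the account's free AWS-managed key. Reusing the /VAPID/dev/privateKey name from the code above (the value here is just a placeholder):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	sess := session.Must(session.NewSession())
	client := ssm.New(sess)

	// No KeyId: the SecureString is encrypted with the default
	// AWS-managed key for SSM, the free one mentioned above.
	_, err := client.PutParameter(&ssm.PutParameterInput{
		Name:      aws.String("/VAPID/dev/privateKey"),
		Type:      aws.String(ssm.ParameterTypeSecureString),
		Value:     aws.String("example-private-key"), // placeholder value
		Overwrite: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
}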

Good points! Even in Node it’s always better to run in parallel where possible (await Promise.all([...])).

I know Go starts faster, but we’re using JS heavily on all fronts.

One other way to help with the cold starts is to add a front controller: have a single function be the handler for all events and have it route to the required handlers as needed. For one, this gives you great multi-vendor abstraction, but it also means that you are not paying a cold start separately for each function type. All requests can reuse the same pool of warm Lambdas (see the sketch below).

There may be other problems with this approach; like I said I’ve only started looking into it. But it may help.
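
Here's a minimal sketch of that front-controller idea in Go (the envelope shape and the createUser/sendAlert handlers are made up; real routing would key off whatever your actual event sources send):

package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// envelope is a hypothetical wrapper assumed to carry every event;
// in practice you would inspect the raw event shape (API Gateway, SNS, etc.).
type envelope struct {
	Action  string          `json:"action"`
	Payload json.RawMessage `json:"payload"`
}

// routes maps an action name to its internal handler.
var routes = map[string]func(context.Context, json.RawMessage) (interface{}, error){
	"createUser": createUser, // hypothetical internal handlers
	"sendAlert":  sendAlert,
}

func handler(ctx context.Context, e envelope) (interface{}, error) {
	h, ok := routes[e.Action]
	if !ok {
		return nil, fmt.Errorf("unknown action %q", e.Action)
	}
	return h(ctx, e.Payload)
}

func createUser(ctx context.Context, p json.RawMessage) (interface{}, error) { return "created", nil }
func sendAlert(ctx context.Context, p json.RawMessage) (interface{}, error)  { return "sent", nil }

func main() {
	lambda.Start(handler)
}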

Secrets Manager does more than just handle “bring your own” encryption keys. It also has a HashiCorp-Vault-like capability where it can dynamically generate and rotate login info for the various AWS database services.

blog on the example:

It’s not a bad option for those who have a simple app <–> db relationship, as it builds a more secure solution than a static encrypted password in a param somewhere (a uniquely generated, short-lived account per instance spawned).

We, however, ended up using a complete HashiCorp Vault solution instead, as it has more dynamic options available than the subset AWS Secrets Manager offers (we make ops teams request temporary creds to log in to AWS at all, for example). I’m not certain about the price difference between running two Vault instances and Secrets Manager, but the extra flexibility made for an easy justification.