Lambda and VPCs

I’ve read that cold starts for a Lambda inside a VPC can take 10 seconds. It’s important to keep my RDS instance inside a VPC for security, but 10 seconds is unacceptable for HTTP requests.

Is there a solution I’m missing here? Are default VPCs faster?

Correct

You can try keeping your Lambda warm by implementing a ping function and calling it regularly. This reduces the chance of a request hitting a cold start in some circumstances.
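For illustration, here’s a minimal sketch of that approach. The names and the ping payload are my own convention, not an official API: the idea is that a CloudWatch Events / EventBridge rule on a rate(5 minutes) schedule invokes the function with a constant input like {"ping": true}, and the handler returns immediately for those invocations.

```typescript
// Minimal sketch of a self-warming handler. The "ping" field is a made-up
// convention: a CloudWatch Events / EventBridge scheduled rule (e.g.
// rate(5 minutes)) invokes the function with a constant input of {"ping": true}.
export const handler = async (event: { ping?: boolean }): Promise<unknown> => {
  if (event && event.ping) {
    // Warming invocation: return right away so the container stays resident
    // without doing any real work or opening database connections.
    return { warmed: true };
  }

  // Normal request path, e.g. an API Gateway proxy request that talks to RDS.
  return { statusCode: 200, body: JSON.stringify({ message: "hello" }) };
};
```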

No.

This only helps that one instance. The minute scaling is needed beyond that, request time starts climbing again.

Bummer, I think I’ll look into dropping the VPC.

Thanks for the response.

The 10 second cold start usually only applies to the first few concurrent functions. After that, Lambda has already allocated container space that is connected to your VPC. If you have significant load, the cold start times drop dramatically as ENIs are shared between containers. Also note that VPC Lambdas stay warm for a minimum of 15 minutes, as opposed to 5 minutes for non-VPC ones. This means that when an intermittent spike scales up your Lambdas, they will most likely still be warm and ready to respond to subsequent spikes. This dramatically reduces cold starts.

If you absolutely need to keep VPC Lambdas warm, use this: https://github.com/jeremydaly/lambda-warmer

It was endorsed by Chris Munns over at AWS since it follows all of their recommendations. It can even be used to keep several concurrent instances of a function warm.
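For reference, the usage pattern is roughly this (a sketch based on the README in that repo, not something I’ve verified end-to-end; the scheduled rule that drives it passes a payload along the lines of {"warmer": true, "concurrency": 3} to keep multiple instances warm at once):

```typescript
// Sketch based on the lambda-warmer README: the module inspects the incoming
// event, fans out to the requested concurrency, and resolves true for warming
// invocations so the handler can exit before doing any real work.
// (Using require here since the package ships without TypeScript types.)
const warmer = require("lambda-warmer");

export const handler = async (event: unknown): Promise<unknown> => {
  // Warming invocation: acknowledge it and return immediately.
  if (await warmer(event)) return "warmed";

  // Otherwise handle the real request.
  return { statusCode: 200, body: JSON.stringify({ message: "hello" }) };
};
```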

@piersmacdonald

This only helps that one instance. The minute scaling is needed beyond that, request time starts climbing again.

Can you please elaborate on that?

You might be able to keep one instance warm, but if AWS needs to scale the number of instances of a Lambda then each new instance has a 10 second delay* in launching. Any requests being serviced by these new instances are going to be delayed until the new instance is running.

It means that keeping your Lambda warm as a strategy really only works if you have a Lambda with low usage and a concurrency of 1. If your traffic spikes and a lot of Lambdas need to be launched, then it doesn’t help.

Note: * AWS is continually working to improve Lambda performance inside a VPC so this may be smaller now.

Heya, so actually https://github.com/FidelLimited/serverless-plugin-warmup now has built-in concurrency warmup. You’ll need to add it to your package.json and point at the master branch, as they’re not doing a release until they’ve updated/done the tests, I believe. Seems to work rather well.

I would say people tend to get all worked up about cold starts, but if your API has relatively stable traffic and an endpoint is getting hit at least a couple of times an hour, you essentially won’t see them at all.

Sorry, poorly worded on my part.

Lambda warming only works if you keep enough Lambdas warm to deal with the load at any given time, and doing so eliminates a key advantage of serverless: scalability. If I keep 1 Lambda warm, it helps the first request after a cold start, but if there are 10 simultaneous requests, 90% of them are still experiencing slowness.