Internal Routed Function vs Function Per Endpoint

Hello,

I’m currently playing around with an entirely serverless stack using Serverless Aurora (MySQL) with Golang Lambdas.

This stack has a fairly considerable cold start due to a couple of factors: Serverless Aurora is super slow to cold start in any event (~8s), and it also must exist in a VPC, which of course adds further latency to this start time.

I’ve been trying to figure out ways to prevent cold starting as much as possible. One way I’m attempting to do so is to convert my application to use a single binary per model type and use internal routing to determine which function handler is executed, rather than having a binary per endpoint.

For example, in the serverless.yml I would have something like this:

get_users:
  handler: bin/users # note the binary is the same
  events:
    - http:
        path: users
        method: get

get_user:
  handler: bin/users # note the binary is the same
  events:
    - http:
        path: users/{id}
        method: get

Then the function would check the request method: if it is a GET request and there is an id path parameter, the request would be handled as get_user; otherwise, if the id is not present, it would be treated as a get_users request.
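
Concretely, I’m imagining the entry point looking something like this in Go (a rough sketch using aws-lambda-go; getUser/getUsers stand in for my real handlers):

package main

import (
	"context"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// route is the single entry point: it inspects the incoming API Gateway
// request and dispatches to the right internal handler.
func route(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	if req.HTTPMethod == http.MethodGet {
		if id, ok := req.PathParameters["id"]; ok {
			return getUser(ctx, id) // GET /users/{id}
		}
		return getUsers(ctx) // GET /users
	}
	return events.APIGatewayProxyResponse{StatusCode: http.StatusMethodNotAllowed}, nil
}

// getUser and getUsers are hypothetical stand-ins for the real handlers.
func getUser(ctx context.Context, id string) (events.APIGatewayProxyResponse, error) {
	return events.APIGatewayProxyResponse{StatusCode: http.StatusOK, Body: `{"id": "` + id + `"}`}, nil
}

func getUsers(ctx context.Context) (events.APIGatewayProxyResponse, error) {
	return events.APIGatewayProxyResponse{StatusCode: http.StatusOK, Body: `[]`}, nil
}

func main() {
	lambda.Start(route)
}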

Does this make sense to do? Am I correct in assuming that if the same binary is used for n endpoints, making a request to any one of them effectively warms all of the others, since the same binary is used to satisfy any of the endpoints?

Thanks for any help on this.

Hi, I don’t work with Go, but it seems to me that you need a single entry point instead of just a single binary. What I mean is that you need to have a path: * and an internal router. You can also take a look at plugins like this: https://github.com/FidelLimited/serverless-plugin-warmup. Good luck.
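
In serverless.yml terms, I’d expect that to be a single function behind a catch-all route, roughly like this (an untested sketch; the function name is arbitrary):

users:
  handler: bin/users
  events:
    - http:
        path: /{proxy+} # greedy path so every request hits this one function
        method: any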

Hi Moro,

Awesome! Yeah, we’re both talking about the same thing: having a single binary would, as you say, be a single entry point that then uses internal routing to determine the handler.

I ran some load testing against a couple of test scenarios, and having the single entry point most definitely yielded the result I alluded to in my original post.

You can also take a look at plugins like this: https://github.com/juanjoDiaz/serverless-plugin-warmup. Good luck.

I’ll take a look, thanks for the help, I appreciate it!

If you’re using Aurora Serverless, have you thought about setting the minimum capacity to 1 so that it keeps the cluster running, and using the Data API so your Lambda can run outside the VPC?


Nope and nope, but this sounds very interesting!

If you’re using Aurora Serverless, have you thought about setting the minimum capacity to 1 so that it keeps the cluster running…

Hm, so I’m using CloudFormation to build the infrastructure; I assume you’re referring to ScalingConfiguration?

{
  "AutoPause" : Boolean,
  "MaxCapacity" : Integer,
  "MinCapacity" : Integer,
  "SecondsUntilAutoPause" : Integer
}
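
If so, I’m guessing something like this would keep the cluster from pausing (MaxCapacity picked arbitrarily; SecondsUntilAutoPause left out since it only applies when auto-pause is enabled):

{
  "AutoPause" : false,
  "MaxCapacity" : 4,
  "MinCapacity" : 1
}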

How does this impact the running costs? Would this not consume the same resources as a non-serverless RDS instance?

…and using the data API so your Lambda can run outside the VPC

Wow, this looks really useful, I’ll have a read through the documentation. So tl;dr: it basically creates an HTTP service for the RDS instance?
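
From a quick skim of the docs, it looks like queries go through the AWS SDK over HTTPS instead of a MySQL driver, so I’m imagining something like this (a rough sketch; the ARNs and database name are placeholders):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rdsdataservice"
)

func main() {
	sess := session.Must(session.NewSession())
	client := rdsdataservice.New(sess)

	// Statements are sent over HTTPS, so no VPC attachment or
	// connection pool is needed in the Lambda.
	out, err := client.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
		ResourceArn: aws.String("arn:aws:rds:us-east-1:123456789012:cluster:my-cluster"),      // placeholder
		SecretArn:   aws.String("arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db"), // placeholder
		Database:    aws.String("mydb"),
		Sql:         aws.String("SELECT id, name FROM users"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Records)
}

Auth going through the Secrets Manager secret rather than a connection string looks like a nice side effect too.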

Thanks for this, it’s super helpful!

Financially it would be like running an instance all the time. It would still scale up (and back down to 1), but there would always be something running, which should fix the cold start time.

I can’t help with the CloudFormation side because I don’t use RDS anymore.