Don't understand Lambda concurrency limit, 1000 seems ridiculously low to me

As I understand it, at any given point in time there is a limit of 1000 concurrent Lambda executions per region.
Based on my current application (not serverless), I expect to scale to an average of 100-400 web req/sec.
In my mind, each web request will have maybe 5-10 Lambda calls through API Gateway.
At that rate I will reach the concurrency limit pretty soon, and I will not be able to scale any further.
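Rough back-of-envelope of what I mean (just a sketch; the per-call duration and the fan-out factor are assumptions for illustration):

```python
# Back-of-envelope concurrency estimate (Little's law):
# concurrent executions ~= invocation rate * average execution duration
web_req_per_sec = 400        # upper end of my expected traffic
lambda_calls_per_req = 7     # somewhere in my assumed 5-10 range
avg_duration_sec = 0.2       # assumed average duration of each Lambda call

invocations_per_sec = web_req_per_sec * lambda_calls_per_req   # 2800
concurrent_executions = invocations_per_sec * avg_duration_sec
print(concurrent_executions)  # 560.0 -> already well over half the default 1000 limit
```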

So:

  1. either I don’t understand the concurrency limit well, or
  2. my design is flawed, i.e. I should drastically reduce the number of functions needed to serve a web request (but then maybe each one will run for longer?)

I’m trying to migrate an application to serverless and I’m struggling to work out whether it’s the right choice.

Here are a couple of links I think you should read for more details and for ways to prevent throttling errors:




Hi @tinynet,
I’m not sure what you mean by this: “In my mind, each web request will have maybe 5-10 Lambda calls through API Gateway.” What are these 5-10 calls?

In a typical serverless web app, each web request would be served by exactly 1 synchronous Lambda function invocation (triggered by API Gateway).
So if you have 400 reqs/sec hitting APIGW and let’s say your average Lambda execution time is 0.1 seconds, you would on average have 40 concurrent Lambda executions under this peak load.
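To make the arithmetic explicit (Little's law: average concurrency ≈ request rate × average execution duration), here's a minimal sketch using the numbers above:

```python
def estimated_concurrency(requests_per_sec: float, avg_duration_sec: float) -> float:
    """Average number of Lambda executions in flight at once (Little's law)."""
    return requests_per_sec * avg_duration_sec

print(estimated_concurrency(400, 0.1))  # 40.0 -> one invocation per web request
# Even at a full second per invocation you'd only average ~400 concurrent
# executions, still under the default 1000 limit.
print(estimated_concurrency(400, 1.0))  # 400.0
```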

You could check out this calculator I created a while back to get an idea of how close to the concurrency limits your application would get: https://winterwindsoftware.com/lambda-scaling-calculator/

Also note that the 1000 is a soft limit that can be increased via a request to AWS Support.
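For what it's worth, the increase can also be requested through the Service Quotas API. Here's a hedged boto3 sketch; the quota name below is an assumption you should verify against list_service_quotas for your account, and large increases typically still get reviewed by AWS before they take effect:

```python
import boto3

# Sketch: find Lambda's "Concurrent executions" quota and request an increase
# via the Service Quotas API. Verify the quota name/code for your account.
client = boto3.client("service-quotas", region_name="us-east-1")

paginator = client.get_paginator("list_service_quotas")
quotas = [q for page in paginator.paginate(ServiceCode="lambda") for q in page["Quotas"]]
concurrency = next(q for q in quotas if q["QuotaName"] == "Concurrent executions")
print(concurrency["QuotaCode"], concurrency["Value"])  # current applied limit

# Submit the increase request; AWS reviews it before applying the new value.
client.request_service_quota_increase(
    ServiceCode="lambda",
    QuotaCode=concurrency["QuotaCode"],
    DesiredValue=5000.0,
)
```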

I think we’re at 10’000 and wanted to have this increased further but AWS support is acting up a bit when it comes to that.