API Gateway and downtime with updates

Hi – I have a fairly straightforward deployment: a Lambda function fronted by a simple API Gateway endpoint, using the recommended 'lambda-proxy' integration. The configuration is:

description: Full API
handler: com.my.package.Application::handler
memorySize: 1024
timeout: 15
events:
  - http:
      path: '{proxy+}'
      method: 'any'
      integration: 'lambda-proxy'

Each time I deploy an update to the Lambda, I experience anywhere from 10-25 seconds of downtime, during which API Gateway returns a 503. I believe this is because Serverless creates a new AWS::ApiGateway::Deployment resource (with a timestamped logical ID) on each update.
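If I understand correctly, the relevant fragment of the generated CloudFormation template looks roughly like this (the logical ID suffix here is illustrative; the real value is a deploy-time timestamp):

```yaml
# Illustrative CloudFormation fragment -- the timestamp suffix is made up.
Resources:
  ApiGatewayDeployment1589472000000:   # logical ID changes on every deploy
    Type: AWS::ApiGateway::Deployment
    Properties:
      RestApiId:
        Ref: ApiGatewayRestApi
      StageName: dev
```

Because the logical ID changes every time, CloudFormation treats it as a brand-new resource: it creates the new Deployment and deletes the old one, which would explain the window where the stage has no valid deployment.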

Am I correct in this assessment? If so, is there a reason for creating a timestamped Deployment and deleting the old one, rather than creating it statically?


While I haven’t looked into this part of the deployment recently, my impression is that the API Gateway Deployment resource’s logical ID is timestamped to ensure it is unique, so that you can have multiple HTTP events on a function without a name collision.

I’ll keep an eye on it next time I do a deployment and see if I can add more detail. I get the feeling this was done deliberately, but I can definitely see why you wouldn’t want this behaviour - maybe the ID should be generated in a more consistent manner, so that it is unique without changing on every deploy…
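To illustrate what I mean by a more consistent name: deriving the logical ID from a hash of the API definition would keep it stable across deploys unless the definition actually changes. A rough Python sketch (the function name and ID format are mine, not how Serverless actually does it):

```python
import hashlib
import json

def deployment_logical_id(api_definition: dict) -> str:
    """Derive a CloudFormation logical ID from a content hash of the API
    definition, so it stays stable across deploys unless the definition
    itself changes. Purely illustrative -- not Serverless's actual scheme."""
    digest = hashlib.sha256(
        json.dumps(api_definition, sort_keys=True).encode("utf-8")
    ).hexdigest()[:10]
    return f"ApiGatewayDeployment{digest}"

# Redeploying an unchanged definition yields the same logical ID, so
# CloudFormation would leave the existing Deployment resource in place.
stable_id = deployment_logical_id({"path": "{proxy+}", "method": "any"})
```

The trade-off is the opposite problem: if the ID never changes, CloudFormation won’t create a new Deployment when the stage actually needs one, so the hash would have to cover everything that should trigger a redeploy.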

After observing it through the day, I’m no longer sure that what I’m seeing is downtime; it might actually be a performance problem in my Lambda’s startup process. I have a few further tests I will be running tomorrow, and will add my findings here.
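One of the tests I have in mind: polling the endpoint during a deploy and labelling each response, to separate genuine 503s (API Gateway downtime) from slow-but-successful responses (Lambda cold starts). A rough Python sketch - the URL and the 5-second threshold are placeholders, not values from my setup:

```python
import time
import urllib.error
import urllib.request

def classify(status: int, elapsed: float, slow_threshold: float = 5.0) -> str:
    """Label one probe: a 5xx is real downtime; a slow success points at
    Lambda cold-start latency rather than API Gateway being unavailable."""
    if status >= 500:
        return "downtime"
    if elapsed >= slow_threshold:
        return "cold-start"
    return "ok"

def probe(url: str) -> tuple[int, float]:
    """Hit the endpoint once, returning (status code, seconds elapsed)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code  # e.g. 503 during a bad deploy window
    return status, time.monotonic() - start
```

Usage would be a loop during a deploy, e.g. calling `probe(url)` once a second for a minute and printing `classify(status, elapsed)` alongside the raw numbers.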