Hello
I am seeing an unusual issue with a Lambda function deployed using the Serverless Framework. Even though the function uses under 100 MB of memory during execution and has minimal dependencies, I am still experiencing cold start delays of 4–6 seconds, especially in the us-east-1 region.
This happens even when the function is set to a higher memory allocation (512 MB) and when using the latest Node.js runtime. I was under the impression that low memory usage and fewer dependencies should result in faster cold starts.
I have already tried the common suggestions, such as bundling with esbuild, trimming dependencies, and using the package.individually option in my serverless.yml, but the cold start time doesn't improve. When I deploy the same function in another AWS region, the cold start is almost instant. Could there be a region-specific issue, or a hidden configuration I'm overlooking in the Serverless Framework setup?
I have also checked the "Optimizing cold start performance of AWS Lambda using advanced priming strategies with SnapStart" guide on the AWS Compute Blog for reference.
Has anyone else experienced similar inconsistent cold start times, and are there specific flags, plugins, or deployment practices that helped resolve it? I would appreciate any insights or even just confirmation that I am not alone in this weird behavior.
Thank you!