Hi,
I have a number of live deployments on AWS Lambda (Node.js 8.10) using Serverless, and recently I upgraded to v1.57.0. Unfortunately I can't tell exactly which version I upgraded from, because I didn't keep a note of it, but I think it was probably 1.3x-ish.
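For future reference I'm going to pin the framework version in serverless.yml so I always know what each stack was deployed with. A minimal sketch, assuming a fixed pin is what I want (a semver range works too; the exact value is just an example for my current setup):

```yaml
# serverless.yml (top level): deploys fail fast if the installed CLI
# doesn't match, so an accidental upgrade can't silently change a stack.
frameworkVersion: '1.57.0'
```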
After making some routine changes to the Node.js code and redeploying, the Lambda functions now run EXTREMELY slowly compared to the same functions deployed with the previous version.
A comparison from one particular endpoint:
— Current production version (deployed with the old SLS version) —

Lambda Function: hsbbconsumer-prod-covcheckCheckProducts
Duration: 615.52 ms
Log Output:

```
START RequestId: 12226d03-9ed1-43a8-b3d3-961cf1547dd9 Version: $LATEST
END RequestId: 12226d03-9ed1-43a8-b3d3-961cf1547dd9
REPORT RequestId: 12226d03-9ed1-43a8-b3d3-961cf1547dd9 Duration: 615.52 ms Billed Duration: 700 ms Memory Size: 1024 MB Max Memory Used: 76 MB Init Duration: 2.76 ms
```
— Staging version (deployed after the SLS upgrade) —

Lambda Function: hsbbconsumer-stg-covcheckCheckProducts
Duration: 5966.64 ms
Log Output:

```
START RequestId: f8b20880-a670-4d33-acac-15067b78e510 Version: $LATEST
2019-11-11T09:13:32.924Z f8b20880-a670-4d33-acac-15067b78e510 SERVERLESS_ENTERPRISE {"origin":"sls-agent","schemaVersion":"0.0","timestamp":"2019-11-11T09:13:32.924Z","requestId":"f8b20880-a670-4d33-acac-15067b78e510","type":"transaction","payload":{"schemaType":"s-span","schemaVersion":"0.0","operationName":"s-transaction-function","startTime":"2019-11-11T09:13:31.609Z","endTime":"2019-11-11T09:13:32.924Z","duration":1313.975842,"spanContext":{"traceId":"f8b20880-a670-4d33-acac-15067b78e510","spanId":"2107b69b-b287-4402-9e44-984ceb257367","xTraceId":"Root=1-5dc9263b-d42b14a9a0d81c3abb978abd;Parent=117c3bac4ff9b6e5;Sampled=0"},"tags":{"schemaType":"s-transaction-function","schemaVersion":"0.0","timestamp":"2019-11-11T09:13:31.609Z","tenantId":"rothwellchris","applicationName":"hsbbconsumer","serviceName":"hsbbconsumer","stageName":"stg","functionName":"hsbbconsumer-stg-covcheckCheckProducts","timeout":6,"computeType":"aws.lambda","computeRuntime":"aws.lambda.nodejs.8.10.0","computeRegion":"ap-southeast-1","computeMemorySize":"1024","computeMemoryUsed":"{\"rss\":61771776,\"heapTotal\":30228480,\"heapUsed\":14058912,\"external\":368772}","computeMemoryPercentageUsed":1.26953125,"computeContainerUptime":0.264,"computeIsColdStart":true,"computeInstanceInvocationCount":1,"computeCustomSchemaType":"s-compute-aws-lambda","computeCustomSchemaVersion":"0.0","computeCustomFunctionName":"hsbbconsumer-stg-covcheckCheckProducts","computeCustomFunctionVersion":"$LATEST","computeCustomArn":"arn:aws:lambda:ap-southeast-1:754402087859:function:hsbbconsumer-stg-covcheckCheckProducts","computeCustomRegion":"ap-southeast-1","computeCustomMemorySize":"1024","computeCustomInvokeId":null,"computeCustomAwsRequestId":"f8b20880-a670-4d33-acac-15067b78e510","computeCustomXTraceId":"Root=1-5dc9263b-d42b14a9a0d81c3abb978abd;Parent=117c3bac4ff9b6e5;Sampled=0","computeCustomLogGroupName":"/aws/lambda/hsbbconsumer-stg-covcheckCheckProducts","computeCustomLogStreamName":"2019/11/11/[$LATEST]79a16f7737884c98825d1cd68c64a0c3","computeCustomEnvPlatform":"linux","computeCustomEnvArch":"x64","computeCustomEnvMemoryTotal":1219444736,"computeCustomEnvMemoryFree":1078190080,"computeCustomEnvCpus":"[{\"model\":\"Intel(R) Xeon(R) Processor @ 2.50GHz\",\"speed\":2500,\"times\":{\"user\":2900,\"nice\":0,\"sys\":3800,\"idle\":14159300,\"irq\":0}},{\"model\":\"Intel(R) Xeon(R) Processor @ 2.50GHz\",\"speed\":2500,\"times\":{\"user\":2100,\"nice\":0,\"sys\":1700,\"idle\":14162800,\"irq\":0}}]","eventType":"unknown","eventTimestamp":"2019-11-11T09:13:31.610Z","eventSource":null,"eventCustomStage":"stg","errorId":null,"errorFatal":null,"errorCulprit":null,"errorExceptionType":null,"errorExceptionMessage":null,"errorExceptionStacktrace":null,"transactionId":"2107b69b-b287-4402-9e44-984ceb257367","appUid":"NsCD8gQz87MQksrbNh","tenantUid":"13CvSM0ZnjjZ3Y0rWc","pluginVersion":"3.2.2","totalSpans":1,"traceId":"f8b20880-a670-4d33-acac-15067b78e510"},"logs":{},"spans":[{"tags":{"type":"aws","requestHostname":"search-digihsbb2-f6dqtxsfn7wk7ons3ba2rxhaia.ap-southeast-1.cloudsearch.amazonaws.com","aws":{"region":"ap-southeast-1","service":"cloudsearchdomain","operation":"search","errorCode":null}},"startTime":"2019-11-11T09:13:32.797Z","endTime":"2019-11-11T09:13:32.919Z","duration":122}],"eventTags":[]}}
END RequestId: f8b20880-a670-4d33-acac-15067b78e510
REPORT RequestId: f8b20880-a670-4d33-acac-15067b78e510 Duration: 5966.64 ms Billed Duration: 6000 ms Memory Size: 1024 MB Max Memory Used: 94 MB Init Duration: 95.54 ms
```
Both versions do the same thing and return the correct data, and apart from the extra SERVERLESS_ENTERPRISE line, nothing unusual appears in the CloudWatch logs. Comparing the Lambda and API Gateway configuration, everything seems to be set up identically.
Of course, with this latest version I now get the more detailed log output and the Dashboard features. But surely that can't be contributing a nearly 10x increase in execution time (615 ms to 5,966 ms)?
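The next thing I plan to try, in case the Dashboard instrumentation is the culprit, is redeploying the staging stage with the Dashboard integration switched off. My understanding (an assumption on my part) is that removing the org/app keys from serverless.yml is enough to skip the SDK wrapping. Roughly this, with the org/app values taken from the log payload above and the rest illustrative:

```yaml
service: hsbbconsumer

# Commenting these two out should redeploy without the Dashboard/SDK
# wrapper (my assumption). The values match the tenantId/applicationName
# visible in the SERVERLESS_ENTERPRISE payload above.
# org: rothwellchris
# app: hsbbconsumer

provider:
  name: aws
  runtime: nodejs8.10
  region: ap-southeast-1
  memorySize: 1024   # matches "Memory Size" in the REPORT lines
  timeout: 6         # matches "timeout":6 in the agent payload
```

If the staging function then runs at production speed, at least I'll know where to look.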
Any ideas on how to troubleshoot this kind of slowdown?
Thanks in advance!
Chris.