Unmodified AWS Lambdas being deployed as part of "serverless deploy"

Very often I need to make small changes in my resources and deploy my AWS stack. To do so I modify the resource definition and run “sls deploy”.

I have noticed that even when the change was not affecting lambdas at all, the zip containing the lambda code is uploaded and the cloudformation stack updates every lambda in the serverless.yml file. This takes time, so I am wondering whether it is possible for the serverless framework to detect that there were no changes in the lambda code and avoid updating them when that is the case.

If no code or yml changes happen, the deployment should stop early. If that's not the case, check that your framework version is the latest.

If you modify anything in serverless.yml you must run a full sls deploy to deploy the changes.

For faster deploys of just code changes, you can run `sls deploy function -f NameOfFunction`; it will just zip up the code and be much faster than a full deploy.
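To illustrate the difference (the function name and stage below are placeholders):

```shell
# Full deploy: packages everything and runs a CloudFormation stack update
sls deploy --stage dev

# Code-only deploy for one function: zips and uploads just that function's
# code, bypassing CloudFormation entirely (serverless.yml changes are NOT applied)
sls deploy function -f myFunction --stage dev
```

Keep in mind the single-function deploy only swaps out the code, so any pending configuration change still needs a full `sls deploy` afterwards.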

If I don’t modify anything (code or yml), as you said, “sls deploy” does not do anything. My concern is when I just modify a resource in the yml definition (which has nothing to do with lambda). In that case when I “sls deploy” every lambda is redeployed (so a new version of the lambda is created in AWS) with exactly the same code as it had before.

Yeah, it creates new versions (if I recall correctly) and cleans up older ones.

Out of curiosity, what is your concern with this? =)

I think you can disable this with the versionFunctions flag (see "Serverless Framework - AWS Lambda Functions" in the docs).

It’s good to know about that flag, thanks! I have already had problems with old versions of my functions taking too much space.

But my concern was mainly the amount of time that every deployment has to spend first uploading the zip file with the code and then creating new lambda versions for each lambda in the serverless.yml file. This happens regardless of whether those lambdas were modified. Unfortunately I believe that the versionFunctions flag does not address this problem, it just avoids keeping old versions of the functions hanging there.
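For reference, the flag discussed above sits under the provider block in serverless.yml, roughly like this:

```yaml
# serverless.yml (fragment)
provider:
  name: aws
  # Skip publishing a new numbered Lambda version on every deploy.
  # As noted above, this does not avoid the zip upload itself;
  # it only stops old versions from piling up.
  versionFunctions: false
```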


I’m having the same problem here.

Even if nothing is changed, I’m spending 1-2 minutes of my CI/CD pipelines just re-uploading things and updating the stack.

I was equally annoyed by this. It’s time consuming, and it’s needlessly filling my deployment bucket. I roll out lots of infra through my serverless.yml and regularly update those resources without touching Lambda code.

For the longest time I saw the warning that Serverless wasn’t allowed to “GetFunction”, so I gave it those permissions, assuming that would solve it. After all, GetFunction will return a hash of the function body, so with that, I figured it would know enough not to upload it needlessly. But it seems it’s still doing it.

Has this received any attention or was addressed in any way? It takes ages to deploy our stack.

You can split your large project into more than one stack.

There’s a hard AWS limit on the number of resources you can have in a single CloudFormation stack, which you must be very close to hitting anyway?

We’re at 450 resources (limit is 500). How does splitting this into substacks solve the issue of unchanged code being deployed?

I’m sorry, you said it was time consuming and filling a bucket (you can’t fill an S3 bucket, but we’ll roll over that question :) )

If you split the stacks, then any given deploy of the sub parts will be quicker, and won’t update the other stacks.

I’m not talking about sub-stacks. Actual separate stacks. Depends how your code is structured though…

For instance, sometimes we see a stack with fooDev and fooLive instead of using a single foo stack and --stage at deploy time.
Or we might see chains of dozens of functions-and-SQS, which can be easily broken at any point and just share the common SQS names.
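A minimal sketch of what that split might look like (all service, queue, and handler names here are hypothetical):

```yaml
# producer/serverless.yml -- deployed independently of the consumer service
service: foo-producer
provider:
  name: aws
functions:
  produce:
    handler: handler.produce
    environment:
      # The queue is shared by naming convention rather than by a
      # CloudFormation reference, so each stack deploys on its own
      QUEUE_NAME: foo-work-queue

# consumer/serverless.yml (separate project directory, separate stack)
# service: foo-consumer
# functions:
#   consume:
#     handler: handler.consume
#     events:
#       - sqs:
#           arn: arn:aws:sqs:${aws:region}:${aws:accountId}:foo-work-queue
```

Because neither stack imports outputs from the other, deploying one never touches the other's resources.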

@TomC thanks for taking the time to respond. I’m not following your reference about the bucket or S3.

It is time consuming, meaning it takes “serverless deploy” anywhere from 5 to 15 minutes to deploy the stack, depending on how quickly CloudFormation works. Changing an attribute of an SQS queue or updating the API Gateway log format should ideally not result in repackaging and redeploying all Lambda functions.

Yes, splitting up the project would work, but that is a workaround for the framework’s shortcomings. I’d rather the framework handled this itself.

I think of it more as a CloudFormation limit.

I’m not sure how the framework would know what a good split point was for a given block of YAML.

I’ve not used hXXps://www.serverless.com/plugins/serverless-plugin-nested-stacks to know if it’d be any help.

(Edit: stupid forum gets the auto-generated link wrong, because the page title is wrong, so I had to break the link)

Interestingly enough, if I run sls deploy locally multiple times, it does not deploy the lambdas again, only the other resources.

There seems to be some hashing going on, but I haven’t worked out what needs to be cached in CI to enable this behaviour.

Probably your CI/CD doesn’t persist the .serverless folder the way your local environment does?
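If that guess is right, persisting the `.serverless` directory between pipeline runs might restore the local behaviour. A GitHub Actions sketch (entirely hypothetical; it assumes the framework's change detection reads state from `.serverless`, and your CI system may differ):

```yaml
# .github/workflows/deploy.yml (fragment)
- name: Restore serverless state from previous runs
  uses: actions/cache@v4
  with:
    path: .serverless
    # Keyed per branch so each environment keeps its own state
    key: serverless-state-${{ github.ref_name }}

- name: Deploy
  run: npx serverless deploy --stage dev
```

Worth verifying on a throwaway stage first, since stale cached state could also mask a deploy that should have happened.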