How do I deploy the same package to multiple environments?



I can create a package, which works fine:
serverless package --stage etc etc --package some-dir

The directory contains my zip, state and cloudformation files.

Now, once the package is built and unit tested, I would like to carry these artifacts through my deployment pipeline. Ideally, my build step would generate just the state and CloudFormation files while retaining my original tested code. How can I do this? It seems like serverless deploy --package some-dir requires those files to be pre-generated, which breaks the pipeline for me: ideally, my build step would not have any real information about the deployment environments.

Has anyone solved this problem?


Have you tried something like this?

# Create a package with the original artifacts
serverless package --stage artifacts --package /path/to/package/artifacts

# Create a new package for the new environment (newenv)
serverless package --stage newenv --package /path/to/package/newenv

# Copy the zip file from the original artifacts into the new environment package
cp /path/to/package/artifacts/*.zip /path/to/package/newenv

# Deploy the new environment package (with the zip file from the original artifacts)
serverless deploy --stage newenv --package /path/to/package/newenv


We have the same issue. Buggy’s solution above is not an option for us.

We have multiple environments in which the same artifact should be usable. We are trying to avoid building environment-specific versions, since that works against pipeline thinking. We should be able to build one artifact that can be deployed into any existing environment. In our opinion, there should be a configuration kept next to the artifact, and configuration + artifact = deployment package for a specific environment.
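One way to read "configuration + artifact = deployment package" is a small assembly step in the pipeline that pairs the single tested artifact with per-environment configuration at deploy time. This is only a sketch: the paths, file names, and config layout are made up for illustration, and it deliberately does not invoke serverless itself.

```shell
#!/bin/sh
# Sketch: assemble a per-stage deployment package from one tested artifact
# plus a per-environment config file. All paths here are hypothetical.
set -eu

ARTIFACT_DIR=build/artifact      # output of the one-time build step
ENV_CONFIG_DIR=config            # per-environment settings, versioned separately
STAGE="${1:-staging}"            # which environment to assemble for
DEPLOY_DIR="build/deploy-$STAGE"

# Simulate the build output and config so this sketch runs standalone;
# in a real pipeline these files would already exist.
mkdir -p "$ARTIFACT_DIR" "$ENV_CONFIG_DIR"
: > "$ARTIFACT_DIR/service.zip"
echo "stage: $STAGE" > "$ENV_CONFIG_DIR/$STAGE.yml"

# Assemble: configuration + artifact = deployment package for this stage
mkdir -p "$DEPLOY_DIR"
cp "$ARTIFACT_DIR/service.zip" "$DEPLOY_DIR/"
cp "$ENV_CONFIG_DIR/$STAGE.yml" "$DEPLOY_DIR/env.yml"
```

The deploy step for each environment would then run against the assembled directory, so the artifact itself is never rebuilt per environment.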


Hey @rajwilkhu, have you found a solution?
I’m looking for one too!


Hi all, has anyone found a solution for this scenario?


We have a “kind of” solution: SSM Parameters and CloudFormation exports.

Wherever possible we store environment-specific values in SSM Parameters, so that nothing “unique” ends up in the CloudFormation template itself. Serverless also allows you to reference CloudFormation exports directly, so you can often use Fn::ImportValue as well, though those references do end up embedded in the CloudFormation template file. It starts you down the path of naming standards and the challenges that come with them, but it does drive you closer to the goal of “build once, deploy anywhere”.
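To make the two approaches above concrete, a serverless.yml fragment might look roughly like this. The parameter paths and export names are hypothetical, only the ${ssm:...} and Fn::ImportValue mechanisms are the point:

```yaml
# Sketch with made-up names: environment-specific values pulled from
# SSM Parameter Store and from another stack's CloudFormation exports.
provider:
  environment:
    # Resolved from SSM at package time; the parameter path encodes the stage
    DB_HOST: ${ssm:/myapp/${opt:stage}/db-host}

functions:
  hello:
    handler: handler.hello
    environment:
      # Resolved by CloudFormation at deploy time from another stack's export;
      # note this reference ends up embedded in the generated template
      QUEUE_URL:
        Fn::ImportValue: myapp-${opt:stage}-queue-url
```

The trade-off described above is visible here: the SSM value and the export name are still stage-dependent, which is where naming standards come in.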


Thank you for the reply. Right now I’m using SSM Parameters. The problem I have is, for example, with the DynamoDB table names: to avoid conflicts between environments, I prefix each table name with the environment name. How do you manage this scenario?
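One hedged way to keep the stage prefix out of the packaged template is to let CloudFormation build the table name at deploy time from a stack parameter, rather than resolving it when the package is created. This is only a sketch (the Stage parameter, UsersTable resource, and key schema are all hypothetical), and it assumes your deploy tooling can supply the parameter value to the stack:

```yaml
# Sketch with hypothetical names: the table name is composed by
# CloudFormation at deploy time (Fn::Join over a Stage parameter),
# so the same template works for every environment.
resources:
  Parameters:
    Stage:
      Type: String
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName:
          Fn::Join:
            - "-"
            - - Ref: Stage
              - users
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```

Fn::Join with Ref is used here instead of Fn::Sub to avoid "${...}" strings, which the Serverless variable resolver would otherwise try to interpret at package time.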