How do I deploy the same package to multiple environments?

I can create a package fine using:
serverless package --stage etc etc --package some-dir

The directory contains my zip, state and cloudformation files.

Now, once built and unit tested, I would like to carry these artifacts through my deployment pipeline. Ideally I would generate just the state and CloudFormation files for each environment while retaining my original tested code from the build step. How can I do this? It seems like serverless deploy --package some-dir requires those files to be pre-generated. This breaks the pipeline for me, as I would ideally like my build step to have no real information about the deployment environments.

Has anyone solved this problem?


Have you tried something like this?

# Create a package with the original artifacts
serverless package --stage artifacts --package /path/to/package/artifacts

# Create a new package for the new environment (newenv)
serverless package --stage newenv --package /path/to/package/newenv

# Copy the zip file from the original artifacts into the new environment package
cp /path/to/package/artifacts/*.zip /path/to/package/newenv

# Deploy the new environment package (with the zip file from the original artifacts)
serverless deploy --stage newenv --package /path/to/package/newenv

We have the same issue. Buggy’s solution above is not an option for us.

We have multiple environments where the same artifact should be usable. We are trying to avoid building environment-specific versions; that works against pipeline thinking. We should be able to build one artifact that can be deployed into any existing environment. In our opinion, there should be a configuration alongside the artifact: configuration + artifact = deployment package for a specific environment.

Hey @rajwilkhu, have you found a solution?
I'm looking for how to solve this too!

Hi all, has anyone found a solution for this scenario?

We have a "kind of" solution: SSM Parameters and CloudFormation exports.

Wherever possible we store environment-specific values in Parameters, so that nothing "unique" ends up in CloudFormation itself. Serverless also lets you reference CloudFormation directly, so you can often use Fn::ImportValue; those values end up embedded in the CloudFormation template file. It starts you down the path of naming standards and those challenges, but it does drive you closer to the goal of "build once, deploy anywhere".
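For illustration, here is a minimal serverless.yml sketch of both patterns. The parameter path and export name (/myapp/queue-url, shared-assets-bucket) are made-up placeholders, not anything from this thread:

```yaml
functions:
  worker:
    handler: handler.worker
    environment:
      # Resolved from SSM Parameter Store at package time; the key path is
      # fixed, only the stored value differs per account/environment
      QUEUE_URL: ${ssm:/myapp/queue-url}
      # Imported from another stack's CloudFormation export; this ends up
      # embedded in the generated CloudFormation template
      BUCKET_NAME:
        Fn::ImportValue: shared-assets-bucket
```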

Thank you for the reply. Right now I'm using SSM Parameters. The problem I have is, for example, with the DynamoDB table names: to avoid conflicts between environments, I prefix each table name with the environment name. How do you manage this scenario?
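For context, a sketch of the prefixing approach described above (the table and resource names are hypothetical); the stage ends up baked into the template, which is the conflict with "build once, deploy anywhere":

```yaml
custom:
  # Hypothetical: table name prefixed with the stage to avoid collisions
  # between environments, e.g. dev-orders, prod-orders
  ordersTable: ${opt:stage, 'dev'}-orders
resources:
  Resources:
    OrdersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.ordersTable}
```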

Sorry, it's been a long while since I've read this forum. That wasn't intentional.

Short answer is SSM Parameters. We use the Parameter Store like a huge central config file. If anything is unique in one environment vs. another, it's a parameter. The functions always look to that fixed key location and get a variable value back, and thus our serverless configs are static. Anything "unique" to a deployment should be config and live in the Parameter Store. Our Parameter Store is then managed by CF templates, or sometimes by direct manipulation by Ops, but we obviously prefer the former. In some accounts we don't allow ANY direct manipulation except break-glass access, to help ensure it's predictable.
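To make the "fixed key, variable value" idea concrete, here is an illustrative sketch. It does not make real AWS calls: a dict stands in for Parameter Store (in production this lookup would be an SSM GetParameter call), and the key path /myapp/orders-table is a made-up placeholder:

```python
# A dict standing in for SSM Parameter Store, one namespace per environment.
# The KEY is identical everywhere; only the VALUE differs per environment,
# so the deployed artifact and its config lookups never change.
PARAMETER_STORE = {
    "dev":  {"/myapp/orders-table": "dev-orders"},
    "prod": {"/myapp/orders-table": "prod-orders"},
}

def get_config(environment: str, key: str) -> str:
    """Resolve a fixed config key to its environment-specific value."""
    return PARAMETER_STORE[environment][key]

print(get_config("dev", "/myapp/orders-table"))   # dev-orders
print(get_config("prod", "/myapp/orders-table"))  # prod-orders
```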

By having a clear "line in the sand" between code and config, it becomes much easier: we can get away with identical CF templates run in multiple accounts, and are thus capable of achieving the above goal, or at least of using a very generic CodeBuild task in each account and passing unmodified source code to each one. We actually do the latter, and literally lock out anything but read-only access to the account in question, so we have high assurance that it's deployed the exact same way every time.
