Orchestrating deployment and sharing stack outputs in a declarative manner with Lerna repos

Hi folks,

I created a Lerna repo where each package is its own Serverless service, with its own CloudFormation stack, etc. These are separate domains like:

  • auth
  • web
  • api
  • ssl

Sometimes one domain needs to know the stack outputs of another. For example, my Cognito authenticated role (= auth domain) has to state that, once authenticated, it may call execute-api:Invoke against the API Gateway of the api domain. This raises a few questions.

  1. Currently I deploy the domains in an arbitrary order, but if one requires an output from another, they have to be deployed in the right order. How do you orchestrate this? And what if they reference each other?

  2. How does service A know that it depends on service B? I could use Fn::ImportValue, but that would be rather ugly, because the dependency would not be explicitly declared anywhere. It would be nicer if you could do something like require('../service_b').someStackVariable inside service A (the sketch below shows the kind of implicit reference I mean today).
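To make this concrete: today the auth service has to reach into the api stack by name, roughly like below, using the framework's cf: variable (an Fn::ImportValue would look similar), and nothing in the repo declares that auth therefore has to be deployed after api. All stack, output, and resource names here are placeholders, and the role is trimmed to the relevant part:

```yaml
# auth/serverless.yml (sketch — "api-${self:provider.stage}" and
# "ApiGatewayRestApiId" are placeholder stack/output names)
resources:
  Resources:
    CognitoAuthenticatedRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Federated: cognito-identity.amazonaws.com
              Action: sts:AssumeRoleWithWebIdentity
              # a real trust policy would also restrict this to the identity pool
        Policies:
          - PolicyName: invoke-api
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action: execute-api:Invoke
                  # ${cf:stackName.outputKey} reads the other stack's output at
                  # package time, so the api stack must already exist — but
                  # nothing declares that ordering anywhere.
                  Resource: arn:aws:execute-api:*:*:${cf:api-${self:provider.stage}.ApiGatewayRestApiId}/*
```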

Motivation
I need this Lerna repo structure because otherwise my CloudFormation template grows to 1000+ lines. In addition, the setup (plugins, etc.) differs per domain: api, for example, requires a complicated Babel setup because it has more business logic written in ES6, while auth just sets up some Cognito resources and can be deployed much faster without any plugins. In short, the service separation works really well; the challenge now is orchestrating deployment and sharing variables between the services.

Has anyone solved this issue already?


I can relate to your situation. I’ll share what we do in one project; hopefully someone will offer better ideas or suggestions in the discussion :slight_smile:

So, our repo is a relatively simple data lake. Our auth part is simply signed requests back and forth, since in time it will be machines dealing with the requests, not people using a UI.

Although we use Lerna in other projects, we don’t use it in this one, because the services are not npm packages that we would publish to npm. For “orchestration” we simply use scripts (roughly the sketch below). It’s not perfect, but it does the job, as each deployment follows the previous one synchronously.
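Concretely, our “scripts” are nothing more sophisticated than something like this; the services/&lt;name&gt; layout, the auth → api → web order, and the stage argument are all examples, not something the tooling figures out for you:

```bash
#!/usr/bin/env bash
# deploy-all.sh — run the deployments in series, in dependency order.
set -euo pipefail

STAGE="${1:-dev}"

for service in auth api web; do
  echo "Deploying ${service} (${STAGE})..."
  (cd "services/${service}" && npx serverless deploy --stage "${STAGE}")
done
```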

As a general rule of thumb, each service that contains a serverless.yml shares its output via serverless-stack-output. With this plugin, each service deployment results in a JSON file containing information such as the service endpoint. Following some common conventions, like putting this output file in the .serverless folder and knowing which service comes after which, we can relatively safely rely on these output files being loaded via variables in the serverless.yml files (sketched below). It’s something like “service discovery”: each mini endpoint can be glued to other services and used via environment variables in Lambda functions.
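Roughly, the producing service dumps its outputs to a known file and the consuming service loads that file through normal serverless.yml variables. A minimal sketch, where the UserPoolId output key, the service names, and the folder layout are just examples of our conventions:

```yaml
# auth/serverless.yml — producer (sketch)
plugins:
  - serverless-stack-output

custom:
  output:
    # serverless-stack-output writes the stack outputs here after each deploy
    file: .serverless/output.json

# api/serverless.yml — consumer (sketch); assumes auth was deployed first and
# that its stack exposes a "UserPoolId" output
functions:
  graphql:
    handler: src/handler.main
    environment:
      USER_POOL_ID: ${file(../auth/.serverless/output.json):UserPoolId}
```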

In some cases, however, when you use less popular services or plugins, such as Elasticsearch, you’ll still need to define a CloudFormation Output for serverless-stack-output to pick up, or, as you mentioned, import the value of a previous stack’s output with something like Fn::ImportValue: ${self:provider.stage}:elasticsearch:ServiceEndpoint, which indeed gets funky at some point and becomes harder to maintain.
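For completeness, that export/import pair looks roughly like this; the Elasticsearch resource is trimmed down, and only the export name really matters, since it has to match the Fn::ImportValue string above:

```yaml
# elasticsearch/serverless.yml — exporting the endpoint (sketch)
resources:
  Resources:
    ElasticsearchDomain:
      Type: AWS::Elasticsearch::Domain
      Properties:
        ElasticsearchVersion: "7.4"
  Outputs:
    ServiceEndpoint:
      Value:
        Fn::GetAtt: [ElasticsearchDomain, DomainEndpoint]
      Export:
        # must match whatever the consumers pass to Fn::ImportValue
        Name: ${self:provider.stage}:elasticsearch:ServiceEndpoint

# consuming serverless.yml — importing it (sketch)
provider:
  environment:
    ELASTICSEARCH_ENDPOINT:
      Fn::ImportValue: ${self:provider.stage}:elasticsearch:ServiceEndpoint
```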

In short, to my knowledge, scripts running sls deploy in series are what help at this point, and the individual services may need CloudFormation-specific outputs to share their mini endpoints with other services.


We’re currently facing the same challenge. One of our repos has about 5 SLS services that need to be deployed in a specific order. Right now we too are using scripts and serverless-stack-output to hot-potato outputs into the params of the next service.

However, we’re looking for a better way, and we also want to reuse JS across projects where it makes sense. Currently we’re investigating https://nx.dev/ and https://lerna.js.org/ to see if either will work.

So far, Lerna has much of what we need, save for the ability to ORDER the deployments.

Nx seems to have it all; it even has implicitDependencies, so we can say that sls-frontend depends on sls-api, which in turn depends on sls-auth (rough sketch below). However, Nx seems to involve a good deal more to learn and more overhead in the way of config. They try to solve that extra config by creating boilerplates for projects… which is just one more thing to create/manage.
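For what it’s worth, in the Nx version we were evaluating, those project-level implicitDependencies live in nx.json, roughly like this (the project names are ours, and newer Nx releases may have moved this into per-project config):

```json
{
  "npmScope": "ourorg",
  "projects": {
    "sls-auth": { "tags": [] },
    "sls-api": { "tags": [], "implicitDependencies": ["sls-auth"] },
    "sls-frontend": { "tags": [], "implicitDependencies": ["sls-api"] }
  }
}
```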

Still on the lookout for the right thing. Still playing with these two.