Hi,
We are working with a multi-account approach, and I'm going to deploy serverless applications that execute Lambdas in different accounts, copy RDS snapshots between accounts, etc.
Simple example:
I deploy a serverless application to account A. This application includes a Lambda that needs to assume a role in account B, so it can fetch the status of EC2 instances in account B.
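The account A side is easy enough to express on its own; a rough sketch of what I have in mind (the account id 222222222222 and the role name in account B are placeholders):
```
service: account-a-app

provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    # Allow the Lambda role in account A to assume a role that lives in account B.
    - Effect: Allow
      Action:
        - sts:AssumeRole
      Resource: arn:aws:iam::222222222222:role/ec2-status-reader

functions:
  fetchInstanceStatus:
    handler: handler.fetchInstanceStatus
    environment:
      # The handler calls sts:AssumeRole on this ARN and then uses the temporary
      # credentials to call ec2:DescribeInstanceStatus in account B.
      TARGET_ROLE_ARN: arn:aws:iam::222222222222:role/ec2-status-reader
```
What I can't describe in the same file is the role and trust policy that have to exist in account B.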
It would be ideal if it were possible to describe all roles/policies for both accounts in a single serverless.yml file.
Is this somehow possible? Are there workarounds, or different approaches/best practices?
Thanks
Stefan
Sorry, I'm not bringing much of a solution to the table, but I'm very interested in what could come out of this discussion.
Like you, we are also following a multi-account strategy and using serverless to hold our logic and define the required infrastructure.
As a matter of fact, we've just released a small product to production. The main account is completely defined with serverless, but we've left the management of the "child" account to a separate process.
This secondary account is purely infrastructure (no lambdas), so it's kind of OK to manage it manually as an independent CFN stack.
We do have to create IAM roles and policies in this secondary account and add the lambda principals of the main account as trusted entities.
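Roughly, what we end up defining by hand in that secondary account looks like the snippet below (account id, role name and the allowed actions are placeholders; the trusted principal is the Lambda execution role that serverless creates in the main account):
```
Resources:
  MainAccountLambdaAccessRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Trust the Lambda execution role from the main account (placeholder ARN).
          - Effect: Allow
            Principal:
              AWS: arn:aws:iam::111111111111:role/my-service-dev-us-east-1-lambdaRole
            Action: sts:AssumeRole
      Policies:
        - PolicyName: allow-ec2-read
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ec2:Describe*
                Resource: "*"
```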
Ideally, I would love to be able to break this down into several services, with their inter-dependencies defined (and, more importantly, deployed) in a consistent manner across environments via serverless.
There are a number of ongoing feature requests that I'd say could help greatly towards this goal.
[Linked GitHub issue: opened 08 Dec 2017; labels: enhancement, cat/variable]
# This is a Feature Proposal
## Description
People frequently have trouble finding out the final resource names or identifiers used in the generated CF template, which they need for their custom resources (e.g. Ref or GetAtt).
In the Serverless Framework the generation of these identifiers is deterministic and they are already known at the very beginning of the lifecycle, and thus at variable resolution time.
A new `${cfname()}` variable resolution class could effectively solve this problem, reduce complexity for users and, finally, reduce the number of incoming issues.
It could work like this:
| Reference | Description |
| --- | --- |
| `${cfname(function:myFunc)}` | Resolves to the logical id of myFunc |
| `${cfname(function:myFunc:version)}` | Resolves to the logical id of the version resource of myFunc |
| `${cfname(function:myFunc:event:xxx)}` | Resolves to the logical id of the automatically created event resource for event xxx of myFunc |
| `${cfname(apig:deployment)}` | Resolves to the name of the AWS::ApiGateway::Deployment resource |
| `${cfname(apig:api:root)}` | Resolves to the name of the AWS::ApiGateway::RestApi root resource |
These are only samples. As soon as a name resolver is implemented, further resolvable entities can be added easily. It would make sense to even offer resolution for non-randomized resources, because that abstracts the names away from the resources section and will not break if SLS changes them.
The variables then can be easily used anywhere in the serverless.yml:
```
Resources:
  MyResourceThatNeedsARefToAFunction:
    Type: AWS::Something
    Properties:
      Function:
        Ref: ${cfname(function:myFunc)}
```
[Linked GitHub issue: opened 18 Aug 2017, closed 11 Jun 2019; labels: needs feedback, cat/dx]
# Summary
The serverless framework is currently wedded to a few concepts which make it very cumbersome to share javascript code and compilation contexts between many node 'micro-service' stacks. Specifically, the fact that serverless does not allow specifying a path to the serverless config file pushes every set of cloud-formation stacks deployed by serverless toward needing to be a separate npm package. This requirement is essentially enforced by the framework, since it does not expose any mechanism for supplying a path to the serverless config file to use for a given serverless invocation.
# Problems
Forcing developers toward describing every stack as an npm package adds a lot of overhead to the development process -- particularly when it's desirable to share code between stacks.
What overhead is added by the need to describe each serverless stack as an npm package? How much of this overhead is incidental complexity vs actually useful modularization? I argue that for many projects/contexts, this requirement adds a large amount of incidental complexity with net negative benefit rather than encouraging useful patterns:
- every serverless deployable stack needs to set up and define its own tooling environment. Modern tooling requirements for javascript codebases are extensive (compilers/transpilers, test runners, linters, ide integrations). Re-use of these processes between related projects is enormously beneficial. In general, the fewer packages you have to set this tooling up for when working on related code, the better the development experience you can arrange for your team.
- every serverless deployable stack needs to describe its full set of dependencies and devDependencies, and needs to install those dependencies separately in order for the serverless commands to be runnable. Full encapsulation of the development environment in package.json (rather than requiring tools to be installed globally) is very nice -- but if you wish to package all the required development dependencies this way, then every stack deployment context ends up needing to install its own copy of the serverless framework. If you are using a compiler/linter, every stack is also going to need its own copy of the compiler and linter (and their configuration). Every stack deployment context is going to have to manage versions of these things, and install them when setting up the development context for each stack and when running automated tests for each stack.
- it's very difficult to share code between stacks. Simple solutions for sharing code between stacks that rely on node's module resolution algorithm to organize shared dependencies won't work. Furthermore, 'normal' techniques for easing development of related sets of npm packages don't work well with serverless, because serverless doesn't handle packaging of symlinked node dependencies in a good way.
- tools like `npm link`, lerna or yarn-workspaces, which aim to ease development of related sets of npm packages, also don't work well with serverless. If lerna symlinks related-module-a into service-b/node_modules/, then when serverless packages service-b, all of 'related-module-a/node_modules/' will be packaged into service-b, including all the devDependencies of related-module-a (not to mention source artifacts, the set of things normally excluded by .npmignore, etc.). This will generally make the package too large to deploy when packages have a lot of devDependencies, and it basically breaks the ability to use lerna to ease the development of related-module-a in the context of testing/deploying/packaging service-b.
- even if we could use lerna, the whole lerna concept is something of a large hack and for many projects introduces a large amount of complexity that may not actually be needed.
# Goals
For my purposes my goals are:
- easily share code generated from a single compilation and testing context between serverless-deployed stacks
- manage devDependencies and tooling configuration for as much of the codebase as possible in a single place (shared code should be in the same compilation context to ensure the smoothest path to maximum benefit from compiler and ide tooling)
- support easily creating new variations of a stack for use in automated testing or to deal with other kinds of strange business/development requirements -- ideally by just copying/modifying a serverless.yml file
- get the best experience from ides
The pattern I've landed on is to structure our repository like this:
```
.serverless_plugins/ <-- custom plugins -- should be shareable between stacks
package.json <-- scripts to start compiler, run tests etc.
jest.json
jest.debug.json
tsconfig.json
tslint.json
wallaby.js
src/ <-- typescript code
lib/ <-- compiled javascript
services/some_service/serverless.yml
services/some_service/node_modules => ../../node_modules/
services/some_service/package.json => ../../package.json <-- needed to expose devDependencies to serverless packaging process
services/some_service/lib/ => ../../lib/
services/some_service/.serverless_plugins/ => ../../.serverless_plugins/
```
The basic idea of the above pattern is to use symlinks to trick serverless, so that running the `serverless` command from any services/*/ directory behaves as if you had copied that serverless.yml file into the project root directory and run serverless from there.
# Solution
I really think behavior like this should just be supported by default, without having to use symlinks to hack a serverless.yml file into the execution context of another folder. I'd like an option that allows providing a path to a yml file to use as the serverless config. The config file should be interpreted as if it were located in cwd() -- paths defined in the config file should be specified relative to cwd() -- and everything should be executed just as if you had moved the configuration file to ./serverless.yml prior to execution.
# Alternative
If adding the requested feature and exposing to users the ability to control which config file to use from the command line isn't deemed desirable -- would a pull request that modified the Service class and Serverless constructor to allow easier implementation via a custom subclass of the Serverless class be accepted?
Right now I'm monkey-patching the serverless framework to allow using alternative config files in my automated testing environments ... It's not pretty and by necessity involved some copy/pasting, but it essentially required changing the Service.load method to consider options passed to the Service constructor ... A few small changes to the framework could greatly ease this customization ...
```
// PATCHED: attempt to get serviceFilenames from `that` rather than only supporting hard-coded paths ...
// List of supported service filename variants.
// The order defines the precedence.
const serviceFilenames =
  that.serviceFilenames != null
    ? that.serviceFilenames
    : ["serverless.yaml", "serverless.yml", "serverless.json"];
```
And then hacking the Serverless constructor to use my patched Service subclass and pass in custom path to serverless config file ...
```
export class JestServerlessConfig extends Serverless {
  private _jestServerlessContext: JestServerlessContext;

  constructor(serverlessContext: JestServerlessContext) {
    // configure servicePath ...
    super({
      // set servicePath to the directory in which you want the serverless command to execute (abs path)
      servicePath: serverlessContext.executionPath,
      interactive: false
    });
    // HACK: re-create the service, this time with patched behavior ...
    this.service = new JestServerlessServicePatch(this, {
      serviceFilenames: [serverlessContext.relativeServerlessConfigPath]
    });
    // HACK: the Variables class picks off the this.serverless.service instance inside its constructor and saves it --
    // make sure to update that reference to point at the patched object...
    this.variables.service = this.service;
    this._jestServerlessContext = serverlessContext;
  }
}
```
I also have to monkey patch serverless/lib/utils/getServerlessConfigFile.
# Other Alternative
It's possible I could add webpack to my build process and generate a bundle including all required node_modules for each services/*/ directory. This could be a good alternative, but webpack is complicated (and it would be much slower). It also wouldn't work all that well with many varieties of common testing tools. The need to do this ultimately results from a small, addressable limitation of the serverless framework, which I think would be better handled with what seems like a small augmentation to the framework...
[Linked GitHub issue: opened 27 Apr 2017; labels: enhancement, needs feedback, cat/variable, cat/packaging, cat/deployment, cat/design]
With the new package/deploy semantics, builds and deployments can be distributed to different servers. The created artifacts can then be deployed to AWS (using the CF template created in the build step).
Normally you construct your services so that some properties are set by variables defined in serverless.yml, depending on the deployment target (e.g. stage or region). A prominent example of this is the setting of Lambda environment variables.
Imagine you have the following service definition:
```
service: stashimi-bot-dash
provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1
  ...
  environment:
    MY_VAR: ${self:custom.${self:custom.stage}.myVar}
...
custom:
  stage: ${opt:stage, self:provider.stage}
  ...
  dev:
    myVar: "My-3rdParty-Auth-Id-For-Dev"
  prod:
    myVar: "My-3rdParty-Auth-Id-For-Prod"
```
The problem with the package command now is that the variables are already substituted there, and the CF template that is transferred to the deploy step already contains the resolved values. As a consequence, it is not possible to deploy a built/packaged service to different stages, i.e. the stage used to build also has to be used to deploy.
A workaround is to build all stages/regions independently on the build server, create artifacts for each stage and deploy the corresponding artifact on the deployment server.
A proper solution would be for the build phase to create an artifact that is independent of the environment it is deployed to.
### Proposal
The build phase should keep the literal variable references in the artifact's CF template, and the deploy phase should do the variable substitution depending on the stage/region selected at deployment time. This would mean that only the variable substitution functionality has to be moved.
Of course only variables that are used/placed in the generated CF template should be deferred.
So maybe the `generateXXXTemplate()` function that runs in the build phase would be the target for a viable implementation - then only the right variables would be affected. In the deploy phase, the substitution has to happen as soon as the CF template from the artifact is loaded, so that the real values are available for all plugins running in the deployment context.
Care has to be taken for plugins that might use the variable information during the build phase (imo that's wrong anyway).
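To illustrate with the environment variable from the example above: today the packaged template contains the resolved literal, whereas with deferred substitution it would still carry the expression and only resolve it at deploy time (illustrative fragment of the Lambda function's properties, not actual framework output):
```
# Packaged today (stage baked in at package time):
Environment:
  Variables:
    MY_VAR: "My-3rdParty-Auth-Id-For-Dev"

# Packaged with deferred substitution (resolved during deploy):
Environment:
  Variables:
    MY_VAR: ${self:custom.${self:custom.stage}.myVar}
```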
Essentially, they are all targeted at enabling better integration and easier management of the underlying CFN infrastructure.
There might be a case for a plugin that could wrap some of these requirements together. However, my knowledge of the internals of serverless is too limited to envisage what the path of least resistance could be.
Any potential hacks from other users, or guiding input from the serverless folks, would be most welcome.