Restructuring AWS - Proper way to configure AWS Accounts, Organisations and Profiles when using Serverless



Hello everyone.

After playing around for about 6 months with GCP/AWS and Serverless Framework, we have decided to rethink our AWS Organisation structure entirely.

Why reorganise AWS?

Because our current way of doing things actually limits us, and we need to lift those limits.
Here’s the most annoying thing at the moment:

Only I can use the Serverless framework, because we created a serverless-admin IAM user with AdministratorAccess permissions.
This user is used to manage everything through Serverless, from staging to production applications.

  • If I give this user’s credentials to any collaborator, I take the risk that any of them can destroy anything in the production environments.
  • If I don’t, then none of them can access the staging environment.

Of course, at the beginning, it didn’t matter much because I was mostly playing around doing a bunch of R&D and testing all those amazing things like Lambdas and alike. But now, I need to be able to let other people into the playground to get their hands dirty. #scalability

That particular limitation is what I aim to fix, by restructuring our usage of AWS.

How to reorganise AWS?

That’s what I’m currently wondering. I’ve talked with several people about the Serverless Framework’s particular case, where you basically need to give an “admin” role to your collaborators so that they can actually use it.
Especially at the beginning, because it just makes life easier to get started (and because there is nothing critical online at that point).
We also all noticed how much of a pain setting up IAM properly is: so many permissions to deal with, and it’s so difficult to figure out what you actually need. Using an “admin” role is just so much simpler.

But this practice doesn’t scale: you can’t keep adding new people and giving them “admin” permissions. That’s where we are, trying to change our AWS structure and looking for a proper solution that will hold for years to come and scale with the upcoming people and projects.

That’s where AWS Organisations and AWS Accounts come into action. The first thing is to understand how those two should be used.

I’ve had a long discussion about this with other SLS members on Slack (thanks to Franklin and Rob).

AWS Organisation

An Organisation is the top-level block of your company on AWS; you should only have one. It’s where you manage AWS Accounts, consolidated billing (an overview across all Accounts) and top-level DNS.

AWS Account

An Account can represent different things, depending on how you decide to use it.
It can be a person, an environment, a product+environment, and many other things. It totally depends on your design.

An Account is its own billing unit and its own permission boundary.

When you first sign up on AWS, you automatically create an Account. Quite often, that same account becomes the Master Account of your AWS Organisation.

The Master Account shouldn’t be an environment or a product, and it shouldn’t contain any deployed service. It should just handle top-level configuration, like top-level DNS (your main domain name). The email linked to the Master Account shouldn’t belong to an individual, but rather be an alias.

Then, depending on how you want to manage your company, you may use AWS Accounts in different ways. In my case, it’s as follows:

AWS accounts:
	Master account (has access to AWS Organisation configuration)
		Consolidated Billing (for all other AWS Accounts)
		Top-level DNS and Route 53 domain management
		IAM Users (with cross-account access when needed)
	Account production
	Account staging
	Account development

Production, staging and development Accounts have their own AWS Services, unit billing, etc.
When a User needs access to multiple accounts (dev + staging for instance), it’s handled through “cross-account” configuration.
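For the CLI side, cross-account access is usually wired up in `~/.aws/config`: a profile can declare a role to assume in another account, backed by the credentials of a base profile. A minimal sketch, with hypothetical account IDs and role names:

```ini
# ~/.aws/config — account IDs, regions and role names are assumptions
[profile master]
region = eu-west-1

# Cross-account access: the CLI assumes this role in the staging
# account using the "master" profile's credentials under the hood
[profile staging]
role_arn = arn:aws:iam::111111111111:role/DeveloperAccess
source_profile = master
region = eu-west-1

[profile development]
role_arn = arn:aws:iam::222222222222:role/DeveloperAccess
source_profile = master
region = eu-west-1
```

With this in place, `aws s3 ls --profile staging` transparently performs the `sts:AssumeRole` call, so cross-account access works for programmatic use too, not only for the Console.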

This setup provides enough flexibility:

  • You get both consolidated billing for all your accounts and per-account billing. You know exactly how much your staging and production environments cost, separately. Even if you don’t use tags, it gives a good and reliable cost overview.
  • Security is enhanced: each IAM User can be set up to allow AWS Console access, programmatic access, or both. The “switch role” feature allows a smooth transition between the different roles in the AWS Console.
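The “switch role” feature relies on a trust policy attached to a role in the target account (staging, for instance) that allows IAM Users from the Master Account to assume it. A sketch, with a hypothetical Master Account ID; the MFA condition is optional but a good habit:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::999999999999:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
```

What the user can actually do once switched is governed by the permission policies attached to that role, so the blast radius stays under the target account’s control.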

Configuring which environment to work with, locally

Alright, now that there is a proper separation between environments, let’s talk about how it changes the way developers manipulate SLS on their machine.

Since every environment (production, staging, development) has its own Account, a User who has access to development will not automatically have access to staging. (I’m not sure, but I don’t think cross-account roles apply to programmatic access; they may only work for Console access.) One simple way to properly configure your multiple AWS IAM credentials is through the use of Profiles.

There are several ways of doing it, I prefer the automated way which chooses the right profile based on the environment you’re deploying to.
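One common way to automate the profile choice, sketched below with hypothetical service and profile names: map each stage to a named AWS profile in the `custom` section, and let `provider.profile` resolve from the `--stage` flag.

```yaml
# serverless.yml — service name and profile names are assumptions
service: my-service

custom:
  stage: ${opt:stage, 'development'}
  profiles:
    development: development
    staging: staging
    production: production

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${self:custom.stage}
  # Pick the AWS profile matching the target stage
  profile: ${self:custom.profiles.${self:custom.stage}}
```

Running `sls deploy --stage staging` then deploys with the `staging` profile, so nobody can accidentally push to production with development credentials.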


Sample serverless.yml for multiple AWS accounts needed!

This post is a work in progress. It’s a struggle to set up AWS properly while anticipating how your company will grow and how you should organise it. The goal here is to build some kind of “AWS Setup Guide” from the experience and feedback of the community. Don’t hesitate to ask questions and share your own struggles! I may also misunderstand some parts of AWS, so don’t hesitate to tell me if so!


The current best practice is one account per service per stage. This is designed to limit the blast radius when something goes wrong.

I’m the first to admit that I don’t normally follow that advice for smaller projects. If I’m only working with a couple of services then I’m more likely to use one account per stage with all services in that stage deployed to the same account. This limits the blast radius but there are still times (especially during deployment) when things can go very wrong.

AWS goes even further with SAM. They recommend one SAM project for each event source.

I think what you’re missing here is automated deployments. Developers shouldn’t be executing deployment scripts; instead, you should look at automating this with your CI/CD pipeline.


Interesting! The “one account per service per stage” rule makes sense, but my main issue with it so far is defining what a service is: when building a micro-services architecture, many services are related to each other, and deciding what’s a service and what’s inside a service can be hard. Also, we all eventually start with a small project, which then grows and grows until it’s not small anymore. Anticipating growth can be hazardous.

I don’t use SAM, but I’ve heard about it a lot. Does SLS generate SAM templates on its own? How/why would you use SAM when using SLS? Aren’t they the same thing?

You’re totally right about automated deployments; I don’t currently have any. It’s not that I don’t want to, but I don’t really know what to use (third party? Bitbucket CI/CD? AWS CI/CD? …).
The SLS world is fairly new.


Both Serverless and SAM transform into CloudFormation for deployment. They do a similar thing but very differently.

You could achieve the same result with Serverless that the AWS SA was recommending for SAM by only having one event source per Serverless project.

Defining a microservice can be difficult, and I don’t believe there is a single correct answer. But that’s why they pay you the big bucks. I wouldn’t stress too much about getting it 100% right on day one. Like any code you write, your architecture will need to be refactored over time.

For example: There’s nothing wrong with building user notifications into a service but if you discover that multiple services are sending notifications to users then you might want to look at moving that into a notifications service. Equally, it may be obvious from day one that you’re going to have user notifications sent from multiple services so you just build it that way from the beginning.

It might help to ask questions like:

  1. Is this something that could be used by multiple other services?
  2. Does this implement a discrete business or technical function?
  3. Can I build this in a reusable manner?

For automated deployments, I would start with the CI/CD solution that you already understand. If you don’t have one, then maybe give CodePipeline a go?
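If you go the CodePipeline/CodeBuild route, the deploy step boils down to a small buildspec. A minimal sketch, assuming a Node.js project and a `STAGE` environment variable set per pipeline:

```yaml
# buildspec.yml — a minimal CodeBuild sketch; runtime version and
# the STAGE variable are assumptions, not from the original post
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm ci
  build:
    commands:
      # STAGE is expected to be set per pipeline (e.g. staging, production)
      - npx serverless deploy --stage $STAGE
```

Each stage’s pipeline can then run under an IAM role in that stage’s account, which removes the need to hand deployment credentials to developers at all.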


So, I’m not new to Lambda, but I am new to SLS. I’ll be using it for production very soon and I’ve had the same considerations and issues. For example, when using Lambda with SQS triggers, you’ll find there are additional SQS permissions needed, which are not necessarily documented in the most obvious places, and the whole process becomes somewhat trial-and-error (see How to grant access to SQS in Serverless.yml).
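For reference, the permissions in question are the ones a Lambda’s execution role needs to poll an SQS queue. A sketch of the relevant serverless.yml pieces, with a hypothetical queue ARN (note that newer Serverless Framework versions may add these automatically for `sqs` events):

```yaml
# serverless.yml excerpt — queue ARN, region and account ID are assumptions
provider:
  name: aws
  iamRoleStatements:
    # Permissions the Lambda's role needs to consume an SQS queue
    - Effect: Allow
      Action:
        - sqs:ReceiveMessage
        - sqs:DeleteMessage
        - sqs:GetQueueAttributes
      Resource: arn:aws:sqs:eu-west-1:111111111111:my-queue

functions:
  worker:
    handler: handler.worker
    events:
      - sqs:
          arn: arn:aws:sqs:eu-west-1:111111111111:my-queue
```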

In any event, since these are micro-services we are pushing out with SLS, and … on their own should be simple, non-proprietary functions … I also maintain the philosophy that each of these (non-proprietary) functions should be sharable … or portable, in the sense that I can give it to any 3rd party to use or develop … on their own AWS account … using whatever liberal or conservative permission scheme they desire. So in other words, just as I expect a developer to have their own GitHub account, I also expect them to have their own AWS account, which shouldn’t be an issue since there is a free tier available. It also means that I don’t have to manage other accounts for other environments. My SLS project will always include unit tests for the http endpoints, and a deployment script which executes these tests the moment sls deploy completes successfully. Your situation may be a little different … but hopefully this gives you some ideas.

Regarding the idea of what a micro-service is … yes, that can be tricky, but the first thing I consider is: is this a function I can share publicly, or is it proprietary? If it’s proprietary (e.g. it contains specific SQL which I don’t want to share with anyone), then most likely the service/function is not designed properly in the first place, and the proprietary part should be moved back to the calling application. When these services are stripped of proprietary logic, and even domain-specific functionality, and in turn become more generic … they naturally end up looking like proper, generic micro-services.


Sorry for summoning an old post from the grave, but it’s a great topic with what appears to be great information.
Our environment has finally grown enough that we keep hitting various AWS limits. The latest being the Max Deployed Code size for Lambda. So I’ve decided to re-structure by moving dev/test/sandbox environments into their own accounts as children of an Org.

I’ve created an Org in our original account so it’s the Master.
I’ve also created an account as my personal dev environment as a child account.
My core stack (S3 buckets, DynamoDB tables, Lambda functions for some core functionality that are exported for use by API stacks, etc.) deploys fine.

Following the advice in the main post here, I still have my Route53 DNS in the Org Master account.
Now as I go to deploy my first API stack, and create the Custom Domain, I wonder:

  • How can I use the same domain (and same ACM Certificate) for my dev stack in my child account?
    Or should I just create a new ACM Cert for my child account?

Creating a new ACM cert in each child account isn’t too annoying, but that still leaves me wondering how I use the Master account’s DNS/domain from the child accounts.

My long-term goal being to spin up a separate account for each developer, multiple accounts for QA, an Integration Test account, Sandbox account, and possibly others.


Honestly I should write a blog post about what we did to our AWS setup. But basically we have 2 accounts for every product: Production and Staging.

I tried keeping my Route53 config on the main account (org root). It works as long as you don’t have multiple subdomains; actually, it can work either way, but since we do, the “staging” DNS config must be configured through the root org account, and that gets complicated over time. It’s just easier if the Staging AWS account deals with its own DNS setup, to be honest. Fewer headaches and more agility (you avoid waiting on an operation that can only be done by a high-clearance operative).
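Letting a child account own its DNS usually means delegating a subdomain: create a hosted zone for it in the child account, then publish that zone’s NS records in the parent zone held by the Master account. A sketch of the Route53 change batch for the parent zone, with hypothetical domain and name server values:

```json
{
  "Comment": "Delegate staging.example.com to the staging account's hosted zone (all values hypothetical)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "staging.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-1.awsdns-01.org" },
          { "Value": "ns-2.awsdns-02.net" },
          { "Value": "ns-3.awsdns-03.com" },
          { "Value": "ns-4.awsdns-04.co.uk" }
        ]
      }
    }
  ]
}
```

This is a one-time operation in the Master account; after that, all records under the subdomain are managed entirely from the child account.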

@bfieber, about your question on your ACM Certificate, I have the following policy (because I ran into the same issue):
All my certificates cover at least 2 domain names: the domain concerned, and a wildcard on that domain. For instance, the certificate for “product-name” covers both the domain itself and its wildcard.

This way, I can create as many subdomains as I want without generating/configuring additional ACM certificates; it helped me recently with a couple of new subdomains, for instance. Nothing more to do on the ACM side with this policy.
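As a CloudFormation sketch of that wildcard policy (the domain name is a hypothetical placeholder, and this could live in the `resources` section of a serverless.yml):

```yaml
# CloudFormation resource — domain name is an assumption for illustration
Resources:
  ProductCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: product-name.example.com
      SubjectAlternativeNames:
        - "*.product-name.example.com"   # wildcard covers all future subdomains
      ValidationMethod: DNS
```

With DNS validation, the certificate renews automatically as long as the validation record stays in place.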

But if you split your domain names between accounts (for instance, one domain on the production AWS account and another on the staging AWS account), then you’ll need to generate two ACM certificates anyway, because they can’t be shared across AWS accounts.

Also, I made a rule to allow all my developers READ-ONLY access to Route53 settings, so that they can fetch the DNS configuration of production systems; that’s usually needed when you’re creating CNAME or NS records pointing from one domain to another.

One additional piece of advice before you decide to change your setup: lower your NS TTL value from 2 days to 5 minutes, and wait 2 days after changing it before restructuring, if that’s your plan. That way you’ll save yourself lots of headaches, because you won’t have to wait 2 days to see whether your DNS changes have been applied (it basically reduces the cache time, which is perfect during development).
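Concretely, that means upserting the NS record set with a short TTL (300 seconds = 5 minutes, versus the 172800-second default). A Route53 change-batch sketch, with hypothetical domain and name server values:

```json
{
  "Comment": "Temporarily lower the NS TTL before restructuring (all values hypothetical)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-1.awsdns-01.org" },
          { "Value": "ns-2.awsdns-02.net" },
          { "Value": "ns-3.awsdns-03.com" },
          { "Value": "ns-4.awsdns-04.co.uk" }
        ]
      }
    }
  ]
}
```

Once the migration is done and verified, the TTL can be raised back to its long-lived value.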

Also, with this kind of setup, you’ll hit another great AWS limit: an IAM user can’t belong to more than 10 groups at once. Yup, sucks. A trick is to use “policies”, which gives you 10 more, but that’s it. It doesn’t help when you create one group per product/stage. My IAM user (as CTO) has reached that limit already, and I think I’ll have to create a special group just for myself that allows access to multiple AWS accounts, and use that group instead of one group per product/stage. (But that’s another issue you’ll face later; an annoying one.)

Hope it helps, wanted to summarize it all but hell, that’s AWS we’re talking about…