REST API with custom authorizer: How are you dealing with authorization and the policy cache?

Hi all,

I am wondering about the optimal way of dealing with authorization in a serverless REST API.

I have several endpoints (some of them pointing to the same function) deployed for a given service, and I configure a common custom authorizer using Auth0 for all of them. Strictly following the examples, the authorizer code creates a policy that is then cached (with a default TTL of 300s), and that policy refers to the exact ARN of the called API/stage/method/path.

This poses a problem: while that policy is cached, any other endpoint (even one backed by the same function with a different method or parameter) is rejected with a "User is not authorized to access this resource" error, because the ARN specified in the cached policy doesn't match.
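For reference, the authorizer code that follows the examples looks roughly like this (a simplified TypeScript sketch; the verifyAuth0Token helper is a stand-in for whatever token verification you actually use):

    import { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from 'aws-lambda';

    // Placeholder for the Auth0 JWT verification step (jsonwebtoken + jwks-rsa or similar).
    async function verifyAuth0Token(token: string): Promise<{ sub: string }> {
      // ...verification elided for brevity...
      return { sub: 'auth0|some-user' };
    }

    export const handler = async (
      event: APIGatewayTokenAuthorizerEvent
    ): Promise<APIGatewayAuthorizerResult> => {
      const user = await verifyAuth0Token(event.authorizationToken);

      return {
        principalId: user.sub,
        policyDocument: {
          Version: '2012-10-17',
          Statement: [
            {
              Action: 'execute-api:Invoke',
              Effect: 'Allow',
              // The exact ARN that triggered the authorizer, e.g.
              // arn:aws:execute-api:region:account:apiId/stage/GET/some/path.
              // This result is cached, so a call to any other method/path is denied.
              Resource: event.methodArn,
            },
          ],
        },
      };
    };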

This topic on the AWS dev forums discusses the matter in depth, and the suggestions are either to specify wildcard access for the full API in the policy or to disable the cache (expensive!).

Wildcard access seems like not doing authorization at all, just verifying the token, as it would ignore any scope of the API. You can delegate scopes and authorization to the service's functions, since Lambda exposes the distilled token in the event (user and claims)… I feel like I am missing something…

I really like the concept of a custom authorizer: by hiding my functions behind an independent token verification, the attack surface is significantly reduced…

How is everyone dealing with the authorization flow?

With custom authorizers you have two options:

  1. Most of the time you will want to return a list of policy statements covering every resource the user needs to access. This allows you to cache the result. While you can use a wildcard, you can also list each resource as its own policy statement (see the sketch after this list).

  2. Change the TTL to 0 so the policy is never cached. This will cause the custom authorizer to be executed for each request. Depending on your authentication mechanism this may allow you to cut someone off immediately.

Where possible I would go with option 1 over option 2.
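To make option 1 concrete, here is a rough sketch of building such a policy from the method ARN (TypeScript; the endpoints and the isAdmin flag are illustrative, derive them from whatever your verified token contains):

    import { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from 'aws-lambda';

    // Return a policy covering every resource this caller may need, so the
    // cached result stays valid for the other endpoints of the API as well.
    export function buildPolicy(
      event: APIGatewayTokenAuthorizerEvent,
      principalId: string,
      isAdmin: boolean
    ): APIGatewayAuthorizerResult {
      // methodArn looks like arn:aws:execute-api:region:account:apiId/stage/METHOD/path
      const [apiArn, stage] = event.methodArn.split('/');

      const statements = [
        // One statement per resource an ordinary user is allowed to call.
        { Action: 'execute-api:Invoke', Effect: 'Allow', Resource: `${apiArn}/${stage}/GET/items` },
        { Action: 'execute-api:Invoke', Effect: 'Allow', Resource: `${apiArn}/${stage}/POST/items` },
      ];

      if (isAdmin) {
        // Or a wildcard for a whole sub-tree where that is acceptable.
        statements.push({
          Action: 'execute-api:Invoke',
          Effect: 'Allow',
          Resource: `${apiArn}/${stage}/*/admin/*`,
        });
      }

      return {
        principalId,
        policyDocument: { Version: '2012-10-17', Statement: statements },
      };
    }

For option 2, if you are on the Serverless Framework you can set the authorizer's resultTtlInSeconds to 0 (or set the TTL to 0 on the authorizer in the API Gateway console) so nothing is cached at all.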


hi buggy!
could you please give a bit more detail on why option 1 is preferred?

a bit of context:
the application i am working on MAY contain {entityId} in the request path.
the custom authorizer puts that entityId into the context so that the backend function knows which entity was authorized to be accessed/modified.
if no entityId is present in the context (because no {entityId} was provided in the request path), the authenticated userId is used as the entityId (the user-to-entity relation is one-to-one).
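roughly, the result my authorizer returns looks like this (simplified TypeScript sketch, names are illustrative):

    import { APIGatewayAuthorizerResult } from 'aws-lambda';

    // after token verification: pass the resolved entityId to the backend via the
    // authorizer context (context values may only be strings/numbers/booleans)
    function buildResult(
      userId: string,
      entityIdFromPath: string | undefined, // parsed from the request path / method ARN
      policyDocument: APIGatewayAuthorizerResult['policyDocument']
    ): APIGatewayAuthorizerResult {
      return {
        principalId: userId,
        policyDocument,
        context: {
          // fall back to the user's own id when no {entityId} is in the path
          entityId: entityIdFromPath ?? userId,
        },
      };
    }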

i found that, along with the policy statements, AWS also caches the authorizer context, so all subsequent calls to my service return the same result.

are there any guidelines/best practices for doing such dynamic authorization? should it live in the backend function, or is the custom authorizer the right place?

thanks,
alex

I wrote a proper response on my blog: API Gateway authorization and policy caching

The short version is: use the custom authorizer to implement broad-level access controls and the Lambda to implement fine-grained access controls.
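As a rough sketch of that split (the claim names and the ownership rule are placeholders; note that anything the authorizer forwards via its context arrives in the Lambda as strings):

    import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

    // The custom authorizer already did the broad check (valid token, allowed to
    // call these endpoints at all) and forwarded the claims it verified.
    export const handler = async (
      event: APIGatewayProxyEvent
    ): Promise<APIGatewayProxyResult> => {
      const auth: { [name: string]: any } = event.requestContext.authorizer ?? {};
      const callerId: string = auth.principalId;

      // Fine-grained check against the actual request, not the cached policy:
      // which entity is this particular call trying to touch?
      const entityId = event.pathParameters?.entityId ?? callerId;

      if (entityId !== callerId /* && caller has no broader rights */) {
        return { statusCode: 403, body: JSON.stringify({ message: 'Forbidden' }) };
      }

      // ...load or modify the entity and respond...
      return { statusCode: 200, body: JSON.stringify({ entityId }) };
    };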

thanks a lot for taking the time to write such a detailed explanation! i now feel my initial question should've been phrased a bit differently. here is the answer to that question, compiled from your response: "put the non-functional/non-business security logic into the authorizer; the authorizer by itself is not a way to separate the security aspect, you have to invent or find one yourself; and AWS does its best to make that as hard as possible, since you'll have to spread security-related code across both business and non-business parts"

info below is jfyi

my use case

problem to solve
entities:

  • user
  • superuser
  • profile

relations:

  • user owns profile
  • superuser owns all profiles

activities:

  • view owned profile
  • change owned profile

the solution i came up with
all users access their own profile at the same location (/profile); a superuser accesses a profile by user id (/profile/{userId}).
the custom authorizer identifies whether the provided token grants access to the owned resource only or to all resources, and generates the AWS policy accordingly (rough sketch below).
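the policy part looked something like this (sketch; how the superuser flag is derived depends on the token, and the methods are illustrative):

    // pick the resources the generated policy should allow, based on the verified
    // token; apiArn and stage come from splitting event.methodArn on '/'
    function allowedResources(apiArn: string, stage: string, isSuperuser: boolean): string[] {
      return isSuperuser
        ? [`${apiArn}/${stage}/*/profile`, `${apiArn}/${stage}/*/profile/*`]     // any user's profile
        : [`${apiArn}/${stage}/GET/profile`, `${apiArn}/${stage}/PUT/profile`];  // own profile only
    }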

the problem i met
unfortunately it was not possible to authorize superusers without turning off caching for the custom authorizer. even though the policy i generated allowed access to all possible resources (like execute-api:...:*/profile/*), AWS caches the policy together with the authorizer context from the first request. because of that caching, each subsequent superuser request for a different profile returned the same (initial) profile.

imho

i never stop wondering about the weirdness the AWS guys put into their designs :slight_smile:

unfortunately, as an engineer, i see more issues in the AWS interfaces than benefits. serverless development undeniably helps reduce expenses for the business, and that's what they [cloud providers] are selling. but for an engineer it is, in its current state, a huge ugly mess of tech :frowning:

I recently encountered this issue and was able to solve it by allowing the authorized client to access all functions.
So once a user is authenticated, an allow-all policy is cached and I can still use the cache.
Unauthenticated users are still blocked, obviously.

Your custom authorizer should return a resource ending in /*/* instead of returning event.methodArn:

"Resource": "arn:aws:execute-api:eu-west-1:blah:yoink/*/*"