Trying to use AWS SDK S3.getObject within lambda, getting Access Denied

I know there are a lot of topics on this forum already with loosely the same issue but I’m not seeing one that actually solves what I am doing.

Very simple: Upload something to S3, Lambda triggers, reads content of that CSV file and puts it in DynamoDB.

However, every time I try to use the Node.js AWS SDK I am getting access denied when trying to get the object:

import { S3 } from 'aws-sdk';

const storage = new S3();

export async function handler(evt) {
  if (evt.Records.length === 1) {
    const [record] = evt.Records;

    try {
      const object = await storage.getObject({
        Bucket: record.s3.bucket.name,
        Key: record.s3.object.key,
      }).promise();
      return true;
    } catch (err) {
      console.error('Failed getting object from S3:', err);
      throw err;
    }
  }

  return true;
}

The bucket and object key are correct; I verified that in the Serverless Dashboard / CloudWatch. Every time getObject is triggered it results in: Access Denied

I think my serverless.yml file is as correct as it gets. Before using Resources, I also allowed the s3:GetObject action on arn:aws:s3:::${self:custom.bucketName}/* in the iamRoleStatements, but that yields the same result…

Using Resources:
service:
  name: csv-database

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  stage: ${opt:stage, 'staging'}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - "s3:*"
      Resource:
        Fn::Join:
          - ''
          - - Fn::GetAtt: [CSVBucket, Arn]
            - '/*'

custom:
  bucketName: ${self:service.name}-${self:provider.stage}

functions:
  csvToDynamoDB:
    handler: index.handler
    events:
      - s3:
          bucket: ${self:custom.bucketName}
          event: s3:ObjectCreated:*
          rules:
            - suffix: .csv

resources:
  Resources:
    CSVBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucketName}
Using a simple ARN:
service:
  name: csv-database

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  stage: ${opt:stage, 'staging'}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource:
        - 'arn:aws:s3:::${self:custom.bucketName}/*'

custom:
  bucketName: ${self:service.name}-${self:provider.stage}

functions:
  csvToDynamoDB:
    handler: index.handler
    events:
      - s3:
          bucket: ${self:custom.bucketName}
          event: s3:ObjectCreated:*
          rules:
            - suffix: .csv

Any help would be greatly appreciated.

Checking the policy for this IAM role, I even tried granting it full access to every S3 bucket, but this yields the same error.

Hi there. I would recommend adding two resources, one for the contents of the bucket which you already have

- 'arn:aws:s3:::${self:custom.bucketName}/*'

and another for the bucket itself. Without both, you are not granting permission to perform operations on the bucket itself, such as listing the bucket (not its contents, the bucket).

- 'arn:aws:s3:::${self:custom.bucketName}'

I would also suggest posting the full error message you get, as that usually helps indicate the exact missing resource and method.
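
In serverless.yml terms, the combined statement would look something like the sketch below (it reuses the bucketName variable from the config above; s3:ListBucket is included only as an example of an action that needs the bucket-level ARN):

```yaml
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:GetObject   # object-level, matched by the /* resource
      - s3:ListBucket  # bucket-level, matched by the bare bucket ARN
    Resource:
      - 'arn:aws:s3:::${self:custom.bucketName}'    # the bucket itself
      - 'arn:aws:s3:::${self:custom.bucketName}/*'  # the objects inside it
```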

I have added that ARN too, but it does not make a difference. The full error is:

ERROR	Failed getting object from S3: AccessDenied: Access Denied
    at constructor.apply (/var/task/webpack:/node_modules/aws-sdk/lib/services/s3.js:816:35)
    at constructor.callListeners (/var/task/webpack:/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at constructor.call (/var/task/webpack:/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at constructor.emit [as emitEvent] (/var/task/webpack:/node_modules/aws-sdk/lib/request.js:683:14)
    at constructor.call (/var/task/webpack:/node_modules/aws-sdk/lib/request.js:22:10)
    at runTo (/var/task/webpack:/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at done (/var/task/webpack:/node_modules/aws-sdk/lib/state_machine.js:26:10)
    at constructor.call (/var/task/webpack:/node_modules/aws-sdk/lib/request.js:38:9)
    at constructor.call (/var/task/webpack:/node_modules/aws-sdk/lib/request.js:685:12)
    at constructor.callListeners (/var/task/webpack:/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
  message: 'Access Denied',
  code: 'AccessDenied',
  region: null,
  time: 2020-01-15T18:42:04.690Z,
  requestId: '3C12F9A32F257735',
  extendedRequestId: '9SylZqWDz0CIVrEcguuRMfM6wJeYNqNmS/YVJ/L6Y78F5yeBrJuEYfeNLpI4RFKtHtRDemEXW2s=',
  cfId: undefined,
  statusCode: 403,
  retryable: false,
  retryDelay: 85.35308162849127
}

What I noticed is that the region property in the error object is null. I have set a region on the S3 client, but it makes no difference.

If I log the event I can see that the bucket and the key are correct. I have a bucket sls-s3-example with a file database.csv, and logging what the Lambda receives in the event I get:

record.s3.bucket.name = sls-s3-example
record.s3.object.key = database.csv
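
One thing that can masquerade as Access Denied, although it would not affect a simple key like database.csv: object keys in S3 event notifications arrive URL-encoded (spaces become '+', other characters become %XX escapes), and without the s3:ListBucket permission S3 reports a getObject call on a non-existent key as 403 Access Denied rather than 404 Not Found. A minimal decoding sketch (the helper name and sample keys are illustrative):

```javascript
// Keys in S3 event notifications are URL-encoded. Passing the raw key
// to getObject then targets an object that does not exist, and without
// s3:ListBucket permission S3 masks "no such key" as 403 Access Denied.
function decodeS3Key(rawKey) {
  // '+' encodes a space in event keys; the rest are %XX escapes.
  return decodeURIComponent(rawKey.replace(/\+/g, ' '));
}

// A plain key is returned unchanged:
console.log(decodeS3Key('database.csv')); // → 'database.csv'
// A key with a space and an encoded '+' is restored:
console.log(decodeS3Key('my+folder/report%2B2020.csv')); // → 'my folder/report+2020.csv'
```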

Have you confirmed you can access the bucket using the same credentials using an alternative method, for example, using AWS CLI?

aws s3api get-object --bucket=BUCKETNAME --key=OBJECTKEY /tmp/foo

Hello @thibmaek,

I was having the same exact issue trying to upload a file to S3 using the AWS Node.js SDK, and the privilege I was missing turned out to be s3:PutObjectTagging. How did I find this out? I manually modified the Lambda’s role in IAM to grant full access to S3, like so:

{
  "Action": [
    "s3:*"
  ],
  "Resource": "MY-BUCKET/*",
  "Effect": "Allow"
}

Then the upload started working, which narrowed the issue down to a missing permission. I then read the AWS documentation at https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html and noticed that my Node.js app’s request adds tagging during upload. Below are the parameters I pass to the aws-sdk S3 client:

const uploadParams = {
  Bucket: chunk.bucketName,
  Key: chunk.filePath,
  Tagging: 'created_by=Missing-Image-Delivery-Pipeline',
  Body: passThrough,
};

After I found the root cause, and in order to adhere to the principle of least privilege, I changed the Lambda’s role policy to:

{
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:PutObjectTagging"
  ],
  "Resource": "arn:aws:s3:::MY-BUCKET/*",
  "Effect": "Allow"
}

So now I’m a happier camper. Hope this helps you narrow down your problem if you haven’t already, good luck.

In my case, I solved it by adding both arn:aws:s3:::bucket and arn:aws:s3:::bucket/* as Resources.
