Failure to create S3 Lambda trigger

Issue:

I want to create and deploy an S3 bucket with a trigger to a Lambda that will process each uploaded file. Try as I might, and having spent a long time searching for a definitive reference for current Serverless, I am failing.

Environment info

  • Operating System: linux
  • Node Version: 10.19.0
  • Framework Version: 1.79.0
  • Plugin Version: 3.7.1
  • SDK Version: 2.3.1
  • Components Version: 2.34.6

Plugins

  • serverless-tag-sqs
  • serverless-iam-roles-per-function
  • serverless-s3-remover

Current serverless.yml (parts)

custom:
  env: ${opt:stage, self:provider.stage}
  sandbox:
    s3:
      Ref: uploadbucket

functions:
  s3uploadhandler:
    handler: ./src/s3upload.php
    description: 'Incident Import Post CSV file upload worker'
    layers:
      - ${bref:layer.php-74}
      - ${bref:extra.ds-php-74}
    tags:
      Name: "Incident Import Post CSV file upload worker"
      App_Role: "Service Worker"
      OS_Version: "PHP 7.4"
    iamRoleStatements:
      - Effect: 'Allow'
        Action:
          - "s3:GetBucketNotification"
          - "s3:ListBucket"
          - "s3:GetObject"
        Resource:
          - Fn::Join:
              - ''
              - - 'arn:aws:s3:::'
                - Ref: uploadbucket
          - Fn::Join:
              - ''
              - - 'arn:aws:s3:::'
                - Ref: uploadbucket
                - '/*'
    events:
      - s3:
          bucket: ${self:custom.${self:custom.env}.s3}
          event: s3:ObjectCreated:*

Resources:
  uploadbucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          -
            AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000
  S3VPCEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PrivateDnsEnabled: False
      ServiceName: 'com.amazonaws.${self:provider.region}.s3'
      VpcEndpointType: Gateway
      VpcId: ${self:custom.${self:custom.env}.vpc.vpcId}
      RouteTableIds: ${self:custom.${self:custom.env}.vpc.routeTableIds}
  LambdaPermissionInvoke:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: incident-import-srvc-sandbox-s3uploadhandler
      Principal: 's3.amazonaws.com'
      Action: 'lambda:InvokeFunction'
      SourceAccount:
        Ref: 'AWS::AccountId'
      SourceArn:
        Fn::GetAtt:
          - uploadbucket
          - Arn

Outcome

Notice the indentation for

    events:
      - s3:
          bucket: ${self:custom.${self:custom.env}.s3}
          event: s3:ObjectCreated:*

This fails the deploy with a “Bucket name should not contain uppercase characters. Please check provider.s3.[object Object] and/or s3 events of function ‘s3uploadhandler’” error. But this is exactly the indentation that all the docs and comments I have read say you should use, and indeed on another function

    events:
      - sqs:
          arn: ${self:custom.${self:custom.env}.sqs.arn}

works fine.
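As an aside, the “[object Object]” in that error message is the usual sign that a variable resolved to an object rather than a string. The framework is written in JavaScript, and coercing a plain object (such as a `Ref:` map) to a string for the bucket-name check produces exactly that text:

```javascript
// The sandbox custom variable above resolves to the object
// { Ref: 'uploadbucket' }, not a string. When the s3 event validator
// coerces it to a string, JavaScript's default Object.prototype.toString
// produces the "[object Object]" seen in the error message.
const bucket = { Ref: 'uploadbucket' };
console.log(String(bucket)); // "[object Object]"
```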

Now changing the indentation to

    events:
      - s3:
        bucket: ${self:custom.${self:custom.env}.s3}
        event: s3:ObjectCreated:*

gives

    Serverless: Configuration warning:
    Serverless: at 'functions.s3uploadhandler.events[0]': unrecognized property 'bucket'
    Serverless: at 'functions.s3uploadhandler.events[0]': unrecognized property 'event'

The deployment goes ahead, but the trigger is not created. Is there an absolutely definitive guide on doing this that is up to date with current AWS S3/Lambda operation? Anyone else who can share the knowledge?

I added the ‘LambdaPermissionInvoke’ resource after reading another post. It could be a red herring, as no docs I have seen indicate that you need it. However, I do know that S3 changed recently, particularly in relation to public access policy, so this may now be required, or I may need some other secret incantation when creating the S3 bucket; I don’t know.
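For anyone debugging the same thing, one way to confirm whether the notification was actually attached is to query the bucket and the function policy directly with the AWS CLI (bucket and function names below are the placeholders from this thread; requires AWS credentials):

```shell
# List the notification configuration attached to the bucket.
# An empty JSON object means no trigger was created.
aws s3api get-bucket-notification-configuration \
  --bucket uploadbucket-xxxxxxxx

# Check that S3 is allowed to invoke the function: look for a statement
# with Principal s3.amazonaws.com in the returned policy document.
aws lambda get-policy \
  --function-name incident-import-srvc-sandbox-s3uploadhandler
```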


Sorry, I don’t have an answer, but I was about to post the exact same question with the exact same problem in my code. I get a CREATE FAILED when I deploy and it rolls back, or, like you, I get the capitalization error if I try to use a !Ref to my bucket variable. Hope we can get some answers!

Further investigation suggests that the problem is the Serverless parser for the event.

events:
  - s3:
      bucket: ${self:custom.${self:custom.env}.s3}
      event: s3:ObjectCreated:*

where the custom variable is hardcoded to the actual bucket name, e.g.

custom:
  staging:
    s3: uploadbucket-xxxxxxxx

throws the error, whereas setting the bucket name directly works, e.g.

events:
  - s3:
      bucket: uploadbucket-xxxxxxxx
      event: s3:ObjectCreated:*

Clearly, this is sub-optimal.

I have this same problem. The function wasn’t deploying with the normal indentation, and I don’t understand why this would be different, but that’s another issue I suppose. The function deploys now, but the trigger is not being created. I have a hard-coded bucket name as well.

functions:
  - process:
      handler: handler.process
      events:
        - s3:
            bucket: migration-uploads
            event: s3:ObjectCreated:*

resources:
  Resources:
    UploadsBucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: 'migration-uploads'

One way I fixed this was by doing something like @wkhatch and using the following indentation: I created the S3 bucket as a separate resource, referenced it in the event as below, and added the `existing` flag.

functions:
  - process:
      handler: handler.process
      events:
        - s3:
            bucket: !Ref UploadsBucket
            event: s3:ObjectCreated:*
            existing: true

resources:
  Resources:
    UploadsBucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: 'migration-uploads'
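For anyone landing here later, a minimal consolidated sketch of that pattern, using the placeholder names from this thread. Since the bucket name is fixed in the resource, it can also be repeated as a plain string in the event, which sidesteps the variable-resolution problem discussed above; `existing: true` tells the framework to attach the notification to the already-defined bucket instead of trying to create one for the event:

```yaml
functions:
  process:
    handler: handler.process
    events:
      - s3:
          bucket: migration-uploads   # plain string matching the resource below
          event: s3:ObjectCreated:*
          existing: true              # attach to the bucket defined under resources

resources:
  Resources:
    UploadsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: migration-uploads
```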