Default deployment bucket SSE

Hi, I am trying to enable SSE on the default deployment bucket on AWS. I read over this:

I then tried to set it up in my yml file:

provider:
  name: aws
  runtime: python3.6
  deploymentBucket:
    serverSideEncryption: AES256

However, once deployed, it's not encrypted. I also read over this: Serverless v1.16 - S3 server-side encryption and default exclusion of Node.js dev dependencies added
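
From what I can tell, that setting makes the framework add an SSE header when it uploads the deployment artifacts, i.e. it encrypts the objects it puts into the bucket; I don't think it turns on default encryption on the bucket itself, which would explain why the bucket still shows as unencrypted in the console. For reference, this is the shape I believe the docs describe (the commented name line is optional and only applies to a pre-existing bucket):

provider:
  name: aws
  runtime: python3.6
  deploymentBucket:
    # name: my-preexisting-deploy-bucket  # optional; the bucket must already exist
    serverSideEncryption: AES256 # applied to the uploaded artifacts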

I would prefer not to have to create/manage the S3 deployment bucket outside of my yml file, though.

Advice?

Thanks.


Did you ever figure out how to set serverSideEncryption: AES256 on the deployment bucket?
I’m in the same boat as you.
So in my setup, I'm using Serverless 1.34.1. I originally did not have a deploymentBucket section at all.
The deployment bucket was created without server-side encryption enabled, so I then added serverSideEncryption to my yml file:

deploymentBucket:
    serverSideEncryption: AES256

But the deploymentBucket STILL does not have server-side encryption enabled.

I did. It ended up being way more complicated than I had hoped. It had to do with how Serverless wants to create the bucket for S3 events. I was able to piece the attached yml file together from stuff in the forum and other sources. Seems to work for me. Take a look at the resources section of the yml.

service: foobar

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

# Things to exclude from the deployment
package:
  exclude:
    - .requirements/**
    - requirements.txt
    - venv/**
    - node_modules/**
    - .git/**
    - .gitignore
    - .pylintrc
    - .DS_Store
    - package-lock.json
    - package.json
    - docs/**
    - utils/**
    - data/**
    - Jenkinsfile
    - run_in_container.sh
    - Makefile
    - README.md
  include:
    - data/trade-schema.json

# Plugins needed to build a Python deployment package from pip
plugins:
  - serverless-python-requirements
  - serverless-domain-manager

# Custom variables used to define the infrastructure
custom:
  stage: ${opt:stage, self:provider.stage}
  app_acronym: foobar-${self:custom.stage}
  # Generate a random S3 bucket name to make testing deployments non-colliding
  s3_name: ${self:custom.app_acronym}-${file(utils/random.js):s3Random}
  stack_name: ${self:custom.app_acronym}
  s3_bucket: ${self:custom.s3_name}
  s3_bucket_arn: arn:aws:s3:::${self:custom.s3_bucket}
  dynamo_name_static_data: ${self:custom.app_acronym}-static-data
  dynamo_name_trade: ${self:custom.app_acronym}-trade
  pythonRequirements:
    dockerizePip: non-linux
    slim: true # remove things like __pycache__ and dist-info directories
    noDeploy: # Omit dev tooling and runtime-provided packages from the deployment
      - appdirs
      - astroid
      - attrs
      - black
      - boto3
      - botocore
      - click
      - docutils
      - flake8
      - isort
      - jmespath
      - lazy-object-proxy
      - mccabe
      - numpy
      - pandas
      - pycodestyle
      - pyflakes
      - pylint
      - python-dateutil
      - pytz
      - s3transfer
      - six
      - toml
      - typed-ast
      - wrapt
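      # (boto3/botocore are already included in the Lambda Python runtime, and
      # the linters/formatters above are only needed at development time)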
  domains:
    default: ""
  domainEnabled:
    dev: true
    staging: true
    prod: true
    default: false
  customDomain:
    domainName: ${self:custom.domains.${self:custom.stage}, self:custom.domains.default}
    basePath: ""
    stage: ${self:custom.stage}
    createRoute53Record: true
    enabled: ${self:custom.domainEnabled.${self:custom.stage}, self:custom.domainEnabled.default}
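  # The two ${...} lookups above use the framework's variable fallback syntax:
  # ${self:custom.domains.<stage>, <default>} resolves the per-stage entry and
  # falls back to the value after the comma when the stage isn't listed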

# Allocate resources and set IAM role on AWS
provider:
  name: aws
  runtime: python3.6
  environment:
    S3_BUCKET: ${self:custom.s3_bucket}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:*
      # Chris: would prefer the scoped-down ARN below for limited access, but can't get it to work
      # Resource: "arn:aws:s3:::${self:custom.s3_bucket}/*"
      Resource: "arn:aws:s3:::*"

# Our lambda functions
functions:
  trades_s3_event:
    handler: backend/trades_s3_event.trades_s3_event
    name: ${self:custom.stack_name}-trades_s3_event
    description: Called by s3 create/remove event to manage assets in dynamodb
    timeout: 300 # larger timeout needed due to JSON validation
    memorySize: 3008 # Increase memory due to processing needs of ingest
    events:
      - s3:
          bucket: uploadBucket # Must be nested under s3. The bucket itself is configured in the resources section, which is necessary for the CORS configuration

# AWS service resource allocations
resources:
  Resources:
    S3BucketUploadBucket:
      DependsOn: TradesUnderscores3UnderscoreeventLambdaPermissionUploadBucketS3 # Can't create bucket until permissions set
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.s3_name}
        CorsConfiguration:
          CorsRules:
            -
              AllowedOrigins:
                - '*'
              AllowedHeaders:
                - '*'
              AllowedMethods:
                - PUT
              MaxAge: 3000
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: AES256
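            # To use a customer-managed KMS key instead, I believe the
            # CloudFormation equivalent is (untested sketch; key ARN is a placeholder):
            # - ServerSideEncryptionByDefault:
            #     SSEAlgorithm: aws:kms
            #     KMSMasterKeyID: arn:aws:kms:us-east-1:111111111111:key/your-key-id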
        NotificationConfiguration:
          LambdaConfigurations:
            - Event: "s3:ObjectCreated:*"
              Filter:
                S3Key:
                  Rules:
                    -
                      Name: suffix
                      Value: xlsx
              Function:
                "Fn::GetAtt":
                  - TradesUnderscores3UnderscoreeventLambdaFunction
                  - Arn
    TradesUnderscores3UnderscoreeventLambdaPermissionUploadBucketS3:
      Type: "AWS::Lambda::Permission"
      Properties:
        FunctionName:
          "Fn::GetAtt":
            - TradesUnderscores3UnderscoreeventLambdaFunction
            - Arn
        Principal: "s3.amazonaws.com"
        Action: "lambda:InvokeFunction"
        SourceAccount:
          Ref: AWS::AccountId
        SourceArn:
          "Fn::Join":
            - ''
            - - 'arn:aws:s3:::'
              - '${self:custom.s3_name}'
          # Original approach for this was to use a simple SourceArn
          #
          # >>> SourceArn: "arn:aws:s3:::${self:custom.s3_name}"
          #
          # This would lead to a random error during deployment:
          #
          # >>> Serverless: Operation failed!
          # >>> Serverless Error ---------------------------------------
          # >>> An error occurred: S3BucketUploadBucket - Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: 125CA1E72438F92A; S3 Extended Request ID: 0/3ZI2K+2RKPrKpM/bBCOn9kR0y9qydPNUx9rl0XCD3WOLaCc2iJn2ZfUlFR7GbrCUictG3ThF8=).
          #
          # Some Google-ing led to this:
          # >>> https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-circular-dependency-cloudformation/
          #
          # This states that, to avoid the issue, you should use "Fn::Join" when setting SourceArn

Curious if there has been any work on making this easier in Serverless - it seems like a very common use case to be able to generate an S3 bucket for events that has SSE enabled.
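
For anyone who just needs the gist: as I read the config above, the essential pattern is to let the s3 event generate the bucket and then override the generated resource to add BucketEncryption. The resource key appears to have to match the logical ID Serverless generates (S3Bucket plus the capitalized bucket id) so the two definitions merge. A trimmed sketch, names are placeholders:

functions:
  my_handler:
    handler: handler.my_handler
    events:
      - s3:
          bucket: uploadBucket # generates the CloudFormation resource S3BucketUploadBucket

resources:
  Resources:
    S3BucketUploadBucket: # must match the logical ID generated by the event above
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-upload-bucket
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: AES256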