EFS Creation Fails

Hi all -

I’m having an issue where I define a new EFS instance (file system and access point) in the resources section of my serverless.yml file and then reference the file system in one of my functions. The problem is that when the function attempts to deploy, the access point is not yet ready, so the function fails, which then rolls back the stack.

I can sort of work around this by commenting out all of the functions and just deploying the resources, but that’s not a viable solution. Is there any way to force serverless to deploy resources only? Or is there some other solution to this issue? I’ve searched for a while and haven’t been able to find one, but I’m guessing I can’t be the only one who has faced this.

Thanks,

Chris


Hi Chris,

Running into exactly this. Did you already find a solution?

Cheers, Erik

A solution? Yes. A good one? Eh… :slight_smile: I never could figure this out directly in serverless, so I ended up just using Terraform (which is standard in my env) for the EFS deployment, then dumping the TF outputs to a JSON file which can be read in by serverless.

It works perfectly, but does introduce a step outside of serverless.

I’m sure this could be accomplished natively by creating a module similar to the domain creation module, but for me, since we use TF anyway, TF was the way to go.
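Roughly, the idea is something like this (the output name and file path below are just placeholders, not my actual config):

# Assumes the Terraform that creates the EFS file system and access point has been
# applied and its outputs dumped with `terraform output -json > tf-outputs.json`,
# exposing an output named efs_access_point_arn (placeholder name).
functions:
  myFunction:
    handler: handler.main
    fileSystemConfig:
      localMountPath: /mnt/efs
      # Serverless resolves the nested "value" key from the JSON file
      arn: ${file(./tf-outputs.json):efs_access_point_arn.value}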

Chris

Haha yeah, that’s a solution… I was thinking of splitting my yaml in two and applying the second part after some delay :smirk:

Yeah, the nice thing with Terraform is that it doesn’t complete until the resource is actually created and available, so it will sit there running for half an hour if needed. You can just run TF and then serverless right afterward, no need for a delay.

Yeah and I was under the impression serverless does that as well. It seems this case is more of an exception. I like how serverless seems more declarative compared to tf and also saves me from managing state myself.

Found it! So before, I got this error:

 Serverless Error ----------------------------------------
 
  An error occurred: MyFunctionLambdaFunction - EFS file system arn:aws:elasticfilesystem:eu-west-1:<account-id>:file-system/<fs-code> referenced by access point arn:aws:elasticfilesystem:eu-west-1:<account-id>:access-point/<fsap-code> has mount targets created in all availability zones the function will execute in, but not all are in the available life cycle state yet. Please wait for them to become available and try the request again. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 5dd189ac-b6a9-42ef-a8e6-7dba88019d12; Proxy: null).

I got it working by adding an additional DependsOn attribute on the AccessPoint, as suggested here. The serverless.yml then looks something like this:

frameworkVersion: '2'

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  region: eu-west-1
  lambdaHashingVersion: 20201221

functions:
  myFunction:
    handler: some_handler
    fileSystemConfig:
      localMountPath: /mnt/efs
      arn: !GetAtt AccessPoint.Arn
    vpc:
      securityGroupIds:
        - !GetAtt Vpc.DefaultSecurityGroup
      subnetIds:
        - !Ref SubnetA
        - !Ref SubnetB
        - !Ref SubnetC

resources:
  Resources:
    Vpc:
      Type: AWS::EC2::VPC
      Properties:
        CidrBlock: 172.31.0.0/16
        EnableDnsHostnames: True
        EnableDnsSupport: True
    SubnetA:
      Type: AWS::EC2::Subnet
      Properties:
        CidrBlock: 172.31.1.0/24
        VpcId: !Ref Vpc
        AvailabilityZone: "${self:provider.region}a"
    SubnetB:
      Type: AWS::EC2::Subnet
      Properties:
        CidrBlock: 172.31.2.0/24
        VpcId: !Ref Vpc
        AvailabilityZone: "${self:provider.region}b"
    SubnetC:
      Type: AWS::EC2::Subnet
      Properties:
        CidrBlock: 172.31.3.0/24
        VpcId: !Ref Vpc
        AvailabilityZone: "${self:provider.region}c"
    ElasticFileSystem:
      Type: AWS::EFS::FileSystem
      Properties:
        Encrypted: true
        PerformanceMode: generalPurpose
        FileSystemPolicy:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action:
                - "elasticfilesystem:ClientMount"
                - "elasticfilesystem:ClientWrite"
              Principal:
                AWS: "*"
    AccessPoint:
      Type: AWS::EFS::AccessPoint
      Properties:
        FileSystemId: !Ref ElasticFileSystem
        PosixUser:
          Uid: "1000"
          Gid: "1000"
        RootDirectory:
          CreationInfo:
            OwnerGid: "1000"
            OwnerUid: "1000"
            Permissions: "0777"
          Path: "/my-data"
      DependsOn:
        - MountTargetA
        - MountTargetB
        - MountTargetC
    MountTargetA:
      Type: AWS::EFS::MountTarget
      Properties:
        FileSystemId: !Ref ElasticFileSystem
        SecurityGroups:
          - !GetAtt Vpc.DefaultSecurityGroup
        SubnetId: !Ref SubnetA
    MountTargetB:
      Type: AWS::EFS::MountTarget
      Properties:
        FileSystemId: !Ref ElasticFileSystem
        SecurityGroups:
          - !GetAtt Vpc.DefaultSecurityGroup
        SubnetId: !Ref SubnetB
    MountTargetC:
      Type: AWS::EFS::MountTarget
      Properties:
        FileSystemId: !Ref ElasticFileSystem
        SecurityGroups:
          - !GetAtt Vpc.DefaultSecurityGroup
        SubnetId: !Ref SubnetC

Hey @eriktim

Thanks for sharing your solution. It works for me!

The only issue I found is when I use EFS combined with a container image. When I try to mount EFS in a function deployed from an image, the deploy completes, but when I invoke it, the function always times out.

Does anyone have a clue?

You’re welcome @pradella. Did you try increasing the timeout or do you use the default (6 seconds)?

Yes, I did! I’ve created a function that gets an object from S3, and I put a console.log before, inside, and after the call; only the one before is displayed.

All the console.log statements are displayed when EFS is not attached to my function (still using a container image).

UPDATE: just found out that the current serverless.yml creates the VPC and subnets without an internet gateway (so that’s why I cannot perform s3.getObject).

Now I’m trying to figure out how to allow my lambda inside this new VPC to access the internet.

UPDATE 2: just found this article to setup VPC lambda with internet access: AWS Lambda: Enable Outgoing Internet Access within VPC | by Philipp Holly | Medium

But… NAT gateway is quite expensive.

In my case, since I just need to access S3, the solution was to create a VPC endpoint for S3:

When creating this endpoint, choose your VPC and subnet, select the service name “com.amazonaws.us-east-1.s3”, and set the endpoint type to “Gateway”.

Maybe there’s a way to set up this VPC endpoint in serverless.yml too; I’ll try to do that later.
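My guess is it would look something like this in the resources section above (untested sketch; PrivateRouteTable is just a placeholder for a route table associated with the function’s subnets):

    # Gateway VPC endpoint so the lambda can reach S3 without a NAT gateway
    S3Endpoint:
      Type: AWS::EC2::VPCEndpoint
      Properties:
        VpcId: !Ref Vpc
        ServiceName: com.amazonaws.${self:provider.region}.s3
        VpcEndpointType: Gateway
        RouteTableIds:
          # Placeholder: a route table associated with SubnetA/B/C
          - !Ref PrivateRouteTable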

Thanks everyone!