Serverless AWS Custom Image ECR

Hi y'all,

Referring to this article, where we can now use custom container images for running Lambdas.

Serverless does a lot of the heavy lifting if our Dockerfile is simple to define. But what if I need to pass a build-arg to my docker build, for example a credential for a private Artifactory repo?

docker build . -t image-name --build-arg API_KEY=&&$&%&%&##

The configuration below does not allow me to pass in any build-args:

provider:
  name: aws
  ecr:
    # In this section you can define images that will be built locally and uploaded to ECR
    images:
      appimage:
        path: ./

How would I go about this?

I would suggest accessing values like that by other means, such as passing environment variables to the Lambda functions. I have not done this with Docker containers yet, but I assume you can just use the environment variable support to have things like API_KEY passed into the Lambda environment instead: https://www.serverless.com/framework/docs/providers/aws/guide/functions#environment-variables

Hmm, I guess I could have been more explicit about the problem.

The API key is used to access a package repository from which I'm doing a pip install at build time for my Docker image. It's not a runtime environment variable for my Lambda.

The problem here is that Serverless needs the SHA digest of the image deployed to ECR, which implies that the image is deployed first and then I run sls deploy, after copying the digest into serverless.yml.

But I cannot have this two-step process in an automated CI/CD pipeline; it needs manual intervention.
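For reference, a minimal sketch of that manual two-step setup, where the image is built and pushed outside of Serverless and the function references it by URI. The account ID, region, repo name, and digest below are all placeholders, and the `image`-by-URI syntax is my understanding of the framework docs, not something confirmed in this thread:

```yaml
# serverless.yml — the function points at an image already pushed to ECR.
# Every value in the URI below is a placeholder that would have to be
# updated by hand (or by a CI script) after each docker push.
service: my-service

provider:
  name: aws

functions:
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo@sha256:<digest>
```

This is exactly the two-step dance being complained about: the digest only exists after the push, so the config cannot be written ahead of time.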


Hey @alexanderluiscampino. While not optimal, this is what I'm doing:

  • Have a main image (1) that I build directly, so I can pass build-args
  • Have another Dockerfile (2), which I call Dockerfile.sls, that only does: FROM [the previous image]
  • Reference 2 in my serverless file

With this I still leverage Serverless for uploading to ECR and creating the Lambda with the right version/digest.
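A minimal sketch of this setup: first build the main image locally (e.g. `docker build -t my-base-image --build-arg API_KEY=... .`, where `my-base-image` is a placeholder tag), make `Dockerfile.sls` contain the single line `FROM my-base-image`, and point the Serverless config at it. The `file` property name is taken from the framework's ECR image options; treat it as an assumption if your framework version differs:

```yaml
provider:
  name: aws
  ecr:
    images:
      appimage:
        path: ./
        # Build from the one-line Dockerfile.sls (FROM my-base-image)
        # instead of the default Dockerfile.
        file: Dockerfile.sls
```

Since Dockerfile.sls only references a tag already in the local Docker cache, this should work in CI as long as both builds run on the same runner.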

It would definitely be useful to have this supported directly in the framework; have you created a feature request?
Hope this helps.
Best.


Hi @macebalp ,

Thanks for the suggestion. Do you have this example on GitHub, so I can see it? I am assuming that if you run this in CI/CD, the runner that builds the 1st image keeps that layer around, so when the sls deploy happens and the 2nd Dockerfile is used, it can pull from the image just created, since it is still around. Is that it?

Also, how do you make feature requests?

From the documentation

You can define arguments that will be passed to the docker build command via the following properties:

  • buildArgs: With the buildArgs property, you can define arguments that will be passed to the docker build command with the --build-arg flag. They might later be referenced via ARG within your Dockerfile. (See Documentation)
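Based on that documented buildArgs property, the original question could presumably be handled like this, reading the key from an environment variable so it stays out of the config file (`API_KEY` is the name from the earlier posts):

```yaml
provider:
  name: aws
  ecr:
    images:
      appimage:
        path: ./
        # Passed as --build-arg API_KEY=... to docker build;
        # ${env:API_KEY} is resolved from the shell environment at deploy time.
        buildArgs:
          API_KEY: ${env:API_KEY}
```

One caveat worth noting: values passed with --build-arg end up in the image metadata (visible via docker history), which is why the secret-mount approach in the next post is preferable for genuine credentials.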

I’m a little late to this discussion, but have the same underlying issue, I think. I can build my image locally by doing something like this in the Dockerfile:

RUN --mount=type=secret,id=pip.conf,dst=/root/.config/pip/pip.conf \
    pip install -r requirements.txt -t ${FUNCTION_DIR}

with my separate repository packages in the requirements.txt file included like this:

my_package --index-url=${ARTIFACTORY_URL}

And then building using the following on the command line:

docker build … --secret id=pip.conf,src=${HOME}/.config/pip/pip.conf …

This seems much better than using environment variables, since the access to the local pip.conf lasts only for the single RUN line in the Dockerfile and so does not persist in the image.

From the answers here, it seems that I will need to manage the image in ECR separately from
my serverless.yml configuration :frowning: and add the repository path in the config.

Has anyone discovered a way to add that "--secret …" argument set to the build command line?