Is it possible to sign an S3 PUT url without accessKeyId?

I understand that the URL needs to be signed with credentials that have write access to the S3 bucket, so that the signature can be both calculated by the signer and later verified by S3.

Most examples seem to illustrate this as follows:

  const s3 = new AWS.S3({
    accessKeyId: '***********',
    secretAccessKey: '*************',
    region: 'us-east-1',
    signatureVersion: 'v4'
  });

But there must be a way to do this with IAM, which leads me to some questions:

  1. If I specify an iamRoleStatement for the S3 bucket in question in serverless.yml, can that be used to sign the URL? If so, how?
  2. What, if any, is the bucket policy to apply that specifies the same role?
  3. If the above is not viable, is there a workaround to avoid specifying the accessKeyId?
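For reference, the kind of iamRoleStatements block I mean might look like this (the bucket name is a placeholder):

```yaml
provider:
  name: aws
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: arn:aws:s3:::my-bucket/*
```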

You may want to use something like AWS Cognito, which can grant authenticated (or unauthenticated) users access to upload to S3 without needing to store or send any sensitive credentials. I’m not sure how else one would upload directly from the client to S3.

It turns out that what I thought was a bug I had been chasing for several weeks was in fact misunderstood behaviour in a new tool I was using.

I had iamRoleStatements in serverless.yml granting the Lambda execution role the rights needed to sign the URL for PutObject and PutObjectAcl. But it never worked, except locally/offline. The only workaround was to add the credentials snippet above, which produced a working URL, albeit one signed with different credentials.

To cut a long story short, I switched from Postman to a VSCode extension called REST Client. It works very well as long as you don’t click the URL displayed in the response. Doing so brings up a VSCode dialog box to either navigate to or copy the URL, and copying also escapes it automatically. Since the URL is so long that it scrolls out of my viewport, I never noticed it was already escaped. Every time I pasted the URL into my client app to upload the file, I was pasting an already-escaped URL; my client app would then escape it a second time, and the upload would obviously fail.
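The failure mode is easy to reproduce without AWS at all: a presigned URL is already percent-encoded, so escaping it again mangles the signature (the URL below is a made-up example):

```javascript
// A presigned URL is already percent-encoded; '%2F' here stands in for an
// encoded '/' inside the signature query parameter.
const signedUrl =
  'https://my-bucket.s3.amazonaws.com/photo.jpg' +
  '?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=ab%2Fcd';

// Escaping again re-encodes the '%' itself, turning '%2F' into '%252F'.
// S3 then sees a different signature string and rejects the request.
const escapedAgain = encodeURI(signedUrl);

console.log(escapedAgain.includes('%252F')); // true
```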

My confusion was compounded by the fact that the URL would work locally (because the SDK was picking up my AWS access key from my environment), and it also worked if I added the AWS config with the same super credentials directly in the Lambda itself.

Needless to say, I felt rather foolish when I realised that, by accidentally double-escaping the URL, I had been the cause of my own issues.