Serverless, CodeStar, CodePipeline & CloudFormation

I’d like to see the proper buildspec and template.yml for a Serverless based CodePipeline.

  • I have a working Serverless function (API Gateway/Lambda/Aurora) using serverless-webpack that deploys fine
  • I have a default CodeCommit repo created by CodeStar Node/Express template, the pipeline works fine
  • Combining the two gives me CF errors

Default buildspec provided by CS template:

version: 0.1

phases:
  build:
    commands:
      - aws s3 cp --recursive public/ s3://$WEBSITE_S3_BUCKET/public/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers
      - sed -i -e "s|assets/|$WEBSITE_S3_PREFIX/public/assets/|g" public/index.html
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.json

artifacts:
  type: zip
  files:
    - template-export.json

My buildspec:

version: 0.1
phases:
  install:
    commands:
      - npm install
      - npm install -g serverless
  build:
    commands:
      - serverless deploy

CS provided template.yml:

AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31
- AWS::CodeStar

Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members

Resources:
  GetHelloWorld:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.get
      Runtime: nodejs4.3
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get

My serverless.yml

service: xxx
provider:
  name: aws
  runtime: nodejs6.10
  region: us-xxx-x
  vpc:
    securityGroupIds:
      - sg-xxx
    subnetIds:
      - subnet-xxx
      - subnet-xxx
      - subnet-xxx
plugins:
  - serverless-webpack
custom:
  webpackIncludeModules: true
functions:
  read:
    handler: handler.read
    events:
      - http:
          path: /
          method: get

My .babelrc

{
  "plugins": ["transform-runtime"],
  "presets": ["env"]
}

My webpack.config.js:

module.exports = {
  entry: './handler.js',
  target: 'node',
  module: {
    rules: [
      { test: /\.js$/, exclude: /node_modules/, loader: 'babel-loader' }
    ]
  }
};

I get an S3 access denied error on the CloudFormation stack update. I’ve tried a few things, but to no avail. It seems like someone should have a boilerplate for this type of setup?


Maybe this doesn’t really make sense. CodeStar creates endpoints and Serverless creates endpoints… how do you make the latter’s endpoints become the former’s? A manually created pipeline with just source and build phases would make more sense for staying with Serverless.

And using CodeStar, you probably don’t need Serverless… though you would have to do a lot of tricky CloudFormation work if you needed to provision resources dynamically or whatever.

It would be nice, though, if Serverless could somehow be plugged into a CodeStar-created project (or had its own template, wink). Serverless can create and manage resources in a nicer way than editing the CodeStar CF templates.

I think this should be possible.
Underneath, CodeStar is just linking together general-purpose components, and each of them should be able to work with Serverless.

I’ve managed to reproduce the S3 error in the build step of the pipeline. The fix comes in several steps.

In my serverless.yml, I told it to use the deployment artifact bucket created by CodeStar, like so:

service: serverless
provider:
  name: aws
  deploymentBucket: aws-codestar-region-account_id-serverless-pipeline
...

In my buildspec.yml, I changed it to use Serverless only to package things up:

version: 0.2
phases:
  install:
    commands:
      - npm install -g serverless
      - yum -y -q update
      - yum -y -q install jq
  build:
    commands:
      - serverless package
      - ./upload.sh
      - cp .serverless/cloudformation-template-update-stack.json .serverless/template-export.json

artifacts:
  type: zip
  files:
    - .serverless/*
  discard-paths: yes

The upload.sh script copies the artifact to the deployment bucket:

#!/bin/bash

set -eu
set -o pipefail

STATE_FILE=.serverless/serverless-state.json
S3_PREFIX=$(jq -r '.package.artifactDirectoryName' < "$STATE_FILE")
ARTIFACT=$(jq -r '.package.artifact' < "$STATE_FILE")
AWS_PROFILE=${AWS_PROFILE:-}

PROFILE_OPTS=""
if [ -n "$AWS_PROFILE" ]; then
  PROFILE_OPTS="--profile $AWS_PROFILE"
fi

aws $PROFILE_OPTS s3 cp ".serverless/$ARTIFACT" "s3://$S3_BUCKET/$S3_PREFIX/$ARTIFACT"

It should work from your dev machine as long as you set the S3_BUCKET environment variable which codestar seems to set in its pipeline for us (it’s the name of the deployment bucket).
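To check the script from your dev machine without touching S3, you can fake the inputs and dry-run the command it would issue. A sketch (the bucket name and state-file contents are mock values, and jq is assumed to be installed):

```shell
# Mock the inputs that `serverless package` and the CodeStar pipeline provide.
mkdir -p .serverless
cat > .serverless/serverless-state.json <<'EOF'
{"package": {"artifactDirectoryName": "serverless/serverless/dev/1514764800000", "artifact": "serverless.zip"}}
EOF
export S3_BUCKET=aws-codestar-region-account_id-serverless-pipeline

# Same extraction that upload.sh performs.
S3_PREFIX=$(jq -r '.package.artifactDirectoryName' < .serverless/serverless-state.json)
ARTIFACT=$(jq -r '.package.artifact' < .serverless/serverless-state.json)

# Dry run: print the copy command instead of executing it.
echo "aws s3 cp .serverless/$ARTIFACT s3://$S3_BUCKET/$S3_PREFIX/$ARTIFACT" | tee upload-command.txt
```

If the printed destination matches the deployment bucket and the artifact directory Serverless chose, the real run should land the zip where the pipeline expects it.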

Finally, you need to modify the role that CodeStar uses for the pipeline. Mine seems to be called CodeStarWorker-serverless-CloudFormation. You need to add the “s3:GetBucketLocation” permission on the deployment artifact bucket.
Note that CodeBuild can modify the role and role policy, so there’s a chance the policy change could be overwritten. Things to try in order to be more defensive there: use an attached policy instead of modifying the inline one, or uncheck the tickbox that allows CodeBuild to edit the role in the CodeBuild project’s properties.
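In IAM terms, the extra statement would look roughly like this (the bucket name is a placeholder following the CodeStar naming pattern):

    {
        "Effect": "Allow",
        "Action": [
            "s3:GetBucketLocation"
        ],
        "Resource": [
            "arn:aws:s3:::aws-codestar-region-account_id-serverless-pipeline"
        ]
    }

If you go the attached-policy route, `aws iam create-policy` followed by `aws iam attach-role-policy` against the worker role would do it without touching the inline policy CodeStar manages.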

Having got this far, the deploy step of my pipeline fails because it’s supplying CloudFormation template parameters where none are expected.

So next, go to the CodePipeline and edit it.
The parameters are specified in the deploy phase’s GenerateChangeSet step; modify that so the parameters JSON object is empty.
Further, the pipeline needs to run the CloudFormation GenerateChangeSet with a particular capability:
change the capability from CAPABILITY_IAM to CAPABILITY_NAMED_IAM.
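If you’d rather script those pipeline edits than click through the console, they can be expressed as a jq transform over the document that `aws codepipeline get-pipeline` returns. A sketch against a heavily cut-down mock of that document (the real one has more stages; ActionMode, Capabilities and ParameterOverrides are the configuration keys the CloudFormation action uses):

```shell
# Mock of `aws codepipeline get-pipeline --name <project>-Pipeline` output.
cat > pipeline.json <<'EOF'
{"pipeline": {"stages": [{"name": "Deploy", "actions": [{"name": "GenerateChangeSet",
  "configuration": {"ActionMode": "CHANGE_SET_REPLACE",
                    "Capabilities": "CAPABILITY_IAM",
                    "ParameterOverrides": "{\"ProjectId\":\"serverless\"}"}}]}]}}
EOF

# Empty the parameter overrides and upgrade the capability on the
# change-set action, leaving everything else untouched.
jq '(.pipeline.stages[].actions[].configuration
     | select(.ActionMode? == "CHANGE_SET_REPLACE"))
    |= (.Capabilities = "CAPABILITY_NAMED_IAM"
        | .ParameterOverrides = "{}")' pipeline.json > pipeline-fixed.json

cat pipeline-fixed.json
```

Against the real document you would also strip the top-level `metadata` key before pushing the result back with `aws codepipeline update-pipeline --cli-input-json file://pipeline-fixed.json`.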

Next, you need to add some more permissions, this time to the CloudFormation deploy role. Mine seems to be called “CodeStarWorker-serverless-CloudFormation”.
That role needs permissions to create everything that Serverless puts in its generated CloudFormation template.
That includes CloudWatch log groups, the IAM execution role for Lambda, and any other bits and pieces you’ve added.
At the minimum I added:

                "lambda:GetFunction",
                "lambda:ListVersionsByFunction",
                "lambda:PublishVersion",
                "logs:CreateLogGroup",
                "logs:DeleteLogGroup",
                "iam:AttachRolePolicy",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:CreatePolicy",
                "iam:CreateRole",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:PutRolePolicy"

I also had to add iam:PassRole for the Lambda execution role that Serverless created:

        {
            "Action": [
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::account_id:role/serverless-dev-region-lambdaRole"
            ],
            "Effect": "Allow"
        },
...

Also note that Serverless uses a shared IAM role for all Lambdas. I don’t believe this is good practice, and it also complicates things when the role is managed by more than one CloudFormation template.
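If the shared role bothers you, Serverless does support a per-function role. A sketch (the role ARN is a placeholder for a role you manage yourself):

    functions:
      read:
        handler: handler.read
        # hypothetical pre-created role; replace with your own ARN
        role: arn:aws:iam::account_id:role/my-read-function-role

That keeps each function’s permissions out of the generated template’s shared lambdaRole, which also sidesteps the multiple-templates-managing-one-role problem.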

One of the things I’m not sure about is what the AWS::CodeStar CloudFormation transform does.
There’s a resource that gets deleted: a SyncResources thing. It’s not present in the Serverless-generated CloudFormation, probably because Serverless doesn’t use the CodeStar transform. I don’t yet know what breaks when you mess with this.


The post above needs more love… this is spot on; you saved me hours of bashing CodeStar into submission. I had to switch to Serverless from AWS SAM because I was unable to make CORS work using inline Swagger.

I needed a number of changes to get my CodeStar install working with Serverless:

The permission changes on the IAM role for the CodeBuild phase were against the role ‘CodeStarWorker-<project>-CodeBuild’ instead of the CloudFormation one (typo on your end), and they required:

s3:ListAllMyBuckets
s3:GetBucketLocation

Also, the build command for your upload needed to change to:

- sh ./upload.sh

Guys, I’m sorry if I’m not seeing something you all are seeing. But why not have a buildspec as simple as:

version: 0.2

phases:
  install:
    commands:
      - npm install
      - npm install -g serverless
  build:
    commands:
      - sls deploy

You could, but CodeStar sets up an explicit deploy phase which takes the CloudFormation template as input. Same result, but you would have to change the pipeline entirely.

Is there any way to deploy it (somehow) first with Serverless, let CloudFormation create everything, and then “export” the complete template from CloudFormation to replace the template.yml that CodeStar needs for its deploy phase?

In my case I already have everything built through Serverless, but outside CodeStar. I would like to start using CodeStar while avoiding having to start from scratch and “handing over” control from sls to CodeStar… how possible would that be?
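For the “export” part, one possible starting point (a sketch, not a full handover; the stack name is a placeholder for whatever Serverless created, typically <service>-<stage>):

    # Dump the live template of the Serverless-managed stack so it can be
    # compared with, or adapted into, the CodeStar template.yml.
    aws cloudformation get-template \
      --stack-name serverless-dev \
      --query TemplateBody > exported-template.json

Whether CodeStar’s deploy phase would accept that template unmodified is another question, given the AWS::CodeStar transform and SyncResources behaviour mentioned above.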