Sls deploy doesn't exit when running in a drone.io deployment

Hi,

I’m trying to integrate the serverless deployment into our drone environment, but when it gets to ‘Checking Stack update progress.’ in the drone deployment container, it doesn’t seem to return any further info. The serverless process appears to become disconnected in some way, and the drone deployment just continues on forever until it’s manually cancelled. Normally, if a process in your deployment script fails with an error, it kills the rest of the deployment; once all script steps have completed with a 0 exit code, the deployment finishes.

Not sure if anyone can shed any light on this? What’s happening when it gets to that check-stack part? I’ve tried adding -v, but it doesn’t output any more info like it does locally. As I mentioned, it’s almost like the process is just left abandoned in the background. The stack update completes fine in AWS.

$ sls deploy --stage prod -v
Serverless: Packaging service…
Serverless: Uploading CloudFormation file to S3…
Serverless: Uploading service .zip file to S3 (17.97 MB)…
Serverless: Updating Stack…
Serverless: Checking Stack update progress…
[info] Cancel request received, killing process //this is me cancelling it manually

Serverless v1.4.0
node 4.4.1

Cheers
Paul

This is still an issue. I’ve tried to do a bit more debugging.

If I log in to the drone server and open a shell console to the Docker container that is running the serverless deployment while the deployment is in the hung state (i.e. it’s stuck on ‘Checking Stack update progress…’ and the actual stack update has completed successfully in CloudFormation), and at this point I manually run the sls deploy command, the original hung process picks up the stack updates from my manual run and finishes the job successfully.

This is a really strange one, and I know it’s probably not something many (if any) others are experiencing. If someone is able to shed some light on how that check-updates process works, it might help. I see it’s using setTimeout; I wonder if that’s causing it to lose connection or something?

Any help would be greatly appreciated.

Cheers
Paul

OK, so I’ve managed to find where the issue is, but not the absolute root cause in this scenario.

In monitorStack, the monitoredSince time is set to now minus 5 seconds, so it only checks events that occurred after it started monitoring. The problem I’m seeing on the drone server is that the event times being reported are always before monitoredSince, sometimes by 60–80 seconds, so it never detects that the update has completed or failed.
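To make the failure mode concrete, here’s a minimal sketch (illustrative names, not the actual serverless source) of that filtering behaviour: events whose Timestamp falls before monitoredSince are dropped, so a completion event reported 60+ seconds in the past is never seen.

```javascript
// Illustrative version of the monitoredSince filter: only events newer
// than the cutoff are considered by the monitor.
function filterNewEvents(events, monitoredSince) {
  return events.filter((event) => event.Timestamp > monitoredSince);
}

// Cutoff is "now minus 5 seconds", as in the behaviour described above.
const monitoredSince = new Date(Date.now() - 5000);

// A completion event reported with a timestamp 70 seconds in the past,
// like the ones I'm seeing on the drone server.
const events = [
  { Timestamp: new Date(Date.now() - 70000), ResourceStatus: 'UPDATE_COMPLETE' },
];

// The completion event is filtered out, so the monitor never sees it.
console.log(filterNewEvents(events, monitoredSince).length); // 0
```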

I’ve checked the time on my drone server and it’s correct and synced to an internet time source. My drone server is an EC2 instance in AWS, so it would probably have a very quick response time when instigating the CF update. I’m not sure why there is such a delay between kicking off the CloudFormation update and starting the monitoring task. I’ve modified it to minus 120 seconds and it’s all working OK.

Not really sure of the best way to fix it; maybe the monitor task should be passed a timestamp taken in the updateStack function before it calls the AWS update, rather than monitorStack trying to guess the start time. I think that would be the most robust solution.

I’ll just raise an issue in the GitHub repo and see if I can find some time to submit a PR.

Cheers
Paul