Solved: Task timed out after 30.00 seconds

Hello,
I have this function, but I keep getting timeouts:

import { stripe } from '../../config';

export const getPlans = async ({ forceRefresh = false }) => {
  try {
    // Serve from the S3 cache unless a refresh is forced
    // (getPlansFromCache / shouldGetFromCache are my S3 cache helpers, sketched below)
    const existingPlans = await getPlansFromCache();
    if (!forceRefresh && shouldGetFromCache(existingPlans)) {
      return JSON.parse(existingPlans.Body.toString('utf-8'));
    }
    const result = await stripe.plans.list({ limit: 1000 });
    const plans = result.data.reduce((acc, plan) => {
      const [currency, , planCode] = plan.id.split('-');
      const { metadata = {} } = plan;
      if (!acc[planCode]) {
        return {
          ...acc,
          [planCode]: {
            planCode,
          },
        };
      }
      return {
        ...acc,
        [planCode]: {
          ...acc[planCode],
        },
      };
    }, {}); // accumulator is an object keyed by planCode, not an array
    return plans;
  } catch (err) {
    console.log('Err', err);
    return [];
  }
};
export default getPlans;

Basically, I have a GraphQL endpoint which is executed from my Lambda function via POST; it sends a request to the Stripe API to get all available plans and then writes the result to S3.
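For reference, getPlansFromCache and shouldGetFromCache (and the S3 write-back, which is not shown in the snippet above) are thin wrappers around S3. Roughly something like this - the bucket name, key and staleness check here are placeholders, not my exact code:

import AWS from 'aws-sdk';

const s3 = new AWS.S3();
const CACHE_BUCKET = 'my-plans-cache';   // placeholder bucket name
const CACHE_KEY = 'stripe/plans.json';   // placeholder object key
const MAX_AGE_MS = 60 * 60 * 1000;       // treat the cache as fresh for one hour

// Fetch the cached plans object from S3; resolve to null if it doesn't exist yet
export const getPlansFromCache = () =>
  s3.getObject({ Bucket: CACHE_BUCKET, Key: CACHE_KEY }).promise()
    .catch(() => null);

// Only use the cache if we actually got an object and it is recent enough
export const shouldGetFromCache = (cached) =>
  !!cached && (Date.now() - new Date(cached.LastModified).getTime()) < MAX_AGE_MS;

// Write the freshly fetched plans back to S3
export const writePlansToCache = (plans) =>
  s3.putObject({
    Bucket: CACHE_BUCKET,
    Key: CACHE_KEY,
    Body: JSON.stringify(plans),
    ContentType: 'application/json',
  }).promise();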

here is part of my serverless.yml

  graphql:
    handler: build/main.graphql
    timeout: 30
    memorySize: 128
    events:
      - http:
          path: graphql
          method: post
          cors: true

Any advice on how to fix this is much appreciated.

S3 was down yesterday. Perhaps this had something to do with it?

I don’t think so, as it is still not working and S3 is up.

const result = await stripe.plans.list({ limit: 1000 });

It hangs on the above call - if I run it locally, all works as expected. The JSON payload is small!

Can you confirm you are able to reach the internet from the Lambda function?

This may sound silly, but if the Lambda is inside a VPC within a private subnet then you need a managed NAT gateway (or roll your own NAT instance) to access the internet - I fell into this trap just yesterday (it took me ages to find out this little gem and fix it).
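Concretely: the Lambda has to be attached to private subnets whose route table sends 0.0.0.0/0 to the NAT (the NAT itself sits in a public subnet with an internet gateway). On the serverless.yml side it is just the vpc block - something like this, with placeholder IDs:

# in serverless.yml, under provider (or per function) - IDs below are placeholders
vpc:
  securityGroupIds:
    - sg-xxxxxxxx          # security group that allows outbound traffic
  subnetIds:               # private subnets whose default route points at the NAT
    - subnet-xxxxxxxx
    - subnet-yyyyyyyy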


Hi, yes, it is an issue with the Lambda not being able to access the internet. A good resource I found is https://medium.com/@philippholly/aws-lambda-enable-outgoing-internet-access-within-vpc-8dd250e11e12#.hombvkfhy


Awesome article - sorry I didn’t mention that I solved mine by using the internal DNS name of the EC2 instance I wanted to access - prior to that I was using the public IP address - derp!

Hope you got your problem solved :smiley:

Hi, I have added my Lambda function to my existing VPC and am able to return data back to my client, as detailed in the Medium article.

However, I have an issue and I am unclear why it is not working. On my VPC I have a Mongo cluster; when I console.log the response from my Lambda function, I can see it in the logs, but the curl client times out.

I am using:

curl -X POST -H "Authorization: Bearer _FQLs7c2k3E" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{ "query": "{ user { username } }" }' 'https://xxx.execute-api.us-east-1.amazonaws.com/dev/graphql'

Other GraphQL queries work, for example when reading from S3.

// authenticate and the graphql query executor are required further up (not shown here)
module.exports.graphql = (event, context, callback) => {
  const body = JSON.parse(event.body);
  const auth = authenticate(event.headers.Authorization);
  console.log('AUTH:', auth);
  if (!auth) { callback('ERROR: Authentication error'); return; }
  graphql(body.query, body.variables, auth)
    .then((response) => {
      console.log(response);
      callback(null, {
        statusCode: 200,
        body: JSON.stringify(response),
        headers: { 'Access-Control-Allow-Origin': '*' },
      });
    })
    .catch((err) => {
      console.log('ERROR: graphql', err);
      callback(err);
    });
};

here is the getUser code:

// findUserById queries the Mongo cluster (defined elsewhere)
export default async ({ userId }) => {
  console.log('try get user', userId);
  const user = await findUserById(userId);
  // Reject the promise if no user matches the id
  if (!user) throw new Error('User not found');
  return {
    ...user,
    userId: String(user._id),
  };
};

here are the logs:

START RequestId: b187d7dd-0356-11e7-8adb-77c55d44b2a9 Version: $LATEST
2017-03-07 16:54:16.634 (+00:00)	b187d7dd-0356-11e7-8adb-77c55d44b2a9	AUTH: { role: 'user', userId: 'xxxx' }
2017-03-07 16:54:16.667 (+00:00)	b187d7dd-0356-11e7-8adb-77c55d44b2a9	try get user: xxxx
2017-03-07 16:54:17.335 (+00:00)	b187d7dd-0356-11e7-8adb-77c55d44b2a9	{ data: { user: { username: 'user@domain.tld' } } }

So my Lambda function is connecting to the Mongo cluster and getting the data from it, but it times out when returning it to the client!

What am I missing here? Any advice is much appreciated.

OK, my issue was that the callback should be replaced with context.done(); this fixed it. I changed

  callback(null, {
    statusCode: 200,
    body: JSON.stringify(response),
    headers: { 'Access-Control-Allow-Origin': '*' },
  });

to

  context.done(null, {
    statusCode: 200,
    body: JSON.stringify(response),
    headers: { 'Access-Control-Allow-Origin': '*' },
  });

The approach in https://github.com/serverless/serverless/issues/1036 is the old way!
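For anyone hitting the same thing: as far as I can tell, with callback Lambda waits by default for the Node event loop to empty before finishing, and the open MongoDB connection keeps the loop busy, so the request times out even though the response is ready. context.done() returns straight away. An alternative I have read about (just a sketch, not what I deployed) is to keep the callback but tell Lambda not to wait for the event loop:

module.exports.graphql = (event, context, callback) => {
  // Don't wait for the open Mongo connection to close before returning the response
  context.callbackWaitsForEmptyEventLoop = false;

  const body = JSON.parse(event.body);
  const auth = authenticate(event.headers.Authorization);
  if (!auth) { callback('ERROR: Authentication error'); return; }

  graphql(body.query, body.variables, auth)
    .then(response => callback(null, {
      statusCode: 200,
      body: JSON.stringify(response),
      headers: { 'Access-Control-Allow-Origin': '*' },
    }))
    .catch(err => callback(err));
};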