Considering that you can’t control the number of concurrent instances of your function, and that at peak times it might be very high – are there best practices for handling connection limits on databases?
One idea that came to mind is to use pgBouncer, but then I would have to maintain a non-serverless component in my stack.
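For reference, a pgBouncer setup for this use case usually comes down to a few lines of configuration. This is a minimal, illustrative `pgbouncer.ini` sketch – the host names and pool sizes are placeholders, not recommendations:

```ini
[databases]
; route clients connecting to "appdb" to the real RDS instance (hypothetical endpoint)
appdb = host=my-rds-instance.example.rds.amazonaws.com port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
; transaction pooling lets many short-lived Lambda clients share few server connections
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

The key setting is `pool_mode = transaction`: each Lambda invocation holds a real server connection only for the duration of a transaction, so thousands of concurrent clients can be multiplexed over a small, fixed number of database connections.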
In my limited experience using RDS with a serverless (Node.js) stack, you can keep DB connection pools alive between invocations. The way I have done this in the past is to create the connection pool outside of the handler function. That way the pool (and its DB connections) is created only on first initialization of the Lambda, and is then available to all subsequent invocations while the Lambda is ‘warm’.
The reason this works is that Lambda tries to reuse containers where possible: if your function is getting regular traffic, you will see the DB connection pool being reused rather than a new connection being created on every invocation.
After a bit of googling I found a similar question on the AWS forums that seems to confirm this: https://forums.aws.amazon.com/thread.jspa?threadID=216000
Obviously it is worth experimenting to see whether this works for you, as there are a good few reasons why having connections floating around outside invocations isn’t a great idea (as discussed in the linked forum thread).
Alternatively, I recommend looking at DynamoDB if you need a more distributed database to handle a large number of requests.