Best node orm to use?

I am pretty close to dumping DynamoDB: I worry about autoscaling going wrong for some reason, loss of data due to throttling, and in general not being able to get fast stats on a table, etc.

Anyway, I will probably go with Aurora MySQL (maybe that will offer me a future path to ‘Aurora Serverless’ down the line, once I become more ninjaful in capacity provisioning)

So my question is: what is the best Node ORM out there to use with MySQL? Sequelize, Knex, etc.? I’m also seeing some newer ones like TypeORM.

And also, btw, what is the best way to manage migrations in a typical serverless microservice?

hi @walshe
I have the same question. So far I am using MongoDB with Mongoose for my serverless projects, but recently I started looking at AppSync + DynamoDB. What is your experience with this database? I see they are using many concepts which I haven’t heard of. Sounds like you had some issues.


are you asking about DynamoDB? If so, I would start playing with DynamoDB on its own, without AppSync, first, just to make it easier to understand

I originally wanted to move to MySQL/Aurora because I have massive spikes in db writes that can happen at basically any time of day - this leaves me unable to preset DynamoDB to higher capacity for certain times, and I don’t trust the autoscale option.

Now with MySQL I see there are a whole pile of other issues - the main one being the best way to cache a MySQL connection in a ‘warm’ Lambda, which all seems a bit dodgy too.

Anyone with any solid solutions there ?

I basically have a Lambda consuming a Kinesis stream; in the Lambda I need to write to the database. My batch size is currently 1, but I may increase it to get better throughput and/or make better use of MySQL connections
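One way to make a bigger Kinesis batch size pay off is to group the records from each invocation into fixed-size chunks, so each chunk can become one multi-row insert over a single connection. A minimal sketch of that grouping step (`chunkRecords` and `BATCH_SIZE` are illustrative names, not from any library):

```javascript
// Split an array of decoded Kinesis records into fixed-size chunks so each
// chunk can become one multi-row INSERT instead of N single-row statements.
// BATCH_SIZE and chunkRecords are illustrative, not part of any library.
const BATCH_SIZE = 25;

function chunkRecords(records, size = BATCH_SIZE) {
  const chunks = [];
  for (let i = 0; i < records.length; i += size) {
    chunks.push(records.slice(i, i + size));
  }
  return chunks;
}

// Example: 60 records become three chunks of 25, 25 and 10.
const chunks = chunkRecords(Array.from({ length: 60 }, (_, i) => i));
console.log(chunks.length);    // 3
console.log(chunks[2].length); // 10
```

Each chunk would then be handed to whatever client or ORM you pick for the actual write.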

Anyone have any idea when Aurora Serverless is out, btw? I guess it will be more expensive than DynamoDB in any case

what kind of issues do you have with the connection? I use MongoDB, and this is the approach I used

the main problem I had was with serverless-sentry-plugin; the old version has a bug and it was impacting my MongoDB connections. It is fixed now, btw

well… I am going to have spikes in a post-registration offline task where I need to save maybe 70,000 rows in a short amount of time. Even if I batch the inserts, there’s going to be a lot of connections required. I did see that article and I’m wondering if that is the generally suggested approach to take…
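For the 70,000-row case, batching the inserts usually means building one parameterized multi-row INSERT per chunk, so a few hundred statements replace 70,000 round trips. A sketch in plain JS, so it runs without a live MySQL connection (`buildBulkInsert` is an illustrative helper, not part of any MySQL client):

```javascript
// Build one parameterized multi-row INSERT for a chunk of rows.
// buildBulkInsert is an illustrative helper, not part of any MySQL client;
// the { sql, params } pair is what you would pass to e.g. connection.query.
function buildBulkInsert(table, columns, rows) {
  const placeholders = rows
    .map(() => `(${columns.map(() => '?').join(', ')})`)
    .join(', ');
  const sql = `INSERT INTO ${table} (${columns.join(', ')}) VALUES ${placeholders}`;
  // Flatten row values into the parameter list, in column order.
  const params = rows.flatMap((row) => columns.map((col) => row[col]));
  return { sql, params };
}

const { sql, params } = buildBulkInsert('users', ['id', 'email'], [
  { id: 1, email: 'a@example.com' },
  { id: 2, email: 'b@example.com' },
]);
console.log(sql);    // INSERT INTO users (id, email) VALUES (?, ?), (?, ?)
console.log(params); // [ 1, 'a@example.com', 2, 'b@example.com' ]
```

Keeping the placeholders parameterized (rather than interpolating values into the SQL string) avoids injection issues even for bulk writes.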

I am also looking at FaunaDB, which seems super cool in terms of scaling - but it’s a bit pricey, perhaps

if you can control all requests to your Lambda and send them in batches one by one, then the cached connection should be reused. However, when requests all come in at the same time, or while your Lambda is busy, it will scale up and run in another Lambda instance, which means a new connection will be created
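The reuse pattern being described is usually done by caching the connection in module scope, outside the handler, so warm invocations of the same container skip the connect step. A runnable sketch with a stubbed `connect()` standing in for a real client (e.g. mysql2 or the MongoDB driver), so the pattern can be shown without a database:

```javascript
// Cache the connection *promise* in module scope so warm invocations of the
// same Lambda container reuse it; only a cold start (or a scale-out to a new
// container) pays the connection cost. connect() is a stand-in for a real
// driver call and just counts how many times it runs.
let cachedConnection = null;
let connectCount = 0;

function connect() {
  connectCount += 1; // stand-in for a real TCP/TLS handshake
  return Promise.resolve({ id: connectCount });
}

async function handler(event) {
  if (!cachedConnection) {
    // Caching the promise (not the resolved value) means concurrent calls
    // inside one container still share a single connect attempt.
    cachedConnection = connect();
  }
  const conn = await cachedConnection;
  // ... use conn to run queries ...
  return conn.id;
}

// Two sequential invocations in the same "container" share one connection.
(async () => {
  console.log(await handler({})); // 1
  console.log(await handler({})); // 1 (reused, not 2)
})();
```

The caveat from the post above still applies: each concurrently running container holds its own connection, so under a spike you still get one connection per warm instance.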

in terms of that article and mongodb I believe this is the best approach you can apply to your functions

I’ve never used FaunaDB

have a look at this one if you haven’t seen already

I recently needed to do a similar transition (from SimpleDB to PostgreSQL, to allow for more complex time series queries). For a few reasons Aurora/RDS won’t work for me at the moment, so I’m currently running PostgreSQL on EC2 behind PgBouncer.

PgBouncer handles the connection pooling between Lambda and the database, as well as providing a way to protect the database connection with TLS (since I don’t want to run my Lambdas in VPC).
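For reference, the relevant part of a PgBouncer setup like this looks roughly like the fragment below. The paths, ports, and pool sizes are illustrative and depend on your deployment; `pool_mode = transaction` is what lets many short-lived Lambda connections share a small pool of real Postgres connections:

```ini
; pgbouncer.ini sketch (paths and pool sizes are illustrative)
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
; TLS between Lambda clients and PgBouncer
client_tls_sslmode = require
client_tls_key_file = /etc/pgbouncer/server.key
client_tls_cert_file = /etc/pgbouncer/server.crt
```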

I’m using Alembic for migrations. So far it’s been super simple to use (coming from experience with Django/South).

For ORM, I’m using a couple different solutions. For general, regular DB queries I’m using Objection.js, which is an ORM wrapper around Knex. But my project also uses GraphQL at the API layer, so to avoid the N+1 problem I’m using JoinMonster (which also uses Knex in the background).
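The N+1 problem mentioned here is worth making concrete: a naive GraphQL resolver issues one query for the parent list, then one query per parent for its children, while JoinMonster-style compilation fetches everything in a single joined query. A runnable illustration where `fakeDb` just counts queries instead of hitting a real database (all names here are illustrative, not JoinMonster's API):

```javascript
// fakeDb counts queries so the N+1 effect is visible without a database.
const fakeDb = {
  queries: 0,
  authors: [{ id: 1 }, { id: 2 }, { id: 3 }],
  posts: [
    { authorId: 1, title: 'a' },
    { authorId: 2, title: 'b' },
  ],
  run(fn) {
    this.queries += 1;
    return fn();
  },
};

// Naive resolver: 1 query for authors + one more per author for posts.
function naive() {
  const authors = fakeDb.run(() => fakeDb.authors);
  return authors.map((a) => ({
    ...a,
    posts: fakeDb.run(() => fakeDb.posts.filter((p) => p.authorId === a.id)),
  }));
}

// Joined: a single query returns authors with their posts, the way a
// JoinMonster-style compiler turns the GraphQL query into one SQL JOIN.
function joined() {
  return fakeDb.run(() =>
    fakeDb.authors.map((a) => ({
      ...a,
      posts: fakeDb.posts.filter((p) => p.authorId === a.id),
    }))
  );
}

naive();
console.log(fakeDb.queries); // 4  (1 for authors + 3 for their posts)
fakeDb.queries = 0;
joined();
console.log(fakeDb.queries); // 1
```

With real data, that "4" becomes "1 + however many parent rows the query returns", which is why it matters at the API layer.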

By the way, if you happen to be doing anything with time series data I highly recommend TimescaleDB. It’s an extension on top of PostgreSQL that enables highly performant time series insertions and queries. So far it’s been working spectacularly for my 2M+ row datasets.

thanks @bennett, very interesting

I had trouble with RDS too (couldn’t connect from a client outside AWS), but I just found out yesterday that my AWS account was in an isolated state or something, and it’s good now. Is it possible to install those extensions on an RDS Postgres instance easily?

Unfortunately, no – that’s one of the reasons I’m using EC2. Amazon has a whitelist of the extensions that are allowed to be installed on RDS Postgres, and TimescaleDB currently isn’t on there. From what the creators say, it sounds like the best way to get it added is just to submit feedback. If enough people request it they’ll put it on the list.