Hacker News

Until very recently, at the company I work for, we were running one of the largest (if not the largest) MongoDB replica set clusters in terms of number of documents stored (~20B) and data size (~11TB). The database held up nicely over the past 10 years, since the very first inception of the product. We had to do some optimizations over the years, but those are expected with any database.

Since Mongo Atlas has a hard limit of 14TB on maximum disk size, we explored multiple migration options: sharding Mongo, or moving to Postgres/TimescaleDB or another time-series database. During the evaluation of alternatives we couldn't find a database that supported our use case, was highly available, could scale horizontally, and was easily maintainable (e.g. upgrades on Mongo Atlas require no manual intervention and there's no downtime even when going to the next major version).

We had to work around numerous bugs that we encountered during sharding that were specific to our workloads, but otherwise the migration was seamless and required ~1h of downtime (mainly to fine-tune database parameters). We've had very few issues with it since. I think Mongo is a mature technology, and it makes sense depending on the type of data you're storing. I know at least a few other healthcare companies that are using it for storing life-critical data, even at a larger scale.
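For context, sharding an existing collection is driven through `mongosh` admin commands; a minimal sketch (the database name, collection name, and shard key here are hypothetical, and a hashed key is only one of several reasonable choices for a write-heavy time-series workload):

```javascript
// Enable sharding on the database (implicit on MongoDB 6.0+,
// required explicitly on older versions).
sh.enableSharding("telemetry")

// For a non-empty collection, an index supporting the shard key
// must exist before sharding.
db.getSiblingDB("telemetry").readings.createIndex({ deviceId: "hashed" })

// Shard the collection on a hashed key to spread writes evenly
// across shards instead of hot-spotting one range.
sh.shardCollection("telemetry.readings", { deviceId: "hashed" })
```

On Atlas, the same effect is usually achieved via the cluster UI/API rather than raw commands, but the underlying operations are these.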


Did you evaluate RavenDB? Just out of interest.


RavenDB is trash. It can't handle even 1/10 of this type of load, and it was trashed in Jepsen testing too. I had to work with it for 4 years and I disliked it.


Thanks! I am a little disappointed that it seems to be trash :(



