
That's great if you can fit a lot of your database in your server's memory, but seems like a terrible headache once you get a decent number of users.

Personally, I'd much rather have sane queries in the first place, but Rails isn't really my cup of tea either, so if it is yours, take my opinion with a large pinch of salt.
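
By "sane queries" I mostly mean eager loading instead of firing one query per row. A minimal sketch of the difference (Post and Comment are made-up models, not anything from the article):

  # N+1: one query for the posts, then one query per post for its comments
  posts = Post.limit(30)
  posts.each { |post| puts post.comments.to_a.size }

  # Eager loading: two queries total, no matter how many posts there are
  posts = Post.includes(:comments).limit(30)
  posts.each { |post| puts post.comments.size }  # reads the preloaded rows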



> That's great if you can fit a lot of your database in your server's memory, but seems like a terrible headache once you get a decent number of users.

Once you have a few million users, you can think about a better solution.

You can fit a lot into server memory, and spilling out of RAM to NVMe isn't that bad either.


> That's great if you can fit a lot of your database in your server's memory, but seems like a terrible headache once you get a decent number of users.

Surely you'd care about fitting a significant chunk of your usage in server memory, rather than what percentage of the total data that is, no?

To take the site we're on as an example, I'd be willing to bet the 30 things on the front page have one or two orders of magnitude more traffic than anything else (and probably a few more orders of magnitude more than the median post).
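
Which is exactly the case where a small cache pays off: only the hot working set needs to be resident, not the whole table. A minimal sketch using Rails' low-level cache (Rails.cache.fetch is a standard API; the Story model and the one-minute TTL are my invention):

  # Cache only the ~30 hot items; a miss falls through to the database.
  # Everything outside the hot set is never cached at all.
  def front_page_stories
    Rails.cache.fetch("front_page", expires_in: 1.minute) do
      Story.order(score: :desc).limit(30).to_a
    end
  end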


That seems much more specialized than what I'd imagined based on the prior description.

In this example, would Rails only cache models that fit certain query parameters? Or is it a configurable LRU? How does the in-memory cache work when you have multiple puma workers? Or does this mechanism rely on something more esoteric? Given that this technique is part of solving N+1 query problems, I'm assuming things like votes and comments are included, and the high write volume would imply that all of the caches need to stay up to date, at least with a fairly high degree of consistency.
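
For what it's worth, the stock Rails answer to the consistency part is key-based expiration rather than invalidation: with the default memory_store, each puma worker does keep its own independent cache (pruned LRU-style), and cache keys embed the record's updated_at, so a write produces a new key instead of mutating a stale entry. A sketch of that pattern, assuming only standard Rails APIs (render_post_html is hypothetical); whether the approach under discussion actually works this way, I can't say:

  # config/environments/production.rb
  # memory_store is per-process: one cache per puma worker.
  config.cache_store = :memory_store, { size: 256.megabytes }

  # cache_key_with_version embeds updated_at ("posts/42-20240101120000"),
  # so touching the record on a new vote or comment changes the key and
  # stale entries simply age out of the LRU instead of being served.
  def rendered_post(post)
    Rails.cache.fetch(post.cache_key_with_version) do
      render_post_html(post)  # hypothetical expensive render
    end
  end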



