
This may be an obvious point, but I didn't see it mentioned in the (otherwise excellent) article: I would have been interested in the cost savings of simply implementing 'delete on read' against S3, compared with the home-made in-memory cache solution they ended up using. I can't find this on the S3 pricing page, but if storage is billed per-second, as some other AWS services are, the savings could be significant.
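To make the 'delete on read' idea concrete, here is a minimal sketch of the semantics: each object is consumed exactly once and removed at read time, so it never sits around accruing storage cost. A plain in-memory dict stands in for the object store here; the class and method names are hypothetical illustrations, not the article's actual implementation.

```python
# Hypothetical sketch of 'delete on read' semantics.
# A dict stands in for the object store (e.g. an S3 bucket).

class DeleteOnReadStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        # pop() returns the value and removes the entry in one step,
        # so the object is freed as soon as it is read.
        return self._objects.pop(key)

store = DeleteOnReadStore()
store.put("job-123", b"payload")
print(store.get("job-123"))          # the stored payload
print("job-123" in store._objects)   # the entry is gone after the read
```

Against real S3, the equivalent would be a `get_object` immediately followed by a `delete_object` on the same key; the billing question is how much of the object's storage duration that actually shaves off.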

The solution they document is also a good fit for the S3 'reduced redundancy' storage option, so I hope they had that enabled from day one.


