Hacker News

> For high-throughput workloads (>500 req/s), we actually saw better cost efficiency with S3 due to their economies of scale on bandwidth. The breakeven point seems to be around 100-200TB of relatively static data with predictable access patterns. Below that, the operational overhead of running your own storage likely exceeds S3's markup.

I just spent 5 minutes reading this over and over, and it still doesn't make any sense to me. First it says high throughput = S3, low throughput = self-hosted. Then it says low throughput = S3 (and therefore high throughput = self-hosted).
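For what it's worth, the breakeven claim in the quote only pins down the storage-volume axis, not throughput. A minimal sketch of that kind of breakeven arithmetic, with entirely made-up prices (none of these are real S3 or colo numbers):

```python
# Hypothetical monthly cost model: S3 is pure per-TB, self-hosting has a
# fixed ops overhead plus a lower marginal per-TB cost. All figures are
# illustrative assumptions, not actual pricing.
S3_PER_TB = 23.0      # assumed S3 storage, $/TB-month
SELF_FIXED = 3000.0   # assumed fixed ops/hardware overhead, $/month
SELF_PER_TB = 8.0     # assumed marginal self-hosted cost, $/TB-month

def monthly_cost_s3(tb: float) -> float:
    return S3_PER_TB * tb

def monthly_cost_self(tb: float) -> float:
    return SELF_FIXED + SELF_PER_TB * tb

# Breakeven where S3_PER_TB * tb == SELF_FIXED + SELF_PER_TB * tb
breakeven_tb = SELF_FIXED / (S3_PER_TB - SELF_PER_TB)
print(f"breakeven at ~{breakeven_tb:.0f} TB")  # 200 TB with these numbers
```

Below the breakeven volume the fixed overhead dominates and S3 wins; above it self-hosting wins. That's consistent with the quote's second half, but it says nothing about the req/s claim in the first sentence, which is presumably about bandwidth/egress rather than storage.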


