
Hm. I love Stack Overflow but architecture only starts to become interesting at 4-5 times their traffic.

95M/mo translates to a mere 36/sec avg. A single web- and db-server will handle that without breaking a sweat, although you of course want some more machines for peaks, redundancy and comfort.
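A quick back-of-the-envelope check on that conversion, assuming a ~30-day month:

```python
# Rough conversion from monthly page views to an average per-second rate.
MONTHLY_PAGE_VIEWS = 95_000_000
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59M seconds in a 30-day month

avg_per_second = MONTHLY_PAGE_VIEWS / SECONDS_PER_MONTH
print(round(avg_per_second, 1))  # ~36.7 page views/sec on average
```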



36/sec, but then they're doing 800 HTTP requests per second. Not sure what could cause such a big difference (lots of AJAX stuff?), but I think 800/sec is an interesting thing to see them deal with; that's something in the region of 2bn a month, which fits your requirement for interesting-ness :p
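Running the same arithmetic the other way, as a rough check (this assumes the 800/sec figure is an average rather than a peak):

```python
# Scale a per-second request rate up to a monthly total.
REQUESTS_PER_SECOND = 800
SECONDS_PER_MONTH = 30 * 24 * 3600  # 30-day month

monthly_requests = REQUESTS_PER_SECOND * SECONDS_PER_MONTH
print(monthly_requests)  # 2,073,600,000 -- about 2.1 billion requests/month
```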


Perhaps the 800 req/second number is a peak figure? 36 requests a second average isn't meaningful when your traffic is spiky.


Probably a combination of other page resources (images, JS, stylesheets, etc), bots and peak access times.


which fits your requirement for interesting-ness

Sorry, the threshold of interestingness for serving static assets is even higher. ;)

A single nginx on a moderate host will barely warm up before ~10k reqs/sec. The network tends to be the bottleneck here. Anyways, ~800/s should be doable from a $10 VPS.


I don't think they're paying all this money for the fun of it


Hm, I'm not sure I follow. What money do you mean?


They have 10 web servers. If they could replace all that with a VPS or two, they would.


According to the article, only 3 are dedicated to Stack Overflow, which sounds about right to me.


The VPS figure was for serving static files, and a lot of the Stack Overflow content isn't static.


We (blocket.se) are hitting almost 1bn/mo and I can assure you that it's irrelevant whether the average page views per second can be handled by some basic hardware or not. During peak hours it's an order of magnitude difference. Worth mentioning, though, that we are tightly focused on only one country, and probably have larger variations during peak hours than Stack Overflow does. I'm not sure what our traffic looks like combined with our other local sites.


If your peak hours exceed the average by an order of magnitude then you have oddly shaped traffic. Most sites follow a bell curve where regular peak hours range around a factor of 2-3 above average.
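To make the two shapes concrete, a tiny sketch using the ~36/sec average from upthread (the factor-3 "normal" peak and the order-of-magnitude peak are both assumptions taken from the comments above):

```python
# Compare a typical bell-curve peak with the spiky traffic blocket.se describes.
avg_rps = 36  # average page views/sec from upthread

typical_peak = avg_rps * 3   # "normal" site: peak hours run factor 2-3 above average
spiky_peak = avg_rps * 10    # spiky site: peak an order of magnitude above average
print(typical_peak, spiky_peak)  # 108 vs 360 page views/sec to provision for
```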

Either way I didn't mean to discount their efforts. Was just trying to point out that their architecture is not very interesting from a performance point of view, yet.


I wish I had to figure out how to deal with a "mere 36/sec avg" number of visitors. I'd sell my company and retire.


architecture only starts to become interesting at 4-5 times their traffic.

Which suggests that even sites with a medium-to-large internet presence, which dominate their niche, can do so with "uninteresting" architecture.


Which suggests that even sites with a medium-to-large internet presence [...] can do so with "uninteresting" architecture

Absolutely. Hardware is evolving so fast (Moore's Law & friends) that we humans have a hard time keeping up mentally.

Go back six years in time and the traffic they're dealing with would have required roughly 24 servers, instead of the 3 that they have today.
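The 24-server figure looks like a straight Moore's-Law extrapolation; a minimal sketch, assuming per-server capacity doubles every two years (real capacity gains vary, so treat this as illustration only):

```python
# If per-server capacity doubles every 2 years, a box from 6 years ago
# handles 2**3 = 8x less load, so you'd need 8x as many boxes.
servers_today = 3
years_back = 6
doubling_period_years = 2

growth_factor = 2 ** (years_back / doubling_period_years)  # 8.0
servers_then = servers_today * growth_factor
print(int(servers_then))  # 24
```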

This is of course a rough extrapolation, and six years seems like an eternity on the internet calendar.

However, we're quickly approaching a point where there are only two scales left to worry about: "normal" and "web-scale", with only a few hundred sites falling into the latter category.



