
website with that many requests


This is bound to cause problems - the slightest change in your stack (language, runtime, OS) would affect the numbers so much that it's completely pointless to rely on measurements taken now and apply them later. (That said, this is a perfectly valid choice when you are locked in - like in the embedded or game-console world - but that's rarely the case elsewhere.)

For example - https://go.dev/doc/go1.3#stacks https://agis.io/post/contiguous-stacks-golang/

So your measurements in Go 1.2 would probably have differed in Go 1.3 (and this is just one example, along only one of the axes that have changed).

It's not like you own the data structure there to control it.
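To make the point concrete, here's a rough sketch of how you might measure the per-goroutine memory cost yourself, by spawning parked goroutines and watching memory growth. The sample size and the use of `MemStats.Sys` are my choices for illustration; the result varies a lot across Go versions (e.g. the 1.3 contiguous-stack change linked above), which is exactly the point.

```go
package main

import (
	"fmt"
	"runtime"
)

// approxBytesPerGoroutine spawns n parked goroutines and estimates
// their memory cost from the growth in memory obtained from the OS.
// Expect different answers on different Go versions and platforms.
func approxBytesPerGoroutine(n int) uint64 {
	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)

	block := make(chan struct{})
	for i := 0; i < n; i++ {
		go func() { <-block }() // park each goroutine until we're done
	}

	runtime.GC()
	runtime.ReadMemStats(&after)
	close(block)

	return (after.Sys - before.Sys) / uint64(n)
}

func main() {
	fmt.Println("approx bytes per idle goroutine:", approxBytesPerGoroutine(100000))
}
```

Run it under two different Go toolchains and the numbers usually won't match - so hardcoding capacity plans based on one measurement is fragile.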


Not for a typical website. Even if you're getting a massive amount of traffic, like a million requests per second, you'd want each request to finish as quickly as possible, so they never pile up into 1M concurrent tasks. If the tasks have real work to do and can't finish quickly, then having 1M in-flight tasks compete for orders of magnitude fewer CPU cores may not be a good idea - it wastes lots of memory on their state, and it's hard to keep latencies reasonably even. You'd probably use a simpler queue and/or load-balance across more machines.

A scenario with this many tasks spawned is more realistic if you have mostly idle connections. For example, a massive chat server with tons of websockets open.


You might not have a choice. How should a load balancer be implemented? It has to keep those connections alive while the underlying services fulfill the request.


Also, I would quickly return something to say the website is busy, and apply some kind of back pressure - a single process can only handle so much. I have less experience there, though: what surprised me while working with Java for web serving was that if GC took more than 1% (or was it 3%?) of some time-sliced window, the whole process was killed (that's how the team/company had it set up) - i.e. "better to fail fast than serve slow", since serving slow can quickly hurt your business if it relies on ~100ms response times.




