
>Fun fact but for years updating GPUs led to a decrease rather than an increase in framerate.

Sounds more like a specific technical issue with a particular engine and drivers that lacked optimization, rather than a general rule about games and GPUs that held for several years. If you have a source for this, please share it.

Also, IIRC, back then the game physics were tied to the framerate, so having too high a framerate made gameplay completely wonky, as it messed up character jumps and weapon ballistics. So I assume those framerate limits with more powerful GPUs could have been there on purpose to keep the game playable.
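To illustrate (a minimal, engine-agnostic C++ sketch, not actual Source or GoldSrc code): when per-frame constants are tuned for ~60 fps, running at 300 fps applies them five times as often, which is exactly what broke jumps and ballistics.

    // Frame-rate-dependent update, tuned for ~60 fps: at 300 fps "gravity"
    // is applied five times as often, so jump arcs and projectile drop change.
    struct Body { float y = 0.0f; float vy = 0.0f; };

    void step_per_frame(Body& b) {
        b.vy -= 0.05f;   // arbitrary per-frame constant, only correct at one framerate
        b.y  += b.vy;
    }

    // Frame-rate-independent update: scale by elapsed time so the trajectory
    // is the same at any framerate.
    void step_with_dt(Body& b, float dt_seconds) {
        const float gravity = 9.81f;
        b.vy -= gravity * dt_seconds;
        b.y  += b.vy * dt_seconds;
    }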



In the modern era we still have problems of this sort - higher core counts can cause games to perform worse due to problems like a game suddenly having 96+ threadpool threads competing for small amounts of work, often belonging to the driver. It's still the case that a high core count Ryzen will have worse performance in some games (or problems like microstuttering) that you won't experience on a slower, lower core count Intel chip.
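A rough sketch of how that happens (hypothetical, not any specific engine's code): each subsystem sizes its own pool to the full hardware thread count.

    #include <thread>
    #include <vector>

    // Common sizing pattern: ask the OS for the hardware thread count and
    // spawn that many workers, regardless of how much work there actually is.
    std::vector<std::thread> make_pool(void (*worker)()) {
        unsigned n = std::thread::hardware_concurrency(); // 96 on a 48C/96T part
        std::vector<std::thread> pool;
        pool.reserve(n);
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back(worker);
        return pool;
    }
    // If the engine's job system, the GPU driver, and a couple of middleware
    // libraries each do this, the process ends up with hundreds of threads
    // contending for small, short-lived tasks.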


That's because game devs optimize for the lowest common denominator hardware gamers have at home, which until very recently was mostly quad-core chips with 8 threads.

A 96+ thread pool is way outside the norm for the average gamer or consumer, so you can't be surprised that game devs ignore the 0.01% of gamers running server/workstation CPUs with such an insane thread count.


In that case you might want to mask off some of the cores/hyperthreads, or even do core pinning. That's very common in applications where latency is important.
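For example, on Linux a process can restrict itself to the first 8 logical CPUs before spawning workers; users can also do it externally with taskset -c 0-7. A sketch (the Windows equivalent would be SetProcessAffinityMask):

    #include <sched.h>   // sched_setaffinity, CPU_SET (Linux-specific)
    #include <cstdio>

    int main() {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        for (int cpu = 0; cpu < 8; ++cpu)   // keep logical CPUs 0-7 only
            CPU_SET(cpu, &mask);

        if (sched_setaffinity(0 /* current process */, sizeof(mask), &mask) != 0) {
            std::perror("sched_setaffinity");
            return 1;
        }
        // ...start the latency-sensitive work here; new threads inherit the mask...
        return 0;
    }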


Masking off threads and using pinning does help, but you still end up with 96 threadpool threads (they're just competing for fewer cores now). The real solution would be to actually tell an app or game that it has a set number of cores (like 8). Sadly, AFAIK neither Windows nor Linux can do this, so the only option is a VM.
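The closest app-side workaround is to size the pool from an explicit override rather than from the detected count; a sketch, with GAME_MAX_WORKERS as a made-up environment variable:

    #include <algorithm>
    #include <cstdlib>
    #include <thread>

    // Cap the worker count with a user-supplied override (hypothetical env var).
    unsigned worker_count() {
        unsigned n = std::thread::hardware_concurrency();   // e.g. 96
        if (const char* env = std::getenv("GAME_MAX_WORKERS")) {
            int cap = std::atoi(env);
            if (cap > 0)
                n = std::min(n, static_cast<unsigned>(cap));
        }
        return n;   // e.g. 8 instead of 96
    }

Of course this only shrinks the app's own pools; it can't touch threads created inside the driver or third-party libraries, which is why a VM ends up being the only complete fix.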


Specifically, it sounds to me like ATI/AMD's notoriously poor OpenGL performance in action... Another way to rephrase the OP, taking the emphasis off hardware generation (which makes no sense), would be "when I switched from NVIDIA to ATI I lost 40% of my performance"... which is still an experience many people have today with OpenGL on AMD cards.


I could be wrong (I never really got into gaming, as GMA950 + Linux was an exceptionally bad combo for it), but I thought Source was exclusively DirectX back then.


HL2 had both OpenGL and DirectX on paper, but the DirectX implementation was so notoriously poor that everyone used and recommended the OpenGL renderer.

https://arstechnica.com/civis/viewtopic.php?f=22&t=1007962

https://arstechnica.com/civis/viewtopic.php?f=6&t=931690

https://arstechnica.com/civis/viewtopic.php?f=22&t=987307

Ironically, with Valve's push for Linux gaming, I'd now expect Linux to be a great platform for HL2, probably even better than Windows. Intel GMA drivers might still be bad, though; not sure if that is too far back for the open-source driver that showed up in the Core era.


I believe Source had both OpenGL and DirectX renderers, alongside a software renderer.

Edit: I'm thinking of the GoldSrc engine used in Half-Life




