Hacker News

Actually, this is one of the easiest ways to write slow software, because you're relying too much on the hardware to bail out your poor decisions. When you run the software on worse hardware, you'll see a super-linear slowdown.


This is from the perspective of designing a custom system for a particular business function, where the software and hardware are interlinked parts of one design (as a cloud platform would be, if one were used). In that context it makes sense to always consider them together, because that enables valuable tradeoffs of one against the other, instead of assuming that "this hardware must be powerful and efficient for any system" and "this software should run on any hardware." The software is never intended to run on other hardware, so questions like "running the software on worse hardware" are meaningless: the software is not an isolated component to be deployed somewhere else on unknown hardware; it runs exactly on the hardware you choose. And if you can save $5k of software costs by spending $1k on hardware, you should do it just as eagerly as if you could save $5k of hardware costs by spending $1k of software work. (This is quite different from mass-market software, where small runtime savings are multiplied across very many consumer devices running the same software.)

But for many companies building such systems (custom line-of-business software is probably still the largest share of software development happening in the world), it's highly probable that if you can "bail your poor decisions out" just by buying RAM, that is far more efficient than rebuilding parts of the software: a saved man-month of engineering can buy a LOT of RAM. A terabyte of it is probably cheap enough that it's not even worth carefully evaluating the options; a well-staffed meeting or two discussing the possibilities can cost more than simply buying a few extra RAM sticks.
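The man-month-vs-RAM comparison above can be made concrete with a back-of-envelope calculation. The figures here are assumptions for illustration only (a fully loaded engineering cost of $20k/month and roughly $3/GB for server RAM), not numbers from the comment:

```python
# Back-of-envelope: how much RAM does one saved man-month buy?
# Both figures below are assumptions, not data from the thread.
ENGINEER_MONTH_COST_USD = 20_000  # assumed fully loaded cost of one man-month
RAM_COST_PER_GB_USD = 3           # assumed rough price of server RAM per GB

ram_gb = ENGINEER_MONTH_COST_USD / RAM_COST_PER_GB_USD
print(f"One saved man-month buys roughly {ram_gb:,.0f} GB of RAM")
```

Even if the assumed prices are off by a factor of two either way, the result stays in the multiple-terabytes range, which is the point the comment is making.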


This sounds straightforward, but you're glossing over a lot of second-order effects. Notably, if this idea dominates your attitude, you'll be much more likely to miss the cases where a bit of judicious optimization actually would save substantially more than the hardware costs. And it may not be $1k of hardware costs, but hundreds of thousands of dollars per year in cloud compute, depending on the situation.


I've been getting my hands dirty root-causing a latency issue in another team's codebase at work. The program needs to run in real time, but the language is JIT-compiled and garbage-collected. In my most recent experiment, I successfully eliminated the most egregious spikes by turning off garbage collection entirely during the latency-sensitive phase. RAM usage crept up to 50GB, but my workstation has 64GB, so good enough? I also need to run the program 3-4 times to warm up all the code paths, since the program's behavior is latency-dependent. Just to make things extra spicy, the program also seems to spin off a rogue process that fills up my 1TB SSD and then dies after a few hours.
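The "turn GC off during the latency-sensitive phase" trick looks roughly like this. The comment doesn't name the language, so this is a minimal sketch in Python using its standard `gc` module; `latency_sensitive_phase` and its stand-in workload are hypothetical:

```python
# Sketch of disabling GC during a latency-sensitive phase (Python's gc
# module as a stand-in for whatever runtime the original comment used).
import gc

def latency_sensitive_phase(work_items):
    gc.disable()  # no collection pauses while the hot path runs
    try:
        # stand-in for the real-time workload
        results = [item * 2 for item in work_items]
    finally:
        gc.enable()   # restore collection afterwards...
        gc.collect()  # ...and pay the pause cost once, at a safe point
    return results
```

The tradeoff is exactly the one described above: garbage accumulates unchecked while collection is off, so memory climbs until the post-phase `collect()` (50GB in the comment's case), and the phase had better fit in RAM.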



