Hacker News

You may not be aware of it, but your comment came off as somewhat condescending, given that you don't really know where the parent poster is coming from or what their background is.


If someone says that premature optimization isn't a thing, I don't think it is condescending to point out that it is by posting the original source material. :-)


> it only wastes the programmer’s time. ... Wasting CPU time on the other hand bothers me a lot!

As always, the answer is: it depends. Programmer time costs money; CPU time is cheap by comparison.

If you're building something that runs occasionally, or is IO/UI/network bound, CPU time is largely irrelevant. But if you're building something that runs in a tight loop, or a library that will be compiled into millions of lines of code, then the extra programmer time will absolutely be worth the ROI.


Yes, but, again, this article is about standard headers. Since standard library authors cannot know whether they are writing for a performance-sensitive audience, it behooves them to provide completely visible, header-only implementations of everything.


The pImpl pattern costs programmer time as it's more code to write. By contrast compiling is just CPU time and you can trivially throw a bigger workstation at the problem.


This depends on who is writing the code and who is compiling the headers. A software developer who is building headers for someone else (an internal or external client) may trade the overhead of this pattern for a faster compilation time. Reducing compilation time by 80% may be well worth the overhead of adding 10% more code to an interface.

It is not always possible to just throw more hardware at the problem of compilation. For instance, one may be using a build pipeline that requires specific steps to be followed as part of gating tasks. The time it takes to compile code over and over again for unit testing, behavioral testing, acceptance testing, integration testing, etc., each impacts delivery time and handoff.

Earlier in my career, I worked with a code base that was approximately 10 million lines of code in size. Compiling it would take approximately 7 hours on the best hardware we could buy. The C++ developers were adamant about keeping their headers "complete," as they called it. With a few changes, such as forward declarations, abstract interfaces, and encapsulation, my team was able to reduce that compile time to less than 35 minutes. Based on profile feedback, we saw less than a quarter of a percent difference in runtime overhead. Productivity-wise, we shortened developer workflows so significantly that the effort was a better use of our time, cost-wise, than working on a billable project.

Most projects aren't nearly that bad, but it does go to show that it is possible to significantly reduce compilation time without significantly impacting runtime performance, even in C++.


> It is not always possible to just throw more hardware at the problem of compilation. For instance, one may be using a build pipeline that requires specific steps to be followed as part of gating tasks. The time it takes to compile code over and over again for unit testing, behavioral testing, acceptance testing, integration testing, etc., each impacts delivery time and handoff.

All of that is solved by throwing more hardware at it.

Alternatively if compile time is not the slow part of that pipeline, then you're prematurely optimizing the wrong thing anyway.

> Earlier in my career, I worked with a code base that was approximately 10 million lines of code in size. Compiling this code base would take approximately 7 hours on the best hardware we could buy. The C++ developers were adamant about ensuring that their headers were "complete" as they called it. With a few changes, such as forward declarations, abstract interfaces, and encapsulation, my team was able to reduce that compile time to less than 35 minutes.

In other words you only optimized the critical 3% of the codebase rather than prematurely optimizing everything with pImpl abstractions?


> Alternatively if compile time is not the slow part of that pipeline, then you're prematurely optimizing the wrong thing anyway.

Developer productivity does not matter in your world?

> In other words you only optimized the critical 3% of the codebase rather than prematurely optimizing everything with pImpl abstractions?

Yes, because nowhere in this thread have I advocated prematurely optimizing everything with pImpl abstractions. Those are words you have put in my mouth. pImpl is a single tool that can be used to improve compile-time performance. Not all the time, but in the fraction of cases where it is appropriate.


Well, to the extent that it is a thing it only wastes the programmer’s time. I happen to think programmers are spending too little time, so that doesn’t bother me. Wasting CPU time on the other hand bothers me a lot!



