Having 'incrementality', when it isn't actually going to correctly rebuild things when files change, is (in my experience) worse than having no incrementality at all. Having to remember when I have to manually disable the incrementality is an annoying overhead, and easy to forget.
If you can remember exactly when you need to manually skip the incremental build, that's great for you, but I find Make has enough of these kinds of footguns I don't recommend it to people any more.
Meh, it can practically never be perfect, and chasing perfection runs into diminishing returns that at some point invalidate the advantage of incremental builds anyway.
As I said, you can let the C compiler generate your header dependencies for you. But that's not enough, because what about headers and libraries that are external to your project? Keep in mind that they have dependencies too, and at some point the transitive closure gets really big.
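For context, the compiler-generated dependencies I mean are the GCC/Clang -MMD/-MP kind; a minimal sketch, assuming GNU make and made-up file names (recipe lines need literal tabs in a real Makefile):

    # -MMD writes a main.d / util.d fragment per object as a side effect of
    # compiling; -MP adds dummy targets so a deleted header doesn't break the
    # build. Note -MMD deliberately omits system headers (-MD includes them).
    CC     := cc
    CFLAGS := -O2 -MMD -MP

    SRCS := main.c util.c
    OBJS := $(SRCS:.c=.o)
    DEPS := $(OBJS:.o=.d)

    app: $(OBJS)
            $(CC) $(OBJS) -o $@

    %.o: %.c
            $(CC) $(CFLAGS) -c $< -o $@

    # pull in whatever dependency info previous builds produced
    -include $(DEPS)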
To a lesser degree, what about the compiler, linker, or any other random part of your toolchain changing? (At least those are supposed to mostly keep a stable interface, which isn't the case at all for external libraries, but in some cases the toolchain can cause exactly the footguns you mentioned.)
A lot of the time, you don't need to rebuild everything just because a system header or the toolchain changed, but your build system would have a hard time knowing that. At some point the halting problem even gets in your way: you can't reliably detect whether a given change in a header file requires rebuilding a given source file, which is what you'd need for "fully incremental" builds. So it's always a trade-off.
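One coarse workaround for the toolchain part (a sketch, assuming GNU make, so treat it as a starting point rather than gospel) is a stamp file whose timestamp only changes when the compiler identifies itself differently, and making every object depend on it:

    # compiler.stamp is only rewritten (so its mtime only changes) when the
    # `$(CC) --version` output changes; swapping compilers then forces a full
    # rebuild, while normal builds stay incremental
    COMPILER_ID := $(shell $(CC) --version 2>/dev/null | head -n 1)

    compiler.stamp: FORCE
            @echo '$(COMPILER_ID)' | cmp -s - $@ || echo '$(COMPILER_ID)' > $@

    FORCE:

    %.o: %.c compiler.stamp
            $(CC) $(CFLAGS) -c $< -o $@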
Personally, I've fared very well with generated header dependencies (sometimes manually declared ones or none at all for small projects), and many other projects have, too.
YMMV of course, but I don't observe this to be bad. Most people who program C and use make are, I think, aware of what header mismatches can cause, and how to avoid that.
I think one fundamental difference between the two of us (I'm guessing here, let me know if I'm wrong) is that I'm willing to cope with longer compile times as long as I always get a correctly built executable (so no 'halting problems': if in doubt, rebuild it!), while you would prefer faster build times, even if that sometimes requires a little manual work when the build system doesn't realise it needs to rebuild? Of course, no system will be 100% perfect, but you can choose where your trade-off point is.
Two reasonable viewpoints, but it probably affects how we do our build setups!
I think that's fair. On top of that, being a low-level systems programmer, I think I'm pretty good at realizing/intuiting when something critical changed that needs me to nuke the build dir, i.e. the manual work you mention to force a full recompile (and also good at noticing mismatched headers even through very weird symptoms).
I do have to switch between multiple SDK versions very often for example, and I always either nuke the build dir after doing that, or have separate build dirs per SDK in the first place.
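Concretely, the per-SDK build dirs are nothing fancier than keying the output path on the SDK; SDK_VERSION here is a made-up variable name, GNU make assumed:

    # one object tree per SDK, so switching SDKs can never reuse stale objects
    SDK_VERSION ?= 10.0
    BUILD_DIR   := build/$(SDK_VERSION)

    $(BUILD_DIR)/%.o: %.c | $(BUILD_DIR)
            $(CC) $(CFLAGS) -c $< -o $@

    $(BUILD_DIR):
            mkdir -p $@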