> My theory is that all software eventually becomes difficult to maintain and full of warts, regardless of smartness, regardless of conditions.
If a code base doesn't change too much in size or original intent, then the architecture and design (if they were good in the first place and continue to be followed) will probably keep it fairly maintainable.
In a lot of cases, though, codebases slowly grow until they reach a size that requires a different architecture or approach to organising the code, especially if the number of collaborators increases too.
It's quite a hard thing to spot and then address while that codebase is still quite active.
Rewrites are tempting as a way to apply that architectural change, but often you are quite bound by the implementation-specific behaviour of the original system.
It might be interesting to look at how the Linux kernel has changed internally as it moved from a single-person project to what it has become today.
> often you can be quite bound by the implementation specific behaviour of the original system
Yes, this. Requirements accumulate over time, and that is what makes it harder to refactor production code to be cleaner. When you're not yet bound to your requirements, they can change; but once your requirements are set in stone, you lose the freedom to change them.
Choosing to rewrite already released code is likely to introduce regressions, is more difficult than rewriting unreleased code because you are not allowed to change requirements, and redoes work that was already done once. If management is paying attention, they will (and should) complain about paying for the engineering again. After all this, there are no guarantees it won't just happen again: things don't tend to stay magically clean after rewriting. That's assuming the rewrite even finishes cleanly. What often happens is that the engineers underestimate the time to rewrite because they didn't understand how well, and how many, things were working, and the effort gets cut short by management a third of the way through when the rewrite is obviously over budget. Now the codebase is messier than when it started, even though the engineers had the best of intentions and management gave them large swaths of time to try to fix things.