I'd heard the quote about no plan surviving contact with the enemy, but I'd never bothered to look up the source. It's a great analogy, and I think you've restated a nice chunk of my working theory better than I did, especially the idea that an architecture is a hypothesis.
Under this framework, maybe I can reasonably suggest that software has at least three big enemies: customers, management, and the coders themselves. We are one of the enemies for lots of reasons, in part because coding-style fashions change quickly. One year everyone is doing OO, with class hierarchies and templates and code kept strictly separate from markup; the next year everyone is favoring composition over inheritance, smooshing their code and markup together, and talking about how much better it is than the old ugly way.
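To make that fashion swing concrete, here's a minimal sketch (hypothetical classes, not from any real codebase) of the same behavior written both ways:

```python
# One year's fashion: inheritance. The service IS-A Logger.
class Logger:
    def log(self, msg):
        print(f"[log] {msg}")

class InheritingService(Logger):
    def handle(self, request):
        self.log(f"handling {request}")
        return request.upper()

# Next year's fashion: composition. The service HAS-A logger,
# handed in from outside instead of baked into the class hierarchy.
class ComposingService:
    def __init__(self, logger):
        self.logger = logger

    def handle(self, request):
        self.logger.log(f"handling {request}")
        return request.upper()
```

Both do the same thing; the difference is purely which shape the codebase's current style guide blesses, which is exactly why a project that outlives a few of these cycles ends up with both.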
People who didn't join a project at its beginning tend both to complain disproportionately about its bad practices and to contribute disproportionately to introducing new styles that make the codebase less consistent. After a few rounds of that, it's no wonder things start to look messy in any sizeable project.
Moltke the Elder (https://en.wikipedia.org/wiki/Helmuth_von_Moltke_the_Elder) wrote that to understand military strategy you have to understand two basic things:
- No plan survives contact with the enemy
- Strategy is a system of expedients
In other words, you can make all the plans you want, but as soon as you put them into action they have to take into account the behavior of the enemy, which may or may not be what you predicted. And once events start departing from your predictions, gaps in your plan will appear that you will have to plug with whatever you have at hand, because once shots have been fired you can't un-declare war and start over with a new plan that incorporates what you've learned. You're hip-deep in the muck now, and have to struggle through to the other side as best you can.
Something similar could be said about designing software. If no military plan survives contact with the enemy, no software architecture survives contact with actual users.
The architecture of a particular piece of software is, at root, a hypothesis: given problem X, here is how one could go about applying a defined set of computing resources to solve it. And at the beginning those architectures are always clean, because they're being applied at a purely theoretical level where the things we don't really understand about the problem aren't evident yet. And since it's all theoretical, there are no warts; re-drawing the architecture on the whiteboard doesn't inconvenience anyone, so we can do it boldly and often.
But at some point you have to translate that beautiful architecture into working software and put it in front of real people, and that's where the problems start. Because those real people will use the software in ways that surface facets of the problem you didn't appreciate, forcing you to modify it to keep up. And because now making changes means inconveniencing real people and losing actual money instead of just scrubbing off a corner of a whiteboard, those changes will have to be conservative and expedient rather than bold and sweeping. And this is where the warts start creeping in, as you try to drag your original vision into some form that actually fits the real world as quickly and cheaply and non-disruptively as you can.
If the architecture is the hypothesis, the software is the experiment.
And you may think, after running the experiment once, that if you could just start over with a clean sheet of paper armed with what you know now, this time you'd "get it right". But of course the real world isn't static, so by the time you develop a new hypothesis and are ready to run the experiment again, you often find the ground has moved out from under you. Your hypotheses are chasing a moving target, and so the need to patch them up with duct tape and baling wire never ends.