You are not wrong, but it is all about saving money on labor. The rest are just the constraints of the system you use (a.k.a. requirements). It's like complaining about the need to use POSIX on Linux.
Twenty years ago we had enterprise Java; it's still “there”, but running Spring is very different from what it used to be.
You’d simply upload an EAR or WAR, and the server it was deployed to would handle configuration like the DB, etc.
It worked perfectly (the EAR/WAR part, at least; the persistence framework was too verbose and high-level IMO, but that was replaced by Hibernate/JPA). There was too much configuration in XML, but that could easily have been replaced by convention, annotations, and some config.
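To make "convention, annotations, and some config" concrete, here's a rough sketch of what that convergence looks like today (class and table names invented for illustration, using the classic javax.* APIs; newer servers use jakarta.*): the mapping that once lived in a per-entity XML file is declared inline, and the container still injects the database connection, so the deployed EAR/WAR never hard-codes credentials.

    import javax.ejb.Stateless;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;
    import javax.persistence.Table;

    // Hypothetical entity: everything a per-entity XML mapping file
    // used to say is now declared inline with JPA annotations.
    @Entity
    @Table(name = "orders")
    public class Order {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        @Column(name = "customer_name", nullable = false)
        private String customerName;
        // getters/setters omitted
    }

    // Separate file in practice. The container wires the data source;
    // the application just declares that it needs a persistence context.
    @Stateless
    public class OrderRepository {
        @PersistenceContext  // injected from the server's configured persistence unit
        private EntityManager em;

        public Order find(long id) {
            return em.find(Order.class, id);
        }
    }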
Again... we are running in circles, and this industry will never learn, because most “senior” people haven’t been around long enough.
> Again... we are running in circles, and this industry will never learn, because most “senior” people haven’t been around long enough.
And that likely won't change in our lifetime, given the rate of growth in demand for software: we literally can't create senior engineers fast enough for there to be enough to go around.
As an aside, I have the privilege of working with a couple of senior folks right now in my current gig, and it's pretty fucking fantastic.
The percentage of seasoned engineers is so low that 'senior' as a title often seems to stretch to "whoever is most experienced around here". That's probably fine, since people understand that experience is not reducible to that title. But this does bring to mind a metric for finding "objectively" senior engineers:
What's the biggest idea you've seen abandoned and then reinvented with a new name?
I feel like we're just transferring the labour from ops to dev, though. Where I work, we still haven't got as good a development workflow with Lambdas as we did with our monolith (Django).
Optimistically, it could represent a positive trade-off that replaces perpetual upkeep with upfront effort, and all-hours patching and on-call with 9-5 coding.
In practice, I think a lot of those fixed costs get paid too often to ever come out ahead, especially since ops effort is often per-server or per-cluster. The added dev effort is probably a fixed or scaling cost per feature, and if code changes fast enough then a slower development workflow is a far bigger cost than trickier upkeep.
Moving off-hours work into predictable, on-hours work is an improvement even at equal total time, but I'm not sure how often that actually happens. Outages still happen, and I'm not sure serverless saves much out-of-hours ops time compared to something like Kubernetes.
I see your point, though POSIX imposes very few (if any) architectural decisions on application developers. The kinds of design choices we’re talking about are very different from those of POSIX-like utilities, so I’m not sure that analogy is a good one.