Why does it take so long to build software? (2020) (simplethread.com)
221 points by fagnerbrack on Jan 27, 2021 | 172 comments


Software takes a long time to build because it's always new.

That is, software is trivially copy-able, so there is no reason to spend effort duplicating any software that already exists. (Legal reasons and "not-invented-here" syndrome notwithstanding.) This is a huge difference from how the "real" world works, where almost all of the work involved in building, say, a car, is actually just the work of assembling identical copies of that product so that it can be physically sold to more than one person.

The insane amounts of repeated effort involved in building any physical product at scale have a silver lining, in that the repetition enables a very good and stable estimate of how long another repetition will take. Software is the opposite situation --- we only need to build anything once, but not having built that thing before, we don't really know how long it will take.


I think focusing on this comparison to real-world assembly and construction doesn't yield much insight. The equivalent of a car being assembled is a compiler building an executable from source. You can consider factories to be just very, very long compiles, and forget about it.

The question of why software takes a long time to build pertains to the coding phase, whose real-life analogue is the drafting/design phase. It is my impression that the drafting/design phase in "hard" industries can be just as messy and hard to schedule as software development is - we just don't pay attention, because for physical products, drafting/design is nearly free (capex & opex of the assembly phase dominates), whereas in software, the construction is nearly free (compiler time is too cheap to meter).

It's true that, on the surface, software is "always new". But after doing it for a while, I have this sinking feeling that a lot of it is old and repetitive work that we haven't yet automated - there are deep similarities between pieces of code we're writing, but we don't have a language to fully capture them and abstract them away.


I thought the original article was very poor. There's no real data, no references (except Fred Brooks), and the graphs are just nice shaded opinions based on nothing much.

It fails to ask why the complexity of tools and systems grows over time, and whether this is matched by improvements in UX.

It also fails to note that one of the goals of abstraction is to hide inherent complexity. And in fact this happens time and time again in engineering and product design - an insight that starts off requiring near-genius levels of original mathematical creativity is packaged into commodified tools that make it easy to do a certain job.

Sometimes the tools are aimed at engineers, and sometimes they're built for the public. But because no one expects to literally reinvent the wheel, they all package genius in their different ways.

Except software. In software, teams keep reinventing wheels completely every few years. Some are arguably more refined than existing wheels. But they're all wheels.

They're not a new kind of thing - like an engine. Or wings.

So I agree. Software has mediocre lumpy wheels. They're mostly back-references to existing software, and not so much attempts to commodify and simplify standard problems - like security, reliability, scalability, and UX.

There are packages for all of the above, but they're rarely clean and elegant - in the sense that someone has really understood the problem and designed a maximally effective but minimally complex automated solution.

Worse, there are no processes for efficiently abstracting a domain. And that's what's missing.


I agree. Software is "always new" but it often shouldn't be.

I work for an engineering services firm that does mostly embedded systems. I push hard for reuse, basing our work on standard platforms, etc. Unfortunately, client needs vary so much -- one client might need a wearable, another wants a large system that is mounted on a skid, another needs something to sit on a desktop, all with hugely different feature sets -- that it's difficult to find a small set of devices that we can reuse. So one of our big problems is that we're very often bidding on a project that uses a device, or a technology we've never seen before and that's where the uncertainty comes in.

Patterns certainly help, higher-level languages would definitely help (I'm the only one in the organization programming small processors in C++, everyone else is only comfortable with C), but that's a slow, risky change.


> for physical products, drafting/design is nearly free (capex & opex of the assembly phase dominates)

It really depends on the industry. It's particularly bad in heavily regulated industries (pharma, aerospace, ...). On the other hand, creating a blog is probably the equivalent of designing a new t-shirt.


Fair enough. I hadn't considered regulatory impact. On top of that, I wasn't thinking about pharma, or a lot of formulated products.

That said, setting aside the regulatory burden, from the little insight into the internals of chemical processing I had, there may be larger costs in designing and tuning the "build pipeline" (all the coolers, valves, reactors, etc.) than in designing the formula itself. That's just a consequence of scaling up being expensive where your production involves manipulating matter.


> The equivalent of a car being assembled is a compiler building an executable from source

I disagree. The equivalent of a car being assembled is copying the binary executable from one storage medium to another; building an executable from source would be the equivalent of building the production process for a single car model, while coding is the equivalent of designing and testing that model.


This is where the analogy breaks down somewhat - so I propose you imagine a universe in which `cp` doesn't exist and you have to build each copy of an executable from scratch :).

(Perhaps you're signing each copy with a different key, or something.)

I insist on connecting compilation to construction of individual cars, because the source code corresponds to the engineering plans of a car, and not to the plans of the production process.


Sure, every car has a unique VIN and every running process is a different constellation of electrons (or just the one, if one-electron universe is assumed).

But looking at getting functionality to users, looking at the business problem, the time/effort/risk is spent on car design and software development, and much less time and risk is in the copying business. (Yes, Tesla recently had car copying problems, but they brute forced their problem, and now they are pretty good at factory copying too.)


This analogy is flawed; you can't capture insight from its flaws.


The best comparison (car/machine manufacturing vs software development) I've ever heard on this topic is as follows:

While producing a machine, there are two steps:

1) designing the blueprint (takes a lot of time, unpredictable)

2) mass production (takes a lot of time, predictable)

The same steps in software development:

1) building the software (takes a lot of time, unpredictable)

2) deploying the code (takes virtually no time, predictable)

So the real difference is this: software is almost all about doing something new, each and every time. It's all flesh and no bone: i.e. devoid of a long, very stable and predictable mass production stage. It's almost all R&D, for all of its length. Hence 98% of the process is novel work, which therefore takes time and is hard to predict.


To note: all engineering has this problem.

Yes, building cars in a factory yields repetitive results, but what about all the different engineering projects? The truth is that this is not a new problem; we just lack the discipline of fields that have been refined over thousands of years.

Because of the quick idea-to-prototype turnaround, product research and refinement take a back seat. All engineering projects have massive research before the first stone is placed. Nobody midway through building a skyscraper decides "oh, do we need to redo the foundation?"

I would also like to note, nobody is gonna give a self-taught architect who's 19 the ability to build that skyscraper foundation. Each building technique is developed, tested, and then once determined good, is actually used by others, and supervised by an industry veteran with underlings who learn and eventually themselves supervise such projects. Meanwhile in software it is the opposite.

Not to say that there isn't a reason software grows so fast. Things change daily. If software worked like architecture we'd be licensing even the right to use ruby on rails-style modeling. But software works differently, thus faster iteration, thus more loosey-goosey, thus harder to estimate.


> I would also like to note, nobody is gonna give a self-taught architect who's 19 the ability to build that skyscraper foundation. Each building technique is developed, tested, and then once determined good, is actually used by others, and supervised by an industry veteran with underlings who learn and eventually themselves supervise such projects. Meanwhile in software it is the opposite.

It's a bit less black and white: Any 6+ year old might build his own tree house, while in reality no one will let a self-taught kid write the control software for a nuclear power plant.


> Software takes a long time to build because it's always new.

And yet it feels like authentication and authorization code needs to be rewritten for every application.


Not in my experience; basic session auth is easy enough, and OAuth has been around for a while now.

What keeps giving though is CRUD. Naming things, forms, APIs, persistence, every time, rinse and repeat.
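
To make the rinse-and-repeat concrete, here's roughly what every one of those endpoints looks like. This is just a minimal sketch; Express and an in-memory array are stand-ins for whatever stack and persistence you actually use, and the names are made up:

    // The same plumbing every time: name the resource, parse the input,
    // validate it, persist it, shape the response. Only the nouns change.
    import express from "express";

    interface Todo { id: number; title: string }

    const app = express();
    app.use(express.json());

    const todos: Todo[] = []; // stand-in for real persistence
    let nextId = 1;

    app.get("/todos", (_req, res) => res.json(todos));

    app.post("/todos", (req, res) => {
      if (typeof req.body.title !== "string") {
        res.status(400).json({ error: "title is required" });
        return;
      }
      const todo: Todo = { id: nextId++, title: req.body.title };
      todos.push(todo);
      res.status(201).json(todo);
    });

    app.listen(3000);

Swap "todos" for "invoices" or "users" and you've typed the same thirty lines again.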


That's because it is mostly integration being done, not so much construction. AA needs to permeate the whole project, from the details of a function call up to high-level design, like making sure an actor can call another part of the software stack to perform an otherwise unauthorized operation.

"Building" software can mean any number of things. When you get down to the specifics, there is no getting around using specific language.


I don’t rewrite bcrypt, and authorisation is intrinsically linked to your data model.


Only vendor-specific code, so it's not that bad. What business do most people have rewriting auth stuff?


If that were true, it wouldn't take long to build from scratch a functionally identical clone of an existing application. Yet it does.

The conclusion is that the tools to build software from a known specification to working code are sub-optimal.


That's like saying a Ford F150 is functionally identical to a Ford Model A because they are both cars. The specific implementation under the hood requires a lot of work. And very rarely do people ever make functionally identical clones without massive refactoring to allow for extensibility.


"implementation under the hood requires a lot of work"

"people don't make functionally identical clones"

"massive refactoring to allow for extensibility"

You're enumerating the factors that make my point true. Those are some of the reasons why building software takes a long time even if you know exactly what it is supposed to do.

You can compound that with the problem of figuring out what the software should do, which classic IDEs are also bad at. Online collaboration and prototyping tools put the program in front of stakeholders early; they're like the focus groups car makers use to design cars tailored to a target demographic group.


Building the first F150 is an act of creation and design. Assembling the second one is finding the same parts and doing the same assembly steps; the thinking is already sorted out.

Building the first Model A means going back to the drawing board because it's not the same. Even at a smaller scope, building an F150 with an updated exhaust means going back to the drawing board for that part, and making sure it keeps working correctly with all the other parts.

It turns out that with software, once you have one fully assembled "F150", a second identical one can be created out of thin air (cp / git clone / run compiler / whatevs). So the effort is ~100% on thinking and design.

Now why would you want 2..n F150s? Because you want to have them in the wild doing something useful. That's making them available and distributing them and having more of these as needs increase. That's cp'ing your code onto servers and deploying and running it and making it reachable. You do that with DigitalOcean, Chef, K8S, or whatever AWS service du jour, and hopefully you automate as much of it as possible just like we have car factories with robots to not build these with human hands.

Deploying and spinning up more of that same code so that it keeps up with the scale of whatever it needs to do... that's ops, and that's basically the only part of software that remotely resembles industrial work. The part before that is development, as in "research & development", and that's 100% not "industrial" by any stretch of the imagination.

Thinking that developing software is like building a house or a car and "industrialising" things with waterfall or Kanban on SCRUM and pulling bogus time estimates to completion is the biggest lie ever, just like pulling out the next vaccine or devising a new maths theorem cannot have any reliable time to completion because you're constantly up against unknown unknowns. This field just flat out doesn't work that way.


> The conclusion is that the tools to build software from a known specification to working code are sub-optimal.

That may be true, but I don't know that your example supports it.

Cloning an app doesn't mean you had a specification for the app. The tools to build a known specification from an existing app are sub-optimal, as are the tools to build a known specification from scratch.

I've only rarely in my career had the benefit of a written specification.


> The tools to build a known specification from an existing app are sub-optimal, as are the tools to build a known specification from scratch.

That actually reinforces my point ;-) All the steps in the toolchain could benefit from more agile interactions, allowing the programmer to spend less time fiddling with syntax errors and recalling which functions need to be applied in what order, and more time evaluating and fixing errors in the current logic as written. That would expose what the program being built is doing and how it differs from the expected intent.

The online notebooks used in data analysis (Jupyter, Apache Zeppelin) are a step in the right direction; IMHO their approach of 'data is always readable beside the code processing it' makes for a better introspection infrastructure than the REPL loops of old.


Basically we're going back to what Delphi had 30 years ago. I understand that scalability concerns warrant a different approach, but a lot of the software we write will not deal with Google or Netflix scale.

I keep going back to the example of Elixir/Erlang (mostly because I'm getting into it now), but the tooling that is available in Erlang is much more powerful than New Relic or whatever monitoring/introspection tool we use now. And we've had it for years, but we've just ignored it.


While your second sentence is definitely true, there are some subtleties that also matter.

Car specs are (certainly as opposed to software specs) very exact and constrained by dimensions, chemistry and physics. None of those have version numbers.

Creating a particular model of a car is decidedly single-paradigm. The same cannot be said for a functionally identical clone. You could make a functionally identical clone of an app in OOP or FP, nodejs, Rust or C++, MVC, MVVM or MVU, as a polyglot SPA + backend combination...

All this to say: software as an industry also lacks standardisation, process documentation and documented best practices. Software "best practices" are more like "accepted dogma at the time" or "least likely to bite us later", not "will work 99.999% of the time". Software as a product is immaterial to start with.


I don’t want to sound like a smartass, but building an identical clone of an existing application is very fast and often totally automated. That’s what your compiler chain does.


What does the compiler do if you start "from scratch", i.e., without any source code?


Change the criterion to rewriting it in another framework and the point still stands. You already have a perfect specification in the existing code, no unknowns at all, just implement it. Yet it still takes months or years.


A good physical analogy is the historical Soviet recreation of the USA's B-29 Superfortress as the Tu-4, based on copying all the parts (from 4 captured aircraft), but needing rework to metric standards (i.e. wherever you have, for example, 1/4 inch i.e. 6.35mm sheet metal or fasteners or whatever, you have to pick 6 or 7mm instead (because your "different framework" doesn't supply 1/4 inch stuff), which changes the weight distribution and structural integrity, which requires additional tweaks). You're making a functional copy, and it still took more than a year just for the designs. The same goes for other physical objects - as long as the underlying framework or supply chain changes, it tends to require re-engineering.



At the risk of sounding like another smartass, it can be simpler than that - just install the already compiled software. In a sense, the thing about composing software using "microservices" is about "install it and get going".


Parent wrote "functionally identical", not "identical".


And add to that that the amount of custom software is many times greater than for most things in real life. Nobody asks for their BMW to transport a cow, or for their BMW to connect / communicate with someone else's Mercedes.

But we often ask for an accounting app to be able to handle employees' payroll and paid leave.

Then, with that much demand for custom software, we also get many times more software manufacturers than car manufacturers. Inexperienced manufacturers (programmers) make development even slower. Not to mention that clients often don't even know what they want.


Every once in a while when I see an example of an absurdity that wouldn't happen, I head off to Google to see what I can find.

> Nobody asks for their BMW to transport a cow

How about a cow transporting a BMW?

https://carnewschina.com/2013/02/22/bmw-owner-in-china-is-an...


One aspect that is not mentioned is that we build software on top of an ever-increasing number of first-to-market, low-quality, building blocks. And by low-quality I mean "worse is better"/MVP/"everyone makes mistakes"/"leaky abstractions"/etc -- pick your favorite. As a result, we spend more and more time dealing with someone else's mistakes rather than making forward progress.


I dunno, there's a lot of really high quality stable software out there. It's not all crap.

But as someone who has had the luxury of taking time to do things the Right Way let me tell you from firsthand experience: doing things the Right Way is incredibly hard in no small measure because figuring out what you actually want to do is incredibly hard. There have been many times when I thought I was building something for the ages only to discover that I had made a bad assumption, or technology changed, or the market changed, or my own desires changed. The process of meeting human needs is messy because both human needs and the tools we have at our disposal are a constantly moving target.


> The process of meeting human needs is messy because both human needs and the tools we have at our disposal are a constantly moving target.

Software development is inherently explorative. Finding the right solutions is exactly that: finding, discovery, learning and play. IMO this is best enabled by fast feedback loops and highly dynamic, interactive systems and visualization.

Sometimes it is possible/feasible to parametrize a tool beyond what it is supposed to be doing to enable this kind of play and discovery, but also to make the process of building data, plumbing and so on just a bit more efficient and fun.

Game programmers get that: At some point while developing a game, they create the tools that produce the data, or the parameters, typically controlled with a visual interface, a configuration language or a scripting language. Level editors, state machines, behavior trees, story boards, flow scripting etc.

Another field that does this well is scientific computing: they use Jupyter notebooks etc. with integrated REPLs and graph visualization.

The whole "no-code" and "low-code" trend[0] also shows that people are willing to program with constrained, visual languages. It empowers them and connects their mental model more directly to a product (instead of having to go all the way through a team of implementers for every change that could be exposed as data).

[0] I personally don't like the terms "no-code" and "low-code" at all, because they describe what it is not, instead of what it is: visual programming. It's like "no-sql": could be anything from a configuration file, to a document db, to a key-value store or an ACID graph db.


Wow, after your comment about having the luxury of taking time I wanted to find out how. I found your website in your bio. Very impressive and very cool to see even huge successes find the time to comment on HN.


Thanks for the kind words. HN is one of the few remaining bastions of sanity in today's on-line world, so yeah, I make time for it.


You are right, not all is crap. But too many people are not aware of how much is crap and are pretty naive about using libraries. I don't think it is an accident that, e.g., the Java universe has uncountable numbers of libraries and Java projects have unbelievably many dependencies, while e.g. Lisp has a perceived lack of libraries. There are many reasons for that, but one is that Lisp programmers tend to rely on libraries less.


> Lisp has a perceived lack of libraries

Lisp is also widely perceived as an interpreted language. Willful ignorance does not make reality even if it is widespread.


Totally agree.

Move fast and break things? Let’s not. Let’s build carefully and methodically. Teach others how to build quality software. Stop regurgitating what you watch in a 4-hour YouTube course. I’ve seen horrific, I mean absolutely bottom-of-the-barrel code being taught to others. Especially in the JS community - yes, I’m picking at you guys again.

When teaching goes to shit, you’re breeding, propagating, and institutionalizing horrible ways to do something - amplified 100x because YouTubers are chasing viewership. That 8-hour code camp course is better replaced by reading good books and docs. Actually build something by thoroughly reading the docs.

Now you've got 100x more developers building foundational blocks that other developers blindly build atop.

Study what Unix did when they were building small, composable, high-quality building blocks. Still used today after 45 years!


Move fast and break things itself doesn't imply you shouldn't go back to patch it up and make it cleaner. People should move fast so they don't end up in endless discussions, or spend too much time creating a foundation for a solution that doesn't work, and to inhibit perfectionism. People should also be transitioning from "make it work" to "make it good" once it works, prior to delivering or finalizing it.

This is largely a problem of people being unable to shift practice according to context, lazy developers, and managers thinking "it works" means "ship it and never look back". Unfortunately, there is no perfect cure for lack of foresight, or for unwillingness to listen to the guy saying PoC code will cause problems at some point down the line.


> Move fast and break things itself doesn't imply you shouldn't go back to patch it up and make it cleaner

Lol :-D. You haven’t worked in a shop, have you?


Of course people should 'move fast', unless of course they should actually move slowly, or move glacially, or move moderately quickly, or with utmost urgency... the trick is knowing what's actually right, isn't it? That's where the metaphor breaks: this isn't like driving a car where the right speed is obvious.


> Especially in the JS community - yes, I’m picking at you guys again.

Don't worry, we're not offended because we know it's true.

I bet that most of NPM's package index consists only of weekend prototype projects that are abandoned afterwards. It's sad to see that there's literally no baseline of quality measurement on NPM, and people give packages stars far too quickly, without realizing that what they're starring is literally a single line of code with megabytes of useless testing around it.

Most of the patterns in UI/UX frameworks that come and go all the time are actually very very old paradigms that have been known in Computer Science since the 60s-70s. I always feel like no one reads a book about Software Engineering or Software Patterns anymore, let alone tries to find patterns in alternatives and makes a pro/contra list of features to find out what they actually want.

All go hush hush and rush rush to put out their next starlet on GitHub, without actually thinking about a software architecture anymore. Those that do are somehow invisible to the masses, and can never really gain traction for their ideas, which leads to the abandoned code problem either way.


> Study what Unix did when they were building small, composable, high-quality building blocks. Still used today after 45 years!

Don't worry, systemd has integrated building blocks that can replace all that. Those pesky reusable, composable modules won't hit their 50th birthday.


It really depends on the situation. Some stuff you won't need one month into future, other stuff you'll need for decades. Manage your efforts wisely. Time after all is limited.

Many companies wouldn't exist if they hadn't moved fast, and had instead tried to perfect everything rather than prioritising getting to market.

Don't put the same level of care into the weekend side web app you are creating for fun as to the rocket ship you are building.


The trouble is knowing how long your code will work.


This has not been my experience in backend development (and some dabbling in building small React frontends). Our building blocks are FOSS components that are quite robust and widely tested, and the bugs, mistakes and shortcuts we have to deal with are almost exclusively of our own making.


"There are popular bad libraries out there that people base their software on." "I use good libraries"

Not really relevant.


Fully agree.

At first I was inclined to comment "it doesn't" because I can easily build small but useful tools in a matter of days.

But your comment made me realize that maybe the reason is just that I keep using the same old C++ libraries to avoid surprises.

In my last Ruby project, critical APIs changed multiple times during development. But Boost / openssl / curl / TBB / MKL are surprisingly API stable, given how much has changed under the hood.

Maybe conservative languages attract conservative programmers who conserve time by conserving APIs.


I'm a high-level guy, and I avoid using anything under 20 years in existence.

All the easy problems are already solved and well-documented, and less likely to break my code with a new release.

I then try to write code in such a way that it would have worked 15 years ago and today both, working around platform changes.

My current stack is Perl, HTML, CSS, SSI, PHP, SQLite, PGP, txt, and JavaScript.

And yes, my sites do work in Netscape 2.0+, IE 3.0+, Lynx, Links, w3m, and with a few settings tweaks, also Mosaic.


I refuse to write anything that supports any IE version < 11 on principle. And I will relish the day I can kill support for IE11, which is hopefully rapidly approaching. I admit I do enjoy a stack which includes text files, though.


It seems to me like that stance does nothing helpful, only lets the developer off the hook of attempting something difficult and annoying. At the same time, it leaves human users who can't change their browser high and dry.


Worry not, in IE terms, 'rapidly' means 'within this decade', so I will unfortunately be supporting the last, not really venerable, version of it for a while. But even just messing around with CSS in old versions is painful: it's not just 'stack of 15 years ago' if you support IE6, it's also leaving out pretty much everything except the basics of text content, lest you spend really horrible amounts of time getting something to work.


You're right that a lot needs to be left out or put behind feature checks. But I can't say I've spent more time tweaking for IE6 than for most other browsers. I've probably spent the most tweaking work on IE3, Mosaic, and Netscape 2.


It is a business decision. Deliver a feature faster to 99.9 percent of people, as opposed to later for 100 percent. Neither is right in all scenarios, which makes the whole argument pointless without a specific scenario. There's a slider you can move, with support percentage and the time and effort required to make it happen. It is about balancing trade-offs. Taking one fixed stance will just make you inflexible.


Less annoying work for me is something helpful in my books. If I can make my job easier at the expense of people using unsupported technology, I will, otherwise I'd be stuck supporting that one guy who refuses to move from Mac OS X 10.4.


It's not mentioned specifically, but I think this very much falls into the "Accidental complexity" bucket. In the same way he describes someone choosing to use Mathematica for solving a problem - a developer choosing an obscure technology or writing poor code is just more accidental complexity.


I think you are using the wrong building blocks. Build your own blocks or use ones that are solid with good test coverage and many of those issues go away.


I really like Dorian Taylor's analogy of software to movie production: https://doriantaylor.com/softwares-ailing-mythology Their point is that software is a refinement of a process to the point where it can be executed by a computer (in a generally bug-free manner). Just like a movie spends a lot of time in pre-production and writing the script, we must spend a lot of time on refining our own understanding of our processes, before trying to explain them to computer via code. The faster one jumps to code, the more likely one is to have to go back and fix things, delaying the delivery of software.

So software is more similar to writing novels or movies, than construction, where a lot of time is spent just thinking through things, and going through the ups and downs of "writer's block". Perhaps one way out is to loosen the delineation between making software and using software (a la spreadsheets, Smalltalk) and moving towards more programmable environments, so one doesn't deliver software, but rather allow users to build their own as part of refining their processes.


It's because most software developers add negative value. Most of them aren't investing in learning at all and are just playing around and understanding things on the surface level. I'm fairly certain it takes much more learning and effort to keep up to date as an accountant than a software engineer.

I've been getting into Elixir and was very sad to realise that a lot of the best practices have been enshrined in the Erlang OTP library for 20 years and we've forgotten about it.

Think how many engineers you know that can reason well about db indexes or could implement something like redis given enough time.

On the one hand it's fairly depressing; on the other hand it makes sense from the incentive structure - business people treat them like "nerds", they're not on the same career path as someone in management, PMs come from weird professions that have nothing to do with software and are focusing on weird Scrum processes and Jira tickets...

At least that has been my experience. I've seen massive amounts of wasted resources, which has made me a bit jaded.


"it takes much more learning and effort to keep up to date as an accountant than a software engineer"

you are wrong on many levels... accounting is a field that moves slowly and is highly regulated

Software is still in the wild wild west phase, and it catches and loses trends all the time. We are currently using languages and platforms that didn't exist at all 10-12 years ago, and in 10 years whatever you are doing now might be completely obsolete, and you'll have to retrain and retool.


Why is that? It's because we don't learn from the past and keep thinking we're the smartest and have to reinvent everything with every new generation of developers.

I've worked at a company where Elasticsearch, Postgres, message queues, custom microservice platforms, and custom CI/CD platforms were deployed for a load that could fit in the RAM of one machine. With maybe 10 customers per minute.

And you're wrong about accounting: new laws and regulations are being passed constantly, and one has to keep up with them all the time.


It's important to distinguish between anecdotal experience and industry trends. A lot of great tech was invented to solve very real problems that groups of people faced. Once it gets hyped though, everyone wants it in their stack because they've read about it in blog posts or want to give a conference talk about it or <INANE_REASON_X>.

Elasticsearch, Postgres, message queues etc. were all invented by people when they faced real problems. As such, I don't view them as reinventions, but simply as inventions. Software engineers tackle a problem space that's truly ginormous and diverse. It seems quite natural that this situation would spawn a multitude of tools to manage that complexity, get a competitive edge, or just build a product that wasn't possible before.


A lot of new tools were indeed invented to solve real problems, albeit not necessarily real problems shared by as many people as subsequently adopt those tools.

But there is also a huge amount of reinvention because, as mstipetic was saying, a lot of developers just don't study their subject any more. It's so superficial, rarely learning lessons from history.

In particular, that's how we get the modern world of web development, with new tools and frameworks every few months that do require retraining but don't contribute many new ideas. This is an industry where the dominant programming language for the front end literally had no standard concept of modules until relatively recently. To this day, almost all of the main tools like building, testing and static analysis usually have at least one extra layer of transpilation complexity on top, and usually it has to be configured separately for each tool. The tools themselves are often relatively basic, with routine (in the rest of the programming world) optimisations like tree shaking regarded as some sort of incredible advance. The same goes for the software architecture patterns used by the big front-end libraries and frameworks. And setting all of this up has become so complicated that there are now tools to scaffold the project and set up the other tools that orchestrate the other tools, which I really wish was a joke but is literally true.

Unfortunately, I think the real root cause of the rot is money. There is so much cash now in certain parts of the industry that it is undeniably a successful career strategy for developers to go job-hopping and collecting buzzwords working at those places, even if they are never really learning much of substance or building much of actual, lasting value. It turns out that huge numbers of people are willing to pay, directly or indirectly, for junk. And so writing junk and not caring is an effective business model.

The silver lining is that it's a huge industry and there is still room for quality as well, so those who do make the effort to build better software can at least make a living doing it, even if the rewards are more in satisfaction than greater financial compensation.


You're just proving my point. As a profession we base our decisions on blog posts and potential fun.

I'm all for these technologies; I'm against the fact that, as a profession, we regularly order multi-million dollar digging equipment to dig a small hole. I've literally seen a bored group of DevOps engineers start building their own CI/CD solution and give up after a few months of (highly paid) work.

Not to mention that all these powerful technologies are super complicated, and developers don't take the proper time to learn how to operate them correctly.

Just think of the Hadoop craze 5 or so years ago, when people were throwing it at few-gigabyte datasets. Or even now, what's happening with "data engineering", where we build massive pipelines that could be handled by a competent Python script. It's crazy, and as a profession we don't have any success criteria for our deliverables; we've just accepted that "things are hard" while getting paid more than most professions can dream about.


It’s becoming pretty clear that you’ve never seen the power of these tools in action.

I don’t think I proved your point at all. I very explicitly showed the opposite.

I’ve worked at companies that ingest petabytes of data; processing it would just not be possible without Hadoop and sister tech (chiefly HDFS). Hundreds of developers working on a common codebase would not have been possible without modern CI/CD solutions. Data pipelines whose reliability would be 0 without a modern RPC system like Thrift to ensure system interfaces are well defined and consistent.

There are very real and practical use cases for these technologies. Just because you haven’t experienced them doesn’t mean that they’re overkill. Just because enterprise engineers like to overengineer using these tools does not mean that everyone does. There are very real scenarios where these tools have helped smaller companies and startups to punch beyond their weight.


Mate I'm with you, they're extremely powerful tools and it's basically magic what a few good engineers can build with modern tooling.

When you say enterprise engineers that's 80+ percent of our profession. My gripe is only with lack of standards and misusing those tools without understanding them properly, which happens all the time


Understood. Apologies for the curt tone in my earlier comment. Cranky before morning coffee.


But experience with one language transfers quite well to a new one. I also see a deep problem on the business side: the belief that a programmer is easily replaceable, or that just throwing more (mediocre) people at the problem will help solve it faster, or even in the existence of rock-star developers.

Building software is slow because of essential complexity. We are writing programs that no one person can hold in their heads. The average CRUD app might not look like much (even though these are already increasingly complex), but more complex applications really can take multiple years or decades to create. And it is a bit sad that there is so little focus on reuse, even though that is the only silver bullet out there. No language, no tool; but (high-quality) libraries.


Experience with languages transfers well, because programming is 80% "architecture" and 20% language (or framework or stack). (numbers made up).

The latter is a fast moving field. The first requires deep knowledge, and experience.

When you build an application with the latest JavaScript framework, vue-native, or ncurses, there is far more to it than learning ECMAScript 2020, MVC or that ncurses-rust-binding-lib.

It requires you to study and understand the domain. Translate that into a domain model. Finetune that. Search for patterns and frameworks that fit this model. Develop the software in a way that it can be maintained, react to market demands and so on. These are skills that transcend languages or frameworks. That you can apply to both the latest kubernetes-docker-cluster and to the server in the closet. That you can apply to ncurses or vue-native.

You'll need to learn when, but mostly why, Rails becomes a PITA for certain jobs, but also when or why ASP.net or Spring is not magically going to solve the problems you have with Rails. ASP.net or Spring probably don't translate well to, say, Rails, wrt syntax or features. But the deeper knowledge, about how to model your OOP, translates just fine. Or when all of those tools are going to be a PITA because the domain is ill-fitted for OOP or MVC and needs something else.

And knowing React and Redux is neat (until React is replaced by Vue?), but the knowledge it teaches you about functional state patterns will help you in your next Elixir or even Rust project. Or vice-versa: a dev who has only ever worked on your typical MVC CRUD-app will probably have a hard time grokking Redux, whereas someone without any JS experience but with knowledge and experience in events, CQRS, or functional programming can pick it up in days: all they'll need to learn is the quirks of JavaScript (undefined-is-not-a-function).
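
To make that concrete: the transferable part of Redux is tiny. A rough sketch (in TypeScript, but the shape is the same in Elixir or anything else): state is never mutated, only replaced by a pure function of the previous state and an action.

    // The functional state pattern: (state, action) -> new state.
    // Redux is essentially this signature plus store/subscription plumbing.
    type Action =
      | { type: "add"; item: string }
      | { type: "clear" };

    interface State { items: string[] }

    function reducer(state: State, action: Action): State {
      switch (action.type) {
        case "add":
          return { items: [...state.items, action.item] }; // copy, never mutate
        case "clear":
          return { items: [] };
      }
    }

    let state: State = { items: [] };
    state = reducer(state, { type: "add", item: "milk" });
    state = reducer(state, { type: "clear" });

A dev who already knows events or CQRS recognizes this shape immediately; the framework around it is the easy part.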


Accounting is highly regulated, but the regulations change basically every year and the penalties for not following them can be very severe.


You may be jaded, but it doesn't mean you're wrong. Framework churn is such an issue, and so is language churn and all the other fashions that we are victim to. People don't spend enough time getting value from what they decide to use.

I think a more relaxing way to think about software is to think about bridges and buildings. Sometimes bridges and buildings fall down. It happens less now. However, civil engineers have had about 2500 years to try all the combos, and now they can make things both robust and pretty.

If software is bridges, we as an industry spend a lot of effort gold-plating very rickety construction. Gold plating in the form of over-engineering the security of inconsequential little things, overcomplicating UIs, and rapidly building new languages because of a perceived deficiency in language Y.

It'll stabilize, and success will go up. The business / PM side is also new at this and doesn't control risk well yet. Agile and Scrum attempt it, but... the same fundamentals that get a building built well and on budget also apply to good software.


Many valid comments and points of view.

This is but one part of it, but what I see is that software has a uniqueness to it because so much of the complexity is hidden. Really, nobody can see it being created except the actual developer building that piece, and maybe one level up if they are good enough to review the code. But to everybody else it's a mysterious black box. Now this not only hides the complexity but actually amplifies it, because at the ground level people are highly incentivised to inject complexity into their work. It feels good to make something complex. You feel productive, and it definitely helps your resume if you can claim to have worked on something highly complex. Everybody else is impressed. The team gets more resources to support this amazing thing being created.

So hiding the complexity not only prevents exposure but means many people in the production chain incentivised to create complexity can get away with gratuitously injecting it into things. Whole frameworks will be deployed where 10 lines of code would have worked, new libraries will mysteriously appear and get used for 1 function. Layers of indirection in the design will appear to abstract things that have only 1 concrete implementation. etc etc. All of this is invisible above 1 management layer and therefore unmeasurable and unmanageable. But the outcome is, the end result takes much longer than you would surmise from the bare bones functionality or requirements.


I fundamentally disagree with the conclusions of this article.

It doesn't take long to "build software" - git was built in a weekend, Facebook a similar timeline. When the environment is right there's no upper bound on the pace of delivery, it's just that modern day dev is, in many places, less about quality of the output (in terms of product/market fit and user value, not code quality) and more about the satisfaction of ceremony. You couldn't product manage Google into existence, or any truly valuable piece of software, hardware, or any other innovation, but it's customary for organisations to create structures which feel grown-up in order to yield software.

The talents now required to enter the field are also different, which has naturally played a part in the perceived slow-down. Development is a new field, and research into what makes teams effective is limited - we're still using the ideas of industrial steel production to try and bolster the present manufacturing line. This means there are plenty of undesirable behaviours.

But ultimately, it's about what you're optimising for. If you hire outstanding people and give them an excellent brief, you can deliver incredible software at pace. Most modern companies don't optimise for this, it's more about satisfying sprawling teams, polishing egos, and chasing incremental gains. In that environment you can bet delivery is slow.


That's an overstatement. You can build impressive MVPs and tech demos in a short amount of time, yes. But that requires that the author has already been thinking about the types of problems that come up in the domain for a long, long time. Rest assured that Git certainly wasn't created in a weekend.

Also, you can't create a product in this short amount of time. (There are some websites that do almost nothing at all, and that are very profitable, but those aren't engineering problems). Think how long git has been maintained, polished, and extended since its inception. And still people complain.

Software is a learning process, otherwise people wouldn't be so obsessed about it. And learning takes time. No matter which way you're approaching it.


I agree. Saying Git was created in a weekend is like saying an Olympic gold medalist in the 100m dash won the competition in 10 seconds. Linus was already working with DVCS software and was annoyed with it, so he had an idea of how to do it; he did not come up with it on Saturday, as if it had never existed, and code it on Sunday.

As for the other part, about Facebook - well, only if you take the first version, which was just photos plus some descriptions, and disregard all the stuff that was refined over the following years, when it started to be a real product instead of a novelty page for Harvard students.

At the company I work for, we built the first version of the application in 1-2 years to catch up with what other companies already had and what customers expected. We could have done a really simplistic version in a weekend, but no one would pay for it.


> saying Git was created in a weekend is like saying an Olympic gold medalist in the 100m dash won the competition in 10 seconds

Genius!!!


Git was "born" in a weekend, but the overall gestation would have taken much longer.


Git wasn't built in a weekend: https://www.linuxfoundation.org/blog/2015/04/10-years-of-git...

And git wasn't even built in 10 days, as Linus has said elsewhere. The core structure was, kind of. But there's been a large effort on top of that.

And the idea that Facebook was built in a weekend is just completely wrong. Sure, some basic Facebook-ish app was created over a weekend, sorta.


While using git as an example is a bit oversimplifying, I agree with the comment. Over time, if you plan ahead, you build up a set of building blocks, and at some point you can put together an application very quickly. I suppose it is all relative though. While a month might seem fast to me, someone else might expect a solid, fully featured application in a day or a weekend.


Nope. Linus spent years developing the software in his head before writing anything. Writing software is thinking, not typing.


A lot of accidental complexity arises from adding more people to the mix. There is the communication overhead to deal with. Even when things go well this consumes time. When it doesn't: more time. Then there's of course the constant arguing and bickering over what is best; more people means more opinions on this. And of course the more people you have the more complexity gets introduced. Conway's law is a thing.

The benchmark for me is what I think I can pull off in a few days vs. what I would build with a whole team in a few months. You can build a lot in a weekend. But not with a whole team around you questioning every move. So, when I need quick results, I give the right people a lot of freedom and not too many distractions and see what happens.

I've been building a new webapp for a startup with a few people for three weeks now. We're launching with customers next month. That involves a lot of pragmatic decision making and taking on a few risks. Most of the hard choices were made in the first week, and at this point the app is starting to look alright.

Here's what we did not do: we had no designers involved. We also had no lengthy meetings about what features we were going to have. We're patching up our existing backend and adding some new features to it.

Otherwise, it's an emergency rebuild of an app that just wasn't good enough (wrong mobile platform, developer out of the picture, and a lot of technical debt and stability issues). I took the decision in December and gave our inexperienced frontend person some guidance to investigate the technologies I picked for him. By December we had a clear idea that this was doable. In January we started doing this. That's 3 weeks ago. By now it's largely done. We're adding a few missing features and then we have an MVP that will run on iOS, Android and the web.

If I had a few million, I could not do this any faster. But I'd be able to ship beautifully native apps six months later. If fast is a goal, keep your team lean.


When you throw as many coders onto a project as anybody has ever done, in order to rush to the goal with maximum urgency, those man-months really add up fast, and it does end up taking longer in the end because of it. But look how many people got to the goal, and it wouldn't have been possible without all those executive-months spent struggling to move the goalposts toward the team as much as possible before everything craters.

If you were committed to efficient progress to reach your original goal you're going to need a different approach.

And start all over again.

Carefully read every word of the comments from the programmer with two decades of single-handed operation, and from the 42-year corporate retiree now making single-handed progress himself. These are long comments, but here's an excerpt:

>This all being said, one of the major causes of complexity in software development, which has always been an impediment to quality productivity is the ridiculous demand for unsuited deadlines in development efforts. This is a result of weak and incompetent management that runs up and out of the technical departments right up to division heads and the like. Deadlines, cause their own deference to the development of complexities, which often include defects. Deadlines cause stress and other factors, which often disallow most developers from thinking out their designs and problems clearly with time to tinker with the best solutions.


On a somewhat separate topic and line of thinking than the post (great article btw, I agree with it completely) -- I've often thought that, as a software engineer, whenever giving estimates, any number of management folks must be thinking "why does it take so long?". And I think part of the reason is this. If you simply _ignore_ most of the accidental complexity, you often can build 80% of the functionality of a requested product/feature in 20% of the time or less. What's more, any upper management who's worked in the industry long enough has, over the past two decades, most likely worked with some whiz kid who had done it (I was one myself in my early 20s, a decade+ ago) -- so they know it can be done and have witnessed it.

Let's say the requested product is some CRUD web app. At one point in their career, this executive worked in a growing tech startup, and on the team was a talented fresh grad. That engineer built a very similar product entirely, front and back ends, in two days. 15 years later, that executive is working in a different company and is talking about building some CRUD web app just like it. Why is the estimate being given by the dev team 4 weeks with 4 developers?

What wasn't seen and remembered was that the one built in two days by the smart kid had no unit tests (or any kind of tests), didn't cover many edge cases (at least in its first version), had a UI with barebones CSS, no responsive layout, no component-based frontend (e.g. React), no scalability concerns, etc. But all the requested functionality is there; it's demo-able, and usable even. A similar product is being requested now, but it needs to be built to meet today's quality standards, robustness, maintainability, etc. So unit tests are part of the requirements, a component-based frontend is needed for maintainability, you have to handle mobile, etc.

It's pretty much all the same points the author is pointing out about accidental complexity; just kind of showing why someone would ask that question.


It sounds like the kid in your example built an MVP product for the executive to demonstrate/test business value. Expectations matter. If the executive thought that was a finished product and is holding everyone to that standard, they were mistaken. I don't see any reason to write extensive unit tests, a component-based frontend, responsive layout, and scalable architecture for the first draft of a product. In my opinion, the first draft of a product in a business environment should test the business value of the idea as rapidly as possible, as many of the ideas are half-baked.


Some interesting thoughts in here. I guess the main thing is that there is basically infinite demand for more things in software. You can take almost any feature from any product and come up with new things you would like it to do that would take entire teams of developers to work on.

Software will never get quicker to develop because we will always request the absolute maximum amount of features that the market can stand to afford.

Basically every area has exploded with more features and is still way way behind what people would ideally want. Just look at the difference between Discord and IRC.


But do we actually have more features? I don't think Google Docs has more features than Ami Pro.

I think the problem is the modern focus on adding features, without consideration whether and how these compose. Not all features are equal - some can be used as a foundation on which more can be easily built, while some are a detriment to that. Just adding features without architectural consideration leads to - surprisingly - bad architecture.


Did Ami Pro let dozens of users around the world edit the same document at the same time? Some of them being on cell phones?

Did it automatically store your document somewhere that can be accessed from any computer with internet access?

Seems to me like we really do have more features.


Did Ami Pro even have fonts?


I think a problem here might be that those aren't exactly "Ami Pro" (or word processor in general) features; they're just tacked onto the word processor even though they have nothing to do with word processing (even multiple editors at the same time is essentially about having multiple views - not too different from opening two MDI windows in Office 97 or whatever, with the same document but different positions - that happen to have input/output from across the network).

Some of that can be done in an agnostic way by pairing Ami Pro with something like Dropbox. But ultimately for better use (and having applications focus on the stuff they're supposed to be about) the underlying system should allow for such uses - after all one of the original ideas behind Windows was to avoid having every application reinvent its own UI. But we just seem to be stuck around that point in most OSes (Cocoa probably being the exception of an OS framework that provides more than the basics)... if anything we're regressing with every application (on the desktop at least) ignoring the OS GUI and doing its own anyway.


Simultaneous editing of the same document over the Internet is a fundamentally different problem than having multiple views on a local machine: 1) On a local machine, only one view has the input focus, and 2) you don't have to deal with high-latency, unreliable communication and synchronization.

> the underlying system should allow for such uses

That's not possible in the general case. It's a hard enough problem for the specialized case of text editing.
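To give a flavour of why: even the simplest convergent primitive, a last-writer-wins register, only "merges" concurrent edits by discarding one of them. A minimal TypeScript sketch (types and names invented):

    // A last-writer-wins register: two replicas converge without
    // coordination, but note the cost - the losing concurrent write
    // is silently dropped. Collaborative text editing needs OT or
    // CRDTs precisely to avoid this kind of data loss.
    type LWW<T> = { value: T; timestamp: number; replica: string };

    function merge<T>(a: LWW<T>, b: LWW<T>): LWW<T> {
      if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
      return a.replica > b.replica ? a : b; // deterministic tie-break
    }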


I do not see it as a different problem, really; what you are describing is having a separate cursor per view (which is something programs already do).

Also, I'm talking about an ideal hypothetical system here, not something that exists - but it is certainly possible to do something like that; you are way too quick to dismiss it as not possible.


Sounds like a No True Scotsman argument for what counts as a word processing feature.

I know plenty of scenarios where collaborative document editing is a killer feature as part of writing a document.


I didn't argue that it wouldn't be a good (or "killer") feature. I argued that it could be a feature provided at a different level, one that lets the word processor focus on word processing (i.e. what word processors are and should be about) while simultaneous/collaborative editing is also provided to other applications (e.g. collaborative spreadsheet editing, image editing, list editing, or whatever else would make sense).


> Just adding features without architectural consideration leads to - surprisingly - bad architecture.

But you don't sell architecture. You sell features.

And people write their own Google Docs features. I have spent a bunch of time coding up things for that.


I mean.... isn't this true for hardware too?

Razors today look very different from the ones first invented. The same goes for vacuum cleaners, cars, tools for construction and manual tasks; the list goes on.

The only notable difference is that each iteration of hardware has a clearly defined start/end date, compared to the rolling versioning of software. The first vacuum cleaner was invented in 1908...


Everything is improving, I guess, but there is relatively little investment in razors. They look different but arguably function worse than the original double-edge style.

Vacuum cleaners have had a bit of innovation but no one is really desperate for a better vacuum cleaner unlike software. Also many of these hardware items have basically reached close to their peak form. You can't really make a radically better bicycle because we have already reached almost the best design.

With a team and a few years you can usually obliterate the leading software product in most areas, and with a large team you could completely revolutionize it, since there are so many easy improvements to be had.


> I guess the main thing is that there is basically infinite demand for more things in software.

Law is very similar to software in this way. They both also take a long time to write & are never actually "finished."


This is an interesting analogy. However, I would argue that Law is less constrained by the laws of nature.

If a program is broken, even the best programmer in the world may not be able to fix it if they can't diagnose the underlying problem. In law, there is no such problem: if a law is found to be bad, you change it. Most reasonably competent lawyers could come up with a solution. Not to say that creativity and intelligence aren't required; the constraints are just different. (You do hit the laws of nature at some point, but I believe there are guardrails in law that prevent this from happening; e.g. no judge will sentence a person to run a million miles even if some combination of inane laws may require that as punishment.)


Law loopholes can be hard to diagnose and fix.


Law is constrained by human nature.


"the last 20 years has been the drastic reduction in the ratio of essential to accidental complexity"

I know this is nitpicking, but the author is using mathematics as an example and then uses the ratio the wrong way... The point he/she is trying to make is that tools got better. That would mean accidental complexity goes down, so the ratio of essential to accidental complexity goes up: if essential complexity stays at 10 units while accidental complexity drops from 20 to 5, the ratio rises from 0.5 to 2.


I wondered about that as well, because much of the content in the middle of the article, where the author lists changes in the industry such as automated infrastructure and frequent deployments, would suggest to me that accidental complexity really has increased and the author's statement is actually correct as written even if that wasn't the intent.


My take is that programming environments strike an inadequate balance between abstract code and the concrete data it operates on.

You are supposed to build code by stitching together commands for all the data handling functions required to handle each case in the data. Yet development environments make it really hard to actually _see_ what effect the functions are having on the data. You need to enter a special debug mode and place strategic breakpoints, logs, and watched expressions, instead of it being easy to compare the full program state before and after executing each function.

In that respect, spreadsheets and reactive functional languages make it easier to prototype new software fast, since they have close together both functions and the data on which the functions operate; and in spreadsheets, you also get the intermediate steps for chains of functions.
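As a rough sketch of the kind of thing I mean, here is a hypothetical trace() wrapper in TypeScript (the helper and names are invented) that makes each step's input and output visible, spreadsheet-style, without entering a debugger:

    // Hypothetical helper: wrap a function so every call prints its
    // input and output, making the data flowing through a pipeline
    // visible at each intermediate step.
    function trace<A, B>(name: string, fn: (x: A) => B): (x: A) => B {
      return (x: A) => {
        const out = fn(x);
        console.log(`${name}: ${JSON.stringify(x)} -> ${JSON.stringify(out)}`);
        return out;
      };
    }

    const clean = trace("clean", (s: string) => s.trim().toLowerCase());
    const words = trace("words", (s: string) => s.split(/\s+/));
    words(clean("  Hello World  "));
    // clean: "  Hello World  " -> "hello world"
    // words: "hello world" -> ["hello","world"]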


I'm also surprised that no one's yet mentioned Bret Victor's work.

http://worrydream.com/LearnableProgramming/

His visual interactions for programming are a great advance in the state of the art for fast development, and they've greatly influenced recent advances in development environments - making everything more reactive, and with code and data more intertwined.


In your average build-a-clone (FB, HN, Twitter) tutorial, students watch someone with amazing developer skills create the finished product in 15 minutes to 2 hours. But when they try to do the same, it takes days, weeks, or months. And this also applies to experienced developers; the reason is depth (how deep into the rabbit hole do we want to go?). There are always hidden complexities, configuration, installations, validations, issues, known obstacles and unknown obstacles; there is time for meetings, negotiations, time for thinking, time for merely trying to come up with the best approach long before we start coding.

A good example from my early career was "just one field in the database". A client says, "hey, this will take you just 1 minute, why are you arguing with me about this", but that "one field" is so intertwined that it can affect half the codebase. It means going through all the procedures, views, models, tests, and validations, and then there is the impact on historical data, etc.

Also there is a huge difference between: spike, MVP, and finished product.


I think there is a cost disease in software where baseline requirements only increase as a company matures. Accessibility, localization, scalability, permissioning, observability and alerting, a reliable deploy and testing pipeline... all these things are table stakes for a new project at the company I work at and represent A LOT of complexity, yet an application delivered without them might offer a nearly identical user experience...

That’s not even to mention the added burdens on the planning side of things, e.g. getting infosec and legal and compliance to sign off before breaking ground, gathering requirements from 10 different teams across the world, appeasing more stakeholders, etc.

The difference just in table stakes between a young startup and a mature company can easily account for an order of magnitude difference in effort required to deliver a feature, and it shouldn’t be surprising.

Edit: and this is why I weep when I see mature companies under-investing in developer tooling.


The pace of change is too quick and the permutations of how to do X are necessarily too numerous.

As far as I know there is no “standards body” that governs these things across organisations.

I.e.: if you're developing a web app that needs to poll other systems for data every X minutes, there should be a standard that governs the best way of doing this in the 5 major languages.

It should take into account SRE principles like logging, scaling, security, etc., and include some clear code examples using the simplest, least OO-functional-prototypical-new-age code possible.
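Purely as an illustration of the shape such a recipe might take (not any real standard; the URL, interval, and log fields here are invented), a TypeScript sketch, assuming a runtime with a global fetch (Node 18+):

    // Minimal polling loop: fixed interval, structured logs, and
    // exponential backoff with jitter on repeated failure.
    async function poll(url: string, intervalMs: number): Promise<never> {
      let failures = 0;
      while (true) {
        try {
          const res = await fetch(url);
          if (!res.ok) throw new Error(`HTTP ${res.status}`);
          console.log(JSON.stringify({ ts: Date.now(), url, status: res.status }));
          failures = 0;
        } catch (err) {
          failures++;
          console.error(JSON.stringify({ ts: Date.now(), url, error: String(err), failures }));
        }
        // Back off exponentially on failure, capped at 10x the base interval,
        // with up to 1s of jitter so many pollers don't synchronize.
        const delay = Math.min(intervalMs * 2 ** failures, 10 * intervalMs);
        await new Promise((r) => setTimeout(r, delay + Math.random() * 1000));
      }
    }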


What is the value of the current pace of change, though? We are spending billions of developer hours every year keeping up with all the changes. Where is that going? Why do we need to spend so much time changing things? Churn doesn't just force people to rewrite; even worse, it forces them to relearn instead of achieving mastery.

What percentage of developers understand how everything in their stack works to such a degree that they'd be surprised if anything went wrong when they test things? Basically nobody, yet it isn't really an unachievable goal. Why can't a web developer reach that point after 5 years of full-time work? Simple: it's all the damn changes to everything, happening constantly. It's so bad that most seem to think it's impossible to master anything in software engineering, yet if you just pin down the versions of all your dependencies and work like that for a few years, everything becomes crystal clear; that is how the human brain works.


What I find the most painful is that you can't rely on much to decrease accidental complexity.

A couple of years ago I settled on the Dojo JavaScript framework. It was the most solid framework available, but then AngularJS came out, and it had some very compelling arguments on its side.

So I moved over to AngularJS, happy that there were finally good, modern solutions to many problems.

Then Angular 2 came. Then AngularJS 1.x stopped being updated. Angular 2 was nowhere near what I needed, so I had to find an alternative.

I switched to Vue, and was really happy with it. Now I find myself in the transition to Vue 3, where frameworks like Quasar still need to be ported, and it is unclear what will happen to all the code I already wrote.

Or Python2 vs Python3.

All this "progress" leaves me feeling burned out, with barely any time to tackle the actual problem to be solved.


Typical internal CRUD applications took about 1/3 the time and 1/3 the code to produce, in my experience, with the desktop IDEs of the late '90s. The web bloated things up so that you spend time hooking layers to more layers and debugging layers instead of focusing on business logic and UI in terms of users' needs. Now you micromanage technology instead of actual work.

People often say, "yes, but we have so much wonderful choice now!" Perhaps, but organizations are paying dearly for that choice.

Oracle Forms may have been esthetically ugly, but it did the CRUD job fairly well without fuss and muss, and developers were quite productive with it. The client was kind of a GUI browser, so you didn't need to install each application on each user's PC. It was also multi-platform. (Oracle mucked up the client by migrating it to Java applets.)

Some point out they don't work well on phones, but few of our apps are used on phones at our org. The UI was crippled to be mobile-friendly, but nobody is doing real work on mobile. Make a special phone portal for the 5% or so of apps that need it instead of tangling up the other 95% "in case" someday someone wants mobile. YAGNI.

If the industry would just learn to live with ugly K.I.S.S., they could save a lot of money and time. Warren Buffett says one of his greatest strengths is the courage to say "no". Few have that, and bloat the IT world as a result. Maybe fear of missing the Buzzword of the Week turns our stacks into buzzword pack-rats.


Maybe it's because successful companies in tech got rich so quickly that their leaders never learned how to code properly. None of the developers who work for big successful companies know how to code efficiently. Initially they hacked together all the code as quickly as possible (of course that didn't scale and had to be rewritten completely), and later they got enough money to just brute-force development by throwing hundreds of developers at it.

Then they started pushing their naive ideas onto everyone else (e.g. static typing, functional programming, 100% unit test coverage, Elixir...) while putting zero focus on the stuff that really matters, such as keeping method parameters and return values as simple as possible, consuming events with for-await loops instead of event listener callbacks, naming variables well, separation of concerns, etc.
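For what it's worth, a sketch of the for-await style I mean, using Node's events.on helper (available since Node 12), which turns an EventEmitter into an async iterable:

    // Consuming events with linear control flow via for-await,
    // instead of registering a callback somewhere far from its use.
    import { EventEmitter, on } from "node:events";

    const emitter = new EventEmitter();

    async function consume() {
      for await (const [msg] of on(emitter, "message")) {
        console.log("got", msg);
        if (msg === "done") break; // termination is local and obvious
      }
    }

    consume();
    emitter.emit("message", "hello");
    emitter.emit("message", "done");

With a plain listener callback you'd have to thread state through closures and remember to call removeListener yourself.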

So now most of the industry knows two kinds of code: dirty hacks or bureaucratic bloatware. Nobody knows how to code properly. There is nobody around to teach people the correct approach, and there are plenty of people around to teach the incorrect one.


When someone asks "Why does it take so long to build software?" and you want to verify whether said person has any intelligent thoughts on the matter, ask them: "How long do you think it should take to build software?"


> How long do you think it should take to build software?

About as long as it takes to figure out what the software should do. Add one day for each 1000 lines of code for typing carefully and adding automated tests.


Aren't most developers in the world paid a salary or day rate and thus are paid for their time? What incentive is there for them to "hurry up" rather than sandbag?


Plus they usually get no financial compensation for the extra work of making things work well.

There's a minimum quality level required to not get fired, and I wouldn't be surprised if that's exactly what most employees deliver.


Of course. Why go through the trouble of putting in the extra hours if there is no financial compensation or appreciation?

Personally, I would only optimize stuff or propose new features for a product if I would learn something or gain experience from it.


Agreed. I would go further and say that is true of any salaried employee who has no significant stake in the business.


I want to say stock options, but I don't really think they achieve this goal.


I concur.

Stocks are too far away from the actual software. The CEO's stupid pet project might well ruin the company's financials, even if all of the engineers on the core team did a fantastic and highly profitable job.

So stocks are more a bet on the CEO's performance than on the engineers'.


Just think about building a simple SaaS website.

What do you need?

* A marketing page
* A dashboard set of elements
* A payment system
* A billing history / refund / change-card / invoice system
* A token system
* An auth system
* An email system, or integration with an email provider
* A complaint system
* An admin dashboard
* An often-custom database schema
* An analytics and usage system
* A log tracker
* Cloud infrastructure for hosting
* Rate limiting
* An API
* An event logging system
* A marketing blog

I have strayed too far.

My point is that each one of these things is usually built from scratch. It takes a lot of time, and then you have to wire everything together, in typically custom ways. If someone makes an off-the-shelf version, the learning curve is often greater than building a custom version.

I think software takes so long because each dev tries to rebuild the wheel. And oftentimes the wheel does need to be rebuilt. But due to time pressure, things never get finished to the degree you planned, your custom code has been rewritten 10 times, and each new feature takes a lot longer so as to not break the old stuff.


I think software takes longer to build now than it did before, even with all our software advancements. Setting the market requirements aside completely, the new technologies are very slow to create an MVP with.

For example, a week ago I started a Symfony (PHP framework) project and left it on the default server-side rendered setup. I bought a $20 CSS+HTML template. I created some entities and then just inserted a form for an entity into an HTML file using one line of code. It took me less than 1h to do this, including containerization, DB setup, and figuring out how things work in Symfony.

It blew my mind. I had totally forgotten how easy it was, and I could focus on the things that mattered. To do the same thing in React/Angular would have taken me much longer, since I would have had to build two things (a frontend and a backend). I think this is why software takes longer to build today: you simply have to build more behind the scenes so that the end customer sees a form/page/button.
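Not Symfony, but a rough TypeScript/Express analogue of the same idea (routes and field names invented) - one server-rendered file gives you a working form round-trip, with no separate frontend to build:

    import express from "express";

    const app = express();
    app.use(express.urlencoded({ extended: false }));

    // One route renders the form...
    app.get("/items/new", (_req, res) => {
      res.send(`<form method="post" action="/items">
        <input name="title" placeholder="Title" />
        <button>Save</button>
      </form>`);
    });

    // ...and one route handles the submission. Persistence is omitted;
    // the point is how little stands between "entity" and "usable form".
    app.post("/items", (req, res) => {
      res.send(`Saved: ${req.body.title}`);
    });

    app.listen(3000);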


Software is like writing a book. A good book fits all the new pieces into a new story that makes sense. A fast book is just a copy of known pieces and a known story, and doesn't really fit together well. A "pro" in the market steals old ideas, while a "pro" in the field creates detailed raw new sketches that are not yet ready to sell.


"... programming is an art form, whose real value can only be appreciated by another versed in the same arcane art; there are lovely gems and brilliant coups hidden from human view and admiration, sometimes forever, by the very nature of the process. You can learn a lot about an individual just by reading through his code, even in hexadecimal..."


Programmers rarely think like users or, in the case of business or specialist software, like those with domain knowledge; and vice versa. Skill barriers to contribution remain high, and efforts to scale them, like 'low-code' tools, seem forever in their infancy. You rarely get a rounded combination of the required developer-designer-user qualities in one individual or small team.

Nor is there, in my opinion, widely distributed recognition of this within the industry. There is still no formalized design methodology enabling processes to flow iteratively from ideation to prototyping to production. Costs and time taken snowball as a result, and project failure rates remain high compared to other engineering disciplines.

I think the high profit payoffs, and the normative structures that arise because distribution costs are marginal, contribute to the stickiness of this comfy but rickety state of affairs.


I think it is relative. Compared to developing a car, a WordPress website can be completed in a relatively short time; an ERP system, on the other hand, will take substantially longer. It depends on what is being asked to be delivered, when, and at what cost.

Putting that aside, there is a time-creep factor because people don't place limits on their demands - ask, and the developer can do it. So it is developed, and now you need to ensure that the custom piece works reliably, doesn't break, and so on. Compared to that, a house needs to be carefully designed upfront, and there are limits to what modifications you can reasonably ask for down the line - you don't ask the builders to drag and drop the kitchen to the other side because you don't like how it looks on mobile.


"splitting it up" is only accidental complexity if one splits things up in services. If one has a monolith I would argue that "splitting it up", in that case known as 'improving coherence' and 'reducing dependency' is much easier and falls under essential complexity.

Generally, I would say that tools are a very big part of it. One example: why did we go from svn to git again? Oh yes, git can do so much more. But now you actually have to do all of those 'so much more' things. This gives you a very beautiful commit history if you do it well, but is a very beautiful commit history really that important? You tell me... maybe it is, and maybe it isn't, and maybe it depends on the nature of the project...


Writing software is basically going into debt, so you should write the 20% of the software that will deliver 80% of the value. The problem begins when costs aren't considered up front and everyone is then "surprised".


Manager: "All those 'nice to have' things are too expensive. Let's cut them"

Manager, 3 months later: "Why are there no nice things in the software!"


Does it? I mean, it takes years to shoot a film or write a book as creative activities; I don't see the evidence that software is more difficult or time-consuming in that regard.


This answer is facile, but it's missing from the discussion:

It takes so long to build software because it can. One can't construct a building forever. A novel is published and done. But software can be developed over its useful lifetime.

Similar followup:

The parts that take a long time do so because the quick parts were finished quickly.

Lengthy software development isn't a problem with software, but rather reflects the frictionless malleability of code.

"Why does software dev take so long," isn't the same question as, "How to develop software more quickly?"


Why does it take so long to make a movie/software? Because it is an act of invention. Yes the camera/widgets, lights/libraries are all reused. But the way they are reused is unique for each movie/application. And a large number of external constraints/forces impact how the movie/application is made. Yes you can quickly make an amateur movie/app in a few hours. People do it all the time. But that’s not the same as making AAA movies/software.


I don't know that it takes that long to build software.

That is, the expectation that software be immediately available because you can vaguely imagine what it would do for you is strange.


I think Paras Parmar's comment on that page is spot on.


I believe one of the culprits for software taking so long is planning. This is alluded to in Marty Cagan's excellent book _Inspired_.

First, the caveat: a software dev project will take up as much time as is allotted to it. When you have more than enough time allotted for a project, most people either spend more time in research or allow themselves partial implementations of multiple solutions to help decide which one is best. At the other end of the project, every project can be polished forever. So dev projects will take at least as much time as has been allotted to them.

When we go through the planning process, it usually follows something like this:

1. Dev estimates how many "points" (which translates into time in most people's minds) to allot to a task. Dev then pads it to account for dev optimism, and to make sure they have a chance of meeting the "commitment".

2. The program/project/product manager pads the number a bit further, because they've been burned in the past by devs' optimistic estimates, and it's better to underpromise and overdeliver.

3. Dev proceeds on the task with the bloated estimate baked in, and the task takes the full time allotted to it. It's baked-in inefficiency.

Part of the problem is the concept of "commitment" for estimates. Good scrum organizations have changed that term to "prediction" instead of "commitment", since this has been such a big problem. But even that isn't enough to recover the political capital lost when you "predict" wrong.

I think that any estimation above "t-shirt size" is going to bake this inefficiency into your process. The solution is less planning and more doing.


Also, user needs are evolving constantly (or are identified in an iterative cycle) and code is structurally not very good at changing constantly.


More so than user needs are user expectations, which are often just following trends set by larger software companies with much larger budgets. And just like other trends, they rapidly change and are fleeting.


What does "long" mean? Long for whom? Your client? The final customer?

In general, in my experience, software goes wrong because humans are present in the process.

There are mainly 3 types of humans: engineers, product people, customers (clients).

These 3 types of people sometimes lose track of what is really important in terms of features and engineering.

Overengineering and overfeatures kill time to market.

Keep it simple.


Because we don't dare to make bold decisions like:

* Throw out all MySQL/MariaDB installations and replace them with PostgreSQL, because it is objectively, provably better.

* Stop writing build systems in untyped languages, because they are less productive when used by a team.


>Throw out all MySQL/MariaDB

That's reaching. There is a lot of successful software built on top of MySQL/MariaDB.

> replace them with PostgreSQL, because it is objectively, provably better

"Provably better"? That's not a reasonable claim on its own.


> That's reaching. There is a lot of successful software built on top of MySQL/MariaDB.

That doesn't make it better, or even good. There is also successful software built on CGI scripts, but I'm not going to say that Go/Node/C#/etc. aren't provably better than that.


I've used MySQL/MariaDB for over 10 years (as well as MSSQL, Sybase, PostgreSQL, SQLite, and others I've forgotten) and I've had minimal problems with them. No more problems than I've had with any other DB system.

Edit: I also used MongoDB and hated it.


Did you read the article? The author doesn’t mention any specific technologies as the source of the problem but instead the increase in expectations of what software can do, how it is made and the complexity incurred because of that.


I've used both MySQL/MariaDB and PostgreSQL. They are practically equivalent and do their job just fine in my experience at 500 req/s.


Why is MariaDB worse than PG? Maria has temporal tables, which I make great use of.


Fun fact... it doesn't!

Some of the most widely used software has been created in a few weeks or days.


Created, yes. But maintained, built upon, iterated over and improved?


The same can be said about homes, buildings, and roads. They can take hundreds of years to maintain, build upon, iterate over, and improve.

The problem is almost never technical, or "software". Any decent programmer is able to build 90% of what is being asked for.

Domain knowledge, requirements, product management, roadmaps, and business tradeoffs are where all the time goes.


Have they released subsequent versions?


Git: 2 weeks, by Linus Torvalds.

Javascript: 10 days (Brendan Eich?? not sure)

Both have had iterations. Git, because the original command line was awful. JS had to be reimplemented with its quirks intact, because websites were already relying on them (and this is why we get the WAT demo: https://www.destroyallsoftware.com/talks/wat )

I'd say that in 10 days, what you can build is an excellent architecture for a project used across the world for decades. But just the architecture. The features come later. Perhaps it's limited to 10 days because that is the span of medium-term memory, and it matches the maximum number of concepts the human mind can hold?


Both of them reached a minimum-viable-product stage at that point. Git is good today because of the enormous additional time that has been spent on it, and a programming language is not software in itself. I am fairly sure that we don't use a thing from Eich's original implementation. And the building of the V8 engine, for example, only started in 2008! (And even the language had to go through multiple revisions to make it usable, which it became only relatively recently.)

No one will write a VM with a state-of-the-art GC, JIT compilation, and the like in 2 weeks from scratch, and that is an example of a complex application.


For me, the main thing causing slowdowns is the bad quality or complete lack of documentation about how existing/external APIs work, so a big part of development becomes trial and error.


I wonder if part of this is because the commercial pressures aren't there - there is so much money sloshing about the industry that we can get away with sloppy practices.


"Too long" often means forgetting the iceberg principle [1] from Joel Spolsky. Business people only see the tip of the iceberg, the UI button, but don't see the huge chunk below the water. That chunk is complex, isn't UI, and takes time to develop or change.

[1] https://www.joelonsoftware.com/2002/02/13/the-iceberg-secret...


Try building hardware. That's harder.


They ain’t calling it HARDware for nothing! :)


Eh, pointless gripe, but if this is a blog/article it should omit that Intercom widget or whatever on that page.


> We are asking more and more of our software.

Are we, though? This assumption merits deeper scrutiny. It's obviously true sometimes, but does it really generalise?

In terms of functional requirements, we use lots of online software today that is far simpler than its traditional desktop equivalents in feature set. Many successful SaaS businesses are providing tools to help businesses organise that are often barely more complicated in terms of features than the freeware/shareware applications that used to be everywhere, often written by a single person or a small team, running natively on your desktop! It turned out that a lot of the extra complexity in the huge programs we used to run on desktops and corporate servers wasn't offering a good return on investment and the trend has been to simplify over time. That might be an improvement for both developers and users, but it certainly doesn't mean we're asking more of that software.

In terms of non-functional requirements, connectivity is obviously a big change compared to 20 years ago. Today, a simple CRUD app might be running online and the data can be reached from anywhere by anyone, or by multiple anyones at the same time. That could be much more convenient even if the data itself is little different or even simpler than it used to be. And there are very significant overheads in implementing real-time, distributed, concurrently accessible systems compared to old school desktop and database applications, so in this area we certainly are asking a lot more of much modern software.

But beyond that, what else are we asking more of in terms of non-functional requirements? We're setting the bar low on performance for most of the software that gets written today, relying on ever faster and larger hardware to make up for running less efficient code. As anyone who actually wrote serious software on the systems of 20 or 30 years ago can testify, that makes life far easier for programmers and requires much less knowledge and skill. Today, the skill set that resulted in heavily optimised code a few decades ago is mostly found among developers working in fields like embedded systems where resource constraints are still often tight, but otherwise it's another area where we're asking far less of our software than we used to.

I think the article was more on target with its highlighting of essential and accidental complexity. The extra processes and tools and infrastructure add a great deal of extra complexity today, and I am far from convinced that much of that complexity is really justified, particularly when it's attached to such a transient, throwaway culture where little is built to last and developers jump ship every five minutes. It might be very un-PC and/or career-limiting to suggest building software any other way today, though.


I agree. It feels like we've been conditioned to accept software playing dumb and being unreliable.

A couple of decades ago, random glitches attracted much scrutiny; today we just take them for granted.


Is that really true? I remember routine blue screens and system hard-crashes a couple decades ago. They're not commonplace today. We're primed to remember the past rosily.


It's true that there are rose-tinted spectacles at work here, but I don't think modern software is as great as all that.

Today's version of a blue screen might be something in us-east-1 being down again and taking half the software on the Internet offline with it, but the effect is much the same, and now you can't necessarily do anything about the outage; you just have to wait for whoever is responsible for the failed infrastructure to decide your problem is important enough to fix next.

It's not just cloud, either. I recently had a combination of two popular pieces of software, both widely used by professional programmers, where a plugin was crashing the host application when even basic functionality was triggered. The problem was sufficient that I had to abandon using them and ended up writing that program in a different programming language because it was faster than messing around with broken tools. It turns out that some of these much-touted modern standards for interoperability and portability aren't all that great after all.

And of course, while we don't get a literal blue screen any more, it's not as if Windows 10 and several other popular operating systems don't frequently break users' devices, sometimes catastrophically, when rolling out updates. We've just come to expect (by which I really mean, been forced to accept) this culture of shoving broken stuff out and fixing it later, even in the most essential software like our operating systems.

Meanwhile, the last time I saw significant instability on any desktop OS prior to quite recently, I think I was running Windows 95. So maybe the spectacles aren't that rose-tinted after all.


But that is because OSs are not rewritten from scratch every time, but rather iterated upon.

Just try to use Facebook, Messenger, or countless other "apps". They are of such low quality that I don't understand how they even allow publishing them. There are regular UI glitches; I frankly can't trust from the UI that a requested action went through and have to double-check, and I even have to check that it didn't issue commands I didn't want.


I remember that as long as I didn't install random crap and did basic maintenance, my Win 9x systems worked like clockwork, with only a reboot every couple of days.

Windows 2000 was the most solid and stable Windows OS I've used, no issues at all with it, even the pre-releases.


My feelings are kind of the exact opposite.

Seeing Windows 95 or 98 go into a BSoD was far more common than it is today with Windows 10.

I also remember seeing Netscape crashing extremely frequently toward the end of the first browser war.

By comparison, today, a modern browser is doing a lot more things, but very rarely crashes in my experience.

Overall, in terms of software quality, I think things have improved, especially thanks to:

* the maturing of tools like compilers, dependency managers, linters, unit test frameworks, etc.;

* the widespread adoption of development techniques ensuring better quality (CI, unit tests, code coverage, automated integration tests, etc.);

* the development of common building blocks such as libraries and frameworks, which let projects avoid "re-inventing the wheel" every time for low-level stuff. These building blocks are also generally widely used, battle-tested, and more reliable than in-house implementations;

* the rise of OSS, which drastically increases the accessibility of the previously mentioned building blocks to developers.

Both technically and methodologically, software development has seen tons of improvements since the late '90s and early 2000s.


It takes a long time to build anything of value; software is no different.

There are many variables in building software, which roughly break down to budget, resources, and time. All these variables must be managed carefully to deliver software of quality and value. There are several strategies that help manage the process of building software and the tradeoffs between budget, resources, and time.

There's no one size fits all solution that can reduce time without accounting for budget and resources.


Because software is mostly about learning, and working code is merely a side effect of that learning.


Expectations != Requirements


Which points straight to the problem: the tools for expectation management, requirements gathering and design validation are severely lacking.

Applications should be built on fast iterations with the user present, until each detail is perfect and the full workflow is validated to be as useful as expected. Only then should engineering issues be addressed, to make the tool robust and fast.



