They can not. They can make some average code. On Friday one suggested an NSIS installer script that would never bundle some needed files in the actual installer. I can only imagine that a lot of people have made the same mistake (used CopyFiles instead of File) and posted that mistake on the internet. The true disaster being that testing that installer on the developer's PC, where CopyFiles may well work fine since the needed files happen to be sitting on that PC, would lead one to think it was some weird bug that only failed on the end user's PC. I bet a lot of people posted it with comments like "this worked fine when I tried it," and here we are a decade later feeding that to an LLM.
These tools can write average code. That's what they've mostly been fed; that's what they're aiming for when they do their number crunching. The more specifically one prompts, I expect, then the more acceptable that average code will be. In some cases, average appears to be shockingly bad (actually, based on a couple of decades' experience in the game, average is generally pretty bad - I surely must have been churning out some average, bad code twenty years ago). If I want better than average, I'm going to have to do it myself.
And it will run rings around me in all the languages I don't know; in every case where my standard would be shockingly bad (I speak no APL whatsoever, for example) it would do better (though in some cases it would confidently produce an outcome that was actually worse than my null outcome).
You left out the key line “and you don’t believe me, wait six months”. These models are getting better all the time. The term “vibe coding” was only coined a year ago, around the same time as the release of Claude Code.
It doesn’t matter if you don’t think it’s good yet, because it’s brand new tech and it keeps improving.
If you don’t think the quality has improved then you haven’t actually been trying it. Any programmer who knows what they’re doing can immediately tell models like Opus 4.6 and Codex 5.3 are much better than models from a year ago. All the objective metrics (benchmarks etc) agree as well.
The only software I ever worked on that delivered on time, under budget, and with users reporting zero bugs over multiple deliveries, was done with heavy waterfall. The key was knowing in advance what we were meant to be making, before we made it. This did demand high-quality customers; most customers are just not good enough.
About 15 years ago, I worked on code that delivered working versions to customers, repeatedly, who used it and reported zero bugs. It simply did what it was meant to, what had been agreed, from the moment they started using it.
The key was this: "the requirements are polished and any questions answered by stakeholders"
We simply knew precisely what we were meant to be creating before we started creating it. I wonder to what degree the magic of "spec driven development", as you call it, is just that - and whether using Claude Code or something similar is actually just the expression of being forced to understand and express clearly what you actually want to create (compared to the much more prevalent model of just making things in the general direction and seeing how it goes).
Don't use standard library containers because we have our own containers that date back to before the STL was even stable.
Flashback to last job. Wrote their own containers. Opaque.
You ask it for an item, you get back a void pointer - a pointer to the item. To ask for the previous or the next, you pass that void pointer back in (it then walks through the data to find that item again, so it knows which item you want the next or previous of) and get a different void pointer back. No random access. You had to start with the special function that gave you the first item and go from there.
They screwed up the end, or the beginning, depending on what you were doing, so you wouldn't get back a null pointer if there was no next or previous. You had to separately check for that.
It was called an iterator, but it wasn't an iterator - an iterator is something for iterating over a container, and this thing was the container itself. It didn't have actual iterators either.
When I opened it up, inside there was an actual container. Templated, so you could choose the real inside container. The default was a QList (as in Qt 4.7.4). The million-line codebase contained no other uses; it was always just the default. They took a QList and wrapped it inside a machine that only dealt in void pointers, stripping away almost all functionality, safety, and any ability to use the standard <algorithm> functions.
I suspect but cannot prove that the person who did this was a heavy C programmer in the 1980s. I do not know but suspect that this person first encountered variable data type containers that did this sort of thing (a search for "generic linked list in C" gives some ideas, for example) and when they had to move on to C++, learned just enough C++ to recreate what they were used to. And then made it the fundamental container class in millions of lines of code.
The complete refactor, bringing it forwards from VS2008 to VS2022, and from a home-built, source-code edited Qt 4.7.4 to Qt 6.something, took about two years from start to finish.
... which makes the need for rent control and renter protection laws obvious. When people have the "choice" between paying their landlord 5k more just because the landlord can or spending 5-10k on finding a new home and moving, landlords have all the cards.
SF and NYC are the only two places in the US where rent control makes any sense whatsoever. And it's to solve a problem that was largely created... by rent control.
Forcing people to move to another country en masse sounds like the failures wouldn't be caused by a culture clash so much as more fundamental issues around being forced to move to another country.
I've taken money to create software for most of three decades and I don't think I've ever actually worked on software that needed the people who created it to be near it while it was running, once it was working.
I think the record single instance uptime on a customer site was most of a decade, running a TV station.
I worked on the same for many years; same deal - playout system for broadcast, years of uptime, never miss a frame.
The C++ was atrocious. Home-made reference counting that was thread-dangerous, and depending on what kind of object the multi-multi-multi diamond inheritance produced, sometimes it would increment and sometimes it wouldn't. Entire objects made out of weird inheritance chains. Even the naming system was crazy: "pencilFactory" wasn't a factory for making pencils, it was anything made by the factory for pencils. Inheritance rather than composition was very clearly the model; if some other object had a function you needed, you would inherit from that too. Which led to some objects inheriting from the same class a half-dozen times in all.
The multi-inheritance system was given weird control: on creation, objects defined which kinds of object (from the set of all the kinds they actually were) they could be cast to, via a special function. But any time someone wanted one that wasn't on that list, they'd just cast to it using C++ anyway. You had to cast, because the functions were all deliberately private - to force you to cast. But not how C++ would expect you to cast, oh no!
Crazy, home made containers that were like Win32 opaque objects; you'd just get a void pointer to the object you wanted, and to get the next one pass that void pointer back in. Obviously trying to copy MS COM with IUnknown and other such home made QueryInterface nonsense, in effect creating their own inheritance system on top of C++.
What I really learned is that it's possible to create systems that maintain years of uptime and keep their frame accuracy even with the most atrocious, utterly insane architecture decisions that make it so clear the original architect was thinking in C the whole time and using C++ to build his own terrible implementation of C++, and THAT'S what he wrote it all in.
I worked on a pure C system early in my career. They implemented multiple inheritance (a bit like Perl/Python MRO style) in pure C. It was nuts, but they didn't abuse it, so it worked OK.
Also, serious question: are there any GUI toolkits that do not use multiple inheritance? Even Java Swing uses multiple inheritance through interfaces. (I guess DotNet does something similar.) Qt has it all over the place.
The best example I can think of is the Win32 controls UI (user32, CreateWindow/RegisterClass) in C. You likely can't read the source code for this, but you can see how Wine did it, or Wine alternatives (like NetBSD's PEACE runtime, now abandoned).
Actually the only toolkit I know of that sort of copied this style is Nakst's Luigi toolkit (also in C).
Neither really uses inheritance; both use composition, with "message passing" to the different controls.
I take this back ;-) People come up with crazy things. Still I would not call this "C thinking". Building object-oriented code in C is common though and works nicely.
This is a good point. It would be better for me to say pure abstract base classes that simulate interfaces in C++. That said, I can say from experience that Qt does more than multiple inheritance with pure abstract base classes. I think the QPainter class is mixed into a few places, and that class is fuckin' major - it is responsible for painting every (cross-platform) pixel in the whole framework.
it is also interesting that places where you would expect to have quite 'switched-on' software development practices tend to be the opposite - and the much-maligned 'codemonkeys' at 'big tech' in fact tend to be pretty damn good.
it was painful for me to accept that the most elite programmers i have ever encountered were the ones working in high frequency trading, finance, and mass-producers of 'slop' (adtech, etc.)
i still ache to work in embedded fields, in an 8 kB constrained environment, writing perfectly correct code without a wasted cycle, but i know from others' experience that embedded software tends to have the worst software developers and software development practices of them all.
What I like most about this math problem is explaining it to people who understand what I'm saying but still insist that it might be possible and they're going to do it. It's a nice lesson for me to think about and carry through life.