I would like to know! I can go back 22 years. Then, jobs were more likely to be apps running on Windows. People yelling at the screen because things aren't rendering properly (rendering as in pixels, not virtual DOMs!). No unit tests. SourceSafe (an old, buggy, but simple-to-use VCS). You could exclusively lock a file to annoy other developers and stamp the importance of your work. No scrum and much less process. 9-to-5-ish and no time tracking. No OKRs or KPIs. Do everything with Microsoft tooling. No open source tooling. Someone's job was to build an installer and get it burned to a CD (the optical storage medium). There was some automated testing, but no unit tests or CI/CD. Not so many "perks" like a snazzy office, toys, food supplies, etc. If there was webdev, it would be in ASP or ActiveX!
That's a very windows-centric view of the past. And with good reason too! Windows was utterly dominant back then. Still, Slackware was 7 years old by the year 2000. Running the 2.2 Linux kernel, compiled with open source GCC. Websites were cgi-bin and perl. Yeesh I've been running Linux a long time...
On the Windows side, NSIS was an open source tool released that year. And I was writing Windows programs in Visual Studio with MFC.
> That's a very windows-centric view of the past. And with good reason too! Windows was utterly dominant back then.
Running servers on Windows? Yeah, a few people who didn't know better did that, but it would be completely inaccurate to describe Windows as "utterly dominant". It ruled the desktop (and to a large extent still does), but it barely made it to parity with *nix systems on the server side before Linux (and FreeBSD in some cases) knocked it back down.
It entirely depends on what you are counting, but I do think your comment is extremely misleading because Microsoft was important for business web servers in 2000. “a few people who didn't know better did that” is outright deceptive.
> The dominant position of Microsoft's proprietary IIS in the Fortune 500 makes Windows NT a lock for the most used operating system undergirding the Web servers -- 43 percent. But the idea that Sun Microsystems Inc.'s Internet presence is weakening isn't supported by the numbers. Sun's Solaris holds a clear second place at 36 percent, with all other operating systems falling into the noise level. Linux showed up at only 10 companies.
It is fair to say that in 2000 Linux was beginning its growth curve for web servers, and all other OSes were starting their decline. I do note the Fortune 500 had a lot fewer tech companies back then (zero in the top 10) and churn has increased a lot (perhaps due to not following technological changes): "Fifty-two percent of the Fortune 500 companies from the year 2000 are now extinct", and "Fifty years ago, the life expectancy of a Fortune 500 brand was 75 years; now it's less than 15".
22 years ago I was programming on an almost entirely open source stack: Linux servers, vim, Perl, and we paid for Sybase. We used CVS for source control, and when I heard about SourceSafe's restrictions I was shocked.
We had unit tests, though it was your own job to run them before merging. If you broke them, you were shamed by the rest of the team. We also had a dedicated lab for automated functional tests and load testing using Mercury Interactive's tooling (don't miss that) that we would use to test things out before upgrading our servers.
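For younger readers, a unit test on a stack like that could be as simple as the Test::More sketch below, run by hand with `prove t/` before you merged. This is a made-up example, not their actual suite; the MyApp::Price module and its total() function are hypothetical, and Test::More itself only arrived around 2001 (earlier tests used the similar Test.pm):

    # t/price.t -- hypothetical pre-merge test file
    use strict;
    use warnings;
    use Test::More tests => 2;        # declare how many tests we plan to run

    use MyApp::Price;                 # made-up module under test

    is( MyApp::Price::total(2, 3.50), 7, 'total multiplies qty by unit price' );
    is( MyApp::Price::total(0, 3.50), 0, 'zero quantity costs nothing' );

Nothing enforced it: you ran the suite yourself, and if you forgot and broke the build, the shaming described above was the CI.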
We used the techniques outlined in Steve McConnell's Rapid Development, a sort of proto-agile (and, editorializing a bit, it got all the good parts right while scrum did the opposite).
I had all of this 11 years ago and it was BLISS. Oh, MS SourceSafe and its locked files! No merge conflicts or rebasing clownery, ever! It forced two people working on the same code to sync up, and this avoided so many conflicts. Customers called with small bug reports; I could fix them in 5 minutes and deploy to production right from Eclipse.
That's nice. I think MS programming stacks were most popular in the UK outside of universities (universities would also have Unix, Oracle DB, and SunOS). I guess in California it would more likely skew Unix/Sun?
I (a) ran a very early Internet provider and then worked in (b) oil and (c) finance, where good networking, speed, and reliability were enough to make *nix a sensible choice. Though (for example) the finance world tried to move to MS to save money (and indeed I got paid a lot to port, maintain, and optimise code across platforms, including MS), the TCO thing would keep biting them...
In 1988, Airbus delivered the first A320 with fly-by-wire flight controls, and it was safe.
In 1998, the RATP inaugurated line 14 of the Paris metro, fully automated, after formally proving that its software could never fail.
Gitlab didn't exist back then, and yet these companies made a code that was safe.
I guess the main driver of code quality is whether the company cares and has proper specifications, engineering before coding, and quality management procedures, more than the tech tooling.
It certainly is simpler now to make quality code.
But don't forget that software used to be safe, and it was a choice of companies like Microsoft, with Windows, or more recently Boeing with the 737 Max, to let the users beta test code and patch it afterwards (Aka early, reckless agile)
So yeah, modern code looks less buggy. But it's mainly because companies care, IMO.
> It certainly is simpler now to make quality code.
Just think of the log4j fiasco last year. Or the famous left-pad thing. Perhaps you don't import any dependencies, but just imagine the complexity of (for example) the JVM. Point is, you can surely write "quality code", but even with quality code it's much harder to control the quality of the end product.
Requirements have gotten more complex too. 30 years ago, people were generally happy with computers automating mundane parts of a process. These days we expect software to outperform humans unsupervised (self-driving?). With exploding requirements and the complexity they bring, software is bound to become more and more buggy.
Quality assurance and software engineering can be applied everywhere, no matter the processes you use to create and deliver the code.
Methods and tools will differ depending on context, but ANY serious company ought to do quality management.
At the very least, know your code, think a few moves ahead, make sure you deliver safe code, and apply some amount of ISO 9001 at the company level (and hopefully much more at every other level).
Also, a security analysis is mandatory for both industrial code and IT applications, thanks to standards, laws like the GDPR and its principle of privacy by design, and contractual requirements from serious partners. You risk a lot if your code leaks customer data or crashes a plane.
It's the same for having 'specifications'.
Call them functional and safety requirements, tickets, personas, user stories, or any other name, but you have to do them to be able to work with the devs and to describe to your customer and users what you have actually developed.
The 'lots of things [that] cannot be' scares me as a junior engineer.
I feel like they are made by those shady companies that offer two interns and a junior to get you a turnkey solution within 12 hours. It also brings back bad memories of homework done at the last minute at uni, and I would never do that again.
And as far as I've seen, in both cases the resulting software is painful to use or to evolve afterwards.
> describe to your customer and users what you have actually developed.
In the domain I work in, what customers want (and what we provide) changes monthly at worst, annually at best. And in many cases, customers do not know what they want until they have already used some existing version, and what they want is subject to continual revision as their understanding of their own goals evolves.
This is true for more or less all software used in "creative" fields.
I don't understand how this practice makes your modern code more reliable, sorry.
I was replying to
>Are modern codebases with modern practices less buggy than the ones from 20 years ago?
I understood that @NayamAmarshe was acknowledging the new practices and tools introduced after my examples from the 80s, 90s, and early 2000s (mostly agile everywhere, and V-model methods becoming a red flag on a resume and in business meetings).
It seemed to be the essence of their question.
So all I was saying was that code from back then was capable of being safe. Reliability wasn't invented by modern practices.
Modern practices have only changed the development process, as you mentioned. Not the safety.
And if they did change it, they affected safety, since writing provably safe code with the new practices is still being researched at the academic level.
(check out the case of functional safety vs/with agile methods)
Can you explain how you make your code less buggy than code from 20 years ago that was written with the practices from back then?
My point was that you cannot use the software development processes used in planes and transportation systems in every area of software development. Those processes rely heavily on a fully determined specification, and such specifications do not exist for all (maybe even most?) areas.
If you're inevitably locked into a cycle of evolving customer expectations and desires, it is extremely hard, and possibly impossible, to build, for example, a full-coverage testing harness.
IMO yes. Software is a lot more reliable than it was 25 years ago. This boils down to:
1. Unit/regression testing, CI
2. Code reviews, and good code review tools.
3. Much more use of garbage collected languages.
4. Crash reporting/analytics combined with online updates.
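To make point 4 concrete, here's a minimal sketch (in Perl, to match the rest of the thread) of the idea behind crash reporting: trap fatal errors and record them somewhere developers will actually see them, instead of letting them vanish on the user's machine. The log path is made up, and a real app would post the report to a crash-reporting service rather than a local file:

    #!/usr/bin/perl
    # Minimal crash-report hook: log uncaught fatal errors before exiting.
    use strict;
    use warnings;

    $SIG{__DIE__} = sub {
        my ($err) = @_;
        return if $^S;                                   # ignore errors caught by eval
        open my $fh, '>>', '/tmp/crash.log' or return;   # hypothetical report sink
        print $fh scalar(localtime), " fatal: $err";
        close $fh;
        # a real handler would also capture version and stack information
    };

    die "something went wrong\n";   # gets logged, then the program still exits

Combined with online updates, that feedback loop is what lets a fix ship days after a crash first appears in the field, instead of waiting for the next boxed release.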
Desktop software back in the early/mid nineties was incredibly unreliable. When I was at school and they were teaching Win 3.1 and MS Office, we were told to save our work every few minutes, and that "it crashed" would not be accepted as an excuse for not handing work in on time, because things crashed so often you were just expected to anticipate that and (manually) save files like mad.
Programming anything was a constant exercise in hitting segfaults (access violations to Windows devs), and crashes in binary blobs where you didn't have access to any of the code. It was expected that if you used an API wrong you'd just corrupt memory or get garbage pixels. Nothing did any logging, there were no exceptions, at best you might get a vague error code. A large chunk of debugging work back then would involve guessing what might be going wrong, or just randomly trying things until you were no longer hitting the bugs. There was no StackOverflow of course but even if there had been, you got so little useful information when something went wrong that you couldn't even ask useful questions most of the time. And bugs were considered more or less an immutable fact of life. There was often no good way to report bugs to the OS or tool vendors, and even if you did, the bad code would be out there for years so you'd need to work around it anyway.
These days it's really rare for software to just crash. I don't even remember the last time a mobile app crashed on me, for example. Web apps don't really crash, although arguably that's because if anything goes wrong they just keep blindly ploughing forward regardless, and if the result is nonsensical, no matter. Software is just drastically more robust, and if crashes do get shipped, the devs find out and they get fixed fast.
It improves velocity, not code quality. You can achieve the same quality levels without all that, but making changes takes much more time.
Delivery costs of software are way down in many domains (SaaS teams frequently deliver dozens or hundreds of releases a day). That would not be possible without automated tests.