> If all your algorithms are as trivial as calculating a weighted modulo 11 checksum, then the sort of case I'm thinking of doesn't apply.
My estimate is that 98% of all programming everywhere is as algorithmically trivial as calculating a weighted modulo 11 checksum - probably more so - and it acquires its bugginess from accidental complexity due to poor factoring, and from conflicts at interfaces. Test-driven development is pretty good, in my experience, at helping ameliorate both these problems.
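For anyone who hasn't met one: a weighted modulo 11 checksum is the kind of thing used for ISBN-10 check digits, and it really is about as trivial as algorithms get. A minimal Python sketch (using the ISBN-10 weighting as one concrete instance; function name is mine):

```python
def mod11_check_digit(digits):
    """Weighted modulo-11 check digit, ISBN-10 style.

    Each digit is multiplied by a weight running from len(digits)+1
    down to 2, the products are summed, and the check digit is whatever
    makes the grand total divisible by 11 (10 is written as 'X').
    """
    weights = range(len(digits) + 1, 1, -1)
    total = sum(w * d for w, d in zip(weights, digits))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

# The first nine digits of K&R's ISBN-10 (0-13-110362-8):
print(mod11_check_digit([0, 1, 3, 1, 1, 0, 3, 6, 2]))  # prints "8"
```

Ten lines of arithmetic, no cleverness required, which is rather the point.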
Of course, that doesn't mean I actually do it 100% or even 80% of the time. I'm happy to agree that it's no panacea: testing threads and UIs are particular pain points for me, and I usually substitute either Thinking Really Hard or just Not Changing Stuff As Much.
Formal proof, for me, is stuff I learnt at college, forgot subsequently, and keep meaning to read up on again. Thank you for prompting it back up my TODO list.
> My estimate is that 98% of all programming everywhere is as algorithmically trivial as calculating a weighted modulo 11 checksum - probably more so - and it acquires its bugginess from accidental complexity due to poor factoring, and from conflicts at interfaces.
I think it depends a lot on your field.
If you're working in a field that is mostly databases and UI code, with a typical schema and most user interaction done via forms and maybe the occasional dashboard-type graphic, then 98% might even be conservative.
On the other hand, if you're doing some serious data munging within your code, 98% could be off by an order of magnitude. That work might be number crunching in the core of a mathematical modelling application, more advanced UI such as parsing a written language or rendering a complex visualisation, other I/O with non-trivial data processing like encryption, compression or multimedia encoding, and no doubt many other fields too.
Generalising from one person's individual experience is always dangerous in programming. I've noticed that developers who come from the DB/business apps world often underestimate how many other programming fields there are. Meanwhile, programmers who delight in mathematical intricacies and low-level hackery often forget that most widely-used practical applications, at least outside of embedded code, are basically a database with some sort of UI on top. And no, the irony that I have just generalised from my own personal experience is not lost on me. :-)
This can lead to awkward situations where practical problems that are faced all the time by one group are casually dismissed by another group as a situation you should never be in that is obviously due to some sort of bad design or newbie programmer error. I'm pretty sure a lot of the more-heat-than-light discussions that surround controversial processes like TDD ultimately come down to people with very different backgrounds making very different assumptions.