Separation of concerns is still a valid paradigm with a single global data structure, like a GUI, a microservice, or a database. In such a situation one can still separate concerns by composing the global data structure from smaller units and defining methods with respect to those smaller units. That way one does not need to wonder whether there are unattended side effects when calling a function that mutates the state.
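A minimal sketch of that idea, with hypothetical names (GameState, Player, Inventory are illustrative, not from the discussion): the single global structure is composed of small units, and all mutating methods live on those units, so a call site can only touch the sub-structure it was handed.

```python
from dataclasses import dataclass, field

@dataclass
class Inventory:
    items: list[str] = field(default_factory=list)

    def add(self, item: str) -> None:
        self.items.append(item)  # mutation is confined to Inventory

@dataclass
class Player:
    hp: int = 100

    def damage(self, amount: int) -> None:
        self.hp -= amount  # mutation is confined to Player

@dataclass
class GameState:
    # the single global structure, composed from units
    # that are defined and tested on their own
    player: Player = field(default_factory=Player)
    inventory: Inventory = field(default_factory=Inventory)

state = GameState()
state.inventory.add("sword")  # cannot have side effects on state.player
state.player.damage(10)
```

Each unit can be unit-tested without ever constructing the full GameState.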
Seems like one is backpedaling, because one was just talking about one's separation of concerns and now one is defending one's separation of concerns with respect to one's global data structure.
I still firmly believe that one ctx object and a hundred functions/methods is as bad as programming with plain variables defined in the global scope. If the ctx is composed from smaller data structures against which the functions are defined, then all is good. But that is the opposite of the rule.
You keep saying you believe it, but that is literally what a database is. Game state manipulation, string manipulation, iterator algorithms, list comprehensions, range algorithms, image manipulation, etc. are all instances where you use the same data structures over and over, with as many algorithms and functions as you need.
It’s about coupling and being able to maintain that in the long term. A narrow focus helps to test each individual unit in isolation from the others. It is true that a database appears to be a single data structure with hundreds of methods from the user’s perspective, and that is fine, because someone else engineered and tested it for you. However, if you were to look into how a database is implemented, you would see a composition of data structures, like B-trees, that are tested in isolation.
It’s about coupling and being able to maintain that in the long term.
What does that mean? This is all the kind of abstract programming advice that sounds nice until someone needs an example.
A narrow focus helps to test each individual unit in isolation from the others.
A function operating on a data structure is already a narrow focus.
It is true that a database appears to be a single data structure with hundreds of methods from the user’s perspective
And also from a reality perspective because it's literally what a database is about.
However, if you were to look into how a database is implemented, you would see a composition of data structures, like B-trees, that are tested in isolation.
I don't know what point you're trying to make. Data structures should be tested? I don't think anyone is saying they shouldn't.
To put it simply: if you have a function f(ctx) = ctx.a + ctx.b, it is hard to see which arguments produce the output, i.e. which elements of the data structure you need to vary in order to have exhaustive tests. Whereas if one refactors it as f(ctx) = g(ctx.a, ctx.b), you only need to test the function g with respect to (a, b); the forwarding can simply be covered by integration tests, without those tests having to care whether g is implemented correctly.
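The refactor above can be sketched like this (Ctx with fields a and b is the placeholder structure from the example):

```python
from dataclasses import dataclass

@dataclass
class Ctx:
    a: int
    b: int
    # ...imagine many more fields here

# Before: the test surface is the whole ctx object; nothing in the
# signature says which fields matter.
def f_before(ctx: Ctx) -> int:
    return ctx.a + ctx.b

# After: g is a function of exactly the fields it uses, so exhaustive
# tests only need to vary (a, b).
def g(a: int, b: int) -> int:
    return a + b

def f(ctx: Ctx) -> int:
    # thin forwarding; an integration test only needs to check that the
    # right fields are passed through, not that g is correct
    return g(ctx.a, ctx.b)
```

Unit tests exercise g directly; f stays a one-line adapter.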
For such a testing strategy to work, data structures need to be small. It is better to have multiple small data structures than one big universal one where methods are defined at the ctx level, making exhaustive tests difficult.
Perhaps I haven’t been clear. I agree with Pike’s advice strongly. What I am trying to say here is that Perlis’s epigram 9 is diametrically opposed to what Pike says.
For such a testing strategy to work, data structures need to be small.
Why would that be true?
It is better to have multiple small data structures than one big universal one where methods are defined at the ctx level, making exhaustive tests difficult.
I don't think you are backing this up at all; you just keep saying it over and over. It's also not even about big data structures; it's about having fewer data structures and using them over and over. You can see where this is effective even in JavaScript and Lua, with their tables that serve as both hash maps and arrays.
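The tables point can be sketched in Python with a plain dict standing in for a Lua table or JavaScript object (the functions and field names are illustrative): one generic structure, reused by any number of independent functions.

```python
# One generic structure, the way Lua tables or JS objects are used:
# a dict that many unrelated functions all operate on.
record = {"name": "ada", "scores": [90, 75, 88]}

def rename(r: dict, name: str) -> dict:
    return {**r, "name": name}  # returns a new dict, original untouched

def top_score(r: dict) -> int:
    return max(r["scores"])

def with_score(r: dict, s: int) -> dict:
    return {**r, "scores": r["scores"] + [s]}

# Functions compose freely because they all share the one structure.
best = top_score(with_score(rename(record, "grace"), 95))  # 95
```

Nothing here needs a dedicated class per concern; the leverage comes from every function speaking the same structure.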