Scott Alexander was inspired to run a contest for adversarial collaborations on his blog, which were really cool. I actually participated in one (roughly about the evidentiary value of spiritual experiences), which I enjoyed a ton; I feel like I learned a lot and made a new friend (my collaboration partner).
I did notice that my research question was kind of ill-defined. The "classic" adversarial collaboration topic is something like the effectiveness of a medical intervention, or the effectiveness of a policy intervention. The format probably works best for those because there is a somewhat clearer notion of what it would mean for each position to be correct!
To me this is a rough implementation of Lakatos’ Research Programmes solution to the demarcation problem.
The next step is to track each adversary's progress over time and determine which group's theories predict outcomes versus which merely add new auxiliary hypotheses to explain discrepancies.
This could help with the problems of pseudoscience and skepticism of scientific institutions: let all the quacks collaborate by adding their programme to the comparison. Their lack of progress should then be simple enough for laymen to follow, and make it obvious that their theories have no predictive power.
Same goes for fraud: you are less likely to fabricate data if it is being publicly compared against numerous other teams with different biases.
There's a whole cavalcade of retrospectively obvious ideas, and this fits rather snugly - it had never occurred to me to use bidirectional improvements via disagreement as a deliberate rather than accidental process.
It forces both sides to argue in good faith, it forces clarity, and even in circumstances where no possible agreement can be reached, it pushes arguments on either side toward their strongest and most reasonable forms.
Of course, it does require that people on either side of a given argument need to be willing to engage and to do so in good faith.
This is a million times better than some attempt to talk over the top of the other person on a Joe Rogan podcast while pretending that something even vaguely resembling science or a debate has occurred.
> some attempt to talk over the top of the other person on a Joe Rogan podcast
Do you have a particular example in mind of when that happened? I am trying to remember any live debate on Rogan, and could only come up with Tim Pool vs Jack Dorsey and Vijaya Gadde, but I do not remember them talking over each other.
I’m skeptical that this is currently possible for the topic du jour (vaccines and pharmaceuticals) simply because the vast majority of funding is from private companies with no incentive to find problems with their products and with the ability to financially punish those who look for problems.
If adversarial studies occurred it would make those debates a lot more valuable, since the discussion could be based on comparisons of the adversarial research rather than the current ‘pharma bad’ or ‘skeptics bad’ framing.
Love this idea. I've always thought it'd be interesting to try to rank contrary replies to threads based on how much the opposing view ranked the reply. It'd certainly be more productive than just going around dunking on each other as is now normal, e.g. on Twitter. Of course dealing with the fact that people are smart and will game such a system is a hard problem.
I'm curious as to what you mean by rank. Instead of a simple up/down signal, do you get something like a +10/-10 scale to rate how much you (dis)agree with a particular comment/link, and then look at the distribution of scores? Or maybe something like a ranked vote of all comments at a particular tier?
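For what it's worth, one way the "score contrary replies by the opposing side's ratings" idea could work is to average only the ratings a comment receives from raters outside the commenter's own camp. This is just a minimal sketch under assumed conventions: it assumes each rater self-identifies with a camp and rates on a -10..+10 scale, and all the names (`cross_camp_scores`, the tuple layout) are hypothetical, not from any real system.

```python
from collections import defaultdict

def cross_camp_scores(ratings):
    """ratings: iterable of (comment_id, commenter_camp, rater_camp, score).
    Returns comment_id -> average score given by raters from the *other* camp."""
    totals = defaultdict(lambda: [0, 0])  # comment_id -> [sum, count]
    for comment_id, commenter_camp, rater_camp, score in ratings:
        if rater_camp != commenter_camp:  # only opposing-side ratings count
            totals[comment_id][0] += score
            totals[comment_id][1] += 1
    return {cid: total / count for cid, (total, count) in totals.items()}

scores = cross_camp_scores([
    ("c1", "pro", "anti", 7),
    ("c1", "pro", "anti", 5),
    ("c1", "pro", "pro", 10),   # same-camp rating, ignored
    ("c2", "anti", "pro", -3),
])
# scores == {"c1": 6.0, "c2": -3.0}
```

Gaming this is still easy (sockpuppets claiming the opposing camp), which is exactly the hard problem mentioned above.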
Sort of. If the collaborators hate each other it goes beyond what the article is talking about and I doubt any fruitful collaboration can happen in such an environment. The article is more about having people from opposing ideologies.