
I'm inclined to agree with you, but I think the answer is actually more nuanced.

At a minimum, it seems like a cheap and quick way to try to validate an idea before dumping more time and money into it. If it seems like there is something there, then a longer and more expensive study is easier to justify. And when that study does happen, you'll already have data on what people expect to happen, which is nice.

It's not a perfect method for sifting through ideas, but at the end of the day you need some way of deciding what to pursue now, and without time travel there is no perfect method.

And because methods are included in publications, from a scientific standpoint there's nothing wrong with publishing it. Other researchers will read the paper and clearly see what was tested and be able to assign an appropriate amount of significance.

That significance may also vary through time. For example, somebody might conduct research about the accuracy of these imaginings in various ways. Maybe we find out we're actually quite good at them in some ways, and not so good in others. Maybe this is already done, and is why you've seen several recently? I don't know.

The problems mainly come from the popsci aspect. (Some) writers and (some of) the general public see 'scientific paper' and more or less base their entire assessment of significance on that. You can also get a bit of the 'telephone game' going on, and the significance can get all jacked up. I don't know how to solve this problem, but I don't think the correct solution is to alter the scientific process itself.



> At a minimum, it seems like a cheap and quick way to try to validate an idea before dumping more time and money into it. If it seems like there is something there, then a longer and more expensive study is easier to justify. And when that study does happen, you'll already have data on what people expect to happen, which is nice.

That seems quite reasonable.

But lots of popular press coverage along the lines of "look what the scientists figured out", based on an initial validation that merely justifies a longer and more expensive study... maybe less so.

Like, if this really is understood that way... should the pop press be covering it at all? This article is definitely written with the tone of "look what the scientists have learned about humans", not "they got enough validation to do more research", right? It's hard to say what the researchers themselves think about the strength of what they found, if they have your reasonableness about it.

But the researchers are clearly happy to give interviews and be quoted; I mean, it's a lot to expect a researcher to give up good press, sure.

> And because methods are included in publications, from a scientific standpoint there's nothing wrong with publishing it. Other researchers will read the paper and clearly see what was tested and be able to assign an appropriate amount of significance.

I wish I had your generosity of spirit about contemporary practice and motivations of academic science.


> I wish I had your generosity of spirit about contemporary practice and motivations of academic science.

Ha! I seem to have given you a false impression. There are... so many problems. However, I don't think most (any?) researchers are actually ignorant of those problems. They are actually doing research somehow, so they've figured out the game well enough.

I apparently managed to disguise this well when I said 'assign an appropriate amount of significance.' The unstated implication there was that it may well be virtually none, particularly if it doesn't fit with a larger body of work.

In this article for example, it's paired up with another similar finding from an actual physical experiment, which lends it more weight.

> But the researchers are clearly happy to give interviews and be quoted

I always wonder about this. When I've seen longer form interviews with researchers, they almost always seem appropriately uncertain. At least they do sometimes, I might just not be knowledgeable enough about the field to know if/when they aren't.

Meanwhile, you don't get much of that in popsci work, beyond maybe a small disclaimer at the beginning.

I think Adam Ragusea (a cooking YouTuber of all people, though admittedly he's also an ex-journalism professor, and has dealt with being the interviewee and having things go poorly) actually covered the intricacies of this really well in a video[0].

I've tried too many times to write "Long story short, ..." here, and have ended up with way too much meandering. Just watch the video. It's worth it, and does a great job of covering the perspective of all parties involved.

I think it's appropriate to assume good faith in individual instances - there are reasonable justifications for misrepresenting the certainty of some work in this kind of context. It's unlikely to cause any actual harm, and probably gets more people intellectually stimulated (which is probably good in both objective and subjective ways). In this article it even lets them make a concrete suggestion that has no cost to try and can probably help some people.

I don't know that there is an objective way to weigh those benefits against presenting a more accurate representation of the certainty of some information, when a misunderstanding of it is unlikely to ever harm anybody. I assign quite a bit of value to that accuracy inherently, but that's coming from the core of my value system. I can't present a logical argument for its correctness any more than I can for the exact degree to which I think human suffering is bad. As a result, I think there's room here for reasonable disagreements about where to draw the line.

[0]https://www.youtube.com/watch?v=fxUnwsttr_8 (I think this is very illuminating if you're interested in this topic. I highly suggest finding the time for it, even if you understandably don't find the time to read the rest of this over-long comment.)


> However, I don't think most (any?) researchers are actually ignorant of those problems. They are actually doing research somehow, so they've figured out the game well enough

The problem is that "winning the game" is about publications, tenure, and (less important but increasingly also) popular recognition, rather than about the size of your contribution to valid research results (one hopes that publications and tenure correlate, and yet...).

So, getting results of dubious validity published, and covered in popular press, can be exactly "figuring out the game well enough", an end in itself, rather than a minor step toward a research program.

You are right that I don't want to assume bad faith in any individual instances, and yet... here we are, in aggregate.

What are the costs? I dunno, mis-education of the popular audience, for one, I guess. Maybe that doesn't matter, but, it kind of does? And the opportunity cost of all the scientific labor being spent on "easiest to publish the most times" instead of what might be highest priority to discover. And the aggregate collective effect of a scientific community enabling each other on that.


> The problem is that "winning the game" is about publications, tenure, and ...

Totally. 100% with you there. I didn't mean researchers aren't doing those things, but rather that papers built for those ends are probably not polluting 'science' as a whole. The body of work product is getting increasingly swampy, but the researchers are also the best positioned people in the world to sieve through it.

A large part of their job is to sort out what a paper actually tested and what it actually observed to begin with. These 'game playing' papers are quite blatant in comparison to the kinds of problems researchers already need to look for.

> What are the costs? I dunno, mis-education of the popular audience, for one, I guess. Maybe that doesn't matter, but, it kind of does?

Totally agree. I personally find it kind of abhorrent. I also can't honestly tell you why beyond "because" and am pretty certain most people don't feel as strongly about it as I do. I try to be mindful of that, but I think it mostly just gets used when I end up playing devil's advocate.

> And the opportunity cost of all the scientific labor being spent on "easiest to publish the most times" instead of what might be highest priority to discover.

Sure thing. It also increases time spent on literature searches for all that, and will for quite some time even if things changed tomorrow.

On the other hand, maybe it provides an easier on-ramp for teaching prospective researchers to be skeptical even when something is in a Paper, and even how to analyze them? At least it's nice to hope there's some benefit to all this.


> The problem is that "winning the game" is about publications, tenure, and (less important but increasingly also) popular recognition, rather than about the size of your contribution to valid research results (one hopes that publications and tenure correlate, and yet...).

> So, getting results of dubious validity published, and covered in popular press, can be exactly "figuring out the game well enough", an end in itself, rather than a minor step toward a research program.

You're describing Science by Press Conference/Press Release.

https://en.wikipedia.org/wiki/Science_by_press_conference


This is where gaming-based simulations (usually based on videogames) are apparently catching on as a very interesting realm for psychological experiments.

It's a game and a simulation, so the set-up cost is low (once the game itself exists), but participants actually seem to get highly invested in the game mechanics. (This might be a stretch for people on HN to believe, but stay with me ;-) And it's possible to run some very-large-scale studies by tapping into existing gaming titles and platforms, particularly MMOGs.

EVE Online is among the instances I'm aware of specifically:

https://www.eurekalert.org/news-releases/765393

https://cs.stanford.edu/people/eroberts/cs201/projects/virtu...


I would guess that a lot of the validation for studies like this comes from marketing research. Marketing often runs "study groups" to try to figure out whether, say, one product name is better than another. Would you buy "ZimZam Fluid" or "Superlicious"? The study participants are not confronted with the actual product (it may not exist yet) but have to imagine which they would prefer. If this approach shows success in marketing (which is all about people's behavior), then it seems likely it is useful for figuring out other things about how people work and what they will think in specific situations. What people imagine is a clue to how they think; you could think of it as a simulation. People often imagine future outcomes as a way to make decisions in the real world.

What you are also saying is that the way research is funded determines how research is done. You need a reputation to get big money. To get a reputation, you have to start small with the little money/time you have. Hence lots of studies are done that are too small and not well controlled, and are essentially useless because no one will believe they have conclusively proved anything. However, once you have 10 papers based on these poor studies, you start to look like an expert and it becomes easier to attract bigger grants. By this time you may have actually formed some ideas about what it is you want to research, and abandon your original ideas as shots in the dark. So it's good in that it helps focus research and helps researchers gain experience, but it's bad in that it results in lots of studies that are not very useful to anyone (like all the medical studies you read about where, when the topic becomes important, as in a pandemic, you find scientists calling out studies as underpowered, too small, not an RCT, and useless).

It seems possible that basing a PhD on making a unique discovery ("your contribution", as it is called) is misguided. It results in the useless studies above, done to gain reputation, and it means researchers won't touch each other's ideas for fear of being labeled derivative instead of unique. It might be much better to have all scientists in a field brainstorm ideas and contribute them to a common list, from which individual grad students and researchers could choose. The list could be ranked by the same crowd (and hopefully by outsiders as well) and used to assign grant amounts ahead of the selection of an actual researcher applying for the grant. That might result in better and more important topics being studied, an easier path to reputation for new scientists, and more useful studies being done that actually advance science. Or maybe it would mean that a cabal of scientists and politicians takes over the list and misdirects research for decades, which would not be much different from what we have now.



