Hacker News

In addition to playing into anti-vaxxers’ belief that they are being silenced for nefarious purposes, reducing their arguments’ visibility also reduces the likelihood someone will publish a well-reasoned counterargument.

Skeptic videos might instead be disseminated on sites frequented solely by those willing to believe them, where viewers will be less exposed to dissenting opinions.



I think by now we should understand that it usually takes an order of magnitude more effort to counteract a false claim than it takes to make it in the first place. If your proposed approach were viable, there wouldn't be people around today saying that vaccines cause autism, since the original paper making that claim has been debunked many, many times. And yet that lie is still extremely pervasive in society and directly causing harm to people.

Part of this is because recommendation algorithms on social media sites like YouTube get people into positive feedback loops. Suppose you find an anti-vaxx video and YouTube recommends two follow-ups: one reinforcing the video you've just seen (making you feel smart for having found and accepted its information) and one debunking it (making you feel stupid for having wasted your time on it). Which do you think the average person is more likely to pick? Eventually the algorithm will "naturally" learn that people watching anti-vaxx videos don't want to see debunkings, and will stop showing those to them at all. The only way to solve this paradox is to blanket-ban the anti-vaxx videos.
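The feedback loop described above can be sketched in a few lines. This is a toy simulation, not YouTube's actual system: the two candidate "videos", the greedy rank-by-click-rate rule, and the click rates (0.6 for reinforcing content, 0.1 for debunkings) are all made-up assumptions chosen to illustrate the lock-in effect.

```python
# Toy model: a recommender that greedily shows whichever of two candidate
# videos has the higher observed click-through rate (CTR).
# "reinforce" = agrees with what the user just watched; "debunk" = contradicts it.
stats = {"reinforce": {"clicks": 1.0, "shown": 2},   # smoothed priors so both
         "debunk":    {"clicks": 1.0, "shown": 2}}   # start at the same 0.5 CTR

# Assumed underlying preferences: viewers click reinforcing content far
# more often than debunkings (0.6 vs 0.1 -- illustrative numbers only).
TRUE_CLICK_RATE = {"reinforce": 0.6, "debunk": 0.1}

def rate(video):
    return stats[video]["clicks"] / stats[video]["shown"]

for _ in range(1000):
    shown = max(stats, key=rate)               # greedy: recommend highest CTR
    stats[shown]["shown"] += 1
    stats[shown]["clicks"] += TRUE_CLICK_RATE[shown]  # expected clicks (deterministic)

print(stats["reinforce"]["shown"], stats["debunk"]["shown"])  # → 1002 2
```

After a single tie-breaking impression, the reinforcing video's measured CTR edges above 0.5 and the debunking video is never recommended again; its statistics freeze, so the algorithm never re-learns. That is the "positive feedback loop": the system isn't suppressing debunkings out of malice, it has simply stopped gathering the data that could change its ranking.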


Oh, I’m well aware how much easier it is to throw out random unscientific claims than it is to respond to them analytically.

Which is actually part of why I’m opposed to a blanket ban. I’ve had to personally wade through papers and studies to determine whether a vaccine skeptic (an M.D., at that) had correctly interpreted the results.

They hadn’t. So I’d rather have someone else, with some expertise and clout, spend some time on it and publish their counterargument.


Ah, I missed your last sentence. Either way, though, I still think there is a benefit to banning anti-vaxx content from general-audience platforms: it stops people who are not actively seeking out anti-vaxx misinformation from being exposed to it in the first place. A great example of this strategy working is Reddit. When they ban a hateful subreddit (like r/fatpeoplehate), the quality of discourse on the site tends to noticeably improve for a period afterwards. People who joined Reddit just to be hateful have less reason to stay, and fewer "ordinary" people are exposed to those hateful ideas, limiting their spread within the "general" user base. I believe the same approach works equally well with misinformation movements.


Ultimately, I can’t fault a corporate entity for wishing to improve their customers’ experience by preventing dissemination of potentially harmful material. Perhaps I would agree with them if I had insight into their cost-benefit analysis. I still generally prefer to have all types of ideas out in the open.


You have premised that some content recommender uses an algorithm that creates a clustering of positions, and you conclude that «the only way» is to eliminate one of the two positions. I hope stating it that way makes it clear where the issue is.

Which /also/ means that the reasonable moderates of the censored position disappear. With consequences.

Which should contain the rebuttal to that first "proposal": the "centrist algorithm".



