
I'm ok with filtering humans who fail the Turing test.


The issue here is that the Turing test is getting ever harder - aren't you afraid of the day when you start regularly failing it yourself?


Furthermore: have you thought about what this means for people with, e.g., a serious cognitive disability? Your statement argues for prohibiting them from participating in public discourse wholesale.


In the maximal case where bots are indistinguishable from humans, by definition the odds of failing the Turing test are 50/50. A coin flip per comment. Yeah, I can put up with that.

Realistically, I don't think I will ever start regularly failing it. Go ahead, call it arrogance or hubris or whatever. I've looked at the originality and predictability metrics on a large corpus of my online comments. Let's just say you won't be replacing me with GPT-3 or similar.
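
(For the curious, here's roughly the kind of "predictability metric" I mean: score each comment by its perplexity under a public language model. This is just a minimal sketch of my own - the choice of GPT-2 as the reference model and perplexity as the score are my assumptions, not a description of any particular tool.)

    # Sketch: rank comments by how predictable they look to a small
    # public language model. Assumes the Hugging Face "gpt2" checkpoint;
    # higher perplexity = less predictable text.
    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Passing labels equal to input_ids makes the model return the
        # mean token-level cross-entropy as `loss`; exponentiate to get
        # perplexity.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        return math.exp(out.loss.item())

    comments = [
        "I'm ok with filtering humans who fail the Turing test.",
        "Great post, very informative, thanks for sharing!",
    ]
    for c in comments:
        print(f"{perplexity(c):8.1f}  {c}")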


This is diverging from the strict definition of the Turing test, but I've had the bad luck of having my YouTube comments silently deleted (or worse, shadowbanned). It's so infuriating (it seemingly punishes some of the very ways in which I've tried to make my comments better) that I've mostly stopped posting them.

Not to mention that in cases where it results in an outright ban, you'd better hope the odds are better than 50/50! (New users are particularly susceptible.)



