
People are, I hope, aware that abuse-reporting systems can themselves become channels of abuse. Facebook's real-names policy is exploited by anti-trans campaigners to force people off the platform, for example. Everyone should be aware of the risks of automated threshold systems being gamed: if all it takes is getting 100 accounts to press "report abuse", that will be abused very quickly.

Local standards also present problems. Do we really want to go along with e.g. Pakistan arresting people for blasphemy?

It's definitely true that anti-abuse systems can themselves be abused, though most of the problems you're describing stem partly from those systems being centralized, right? I also see a lot of comments here along the lines of "but that's censorship!" But the article is discussing decentralized anti-abuse systems, which let individuals set up their own opt-in filters that apply only to themselves and their communities (which means different people might have different filters). Do you think that's different?
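To make that concrete, here's a minimal sketch of opt-in filtering in Python, where each user applies only the filter lists they chose to subscribe to. All the names here (FilterList, Client, wants) are illustrative, not any real system's API:

    from dataclasses import dataclass, field

    @dataclass
    class FilterList:
        """A shareable, opt-in list of blocked author IDs."""
        name: str
        blocked_authors: set[str] = field(default_factory=set)

    @dataclass
    class Client:
        """Each user applies only the lists they subscribed to."""
        subscriptions: list[FilterList] = field(default_factory=list)

        def wants(self, author: str) -> bool:
            # Content is dropped only if one of *this user's* lists blocks it.
            return not any(author in f.blocked_authors for f in self.subscriptions)

    # Two users subscribing to different lists see different content:
    spam = FilterList("community-spam", {"mallory"})
    alice = Client(subscriptions=[spam])
    bob = Client()
    assert not alice.wants("mallory")
    assert bob.wants("mallory")

The point is that the filtering decision runs on the subscriber's side, so there's no central chokepoint for a reporting mob to capture.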



Filters deal with the situation where A is sending to B something that B doesn't want to receive.

The situation where A is sending to B something that's harmful to C cannot be dealt with by C's filtering and can only be addressed at a higher level in the system.

Those are the technical distinctions, but the second case covers a lot of possible things: leaked nudes, lynch-mob organisation, slander, leaked intelligence, compromised party documents, names of human rights activists being leaked to secret police, copyright infringement, child porn, fake news, real news in fake states, allegations that invitations to pizza are evidence of child porn, and so on.
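To see why C's own filter can't cover that second case, here's a toy model in Python: C's filter runs only on messages delivered to C, so an A-to-B message that harms C never passes through it. Names and structure here are hypothetical, just to illustrate the point:

    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        recipient: str
        body: str

    def deliver(msg: Message, filters: dict) -> bool:
        # Only the recipient's own filter runs; nobody else's does.
        recipient_filter = filters.get(msg.recipient, lambda m: True)
        return recipient_filter(msg)

    filters = {"C": lambda m: m.sender != "A"}  # C blocks everything from A

    # A -> C is stopped by C's filter...
    assert not deliver(Message("A", "C", "spam"), filters)
    # ...but A -> B sails through, even if the body harms C.
    assert deliver(Message("A", "B", "something harmful to C"), filters)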


Other things covered by this case include leaks identifying corrupt behavior, allegations of sexual harassment such as Susan Fowler's, and evidence of human rights violations that the authorities want suppressed.

Or various right-wing ideas now softly suppressed on Twitter/Reddit.


>The situation where A is sending to B something that's harmful to C cannot be dealt with by C's filtering and can only be addressed at a higher level in the system.

Huh? If C knows A's public key and the content is signed, why can't C filter A's content?


Content is basically never signed, and I'm talking about situations where the content isn't intended for or sent to C.
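For what it's worth, the key-based filtering the parent comment describes is easy to sketch when content *is* signed; the catch, as noted above, is that it still only applies to content C actually receives. A sketch using PyNaCl (pip install pynacl; the blocklist logic itself is hypothetical):

    import nacl.signing
    import nacl.exceptions

    # A's keypair; in practice C would hold only the public (verify) key.
    a_key = nacl.signing.SigningKey.generate()
    blocked_keys = {a_key.verify_key.encode()}  # C has blocked A's key

    def c_accepts(signed_msg: bytes, claimed_key: bytes) -> bool:
        # Accept only content verifiably signed by a non-blocked key.
        if claimed_key in blocked_keys:
            return False
        try:
            nacl.signing.VerifyKey(claimed_key).verify(signed_msg)
            return True
        except nacl.exceptions.BadSignatureError:
            return False

    signed = a_key.sign(b"some content addressed to C")
    assert not c_accepts(signed, a_key.verify_key.encode())

None of this helps with an A-to-B message C never sees, which is the whole problem with case two.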



