While the individual examples listed here are not good, what are the false-positive and false-negative rates for FB's method of censorship?
Globally scaling hate-speech censorship is a problem that many content websites (YouTube, Twitter, etc.) face. This article, with its leaked deck and false-positive examples, seems like it's trying to generate shock at "how the sausage gets made". Any process involving humans will have outliers, but the question is: how effective is the process, really?
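To make the question concrete, here's a minimal sketch of what measuring those rates would look like, given an audited sample of moderation decisions. Everything here is hypothetical: the `error_rates` helper and the toy sample are invented for illustration, and a real audit would rely on human re-labeling of a random sample of the system's decisions.

```python
def error_rates(decisions):
    """decisions: list of (flagged_by_system, actually_violates) booleans."""
    # False positive: benign content that the system flagged.
    fp = sum(1 for flagged, bad in decisions if flagged and not bad)
    # False negative: violating content that the system missed.
    fn = sum(1 for flagged, bad in decisions if not flagged and bad)
    negatives = sum(1 for _, bad in decisions if not bad)  # benign posts
    positives = sum(1 for _, bad in decisions if bad)      # violating posts
    fp_rate = fp / negatives if negatives else 0.0
    fn_rate = fn / positives if positives else 0.0
    return fp_rate, fn_rate

# Toy audit sample: (did the system flag it?, did it actually violate policy?)
sample = [
    (True, True), (True, False), (False, True),
    (False, False), (False, False), (True, True),
]
fp_rate, fn_rate = error_rates(sample)
print(fp_rate, fn_rate)  # 1 FP among 3 benign posts, 1 FN among 3 violating posts
```

The hard part, of course, is not the arithmetic but getting trustworthy ground-truth labels at scale, which is exactly where processes involving humans produce the outliers the article showcases.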
The problem is that the system is being gamed heavily. The Russian troll factory was very effectively banning Ukrainians speaking out against Russian aggression by mass-flagging posts on FB, and the fact that the moderators for the region were/are mostly Russian was not helping either.
If only Facebook, Twitter, and YouTube were completely full of willing listeners for hate speech, then that might be applicable. Hate groups aren't sending private forum messages to each other, they're broadcasting.
Even John Stuart Mill, a founding philosopher of free speech, postulated that speech needs to be curtailed when the expression harms others.
John Stuart Mill had higher criteria than mere harm. The words spoken in the French Revolution led directly to the deaths of nobility. The fact that they led directly to the harming of individuals did not mean they should not have been spoken. The words spoken at a trial may directly lead to the imprisonment of the accused; should those words too not be spoken, due to the harm they cause the accused?
The classical line has been at incitement. It understands that humans are prone to irrationality in the heat of anger. But outside of that it is understood that some harm may be necessary for justice or other ideals to be served. Speech is the mechanism we use to weigh the costs of our actions.
"We have now recognised the necessity to the mental well-being of mankind (on which all their other well-being depends) of freedom of opinion, and freedom of the expression of opinion, on four distinct grounds; which we will now briefly recapitulate.
First, if any opinion is compelled to silence, that opinion may, for aught we can certainly know, be true. To deny this is to assume our own infallibility.
Secondly, though the silenced opinion be an error, it may, and very commonly does, contain a portion of truth; and since the general or prevailing opinion on any subject is rarely or never the whole truth, it is only by the collision of adverse opinions that the remainder of the truth has any chance of being supplied.
Thirdly, even if the received opinion be not only true, but the whole truth; unless it is suffered to be, and actually is, vigorously and earnestly contested, it will, by most of those who receive it, be held in the manner of a prejudice, with little comprehension or feeling of its rational grounds. And not only this, but, fourthly, the meaning of the doctrine itself will be in danger of being lost, or enfeebled, and deprived of its vital effect on the character and conduct: the dogma becoming a mere formal profession, inefficacious for good, but cumbering the ground, and preventing the growth of any real and heartfelt conviction, from reason or personal experience."
These platforms also censor private person-to-person messaging, between people who don't complain about the content. That's willing-speaker-to-willing-listener, and being censored.
And, rather than trying to square the circle of global speech standards enforced by crude algorithms and rushed, underinformed low-wage moderators, they could make the remedy for unwanted speech simply unsubscribing from/unfriending the source. And, for users who feel disturbed by broadcasts, FB could offer opt-in moderation circles – leaving all willing-listeners, with no grievances, unmolested by third-party censorship.
Then, willing-speakers sending legal speech to willing-listeners wouldn't be collateral damage of their clumsy improvised censorship regimes.
They do censor private messaging, but usually because it's used in targeted harassment of individuals.
FB and others could also do as you said - make quarantine zones for specific prejudicial groups and their ideologies.
Except time and again these groups have proven they don't want to stay in their own special self-moderated zones but are actively looking to broadcast. There are businesses built upon these prejudicial world views, and deplatforming them just shows how heavily they rely on broadcasting:
(The intended receiver can be frustrated or confused that they couldn't receive the message their friend wanted to send.)
And as you typically have to opt-in to messaging with someone new, they could wait for complaints rather than have content-based prior restraint. And, react to specific abuse with post-action blocking of the harasser, or quarantining-the-harasser as someone who'd need extra explicit acceptance before their messages are delivered.
There is no broadcast on platforms like Facebook. Users always have the features to control what they see. As for "deplatforming", that means censorship and the history of it shows that it doesn't work in the long run. Those views don't go away and the totalitarianism required to implement it invariably destroys itself, making millions miserable along the way.
So this is the thing now? We don't believe freedom of speech is a good thing anymore? Who decided this? I don't recall being consulted on this. Not that I'd think of arguing, mind you! I'm loyal, I swear!
It seems the pushes against freedom of speech ideals in the US greatly increased after the 2016 election, coming mostly from those who share the ideology of the losing party targeting the speech of the winning ideology/party.
That's true to a degree. But a lot of people are also legitimately troubled by the Russian use of the media to stir up trouble.
Personally, I'm in favor of free speech. I'm not in favor of being inundated by manipulative propaganda from a foreign power. I don't know how to reconcile those two views.
I mean, I guess I'm in favor of curtailing dishonest advertising, and that's kind of what propaganda is, but... I'm not sure where the lines are.
Back before internet access was widespread I saw a lot of people online who foresaw a future where an internet connection provided access to the sum of all human knowledge, and forums to participate in the marketplace of ideas. Between human curiosity, good will and people's natural desire to not be wrong, everyone will learn and reason their way to the truth (incidentally ending up agreeing with me).
Surely once everyone can access the equivalent of several university educations online, the masses will come to realise that sodomy should be legal for libertarian reasons; transparency and access to information will mean voters will demand and get honest officials; that employers will realise it's not rational to discriminate against a sysadmin for not wearing a dress shirt; and it'll be the year of Linux on the Desktop.
In the subsequent decades, people have re-evaluated some of those predictions for understandable reasons.
I'd be interested in the cite and context of the John Stuart Mill writing you're alluding to.
And I wonder if his reasoning applies to the kinds of speech policing Facebook is doing, or only to the same sorts of things modern US free-speech law makes illegal: direct threats & incitements to imminent violent acts. (Notably not illegal: 'hate speech', blasphemy, insults, abstract advocacy of most crimes, etc.)
So-called hate speech doesn’t cause harm other than to feelings. It’s constitutionally protected in the US, and Facebook is going down the wrong path trying to eliminate it.
Yes. Here, Facebook is making a good-faith effort to censor, we can see how difficult a time they are having writing rules for it, and Facebook is being criticized for it. There is no good way to do prior restraint.
Read Justice Brandeis' concurring opinion in Whitney v. California.[1] He wrote the classic decision on this subject, and I can't improve on that.
"Those who won our independence believed that the final end of the State was to make men free to develop their faculties, and that, in its government, the deliberative forces should prevail over the arbitrary. They valued liberty both as an end, and as a means. They believed liberty to be the secret of happiness, and courage to be the secret of liberty. They believed that freedom to think as you will and to speak as you think are means indispensable to the discovery and spread of political truth; that, without free speech and assembly, discussion would be futile; that, with them, discussion affords ordinarily adequate protection against the dissemination of noxious doctrine; that the greatest menace to freedom is an inert people; that public discussion is a political duty, and that this should be a fundamental principle of the American government. They recognized the risks to which all human institutions are subject. But they knew that order cannot be secured merely through fear of punishment for its infraction; that it is hazardous to discourage thought, hope and imagination; that fear breeds repression; that repression breeds hate; that hate menaces stable government; that the path of safety lies in the opportunity to discuss freely supposed grievances and proposed remedies, and that the fitting remedy for evil counsels is good ones. Believing in the power of reason as applied through public discussion, they eschewed silence coerced by law -- the argument of force in its worst form. Recognizing the occasional tyrannies of governing majorities, they amended the Constitution so that free speech and assembly should be guaranteed."
"Fear of serious injury cannot alone justify suppression of free speech and assembly. Men feared witches and burnt women. It is the function of speech to free men from the bondage of irrational fears. To justify suppression of free speech, there must be reasonable ground to fear that serious evil will result if free speech is practiced. There must be reasonable ground to believe that the danger apprehended is imminent. There must be reasonable ground to believe that the evil to be prevented is a serious one. Every denunciation of existing law tends in some measure to increase the probability that there will be violation of it. Condonation of a breach enhances the probability. Expressions of approval add to the probability. Propagation of the criminal state of mind by teaching syndicalism increases it. Advocacy of law-breaking heightens it still further. But even advocacy of violation, however reprehensible morally, is not a justification for denying free speech where the advocacy falls short of incitement and there is nothing to indicate that the advocacy would be immediately acted on. The wide difference between advocacy and incitement, between preparation and attempt, between assembling and conspiracy, must be borne in mind. In order to support a finding of clear and present danger, it must be shown either that immediate serious violence was to be expected or was advocated, or that the past conduct furnished reason to believe that such advocacy was then contemplated."
"Those who won our independence by revolution were not cowards. They did not fear political change. They did not exalt order at the cost of liberty. To courageous, self-reliant men, with confidence in the power of free and fearless reasoning applied through the processes of popular government, no danger flowing from speech can be deemed clear and present unless the incidence of the evil apprehended is so imminent that it may befall before there is opportunity for full discussion. If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence. Only an emergency can justify repression. Such must be the rule if authority is to be reconciled with freedom. [n4] Such, in my opinion, is the command of the Constitution. It is therefore always open to Americans to challenge a law abridging free speech and assembly by showing that there was no emergency justifying it."
"Moreover, even imminent danger cannot justify resort to prohibition of these functions essential to effective democracy unless the evil apprehended is relatively serious. Prohibition of free speech and assembly is a measure so stringent that it would be inappropriate as the means for averting a relatively trivial harm to society. A police measure may be unconstitutional merely because the remedy, although effective as means of protection, is unduly harsh or oppressive. Thus, a State might, in the exercise of its police power, make any trespass upon the land of another a crime, regardless of the results or of the intent or purpose of the trespasser. It might, also, punish an attempt, a conspiracy, or an incitement to commit the trespass. But it is hardly conceivable that this Court would hold constitutional a statute which punished as a felony the mere voluntary assembly with a society formed to teach that pedestrians had the moral right to cross unenclosed, unposted, wastelands and to advocate their doing so, even if there was imminent danger that advocacy would lead to a trespass. The fact that speech is likely to result in some violence or in destruction of property is not enough to justify its suppression. There must be the probability of serious injury to the State. Among free men, the deterrents ordinarily to be applied to prevent crime are education and punishment for violations of the law, not abridgment of the rights of free speech and assembly."
Note the standard Brandeis laid down:
"In order to support a finding of clear and present danger, it must be shown either that immediate serious violence was to be expected or was advocated, or that the past conduct furnished reason to believe that such advocacy was then contemplated." "Immediate serious violence". That's the US standard for prior restraint. That's what Facebook, as a US company, should be using.
> “Immediate serious violence". That's the US standard for prior restraint.
No, it's the US standard for applying the coercive power of government to prior restraint.
The US standard for private actors applying prior restraint to their own platforms is “do what you will, so long as you apply any prior restraint the government has dictated based on immediate serious violence.”
> That's what Facebook, as a US company, should be using.
Facebook is not the US federal government or a US state government, and thus the logic underlying why the government is limited to prior restraint in those narrow conditions does not apply.
This reaction is so stereotypical, so ubiquitous, that I wonder why people bother to type it out: why not just link https://xkcd.com/1357/ and save the keystrokes?
This reaction is also false.
Facebook is part of the state!
Why? Organisations like FB organise large-scale information flow, in particular regarding controversial political issues. The majority of citizens receive the information regarding political decisions (e.g. who should be president) filtered and structured via FB. Hence organisations like Facebook are currently part of the state! They are public infrastructure, and should be subject to democratic control.
Arguably, if the state permits a company to become a de-facto monopoly, it should be regulated as a public utility. That was a mainstream position from 1920 to 1980.
There is a sense in which all corporations are part of the chartering State, and there is a sense in which all citizens (and hence, all combinations involving citizens, with or without others) are part of the State of which they are citizens, particularly if it is a (direct or representative) democracy. But neither of those (nor any other sense in which Facebook might be considered associated with the State) is relevant here, because Facebook is not applying the coercive power of the state to prevent others from using their own resources to share information, and that is the critical aspect of state prior restraint, the reason it is restricted.
> The majority of citizens receive the information regarding political decisions (e.g. who should be president) filtered and structured via FB.
No, they don't. Online is still below TV as a news source, both in preference and actual use [0][1]. FB, or online as a whole, isn't the most used news source, much less the majority source. A very large minority receives some of their news, including political news, via social media, sure.
> Hence organisations like Facebook are currently part of the the state!
Even if the premise wasn't false, the conclusion offered wouldn't follow from it.
Downvoted for the first half, which seems more likely to lead to problems than useful discourse. I'd have otherwise upvoted for the ideas in the second half; I'm not sure whether I agree with them but seeing this site dig into them sounds interesting.
Do you think I'm wrong, or do you agree with the veracity of my observation, but not with the style of its presentation? If the former, I invite you to study past discussions of the subject (e.g. googling: site:ycombinator.com freedom of speech xkcd). If the latter, how do you recommend presenting such a subject?
You've clearly never heard of "stochastic terrorism". Words have consequences, and hate speech can have serious consequences. Just because something is included under blanket protections doesn't mean it's harmless to society.
If you owned every pub, street, square, coffee shop, theater, restaurant, and other social and public discussion space in the world, do you think it'd be fine for you to enforce your arbitrary opinions on everyone in those spaces?
What if it's just people discussing things you don't like amongst themselves, with willing speakers and willing listeners?
Facebook does not own every public space. Facebook is not, itself, a public space. If I had to describe it, I would call it a publicly accessible but privately owned space. Even if one decided to host something there that would be allowed in public streets (for example, a protest rally), the landowner can still kick you off their property at any time.
On the internet, if one wants to express hate speech, there are still websites to do so that may not be Facebook.
So, to you, there is no level of dominance of channels of communication at which a corporation takes on some of the responsibility of managing a public space? If they owned everything minus one, you'd be 100% okay with them censoring anyone for any reason, and you think they should face zero criticism of that whatsoever? Because they own it, they can do what they want with it, and property rights are absolute. Just trying to follow your logic (and highlight its incoherence).
As for other places to express speech you label "hate speech" - not really. Domain registrars, hosting providers, and payment processors will shut you down. See what happened to SubscribeStar recently. There actually is nowhere to go if the consortium of megacorporations decides they don't like you.
Cyberpunk was supposed to be a dystopia but you seem to desperately want to bring it about.
You are assuming my opinion on the subject. Please consider my points in good faith.
To clarify: My perspective is that we have reached a point where it is clear that in the space of online, there are a few dominant companies that effectively control population traffic and attention through hosting spaces. The internet can be viewed as a public private space, with all the consequences of such. Changing it would require government-level regulation, bringing in the established checks on speech restriction that the government has.
I do not want any particular consequence. I am undecided on the pros/cons of possible solutions.
It's constitutionally protected from the government infringing upon it. As Facebook is not the government, they are perfectly free to limit speech in any way they choose.
What hope does constitutional free speech have for the future if its citizenry doesn't hold the principle dear? The public must actually practice something that approximates the free speech protections they want from the government or else they will lose those protections.
If you have an argument to make about why this line of thinking is worrisome or lacking in understanding you ought to make it. As it stands your comment is just patronizing.
I’m surprised political speech is not protected specifically, as it is in the UK, Australia, and France, for example, but I still don’t agree the trade-off for Nazis is worth it.
> I’m surprised political speech is not protected specifically, as it is in the UK, Australia, and France, for example
It is protected. It's just that hate speech is an explicit exception from that rule (which is the whole point of that designation). Most of hate speech is political in some way.
So it all boils down to who gets to define what is "hate speech", exactly.
> I still don’t agree the trade off for Nazis is worth it.
What trade-off? Neo-Nazis flourish in the very same countries that have extensive censorship of their ideologies and speech. Look at Germany with their "streitbare Demokratie" for an example - they don't just censor speech, they outright ban parties solely on the basis of their platforms. And for those efforts, they get yearly neo-Nazi marches that are several times bigger than the one-off we've seen in Charlottesville, and AfD is now the second most popular party.
Because hate speech does much more than hurt feelings; calling it that minimizes it as if it were just a schoolyard issue. Hate speech normalized ostracizing Jews in WWII Germany, and we know how that turned out. Hate speech incited genocide in Myanmar via the military’s posts on FB. Hate speech normalizes the extreme fringes and pulls nationalism into mainstream politics. Hate speech influences elections and spreads false information to irrationally enrage certain bases. Hate speech and false rumors cause rural villages to assume outsiders are rapists/pedophiles, which leads to mob rage that ends in murder.
Facebook is effectively a new kind of weapon that accelerates the ability to target hate speech and let it spread uncontested with no basis in reality, but when done enough “becomes” reality via confirmation bias. It allows a very vocal but tiny population a microphone to amplify their hateful messages and grow beyond what would otherwise naturally occur.
With great power comes great responsibility, and FB/YouTube/etc. absolutely should do their best to keep evil forces from utilizing these platforms to spread false and hateful messages, because it really does end in massive repercussions for society, and even death.
Check out what happened in Rwanda. This one random guy was going on the radio and spewing hate and lies, encouraging people to pick up machetes and kill their neighbors and family members based on this nuanced, arbitrary colonial racial construct.
It went viral.
Low key, I'm basically horrified by the hashtag version of this popping up. There's a poignant Black Mirror episode about this, I believe.
Although... I do believe that the relative diversity of today's public internet forums is a safeguard against this. That spares me a little of my worry.
> Check out what happened in Rwanda. This one random guy was going on the radio and spewing hate and lies, encouraging people to pick up machetes and kill their neighbors and family members based on this nuanced, arbitrary colonial racial construct.
> The genocide was planned by members of the core Hutu political elite, many of whom occupied positions at top levels of the national government. Perpetrators came from the Rwandan army, the Gendarmerie, and government-backed militias including the Interahamwe and Impuzamugambi.
[..]
> The RPF military campaign had resulted in some intensified support for the so-called "Hutu Power" ideology, which portrayed the RPF as an alien force. In radio programs and other news, the Tutsis were portrayed as non-Christian, intent on reinstating the Tutsi monarchy and enslaving the Hutus. Many Hutu reacted to this prospect with extreme opposition. In the lead-up to the genocide, the number of machetes imported into Rwanda increased.
> On 6 April 1994, an aeroplane carrying Habyarimana and Burundian President Cyprien Ntaryamira was shot down on its descent into Kigali. At the time, the plane was in the airspace above Habyarimana's house. The assassination of Habyarimana ended the peace accords.
> Genocidal killings began the following day. Soldiers, police, and militia quickly executed key Tutsi and moderate Hutu military and political leaders who could have assumed control in the ensuing power vacuum.
I didn't read all of that right now, but just from reading some of it, it's very obvious that summing it up as "this one random guy on the radio going viral with racist rants", with the rest following from that -- instead of the radio broadcasts being part of something that was already underway -- is kinda stunning. Especially in this context, preparing the ground for shutting up random people, because what they say might go viral and lead to genocide. You called it "a hashtag version of this", after all, of what you claim caused the Rwandan genocide.
Who was this random guy? Who told you this story, or where do you have it from? How did this claim go viral, that one random guy on Rwanda ranted on the radio, and it ended in genocide?
> I do believe that the relative diversity of today's public internet forums is a safeguard against this
If anything, it's people on the ground, in real life, living together, that would inoculate them against people on the radio or on the web telling them how other people are and what they will do if not attacked first. FWIW, the internet seems less diverse in spirit than it was; we used to be small groups of totally random people who had to get along and get to know each other, because we couldn't just block everyone or drive them off.
Thank you for pushing back, I appreciate that. I see now that I presented an overly simplistic narrative, which I took from a documentary long ago when I was young.
> Who was this random guy? Who told you this story, or where do you have it from? How did this claim go viral, that one random guy on Rwanda ranted on the radio, and it ended in genocide?
I picked this up from a documentary I saw years ago. It was probably made in the late 90s or early 2000s. I don't remember what it was called, although I think I saw it on PBS.
The following article gives a very good overview of the origins of the myth I perpetuated in my above comment (starting at the bottom of page 3, there is a section, "Common Claims about Media Effects in Rwanda—and a Critique"):
> Pulitzer-Prize winner Samantha Power claims, “Killers in Rwanda often carried a machete in one hand and a radio transistor in the other.” (The implication being radio delivered instructions, and then men attacked with machetes.) Such conceptualizations suggest a strong causal link between radio broadcasts and genocidal violence.
The radio station was called RTLM, and actually had a handful of broadcasters (2 or 3?), but was quite small. To me, they are still just random guys in the same way that Sean Hannity is just a random guy, or Alex Jones. Although now I read that they had direct connections with the organizers of the genocide:
https://en.wikipedia.org/wiki/Radio_T%C3%A9l%C3%A9vision_Lib...
I didn't mean to imply that the radio broadcasts solely caused the genocide, but more so that they played a critical role in exacerbating civilian participation in it, enabling that participation in a triggering way. I would also argue that the radio broadcasts played a role in galvanizing and normalizing hatred and motivation in the more organized militia and military participants.
Here is a popular empirical study which attempts to "quantify" the effect of RTLM:
> The findings herein do not, of course, mean that the RTLM broadcasts had no effect on the genocide. Indeed, there is good reason to think that they had some effect on specific instances of killing, given that there is evidence that the broadcasts were used explicitly to coordinate the actions of some of the genocidaires (Kirschke 1996, Des Forges 1999). In addition, because, as Straus (2015) notes, elites do not need large-scale popular participation for genocide to take place but rather only need ‘popular compliance – they need citizens not to mobilise to resist violence’ (66), it is certainly plausible that exposure to anti-Tutsi propaganda might have made implementation of the genocide easier, such as by reminding listeners of the founding, exclusionary ideology of the country (Straus 2015), by causing civilians to internalise norms that legitimised the killing (Smeulers and Hoex 2010), or by spurring the adoption of oppositional identities (Fearon and Laitin 2000, Mamdani 2001, Kalyvas and Kocher 2007; Balcells and Steele 2016).
My fear still stands, that mob "hashtag" violence could pop up any time now, that it could spill out in a way that would be hard to contain. Arguably, it already has and does. I don't blame media as a cause, but it certainly is a conduit.
People very dear to me were at that protest, and I was out of town. I heard about it on Twitter and was very afraid that someone close to me could have been shot.
> If anything, it's people on the ground, in real life, living together, that would inoculate them against people on the radio or on the web telling them how other people are and what they will do if not attacked first. FWIW, the internet seems less diverse in spirit than it was, we used to be small groups of totally random people who had to get along and get to know each other, because we couldn't just block everyone or drive them off.
I absolutely agree with you in all of this, especially the part about people living together in real life. Self-segregation is a dangerous condition, especially when it is driven by inequality, fear, misunderstanding and hate. Too many of us passively accept, even foster, segregated lives without realizing the powder kegs we unwittingly charge.
And what I meant by "relatively diverse" is that the internet's public forums are way more dynamic than a small number of radio stations and newspapers, which were the primary media channels in Rwanda. Of course it cannot and must not be taken for granted that the internet will always be this way.
Oh, another thing, something I wanted to note but forgot because I got carried away: I absolutely don't want to mention my experience with being kicked from Facebook, or any mobbing attempts I may have found myself a target of, in the same breath as the trauma of not knowing whether some people you know and love may have got killed. And I also didn't mean the video of 100 people being jerks in Portland as some kind of response to the protest your friends went to. I hope it doesn't read that way in the first place, but it can't hurt to state it, which I planned to and then forgot.
I remember when a woman I was interwebs friends with casually mentioned about a bombing in Israel, "oh hey, that's a bus stop I drive past on my way to work every day". It's silly, because I'm an adult and I have an imagination, but just that tiny bit of personal connection made the situation feel vastly more real for me, and kinda sent me reeling for a bit (I may have had a little crush on her, tbh). And that was after I already knew she was okay. And I knew she lived in Israel, and that bad things happen there too often, so why would it make a difference that the bus stop was on her way to work? It makes no sense, but it made all the difference to me, like seeing it for the first time. So I can not even imagine how that must have been for you, not to mention your friends. I have the utmost respect and sympathy for that, and my response wasn't intended as some kind of "yes yes, sure, BUT!"
> I didn't mean to imply that the radio broadcasts solely caused the genocide, but more-so that they played a critical role in exacerbating civilian participation in it
But you can see how the way you put it, with that "It went viral." on its own line even, reads as implying that that random guy single-handedly caused it all, right?
I was kicked from Facebook after 9 years of using it, with many RL friends and classmates, always under my real name, with photos, with photos I was tagged in. After a time during which I corrected hardliners in both Palestine and Israel, and for my troubles got called a supporter of the other side by each (though I had FB friendship extended to me by two awesome people, a Jewish author and an Israeli journalist, for bitching so much in comments on Haaretz articles), I suddenly had my account suspended and was asked to provide ID.
Mind you, not one of my posts had ever violated any guidelines. No posts or comments were reported, just me as a person. And of course, I did not get to face my accusers, either. Those "user report" mechanisms also enable mobs, and shutting someone up may not be violence, but it is the objective of violence, the end to which violence is just one of several means. Achieving it without violence doesn't make it okay, at all. I have no illusions that FB doesn't already have any info that would be in my ID anyway; it's not a privacy thing, it's just a matter of principle to me. I won't play along with that, so "I got kicked off Facebook", and I'm naturally wary of whoever might get banned under the guise of "hate speech getting banned".
That is, I don't just accept that at face value, not in a world where there is STILL ElsaGate content on YouTube, 1 year after YouTube claimed it's gonna fix that ASAP, which in turn was after years of being begged to do anything. In a world where I run the risk of getting mocked with "think of the children!" for mentioning that, while it's considered normal to think of adults who seek out words they don't like and get offended.
So I'd want every single case of someone getting banned for hate speech or whatever to be documented and verifiable; that would be a start. I'm not going to take the word of the people in charge, nor the word of the people who think everything must be in order because they themselves haven't been on the wrong end of it yet. And I got lucky to get kicked over 2 years ago and not sometime in the future, because already I see people saying about the Patreon stuff, "oh, some might be innocent, but most are really nasty". To me, that is in spirit more aligned with radio broadcasts vilifying people than with a useful tool against them. It's a weapon, not a tool at all.
This dehumanizing, this "throw them out, I don't care where they end up, just out of sight, out of mind", also spills over into physical violence. How could it not? Some examples here: https://www.youtube.com/watch?v=pbqqefsfC14
They're not fighting evil. They're being evil. They're using evil that others did to do the evil they want to do. The actions matter, not the words. Even more, in part it's what people don't do, not just what they do. Everybody can get angry sometimes, but not everybody can laugh at themselves or feel compassion for others.
Either we support "due process" (I put it in quotes because I'm not thinking of a specific process as much as one that applies to all), or our grievances lose their foundation. That is, if we don't think the same principles and rules should apply to all, on what basis could we possibly criticize someone for saying a certain race shouldn't vote, or anything like that?
I'm from Germany, and usually my part in discussions about free speech is being an asshole for thinking it's perfectly fine that we can't walk around glorifying Nazism. Facebook had so many horrid groups and comments spouting lies to incite hate and violence even 2.5 years ago; I don't even want to know how bad it's gotten since. So while I'm for, uhhh, "Facebook being allowed" to ban people or groups, I'm concerned with opaque processes and double standards, and the normalization of treating speech like a luxury that is granted or withdrawn by an unaccountable group.
If I were to post here linking your legal name, identifying information, and residence to this post, issuing a call to have you disemployed and harassed at home, evicted if possible, and subjected to a general boycott covering everything from sex to food... what then?
I don't think this is as great of a counterpoint as you seem to think it is.
All of the information you state is public record, and for someone like me who always uses my real name, I'm not concerned about people knowing it. You are free to call whomever you like to say I should be "disemployed and harassed at home", but making fraudulent claims and engaging in harassment can be handled like they always have been: by me calling my local power brokers (police, etc.) to prevent you from breaking the law.
Your implicit point, though, is that systems of free speech, when combined with systems of wide-ranging, granular information about individuals, can be abused.
If your solution is to get rid of the former, rather than limit access to the latter or create better systems of verification with power structures, then indeed you're (IMO) on the wrong side of a free society.
The thing that gets me about this argument is that, not only is this not a hypothetical scenario, there's a rather large overlap between the people who've been using these tactics to shut people up and the people demanding that sites like Facebook clamp down on "harmful" speech. They're like two parallel tactics in the same war to silence undesirable views. If we give in to the demands for sites like Facebook to eradicate harmful speech, we'll still have the same campaigns to fire and harass people, they just won't be able to complain about it because the social media websites will be under the exact same pressure to shut them up as their employers.
The poster never said they were meaningless, simply that they aren't violence.
That is, speech itself cannot deprive someone of liberties. Only physical action can do that. It's unquestionable that certain forms of speech[1] can "incite" violence, and we have laws that cover such speech.
> The poster never said they were meaningless, simply that they aren't violence.
Indeed, and the top post in this thread didn’t use the word “violence.”
The point is that words do have profound impacts on society. Words can and do lead to material conditions which may cause actual harm to someone's day-to-day life.
As I said, regardless of where one falls on Facebook’s right to pick and choose their own standards, let’s not pretend words are so meaningless they will never lead to material changes in someone’s life. That’s a silly implication. We fight hard for speech precisely because words have so much power.
this. (and the article mentions some of the gov entwining)
And
It's often just not seen, which to me is worse than being "laundered to look like it's [private-party]'s fault".
When speech is erased, or otherwise hidden and downranked, many people, both the poster of the content and those who have followed or liked it and thereby expressed a desire to see it, assume the content never existed at all.
So there is no blame, because so many do not realize the censorship happened in the first place, which is even worse imho.
Look, if the people posting racist hate-speech get banned from all the sites I use, my life is much better. They can go take their trash somewhere else.
You are defending some pretty awful excuses for humans....
Yes. See, the problem is, you are a "pretty awful excuse for a human" to somebody. Do you want them to be able to block your speech? No? Then you need to extend the same courtesy to those you consider to be "pretty awful excuses for humans".
Good. Seriously. It's good that you're not a racist scumbag.
But the actual quality of you as a human being wasn't the issue. The issue is that, if you're going to block speech by scumbags, there is someone who is going to consider you a scumbag. If you allow speech from scumbags to get blocked, then you need the people controlling the blocking to have the right ideas of who is a scumbag. Will the right people be the ones making the decisions? Will they always be the right people - not just this year and next, but always?
I'm not willing to condone that power, because it can be used against me, not just against the people I want it to be used against.
By defending their freedom of speech, I am defending my freedom of speech. I may not care about theirs, but I care about mine.
And for you to not be worried about this being used against you... you just said that these people are in power. That means that they at least have the capability of using it against you in the near future.
It is very important to protect the right to free speech and expression. It is equally important that we don't downplay the potency of communication and the singularly powerful role that it plays in the behaviors of humans.
Words are powerful, invoking mental constructs and emotions. Indeed, everything ever created or destroyed by a human began as a mental construct and/or emotion. Words themselves are therefore critical to destruction or creation, and can be so much more than just violent: they will make or break the entire human species.
Words are used to spread ideas and tell stories, no? That much we can agree on?
Do we agree that stories are a powerful medium, one that can make people happy? What about making people sad? What about educating people?
If 'words' can do all of this, then surely 'words' can also incite harm?
Incitement is a specific class of speech that is generally prohibited. I see nobody here arguing against this.
However, much speech that is not incitement is also called hate speech, and that part of the Venn diagram is what we're talking about. These words are called harmful to justify the use of force to silence them, on much more abstract grounds than incitement.
Words are not the only form of speech, and physical violence is not the only way to cause harm. This is a "No True Scotsman", since hate speech is limited to neither.
You’re talking about hate speech as if it were a universally agreed upon concept. Hate speech is entirely in the eye of the beholder. Any speech I consider to be hateful is hate speech, any speech you consider to be hateful is hate speech, and there are 7.5 billion other unique sets of criteria out there for defining hate speech that are no more or no less valid.
If you want to get into a semantics argument on the definition of hate speech, then here's Wikipedia's definition:
>Hate speech is speech that attacks a person or group on the basis of attributes such as race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity.
Wikipedia’s definition is no better. Even if you accept this definition, now you’re faced with the impossible task of defining what constitutes an attack on those grounds. The issue of defining hate speech is at the core of the free speech concerns, because what constitutes hate speech, by necessity, comes down entirely to the whims and personal opinions of whoever is enforcing the censorship.
Ah, but Wikipedia covers that in the very next sentence:
>The laws of some countries describe hate speech as speech, gestures, conduct, writing, or displays that incite violence or prejudicial actions against a protected group or individuals on the basis of their membership in the group, or disparages or intimidates a protected group, or individuals on the basis of their membership in the group.
Great. Except nobody is using that definition when talking about policing online speech. In fact, those categories of speech have never been protected. It's never been legal to solicit violence or to harass people. None of the big tech firms that have engaged in this type of censorship have done so solely on the basis of policing non-protected speech, so it's entirely dishonest to say that hate speech starts and ends with legal statutes.
It has two paragraphs describing that large tech companies review hate speech deemed illegal by the EU. That is not a description of how those companies interpret hate speech at all, and doesn't come close to describing their hate speech policies, which, as I said, do not start and end with statutes.
please explain how Facebook did not contribute to the Rohingya "situation" in [Myanmar].
please document how violence-inciting speech never helps spark actual violence (xenophobic violence everywhere since the dawn of speech).
i know, i know... guns don't kill people, people kill people. the issue is: we're all latent murderers. civilization is a very thin veneer. if we were all perfectly rational, maybe we could all get along much better. but we're merely speaking [animals] and need all the help we can get to maintain our outward civility.
i know it's a very fine line to tread, but we must not throw out the baby with the bathwater.
The real support for violence is not hidden in the Facebook walls of morons, it's on broadcast news. Virtually every news channel came out against Trump ending our war in Syria. By comparison, I couldn't care less what some dumb racists say on Facebook.
It's impossible for any piece of content on Twitter or YouTube to reach you against your will - you have to go to that site and look at that content. Facebook is a bit trickier, since your default view is formed by "friends" actions and "suggestions", but it's pretty easy to make it so no undesirable content gets through. Most complaints are not about somebody inadvertently seeing undesirable content, but from somebody who went looking for it, found it, and then demanded that the content be removed from the platform - despite many willing consumers of the content. So don't bring the shade of John Stuart Mill in here - the "harm" to others is only the feeling these "others" feel when they see somebody thinks differently from what they want them to think, and they are not willing to tolerate it, not on their platform.
it is very often not a willing listener. and there is plenty of content that should be blocked between participants regardless of their willingness to share it.
The entire point is that you can't measure false-positive or false-negative rate since any definition of what should be banned or not is subjective and somewhat arbitrary. If the result could be non-ambiguously measured then the censorship could be non-ambiguously implemented.
I wonder if Facebook has had to hire political experts in every region to draft these guidelines. I would think you would need a sizable task force of political experts to have the necessary expertise to do this accurately for every country in the world.
The article indicates that all the rules are made centrally, in Menlo Park. I would find it very surprising if they had enough consultants to come up with comprehensive, culturally aware guidelines that allow moderators to make nuanced decisions about what to ban and what to leave up. Instead, their goal seems to be keeping Facebook out of trouble. As long as they can point to a guideline that says that hate group X is banned, they're covered.
To Facebook, the goal isn't to prevent the transmission of hate speech. The goal is to stop Facebook from being perceived as a vehicle to transmit hate speech. Facebook executives know that in advertising, perception overrides reality. They're willing to do whatever it takes to keep up the perception that Facebook is doing everything in its power to stop hate speech, even if those measures have little or no practical impact (or even if they backfire and suppress speech that should be allowed).
I was thinking about that Google rule book when responding to this recent HN thread: https://news.ycombinator.com/item?id=18773263 (Google Is Smashing Multi-Discipline Websites to Combat Fake News).
The truly terrible thing is that most people do not know that Facebook and Google are censoring to the degree they are.
Most (99.9%?) don't know that there is a group of human moderators and site rankers trying to follow some esoteric multi-page set of guidelines to determine what is shown and what is not shown.
We know nothing of these people who are choosing what gets seen or not seen in our state / country / city, etc. What are their political leanings? Their religious views?
How much time are these people given to decide whether something you say, or something you expect to hear, gets blocked? How much do these people make for making these important decisions? What is their education level?
And what about those same questions for the people who make these sets of rules? Also, how are these things affecting the algorithms?
These things matter. The lack of transparency is bad for users and for those who think they are speaking to their audience.
>They consist of dozens of unorganized PowerPoint presentations and Excel spreadsheets with bureaucratic titles like “Western Balkans Hate Orgs and Figures” and “Credible Violence: Implementation standards.”
>In Pakistan, moderators were told to watch some parties and their supporters for prohibited speech.
>In another email, moderators were told to hunt down and remove rumors wrongly accusing an Israeli soldier of killing a Palestinian medic.
These are very similar to the information collected by and activities engaged in by a state intelligence organization.
They can’t have this responsibility. It gives them too much centralized power and fails to devolve power locally, where people are better fit to understand the audience and the content.
I really think the answer is in localized, community-based moderation (with appeals to localized corp) like forums used to be. Obviously that goes against their will to control every aspect of their platform.
Well there are localized social networks in countries where Facebook is blocked or just unpopular. WeChat in China, VKontakte in Russia. I'm not sure that's any better?
What does localized even mean anymore, and who should decide whether someone is sufficiently "local" to qualify as a moderator?
Devolving moderation to the national level (as opposed to centralized global moderation) would be a start, but so would self-selecting groups. If you choose to be in a Singaporean page, group, forum, etc., you abide by their localization.
But what does that mean for a global group? Seriously?
For example: I help run a large art group on facebook. (Large = 76,975 members.) While the breakdown of our group shows the largest percentage of folks in the group are from the US (20k), lots of folks are from India (15k), and the rest are from elsewhere in the world. We list ourselves as global. The unpaid admins and moderators are from different countries and time zones. We use Translate functions at times.
What localisation do we have? Do we have a choice? If one admin is in Norway, can we use a Norwegian model of nudity censorship instead of the more restrictive American or Indian or Facebook models of "acceptable" nudity? Can you list locally with one admin even if your demographics do not fit? Do large countries always have the advantage? If demographics change, does the censorship style?
These are all issues with "local" or national level censorship, and I don't think there is an easy answer.
What is a "national" level? What nation should set content moderation standards for conflict zones and disputed territories such as Palestine, Tibet, Taiwan, Syria, Nagorno-Karabakh, Kosovo, Western Sahara, etc?
How about territories within nations? Should the same moderation standards apply to Mississippi and California?
There are no clear right or wrong standards here. Any changes are as likely to make things worse as better due to unintended consequences. The Facebook employees in Menlo Park have made some bad decisions on occasion but they at least try to do the right thing most of the time within the constraints imposed by their business model. National authorities often have more sinister motives.
Recall the early FB stories where Zuckerberg was pretty gung-ho about eavesdropping on people's private info (and the company as a whole did it). Said behavior has probably been moderated due to legal and financial ramifications, but I don't doubt it's still there in some lax form.
How does everyone have so much trouble with Facebook and Twitter? I have both and have used both for years and I know that both require you to pick what you want to look at. I exhaust my Twitter feed in seconds and then have nothing to look at.
Both my FB feed and my Twitter feed are more heartwarming, adorable, funny, or touching than angry or whatever. Maybe all of this is just a reflection of who you guys are.
And I'm no stranger to controversy. Ban the bomb, no war, I've been there. Where I haven't been is Twitter/FB flamewars. And you know how? I unfollow as soon as I don't like. It's not the end of the world.
Yeah, I tend to agree. At some level, I keep coming back to the fact that these are just websites. Like, you can turn off your computer and go for a walk or something.
I don't use facebook or twitter, so I guess that colors my perception as to how trivial they are in the broader picture of life, but it also means I don't totally understand the issues that people who use the sites face.
They're where various sorts of "public life" happen. While you can be a journalist without being on Twitter, you're cutting yourself off from that feeling of being informed. It's much harder to get by if you're not on Facebook (one of my friends in the trade reports having over 3000 "friends" on Facebook; effectively that represents an externally managed contacts book).
Then there's the problem that you don't have to be on either website for people to use it to campaign against you. From trivial tactics like getting you fired or SWATted to more egregious things like the Rohingya genocide.
Famous people always get some harassment. We used to call it "hate mail", now it's "hate replies". It comes along with being well known. Also, you can turn off notifications for @ replies from people you don't follow if it's an issue.
Facebook should add a privacy setting that allows you to disable @mentions completely, or limit them to people you have added as a friend. I would not be surprised if this is already a feature.
I was wondering about the @ thing. There is a strange account that keeps @ my account name and a few other people in my field whose names I recognize. I would like to be able to disable that.
To borrow the catch phrase from XKCD, sometimes, it's just that:
Someone is WRONG on the internet.
But there's a sort of natural conductivity/inductivity to social networks, that varies as you chain together personalities, and the dumber you go, the more emotive things get.
Ask any lawyer: picking a jury usually involves pulling the dumbest, most incompetent people while polling for their gut alignment relative to the case at hand. These are the most emotional, and likely most easily driven to judgement. They also usually lack power in their daily lives, and are eager for some trigger time, so to speak.
So, now, put that on the internet. Pack these people together in clusters relative to their home towns and high schools, and never let them escape each other. Preserve every last memory, especially the most embarrassing ones that their family members can command, to bring an obnoxious personality to heel, if possible.
Take everyone with a shit job, with company and store policies designed to screw the customer. Every call center goon, every cellular and wireless store clerk gunning for contract sign-ups. Ask them what's wrong with the world.
Now, randomly distribute agitprop into a flashing marquee on their home page to gin them up, as they start cruising for love interests and the unattainable crushes that taunt them on a daily basis. Give them parking tickets, too many bills to pay. Make them late with the rent. Get them pregnant.
The problem as usual is centralization on the Web. That’s what caused the VCs to fund Facebook and why it became this behemoth. (Peter Thiel thinks a monopoly is awesome.)
Take for example a law passed earlier this year which required networks to remove posts dealing with child trafficking. The EFF and many free speech advocates went ballistic:
But what if we had open source software that any small community could use to run their own Facebook-like social network?
Look, Wordpress has been around for 15 years and people have figured out how to clean up their own blog spam. Communities like HN or a local village can easily police their own posts. Local communities also know the local language, customs, and laws.
I commend Facebook for allocating more resources to enforce its community standards; what's the point in outlining standards if there is no enforcement?
I'm not surprised that the initial implementation has had some rough patches. For this effort to be successful, Facebook and other platforms are going to have to invest heavily in building relationships with cultural experts and stakeholders in hundreds of different countries and regions. Each region has its own unique history and flash-points, and it's best to understand these from folks who are on the ground rather than from overworked moderators and engineers in Menlo Park.
Facebook got to where it is by being a centralized social platform, but successful community standards enforcement at this level will take a more decentralized, hyper-local approach.
Also, similarly to how large tech companies publish yearly reports on diversity metrics and data requests from governments, if Facebook is really going to make this effort a priority, it should publish a report detailing its efforts.
The problem is that nothing Facebook does will actually solve the problem. The solution that would actually work, having different social networks with different community standards for each, is antithetical to Facebook's existence. Facebook will continue to make more PowerPoint presentations and hire more outsourced content moderators in a desperate attempt to make the unworkable work... or at least to be seen putting in the effort until the political controversies have blown over.
I keep hearing that additional social networks will solve the problem. How exactly do additional social networks solve this problem? Won't they be even more inclined to serve addictive content under the threat of cut-throat competition? WhatsApp, for example, is also used a lot for the propagation of rumours. How will additional social networks make people immune to their biases, or magically neutralize all the extreme opinions on the internet? How will additional social networks magically verify facts?
Additional social networks will prevent Facebook from imposing a one-size-fits-all speech policy on the entire world. Yes, some countries' social networks will be filled with hate speech and conspiracy theories. Other countries will ban such speech, just like Germany bans Nazi memorabilia. Yet other countries will err towards allowing a free marketplace of ideas, like the US does with its First Amendment guarantees.
Facebook cannot and should not be the one attempting to reconcile cultural norms across the entire world into a single speech code that satisfies everyone.
I wouldn't trust my own government with moderating the social network activity of our nation. They are so corrupt that I can't even begin to describe. It's better in the hands of a multinational, at least their interests are related to ads, not political suppression :(
In my opinion the problem with social networks is just the effect of an underlying cause, which is that many people are so poor, uneducated, sick or old that they only care about the immediate future. Religious and inflexible people will believe anything that reaffirms their beliefs. They will bite at all fake news and spread it around and vote in a disastrous way for the future of the country.
Unfortunately there will always be a part of the population that lacks long term concern. You can't improve social networks when people are lacking long term concern, which is usually the domain of young educated professionals, who have the most to lose from this folly.
Not everyone wants Facebook imposing its notion of community standards on their online discourse. Right now, Facebook is the dominant social network, so if you want to reach people, you have to play by Facebook's rules, no matter how arbitrary and culturally imperialistic they are. If there were a plethora of social networks, you could reach people while maintaining the norms of your community, even if those norms were in opposition to those of the US.
There's no such thing as cultural imperialism. It's an imaginary concept.
The people who don't want Facebook to impose its community standards outside the US usually just want to be able to impose their own community standards on others. They're no better than Facebook. Why should anyone care what they want?
Not to mention legal pressure, given that most of the world bans hate speech. (Japan is the only other country in the world that I've found which has a strong protection for free speech equivalent to the US, and they got it because the constitution was drafted after the US victory in WW2)
From Facebook's perspective, what is the alternative? _Not_ trying to define censorship guidelines - and consequently (as we saw in Myanmar) enabling genocide?
Seems like a pretty challenging task for anyone. Also, 1500 pages sounds like a good start. It would be nice if anyone who feels really strongly about the problems with this situation, and wants to help out in a productive manner, would read all of those pages and offer suggestions on how to improve them, maybe even upload some openly contributable attempt at a globally agreed-upon ethical core, somewhat like what the Geneva Conventions did for conventional warfare, but for discourse through digital platforms, which different networks could adopt and reference.
There is a lot of apologism for censorship in this thread, in sharp contrast to the threads on China and others, where they are demonized for the exact same thing.
This is troubling: when we do it, commentators are ready with a litany of justifications, in contrast to the blanket judgements in those other discussions. This kind of double standard is abusive of discourse and of the values sought to be defended, and such a forum cannot then host informed discussion.
HN is right-biased and quite nationalist. I'm always amused to realize that some people actually believe that Facebook's "moderation efforts" are a good thing, while WeChat's or VK's are not.
I mean, I get why an exec or a politician would have to say this in public, but come on, who actually buys this? Facebook is no less of a propaganda tool for an authoritarian elite than its Chinese or Russian counterparts. And even if you're a nationalist American, don't forget that Facebook isn't really protecting your own interests.
The Times seems to have one or more Facebook employees feeding it confidential material, in spite of Zuckerberg's threat to prosecute anyone caught doing so. There's a constant leak of documents/emails/etc.
The thing that troubles me the most is the hubris of Facebook to think that they can even attempt to define and enforce speech codes. It is their site, so they can do what they want, but I can also do what I want and not use it. Given Facebook's massive drop in stock price this year, it seems like their investors think Facebook is following the wrong path as well.
“...the hubris of facebook to think that they can even attempt to define and enforce speech codes?”
What about this is hubris? Wouldn't you argue they have a responsibility to try? Wouldn't it be hubris to build such a powerful platform and not think to care about minimizing the amount of damage malicious actors could do with it?
They should work with legislatures and law enforcement in their respective countries of operation to figure out what those countries (and their people) actually want them to do. Some countries have strong hate speech laws; some have strong freedom of speech protections. Ideally, this should be reflected in their communication platforms. One size doesn't fit all.
While I completely agree that one size doesn't fit all, you can't deny Facebook might run into some problems if the governments of the countries they are working with aren't exactly pushing for what Facebook considers forward-thinking morals, like individual freedom of speech, LGBTQ rights, or gender equality. Starting to work on rules with some governments and not with others sets a rough precedent and likely gets you banned from a country. I agree with some aspects of your statement, but there should be a larger body responsible for defining the base set of ethics for the digital world on a global scale, something more like the Geneva Convention. I think it's often unfair to all the people who work on these problems, and who truly care about finding solutions, to imply that there is an easy alternative they are somehow incredulously ignoring.
> you can’t deny Facebook might run into some problems if the governments of the countries they are working with aren’t exactly pushing for what Facebook considers forward-thinking morals, like individual freedom of speech, LGBTQ rights, or gender equality.
At that point, they have to decide if operating in a country that demands this sort of thing is morally acceptable or not.
Facebook thinks it can push its own moral standards for speech on users while at the same time repeatedly compromising the privacy and data of users.
And Facebook does this without a hint of recognition that such an imposition will be (and already is being) used as a political weapon against dissident voices.
I feel like you’re personifying “Facebook” itself a bit as a sole entity, when in reality it's composed of thousands of individual people, each with their own ethics and viewpoints. The people working on privacy aren't the same people working on suppressing hate speech, which I feel would be somewhat of a necessity for your point. Really, everyone at that company is doing their individual best at their respective responsibilities, with varying levels of success. But that isn't a coordinated, calculated effort beyond what each person brings to the table.
These global sites, which have now been told it's not acceptable to just throw up their hands and say "it's not our fault, we're just a platform!" are now trying to ham-handedly manage the content we've told them to manage. I don't know what the answer is, but I wouldn't call this hubris. I'd call it... reluctant compliance?
There are probably legal issues preventing them from doing so. I'm not super familiar with this area of law, but the snippets they shared probably fall under some analogue of fair use, while publishing the full documents would be a clear breach of trade-secret law.
This is what I was really hoping for as well. I'd like to read through those documents myself to see exactly what rules they have come up with. This way the NY Times stays the gatekeeper, and they will likely drip out a few more articles on this very topic.
Having a bunch of likely millennial-aged individuals with limited life experience, based in Menlo Park, have a say in discourse that crosses so many boundaries? What could possibly go wrong?
I suspect you are right. Your comment reminds me when Google and others were paying people to review sites on a contract basis for matching content to category. This was many moons ago before basically "mechanical turk" took off.
Life experience isn't just a result of accumulated years, as one dictionary summarizes it: "experience and knowledge gained through living". If it rains a lot, but I don't catch any of it, I still have no water, yet if it rains just a tiny bit, and I hold out a cup, I do have some water. Of course, if I hold out a cup and it doesn't rain, I have no water either. You have to live to be able to gather life experience, but just living isn't enough. The same goes for having children, they are a great opportunity for learning and reflection, but they don't necessarily cause it in all people in the same degree.
I was replying to the idea that a millennial is a young person. They aren't. If you've got other comments about life experience direct them at the original commenter.
> demographers and researchers typically use the early 1980s as starting birth years and the mid-1990s to early 2000s as ending birth years.
It mentions various ranges, roughly 1980 at the earliest and 1996 at the latest. A millennial according to that would today range from age 22 to age 38. That leaves us with "people under 40 with limited life experience".
They probably meant young people, sure, but the sentence makes sense either way, since under 40 is still kinda young to formulate rules for how to moderate political discussions anywhere on the globe. A person would have to study and travel and learn a lot, just about that, to have anything valuable to say about it at age 30. If you spend that time working at Facebook instead, to speak of "limited life experience" in this context is very kind either way.
The real kicker is that it's not even just about Facebook and their own limited life experiences though. From the article:
> Then the company outsources much of the actual post-by-post moderation to companies that enlist largely unskilled workers, many hired out of call centers.
The cool thing here is, since it's in the tree of their comment, the original commenter sees that, too.
My knee-jerk reaction is to write this off as ageism, but considering the social-movement trends of this demographic, it is probably a recipe for disaster.
To give an example, at a meetup hosted by a large game corp, some college children presented a project aimed at logging the speaking time of business-meeting participants based on superficial details (primarily sex, race), with the goal of equalizing the amount of speaking time based on such. The fact that my objection to such will be perceived as sexism/racism is my other objection.
This is exactly why Silicon Valley engineers, and engineers generally, need to stop being so dismissive of people with "useless" liberal arts educations, and instead bring in sociologists and anthropologists to think through these processes in a comprehensive way.
I'm being downvoted into oblivion elsewhere on this site for suggesting this, but I feel like it's a very obvious move for tech companies that have this problem.
I have no problem with anthropologists and liberal-arts graduates being employed when they have skills that help a company. In the first dot-com bubble, many humanities students were hired to work with content. That wasn't a bad thing.
As I said in another post, companies also actually used to hire people outside the tech bubble to deal with things like categorization of websites, etc. That went away once they could achieve a certain percentage with cheap labor.
Classical liberal arts educations are tremendously valuable in teaching people how to think critically. However when we're dealing with complex emergent systems no one is capable of reliably predicting problems regardless of how much education in sociology and anthropology they have. Active monitoring and fast responses are the only viable approach.
But what about seriously brainstorming some hypothetical scenarios as part of the process?
For example, documenting some sort of 'ideal' or 'pro/con' list of potential fallout when architecting a system, or in design/feature proposals.
Surely folks less in the weeds of technical requirements and actual implementation could help decide earlier on if something is 'good'.
Brainstorming hypothetical scenarios usually results in endless discussions about imaginary problems that never come to pass, and completely missing the real problems. This tends to happen regardless of who participates.
Agreed. I guess: why not have people who are better at it, or who would rather spend time on it, do it? They would probably miss most of the real problems, but could catch a few and help steer complex issues. I guess I'm thinking about it as applying the type of mission that OpenAI has[1] on a wider/smaller scale.
The OpenAI people mean well, but their concerns over safety seem a bit silly considering they haven't actually demonstrated any progress toward building a true AGI. At this stage it's the equivalent of worrying about the problems that would be caused by an alien invasion: an interesting intellectual exercise, but useless in practical terms.
Ironically it’s these graduates with the strongest opinions on what people should and shouldn’t be allowed to say. It’s not mechanical engineering grads freaking out on Twitter over cultural appropriation, tone policing, etc. They are the last people you would want to entrust with this power.
But the actual relationship between nerds and liberal arts people is more like https://www.xkcd.com/743/ . I can see how the argument you're making would play well in some communities, but HN is not the right audience.
Noble aspirations (including those of the founding fathers of the US) that rational dialogue is the answer and not censorship need to be reinterpreted in a Facebook-world where dialogue is largely impossible. It demands a new citizenship, which I believe requires us to eschew platforms like Facebook, not because we want to turn back the clock, but because by their nature they are incapable of supporting true social dialogue. It isn’t a question of censorship, it is a personal question of ethical standards.
I would just shut down in these hard-to-govern, hard-to-monetize regions/countries. I know it's a big step back from the "connecting the world" mantra, but it solves the problem.
I'll save you the click. tl;dr: Facebook is trying to solve an unsolvable problem. They've made some mistakes, as 2 billion people have a lot of different ideas about what constitutes a good or bad decision, but no one has anything better to offer, including the author of the article.