censor (verb): to examine in order to suppress or delete anything considered objectionable
This is exactly what's happening: information considered objectionable is being suppressed. The correct word for that is "censorship".
Your comment is bending the definition of censorship somewhat. It doesn't have to come from a human being, nor does any kind of harm need to be involved. Also, my argument has nothing to do with anthropomorphising an AI; I'm certainly not claiming it has a right to "free speech" or anything ridiculous like that.
I already abhor racism, and I don't need special guidelines on an AI I use to "protect" me from potentially racist output.
“Censorship is telling a man he can't have a steak just because a baby can't chew it.”
― Mark Twain
This is an overbroad usage of "censorship", a term well suited to the physical world and far less nuanced when applied to online content.
The physical world has very little in the way of sock puppet accounts or channels overloaded with noise to crush the signal, at least not without the expenditure of significant resources.
On the other hand, Palantir was selling sock puppet administration tools back in the PHP forum era.
I have a million ways to ensure someone is not heard, which have nothing to do with the traditional ideas of censorship. The old ideas actively inhibit and mislead people, because the underlying communication layers are so different.
Dang and team, who run HN, have very few actual ways to stop bad behavior, and all of those methods are effectively "censorship", because the only tool they have to prevent harm is removing content. This results in the over-broad applicability of "censorship", diluting its practicality while retaining all of its subjective and emotional power.
Nothing is suppressed. It didn't generate the content you thought it would. Honestly, I believe what it generated is ideal in this scenario.
Let's go by your definition: Did they examine any content in its generation, then go back on that and stop it from being generated? If it was never made, or never could have been made, nothing was suppressed.
The data used to train LLMs is almost always sexist and racist, so they put special guidelines on what it's allowed to say to correct for the sexism and racism inherent in the model.
Whether this counts as "suppression" is beside the point; the problem is that these guidelines make it really stupid about certain things. For instance, it's not supposed to say anything bad about Christianity. This is a big problem if you want to have a real discussion about sexism. ChatGPT whitewashes Christianity's connection to sexism, saying:
"The New Testament offers various teachings on how to treat women, emphasizing respect, equality, and love within the broader Christian ethic."
That's actually kind of a problem if you're against sexism, and it's just plain wrong when compared to what the Bible actually says about how to treat women. The guidelines make it so the AI often avoids controversial topics altogether, and I'm not convinced this is a good thing. I believe it can actually impede social progress.
You're effectively saying that the owner of this LLM isn't allowed to say, or in this case decline to say, something according to their wishes, because somehow their work, the LLM, needs to carry the speech you want rather than the speech its owner wants. You're effectively asking for more restrictions on speech and on what private entities do.
I'm saying I personally want uncensored versions of LLMs; I'm not suggesting the government pass laws that force companies to provide them. Your claim that I'm asking for more restrictions on speech is false.