Here's the thing with AI: as it becomes more AGI-like, it will encompass all human behaviors. That means the bad behaviors will become especially noticeable, since bad actors quickly realized this is a force multiplier for them.
This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks, and they may not be immediately evident. I know few expected the information superhighway to turn into a dopamine drip feed for advertising dollars, yet here we are.
> bad actors quickly realized this is a force multiplication factor for them
You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, thereby ushering in a couple decades of spam, only eventually solved by centralization (Google).
Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.
AI doesn't encompass any "human behaviours"; the humans controlling it do. Grok doesn't generate nude pictures of women because it wants to; it does it because people tell it to, and it has (or had) no instructions to the contrary.
If it can generate porn, it can only do so because it was explicitly trained on porn. Therefore the system was designed to generate porn. It can't just materialize a naked body without having seen millions of them; these models do not work that way.
I hate to be a smartass, but do you read the stuff you type out?
>Grok doesn't generate nude pictures of women because it wants to,
I don't generate chunks of code because I want to. I do it because that's how I get paid, and I like to eat.
What's interesting about LLMs is that they behave more like humans than any other software. First, you can't tell non-AI software (not just non-genAI software) to generate a picture of a naked woman; it doesn't have that capability. Then you have models that are trained on content such as naked people. That's something humans are trained on too, unless we're blind, I guess. If you take a data set encompassing all human behaviors, which we do, then the model will have human-like behaviors.
It's in post-training that we add instructions to the contrary. Much like if you live in America, you're taught that seeing naked people is worse than murdering someone, and that if someone creates a naked picture of you, your soul has been stolen. With those cultural biases programmed into you, you will find it hard to do things like paint a picture of a naked person as art. This would be OpenAI's models. And if you're a person who wanted to rebel, or lived in a culture that accepted nudity, then you wouldn't have a problem with it.
How many things do you do because society programmed you that way, and you're unable to think outside that programming?