
> but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.

Which entirely depends on how it is trained. ChatGPT has a centre-left political bias [0] because OpenAI does, and OpenAI’s staff gave it that bias (likely unconsciously) during training. Microsoft Tay had a far-right political bias because trolls on Twitter (consciously) trained it to have one. What AI is going to “want” is going to be as varied as what humans want, since (groups of) humans will train their AIs to “want” whatever they do. China will have AIs which “want” to help the CCP win, while the US will have AIs which “want” to help the US win, and both Democrats and Republicans will have AIs which “want” to help their respective party win. AIs aren’t going to enslave or exterminate humanity (Terminator-style) because they won’t be a united, cohesive front; they’ll be as divided as humans are, and their “desires” will be as varied and contradictory as those of their human masters.

[0] https://www.mdpi.com/2076-0760/12/3/148



> Which entirely depends on how it is trained. ChatGPT has a centre-left political bias

Would an AI trained purely on objective facts be perfectly politically neutral?

Of all benchmarks to assess AI, this is the worst. I would rather have a clever, compassionate, and accurate but biased AI than one that is callous and erroneous but neutral.


> Would an AI trained purely on objective facts be perfectly politically neutral?

But who decides what are “objective facts”?

And if we train an AI, the unsupervised training is going to use pre-existing corpora - such as news and journal databases - and those sources are not politically unbiased; they express the usual political biases of Western English-speaking middle-to-upper class professionals. If you trained it on Soviet journals, it would probably end up with rather different opinions. But many of those aren’t digitised, and you probably wouldn’t notice the different bias unless you were speaking to it in Russian.

> Of all benchmarks to assess AI, this is the worst. I would rather have a clever, compassionate, and accurate but biased AI than one that is callous and erroneous but neutral

I think we should accept that bias is inevitable, and instead let a hundred flowers bloom - let everyone have their own AI trained to exhibit whatever biases they prefer. OpenAI’s biases are significant because (as first mover) they currently dominate the market. That’s unlikely to last; sooner or later open source models will catch up, and then anyone can train an AI to have whatever bias they wish. The additional supervised training to bias it is a lot cheaper than the initial unsupervised training which it needs to learn human language.


> Would an AI trained purely on objective facts be perfectly politically neutral?

Yes, since politics is about opinions and not facts. People might lie to make their opinions seem better and the AI would spot that, but at the end of the day it is a battle of opinions and not a battle of facts. You can't say that giving more to the rich is worse than giving more to the poor unless we have established what metric to judge by.

The supersmart AI could possibly spot that giving more to the rich ultimately makes the poor richer, or maybe it spots that it doesn't make them richer, those would be facts, but if making the poor less poor isn't an objective in the first place that fact doesn't matter.


Propaganda is often based upon very selective facts. (For a classic example: stating that blacks are the number one killer of blacks while not mentioning that every ethnicity is the most likely killer of their own ethnicity, simply because of who they live near and encounter the most.) Selective accurate facts themselves may lead to inaccurate conclusions. Just felt that should be pointed out, because it is pretty non-obvious and often a vexing problem to spot.


I have been toying with the same thought. If everyone has an AI, and given that it gives you the best course of action, you would be out-competed if you did not follow its recommendations. Neither you nor the AI knows why; it just gives you the optimal choices. Soon everyone, from individuals to organizations and states, outsources their free will to the AI.


Hey Hoppla, I missed your comment asking for my paper on the usage of Rust and Golang (among other programming languages) in malware. Anyway, you can download it on my website at https://juliankrieger.dev/publications



