Yup. One of the things that's going to become starkly obvious is this: being in ownership of the money machine doesn't entitle you to power and compensation matching the output of that machine.
Feedback loops set in. If you need X power and Y resources to get the machine going, but it pays out enough for 2X power and 5Y resources, the first person to get it cranking becomes a human version of that AI paperclip maximizer.
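To make the compounding concrete, here's a toy sketch of that loop (all numbers invented, just following the 2X/5Y multipliers above):

```python
# Toy model of the runaway feedback loop described above.
# The machine costs `power` and `resources` to run one cycle,
# and pays back 2x the power and 5x the resources invested.

def run_cycles(power: float, resources: float, cycles: int) -> None:
    for cycle in range(1, cycles + 1):
        # Reinvest everything: this cycle's payout is next cycle's stake.
        power *= 2
        resources *= 5
        print(f"cycle {cycle}: power={power:,.0f}  resources={resources:,.0f}")

# Start with just enough to get the machine cranking once.
run_cycles(power=1, resources=1, cycles=10)
# After 10 cycles: power = 2**10 = 1,024x the stake,
# resources = 5**10 = 9,765,625x the stake.
# Nothing in the loop ever tells the operator to stop --
# that's the paperclip-maximizer shape of the problem.
```

Ten cycles and the resource side alone is nearly seven orders of magnitude past the starting stake, with no brake anywhere in the loop.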
And this has obviously already happened…
So now it's just a matter of, how obvious does this need to get before all the world is paperclips with one idiot sitting there, in his limited human perception, thinking he's won.
It really is Star Trek Future or bust. We don't have the luxury of steampunk attitudes towards power and societal structure anymore. Machine assistance and its multiplier effect are too big, and the runaway feedback effects are too obvious.
Do we have to wait until it's no longer Elon Musk or Jeff Bezos or Mark Zuckerberg, who are at least very aggressive ambitious idiots paying high prices for their goals, and instead it's some feckless Sam Bankman-Fried who ends up holding the bag? What's it going to take to illustrate the scale of the problem?
Morally, even comparing orders of magnitude, I don't really see the difference between any of your examples. They're all structurally lacking in empathy, and every one of them would sell their own mother if it got them ahead in the game.
I don't disagree, but there's a profound difference in effort (most notably seen in Bezos vs. Bankman-Fried). This changes the result, and I think it also significantly changes the framing.
Bezos started in the days of Windows 3.1 and was already positioning himself near Microsoft as a force multiplier, even though, in those days, he and other humans had to do vast amounts of the work and thinking.
If the only requirement for dominating the world is adjacency to AI, the game plan is still exactly the same, but now the work and thinking are what you turn over to machines as a force multiplier. Without that, where is the justification for producing individuals who dominate the world? It becomes not arbitrary, but purely a function of who is adjacent to the AI at the right time.
It's how France historically tried to deal with too much inequality, except it was a complete failure: after staggering levels of evil, bloodshed, and oppression, they simply ended up with a new elite that had absolute power. No inequality was solved.
Other countries didn't go down that road and none regretted it.
Yet it's amazing how quickly some will still exploit any new technology or change to justify reaching for violence. Some never learn.
You'd best have a good implementation, or they'll be using ChatGPT to aim your guillotines at their business rivals, further consolidating their power.
This is the challenge. Anything we've got, people like this have tenfold, or a thousandfold, or a millionfold, as long as we're still running on 18th- and 19th-century societies.
Mind that you're not literally being steered by the billionaires to accomplish their ends, because that has been the history for the last decade or two. The only thing AI brings to the equation is, perhaps, making the process more obvious and providing a toy version of it that anyone can play with.
Before that, we played with the zeitgeist through marketing, big business… and politics. The only difference is that now we can use it to draw pictures or have it talk back to us like it was a person. The billionaires have been 'prompt engineers' for as long as I've been alive.
The "aiming problem" for me isn't really an issue. The goal must be that no one can have too much power over others, no matter why or how. So yes, the first victims might be mis-aimed targets, but it doesn't end there.
This might sound totally absurd when you first hear it, but I am fully in favor of randomized rule for short, limited terms! Lottery style, tweaked a bit to ensure good entropy.
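For what it's worth, the "good entropy" part is the easy bit. A minimal sketch, with hypothetical names and numbers, drawing from the OS entropy pool rather than a seedable PRNG so no one can predict or bias the draw:

```python
import secrets

# Hypothetical sketch of lottery-style selection for short, fixed terms.
# secrets.SystemRandom draws from the OS entropy source, unlike the
# default random module, whose seeded state could be predicted or rigged.

def draw_council(eligible: list[str], seats: int) -> list[str]:
    rng = secrets.SystemRandom()
    # sample() picks `seats` distinct people, each equally likely.
    return rng.sample(eligible, seats)

citizens = [f"citizen-{i}" for i in range(1_000_000)]  # placeholder roll
council = draw_council(citizens, seats=9)  # serves one limited term
print(council)
```

The hard parts are social, not cryptographic: who counts as eligible, how long a term lasts, and who verifies that the published draw is the one that actually ran.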