I'm not sure what your link has to do with your quote… Anyway, this blog post is not quite right.

While I agree that capitalism is a more pressing problem than AI right now, it won't kill us all in five minutes. A self-improving AI… we won't even see it coming. There is also far more brainpower dedicated to "fixing" industrial capitalism than to addressing existential risks such as AI. And industrial capitalism doesn't need fixing, it needs to be abolished altogether.

Corporations are even less autonomous than the author thinks. Sure, kill a CEO, and some other shark will take their place. On the other hand, those sharks are all from the same families. Power is still hereditary.

If people were truly informed about how the current system works, it would collapse in minutes. To take only one example, Fractional Reserve Banking is such a fraud that if everyone suddenly understood it, there would be some serious "civil unrest", to put it mildly.
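For anyone who hasn't seen the mechanics spelled out, here's a rough sketch of the textbook money-multiplier arithmetic (the 10% reserve ratio is just an illustrative figure, not a claim about any particular banking system):

    # Rough sketch of the textbook money-multiplier arithmetic behind
    # fractional reserve banking. The 10% reserve ratio is purely
    # illustrative; real requirements vary by country and era.

    def total_broad_money(initial_deposit, reserve_ratio, rounds=100):
        """Sum the deposits created as banks repeatedly re-lend the
        non-reserved fraction of each new deposit."""
        total, deposit = 0.0, initial_deposit
        for _ in range(rounds):
            total += deposit
            deposit *= (1 - reserve_ratio)  # lent out, then re-deposited
        return total

    print(total_broad_money(100, 0.10))  # approaches 1000.0
    print(100 / 0.10)                    # the closed-form limit: deposit / ratio

Each deposit is re-lent minus reserves and re-deposited, so $100 of base money ends up supporting roughly $1,000 of deposits at a 10% requirement.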

The same does not apply to an AI. It's just too powerful. Picture how much smarter we are than chimps. Now take an army of chimps and a small tribe of cavemen (and women), each somehow bent on exterminating the other. The chimps don't stand a chance if the humans have any time to prepare. The humans have fire, sticks, lances… Their telepathy has unmatched accuracy (you know, speech). And they can predict the future far better than the chimps can. Now picture how much more intelligent than us an AI would be.

It's way worse.

---

Now, this new-agey talk about information taking on a life of its own… It doesn't work that way. Sure, there's an optimization process at work, and it is not any particular human brain. But this optimization process is nowhere near as dangerous as a fully recursive one (that is, an optimization process that optimizes itself). And for that to happen, we need to clear a few mathematical hurdles first, like the one posed by Löb's theorem.
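For reference, here is the theorem in question, written in LaTeX notation (the formulation is the standard one for a theory like Peano Arithmetic that can formalize its own provability; nothing here is specific to AI):

    % Löb's theorem, for a theory T (e.g. Peano Arithmetic) with a
    % provability predicate \Box:
    %
    %   external form:  if  T \vdash \Box P \to P,  then  T \vdash P
    %
    % internalized as a single schema, provable in T itself:
    \Box(\Box P \to P) \to \Box P

Roughly: a theory can only prove "if P is provable then P holds" for statements P it can already prove outright, so an agent reasoning in such a theory cannot, in general, trust proofs produced by its own (or a rewritten successor's) deduction machinery.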

But that's not the hard part. The hard part is figuring out what goals we should program into the AI. Not only do we need to pin them down with mathematical precision, we don't even know what humanity wants. We don't even know what "what humanity wants" even means. Hell, we don't even know if it has any meaning at all. Well, we're not completely blind: we have intuitions, and a roughly shared sense of morality. But there's still a long road ahead.



The connection between the Hostile AI link and the McKenna quote is this: the informational barrier between humans, institutions, and technology is highly permeable, and it creates a perfect petri dish for natural selection among informational life forms (you can model them as "memes", although the analogy to genes isn't a perfect one).

Yes, it breeds far less rapidly than a Kurzweilian AGI, and one day we will face that music for better or worse. But what I'm driving at is that this will not come as a singular moment when SkyNet gets the switch flipped; it will be a gradual evolution from the pre-existing emergent intelligence of the "human + institution + technology" informational network. (Even if you had a day when you flipped the switch on an infinitely accelerating AI, that life form would still inherit the legacy data of humans and their institutions, which would inevitably shape its consciousness, infecting it with any memes sticky enough to cross the barrier.)

See also: the coming wave of Distributed Autonomous Corporations. http://www.economist.com/blogs/babbage/2014/01/computer-corp...

> On the other hand, those sharks are all from the same families. Power is still hereditary.

Too true. Just because new evolutionary cycles are playing out at higher layers of abstraction doesn't mean the old ones disappear.



