
"Assuming AGI agents are Turing computable, no individual AGI can possibly comprehend codes for all computable ordinals, because the set of codes of computable ordinals is badly non-computably-enumerable."

I was going to criticize this paper as crankery in the vein of Penrose, but first I thought I'd just compute all possible ordinals in my brain to make sure I'm a general intelligence.

brb.



Hi, thanks for looking at my paper. If you're interested in the relation between Lucas-Penrose arguments and the enumerability of ordinal codes, you might like I. J. Good (1969), "Gödel's Theorem is a Red Herring" (2 pages). Can you elaborate on what it is about my paper that strikes you as crankery? I'm a fan of yours, so it would be much appreciated.


Sadly, I was only able to count through a vanishingly small subset of the reals in a finite time, and therefore am not a general intelligence, and so it would be foolish of me, a machine made of a handful of atoms, to try to criticize this paper. It sure would be nice if I could appreciate music, but you've proven that's impossible, so it is what it is.


Whether man is machine can't be trivially answered by a few handwavy applications of Gödel's theorem or by observations about the structure of the reals. My paper makes no attempt to weigh in on that question; it neither claims nor implies that humans have any supernatural power to enumerate reals or ordinals, and you grossly misrepresent it by implying it does.

Rather, my paper is on the less ambitious question of whether the traditional RL model (with its real-valued rewards) accurately captures the full set of reward-giving environments an AGI should be capable of comprehending.
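To make the real-valued vs. more-general-rewards distinction concrete, here's a toy sketch (not from the paper; the encoding and function name are my own invention): rewards drawn from ordinals below ω², encoded as pairs (a, b) meaning ω·a + b. Python's built-in tuple comparison is lexicographic, which coincides with the ordinal order under this encoding, so "pick the best reward" still makes sense even though the rewards aren't reals.

```python
# Toy illustration: ordinal-valued rewards below omega^2, encoded as
# pairs (a, b) standing for omega*a + b. Tuple comparison in Python is
# lexicographic, which matches the ordinal order for this encoding.

def best_ordinal_reward(rewards):
    """Return the largest reward under the ordinal (lexicographic) order."""
    return max(rewards)

# omega + 3 exceeds 5, i.e. (1, 3) > (0, 5), even though 3 < 5 as naturals.
print(best_ordinal_reward([(0, 5), (1, 3)]))  # -> (1, 3)
```

The point of the sketch is only that the comparison structure of such rewards differs from that of the reals (ω + 3 beats every natural number); nothing here depends on the paper's actual formalism.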


None, one, many, all.

Hah. Still got it.




