Why wouldn't it follow? Human intelligence evolved in the real world, with all its vast information content. Deep learning systems are only trained on a few terabytes of data of a single type (images, text, sound, etc.). Even if they can be trained faster than the rate at which animals evolved, their training data is so poor compared to the "data" that "trained" animal intelligence that we'll be lucky if we arrive at anything comparable to animal intelligence by deep learning in a billion years.
One can rationally argue either way over the speculative proposition that reinforcement learning will yield AI in less than a few million years, but that it took evolution half a billion years is hardly conclusive, and certainly not grounds for stopping work.
Not grounds for stopping work[1], but perhaps grounds to explore other avenues[2] to see if something else might yield faster results.
I’m no expert, but my personal opinion is that AGI will probably be some hybrid approach that mixes reinforcement learning with other techniques. At the very least, I think an AGI will need to exist in an interactive environment rather than just be trained on preset datasets. Prior context or not, a child doesn’t learn by being shown a lot of images; it learns by being able to poke at the world to see what happens. I think an AGI will likely require some aspect of that (and apply reinforcement learning that way).
But like I said, I’m no expert and that’s just my layperson opinion.
[1] if the goal is AGI; if it’s not, then of course there’s no reason to stop
Fair enough, though I do not think the evidence from evolution moves the needle much with respect to the timeline. For one thing, evolution was not dedicated to the achievement of intelligence.
Well, if it follows, then it follows necessarily. But maybe that's just a déformation professionnelle? I spend a lot of time working with automated theorem proving, where there are no ifs or buts about conclusions following from premises.
No, I am simply responding to your rather formal point, in kind. Unless you are arguing that it is an established fact that the time evolution took to produce intelligent life rules out any form of reinforcement learning producing AI in any remotely reasonable period of time, that original point of yours does not seem to be going anywhere.
In your work on theorem proving, am I right in guessing that there are no 'ifs' or 'buts' because the truth of premises is not an issue? In the "evolution argument", the premises/lemmas are not just that evolution took a long time, but also something along the lines of significant speedup not being possible.
You might notice that in another comment, I suggested that we might still be in the AI Cambrian. I'm not being inconsistent, as no-one knows for sure one way or the other.
I didn't make a formal point: my comment is a comment on an internet message board, where it's very unlikely to find formal arguments being made. But perhaps we do not agree on what constitutes a "(rather) formal point"? I made a point in informal language and in a casual manner, as part of an informal discussion ... on Hacker News. We are not going to prove or disprove any theorems here.
But, to be sure, as is common when this kind of informal conversation suddenly sprouts semi-formal language, like "argument", "claim", "proof", "necessarily follows", etc., I am not even sure what exactly it is we are arguing about anymore. What exactly is your disagreement with my comment? Could you please explain?
"Necessarily" has general usage as well, you know... why would you read it otherwise, especially given the reasonable observation you make about this site? And my original point is not actually wrong, either: whether reinforcement learning will proceed at the pace of evolution is a topic of speculation - it is possible that it will, and possible that it will not.
Insofar as I have an issue with your comment, it is that it is not going anywhere, as I explained in my previous post.
>> Insofar as I have an issue with your comment, it is that it is not going anywhere, as I explained in my previous post.
I see this god-moding of my comment as a pretend-polite way to tell me I'm talking nonsense, one that seems designed to avoid criticism for being rude to one's interlocutor on a site that has strong norms against that sort of thing, but without really trying to understand why those norms exist, i.e. because they make for more productive conversations and less wasting of everyone's time.
You made a comment saying that unless I claim that X (which you came up with), then my comment is not going anywhere. The intellectually courteous and honest response to a comment with which one does not agree is to try to understand the reasoning of the comment, not to claim that there is only one possible explanation and that therefore the comment must be wrong. That is just a straw man in sheep's clothing.
And this is not surprising, given that it comes on the heels of nitpicking about supposedly important terminology (necessarily!). This is how discussions like this one very often go. And that's why they should be avoided: they just waste everyone's time.
"Necessarily", when read according to your own expectations for this forum, made an important difference to my original post (without it, I would have been insisting that the issue is settled already), so it was reasonable for me to point out its removal. The nitpicking over it began with your response to me doing so, and you have kept it going by taking the worst possible reading of what I write. This is, indeed, how things sometimes go.
Meanwhile, in a branching thread, I had a short discussion with the author of the post I originally replied to, in which I agreed with the points he made there. Both of us, I think, clarified our positions and reached common ground. That is how it is supposed to go.
I did not set out to pick a fight with you, and if I had anticipated how you would take my words, I would have phrased things more clearly.