My point was that RL/DL is being wielded as one massive hammer for every nail, while cognition requires different, specialized, energy-efficient tools.
> consciousness
All talk about this is premature, "pre-science", until we figure out more basic, fundamental things: object storage and recall from memory, object recognition from sensory input, concept representation and formation, the exact mechanism of "chunking" [1], "translational invariance" [2], generalization along concept hierarchies and across scales, representation of causal structures, proximity search and heuristics, an innate coordinate system, an innate "grammar".
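To make "chunking" concrete, here's a toy Python sketch (my own illustration, not taken from any cited model): a long sequence gets re-coded into a few familiar units so it fits a small working-memory store. The lexicon and the 4-slot capacity are hypothetical placeholders.

    # Illustrative toy only: "chunking" as re-coding a long sequence into
    # a few familiar units so it fits a small working-memory store. The
    # lexicon and the 4-slot capacity are hypothetical placeholders.
    WORKING_MEMORY_SLOTS = 4

    lexicon = {("1", "9", "8", "4"): "1984", ("4", "0", "4"): "404"}

    def chunk(sequence):
        """Greedily replace known sub-sequences with single chunk symbols."""
        out, i = [], 0
        while i < len(sequence):
            for pattern, name in lexicon.items():
                if tuple(sequence[i:i + len(pattern)]) == pattern:
                    out.append(name)        # a familiar chunk fills one slot
                    i += len(pattern)
                    break
            else:
                out.append(sequence[i])     # unfamiliar item: one slot each
                i += 1
        return out

    digits = list("19844041984")            # 11 raw items
    chunks = chunk(digits)                  # -> ['1984', '404', '1984']
    print(chunks, "fits:", len(chunks) <= WORKING_MEMORY_SLOTS)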
Even a working, biologically plausible model of 3D-space navigation in mice, one that doesn't burn a ton of energy in training, would be a good first step. In fact, there is evidence that navigational capacity [3] is the basis of more abstract forms of thinking.
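For reference, the grid cells in [3] are often idealized as units whose firing is hexagonally periodic over 2D position. A toy rendering of that idealization (my own sketch with arbitrary scale and phase, not a biological model):

    # Minimal sketch, not a biological model: one idealized grid cell whose
    # firing is hexagonally periodic over 2D position, built from three
    # cosine waves at 60-degree offsets (a standard idealization in the
    # grid-cell literature). Scale and phase are arbitrary choices here.
    import numpy as np

    def grid_cell_rate(pos, scale=1.0, phase=np.zeros(2)):
        """Firing rate in [0, 1] of one idealized grid cell at 2D pos."""
        angles = np.deg2rad([0.0, 60.0, 120.0])
        # three wave vectors spaced 60 degrees apart
        ks = (4 * np.pi / (np.sqrt(3) * scale)) * np.stack(
            [np.cos(angles), np.sin(angles)], axis=1)
        rate = np.cos(ks @ (pos - phase)).sum()   # sum of three plane waves
        return (rate + 1.5) / 4.5                 # rescale [-1.5, 3] to [0, 1]

    print(grid_cell_rate(np.array([0.0, 0.0])))   # lattice vertex -> 1.0
    print(grid_cell_rate(np.array([0.3, 0.2])))   # off-peak -> lower rate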
On all of these we have decades' worth of research and widely published, fundamental, Nobel-winning discoveries, which are almost completely ignored by an AI field stuck in its comfort zone. Saying "we have no idea" is just lazy.
Edit: As for OP's actual paper, I think something like complex-valued RL [4] might bypass his main claims entirely. But my point stands: RL itself is a dead end that trivializes the problem at hand.
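To sketch what I mean: the papers in [4] roughly make Q-values complex so the phase can act as an internal clock, separating aliased observations that occur at different times. A toy rendering of that idea (my simplification; the rotation rate and update rule below are assumptions, not a faithful copy of any specific paper):

    # Hedged toy of the idea in [4]: complex Q-values whose phase acts as
    # an internal clock, disambiguating aliased observations visited at
    # different times. beta's rotation rate and the update rule are my
    # simplifying assumptions.
    import cmath
    from collections import defaultdict

    alpha, gamma = 0.1, 0.9
    beta = cmath.exp(1j * 0.5)        # unit complex number: phase step per tick
    Q = defaultdict(complex)          # (observation, action) -> complex value

    def act(obs, actions, t):
        """Pick the action whose Q-value best aligns with the clock phase."""
        ref = beta ** t               # internal reference phase at time t
        return max(actions, key=lambda a: (Q[obs, a] * ref.conjugate()).real)

    def update(obs, a, reward, next_obs, actions, t):
        """TD update; conj(beta) rotates the bootstrap term back one tick."""
        best_next = max((Q[next_obs, b] for b in actions), key=abs, default=0j)
        target = reward * beta ** t + gamma * beta.conjugate() * best_next
        Q[obs, a] += alpha * (target - Q[obs, a])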
[1] https://en.wikipedia.org/wiki/Chunking_(psychology)
[2] http://www.moreisdifferent.com/2017/09/hinton-whats-wrong-wi...
[3] http://www.scholarpedia.org/article/Grid_cells
[4] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22c...