Or the reverse: today's AI research is missing large components of what would be necessary to achieve sapience. A conversation with Gerald Sussman half a decade ago had a big influence on me in this area:
https://dustycloud.org/blog/sussman-on-ai/
I later had some further conversations with Sussman and some other old-school AI researchers from MIT; the shortest summary of their comments would be: "We knew that neural nets could do this kind of thing; we just didn't have the computing power to do it yet. But an artificial intelligence system that can't explain why it's doing what it's doing doesn't seem very intelligent." Sussman and his students' work on propagators provides a very interesting alternative direction, one where explanations are a key part.

And yes, it's true: humans also construct imperfect versions of their own thinking. That's because these systems are combined: the fast, gut-feel, neural-network-ish systems and the slower symbolic-reasoning systems associated with language. Probably the right design combines both.
A view into how this might work can be found by first reading Alexey Radul's dissertation on propagators: https://dspace.mit.edu/handle/1721.1/49525
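To make the propagator idea concrete, here is a minimal Python sketch (my own illustration, not Radul's actual implementation, and all names are hypothetical): cells accumulate information, propagators push derived values into cells, and each cell records the provenance of its value so the network can retroactively explain how an answer was reached.

```python
# Hypothetical minimal propagator network with provenance tracking.
# Cells hold values; propagators recompute outputs when inputs arrive;
# provenance links let us walk back and explain any derived value.

class Cell:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.provenance = None   # (rule_name, input_cells) that set this cell
        self.listeners = []      # propagators to re-run when content arrives

    def add_content(self, value, provenance=None):
        if self.value is None:
            self.value = value
            self.provenance = provenance
            for run in self.listeners:
                run()
        elif self.value != value:
            # Real propagator systems merge partial information;
            # here we just flag outright contradictions.
            raise ValueError(f"contradiction in {self.name}")

def propagator(rule_name, fn, inputs, output):
    """Wire fn from input cells to an output cell, recording provenance."""
    def run():
        if all(c.value is not None for c in inputs):
            output.add_content(fn(*[c.value for c in inputs]),
                               provenance=(rule_name, inputs))
    for c in inputs:
        c.listeners.append(run)
    run()  # fire immediately in case inputs are already filled

def explain(cell):
    """Walk provenance links to produce a retroactive explanation."""
    if cell.provenance is None:
        return f"{cell.name} = {cell.value} (given)"
    rule, inputs = cell.provenance
    lines = [explain(c) for c in inputs]
    lines.append(f"{cell.name} = {cell.value} (by {rule})")
    return "\n".join(lines)

# A tiny two-cell network: Fahrenheit to Celsius.
f, c = Cell("fahrenheit"), Cell("celsius")
propagator("f->c", lambda v: (v - 32) * 5 / 9, [f], c)
f.add_content(212)
print(explain(c))
```

The point of the sketch is the `explain` walk: because every derived value carries a record of which rule produced it and from which cells, the network can answer "why do you believe this?" after the fact, which is exactly the property the comment above finds missing in plain neural nets.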
Then, building on that, Leilani Gilpin's dissertation shows how propagators can be used to analyze neural network systems and construct retroactive explanations for the decisions they made: https://groups.csail.mit.edu/mac/users/gjs/lgilpin-PhD-EECS-...