
It's also very probable that the inner verbalization most people do is just that - a verbalization of the actual underlying thought process, not the thought process itself. That is what much of current cognitive linguistics points to, as far as I understand.

(Also a reason why I'm very sceptical that the current LLM approach will eventually lead to AGI, BTW)



I think you're probably right that the verbalization is the 'interface layer', but why does that mean LLMs can't approach AGI? They also only use words as an 'interface' layer; the underlying weights and activations are vectors in an abstract space.
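
A minimal sketch of that point (plain NumPy, with a made-up toy vocabulary and dimensions): words exist only at the input/output boundary, and everything in between is arithmetic on vectors.

    import numpy as np

    # Toy vocabulary: words appear only at the boundary.
    vocab = ["the", "cat", "sat"]
    token_ids = np.array([vocab.index(w) for w in ["the", "cat", "sat"]])

    rng = np.random.default_rng(0)
    d_model = 8                                         # tiny hidden size for illustration
    embedding = rng.normal(size=(len(vocab), d_model))  # lookup table: token id -> vector

    # From here on the model only sees vectors, never words.
    hidden = embedding[token_ids]                       # shape (3, d_model)
    W = rng.normal(size=(d_model, d_model))
    hidden = np.tanh(hidden @ W)                        # stand-in for the transformer layers

    # Back to words only at the very end: project onto the vocabulary.
    logits = hidden[-1] @ embedding.T
    print("next-token guess:", vocab[int(np.argmax(logits))])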


With humans, it has been shown that the reasoning processes for different aspects of human behavior are completely distinct from verbalization: they work fully autonomously, and the (inner) verbalization comes afterwards.

For LLMs, the tokens (i.e. words) are what the weights are trained on, as there is no other input into them.
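
Rough sketch of what that means in practice (toy NumPy model, all names and sizes invented): the only supervision signal is the next token of the text itself, there is no other input the weights are ever fit to.

    import numpy as np

    # Every "label" is just the next token of the same sequence.
    token_ids = np.array([0, 1, 2, 3])            # some tokenized sentence
    inputs, targets = token_ids[:-1], token_ids[1:]

    rng = np.random.default_rng(0)
    vocab_size, d_model = 4, 8
    embedding = rng.normal(size=(vocab_size, d_model))

    hidden = embedding[inputs]                    # model internals: vectors only
    logits = hidden @ embedding.T                 # distribution over the vocabulary
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Cross-entropy against the next token; this loss (over text alone)
    # is the only thing the weights are optimized for.
    loss = -np.log(probs[np.arange(len(targets)), targets]).mean()
    print("next-token loss:", loss)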



