> the latest iterations of "AI" are really good at making people believe it'll be there in 2 years.
This rings true for me. It feels like the current generation of AI companies/projects has been rewarded for making people believe the future is near. In reality, we're just driving toward the top of a local maximum for possible big money. We clearly won't reach AGI with the current LLM approaches, for example. (Perhaps a breakthrough in computer hardware might make it possible, but only in significantly inefficient ways.)
It feels very much like the way the Trisolarans convinced Earthlings they were helping them advance technologically, while really keeping them from developing any knowledge of quantum mechanics before their arrival.
>We clearly won't reach AGI with the current LLM approaches, for example.
Do you have any evidence to back this up? Scaling laws seem to show we aren't near a plateau, and it's not clear what kind of capability GPT-4, 5, or 6 may have.
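For concreteness, here's what the scaling laws actually predict. This is a quick sketch using the parametric fit from the Chinchilla paper (Hoffmann et al., 2022); the fitted constants are from the paper, while the scaled-up parameter/token counts are round numbers I picked, so treat the outputs as illustrative only:

    # Chinchilla parametric fit: L(N, D) = E + A / N^alpha + B / D^beta
    # Constants are the published fit from Hoffmann et al. (2022).
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def loss(n_params, n_tokens):
        # Predicted training loss for a model with N params trained on D tokens.
        return E + A / n_params**alpha + B / n_tokens**beta

    # Chinchilla itself (70B params, 1.4T tokens), then 10x and 100x scale-ups.
    for n, d in [(70e9, 1.4e12), (700e9, 14e12), (7000e9, 140e12)]:
        print(f"N={n:.0e}, D={d:.0e} -> predicted loss {loss(n, d):.3f}")
    # ~1.94, ~1.81, ~1.75: still falling, but creeping toward the E = 1.69 floor.

Note what this does and doesn't say: loss keeps improving with scale, but it asymptotes toward an irreducible term, and the fit says nothing about which capabilities show up at a given loss. So "not near a plateau in loss" doesn't by itself settle the AGI question either way.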
They’ve already been trained on orders of magnitude more text than a human being ever sees or hears in their entire life, without approaching human intelligence. What text is left to train them on?
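Rough arithmetic behind the "orders of magnitude" claim; the words-per-day and corpus-size figures below are loose assumptions, not measurements:

    # My assumptions: ~20k words heard/read per day, an 80-year life,
    # and a training corpus of ~1.5e13 tokens (Llama 3's reported ~15T
    # is in that range).
    lifetime_words = 20_000 * 365 * 80      # ~5.8e8 words
    corpus_tokens  = 1.5e13
    print(corpus_tokens / lifetime_words)   # ~2.6e4 -> four-plus orders of magnitude
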
> They’ve already been trained on orders of magnitude more text than a human being ever sees or hears in their entire life, without approaching human intelligence
Actually, ChatGPT has an IQ of ~83, so that is quite close to average human intelligence.
Furthermore, it was trained only on digital text; arguably, that was its only "sensory organ". It had no other senses with which to correlate the terms and concepts it inferred from text, and look how amazing it is just from that.
As the other poster said, multimodal training is the next step and people are not going to be prepared for it.
How much visual, auditory, and sensory data have they been trained on? What "pain" have they experienced? There are a lot of input vectors that haven't been factored in yet, and a lot of external integration points that haven't been explored.