Yes, calling an LLM "AI" was the first HUGE mistake.
A statistical model that can guess the next word is in no way "intelligent," and Sam Altman himself agrees this is not a path to AGI (what we used to just call AI).
Please define the word "intelligent" in a way accepted by doctors, scientists, and other professionals before engaging in hyperbole, or you're just as bad as the "AGI is already here" people. Intelligence is a gradient in problem solving, and our software is creeping up that gradient in its capabilities.
Intelligence is the ability to comprehend a state of affairs. The input and the output are secondary. What LLMs do is take the input and the output as primary and skip over the middle part, which is the important bit.