> calling the use of neural networks in statistical work "AI" is misleading at best.
Neural Networks are not considered AI anymore?
That just reinforces my thesis that "AI" is an ever-sliding window that means "something we don't yet have". Voice recognition used to be firmly in the "AI" camp and received grants even from the military. Now we have it on wristwatches (admittedly with some computation offloaded) and nobody cares. Expert systems were once very much "AI".
LLMs will suffer the same treatment pretty soon. Just wait.
Maybe it's a good indicator of misuse that the paper doesn't mention 'AI' or 'intelligence' even once.
> my thesis that "AI" is an ever-sliding window that means "something we don't yet have"
Or maybe it's the sliding window of "well, turns out this ain't it; there's more to intelligence than we wanted there to be".
If everything is intelligent, nothing is. If you define pattern recognition as intelligence, you'd be hard-pressed to find unintelligent lifeforms, for example. You didn't learn to recognize faces; you were literally born with that ability. And life, at least, has agency. Is evolution itself intelligent? What about water slowly wearing down rock into canyons?
Pretty soon? I already regularly see people proudly stating that LLMs "aren't really AI" and are just "a Markov chain" (yeah sure, let's ignore the self-attention mechanism of transformers, which violates the Markov property).
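To make that concrete, here's a toy sketch (made-up vocabulary, random stand-in embeddings, nothing from any real model): a first-order Markov chain conditions only on the previous token, while self-attention computes weights over every token in the context.

    import numpy as np

    # Bigram (order-1) Markov chain: the next-token distribution
    # depends ONLY on the single previous token, nothing earlier.
    transitions = {
        "the": {"cat": 0.5, "dog": 0.5},
        "cat": {"sat": 1.0},
        "dog": {"ran": 1.0},
    }

    def markov_next(prev_token):
        dist = transitions[prev_token]   # everything before prev_token is invisible
        return max(dist, key=dist.get)

    # Toy single-head self-attention: the output at each position is a
    # weighted sum over ALL positions, so arbitrarily distant tokens
    # can influence the prediction.
    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise similarity
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)              # softmax over the full context
        return w @ V

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 4))   # 5 context tokens, 4-dim embeddings
    print(markov_next("the"), attention(x, x, x).shape)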
For the sake of my sanity I've just started tuning out what anyone says about AI outside of specialist spaces and forums. I welcome educated disagreement with my positions, but I really can't take the antivaxx equivalent in machine learning anymore.
Chess was a major topic of AI research for decades because playing a good game of chess was seen as a sign of intelligence. Until computers started playing better than people and we decided it didn't count for some reason. It reminds me of the (real) I.I. Rabi quote used in Nolan's movie, said when Rabi was frustrated with how the committee was minimizing Oppenheimer's accomplishments: "We have an A-bomb! What more do you want, mermaids?"
They chased chess because they thought that if they could solve chess, AGI would be close. They were wrong, so they moved the goalpost to something more complicated, thinking that new thing would lead to AGI. Repeat forever.
> we decided it didn't count for some reason
Optimists did move their goals once they realized that solving chess didn't actually lead anywhere, and then they blamed the pessimists for moving, even though the pessimists mostly stayed still throughout these AI hype waves. It is funny that the optimists are constantly wrong and have to move their goals like that, yes, but people tend to point the finger at the wrong group here.
The AI winter came from AI optimists constantly moving the goalposts like that, constantly saying "we are almost there, the goal is just that next thing and then we are basically done!". AI pessimists don't do that; all of that came from the optimists trying to get more funding.
And we see the exact same thing play out today: a lot of AI optimists clamoring for massive amounts of money because they are close to AGI, just like what we have seen in the past. Maybe they are right this time, but this time, just like back then, it is the optimists who are setting and moving the goalposts.
It also turns out that you can make machines that fly without having them flap their wings like flying animals. But it would be absurd to claim that airplanes don't fly for that reason.
I think you'll find that defining "intelligence" is a bit harder than defining "flight", and that "a machine programmed to mechanically follow the steps of the minimax algorithm as applied to chess, and do nothing else" doesn't fit most people's definition of "intelligence" in the context of the philosophical question of what constitutes intelligence.
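For what it's worth, "mechanically following the steps" is not an exaggeration. Here is the whole procedure on a toy hand-built game tree (numbers made up, nothing chess-specific):

    # Minimax over a toy game tree: leaves are payoffs for the maximizing
    # player, inner nodes are lists of child subtrees.
    def minimax(node, maximizing):
        if isinstance(node, (int, float)):      # leaf: static payoff
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    tree = [[3, 5], [2, [9, 1]]]                # hand-built, purely illustrative
    print(minimax(tree, maximizing=True))       # -> 3

It's pure mechanical recursion with no model of the game beyond the numbers at the leaves, which is exactly why people balk at calling it intelligence.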
Some sign that it's more than autocomplete would be nice, maybe the ability to perform some kind of logical reasoning. ChatGPT does a good job of putting up an illusion of human-like intelligence, but speak to it for ten minutes and its nature as a plausible-text generator quickly makes itself apparent.
Neither are neural networks, by that definition. Or 'machine learning' in general. They have all been called "AI" at different points in time. Even expert systems – which are glorified IF statements – were supposed to replace doctors.
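"Glorified IF statements" is barely a caricature. A classic forward-chaining expert system is essentially this (rules made up, obviously not a real medical system):

    # Forward chaining: fire any IF-THEN rule whose conditions hold,
    # add its conclusion to the known facts, repeat until nothing new fires.
    rules = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "short_of_breath"}))
    # -> includes 'flu_suspected' and 'refer_to_doctor'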
People thought those techniques would ultimately become something intelligent, hence "AI", but they fizzled out. That isn't the doubters moving the goalposts; that is the optimists moving them, always thinking that what we have now is the golden ticket to truly intelligent systems.
Some people are incapable of learning. Therefore, LLMs are AI?
As far as I recall, the Turing test was devised long ago to give a practical answer to what was and wasn't artificial intelligence, because the debate over the definition is much older than we are.