> You know, for someone who critiques the "generative AI disease" on his wiki, it's a bit funny to be using generative AI here at all, hm?
> Instead of kvetching about parent's terminology
Knowing how something works, as opposed to assuming how it works, gives you a different level of understanding and perspective.
I know how it works, because I implemented the papers, and I started way before the current LLM hype. Models like NEAT, HyperNEAT, LSTMs, Bayesian RNNs, GANs, BERT, AutoBERT, and AlphaGo are inherently useful if you understand how each model works, what it can do, and what it can't. Those tools are great if you know their purpose and applications.
Post-LLM agents are a different problem, because a lot of people assume they're "AI" that magically does things, when in fact they just hallucinate. So the dangers are higher, given the widespread unawareness of systemic issues and the inherent responsibilities that come with using those tools.
(Read also: "Attention Is All You Need", one of the best papers on the topic, and even more relevant these days.)
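For anyone who hasn't read the paper: its core mechanism, scaled dot-product attention, is small enough to sketch in a few lines. This is a minimal, illustrative pure-Python version (single head, no masking or learned projections; the function and variable names are mine, not from the paper):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V.

    Q, K, V are lists of row vectors (plain Python lists of floats).
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Output row is the weight-averaged mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

If all keys are identical, every value gets equal weight, so the output is just the average of the value vectors; that edge case makes it easy to sanity-check the math by hand.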
PS: I've already spent too much effort commenting on a shitposting account. Anyway, have a great day nonetheless, and a Happy New Year!
Instead of kvetching about the parent's terminology when you've proven you clearly know what he meant, I suggest the disclaimer:
"No LLMs were used in the making of this website and its content, but self-hosted latent diffusion models were."