I'll give you another option: there are valid concerns with the kind of output that large AI models generate, and there are experts in the field working to improve the state of the art by researching and implementing solutions to those concerns. But there is also a subset of academics whose one trick is "veto, veto, veto" — they offer no workable solutions, or worse, refuse a good-faith reading of "yes, this may not be perfect yet, but that doesn't mean we have to shut the whole thing down."
I'm not as familiar with how this culture plays out in the AI field, but I've absolutely seen it in open source: people with little to no programming skill who do nothing but grep repos for instances of "whitelist" and "blacklist", act as though changing those terms is God's greatest work, and then whip up the usual faux-outrage storms on Twitter when their PRs are met with eyerolls.
Like GP I was replying to, it sounds like you’re mostly looking to air grievances here rather than discuss the topic at hand. Thank you for your work on OSS in any case, I imagine that’s a very frustrating experience.