I think the argument is that no harm has actually been shown, and there is no legal reason to hamstring one of the fastest-growing companies while Big Tech gets its shit together.
I feel as if there needs to be data demonstrating an explosion of SEO spam and fake social media content. People have been doing those things since we figured out we could; LLMs are just exponentially better at it.
If we're going to look for excuses to clip LLMs, I think we should focus on the core of the problem, not on the fact that LLMs can amplify it.
There was a very thorough theoretical understanding of the harm, as there is with AI. But harm from the Trinity test wasn't actually shown until after Japan, which again parallels AI, considering we don't yet know the long-term harms that existing, less powerful models may cause. Would you mind sharing your contention instead of just telling me you googled something?
My argument is also pretty clearly not that pedantic. I'm saying there was obviously going to be harm from the detonation of a nuclear weapon, but it wasn't technically shown until one was actually detonated. I'm saying the same is true of AI. You can disagree, but it's not _that_ ridiculous to compare the two.
Sounds like a sound argument: nonviolence isn't ignoring violence; you don't win a knife fight by declaring it a spirited debate; Darwin had a point; etc.