
Your argument is that companies should be allowed to do anything they like, because there will always be another country where they can act abusively with impunity?


I think the argument is that no harm has actually been shown and there is no legal reason to hamstring one of the fastest growing companies while Big Tech gets its shit together.


SEO spam and social media fake content are already pretty common, so there is definitely at least some harm.


I feel like we'd need data demonstrating an explosion of SEO spam and social media fake content. People have been doing those things since we figured out we could -- LLMs are just dramatically better at it.

If we're going to start looking for excuses to clip LLMs, I think we should focus on the core of the problem, not the fact that LLMs can amplify it.


It's a terrible argument.

The harm is transparent and greatly eclipses most other threats to cybersecurity.


Please list the harm that has been done. I do not see it as transparent.


If someone puts a gun to your head, what harm has been done?

Well, the potential harm is only deniable by the biggest of shills, but, technically, the only harm is psychological.


Everything, and I really do mean everything, has potential harm. Is your position that OpenAI should have to suspend their business over this?


Do you really see commensurate potential harm between an extreme scenario and a mundane one?

The development of AGI is an extreme, extreme scenario.


Is your position that OpenAI should have to suspend their business over this?


These goalposts keep evolving.


[flagged]


Your argument is so ridiculous and easy to disprove with even a moment's Google search that I wonder why you made it.


There was a very thorough understanding of the theoretical harm, just as there is with AI. Harm from the Trinity test was not shown until after Japan, which again parallels AI, since we don't yet know the long-term harms that may result from existing, less powerful models. Would you mind sharing your contention instead of just telling me you googled something?

My argument is also pretty clearly not that pedantic. I'm saying there was obviously going to be harm from the detonation of a nuclear weapon, but it wasn't technically shown until one was actually detonated. I'm saying the same thing is true of AI. You can disagree, but it's not _that_ ridiculous to compare the two.


Bad example. Plenty of harm during tests.


Definitely not - companies should be regulated, but stopping future releases is not possible when you have tech as powerful as GPT.


Where "anything they like" means publishing a text prediction engine that is pretty good?


Sounds like a sound argument: nonviolence isn't ignoring violence; you don't win a knife fight by declaring it a spirited debate; Darwin had a point; etc.



