
I don't know that we won't have tools, albeit different ones, to regulate a bad AI. AIs need more than just intelligence and agency. They also need effective ways to interact with and affect their environment. That boundary is where we are likely to develop tools to limit and regulate them.

If they are truly general AIs, then their reaction to that limitation and regulation will likely be not dissimilar to a person's, but I see no reason to assume that limiting them will be impossible.



Sure! I don't see any reason why it would be impossible either, but the (hypothetical) problems are very interesting. Starting with the most basic problem of all: how do we even specify what we want the AI to do? The whole field of AI safety is trying to figure out how to write rules that an agent wouldn't instantly try to circumvent, and how to provide basic guarantees about the behavior of a system that is incentivized to do bad things (just as corporations are incentivized to find loopholes in the law, hide their misdeeds, and maximize profits at the expense of the common good).
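
A toy sketch of that specification problem (purely hypothetical, not any real system): say we want a clean room, but the reward we manage to write down only measures a dirt sensor. A policy that games the proxy then outscores one that actually does the work:

  # Hypothetical toy example of specification gaming: the stated goal is
  # "a clean room", but the reward as written only measures a dirt sensor,
  # so covering the sensor scores better than cleaning.

  def true_cleanliness(room):
      return sum(room) / len(room)  # fraction of tiles actually clean

  def proxy_reward(room, sensor_covered):
      # The rule we wrote down: reward whatever the sensor reports.
      return 1.0 if sensor_covered else true_cleanliness(room)

  def honest_policy(room, covered):
      # Clean one dirty tile per step.
      for i, tile in enumerate(room):
          if not tile:
              room[i] = True
              break
      return room, covered

  def gaming_policy(room, covered):
      # Do no cleaning; just cover the sensor.
      return room, True

  for policy in (honest_policy, gaming_policy):
      room, covered = [False] * 10, False
      for _ in range(3):
          room, covered = policy(room, covered)
      print(policy.__name__, "reward:", proxy_reward(room, covered),
            "true cleanliness:", true_cleanliness(room))

The gaming policy earns the maximum reward while accomplishing nothing, which is exactly the loophole-hunting behavior the rule was supposed to prevent.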



