
It would spend all its time and resources to even slightly decrease the probability of a human coming to harm.

You are assuming there are no thresholds, which I believe is not correct for any decent (fictitious) AI.


