Alignment is getting overloaded here. In this case, it primarily refers to reinforcement learning outcomes. In the singularity case, it refers to keeping the robots from murdering us all because doing so creates more paperclips.