
How much reasoning capability LLMs have is up for debate.

With a true AGI you could just tell it to keep people’s personal information confidential and expect that it would understand that instruction.



It could understand the instruction and try to comply, yet still fail to recognize where it would leak data that could later be corroborated by someone. That is at least what commonly happens with human AGIs.



