> They're very good at tone, and feeling right without actually being write.

The text they generate is probably free of errors like this one as well.



Probably.


I’m sure someone is working on adding support for human-like errors in LLM outputs.


