Hacker News

What if it can fix itself?


ChatGPT (and GPT-3) can criticize its own output, and then incorporate its own feedback into an improved version. This works for essays, for code...
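That critique-then-revise loop is easy to sketch in code. This is a minimal illustration, not an official API: `generate` stands in for any prompt-to-text callable (an LLM API wrapper, for instance), and the prompt wording is purely hypothetical.

```python
def critique_and_revise(generate, task, rounds=2):
    """Draft a response, then repeatedly ask the model to critique
    its own draft and rewrite it to address that critique.

    generate: any callable mapping a prompt string to a text string
              (e.g. a thin wrapper around an LLM chat API).
    """
    draft = generate(f"Respond to this task: {task}")
    for _ in range(rounds):
        # Ask the model to find flaws in its own output...
        critique = generate(
            f"List the flaws in this response to '{task}':\n{draft}"
        )
        # ...then feed the critique back in and request a revision.
        draft = generate(
            f"Rewrite the response below to fix these flaws.\n"
            f"Flaws:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft
```

Each round costs two extra model calls (one critique, one rewrite), so `rounds` trades latency and cost against quality.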

I'm waiting for a Copilot upgrade that puts red squigglies under "probably wrong" code, because GPT-3 can already detect and fix most of it.


ChatGPT can write prompts for itself, and it can do so recursively (i.e., you can direct it to write a prompt that causes the new instance to write a prompt, and so on). It can be fun to look for the shortest prompt that survives the most iterations, and adding extra requirements that every iteration must satisfy makes it more challenging.
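The game described above can be sketched as a simple feedback chain: each model output becomes the next prompt, and a seed "survives" as long as the chain keeps producing usable prompts. A minimal sketch, assuming `generate` is any prompt-to-text callable and `is_valid_prompt` is a hypothetical check you'd define for your own rules:

```python
def prompt_chain(generate, seed, max_iterations=5, is_valid_prompt=None):
    """Feed each model output back in as the next prompt.

    Returns the list of prompts produced, starting with the seed.
    Stops early if `is_valid_prompt` (optional) rejects an output,
    i.e. the chain has 'died'.
    """
    history = [seed]
    for _ in range(max_iterations):
        nxt = generate(history[-1])
        if is_valid_prompt is not None and not is_valid_prompt(nxt):
            break  # the chain failed to produce a surviving prompt
        history.append(nxt)
    return history
```

The score for a seed would then be `len(prompt_chain(...)) - 1`, the number of iterations it survived.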



