
Hey, everyone makes mistakes when coding, even LLMs.

There are two approaches I use; I’m sure there are more.

* Do multiple completions, filter to the ones that run successfully, and take the most common result.

* Do a completion; if it fails, ask the LLM to find and correct the bug.
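The two approaches above can be sketched roughly like this. Everything here is hypothetical scaffolding: `run_safely`, `majority_vote`, and `fix_loop` are made-up names, and the candidate completions stand in for actual LLM samples.

```python
from collections import Counter


def run_safely(code: str):
    """Execute a candidate completion; return its `result` value, or None on failure."""
    env: dict = {}
    try:
        exec(code, env)
        return env.get("result")
    except Exception:
        return None


def majority_vote(completions: list[str]):
    """Approach 1: keep completions that run, take the most common result."""
    results = [r for c in completions if (r := run_safely(c)) is not None]
    return Counter(results).most_common(1)[0][0] if results else None


def fix_loop(code: str, ask_fix, max_rounds: int = 3):
    """Approach 2: if a completion fails, ask the model to correct it and retry."""
    for _ in range(max_rounds):
        result = run_safely(code)
        if result is not None:
            return result
        code = ask_fix(code)  # hypothetical call back to the LLM with the broken code
    return None


# Stand-ins for sampled completions: most agree, one is off-by-one, one is broken.
candidates = [
    "result = sum(range(10))",    # 45
    "result = sum(range(10))",    # 45
    "result = sum(range(11))",    # 55, off-by-one
    "result = sum(range(0, 10)",  # syntax error, filtered out
]
print(majority_vote(candidates))  # -> 45
```

Majority voting only helps when correct completions agree with each other more often than the buggy ones do, which is why it pairs naturally with the "filter the ones that run" step.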



This sounds like reducing the temperature with more steps.


Reducing the temperature doesn't automatically result in correctness; it only results in more precision.

It could be inaccurate with high precision, like a bow that consistently undershoots, and you would have a harder time trying to correct it.


You're right, and I understand that. But you're less likely to get the same result with high temperature, so it will be hard to find correct outputs by looking for overlap among them. Combining the pieces of several outputs that look good to you is a good way to use it, though. Much of the natural language it creates is useful to me as essentially an extended thesaurus, or for making slightly different points in emails that I hadn't thought of. It undoubtedly knows *way* more than any human on Earth, so it's great at teaching people new things. I usually just slightly reword everything that GPTs provide, so it's a *phenomenal* learning tool.



