
The trick is finding a way to ensure that the LLM produces something which is always correct. Like in this case, the LLM only changes compiler optimizations, not the assembly itself, so no matter what it outputs, the code is correct; it may just be larger. Other possibilities: an LLM which applies semantics-preserving program transformations, or an LLM combined with a proof assistant to verify the output (more generally, and for any domain, an LLM as an NP oracle).
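A minimal sketch of the "correct by construction" idea: restrict the model's output space to choices that are all valid, so a bad proposal can only be suboptimal, never wrong. Everything here is hypothetical — `llm_propose` stands in for a model call (here just a seeded random pick), and `cost` stands in for a real metric like resulting code size.

```python
import random

# Fixed menu of semantics-preserving optimization passes: whatever the
# "model" picks, the result is correct by construction; only quality
# (e.g. code size) varies between proposals.
PASSES = ["inline", "dce", "licm", "gvn", "sroa"]

def llm_propose(seed: int, k: int = 3) -> list[str]:
    # Hypothetical stand-in for an LLM call: returns an ordering of
    # k passes drawn from the fixed menu.
    rng = random.Random(seed)
    return rng.sample(PASSES, k)

def cost(schedule: list[str]) -> int:
    # Hypothetical cost model standing in for compiled-code size;
    # lower is better.
    return sum((i + 1) * len(p) for i, p in enumerate(schedule))

def search(n_candidates: int = 16) -> list[str]:
    # Keep the cheapest proposal seen. Because every proposal is valid
    # by construction, this loop can never yield an incorrect result,
    # only a suboptimal one.
    best = llm_propose(0)
    for seed in range(1, n_candidates):
        cand = llm_propose(seed)
        if cost(cand) < cost(best):
            best = cand
    return best

print(search())
```

The NP-oracle variant is the same loop with one extra step: run each candidate through an external checker (a verifier or proof assistant) and discard anything that fails, so the model only ever proposes and never decides.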

But I agree, as of now I haven't seen good uses where LLMs produce reliable output. Not only do you need the guarantee that whatever the model outputs yields a correct program, you also need a domain where an LLM is considerably better than a simple or random algorithm, and you need a lot of training data (which severely restricts how creative you can be with the output).


