Hacker News

I suspect that coordination between a human programmer and an LLM doesn't require strong programming skills, but it does require strong debugging fundamentals. A month ago I had ChatGPT write a function in Racket from just a text description: take two lists of symbols of arbitrary (but equal) length, and construct a new list that, at each position, selects an element at random from one of the two lists at that same location. There was some other logic in there, too, based on the way I'd done the structs.

ChatGPT wrote the function perfectly on the first shot, but then I realized it was only working most of the time -- it turned out ChatGPT had made a really obvious off-by-one error in the loop, and it was failing on roughly 1/n attempts, where n is the length of the list.
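For reference, the described function is tiny. Here's a rough Python sketch of what it does (the original was Racket; the name `pick_across`, the error handling, and the omission of the struct logic are all my guesses, not the commenter's code):

```python
import random

def pick_across(xs, ys):
    """For each position, randomly take the element from xs or ys.

    A sketch of the function described above, not the original Racket.
    The struct-related logic the commenter mentions is omitted.
    """
    if len(xs) != len(ys):
        raise ValueError("lists must be the same length")
    # The fragile spot: an off-by-one in this loop bound, or a random
    # draw with an inclusive upper bound, is exactly the kind of bug
    # that passes casual testing but fails intermittently.
    return [random.choice((xs[i], ys[i])) for i in range(len(xs))]
```

The intermittent failure is what makes this class of bug nasty: a single spot check can easily pass, so you only notice it once you run the function repeatedly.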

It's much like how ChatGPT usually knows which formulas and approaches to use when solving graduate-level mathematics, and its reasoning about the problem is pretty good, but it still gets the wrong answer because it can't add integers reliably.



> strong debugging fundamentals

Something that experienced (and expensive) programmers are good at, incidentally.


Yes, of course. Today, the only people with good debugging skills are those who have spent a lot of time debugging their own code (or the code of others). In an LLM-dominated environment, however, it may become possible to develop strong debugging skills while having only mediocre programming skills. This would be similar to the "boot camp web developer" archetype, who has reasonable skills only in a narrow domain.

Full transparency: I think I'm one of those bad programmers who is a good debugger, but I've also been a full-time Linux nerd since Ubuntu 8.04, so I'm very comfortable reading error messages.



