
Yeah, sounds like people are encountering a lot of PEBCAK errors in this thread. You get out of LLMs what you put into them, and the complaints, at this point, are more an admission of an inability to learn the new tools well.

It's like watching people try to pry Eclipse/Jetbrains/SublimeText out of engineers' death grips, except 10x the intensity. (I still use Jetbrains fyi :p)



Well, that's the argument most people here are making - that current LLMs are not good enough to be fully autonomous precisely because a human operator has to "put the right thing into them to get the right thing out."

If I'm spending effort specifying a problem N times in very specific LLM-instruction-language to get the correct output for some code, I'd rather just write the code myself. After all, that's what code is for. English is lossy; code isn't. I can see codegen getting even better in larger organizations if context windows grow large enough to hold a significant portion of the codebase.

There are areas where this is immediately better, though (customer feedback, subjective advice, small sections of sandboxed/basic code, etc.). Basically, areas where the effects of information compression/decompression can be tolerated or passed on to the user to verify.

I can see all of these getting better over the next couple of months to a few years.




