
No one uses it to generate code. Really. Talk to people who actually use it and listen to what they say… they use it to help them write code.

If you try to use it to generate code outright, you’ll find it underwhelming and, frankly, quite rubbish.

However, if you want an example of what I’ve seen multiple people do:

1) Open your code in window A.

2) Open ChatGPT in window B (side by side).

3) You write the code.

4) When you get stuck, have a question, need advice, or need to resolve an error, ask ChatGPT instead of searching and sifting for a Stack Overflow answer (or whatever).

You’ll find that it’s better at answering easy questions, translating from x to y, giving high-level advice (e.g. code structure, high-level steps), and suggesting solutions to errors. It can generally manage trivial code snippets like “how do I map x to y” or “how do I find this as a regex in xxx”.
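To make that concrete, here’s roughly what those two questions get you if you ask about Kotlin (illustrative snippets of mine, not actual ChatGPT output):

    // "How do I map x to y": transform one list into another
    val xs = listOf(1, 2, 3)
    val ys = xs.map { it * it }        // [1, 4, 9]

    // "How do I find this as a regex": pull every match out of a string
    val log = "error=404 error=500"
    val codes = Regex("""error=(\d+)""")
        .findAll(log)
        .map { it.groupValues[1] }     // just the captured digits
        .toList()                      // ["404", "500"]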

If this looks a lot like the sort of question someone learning a new language might ask, you’d be right. That’s where many people are finding a lot of value in it.

I used this approach to learn Kotlin and write an IntelliJ plugin.

…but until there’s another breakthrough (e.g. latent diffusion for text models?), you’re probably going to get limited value from ChatGPT unless you’re asking easy questions or working within a higher-level framework. Copy-pasting into the text box will give you exactly the results you’ve experienced.

(By higher-level framework I mean, for example: chain of thought, code validation, n-shot code generation, and tests/metrics to pick the best generated code. It’s not that you can’t generate complex code; it’s that naively pasting into chat.openai.com will not, ever, do it. A sketch of the n-shot idea follows.)
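A minimal sketch of n-shot generation with validation in Kotlin — the Llm interface here is a hypothetical stand-in for whatever completion API you actually call:

    // Hypothetical client; complete() stands in for a real API call
    // (e.g. whatever endpoint sits behind chat.openai.com).
    interface Llm {
        fun complete(prompt: String): String
    }

    // n-shot generation: sample several candidates, drop the ones that
    // fail validation, score the survivors, return the best (or null).
    fun generateBest(
        llm: Llm,
        prompt: String,
        n: Int = 5,
        isValid: (String) -> Boolean,  // e.g. "does it compile?"
        score: (String) -> Int         // e.g. number of unit tests passed
    ): String? =
        (1..n)
            .map { llm.complete(prompt) }  // n independent samples
            .filter(isValid)               // code validation step
            .maxByOrNull(score)            // metric picks the winner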



That matches my experience. It's a sort of shortcut for the old process of googling for examples and sifting through the results. Even then I didn't typically cut and paste from those results; if I did, it was mostly as a scaffold to build from, and I deleted a fair amount of what was there.

Many times it works really well and surfaces the kind of example I need. Sometimes it works badly. Usually when it's bad, the google/sift method gives similarly poor results, which I guess makes sense: there wasn't much for it to train on, so its answer wasn't great.

One area where it works really well for me is third-party APIs whose documentation is mostly bare class/function reference. ChatGPT generally does a good job of producing an orchestrated example with relevant comments that helps me see the bigger picture.
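For example, handed nothing but the class/function reference for the JDK's java.net.http package, it will wire the pieces together into something like this (my sketch of typical output; the URL is a placeholder):

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    fun main() {
        // One reusable client for the whole app
        val client = HttpClient.newHttpClient()

        // The javadoc documents each builder call in isolation; an
        // orchestrated example shows how they actually fit together
        val request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.com/api/items"))  // placeholder
            .header("Accept", "application/json")
            .GET()
            .build()

        val response = client.send(request, HttpResponse.BodyHandlers.ofString())
        println("${response.statusCode()}: ${response.body()}")
    }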


Me too. As someone who used to be a dev but hasn't written code professionally in twelve years or so, it was such an amazing accelerant. My iteration loop was to contextualize it (in English and in code), ask how to do a thing, look at its response, tweak it, execute, see what happened, alter it some more.

The fact that it usually had errors didn't bother me at all -- it got much of the way there, and it did so by doing the stuff that is slowest and most boring for me: finding the right libraries / functions / API setup, and structuring the code within the broader sweep.

Interesting side note: unpopular languages that have nonetheless been around for a long time, with a lot of high-quality, well-documented code / discussion / projects around them, are surprisingly fecund. Like, it was surprisingly good at elisp, given how fringe that is.


With GPT-4, you can often just paste the error message in without any further commentary, and it will reply with a modified version of the code that it thinks will fix the error.
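A toy illustration (mine, not from a real session), using a common Kotlin null-safety error:

    // Before (doesn't compile):
    //   fun shout(name: String?) = println(name.uppercase())
    // kotlinc: "only safe (?.) or non-null asserted (!!.) calls are
    // allowed on a nullable receiver of type String?"

    // After: the kind of minimal fix GPT-4 suggests from the error alone
    fun shout(name: String?) = println(name?.uppercase() ?: "stranger")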


And then you waste time fixing the error the "fix" GPT introduced. Clever.



