
My wife's employer has experimented with ChatGPT for writing. Their experience has been that it's way slower to use it in anything but very limited ways, and having non-writers/editors try to use it to do the job of writers has been a disaster.

I think you need just the right combo of task and worker to actually see a notable speed improvement from it... unless the job is "write huge amounts of bullshit", which some jobs truly are (astroturfing, certain kinds of advertising or marketing, scams).

[EDIT] I should add that this isn't preventing them from hyping the effects externally. I'd be wary of companies' claims re: the effectiveness of AI. They're all afraid of being seen as having missed the train, even if the train's not really going where they need to go.



It's probably the same deal as with LLMs generating code: it can crank out something that's probably broken, and the person using the LLM needs to know how to code to see where it's broken. Companies might be able to reduce the headcount of programmers / copywriters / artists, but certainly not replace them right now (or possibly ever).


I suspect that collaboration between a human programmer and an LLM doesn't require strong programming skills, but it does require strong debugging fundamentals. A month ago I had ChatGPT write a function in Racket from just a text description: take two lists of symbols of arbitrary length (but only if both lists are the same size) and construct a new list that, at each position, picks an element at random from one of the two lists. There was some other logic in there, too, based on the way I'd done the structs.

ChatGPT seemed to write the function perfectly on the first shot, but then I realized it was only working most of the time -- it turned out ChatGPT had made a really obvious off-by-one error in the loop, and it was breaking on roughly 1/n of attempts, where n is the size of the list.
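For anyone curious, the function was roughly like the following -- sketching it here in Python rather than Racket, with made-up names, and leaving out the struct-specific logic:

    import random

    def pick_between(xs, ys):
        # Build a new list by picking, at each position, the element
        # from xs or ys at random; both lists must be the same length.
        if len(xs) != len(ys):
            raise ValueError("lists must be the same length")
        return [random.choice(pair) for pair in zip(xs, ys)]

    # e.g. pick_between(['a', 'b', 'c'], ['x', 'y', 'z']) -> ['a', 'y', 'c']

    # ChatGPT's version had an off-by-one in the loop, so it only blew up
    # on roughly 1 in n runs (n = list length) -- which is why it looked
    # fine at first.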

It's exactly the same as how ChatGPT usually knows what formulas and approaches to take when solving graduate-level mathematics, and its reasoning about the problem is pretty good, but it can't get the right answer because it can't add integers reliably.


> strong debugging fundamentals

Something that experienced (and expensive) programmers are good at, incidentally.


Yes, of course. The only people with good debugging skills are the people who have spent a lot of time debugging their own code (or the code of others). However, in an LLM-dominated environment, it may be plausible for someone to develop strong debugging skills while having only mediocre programming skills. This would be similar to the "boot camp web developer" archetype who has reasonable skills only in a narrow domain.

Full transparency: I think I'm one of those bad programmers who is a good debugger, but I've also been a full-time Linux nerd since Ubuntu 8.04, so I'm very comfortable reading error messages.


Even if the code isn't broken, the issue is that the vast majority of code isn't written in a vacuum. Refactoring, rearchitecting, etc. are quite tricky.

And writing code is the easy part. Architecting is where things get tricky and there are a lot of subjective decisions to be made. That's where soft skills become really important.


> it can crank out something that's probably broken, and the person using the LLM needs to know how to code to see where it's broken.

Same with my junior devs


I see this claim so often and I fail to understand it every time... What kind of junior devs do you hire, where this is the case? And what kind of tasks do you give them?


It was a cheap shot at junior devs, not a serious statement


I've experimented myself, and it's been vaguely helpful for creating some stubs and providing some boilerplate a bit faster than I could have written them myself. But I can't say I've found it genuinely useful for anything where I want even a workmanlike product on the other end.


Non-native speakers use ChatGPT to fix their grammar, for example, but you still have to actually read and check the output.

If you are going to use ChatGPT for writing, you would have to hire an army of fact-checkers, because it can literally fake citations.


It absolutely will fake citations. An academic friend of mine deliberately set an essay topic for her students last semester that required references; she picked a topic where she knew ahead of time there were only three papers with relevant content.

She caught a dozen or so of her students cheating (via ChatGPT) by looking for hallucinated papers in the bibliographies.



