They're probably talking about some point after the capabilities of LLMs started to become clear.
It's why Codex, Claude Code, Gemini CLI etc. were developed at all - it was clear that if you wanted a concrete application of LLMs with clear productivity benefits, coding was low-hanging fruit, so all the AI vendors jumped on that and started hyping it.
Sure, but jumping from "it's amazing these things work for code at all" to "software engineering is solved" is something only grifters or those drunk on the Kool-Aid did.
I do agree that it was thought that these LLM agents would be extremely useful and that is why they were developed, and I happen to believe they in fact are extremely useful (without disagreeing that much of the stuff in the article definitely does happen).
I just sort of resent the setup that it was supposed to be X but actually failed, when not only is there only minor evidence that it failed, but there was also only a brief period of time when it was supposed to be X.
I've been pointing out LLM-written stuff for months now, and often people ask how I determined it. When they do, I mention all the aesthetic things, and then I usually engage with the content and why the content is bad. In every case the content has been garbage. Usually it's a really bad infodump, in a singular tone, usually oversold, and you can't tell what was important to the original author and what's not. Often some of the info isn't right. So it's like an infodump with extra labor to read, one that includes mistakes and masks what the author cared about.
It's just too easy to make garbage content that gets upvoted because it looks good if you skim it and serves as a good jumping-off point for discussion. Engaging with the content of all the LLM-written garbage is a major waste of time and would make the site not worth it anymore to me.
Like it's already a major drain just to notice the aesthetic tells and then disengage. It's significantly more work to engage, and, AFAICT, around a 0% conversion rate to "oh shit I'm actually glad I read that."
> (FWIW, some people consider this style of colon use an LLM-ism.)
And, in this case, it is indeed LLM output. Maybe you are already aware of that, I couldn't tell - the account you're responding to is 19 hours old and their only previous post is a Show HN submission for a tool they're making for neurodivergent people to use LLMs to communicate (https://www.bottomuptool.com).
As a recap, my reply to your reply was that DoD is the actual newspeak, and your reply to that turn of the discussion was that you were not discussing newspeak.
In trying to understand if I'm missing something, I looked up what newspeak means. I (as well as, probably, a few other commenters, based on the contents of their comments) was under the assumption it meant "new speak", as in something new.
In case anyone else reading this was not aware of this, this is what I discovered.
It's a term from George Orwell's 1984, describing a language designed to make certain thoughts unthinkable by removing words from the language. It has nothing to do with the age of the term.
Hence, the Dept of Defense is indeed newspeak. The Dept of War, while a new name for the department, is too literal to be newspeak.
Thanks for the opportunity for me to learn something!
Department of Defense has historically been a prime example of newspeak.
I think Department of War is also newspeak. Or at least, they didn't change the name just to get the name in line with the amount of war the department does.
They changed it because they wanted to do even more war. The amount of war the department does under the name "Defense" has been the status quo for a long time, and my take is they wanted us to think of them differently so they could do even more war, which they have since been doing.
The aesthetic tells are excessive headings for small sections, many of which are largely just a list. There's the numbered list, but then there are the sections that are a sequence of "Bold Text: Single sentence." Plenty of em-dashes too, but I don't really look at that much.
It's basically a long list of factual statements, all with the same weight and very little opinion expressed about the experience. I don't actually mind an infodump from a human; you usually can still work out some of what they care about, and you can also be reasonably sure they didn't fill in the gaps. Most likely I would have enjoyed whatever the author fed into the LLM.
Yeah, we're in a weird place where the prompt would still have to contain most of the content and then be "prettied up". Since the generic AI "prettied up" style is so same-y and dry, it would be refreshing to just see whatever the original was, warts and all. Just blog-post the prompt!
At what point in time? Did anyone foresee coding being one of the best and earliest applications of this stuff?