
The reason I shout "weird and unintuitive" from the rooftops is that no LLM vendor will ever describe their weird and unintuitive products that way.


Describing the commercial offerings as "weird and unintuitive" is a weak criticism, one palatable to corporate comms teams: it suggests a fault in the user ("you're holding it wrong") rather than deficiencies inherent to the LLM architecture. No amount of marketing can fix the lethal trifecta or the hallucination problem, can it?

https://www.anthropic.com/solutions/code-modernization:

    Generate dependency graphs, identify dead code, and prioritize refactoring based on code complexity metrics and business impact.
    Transform legacy codebases systematically while maintaining business continuity.
    Claude Code preserves critical business logic while modernizing to current frameworks.
    Claude Code can seamlessly create unit tests for refactored code, identify missing test coverage, and help write regression tests.
    Identify and patch vulnerabilities while maintaining regulatory compliance patterns embedded in legacy systems.
    Create modern documentation from undocumented legacy code, capturing institutional knowledge before it's lost.


OpenAI actually put out an interesting paper on addressing hallucination yesterday, but I've not spent enough time with it to judge how credible it is: https://openai.com/index/why-language-models-hallucinate/

I don't particularly care how these companies market their software - what I care about is figuring out what these things can actually do and what they're genuinely useful for, then helping other people use them in as productive a way as possible given their inherent flaws.



