
I've been "talking" quite a bit with Ollama models, and they're often confidently wrong about Wikipedia-level stuff, even when the system prompt is explicitly constrained in this regard. Usually I get Wikipedia as understood by a twelve-year-old with the self-confidence of an adult Peter Thiel. When it isn't outright factually wrong, it's often subtly wrong in a way that a cursory glance at some web search results is unlikely to rectify.
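
For reference, this is roughly what I mean by constraining the system prompt (a minimal sketch using the ollama Python client; the model name and the exact wording are just examples, not what I actually run):

    import ollama  # pip install ollama

    # Illustrative constraint only; real prompts I've tried were longer.
    SYSTEM = ("Only state facts you are certain of. If you are unsure, "
              "say 'I don't know' instead of guessing.")

    response = ollama.chat(
        model='llama3',  # example model name
        messages=[
            {'role': 'system', 'content': SYSTEM},
            {'role': 'user', 'content': 'When was the Peace of Westphalia signed?'},
        ],
    )
    print(response['message']['content'])

Even with instructions like that, the models tend to answer confidently rather than admit uncertainty.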

It takes me more time to verify the stuff they output than to grab a book off Anna's Archive or from my paid collections and look it up directly. I'd rather spend that time making notes than waiting for the LLM to respond and double-checking it.


