Hacker News

Experience. If I recognize that they give unreliable answers on a specific topic, I don't question them on that topic anymore.

If they lie on purpose I don’t ask them anything anymore.

Real experts give reliable answers; LLMs don't.

The same question can yield different results.



So LLMs are unreliable experts, okay. They're still useful if you understand their particular flavor of unreliability (basically, they're way too enthusiastic) - but more importantly, I bet you have exactly zero human experts on speed dial.

Most people don't even know any experts personally, much less have one they could call for help on demand. Meanwhile, the unreliable, occasionally tripping pseudo-experts named GPT-4 and Claude are equally unreliably-expert in every domain of interest known to humanity, and don't mind me shoving a random 100-page-long PDF in their face in the middle of the night - they'll still happily answer within seconds, and the whole session costs me fractions of a cent, so I can ask for a second, and third, and tenth opinion, and then a meta-opinion, and then compare and contrast with search results, and they don't mind that either.

There's lots to LLMs that more than compensates for their inherent unreliability.


> Most people don't even know any experts personally, much less have one they could call for help on demand.

Most people can read original sources.


Which sources? How do I know I can trust the sources that I found?


They can, but they usually don't, unless forced to.

(Incidentally, not that different from LLMs, once again.)


How do you even know what original sources to read?


There's something called a bibliography at the end of every serious book.


I am recalling CGP Grey's descent into madness due to actually following such trails through historical archives: https://www.youtube.com/watch?v=qEV9qoup2mQ

Kurzgesagt had something along the same lines: https://www.youtube.com/watch?v=bgo7rm5Maqg


And yet here you are making an unsourced claim. Should I trust your assertion of “most”?


It's not that black and white. I know of no single person who is correct all the time. And if I knew such a person, I still wouldn't be sure, since they would outsmart me.

I trust some LLMs more than most people because their BS rate is much, much lower than that of most people I know.

For my work, that is easy to verify. Just try out the code, try out the tool or read more about the scientific topic. Ask more questions around it if needed. In the end it all just works and that's an amazing accomplishment. There's no way back.




