So LLMs are unreliable experts, okay. They're still useful if you understand their particular flavor of unreliability (basically, they're way too enthusiastic) - but more importantly, I bet you have exactly zero human experts on speed dial.
Most people don't even know any experts personally, much less have one they could call for help on demand. Meanwhile, the unreliable, occasionally tripping pseudo-experts named GPT-4 and Claude are equally unreliably expert in every domain of interest known to humanity, and they don't mind me shoving a random 100-page PDF in their face in the middle of the night - they'll still happily answer within seconds. The whole session costs me fractions of a cent, so I can ask for a second, third, and tenth opinion, then a meta-opinion, then compare and contrast with search results, and they don't mind that either.
There's a lot about LLMs that more than compensates for their inherent unreliability.
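To make that "many cheap opinions" workflow concrete, here's a minimal sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompts, and helper functions are illustrative assumptions, not a fixed recipe:

```python
# Sketch: collect several independent "opinions" on one question, then ask
# for a meta-opinion comparing them. Assumes the OpenAI Python SDK with an
# API key in the environment; the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, n: int = 3) -> list[str]:
    """Ask the same question n times; sampling means the answers differ."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": question}],
            temperature=1.0,
        )
        answers.append(resp.choices[0].message.content)
    return answers

def meta_opinion(question: str, answers: list[str]) -> str:
    """Ask the model to compare the answers collected above."""
    numbered = "\n\n".join(
        f"Answer {i + 1}:\n{a}" for i, a in enumerate(answers)
    )
    prompt = (
        f"Question: {question}\n\n{numbered}\n\n"
        "Where do these answers agree, and where do they contradict each other?"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "What are the failure modes of optimistic locking?"
print(meta_opinion(question, ask(question)))
```

The point isn't the specific prompts; it's that a tenth opinion costs roughly the same as the first, which no human expert can match.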
It's not that black and white. I know of no single person who is correct all the time. And even if I knew such a person, I still couldn't be sure, since they would outsmart me.
I trust some LLMs more than most people, because their BS rate is much, much lower than that of most people I know.
For my work, that is easy to verify: just try out the code, try out the tool, or read more about the scientific topic, and ask follow-up questions if needed. In the end it all just works, and that's an amazing accomplishment. There's no way back.
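A minimal sketch of the "just try out the code" reflex: run a model-suggested snippet in a throwaway subprocess against a known case before trusting it. The snippet and the expected output here are made-up stand-ins:

```python
# Sketch: verify a model-suggested snippet by executing it in isolation
# against a known test case. The snippet and check are illustrative.
import subprocess
import sys
import textwrap

suggested = textwrap.dedent("""
    def slug(s):
        return "-".join(s.lower().split())

    assert slug("Hello World") == "hello-world"
    print("ok")
""")

result = subprocess.run(
    [sys.executable, "-c", suggested],
    capture_output=True,
    text=True,
    timeout=10,
)
if result.returncode == 0 and result.stdout.strip() == "ok":
    print("verified: the suggestion survives its own test case")
else:
    print("rejected:", result.stderr.strip())
```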
If they lie on purpose, I don't ask them anything anymore.
Real experts give reliable answers; LLMs don't. Ask one the same question twice and you can get a different answer each time.
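That variance is easy to demonstrate; a minimal sketch, again assuming the OpenAI Python SDK (model name illustrative):

```python
# Sketch of that non-determinism: the same prompt, sampled twice. With
# temperature above zero the output is sampled fresh on every call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "In one sentence: why is the sky blue?"

for i in range(2):
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # nonzero temperature => answers will usually differ
    )
    print(f"Run {i + 1}: {resp.choices[0].message.content}")
```

Setting temperature to 0 narrows the spread, though hosted APIs still don't guarantee bit-identical outputs across runs.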