My personal criterion for calling somebody an expert, or "educated", or a "scholar", is that they have some area of expertise, any area at all, where they really know their shit.
And as a consequence, they know where that area of expertise ends. And they know what half-knowing something feels like compared to really knowing something. And thus, they will preface and qualify their statements.
LLMs don't do any of that. I don't know if they could; I do know it would be inconvenient for the sales pitch around them. But the people I call experts distinguish themselves not by how often their predictions turn out right, but by qualifying their statements with the degree of uncertainty they actually have.
> And as a consequence, they know where that area of expertise ends. And they know what half-knowing something feels like compared to really knowing something. And thus, they will preface and qualify their statements.
How do you account for examples like Musk, then?
He is very cautious about rockets, and all the space-science people I follow and hold in high regard say he's actually a domain expert there. He regularly manages expectations downward for experimental SpaceX launches.
He's also very bold and brash about basically everything else; the majority of people I've seen claiming he's skilled in some other area have turned out not to have any skills in those areas themselves, while the people who do have expertise say he's talking nonsense at best and taking wild safety risks at worst.
Musk is probably really good at back-of-the-envelope calculations, the kind that let you excel in first-year physics. That skill puts you above a lot of people in finance and engineering when it comes to quickly assessing an idea. It is also a gimmick, but I respect it. My wild guess is that he uses that one skill to figure out whom to believe among the people he hires.
The rest of the genius persona comes from growing up with enough ego to become a good salesman, plus badly managed autism and a badly managed drug habit.
Seeing him dabble in politics and social media shows instantly how little he understands the limits of his knowledge. A scholar he is not.
Anecdotal, but I told ChatGPT to include its level of confidence in its answers and to let me know if it didn't know something. This priming resulted in it starting almost every answer with some variation of "I'm not sure, but..." when I asked it vague or speculative questions, while direct, matter-of-fact questions with easy answers got confident replies.

That's not to say I think it is rationalizing its own level of understanding, but that somewhere in the vector space it seems to have a gradient for speculative language. If primed to include language about it, that could help cut down on some of the hallucination. No idea if this will affect the rate of false positives on the statements it does still answer confidently, however.
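For what it's worth, the setup was roughly the sketch below, using the OpenAI Python client. The model name and the exact wording of the instruction are just my choices, not a vetted recipe:

```python
# Sketch of priming a chat model to state its confidence.
# The system prompt wording and model name are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Before answering, state your confidence (high/medium/low). "
    "If you don't know something, say so explicitly instead of guessing."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Speculative question: tended to draw an "I'm not sure, but..." opener.
print(ask("Will fusion power be commercially viable by 2040?"))
# Easy factual question: tended to get a direct, confident answer.
print(ask("What is the boiling point of water at sea level, in Celsius?"))
```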
You'd have to check the veracity of those leading phrases. I'm guessing it just prefaces the answer with a randomly chosen statement of doubtfulness. The error bar behind every bit of knowledge would have to exist in the dataset.
(And in neural-network terms, that error bar could be represented by the number of connections, by the congruency of separate lines of argument, by the vividness of memories, etc. Human reasoning doesn't seem to do anything fancier either; no need for new data structures.)
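One crude way to probe that veracity question: chat APIs expose per-token log probabilities, which can serve as a rough proxy for the model's uncertainty, so you can check whether its hedging phrases line up with them at all. A sketch, with the caveats that the model name and the plain averaging are arbitrary choices of mine, and that logprobs measure next-token uncertainty rather than factual reliability:

```python
# Sketch: compare the model's hedging language against token-level
# logprobs. Averaging is crude; logprobs reflect next-token
# uncertainty, not whether the answer is actually true.
import math

from openai import OpenAI

client = OpenAI()

def answer_with_uncertainty(question: str):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        logprobs=True,
    )
    choice = response.choices[0]
    text = choice.message.content
    tokens = choice.logprobs.content
    # Mean per-token probability as a crude confidence proxy.
    mean_p = sum(math.exp(t.logprob) for t in tokens) / len(tokens)
    hedged = any(
        phrase in text.lower()
        for phrase in ("i'm not sure", "i am not sure", "uncertain")
    )
    return text, mean_p, hedged

text, mean_p, hedged = answer_with_uncertainty("Who wrote the novel 'We'?")
print(f"hedged={hedged}, mean token probability={mean_p:.2f}")
print(text)
```

If the hedging phrases were purely decorative, you'd expect no correlation between them and the logprob proxy across many questions.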
The level of confidence with which people express themselves is a (neutral to me) style choice. I'm indifferent because when I don't know somebody, I don't know whether to take their opinions seriously regardless of the level of confidence they project. Some people who really know their shit are brash and loud, and other experts hedge and qualify everything they say. Outward humility isn't a reliable signal. Even indisputably brilliant people frequently don't know where their expertise ends. How often have we seen tech luminaries put a sophomoric understanding of politics on display on Twitter or during podcast interviews? People don't end up with correctly calibrated uncertainty unless they put a ton of effort into it. It's a skill that doesn't develop by itself.
I agree, and a lot of that is cultural as well. But there is still a variety of confidence within the statements of a single person, hopefully a lot, and I calibrate to that.
AIs are a "master of all trades", so it is very unlikely they'll ever be able to admit they don't know something. What makes them very unreliable with topics where there is little available knowledge.
> And as a consequence, they know where that area of expertise ends. And they know what half-knowing something feels like compared to really knowing something. And thus, they will preface and qualify their statements.

> LLMs don't do any of that. I don't know if they could; I do know it would be inconvenient for the sales pitch around them. But the people I call experts distinguish themselves not by how often their predictions turn out right, but by qualifying their statements with the degree of uncertainty they actually have.
And no "expert system" does that.