
Ok, but I think it would be more productive to educate people that LLMs have no concept of truth rather than insist they use the term "hallucinate" in an unintuitive way.


I don't know about OP, but I'm suggesting that the term 'hallucinate' be abolished entirely as it applies to LLMs, not redefined. It draws an arbitrary line through the middle of a set of problems that all amount to "how do we make sure that the output of an LLM is consistently acceptable", and that will all be solved using the same techniques, if at all.


LLMs do now have a concept of truth, since much of the RLHF is focused on making them more accurate and truthful.

I think the problem is that humanity has a poor concept of truth. We think of most things as true or not true, when much of our reality is uncertain due to fundamental limitations, or because we often just don't know yet. During covid, for example, humanity collectively hallucinated the importance of disinfecting groceries for a while.


I think making decisions based on different risk models is not a hallucination.

To take it to the extreme: if during covid someone had lived completely off grid (no contact with anyone), they would have greatly reduced their infection risk, but I would have found that risk model unreasonable.

The problem with LLMs is that they don't "model" what they are not capable of (the training set is what they know), so it is harder for them to say "I don't know". In a way they are like humans - I have seen plenty of humans prefer to say something rather than admit they just don't know. It is an interesting (philosophical) discussion how you can get (as a human or an LLM) to the level of introspection required to determine whether you know or don't know something.


Exactly. We think of reasoning as knowing the answer, but the real key to the Enlightenment and the Age of Reason was admitting that we don't know instead of making things up. All those myths are just human hallucinations.

Humans taught themselves not to hallucinate by changing their reward function. Experimentation and observation were valued over the experts of the time and pure philosophy, even over human-generated ideas.

I don't see any reason that wouldn't also work with LLMs. We rewarded them for next-token prediction without regard for truth, but now many variants are being trained or fine-tuned with rewards focused on truth and correct answers - Perplexity and xAI, for example.
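
To make the contrast concrete, here's a toy sketch (plain Python, every name and number invented for illustration - this is not any vendor's actual training setup) of the difference between a likelihood-style objective and a correctness-based reward:

    # Toy illustration: contrast a pure likelihood objective with a
    # correctness-based reward. All values here are made up.
    import math

    # A miniature "model": probabilities it assigns to candidate answers.
    candidate_probs = {
        "Paris is the capital of France": 0.6,
        "Lyon is the capital of France": 0.3,
        "I don't know": 0.1,
    }
    ground_truth = "Paris is the capital of France"

    # Objective 1: next-token-style log-likelihood. It only asks "how probable
    # is this text under the model?" - there is no term for whether it's true.
    def log_likelihood(answer):
        return math.log(candidate_probs[answer])

    # Objective 2: a truth-focused reward. Correct answers score 1, admitting
    # ignorance scores a little, confident falsehoods score negative.
    def truth_reward(answer):
        if answer == ground_truth:
            return 1.0
        if answer == "I don't know":
            return 0.1
        return -1.0

    for answer in candidate_probs:
        print(f"{answer!r}: logp={log_likelihood(answer):.2f}, reward={truth_reward(answer):+.1f}")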


Or to put it more concisely, LLMs behave like a superintelligent midwit.


> LLMs do now have a concept of truth, since much of the RLHF is focused on making them more accurate and truthful.

Is it? I thought RLHF was mostly focused on making them (1) generate text that looks like a conversation/chat/assistant, (2) ensure alignment, i.e. censor them, and (3) profusely apologize, to put up a facade that makes them look like they care at all.

I don't think one can RLHF the truth because there's no concept of truth/falsehood anywhere in the process.
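
For what it's worth, the usual reward-model objective is described as a pairwise preference loss. Here's a minimal sketch (plain Python, invented scores) of why the training signal is "which response did the rater prefer", not "which response is true":

    # Toy sketch of a Bradley-Terry-style pairwise preference loss, the kind
    # typically used to fit an RLHF reward model. Nothing in this objective
    # checks factual accuracy; the only signal is the rater's preference.
    import math

    def preference_loss(score_preferred, score_rejected):
        # -log sigmoid(r_preferred - r_rejected): minimized by pushing the
        # preferred response's score above the rejected one's.
        return -math.log(1 / (1 + math.exp(-(score_preferred - score_rejected))))

    # A rater preferred a confident-sounding answer over a hedged correct one;
    # the loss happily optimizes for that preference. "True" never appears.
    print(preference_loss(score_preferred=2.0, score_rejected=0.5))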


> humanity collectively hallucinated the importance of disinfecting groceries for awhile

I reject this history.

I homeschooled my kids during covid due to uncertainty, and even I didn't reach that level, nor did anyone I knew in person.

A very tiny number of people who were egged on by some YouTubers did this, including one person I knew remotely. Unsurprisingly, that person was based in SV.


It wasn't just some extremist on YouTube; disinfecting your groceries was the official recommendation of many countries worldwide, including most of Europe. I couldn't say how many people actually followed the recommendation, but I would bet it's way more than a tiny number.


This is the first I’ve even heard of people disinfecting their groceries because of Covid. Honestly that sounds rather crazy to me.


There was a period near the start of the pandemic, especially while the medical establishment was trying to discourage ordinary people from wearing masks in order to stockpile them for high-priority workers, when a lot of emphasis was put on surface contact.

If it's extremely important to wear gloves and keep sanitizing your hands after touching every part of the supermarket, it stands to reason that you'd want to sanitize all of the outside packaging that others touched with their diseased hands as soon as you brought it into your house. Otherwise, you'd be expected to sanitize your hands every time you touched those items again, even at home, right?

Of course, surface contact is actually a very minor avenue of infection, pretty much limited to cases where someone has just sneezed or coughed on a surface you touch and you then put your hand to your nose, or maybe your eyes or mouth, soon after. So sanitizing groceries is essentially pointless, since it only slightly reduces an already very small risk.


I did not do this personally, but I know a number of people (blue-state liberal city folk) who did. I don't think it was that unusual.


If people already understand what "hallucination" means, then I think it's perfectly intuitive and educational to say that, actually, the LLM is always doing that; it's just that some of those hallucinations happen to describe something real.

We need to dispel the notion that the LLM "knows" the truth, or is "smart". It's just a fancy stochastic parrot. Whether its responses reflect a truthful reality or a fantasy it made up is just luck, weighted by (but not constrained to) its training data. Emphasizing that everything is a hallucination does that. I purposefully want to reframe how the word is used and how we think about LLMs.
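
As a toy illustration of that "weighted by, but not constrained to" point (invented vocabulary and weights, nothing like a real tokenizer or model):

    # Toy sketch of generation: the model just samples the next token from a
    # probability distribution shaped by its training data. Nothing in this
    # loop checks the output against reality; "true" continuations are only
    # more likely, never required.
    import random

    next_token_probs = {
        "Paris": 0.70,   # frequent in training data, happens to be true
        "Lyon": 0.25,    # plausible-sounding, false
        "Gotham": 0.05,  # pure fantasy
    }

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    for _ in range(5):
        print("The capital of France is", random.choices(tokens, weights=weights)[0])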



