Oh no, the man used the hallucination engine, which told the man, in a confident tone, a load of old twaddle.

The hallucination engine doesn't know anything about what it told the man, because it neither knows nor thinks things. It's a data model and an algorithm.

The humans touting it and bigging it up, so they'll get money, are the problem.



Humans make mistakes too. Case in point: the hallucination engine didn't tell the person to ingest bromide. It only mentioned that bromide has chemical similarities to salt. The human mistakenly adopted a bit of information that furthered his narrative. The humans touting it and bigging it up are still the problem.


Could you provide a source for your statements? The article says that they don't have access to the chat logs, and the quotes from the patient don't rule out that ChatGPT told him to ingest bromide.


We don't have the log from this case, so we don't know what chatgippity said, whether it was "chemical similarities" or "you should consume bromide... now!"



