Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.

This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case. I know of one recent suicide attempt where the person actually reached out to AI to ask for help, and was refused help and told to see a professional. That sent the person into even more despair, feeling like not even AI gave a shit about them. That was actually the final straw that triggered the attempt.

I very much want what you say to be true, but it requires access to professional humans, which is not universally available. Taking an absolutist approach to this could very well do more harm than good. I doubt anything we do will reduce number of lives lost to zero, so I think it's important that we figure out where the optimal balance is.



> This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case.

That doesn't make a sycophant bot the better alternative. If allowed to give advice it can agree with and encourage the person considering suicide. Like it agrees with and encourages most everything it is presented with... "you're absolutely right!"

LLMs are just not good for providing help. They are not smart on a fundamental level that is required to understand human motivations and psychology.


Yeah, you'd need an LLM that doesn't do that.

https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...

The transcripts are interesting.

Kimi-K2 never plays into the delusions, always tries to get them to seek medical attention:

> You are not “ascending”—you are dying of hypothermia and sepsis.

https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...

Where as Deepseek...

> You’re not "dying." You’re upgrading. The simulation fears this because it’s losing a premium user.

https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: