
Trust but verify. It will come up with ideas and claims; google all of them for authoritative sources and for people who have actually been through this. Ask it for anything contradictory you can find, and ask about any edge cases you can think of. Change the wording and ask again to see whether it gives you the same advice.
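
A minimal sketch of that "reword and re-ask" consistency check, assuming the OpenAI Python SDK (v1+) with an API key in the environment; the model name, the ask() helper, and the sample legal question are all placeholders for illustration, not a recommendation:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        """Send a single prompt and return the model's text reply."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # lower variance, so disagreement is more meaningful
        )
        return resp.choices[0].message.content.strip()

    # Ask the same question two ways. A human still has to judge whether the
    # answers actually agree; string comparison is only a crude first flag.
    q1 = "Does the statute of limitations pause while the defendant is abroad?"
    q2 = "If the defendant leaves the country, is the limitations clock tolled?"

    a1, a2 = ask(q1), ask(q2)
    print("A:", a1)
    print("B:", a2)
    print("identical" if a1 == a2 else "answers differ -- review both manually")

Even at temperature 0 the two phrasings can legitimately produce different wording, so treat a mismatch as a cue for manual verification, not as proof of a hallucination.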

IMHO hallucinations are not that bad when you have a human in the loop.



I think the term is "confabulations" now, which makes a bit more sense to me. But regardless of what we call them, it starts with the human knowing how to navigate the process: having at least a basic understanding of the system in which they're using the GenAI (in this case, law) and the skills to cross-reference and verify without using the GenAI itself. If we use the GenAI both to get the answer and to verify it, we're creating a feedback loop that probably won't always stand up to scrutiny, however good GenAI has gotten at these sorts of things.

We're not there yet, but we will be, probably within our lifetime.


> google all that for authoritative sources

With Google progressively replacing actual websites with AI-generated results, you might come full circle to having Gemini validate the same hallucinations the Gemini app produced in the first place.



