Perhaps the antidote involves a drop of the poison.
Let an LLM answer first, then let humans collaborate to improve the answer.
Bonus: if you can safeguard the improved answers, they can be used to train a proprietary model.
I'm more amused that ExpertsExchange.com figured out the core of the issue 30 years ago, right down to the site's name.