I've never found the Socratic method to work well on any model I've tried it with. They always seem to get stuck justifying their previous answers.
We expect them to answer the follow-up question and then re-reason through the original question with the new information, because that's what a human would do. Maybe next time I'll be explicit about that expectation when I try the Socratic method, e.g., "Now that you've accepted this point, reconsider your original answer from scratch rather than defending it."