But you can expect to learn in both cases, just like you often learn from your own failures. Learning doesn't require that you're given the right answer, just that it's possible for you to obtain the right answer.
We've been down this road before. Wikipedia was going to be the knowledge apocalypse: how were you supposed to trust what you read, when anyone could edit it, if you didn't already know the truth?
And we learned the limits. Broadly verifiable, non-controversial articles are reasonably reliable (or at least no worse than classic encyclopedias). Highly technical or controversial articles may have some useful information, but you should definitely follow up with the source material. And you probably shouldn't substitute Wikipedia for seeing a doctor either.
We’ll learn the same boundaries with AI. It will be fine to use for learning in some contexts and awful for learning in others. Maybe we should spend some energy on teaching people how to identify those contexts instead of trying to put the genie back in the bottle.
If you can't discern the difference between a LAMP stack returning UGC and an RNG-seeded matmul across the same UGC fine-tuned by sycophants, then I think we're just going to end up disagreeing.
> You can't simultaneously expect people to learn from AI when it's right, and magically recognize when it's wrong.
You are misconstruing the point I was making.
My point is that DK is about developing competence and the relationship between competence and confidence (a relationship which, I'm also claiming, evolves over time). The DK effect is therefore not as relevant to LLMs giving wrong answers and people believing them as the author claims.
As someone else pointed out in another comment, the effect of people believing falsehoods from LLMs has more to do with Gell-Mann amnesia.
Tangentially, it actually is possible to learn from AI when it's right and recognize when it's wrong, but it's not magic: it's just being patient, checking sources, and thinking critically. That's how all of humanity has learned pretty much everything, because most people have been wrong about most things for most of time, and yet we still learn from each other.