> when you ask an LLM to point to 'sources' for the information it outputs,
Services that list sources, like Kagi News, Perplexity, and others, don't do that. They start with known links and run LLMs on that content; they don't ask LLMs to come up with links based on the question.
That is what I mean, yeah. I'm not saying it's fabricating sources from training data; that would obviously be impossible for news articles. I'm saying that if you give it a list of articles A, B, and C, with their content included in the context, and ask "what is the foo of bar?", and it responds "the foo of bar is baz, source: article B, paragraph 2", that does not tell you whether the output is actually correct, or even contained in the cited source at all, unless you verify it manually.
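To make that setup concrete, here is a minimal sketch: the article texts go into the prompt, the model is asked to answer and cite, and the citation is checked literally against the cited article. The article contents, the `call_llm` stub, and the response format are all hypothetical placeholders, not any particular service's API; and even when the check passes, whether the quoted passage actually supports the answer still needs a human read, which is the point above.

```python
# Articles supplied in the context (placeholder texts).
articles = {
    "A": "...full text of article A...",
    "B": "...full text of article B...",
    "C": "...full text of article C...",
}

def build_prompt(question: str) -> str:
    """Put the article contents in the context and ask for a cited answer."""
    context = "\n\n".join(f"Article {k}:\n{v}" for k, v in articles.items())
    return (
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the articles above. "
        "Cite the article and quote the exact sentence you relied on."
    )

def call_llm(prompt: str) -> dict:
    """Hypothetical stand-in for the actual model call.
    Expected shape: {"answer": "...", "article": "B", "quote": "..."}."""
    raise NotImplementedError

def citation_is_literal(response: dict) -> bool:
    """Check only that the quoted passage appears verbatim in the cited article.
    This catches a fabricated citation, but not an answer the quote doesn't support."""
    source = articles.get(response["article"], "")
    return response["quote"] in source

response = call_llm(build_prompt("What is the foo of bar?"))
print(response["answer"], "| citation found in source:", citation_is_literal(response))
```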