They could make up sources, but ChatGPT is an actual app with a complicated backend, not a dumb pipe between a text editor and a GPU. Surely they could verify, server-side, every link before including it in the answer shown to the user. I'm sure Codex will implement it in no time!
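A minimal sketch of what that server-side check could look like, assuming the backend can intercept the model's draft answer before it goes out; the URL regex and the `requests`-based liveness check are illustrative choices, not anything OpenAI has actually described:

```python
import re
import requests

URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def link_is_live(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL responds with a non-error status."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        # Some servers reject HEAD; fall back to a lightweight GET.
        if resp.status_code >= 400:
            resp = requests.get(url, stream=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

def flag_dead_links(answer: str) -> list[str]:
    """Collect every cited URL in the draft answer that fails verification."""
    return [url for url in URL_RE.findall(answer) if not link_is_live(url)]
```

This only catches links that don't resolve; a fabricated citation pointing at a real but irrelevant page would pass the check.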
They surely can detect it, but what are they going to do after detecting it? Rerun the last job with a different seed and hope the model doesn't lie through its teeth again? They won't do that, because the model will gladly generate a fake source on the next retry too.
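For what it's worth, the retry-and-recheck loop being dismissed here would look roughly like this; a sketch only, where `generate` stands in for whatever the backend actually calls and `flag_dead_links` is the hypothetical checker from the sketch above. The comment's point is that nothing guarantees this loop ever converges:

```python
from typing import Callable

def answer_with_verified_links(
    generate: Callable[[str, int], str],  # hypothetical model call: (prompt, seed) -> answer
    prompt: str,
    max_retries: int = 3,
) -> str:
    """Regenerate until no cited link fails verification, or give up after max_retries."""
    for seed in range(max_retries):
        answer = generate(prompt, seed)
        if not flag_dead_links(answer):  # reuses the checker from the sketch above
            return answer
    # Every retry still produced unverifiable sources -- the failure mode predicted above.
    return "Could not produce an answer with verifiable sources."
```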