the problem is your parenthetical - it's not possible, and trying to do it anyway doesn't make it possible. what's worse than a watermark? one that doesn't actually work.
OpenAI literally said they have a semi-resilient method with 99.9% accuracy. It will become fully resilient for practical purposes if all LLMs implement something similar.
> OpenAI literally said they have a semi-resilient method with 99.9% accuracy.
They've also said many other things that never happened, and they never demonstrated this one. I bet $100 they do not have a semi-resilient method with 99.9% accuracy, especially given all the evolving ambiguity around what counts as human- vs. computer-made content.
I'd also bet the `semi-` at the beginning leaves a lot of room for interpretation, and that they're not releasing this for more reasons than just "our model is too good".
I really don't see what's in it for them to brag about a non-existent feature that isn't in their commercial interest, when its non-implementation can be turned into a stick to beat them with. So I believe they have something, yes. I don't necessarily believe the 99.9%, but with that proviso I'll take your bet.
The Verge doesn't report this, but other outlets have reported that the watermark is easily defeated by tricks like a Google Translate roundtrip, or asking the model to add emoji and then deleting them.
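For context on why paraphrasing defeats this class of scheme: publicly known token-level watermarks (e.g. the "green list" approach of Kirchenbauer et al. - not OpenAI's undisclosed method) bias sampling toward a context-keyed subset of the vocabulary, and the detector simply counts how often tokens land in that subset. A translation roundtrip rewrites the token sequence, so the count falls back to chance. A toy sketch of the idea, with all function names hypothetical:

```python
import hashlib

GREEN_FRACTION = 0.5  # half the vocabulary is "green" for any given context

def is_green(prev_word: str, word: str) -> bool:
    """Pseudo-randomly assign `word` to the green list, keyed on the
    previous word (a stand-in for the model's context-seeded partition)."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_pick(prev_word: str, candidates: list[str]) -> str:
    """Prefer a green candidate, as a watermarking sampler would;
    fall back to the first candidate if none happens to be green."""
    for w in candidates:
        if is_green(prev_word, w):
            return w
    return candidates[0]

def green_score(text: str) -> float:
    """Fraction of adjacent word pairs that land in the green list.
    Watermarked text scores near 1.0; unrelated or heavily
    paraphrased text hovers near GREEN_FRACTION."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

Generating text with `watermark_pick` yields a `green_score` near 1.0, while shuffling or rewording the same words drops it toward 0.5 - which is exactly why a roundtrip through another language erases the signal.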
> the problem is your parenthetical - it's not possible, and trying to do it anyway doesn't make it possible. what's worse than a watermark? one that doesn't actually work.
If it's not possible to watermark, then just ban LLMs.
Tech people have this weird self-serving assumption that the tech must be developed and must be used, and that if it causes harms that can't be mitigated, then we must accept the harm and live with it. It's really an anti-humanist, tech-first POV.