> Open AI literally said they have a semi-resilient method with 99.9% accuracy.
They also said many other things that never happened, and they never showed it. I bet $100 they do not have a semi-resilient method with 99.9% accuracy, especially given all the evolving issues around distinguishing human-made from computer-made content.
I'd also bet that the `semi-` prefix leaves a lot of room for interpretation, and that they are not releasing this for more reasons than "our model is too good".
I really don't see what's in it for them to brag about a non-existent feature that isn't even in their commercial interest, when its non-implementation can be turned into a stick to beat them with. So I believe they have something, yes. I don't necessarily believe the 99.9%, but with that proviso I'll take your bet.