Hacker News

The fundamental issue is that these verification models are trained on datasets containing fictional characters and celebrities, so they're essentially being asked to reject inputs that were part of their own training distribution.


Yet TFA shows the character used to beat the verification is a game character based on the likeness of an actor famous for playing that very character. So you're saying what: that the system isn't aware it was trained on this person, that it fails to recognize a likeness that's known to its training data, or that the system just doesn't work as advertised?

