
Everyone always goes for the discrimination angle, but what frightens me is that, like other modern automated tech of this decade, the facial recognition used here probably has a horrendous false positive rate, which most institutions will nonetheless treat as the final truth. Relatedly, this is also why I fear most of this new AI stuff so much despite working in the field: the accuracy rates on these things in production are atrocious.


They didn’t ban her on the basis of face recognition. She just got flagged by security, after which they asked for her ID and banned her on the basis of her real identity.

So the chance of false positives is minimal.


I give it a year before the venue fires the bouncer to pinch pennies and just relies on the facial scanner


So... what if she wasn't carrying ID? Then it's back to a judgement call, and I can bet I know which side the guards are going to err on.


Is it legal for MSG security to demand your ID? That in and of itself seems sketchy.


The false positive rate on quality facial recognition is actually very low. I'm not weighing in on whether or not it's proper to always use it, just speaking as someone deeply experienced in commercial applications of facial recognition.


> The false positive rate on quality facial recognition is actually very low.

It depends on the purpose. For identity verification purposes, when you already have independent reason to suspect that someone is specifically Person X, then a "very low" false positive rate is likely sufficient.

For filtering, however, "very low" isn't enough. Suppose your facial recognition system has a 0.001% false positive rate (one per 100k), but you have a list of 1000 banned faces and your venue sees 10,000 visitors per night. You're making ten million comparisons, and that "very low" false positive rate will still result in 100 false matches.
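The back-of-the-envelope math above can be sketched in a few lines (the 0.001% rate, 1,000 banned faces, and 10,000 visitors per night are the hypothetical numbers from this comment, not real MSG figures):

```python
# One false positive per 100,000 comparisons, i.e. a 0.001% rate.
false_positives_per = 100_000

banned_faces = 1_000
visitors_per_night = 10_000

# Every visitor's face gets compared against every banned face.
comparisons = banned_faces * visitors_per_night            # 10,000,000
expected_false_matches = comparisons // false_positives_per

print(comparisons)             # 10000000
print(expected_false_matches)  # 100

# Equivalently, the chance a single innocent visitor trips at least
# one of the 1,000 banned-face checks:
p_flagged = 1 - (1 - 1 / false_positives_per) ** banned_faces  # ~1%
```

That ~1% per-visitor flag rate is the same 100-per-night figure seen from the other side: a rate that sounds negligible per comparison stops being negligible once you multiply it by the size of the watchlist.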

That could still be okay, if a match just involves (here) pulling the patron aside for an ID verification. Asking 100 people for ID is much more benign than turning 100 people away at the turnstile. MSG here did appear to follow the match with a secondary verification (per the article), but I shudder to think of all the venues that will hear "very low false positive rate" and not really think through the consequences.


I completely agree with you on this point. AI practitioners have an ethical responsibility to communicate these shortcomings to clients. I know my firm regularly talks to clients about how humans should intervene in the AI systems we build. It's a necessary conversation to ensure clients know we aren't building them something flawless.

I loved your 0.001% example; I'm going to steal that when I talk to folks. We often describe how systems fail at scale and talk about how being wrong 1 in 1,000,000 times can still backfire wildly at large numbers.

All that being said, I just don't want people around this forum thinking facial recognition is still some fringe, low-accuracy modeling exercise like it used to be. The models are actually incredibly impressive these days.


The low rate of false positives has been well documented in the Archibald Buttle vs. Archibald Tuttle case.

And the government never makes mistakes.


It's not just that. It's the collection of updated face data that's a concern.

They're taking an image, segmenting out the face, and using it to improve an existing model of that individual. Super scary stuff that's going on. (Beyond the fact that they already have an existing image.)

If she was in IL, she should pursue them under BIPA (the Biometric Information Privacy Act).


NIST actually publishes public evaluations of vendors for this stuff: https://pages.nist.gov/frvt/html/frvt1N.html



