« Currently, the only known law enforcement client using Rekognition is the Washington County Sheriff’s Office in Oregon. (Other clients may exist, but Amazon has refused to disclose who they might be.) Worryingly, when asked by Gizmodo if they adhere to Amazon’s guidelines where this strict confidence threshold is concerned, the WCSO Public Information Officer (PIO) replied, “We do not set nor do we utilize a confidence threshold.” »
That's worrisome indeed. Understanding confidence thresholds is essential to using software like this. Thinking they don't "utilize" one means they don't understand the first thing about it.
> Understanding confidence thresholds is essential to using software like this. Thinking they don't "utilize" one means they don't understand the first thing about it.
Thresholding is not the only way to make use of a classifier that outputs a confidence score. In particular, the slide shown in the article indicates that they use the confidence score to sort search results in descending order. That means there is no fixed threshold, and the PIO's statement is not worrisome at all.
The system described is much less dangerous (in terms of overconfident decisions) than one that only returns results clearing a threshold, because the user also occasionally sees low-confidence results and learns that they can't simply defer decisions to the machine.
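To make the distinction concrete, here's a minimal sketch (the `Match` type and the scores are invented for illustration) of the two ways to consume a classifier's confidence scores: a hard cutoff hides everything below the line, while a ranked list keeps the weak matches visible.

```python
# Sketch only: Match and the example scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Match:
    person_id: str
    score: float  # classifier confidence in [0, 1]

def threshold_filter(matches: list[Match], cutoff: float) -> list[Match]:
    # Hard cutoff: anything below the line is never shown, so the user
    # never learns how weak the near-misses were.
    return [m for m in matches if m.score >= cutoff]

def ranked_list(matches: list[Match]) -> list[Match]:
    # No cutoff: everything is shown, best first, with its score attached.
    return sorted(matches, key=lambda m: m.score, reverse=True)

results = [Match("a", 0.93), Match("b", 0.41), Match("c", 0.12)]
print(threshold_filter(results, 0.9))  # only Match("a", 0.93)
print(ranked_list(results))            # all three, weakest last
```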
"We were following the AI leads" is the perfect excuse for parallel construction[1] (evidence laundering).
If Rekognition supports limiting the search to the people in a specific town (or other small group), it might be possible to force a given person to appear somewhere on the list of matches. That would give bad actors the perfect tool for manufacturing "probable cause".
I've seen how capricious some can be with their work tools. I dated a girl whose dad ran a background check on everyone she dated, without their knowledge or consent. I was 19 at the time and it seemed wrong, but reporting one of the higher-up police officers in my town, who now knew all my details, seemed like a freedom-limiting move.
You're assuming that there is a wide distribution of confidence levels in every search. It's very possible with grainy footage that no result is above a reasonable threshold, but the results will still show an ordered list.
Without _some_ threshold, or training in what these kinds of scores mean, it's easy for a user to think "well, they were the best match" and take a negative action like bringing them in for questioning or introducing the match as evidence, even if it's only 10% +/- 5%.
Seeing a single result just above the threshold leads a person to believe it's a match. Seeing the 200 people within a few points of the threshold might lead someone to think it was actually completely indeterminate, with a large number of potential suspects scored not far from each other.
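As a toy illustration (all numbers invented), the same top score reads very differently once the rest of the distribution is visible:

```python
# Toy numbers, invented for illustration.
def summarize(scores: list[float]) -> str:
    return (f"{len(scores)} result(s), top score {max(scores):.2f}, "
            f"spread {max(scores) - min(scores):.2f}")

# A lone result just above a 0.80 cutoff reads like "a match":
print(summarize([0.81]))
# 200 results packed within ~2 points of each other read as indeterminate:
print(summarize([0.81 - 0.0001 * i for i in range(200)]))
```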
>Thinking they don't "utilize" one means they don't understand the first thing about it.
They probably have at least one person (or several) who knows what they're doing, and who, after discussing the situation with the department, just told them to set it to whatever the minimum is. The department probably expressed a desire for the maximum possible number of matches, because they want to manually pick through the list rather than trust the (new and untested) machine to spit out a short list.
As other commenters have mentioned, setting the software wide open has the side benefit of exposing the users to a lot of obvious false positives, which (hopefully) prevents them from forming a habit of assuming the machine is usually or always right. Unless there's a drastic increase in transparency, I'm still not a fan of the police deploying facial recognition tech.
The language used by the PIO is worrying, but further down in the article he explains that they don't use the software for "matching". And that makes sense... Instead of filtering out all but the best matches by setting the confidence threshold really high, you set it much lower and sift the results manually.
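For what it's worth, that is how the actual API is shaped. Here's a minimal sketch using boto3's real `search_faces_by_image` call; the collection ID and image file are placeholders, not anything from the article. Setting `FaceMatchThreshold` to 0 returns every candidate, already sorted by similarity, for a human to sift.

```python
# Sketch using boto3's search_faces_by_image; the collection ID and
# image file below are placeholders, not anything from the article.
import boto3

client = boto3.client("rekognition")

with open("probe.jpg", "rb") as f:  # placeholder probe image
    probe = f.read()

resp = client.search_faces_by_image(
    CollectionId="example-face-collection",  # placeholder collection
    Image={"Bytes": probe},
    FaceMatchThreshold=0,  # wide open: return every candidate face
    MaxFaces=100,
)

# FaceMatches comes back ordered by Similarity, highest first; the
# investigator sifts the list by eye instead of trusting a hard cutoff.
for m in resp["FaceMatches"]:
    print(m["Face"]["FaceId"], f'{m["Similarity"]:.1f}%')
```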
This is essentially just a tool to quickly filter a very large data set down to possible matches. Interestingly enough, the police and many of the three-letter agencies have already had software that does this for over 25 years. Rekognition will likely perform better, but it still seems weird to me that Amazon would make this move at all. What is the impetus? They obviously aren't going to make money off of this, compared to their other income sources.
I'm disappointed that a tool with a record of testing poorly on women and non-white people is even allowed in a "production" environment. We need a higher bar for what counts as commercial-ready for AI and facial recognition in things that impact people, particularly the justice system. We need to be much more cautious.
> The technology provides leads for investigators, but ultimately identifying suspects is a human-based decision-making process, not a computer-based one.
It does provide leads to police officers, which is a great tool. Many criminals are recidivists and already exist in the database.
Many police officers know their 'clientele' very well by face and name, so they can run checks on the computer anytime they want, as long as it is justified. I don't see anything wrong with using an automated way of doing it by picture.
> It does provide leads to police officers, which is a great tool.
From now on, whenever a computer crime is suspected I'll make sure to forward your name as a lead on the off-chance that you had anything to do with the crime.
Policing should start from available evidence leading to suspects, not start from suspects and then see which of them match the evidence.
That is a nice sentiment, but I can tell you based on personal experience that it is a pipe dream. Leads are usually the bottleneck or the foundation of an investigation. When they start drying up, everything slows down, and everyone starts to worry that they will simply run out of places to investigate. Rekognition is probably marginally more effective than the face-matching software that our LE already uses to find possible leads.
>From now on, whenever a computer crime is suspected I'll make sure to forward your name as a lead on the off-chance that you had anything to do with the crime
How exactly do you believe investigations work? You don't think that rounding up the usual suspects and asking questions is a legitimate investigative tool? There isn't always a direct link from the available evidence to a person. Should they just give up and close the case?
Let me put it another way: someone is stealing packages from your doorstep. You happen to be the neighbor of a person who you know has been arrested for burglary. That fact doesn't even enter your thought process?
Past behavior is a good indicator of future behavior and the real world isn't CSI. Detectives need to follow any leads they can get their hands on.
>From now on, whenever a computer crime is suspected I'll make sure to forward your name as a lead on the off-chance that you had anything to do with the crime.
Policing starts from any leads available, not evidence. Evidence is used in a court of law.