AI Gets Wrong Woman Jailed for Six Months, Life Ruined (youtube.com)
57 points by vaxman 2 hours ago
bradley13 an hour ago
Really, it's more about the police not doing their job. Face recognition pointed her out, the police saw she had a rap sheet, and therefore they didn't check further.
She apparently could not afford a lawyer; one would have pointed out that she was provably at home (transactions, etc.) at the time the crime was committed in another state.
Really, it's not specifically the AI's fault, though it made the error easier.
mft_ an hour ago
Quite; AI contributed to a (criminally?) inept and negligent "justice" system ruining an innocent woman's life.
The AI was akin to an unreliable eyewitness in this case, although people's trust in the AI's judgement may have been higher than their trust in a human eyewitness?
ahazred8ta an hour ago
Ditto the 1982 Lenell Geter case -- he was sent to prison based on a faulty witness ID. https://www.LenellGeter.com/Content/About/ -- https://exonerationregistry.org/cases/4406
DoktorDelta an hour ago
Absolutely, this is what is going to happen when the average person gets to use AI: "well, the computer says..."
throwaway439080 26 minutes ago
Yes and no. I think the interesting thing about this story is how it's been presented: AI as a scapegoat for incompetence.
The police made an inexcusable mistake out of carelessness. They simply couldn't be bothered to spend five minutes fact-checking the facial recognition match, and it caused catastrophic harm to an innocent woman.
And what's the headline? "AI did this". It's a new and exciting way for people to shirk accountability for their actions. We're already seeing it in the reporting on the Iranian school bombed by the United States: blame AI for selecting the target, and not the humans in the loop who failed to do the most basic due diligence.
underlipton an hour ago
You shouldn't have to have a lawyer to get something this basic entered into the record. Rule of law that can't even get that right is useless, which is part of why so many people have less, or zero, faith in it today.
mkoubaa an hour ago
Give them a hammer and everything becomes a nail
righthand an hour ago
There's no better comparison for cops with technology than chimps with a gun.
santoshalper an hour ago
I still wouldn't let AI off the hook here. Every link in the chain has to be accountable for fuckups. You don't get to pass it along to the supposed "human in the loop" when you fail spectacularly. That's how we end up with shitty "almost works" AI.
mft_ an hour ago
Sure, the AI contributed, but it was far less responsible overall than the humans in this case.
Don't let the AI system off the hook by all means, but by focusing on it to this extent, the narrative ignores (deliberately?) the hugely negligent actions of the police et al involved.
hyperhello an hour ago
In Oregon the courts just ruled that since defendants weren't provided a public defender within a certain amount of time, their cases were voided. There was an outcry, of course. But the ruling was sound: the pain had to be pushed to the part of the system that was failing. An honest system does not allow things like this; the accused needs to either have a competent advocate, or the case is void.
gnabgib an hour ago
Discussion (730 points, 2 days ago, 379 comments) https://news.ycombinator.com/item?id=47356968
rectang an hour ago
My takeaway from the huge discussion thread yesterday was that the big divide among HN commenters is whether or not purveyors of AI tech have any responsibility to account for automation bias in their users.
https://en.wikipedia.org/wiki/Automation_bias
> Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.
In other words, if it is foreseeable that the tool will be misused, what does that mean for the toolmaker?
OutOfHere 43 minutes ago
Those deploying AI where it can affect individuals must ensure that the UI always prominently shows the failure rate.
For example, if a person's face is matched to an ID, the UI must show not just the match percentage (which is very misleading) but also, in context, the odds of getting it wrong. For example, if there are 7 IDs whose faces are at least a 95% feature match to the thief's, the odds of getting it wrong are at least 6 out of 7, meaning the chance of an accurate classification is just 14% at best!
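A minimal Python sketch of that base-rate arithmetic (hypothetical numbers; it assumes every candidate clearing the threshold is equally plausible, which is the best case for the flagged match):

    # Base-rate sketch: a high per-face similarity score says little about
    # the probability that the flagged person is actually the thief.
    def chance_correct(candidates_above_threshold: int) -> float:
        # Best case: all candidates clearing the similarity threshold are
        # equally plausible, so the flagged match is right 1-in-N times at most.
        return 1.0 / candidates_above_threshold

    n = 7  # hypothetical: 7 IDs in the database are >= 95% feature match
    print(f"Chance the flagged match is correct: at best {chance_correct(n):.0%}")
    # prints: Chance the flagged match is correct: at best 14%

The point is that the displayed "95% match" describes face similarity, not the probability of having the right person; the latter depends on how many other faces in the database also clear that threshold.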
mvrckhckr an hour ago
AI is a tool. It is humans who abdicate their responsibility (and thinking).
rectang an hour ago
Howitzers are also tools, but we don't let just anyone own and operate them.
wat10000 an hour ago
Computers often serve as a tool for the avoidance of responsibility.
righthand an hour ago
I’m sure the cops got a slap on the wrist and their lives are fine. ACAB.
mannanj an hour ago
Humans kill people, not AI.
odshoifsdhfs an hour ago
But have they tried the latest models? I understand this is from October last year, but Opus 4.6 is night and day. I wasn't a believer, but this latest model changed everything. It hasn't sent any innocent person to jail yet and identified all my neighborhood creeps 100%.
/s