Police used AI facial recognition to wrongly arrest TN woman for crimes in ND (cnn.com)
227 points by ourmandave 5 hours ago
firefoxd 4 hours ago
Without even looking at the AI part, I have a single question: Did anybody investigate? That's it.
Whether it's AI that flagged her, a witness who saw her, or her IP address appearing in the logs: did anybody bother to ask her "Where were you the morning of July 10th between 3 and 4pm?" But that's not what happened; they saw the data and said "we got her".
But this is the worst part of the story:
> And after her ordeal, she never plans to return to the state: “I’m just glad it’s over,” she told WDAY. “I’ll never go back to North Dakota.”
That's the lesson? Never go back to North Dakota. No, challenge the entire system. A few years back it was a kid accused of shoplifting [0]. Then a man dragged while his family was crying [1]. Unless we fight back, we are all guilty until cleared.
[0]: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/
rcvassallo83 6 minutes ago
The thing about the legal system is there's no incentive to investigate to find the truth.
The incentive is to prosecute and prove the charges.
Speaking from the experience of being falsely accused after calling 911 to stop a drunk woman from driving.
The narrative they "investigated" was so obviously false that bodycam evidence directly contradicted multiple key facts. Officials are only interested in proving the case. Thankfully the jury came to the right verdict.
latexr 38 minutes ago
Yes, of course someone should have investigated, but the larger point here is that people don’t because they are being sold a false narrative that AI is infallible and can do anything.
We could sit here all day arguing “you should always validate the results”, but even on HN there are people loudly advocating that you don’t need to.
dpkirchner 27 minutes ago
We can barely convince the powers that be that eyewitness testimony is unreliable, after all.
bl4ckneon 3 hours ago
I think you missed many important points.
"The trauma, loss of liberty, and reputational damage cannot be easily fixed,” Lipps' lawyers told CNN in an email.
That sounds a LOT like a statement you make before suing for damages, not to mention they literally say "Her lawyers are exploring civil rights claims but have yet to file a lawsuit, they said."
This lady probably just wants to go back to normal life and get some money for the hell they put her through. She has never been on an airplane before; I doubt she is going to take on the entire system like you suggest. It's easier said than done to "challenge the entire system". What does that even mean, exactly?
3eb7988a1663 3 hours ago
It was worse than that. From the reporting in an earlier story[0]:
...Unable to pay her bills from jail, she lost her home, her car and even her dog.
There is not a jury in the country that will side against this woman. I am not even sure who will make the best pop culture mashup - John Wick or a country songwriter? (Also, what happened to journalism - no Oxford comma?)
tmpz22 14 minutes ago
IANAL but AFAIK custodial interrogation triggers Miranda, lawyers, and those awful awful civil liberties we’re trying to get rid of.
Better just to apply Musk or Altman software to the problem and avoid it entirely.
garethsprice 2 hours ago
The vendor they used, Clearview AI, does not allow you to request data deletion unless you live in one of the half-dozen states that legally mandate it.
https://www.clearview.ai/privacy-and-requests
I have suddenly become very interested in New York's S1422 Biometric Privacy Act.
dawnerd an hour ago
Sadly this is really the only tool we have right now. You just have to keep spamming them with delete requests, because even once they delete your data, it'll end up back in their database eventually.
guelo 37 minutes ago
To get your data deleted in the states that require it you have to submit a photo of yourself which I really don't want to do for a sketchy company with ties to evil billionaire Peter Thiel.
tlogan 4 hours ago
This is a weak or misleading story about AI.
First, the detective used the FaceSketchID system, which has been around since 2014. It is not new or uniquely tied to modern AI.
Second, the system only suggests possible matches. It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to court to issue the warrant.
The real question is why she was held in jail for four months. That is the part I do not understand. My understanding is that there is a 30-day limit (the requesting state must pick up the defendant within 30 days). Regarding the individual involved, Angela Lipps: she has reportedly been arrested before, so it is possible she was on parole. Maybe they were holding her because of that?
Can someone clarify how that process works?
suzzer99 4 hours ago
In the US there are no consequences for people in power failing to follow procedures, laws or regulations - except for being told to stop doing whatever illegal thing they're doing, and possibly getting sued way down the line, which gets paid by taxpayers.
tlogan 3 hours ago
From reading more into the case, it seems the issue may be related to how her lawyer handled the case.
They probably raised an "identity challenge," arguing that she is not the right person. But from Tennessee's perspective, she was the correct person to arrest, so there was no "mistaken identity" in their system. In other words, North Dakota wanted person X, and here is person X.
Once a judge in North Dakota reviewed the full evidence (and found that the person they issued the arrest warrant for was not the one they wanted), the case was dismissed.
strictnein 3 hours ago
I wish I could find the link, but I believe she was in jail on parole violation, unrelated to anything that the "AI" flagged her on.
Supermancho 2 hours ago
Her picture was used as part of a fake id card, in the commission of a crime. The fuzzy camera footage looked like her (from stills I've seen) and her picture was on the fake ID. Those 2 circumstantial items were, apparently, enough to have a warrant issued.
They picked her up in TN and held her for 4 months, even after:
The ND police knew the ID was fake and the person using it was not her. The ND police knew she had been in TN before, during, and after the crime.
She is still technically a suspect, even after all of this has come out.
zoklet-enjoyer 2 hours ago
She was not.
Source: I live in Fargo and have been following this story closely. Everyone here is pissed
frankharv 2 hours ago
That is the first I have heard of that. A small unexplained blurb in this article: already in jail on a parole violation.
Maybe she objected to the extradition order without good counsel.
"I ain't never been to N. Dakota." She found out the hard way how the law works.
What about the banks being hit? Surely they have good cameras. This was bad mojo. I would think a Wells Fargo/BoA has a unit for this stuff.
Financial crimes handled like this... The banks will be sued too, I suspect. Deep pockets settle out.
georgemcbay an hour ago
> It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to court to issue the warrant.
This is how it should work, but I still think it is important to discuss these failures in the context of AI risks.
One of the largest real-world dangers of AI (as we define that now) is that it is often confidently wrong and this is a terrible situation when it comes to human factors.
A lot of people are wired in such a way that perceived confidence hacks right through their amygdala and they immediately default to trust, no matter how unwarranted.
mememememememo 9 minutes ago
Wow, I thought the bar for probable cause for an arrest warrant would be much higher. Especially to drag someone in from another state.
mitchbob 5 hours ago
Earlier discussion (405 comments):
oopsiremembered 4 hours ago
Money quote from someone quoted in the article:
"[I]t’s not just a technology problem, it’s a technology and people problem."
I can't. I just can't.
bryanrasmussen 4 hours ago
I've been hearing "it's not just... it's a" touted as an AI tell recently. Personally I think it's an AI tell because it's a human thinking-shortcut, and AI copies it. But it would be funny if AI wrote the article and then hallucinated this specific money quote.
oopsiremembered 3 hours ago
I doubt this happened here, but FWIW, AI does have a habit of "cleaning up" (read: hallucinating) interview transcript quotes if you ask it to go through a transcript and pull quotes. You have to prompt AI very specifically to get it to not "clean up" the quotes when you ask it to do that task.
indigodaddy an hour ago
A lot of dumb shit happens in this arena, where if you had just one smart cop, it could have been prevented. Here’s one from 2023:
jqpabc123 5 hours ago
AI is a liability issue waiting to happen. And this is just another example.
gtowey 4 hours ago
It's the opposite, it's absolution from liability. "The AI did it" is the ultimate excuse to avoid accepting responsibility and consequences.
jqpabc123 4 hours ago
Courts are already refusing to accept this excuse.
https://pub.towardsai.net/the-air-gapped-chronicles-the-cour...
Hizonner 4 hours ago
... which is why the institutions that assign responsibility and consequences need to make it really clear that excuse won't fly. With illustrative examples.
garyfirestorm 5 hours ago
It’s a tool. Used incorrectly, it will lead to errors. Just like a hammer, which used incorrectly can hit the user's finger.
happytoexplain 5 hours ago
There is enormous variability in how hard a tool is to use correctly, how likely it is to go wrong, and how severe the consequences are. AI has a wide range on all those variables because its use cases vary so widely compared to a hammer.
The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.
tgv 5 hours ago
This tool, however, is specifically built for mass surveillance. It serves no other purpose. The tool is broken, and everybody knows it. The tool makers are at least as guilty as those who use it.
jqpabc123 5 hours ago
> Used incorrectly, it will lead to errors.
Only one small little problem --- there is no way to tell if you are using it "correctly".
The only way to be sure is to not use it.
Using it basically boils down to, "Do you feel lucky?".
The Fargo police didn't get lucky in this case. And now the liability kicks in.
MattDaEskimo 4 hours ago
What kind of outcome results from misuse? Clearly a hammer's misuse has very little in common with a global, hivemind network used in high-stakes campaigns.
Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.
Otherwise, I'd say it's an extremely lazy argument
hrimfaxi 2 hours ago
Unlike with hammers, people preface things with "Claude says", etc. I never see that kind of distancing with tools that aren't AI.
suzzer99 5 hours ago
Dynamite is a tool. But we don't hand it out to anyone who wants to play with it.
skeeter2020 5 hours ago
AI feels closer to a firearm than a hammer when assessing law enforcement's ability to quickly do massive, unrecoverable harm.
renewiltord 3 hours ago
This appears to be The Sort in action again. The 50% of Americans below IQ 100 also need jobs and so on. Perhaps with AI pushing out people from high-intelligence jobs, we will get a large number of intelligent people in jobs like police or retail pharmacists or so on. Currently, these guys can barely read text and follow instructions. In fact, most of them are likely functionally illiterate and are coaxed through their programs by a system that is punished if it does not pass people.
The average policeman will find his brain sorely taxed by the average incident report form. Describing the phrase "false positive" to them is like trying to explain calculus to a mouse.
rootusrootus 2 hours ago
Has it not been fairly common to require police officers to have a bachelor’s degree? Or an associate’s? I think recently that has been relaxed but I’ve lived in places where it was absolutely a requirement.
I don’t think they’re as stupid as you suggest.
voakbasda an hour ago
Police departments are known to avoid hiring people who get high marks in school, under the principle that such individuals will become bored with the job and quit. They literally look for average people of average intelligence: C students.
Now factor in the slow decline of our educational institutions, where grade inflation has systematically diminished the credibility of a degree. I would wager that many C students today would have failed out completely 30 years ago.
In that light, it is not surprising that people are seeing ICE agents behave like brown shirts. No one in power wants those people asking any kind of hard questions about what they are being ordered to do.
llbbdd 2 hours ago
Having a degree is a very low bar for intelligence.
zephen 2 hours ago
> I don’t think they’re as stupid as you suggest.
I'll just leave this here:
https://abcnews.com/US/court-oks-barring-high-iqs-cops/story...
giardini 2 hours ago
This has been posted at least twice before on HN.