In 2019 and 2020, three Black men were accused of and jailed for crimes they didn’t commit after police used face recognition to falsely identify them. Their wrongful arrest lawsuits are still pending, but their cases bring to light how AI-enabled tools can lead to civil rights violations and lasting consequences for the families of the accused.
Now all three men are speaking out against pending California legislation that would make it illegal for police to use face recognition technology as the sole basis for a search or arrest, requiring corroborating indicators instead. The problem, critics say, is that a possible face recognition “match” is not evidence — and that it can lead investigations astray even when police do seek corroboration.
The state Assembly last month passed Assembly Bill 1814 on a 70-0 vote. Today it faced a contentious hearing in the Senate Public Safety Committee.
Such a bill “would not have stopped the police from falsely arresting me in front of my wife and daughters,” Robert Williams told CalMatters in a statement. In 2020, Detroit police accused Williams of stealing watches worth thousands of dollars — the first known instance of a false arrest involving face recognition in the United States — after face recognition matched a surveillance video to a photo of Williams in a state database. Investigators put his photo in a “six-pack lineup” with five others, from which a security guard, who had seen a surveillance image and not the theft itself, selected him.
“In my case, as in others, the police did exactly what AB 1814 would require them to do, but it didn’t help,” said Williams, who is Black. “Once the facial recognition software told them I was the suspect, it poisoned the investigation….”