13 February 2021
The arrests and charges in the aftermath of the Jan. 6 Capitol Hill insurrection made clear the power of facial recognition, even as efforts to restrict the technology are growing.
Why it matters: With dozens of companies selling the ability to identify people from pictures of their faces — and no clear federal regulation governing the process — facial recognition is seeping into the U.S., raising major questions about ethics and effectiveness.
Driving the news: The Minneapolis City Council voted on Friday to bar its police department from using facial recognition technology, Axios Twin Cities' Nick Halter reports.
- Minneapolis will join other cities that have restricted the technology, including Portland, San Francisco and Boston.
The big picture: Even as efforts to restrict facial recognition at the local level are gathering momentum, the technology is being used across U.S. society, a trend accelerated by efforts to identify those involved in the Capitol Hill insurrection.
- Clearview AI, one of the leading firms selling facial recognition to police, reported a 26% jump in usage from law enforcement agencies the day after the riot.
- Cybersecurity researchers employed facial recognition to identify a retired Air Force officer recorded in the Capitol that day, and after the attack Instagram accounts popped up purporting to name trespassers.
By the numbers: A report by the Government Accountability Office found that between 2011 and 2019, law enforcement agencies performed 390,186 searches to find facial matches for images or video of more than 150,000 people.
- The Black Lives Matter protests over the summer also led to a spike in the use of facial recognition among law enforcement agencies, according to Chad Steelberg, the CEO of Veritone, an AI company. "We consistently signed an agency a week, every single week."
- U.S. Customs and Border Protection used facial recognition on more than 23 million travelers in 2020, up from 19 million in 2019, according to data released on Thursday.
How it works: In Veritone's facial recognition system, crime scene footage is uploaded and compared to faces in a known offenders database — though as agencies begin to share information across jurisdictions, that database of possible matches has been growing.
- Veritone's system returns possible matches with a confidence score that police can use — together with other data, like whether someone has a violent record — to identify possible suspects (a simplified sketch of this kind of matching appears below).
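To make that matching step concrete, here is a minimal, hypothetical sketch in Python — not Veritone's actual system or code. It assumes faces have already been converted into fixed-length embedding vectors by some recognition model; the function names, the 128-dimension size, and the 0.6 threshold are all illustrative assumptions.

```python
# Hypothetical sketch of face matching by embedding comparison; not any
# vendor's real system. Embeddings (fixed-length vectors produced by a face
# recognition model) are compared with cosine similarity, and candidates
# above a threshold are returned with a confidence score.
import numpy as np


def rank_candidates(probe, database, threshold=0.6):
    """Return (name, score) pairs whose cosine similarity to `probe`
    exceeds `threshold`, sorted best match first."""
    probe = probe / np.linalg.norm(probe)
    matches = []
    for name, embedding in database.items():
        score = float(probe @ (embedding / np.linalg.norm(embedding)))
        if score >= threshold:
            matches.append((name, score))
    return sorted(matches, key=lambda pair: pair[1], reverse=True)


# Illustrative usage with random 128-dimensional vectors standing in for
# real embeddings of a known-offenders database and of a face in footage.
rng = np.random.default_rng(seed=0)
database = {f"subject_{i}": rng.normal(size=128) for i in range(1_000)}
probe = database["subject_42"] + rng.normal(scale=0.2, size=128)  # noisy re-capture
print(rank_candidates(probe, database)[:3])
```

In a system like the one described above, the score attached to each candidate is what investigators would weigh alongside other evidence; the sketch simply shows why the output is a ranked list of possibilities rather than a single definitive identification.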
The big questions: Does it work? And should it work?
- Facial recognition is notoriously less accurate on non-white faces, and a 2019 federal study found Asian and Black people were up to 100 times more likely to be misidentified than white men, depending on the individual system.
- There have been two known cases so far of wrongful arrest based on mistaken facial recognition matches.
What they're saying: "Today's facial recognition technology is fundamentally flawed and reinforces harmful biases," FTC Commissioner Rohit Chopra said last month, following a settlement with a photo storage company that used millions of users' images to create facial recognition technology it marketed to the security and air travel industries.
The other side: Facial recognition companies counter that humans on their own are notoriously biased and prone to error — a 2014 study found 1 in 25 defendants sentenced to death in the U.S. are later shown to be innocent — and that the models are improving over time.
- "There's nothing inherently evil about the models and the bias," says Steelberg. "You just have to surface that information so the end user is aware of it."
Be smart: At its most basic level, the underlying technology isn't that sophisticated, which makes it difficult to control.
- Big tech companies like Microsoft can decide not to sell facial recognition software to police departments, but there are plenty of startups to take their place.
- And as Jan. 6 showed, even individuals can tap facial recognition with ease to become cyber-sleuths — or cyber-vigilantes.
"The core technology isn't limiting. It's really more of a legal jurisdiction question, which is where the rubber will meet the road."
Chad Steelberg, Veritone
Transcripts show George Floyd told police "I can't breathe" over 20 times
Newly released transcripts of bodycam footage from the Minneapolis Police Department show that George Floyd told officers he could not breathe more than 20 times in the moments leading up to his death.
Why it matters: Floyd's killing sparked a national wave of Black Lives Matter protests and an ongoing reckoning over systemic racism in the United States. The transcripts "offer one of the most thorough and dramatic accounts" of the moments before Floyd's death, The New York Times writes.
The state of play: The transcripts were released as former officer Thomas Lane seeks to have the charges that he aided in Floyd's death thrown out in court, per the Times. He is one of four officers who have been charged.
- The filings also include a 60-page transcript of an interview with Lane. He said he "felt maybe that something was going on" when asked if he believed that Floyd was having a medical emergency at the time.
What the transcripts say:
- Floyd told the officers he was claustrophobic as they tried to get him into the squad car.
- The transcripts also show Floyd saying, "Momma, I love you. Tell my kids I love them. I'm dead."
- Former officer Derek Chauvin, who had his knee on Floyd's neck for over eight minutes, told Floyd, "Then stop talking, stop yelling, it takes a heck of a lot of oxygen to talk."
Read the transcripts via DocumentCloud.