Article Summary of:

How AI-powered tech landed man in jail with scant evidence

Published: August 20, 2021

The last two years have revealed that AI and ML systems often hold immense power over people’s lives in the criminal justice realm. A new Associated Press investigation into an arrest made on the basis of an AI system known as ShotSpotter calls attention to many recurring themes of trust in AI. ShotSpotter is a surveillance technology used by law enforcement agencies across the country to combat gun violence in urban centers. However well intentioned, the technology appears flawed in how it distinguishes gunfire from other loud noises, such as a car engine backfiring. It operates as a complete black box, with its developer refusing to explain how the system works. Customer support staff can override the AI’s decisions at the request of police officers. And independent tests have shown it failing at its primary function: identifying the location of known gunshots. As a result, several judges have deemed it an insufficient source of evidence, yet it continues to shape life-altering decisions about individuals across the United States.

As seen in Issue 8 and our inaugural issue, this is not the only example of the harms of ungoverned AI in policing. After Robert Williams’s false arrest due to errors in facial recognition technology, the ACLU filed a lawsuit that, if successful, would force the Detroit Police Department to stop using facial recognition technology and to adopt greater transparency across its systems.

AI Governance & Assurance
Ethics & Responsibility