Police Across the US are Training Crime-Predicting AIs on Falsified Data
"In May of 2010, prompted by a series of high-profile scandals, the mayor
of New Orleans asked the US Department of Justice to investigate the
city police department (NOPD). Ten months later, the DOJ offered its blistering analysis: during the period of its review from 2005 onwards, the NOPD had repeatedly violated constitutional and federal law.
It used excessive force, and disproportionately against black residents;
targeted racial minorities, non-native English speakers, and LGBTQ
individuals; and failed to address violence against women....
Despite the disturbing findings, the city entered a secret partnership
only a year later with data-mining firm Palantir to deploy a predictive
policing system. The system used historical data, including arrest
records and electronic police reports, to forecast crime and help shape
public safety strategies, according to company and city government
materials. At no point did those materials suggest any effort to clean
or amend the data to address the violations revealed by the DOJ. In all
likelihood, the corrupted data was fed directly into the system,
reinforcing the department’s discriminatory practices....
But new research suggests it’s not just New Orleans that has trained these systems with 'dirty data.' In a paper
released today, to be published in the NYU Law Review, researchers at
the AI Now Institute, a research center that studies the social impact
of artificial intelligence, found the problem to be pervasive among the
jurisdictions it studied."
Link to Full Report