Can software predict crime? Maybe so, but no better than a human

By Niraj Chokshi, New York Times

Can you predict a crime?

More than a half-century ago, science fiction writer Philip K. Dick imagined a world in which a “precrime” agency made that possible. Today, a handful of tools promise to help nudge society in that direction.

But a new study out this week suggests that at least one such forecasting algorithm, used in some U.S. courts, is no better than untrained humans.

In the study, published in Science Advances on Wednesday, a pair of researchers found that small groups of randomly chosen people could predict whether a criminal defendant would be convicted of a future crime with about 67 percent accuracy, a rate virtually identical to that of COMPAS, software some U.S. judges use to inform their decisions.

“An algorithm’s accuracy can’t be taken for granted, and we need to test these tools to ensure that they are performing as we expect them to,” said Julia Dressel, who conducted the study for her Dartmouth College undergraduate thesis with Hany Farid, a computer science professor there.

Dressel and Farid also found that they could make similarly accurate predictions with just two pieces of information: a defendant’s age and past convictions. Meanwhile, COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions, relies on six variables for assessing risk, according to Equivant, which sells the software.
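
To make that two-variable finding concrete, here is a minimal sketch of the kind of classifier it implies. It is not the authors' code: the logistic regression stands in for whatever model they actually used, and the file name and column names (`compas-scores-two-years.csv`, `age`, `priors_count`, `two_year_recid`) are assumptions based on ProPublica's publicly released Broward County data.

```python
# Illustrative sketch only: a classifier built from just two pieces of
# information, age and prior convictions, in the spirit of Dressel and
# Farid's finding. File name and columns are assumptions, not the
# authors' actual pipeline.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("compas-scores-two-years.csv")

X = df[["age", "priors_count"]]   # the two features
y = df["two_year_recid"]          # 1 if rearrested within two years

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point of the sketch is how little the model needs: two columns and a generic linear classifier, against COMPAS' six variables.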

The authors are far from the first to raise questions about COMPAS’ role in judicial decision making.

In one high-profile case, a Wisconsin man charged with evading police said that his due process rights were violated when a judge used COMPAS in sentencing him to six years in prison. (He appealed the case to the U.S. Supreme Court, but it declined to weigh in last year.)

And, in 2016, the investigative journalism nonprofit ProPublica concluded that the software was unfair to black defendants.

While some academics disputed the certainty of that finding — arguing that it depends on how fairness is measured — the report inspired Dressel and Farid to conduct their own study.

In addition to their main finding, the authors also found evidence of racial bias from both COMPAS and the humans, even though neither considered race in the predictions. Compared with their white peers, a larger share of black defendants was falsely predicted to reoffend, while a smaller share was falsely predicted not to.
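
The bias finding rests on two error rates computed separately for each racial group: the share of non-reoffenders falsely predicted to reoffend (false positive rate) and the share of reoffenders falsely predicted not to (false negative rate). The sketch below shows that calculation on made-up toy data; the arrays are purely illustrative and carry no real results.

```python
# Sketch of the group-wise fairness check described above. All data
# here is toy and made up solely so the example runs.
import numpy as np

y_true = np.array([1, 0, 1, 0, 0, 1, 0, 1])  # actual reoffense
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])  # predicted reoffense
race   = np.array(["black", "black", "black", "black",
                   "white", "white", "white", "white"])

def error_rates(y_true, y_pred):
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tp = np.sum((y_pred == 1) & (y_true == 1))
    return fp / (fp + tn), fn / (fn + tp)  # FPR, FNR

for group in np.unique(race):
    mask = race == group
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"{group}: falsely predicted to reoffend {fpr:.0%}, "
          f"falsely predicted not to {fnr:.0%}")
```

A gap between groups on these two rates, with race never used as an input, is exactly the pattern the authors report.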

Equivant disputed the findings. In a statement Wednesday, it said that the authors overstated the number of variables COMPAS uses to predict future crimes — it is six, the company said, not the 137 that the authors cited. Equivant also said that the conclusions were limited by the small data sample used, the same data behind the ProPublica report.

Dressel and Farid based their analysis on a database of more than 7,000 pretrial defendants in Broward County, Florida, from 2013 to 2014. For each, the database included demographic information, criminal history, COMPAS risk scores and arrest records for the two years following the scoring.

A subset of 1,000 defendants was then used to collect predictions from 400 people, recruited through Amazon’s Mechanical Turk, a marketplace for online labor. Those participants were presented with paragraphs about each of 50 defendants and asked to rate the likelihood that each would commit a crime in the following two years.
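
Since each defendant was rated by multiple participants, the "small groups" comparison implies pooling those individual judgments, for instance by majority vote, before scoring them against the two-year arrest record. The sketch below shows that pooling step on random placeholder data; the group size of 20 and the pooling rule are assumptions, as this article does not specify them.

```python
# Sketch of pooled human predictions: take a majority vote across
# several raters per defendant, then score against outcomes. Votes and
# outcomes are random placeholders, not study data.
import numpy as np

rng = np.random.default_rng(0)
n_defendants, n_voters = 1000, 20  # group size of 20 is an assumption

outcomes = rng.integers(0, 2, size=n_defendants)            # rearrested?
votes = rng.integers(0, 2, size=(n_voters, n_defendants))   # raters' calls

majority = votes.mean(axis=0) >= 0.5   # pooled prediction per defendant
accuracy = (majority == outcomes).mean()
print(f"Majority-vote accuracy: {accuracy:.2f}")
```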

The descriptions presented to participants included the defendants' sex, age, the crime they were charged with, the type of crime and their arrest history.

Copyright 2024 New York Times News Service. All rights reserved.