NY to try algorithm-driven pretrial risk assessments
The governor is allowing judges to use “validated risk assessment” in pretrial determinations, but the fairness of the tools has been called into question.
New York Gov. Andrew Cuomo plans to update what he calls the state’s “antiquated system” of bail and pretrial detention, but the accuracy of the technology New York will use has been called into question by experts.
To help reduce pretrial incarceration, Cuomo is allowing judges to use “validated risk assessment” in pretrial determinations. Individuals who pose a high risk would likely still be held pending bail payment, but those who pose little risk could go free until trial.
Details on exactly what this risk assessment will look like remain sparse. “These assessments will be conducted by instruments that are validated, objective, and transparent to ensure that there is no bias in release determinations,” the governor’s announcement said.
Cuomo’s office did not respond to multiple phone calls and emails for this article.
New York will not be the first judicial system to use these algorithm-driven tools. When the Obama administration announced the Data-Driven Justice Initiative in 2015, it touted Charlotte-Mecklenburg, N.C.’s use of the technology and the resulting drop in total county jail population.
But Cathy O’Neil, author of “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” and founder of the algorithmic auditing company ORCAA, said the algorithms that power these risk assessment tools can be problematic.
“I don’t think we’re ready,” O’Neil said about governments using risk assessment tools. “I don’t think the tools are mature enough for widespread use, but if Cuomo is going to go ahead and do it, at the very least he needs to provide ongoing, transparent evidence that these tools are doing a good job and are not racist.”
And these tools can be biased, she said. A 2016 ProPublica investigation of one widely used tool, COMPAS, found that it was far more likely to incorrectly flag black defendants as high risk than white defendants.
Part of the reason for that bias is that the humans who build the mathematical models decide what counts as “fair.” If the measured recidivism rate is higher for black defendants, a model calibrated to that data may assign them higher risk scores without accounting for the other factors driving the difference.
New York plans to use “objective” risk assessment, but O’Neil said that because the definition of “fair” depends on what factors are used in the evaluation, “there is no such thing as objective because, again, this is a subjective choice of what we call ‘fair.’”
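To make that concrete, here is a minimal sketch in Python, using entirely made-up numbers: even when a tool applies the same scores and the same threshold to everyone, the group with the higher underlying recidivism rate can end up with a larger share of people wrongly flagged as high risk.

```python
# Illustrative sketch only: the scores and outcomes below are invented,
# not drawn from any real risk assessment tool or defendant data.

def false_positive_rate(scored_people, threshold):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    non_reoffenders = [score for score, reoffended in scored_people if not reoffended]
    flagged = [score for score in non_reoffenders if score >= threshold]
    return len(flagged) / len(non_reoffenders)

# Each tuple is (risk score assigned by the tool, actually reoffended?).
# Group B has a higher underlying recidivism rate than Group A.
group_a = [(0.8, True), (0.8, False), (0.3, False), (0.3, False)]
group_b = [(0.8, True), (0.8, True), (0.8, False), (0.3, False)]

THRESHOLD = 0.5  # the same cutoff is applied to both groups
for name, group in [("Group A", group_a), ("Group B", group_b)]:
    print(name, "false positive rate:",
          round(false_positive_rate(group, THRESHOLD), 2))
```

Which of those rates has to match before a tool counts as “fair” is exactly the kind of subjective choice O’Neil describes.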
Another problem with the tools is that they’re opaque, she said. Risk assessment companies often insist on clauses in government contracts to ensure their algorithms -- their intellectual property -- are exempt from freedom of information requests. But it is possible to audit a tool’s effectiveness without seeing the algorithm itself, O’Neil said. That would mean publicly sharing the actual recidivism rates for black and white defendants so the tool’s scores can be checked against outcomes.
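The kind of audit O’Neil describes needs only the tool’s decisions and what actually happened afterward, not the algorithm itself. A rough sketch, built on hypothetical records, might look like this:

```python
# Hypothetical outcome audit: every record below is invented for illustration.
# Each tuple is (race, flagged as high risk by the tool?, actually reoffended?).
from collections import defaultdict

records = [
    ("black", True,  False),
    ("black", True,  True),
    ("black", False, False),
    ("white", True,  True),
    ("white", False, False),
    ("white", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "non_reoffenders": 0,
                              "fn": 0, "reoffenders": 0})
for race, flagged, reoffended in records:
    c = counts[race]
    if reoffended:
        c["reoffenders"] += 1
        if not flagged:
            c["fn"] += 1  # missed someone who did reoffend
    else:
        c["non_reoffenders"] += 1
        if flagged:
            c["fp"] += 1  # flagged someone who did not reoffend

for race, c in counts.items():
    fpr = c["fp"] / c["non_reoffenders"]
    fnr = c["fn"] / c["reoffenders"]
    print(f"{race}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

Publishing numbers like these on an ongoing basis is the sort of transparent evidence O’Neil says New York should provide.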
What people should not do, she said, is expect these algorithms to be perfect.
“I think the most important thing is to compare [the analysis] to the state of the world or the state of bail hearings without that tool,” she said. “You can’t compare it to perfection -- you have to compare it to humans doing it without the tool.”
“There are scenarios in which it seems like a good idea,” she said. “But the thing you need to keep track of is how fair is it?”
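One way to read that advice, sketched below with invented figures, is to measure failure rates among people released under each regime, judges working alone versus judges using the tool, rather than against an imaginary perfect record.

```python
# Baseline comparison sketch: all figures below are invented for illustration.
# Each tuple is (released before trial?, failed to appear or reoffended?).

def failure_rate(decisions):
    """Share of released people who went on to fail (miss court or reoffend)."""
    outcomes = [failed for released, failed in decisions if released]
    return sum(outcomes) / len(outcomes)

judge_only = [(True, True), (True, False), (True, False), (False, False)]
with_tool  = [(True, False), (True, False), (True, True), (True, False)]

print("failure rate, judges alone:    ", round(failure_rate(judge_only), 2))
print("failure rate, judges with tool:", round(failure_rate(with_tool), 2))
```

Breaking the same comparison down by race, as in the audit sketch above, would show whether any improvement is shared fairly.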