Algorithms Can’t Fix Societal Problems—And Often Amplify Them
Activists and researchers argue that artificial-intelligence tools, including those used by governments, reflect the problems already present in the systems they're automating.
At research organization AI Now’s 2018 symposium in New York on Tuesday, activists and artificial-intelligence researchers drove home the idea that algorithms are an insufficient Band-Aid slapped over deeper societal problems in the United States.
The algorithms behind facial-recognition systems made by big tech companies—along with the ones used by governments to determine welfare payouts or the kind of healthcare disabled patients receive—reflect the problems already present in the systems they’re automating, presenters said.
“Much of our conversation about technology in this country happens as though technology—and AI in particular—is developing in some universe that is separate than the universe you and I live in, which is rife with problems of inequality and discrimination,” said Sherrilyn Ifill, president of the NAACP Legal Defense Fund.
In a conversation about facial recognition, Ifill argued authorities should focus on fixing biases within the police and criminal justice systems before directing tax dollars toward technology solutions. Algorithmic bias is inevitable if facial-recognition algorithms, already shown to be less accurate on people with darker skin, are used in tandem with, for example, the New York City Police Department’s gang database, which is overwhelmingly composed of people of color, she said.
Even government services like the foster-care system are being automated in some ways, but the algorithms in use today aren’t being designed with a complete picture of where bias actually enters those systems, said Virginia Eubanks, author of the book Automating Inequality.
She offered the example of the foster-care system in Pennsylvania’s Allegheny County, which uses an algorithmic screening tool to assess phone calls reporting child endangerment. Developers working on the tool told Eubanks that the system can measure bias in the screening process, to see whether there is racial disparity in which calls get escalated to full investigations. But, Eubanks argues, that’s not where the bias in the system lives: it lies in the fact that communities report black and biracial families 350% more often than white families. The share of reports that get investigated might be the same across populations, but because far more families of color are reported in the first place, far more of them end up under investigation.
“What we’re doing is using the idea of eliminating individual irrational bias to allow this vast structural bias to sneak in the back door of the system,” she said.
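The arithmetic behind that point is simple enough to sketch. In the toy Python example below, every number is hypothetical except the reporting disparity, which is taken from the article’s “350% more” figure; the point is only that an identical screen-in rate applied to unequal report volumes still produces unequal numbers of investigations.

```python
# Toy illustration of the structural-bias point above: even if a screening
# tool escalates the same *fraction* of reports for every group, groups that
# are reported more often end up investigated more often.
# All figures are hypothetical except the reporting disparity, taken from the
# article's "350% more" figure.

def investigations(reports: int, screen_in_rate: float) -> int:
    """Number of reports a screening tool escalates to a full investigation."""
    return round(reports * screen_in_rate)

reports_white = 1_000                                        # hypothetical baseline volume
reports_black_biracial = round(reports_white * (1 + 3.5))    # "350% more" reports

screen_in_rate = 0.25    # hypothetical, and identical for both groups

for group, reports in [("white", reports_white),
                       ("black/biracial", reports_black_biracial)]:
    print(f"{group}: {reports} reports -> "
          f"{investigations(reports, screen_in_rate)} investigations")

# Output:
# white: 1000 reports -> 250 investigations
# black/biracial: 4500 reports -> 1125 investigations
```

Measuring fairness only at the screening step, in other words, can leave the upstream disparity in who gets reported entirely untouched.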
Lawyer Kevin De Liban, for his part, highlighted his success in obtaining, and then invalidating in court, an algorithm the state of Arkansas had used to cut the medical care his clients previously received.
“So that algorithm is dead and gone and on its way out. So what’s the state gonna do in January?” he asked the audience.
“Another algorithm!” an audience member shouted.
“Another algorithm!” he echoed. “So we’re hoping that at that point, not only are we smarter, we’re hoping that the state has learned some lessons.”
Dave Gershgorn is a reporter at Quartz, where this article was originally published.