AI Experts Want Government Algorithms to Be Studied Like Environmental Hazards
“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems,” according to a new report.
Artificial intelligence experts are urging governments to require assessments of AI systems modeled on the environmental impact reports that many jurisdictions already require.
AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would ensure that the public and governments understand the scope, capabilities, and secondary impacts of an algorithm, and would give people a way to voice concerns if an algorithm is behaving in a biased or unfair way.
“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems,” the report said. “The public will have less insight into how agencies function, and have less power to question or appeal decisions.”
An AIA would first define the automated system a government wants to use, the researchers said. The definition shouldn't be so broad that the government ends up disclosing every use of spell-check in a word processor, but not so narrow that it leaves out important details. The AIA should disclose not only how an algorithm works mathematically, but also what kind of data it is trained on and who will be influencing and interpreting its outputs, as the sketch below suggests.
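As a rough illustration of the kind of scoping such a disclosure implies, an agency could capture the basics in a simple structured record. The fields and example below are hypothetical, not a template drawn from AI Now's report:

from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of an algorithmic impact assessment record; the
# fields echo the disclosures the report calls for, but the structure
# itself is illustrative only.
@dataclass
class AlgorithmicImpactAssessment:
    system_name: str                  # the automated system being defined
    purpose: str                      # what decisions it informs
    model_description: str            # how the algorithm works, in plain terms
    training_data_sources: List[str]  # what data it is trained on
    output_interpreters: List[str]    # who influences and interprets its outputs
    known_limitations: List[str] = field(default_factory=list)

# Example entry for a hypothetical pretrial risk-scoring tool
aia = AlgorithmicImpactAssessment(
    system_name="Pretrial Risk Score",
    purpose="Advise judges on release conditions",
    model_description="Logistic regression over prior arrests and age",
    training_data_sources=["County arrest records, 2010-2016"],
    output_interpreters=["Pretrial services staff", "Judges"],
    known_limitations=["Arrest data may reflect past policing bias"],
)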
Notifying the public would come next. That presumably would avoid scandals such as the one in New Orleans, where the city secretly used Palantir's algorithms for predictive policing for six years until an investigation by The Verge uncovered the program.
While in use, automated decision-making systems should be continuously monitored. AI Now suggests that governments build up internal expertise to guard against biased algorithmic decisions, such as the bias against black defendants in predicted recidivism rates that ProPublica uncovered. The report says audits by outside researchers and auditors are needed, too.
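One way to make that monitoring concrete is the kind of disparity check ProPublica ran on recidivism scores: compare error rates across demographic groups in a system's decision logs. The sketch below is illustrative only; the field names, example data, and the choice of false positive rate as the metric are assumptions, not the report's prescribed audit:

from collections import defaultdict

# Illustrative audit: compare false positive rates across groups in a log
# of (group, predicted_high_risk, actually_reoffended) records.
def false_positive_rates(decisions):
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, predicted_high_risk, reoffended in decisions:
        if not reoffended:                  # look only at people who did not reoffend
            counts[group]["negatives"] += 1
            if predicted_high_risk:         # flagged as high risk despite not reoffending
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["negatives"] for g, c in counts.items() if c["negatives"]}

# Hypothetical decision log
log = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
print(false_positive_rates(log))  # {'group_a': 0.5, 'group_b': 0.67} -- a gap worth investigating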
The authors point out that government contractors are likely to claim trade secrecy rather than reveal how their algorithms work. Such vendors shouldn't bid for government work, the authors said. "If a vendor objects to meaningful external review, this would signal a conflict between that vendor's system and public accountability," the report says.
Dave Gershgorn writes for Quartz, where this article was originally published.