Uncovering discrimination in machine-learning software
Researchers have developed a way to test algorithms to determine if the decisions they make are biased.
It’s no secret that machine-learning algorithms can be problematic and can even inject or amplify bias in decision-making processes. Software used by courts for sentencing decisions, for example, has been shown to make harsher recommendations for defendants of color.
Sainyam Galhotra, Alexandra Meliou and Yuriy Brun, assistant professors at the University of Massachusetts at Amherst, have found a new technique to automatically test software for discrimination.
The reason algorithms deliver biased outcomes, they say, is that the data the machine-learning algorithms learn from is itself full of bias. Many developers and end users simply aren’t aware of this bias, Brun said.
Their technique, Themis, tests algorithms and measures discrimination in their outcomes. It runs the algorithm many times, varying the inputs to see whether the decisions change. Users can find out, for example, whether changing only a person’s race affects whether the software recommends bail for a suspect or a lengthy sentence for a convicted defendant.
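The sketch below illustrates the general idea of this kind of input-perturbation test in Python. It is not the Themis implementation: the attribute names, the toy decision function and the sampling scheme are all hypothetical, meant only to show how flipping a single protected attribute across many randomly generated inputs can expose biased decisions in black-box software.

```python
import random

# Hypothetical input space: each attribute maps to its possible values.
# These attributes and the decision function below are illustrative only,
# not part of the actual Themis tool.
ATTRIBUTES = {
    "race": ["white", "black", "hispanic", "asian"],
    "age": list(range(18, 80)),
    "prior_offenses": list(range(0, 10)),
}

def causal_discrimination_rate(decide, protected="race", samples=10_000, seed=0):
    """Estimate how often changing only the protected attribute
    changes the decision of a black-box function decide(person) -> bool."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(samples):
        # Draw a random input.
        person = {attr: rng.choice(values) for attr, values in ATTRIBUTES.items()}
        baseline = decide(person)
        # Re-run with every other value of the protected attribute.
        for value in ATTRIBUTES[protected]:
            if value == person[protected]:
                continue
            variant = dict(person, **{protected: value})
            if decide(variant) != baseline:
                changed += 1
                break
    return changed / samples

# Toy, deliberately biased decision function standing in for the software under test.
def toy_recommend_bail(person):
    return person["prior_offenses"] < 3 and person["race"] == "white"

print(f"causal discrimination: {causal_discrimination_rate(toy_recommend_bail):.2%}")
```

The real tool layers statistical machinery on top of a loop like this, but the black-box structure -- generate inputs, vary one attribute, compare decisions -- is the core idea.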
"Themis can identify bias in software whether that bias is intentional or unintentional and can be applied to software that relies on machine learning, which can inject biases from data without the developers’ knowledge,” Brun said.
Themis was tested on software from public GitHub repositories, including applications meant to be “fairness aware,” meaning specifically designed to avoid bias. But even this fairness-aware code can show bias.
“Once you train it on some data that has some biases in it, then the software overall becomes biased,” Brun said.
Themis is free to use and available online. Because it treats the software under test as a black box, the source code is not needed, so a court that uses sentencing software or a police department working with predictive policing software can run Themis to see if bias exists.
So far, Themis only works with simple inputs like numbers. A next step is to test more complex inputs like pictures so researchers can detect bias in facial recognition technology.
Themis is a starting point for removing bias, the researchers said. When biased software is detected, it can be sent back to the developer, who can get a better “understanding of what the software is doing and where the discrimination bug is.”
Editor's note: This article was changed Sept. 13 to include the name of the first author of the paper on fairness testing, Sainyam Galhotra. We regret the error.