The push for explainable AI
As algorithms take in more data and make increasingly complex inferences over time, their decisions become more opaque to human managers.
Governments, private companies and financial institutions are becoming increasingly reliant on algorithms to monitor networks, detect fraud and conduct financial lending and other transactions -- in some cases with limited insight into how those decisions are made.
Algorithms compare people and things to related objects, identify patterns and make predictions based on those patterns. That makes bad data or flawed learning protocols problematic, especially when the decision-making process is opaque.
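As a rough illustration of how flawed training data carries through to decisions, the sketch below trains a toy lending scorer on biased historical approvals and shows two otherwise identical applicants receiving different scores. The data, features and choice of scikit-learn's logistic regression are all hypothetical stand-ins; the article names no specific tooling.

```python
# Minimal sketch with hypothetical data: a model trained on skewed historical
# lending decisions reproduces that skew in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training set: [income in $1,000s, group proxy] -> past approval decision.
# Applicants in group 1 were historically approved less often at the same income.
X = np.array([[55, 0], [60, 0], [40, 0], [55, 1], [60, 1], [40, 1]])
y = np.array([1, 1, 1, 0, 1, 0])  # biased historical outcomes

model = LogisticRegression().fit(X, y)

# Two applicants with identical income, differing only in the group proxy,
# receive different approval probabilities: the model has learned the old bias.
print(model.predict_proba([[55, 0], [55, 1]])[:, 1])
```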
A 2018 study from the University of California, Berkeley, for example, found that online automated consumer lending systems charged black and Latino borrowers more to purchase and refinance mortgages than white borrowers, resulting in hundreds of millions of dollars in collective overcharges that could not be explained.
"We are seeing people being denied credit due to the factoring of digital composite profiles, which include their web browsing histories, social media profiles and other inferential characteristics, in the factoring of credit models," Nicol Turner-Lee, a fellow at the Center for Technology Innovation at the Brookings Institution, told lawmakers at a June 26 House Financial Services Committee.
"These biases are systematically [less favorable] to individuals within particular groups when there is no relevant difference between those groups which justifies those harms."
Technology is not necessarily the main culprit here; in fact, the Berkeley study found that in some cases financial technology, or fintech, algorithms can actually discriminate less than their human lender counterparts. Rather, most experts say the fault lies with the humans and organizations behind flawed algorithms, who feed data points into a system without really understanding how the algorithm will process that data and relate it to groups of people. However, the speed and scale at which these automated systems operate, and the enormous amount of data they can access, give them the potential to spread nationwide discriminatory practices that were once local and confined to particular geographic enclaves.
While organizations are ultimately legally responsible for how their products, including algorithms, behave, many encounter what is known as the "black box" problem: as a machine learning algorithm takes in more data and makes increasingly complex inferences, its decisions become more opaque to the people managing it. That challenge has led experts to champion "explainability" as a key factor for regulators assessing the ethical and legal use of algorithms. In practice, that means being able to demonstrate that an organization understands what information its algorithm uses to arrive at the conclusions it produces.
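One simple form of that insight is being able to show which inputs a model weighs most heavily. The sketch below ranks a toy credit model's inputs by their learned weight; the feature names and data are hypothetical, and inspecting a linear model's coefficients is only one of several explainability techniques, not a method prescribed by anyone quoted here.

```python
# Minimal sketch of one explainability technique: ranking the inputs a trained
# linear model relies on. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "web_browsing_score"]
rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = (X[:, 2] > 0.5).astype(int)  # outcomes driven mostly by the inferential feature

model = LogisticRegression().fit(X, y)

# Sorting coefficients by magnitude surfaces which inputs dominate decisions,
# the kind of account of its own model an organization would need to produce.
for name, coef in sorted(zip(feature_names, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:+.2f}")
```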
The Algorithmic Accountability Act introduced in April by Sens. Cory Booker (D-N.J.) and Ron Wyden (D-Ore.) in the Senate and Rep. Yvette Clarke (D-N.Y.) in the House would give the Federal Trade Commission two years to develop regulations requiring large companies to conduct automated decision system impact assessments of their algorithms and treat discrimination resulting from those decisions as "unfair or deceptive acts and practices," opening those firms up to civil lawsuits. The assessments would look at training data for impacts on accuracy, bias, discrimination, privacy and security and require companies to correct any discrepancies they find along the way.
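The bill leaves the content of those assessments to the FTC, but one basic check such an assessment might include is comparing outcome rates across demographic groups. The sketch below uses hypothetical approval data and borrows the "four-fifths" threshold from employment-discrimination guidance purely as an illustration; the legislation itself specifies no such test.

```python
# Minimal sketch of a disparate-impact check on hypothetical approval outcomes.
def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = approved, 0 = denied, for two demographic groups (made-up numbers).
group_a = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold only
    print("Flag for review: approval rates differ substantially between groups")
```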
In a statement introducing the bill, Booker drew on his parents' experience of housing discrimination at the hands of real estate agents in the 1960s, saying that algorithms have the potential to bring about the same injustice but at scale and out of sight.
"The discrimination that my family faced in 1969 can be significantly harder to detect in 2019: houses that you never know are for sale, job opportunities that never present themselves, and financing that you never become aware of -- all due to biased algorithms," he said.
Turner-Lee said that organizations can do more to understand how their automated systems may be susceptible to illegal or discriminatory practices during the design stage and before they're deployed. Voluntary or statutory third-party audits and bias-impact statements could help companies to "figure out how to get ahead of this game," she said.
"Getting companies as well as consumers engaged, creating more feedback loops, so that we actually go into this together, I think, is a much more proactive approach than trying to figure out ways to clean up the mess and the chaos at the end," Turner-Lee said.
Federal agencies are also increasingly making use of artificial intelligence and machine learning and will face many of the same conundrums. Another bill sponsored by Sens. Cory Gardner (R-Colo.), Rob Portman (R-Ohio), Kamala Harris (D-Calif.) and Brian Schatz (D-Hawaii) would create a new Center of Excellence at the General Services Administration to provide research and technical expertise on AI policy. It would also establish a federal advisory board to explore opportunities and challenges in AI and require agencies to create governance plans for how use of the technology aligns with civil liberties, privacy and civil rights.
This article was first posted to FCW, a sibling site to GCN.