How to reduce citizen harm from automated decision systems
Agencies that use automated systems to inform decisions about schools, social services and medical treatment must make sure the technology protects the data it relies on.
A new report finds there’s little transparency around the automated decision-making (ADM) systems state and local agencies use for many tasks, leading to unintended, detrimental consequences for the people those systems are meant to help. But agencies can take steps to ensure they buy responsible products.
The findings are shared in “Screened and Scored in the District of Columbia,” a new report from the Electronic Privacy Information Center (EPIC). Researchers spent 14 months investigating 29 ADM systems at about 20 Washington, D.C., government agencies. They chose the district because it’s where EPIC is based, said Thomas McBrien, law fellow at EPIC and one of the report’s four authors.
The agencies use such systems to inform decisions about many activities, including assigning children to schools, understanding drivers’ travel patterns and informing medical decisions about patients, so it’s imperative that they’re using technology that protects data.
“Overburdened agencies turn to tech in the hope that it can make difficult political and administrative decisions for them,” according to the report. But “agencies and tech companies block audits of their ADM tools because companies claim that allowing the public to scrutinize the tools would hurt their competitive position or lead to harmful consequences. As a result, few people know how, when, or even whether they have been subjected to automated decision-making.”
Agencies can take four steps to mitigate the problem, McBrien said. First, agencies can require data minimization through contract language. “That’s basically the principle that when a company is rendering a service for an agency using its software, the agency should really ensure that the company isn’t taking more data than it needs to render that service,” he said.
That connects to his second recommendation, which is monitoring the downstream use of this data. Some ADM system vendors might take the data, run their services with it and that’s it, but others may share the data with their parent company or a subsidiary—or sell it to third parties.
“That’s where we see a lot of leakage of people’s personal data that can be really harmful, and definitely not what people are expecting their government to do for them,” McBrien said.
A third step is to audit for accuracy and bias. A tool that is very accurate for one population or in one area may lose accuracy and produce biased results when applied in a different context. The only way to know whether that’s happening is to audit and validate the system against the population the agency actually serves.
“The gold standard here would be to have an external auditor do this before you implement the system,” he said. But it’s a good idea to also do audits periodically to ensure that the algorithms the system uses are still accurate “because as the real world changes, the model of the real world it uses to make predictions should also be changing.”
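To make the audit step concrete, here is a minimal sketch in Python of the kind of subgroup validation described above: comparing a model’s accuracy and selection rate for each group it is applied to against the overall figures on a held-out validation set. The column names, thresholds and sample data are illustrative assumptions, not drawn from EPIC’s report or any particular agency system.

```python
# Minimal sketch of a subgroup accuracy/bias check on a held-out validation set.
# Column names ("group", "label", "prediction") and the gap thresholds are
# illustrative assumptions, not part of EPIC's report or any agency's system.
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str = "group",
                   label_col: str = "label", pred_col: str = "prediction",
                   max_accuracy_gap: float = 0.05,
                   max_selection_ratio_gap: float = 0.20) -> pd.DataFrame:
    """Compare each subgroup's accuracy and selection rate to the overall rates."""
    overall_acc = (df[label_col] == df[pred_col]).mean()
    overall_sel = df[pred_col].mean()  # share of people flagged/selected by the model

    rows = []
    for name, sub in df.groupby(group_col):
        acc = (sub[label_col] == sub[pred_col]).mean()
        sel = sub[pred_col].mean()
        rows.append({
            "group": name,
            "n": len(sub),
            "accuracy": round(acc, 3),
            "selection_rate": round(sel, 3),
            # Flag groups whose accuracy lags the overall figure ...
            "accuracy_flag": (overall_acc - acc) > max_accuracy_gap,
            # ... or whose selection rate diverges sharply from the overall rate
            # (a rough proxy for disparate impact).
            "selection_flag": abs(sel - overall_sel) / max(overall_sel, 1e-9)
                              > max_selection_ratio_gap,
        })
    return pd.DataFrame(rows)

# Example: validation records for the population served, joined with model outputs.
validation = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label":      [1,   0,   1,   0,   1,   0,   1,   0],
    "prediction": [1,   0,   1,   1,   0,   1,   1,   0],
})
print(audit_by_group(validation))
```

An external auditor would go considerably further, for example examining calibration, error rates by outcome and intersectional groups, but even a simple check like this, rerun periodically on fresh data, can surface the kind of drift McBrien describes.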
Fourth, agencies should inform the public about their use of these systems, McBrien said, adding that it’s a good way to build trust. Meaningful public participation is the No. 1 recommendation to come out of a report by the Pittsburgh Task Force on Public Algorithms.
“Agencies should publish baseline information about the proposed system: what the system is, its purposes, the data on which it relies, its intended outcomes, and how it supplants or replaces existing processes, as well as likely or potential social, racial, and economic harms and privacy effects to be mitigated,” according to the report’s second recommendation.
It’s also important to share the outcome of any decision being made based on ADM systems, McBrien added. “People who are directly impacted by these systems are often the first ones to realize when there’s a problem,” he said. “I think it’s really important that when that outcome has been driven or informed by an algorithmic system, that that’s communicated to the person so they have the full picture of what happened.”
He added that privacy laws such as the California Privacy Rights Act of 2020 support transparency, as does an effort in that state to redefine state technology procurement as well as a bill in Washington state that would establish “guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.”
Although he couldn’t say how prevalent such systems are among state and local agencies (EPIC’s researchers couldn’t examine every system in D.C. because many agencies withheld information, citing companies’ claims of trade secrets or other commercial protections), there are examples of their use elsewhere.
For instance, in 2019, New York City Mayor Bill de Blasio signed an executive order establishing an algorithms management and policy officer to serve as a central resource on algorithm policy and to develop guidelines and best practices for the city’s use of algorithms. That move followed a 2017 law that made the city the first in the country to create a task force to study agencies’ use of algorithms, though that group’s work prompted a shadow report highlighting the task force’s shortcomings.
“We definitely urge people to think of other solutions to these problems,” McBrien said. “Sometimes agencies implement that system and are locked into them for a long time and spend enormous amounts of money trying to fix them, manage the problem, ameliorate the harms of the system that could have been used to hire more caseworkers.”
Stephanie Kanowitz is a freelance writer based in northern Virginia.