Assessing the impact of algorithms
An Algorithmic Impact Assessment could help governments avoid the potential pitfalls associated with automated decision-making, the AI Now Institute says.
What: “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” a report by the AI Now Institute at New York University, a research institute focused on the social implications of AI
Why: As public agencies increasingly turn to automated processes and algorithms to make decisions, they need accountability frameworks that can address inevitable questions, from software bias to a system's impact on the community. The AI Now Institute's Algorithmic Impact Assessment gives public agencies a practical way to assess automated decision systems and ensure public accountability.
Proposal: Just as an environmental impact statement can increase agencies' sensitivity to environmental values and inform the public of coming changes, an AIA aims to do the same for algorithms before governments put them to use. The process starts with a pre-acquisition review in which the agency, other public officials and the public at large can examine the proposed technology before the agency enters into any formal agreements. This review would include defining what the agency considers an “automated decision system,” disclosing details about the technology and its use, evaluating the potential for bias and inaccuracy, and planning for third-party researchers to study the system after it becomes operational.
Public comment should be solicited before any AI-enabled systems begin operation, AI Now suggests. In addition, a due process period would allow outside groups or individuals to challenge an agency on its compliance with an AIA. And once an automated decision system is deployed, AI Now says, the agency should notify the communities the system will affect.
AIAs would help public agencies better understand the potential impacts before systems are implemented, encouraging them "to better manage their own technical systems and become leaders in the responsible integration of increasingly complex computational systems in governance." They also provide an opportunity for vendors to foster public trust in their systems.
These AIAs, once implemented, should be renewed on a regular basis, AI Now writes.
Read the full report here.
Editor's note: This article was changed April 18 to correct the name and affiliation of the AI Now Institute at NYU.