Squeezing the risk out of government AI projects
A new report offers a five-point framework government agencies can use to maximize the benefits of artificial intelligence while minimizing the risks.
“Risk Management in the AI Era,” released April 16 by the IBM Center for the Business of Government, proposes a risk management framework that can help agencies match AI tools to their needs.
“Public managers must carefully consider both potential positive and negative outcomes, opportunities, and challenges associated with the use of these tools,” the report states, as well as the relative likelihood of positive or negative outcomes.
The framework is based on five criteria. The first is efficiency, which the report defines as the ratio of output generated to input required. AI tools are efficient because their marginal cost per task execution approaches zero over time, the report states, adding that even the fixed costs associated with starting a project can be lower than traditional enterprise solutions because AI’s infrastructure can use existing or web-based systems.
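To make that cost logic concrete, here is a minimal sketch of the comparison. All figures are hypothetical, not drawn from the report:

```python
# Illustrative sketch, not from the report: comparing average cost per task
# for an AI tool and a traditional enterprise solution. All figures are
# hypothetical.

def average_cost_per_task(fixed_cost: float, marginal_cost: float, tasks: int) -> float:
    """Average cost per task: (fixed cost + marginal cost * tasks) / tasks."""
    return (fixed_cost + marginal_cost * tasks) / tasks

# An AI tool's marginal cost per execution is near zero, so its average
# cost falls sharply as task volume grows.
for tasks in (1_000, 100_000, 10_000_000):
    ai = average_cost_per_task(fixed_cost=50_000, marginal_cost=0.001, tasks=tasks)
    legacy = average_cost_per_task(fixed_cost=250_000, marginal_cost=0.50, tasks=tasks)
    print(f"{tasks:>10,} tasks: AI ${ai:,.4f}/task vs. legacy ${legacy:,.4f}/task")
```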
The second point is effectiveness, or the degree of success associated with one or more attempts to meet a predefined objective. Although most agencies measure success rates, the report recommends tracking failure rates instead. By accounting for both kinds of failure, making the wrong choice and failing to make the correct one, organizations can set acceptable thresholds for each. That enables them to compare the effectiveness of AI against current processes.
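A failure-rate comparison of that kind might look like the following sketch. The counts and thresholds are hypothetical, and the mapping of the two failure modes to false positives and false negatives is an interpretation, not the report's own method:

```python
# Illustrative sketch, not from the report: expressing effectiveness as
# failure rates. "Wrong choice" is treated as a false positive and
# "missed correct choice" as a false negative; counts and thresholds
# are hypothetical.

def failure_rates(false_pos: int, false_neg: int, total: int) -> tuple[float, float]:
    """Return (wrong-choice rate, missed-choice rate) over all decisions."""
    return false_pos / total, false_neg / total

# Hypothetical counts for an AI tool and the current manual process.
ai_wrong, ai_missed = failure_rates(false_pos=40, false_neg=25, total=10_000)
cur_wrong, cur_missed = failure_rates(false_pos=90, false_neg=60, total=10_000)

# Agency-defined tolerance for each failure mode.
WRONG_THRESHOLD = 0.01
MISSED_THRESHOLD = 0.01

ai_ok = ai_wrong <= WRONG_THRESHOLD and ai_missed <= MISSED_THRESHOLD
print(f"AI tool: wrong={ai_wrong:.4f} missed={ai_missed:.4f} within thresholds={ai_ok}")
print(f"Current: wrong={cur_wrong:.4f} missed={cur_missed:.4f}")
print(f"AI improves on current process: {ai_wrong < cur_wrong and ai_missed < cur_missed}")
```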
Third is equity. AI bias is a well-documented problem, particularly in criminal justice, where decision-support tools have produced harsher recommendations for Black suspects than for white ones. To mitigate that, “comprehensive risk management strategies of adopting AI tools should use the observed performance of the organization and its agents as the base case, rather than an ideal state of perfect equality,” the report states.
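As an illustration of that base-case framing, the following sketch compares a hypothetical AI tool's outcome gap between two groups against the organization's observed baseline rather than against perfect equality. All rates are invented for the example:

```python
# Illustrative sketch, not from the report: judging an AI tool's equity
# against the organization's observed performance rather than perfect
# equality. All rates are hypothetical.

def disparity(rate_group_a: float, rate_group_b: float) -> float:
    """Absolute gap in adverse-outcome rates between two groups."""
    return abs(rate_group_a - rate_group_b)

# Observed adverse-outcome rates under the current human process.
baseline_gap = disparity(rate_group_a=0.22, rate_group_b=0.15)

# Observed adverse-outcome rates for the candidate AI tool on the same task.
ai_gap = disparity(rate_group_a=0.19, rate_group_b=0.16)

# Per the report's framing, the test is whether the tool narrows the gap
# relative to current practice, not whether the gap reaches zero.
print(f"baseline gap={baseline_gap:.2f}, AI gap={ai_gap:.2f}, "
      f"narrows gap={ai_gap < baseline_gap}")
```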
The fourth piece is manageability, or how easy a tool is to implement. Falling prices and more skilled workforces are making AI tools more manageable, but that in itself could be a problem if agencies develop overly ambitious implementations, the report states.
The fifth aspect is legitimacy and political feasibility. Agency leaders and the public must believe the AI tool and its use are legitimate. For that to happen, the first four elements of the framework are crucial.
“The framework emphasizes two important characteristics whenever considering whether to use AI-based systems to augment or automate organizational tasks — first, the degree of discretion required to execute the task and, secondly, the level within the organization/institutional environment where the task takes place,” the report states.
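One way to read that passage is as a simple two-by-two assessment. The sketch below pairs each combination of discretion and organizational level with an augment-or-automate leaning; the categories and recommendations are illustrative assumptions, not the report's methodology:

```python
# Illustrative sketch, not the report's methodology: a toy lookup pairing
# the two characteristics the report emphasizes, degree of discretion and
# organizational level, with an augment-vs.-automate leaning. The
# categories and recommendations are hypothetical.

RECOMMENDATION = {
    ("low", "operational"):  "automate: routine, rule-bound task",
    ("low", "strategic"):    "augment: low discretion, but high stakes",
    ("high", "operational"): "augment: keep a human in the loop",
    ("high", "strategic"):   "augment: AI as decision support only",
}

def assess(discretion: str, level: str) -> str:
    """Return the leaning for a task's (discretion, level) combination."""
    return RECOMMENDATION[(discretion, level)]

print(assess("low", "operational"))  # automate: routine, rule-bound task
print(assess("high", "strategic"))   # augment: AI as decision support only
```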
To illustrate effective government use of AI risk management, the report highlights two use cases. The first is self-driving trolleys in Bryan, Texas. The trolleys use data from cameras and laser imaging, detection and ranging (lidar) sensors to navigate streets, and two safety workers are ready to take control should a problem arise. To make the project happen, city officials partnered with Texas A&M University and the private sector and created high-quality, publicly accessible data to generate support.
In Syracuse, N.Y., meanwhile, officials are taking a proactive risk management approach to implementing smart city solutions such as autonomous vehicles and AI-augmented decision processes. The main concern is balancing the city’s current needs against future ones as officials evaluate both today’s and tomorrow’s AI tools. They are also weighing questions of data ownership and management.
“Syracuse does not have the infrastructure or internal expertise to safely store and manage petabytes of real-time-generated sensor data,” the report states. “One alternative option involves a collaborative governance regime or partnership with Syracuse University and one or more other local and regional governments that would have the university host the data with support from the city and other jurisdictions.”
In looking at risk management and existing policies around AI such as the National Artificial Intelligence Research and Development Strategic Plan and the “Maintaining American Leadership in Artificial Intelligence” executive order, the report concludes with three recommendations for government agencies developing AI solutions:
- Commit to upfront and ongoing investments in management and analysis related to AI and existing organizational processes.
- Manage risk by maximizing the fit between tools and tasks.
- Use existing relationships and cultivate new ones to share lessons and best practices on AI use and risk management.
Read the full report here.