Toward the deployment of ethical AI
As agencies navigate the ethics surrounding artificial intelligence technology, they can learn to identify and avoid bias, an IDC analyst says.
The growing use of artificial intelligence comes with more than just technology challenges. Ethical questions are emerging as government agencies use AI for a range of purposes, from customer service chatbots to mission-critical tasks supporting military forces. This increasing reliance on AI raises concerns about bias baked into the technology's underlying data and algorithms, particularly bias related to gender, race, socioeconomic status and age.
Bias can be built into AI algorithms by the humans who create them. This can happen intentionally or accidentally, when developers carry biases they don't realize they have. The consequence, of course, is biased outcomes.
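A toy sketch makes the mechanism concrete. In the hypothetical Python example below (the data, feature names and weights are invented purely for illustration), a model is trained on historical decisions that favored one group; the model reproduces that preference even though no one wrote a discriminatory rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical applicant pool: one protected attribute ("group") and
# one legitimate qualification score ("skill").
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historically biased labels: past decisions favored group 1 even at
# equal skill, so the recorded outcomes encode that preference.
past_approval = (skill + 0.8 * group + rng.normal(0, 1, size=n)) > 0.5

X = np.column_stack([group, skill])
X_train, X_test, y_train, y_test = train_test_split(
    X, past_approval, random_state=0)

# A model trained on biased history learns the bias, even though the
# developer never wrote a discriminatory rule.
model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

for g in (0, 1):
    rate = pred[X_test[:, 0] == g].mean()
    print(f"group {g}: predicted approval rate = {rate:.2%}")
```

Note that because the skew lives in the labels themselves, simply dropping the protected attribute would not fully fix the problem: other features correlated with it can act as proxies.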
Many agencies are aware of the problem. In February, for instance, the Defense Department released its AI strategy, which charges the Joint AI Center with “the responsible use and development of AI by articulating our vision and guiding principles for using AI in a lawful and ethical manner.” Programs under development include using AI to check contractors' cyber hygiene and to increase situational awareness for warfighters, even as the Defense Advanced Research Projects Agency researches how to overcome adversarial AI. The Department of Veterans Affairs, meanwhile, wants to use AI to speed the retrieval and delivery of information to veterans and caregivers who call its hotlines.
Additionally, one of the main goals of the National Institute of Standards and Technology’s AI program is to “measure and enhance the security and explainability of AI systems,” meaning designing systems that can explain the rationale for their decisions and surface any bias. In response to the Trump administration's executive order on AI leadership, NIST is working on a plan to develop "technical standards and tools in support of reliable, robust and trustworthy systems that use AI technologies," according to a notice in the Federal Register.
Not all communities have welcomed AI applications, however.
On May 14, San Francisco amended its administrative code to ban the city's use of facial recognition technology that uses AI to speed the identification of people in surveillance imagery. The “Stop Secret Surveillance” ordinance bans city agencies from buying facial recognition technology and requires them to get approval before buying any new surveillance technology, including license plate readers, biometric software, camera-enabled drones and software designed to monitor social media services or forecast criminal activity.
As agencies navigate the ethics of AI technology that is quickly gaining ground, IDC Government Insights offers help identifying where bias can come from and tips for avoiding it.
"Responsible and ethical AI includes the practices that government agencies can and should take to manage, monitor, and mitigate these risks," according to “IDC PlanScape: Responsible and Ethical AI for Federal and State Governments” (document ID #US44856318). It includes "protecting individuals from harm based on algorithmic or data bias or unintended correlation of personally identifiable information (PII) even when using anonymous data."
To help agencies achieve ethical and responsible AI, IDC recommends tactics such as following basic data management practices, seeking diversity in training data and using explainable AI.
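The explainability tactic can be approximated even with off-the-shelf tools. The sketch below (synthetic data and illustrative feature names, with scikit-learn assumed available; this is one possible approach, not a method prescribed by IDC) uses permutation importance to measure how heavily a model leans on each input. A large importance score for the protected attribute is a red flag worth investigating.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix; column names are illustrative only.
feature_names = ["group", "skill", "years_experience"]
rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)
years = rng.normal(5, 2, size=n)
y = (skill + 0.8 * group + rng.normal(0, 1, size=n)) > 0.5
X = np.column_stack([group, skill, years])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure the drop in
# accuracy. A big drop when the protected attribute is shuffled means
# the model depends on it for its predictions.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=1)

for name, mean in zip(feature_names, result.importances_mean):
    print(f"{name:>16}: importance = {mean:.3f}")
```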
Policies and guidance can go only so far, however. Agencies need the right people in place to oversee them and periodically review them for compliance. For example, President Trump’s February AI executive order directs leaders of agencies that conduct foundational AI R&D “and all other relevant personnel” to identify concerns in data and models related to privacy, confidentiality, safety and security.
“For nations to thrive, the ethics of AI must catch up with the technology so that governments, industries, and individuals learn to trust this transformative technology,” Adelaide O’Brien, research director at IDC Government Insights, said in the report.