Pentagon outlines AI ethics principles
The guidance calls for AI use and adoption that is responsible, equitable, traceable, reliable and governable.
As the military increasingly looks to artificial intelligence to improve efficiency both on the battlefield and in back-office systems, Defense Department officials announced the principles the department will use to ensure the ethical use of AI.
The five principles are based on recommendations from the Defense Innovation Board and come a year after DOD released its AI strategy. Announced Feb. 24, the guidelines call for AI use that is:
- Responsible: DOD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedures and documentation.
- Reliable: The Department's AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
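The "governable" principle implies a concrete engineering pattern: a mechanism, separate from the model itself, that can disengage a system whose outputs fall outside expected bounds. DOD has not published reference code, so the following is a minimal, hypothetical Python sketch of that idea; the class name, anomaly threshold, and plausible-output range are all assumptions for illustration only.

```python
# Hypothetical sketch only; DOD has not published implementation details.
from dataclasses import dataclass

@dataclass
class GovernableModel:
    """Wraps a prediction function with a simple disengage switch."""
    predict_fn: callable
    max_anomalies: int = 3   # assumed threshold, purely illustrative
    anomalies: int = 0
    engaged: bool = True

    def predict(self, x, plausible_range=(0.0, 1.0)):
        if not self.engaged:
            raise RuntimeError("Model disengaged; hand off to a human operator.")
        y = self.predict_fn(x)
        lo, hi = plausible_range
        if not lo <= y <= hi:          # out-of-bounds output: unintended behavior
            self.anomalies += 1
            if self.anomalies >= self.max_anomalies:
                self.engaged = False   # "disengage or deactivate" the system
        return y
```

Keeping the deactivation logic in a wrapper rather than inside the model is one way to make the off-switch itself auditable, in the spirit of the traceability principle.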
The principles will likely be woven into nearly every aspect of the department's AI work, including data collection, cybersecurity and testing, DOD CIO Dana Deasy told reporters at a press briefing.
"We need to be very thoughtful about where that data is coming from, what was the genesis of that data, how was that data previously being used,” Deasy said. “You can end up in a state of [unintentional] bias and therefore create an algorithmic outcome that is different than what you're actually intending."
Officials stressed that DOD would not field capabilities that fail to meet the principles, though they acknowledged that more specific guidance is still needed. "Responsible AI," they conceded, has not yet been defined, and ongoing discussions and exercises will be required to help shape "who is held responsible" at each stage from software development to fielding.
Deasy said an AI steering committee will develop further guidance on how to bring in data, develop solutions, build and test algorithms, and train operators to recognize unintended effects. The group will also work on procurement guidance, technological safeguards, organizational controls, risk mitigation strategies and training measures.
"These are proactive and deliberate actions" that form the foundation for practitioners but are malleable enough to adapt as tech evolves, said Lt. Gen. Jack Shanahan, chief of DOD's Joint Artificial Intelligence Center.
DOD is also looking to include "non-obligatory language in contracts" that would ask companies how they planned to abide by the principles when building algorithms and tools -- but that doesn't mean enforcement, Shanahan said.
"I'm not suggesting enforcement at the beginning of it," he said. "These are early conversations to be had with our industry partners to say now that we've established these principles for AI ethics, could you develop the capabilities that address each of the five at some point along the way through [research, development, testing and evaluation]."
This article was first posted to FCW, a sibling site to GCN.