DARPA director clear-eyed and cautious on AI
Arati Prabhakar, director of the Defense Advanced Research Projects Agency, cautioned against relying on the current generation of artificial intelligence to solve complex problems.
Artificial intelligence has gained serious attention as a solution for complex problems, but the head of the Defense Advanced Research Projects Agency cautions against viewing it as a panacea.
“When we look at what’s happening with artificial intelligence, we see something that is very, very powerful, very valuable for military applications, but we also see a technology that is still quite fundamentally limited,” DARPA Director Arati Prabhakar said at the Atlantic Council on May 2.
Image analysis, Prabhakar said, reveals some of the technology’s limitations. AI and machine learning systems can statistically outperform humans at identifying images, sifting through thousands of them in seconds, but “the problem is that when they’re wrong, they are wrong in ways that no human would ever be wrong,” she said. In one case, a machine identified a picture of a baby holding a toothbrush as a baby with a baseball bat. “I think this is a critically important caution about where and how we would use this generation of artificial intelligence,” she said.
Still, many experts and government officials are urging greater use of automation and intelligent systems that can operate at “cyber speed” and increase efficiency as datasets grow exponentially. “We have organizations and machines that are capable of sharing information automatically, but … we need more machines to be able to automatically ingest it and act on it,” Philip Quade, the National Security Agency’s special assistant to the director for cyber and chief of the NSA Cyber Task Force, said last month.
Researchers have already developed cognitive systems to help humans sift through large datasets and identify objects of interest, such as mines beneath the ocean’s surface. Such developments will become increasingly important as DOD expects to increase daily unmanned aerial system intelligence, surveillance and reconnaissance sorties by nearly 50 percent by 2019.
Already, the Air Force is “collecting terabytes of data every day,” Lt. Gen. Robert Otto said at an AFCEA NOVA luncheon in February. “It’s the equivalent of -- just in full-motion video -- two NFL seasons a day, and analyzing it all.” For Otto, the Air Force deputy chief of staff for intelligence, surveillance and reconnaissance, tagging metadata and leveraging automation and big data analytics will improve analysis. “My predecessor talked about how we’re swimming in sensors and drowning in data, but that’s only true if you can’t analyze everything,” he said.
“There’s a criticism about the intelligence services not connecting the dots. I think of how big data might be able to change that equation -- connect dots that I’m not even thinking about,” Otto said. “Then we can put our attention where it needs to go.”
Prabhakar, meanwhile, signaled high hopes for big data and analytics in optimizing human performance, but she remained cautious about machines’ ability to provide all the answers. “I’m having trouble imagining a future where machines will tell us what the right thing is to do,” she said.
However, she offered an optimistic forecast for AI. DARPA sees limitations as opportunities “to drive the technology forward,” she said. “So today the other thing that we’re doing … is making the investments that we hope will create that third wave of artificial intelligence.” The goal, she suggested, would be machines that look beyond correlations and “help us build causal models of what’s happening in the world … and take what they’ve learned in one domain and use it in different domains -- something that they can’t really do at all today.”
For Prabhakar, AI will have to be deployed in the right place at the right time. “We have to be clear about where we’re going to use the technology and where it’s not ready for prime time, where it’s not really ready for us to trust it,” she told GCN after the event.
Artificial intelligence, for example, can be useful if it immediately provides a jamming profile to military pilots who encounter a new radar signal, she explained. However, a self-driving car making AI-based determinations might be “imperfect in some dangerous ways.”
“I think it’s just important to be clear-eyed about what … machine learning can and can’t do,” she said.