Building machine partners for intelligence analysts
IARPA is developing technologies that will enhance analysts’ decision-making capabilities.
Intelligence analysts sometimes must make life-or-death judgments based on conflicting or incomplete information. Can software help them get better results?
Managers at the Intelligence Advanced Research Projects Activity, an organization under the Office of the Director of National Intelligence, think there’s a good possibility that it can.
IARPA’s Crowdsourcing Evidence, Argumentation, Thinking and Evaluation program has funded four research projects that combine crowdsourced collaboration with software tools to help intelligence analysts assess evidence and detect assumptions that may be skewing their interpretations.
“Reasoning, thinking through complex issues, different hypotheses, different types of evidence, contrary evidence, what the applications are -- that is what intelligence analysts do for a living,” Program Manager Steven Rieber said. “CREATE is about developing the technologies and tools that make the reasoning processes more efficient and more effective.”
The four projects each take a different approach to enhancing analysts’ decision-making capabilities. “We think that that is a good idea, because we don't know at this point what will work,” Rieber said. “So we are trying different things.”
One of the projects -- Trackable Reasoning and Analysis for Collaboration and Evaluation -- is aimed at developing a web-based application that improves reasoning through techniques such as debate and analogical reasoning. According to Jennifer Stromer-Galley, professor in Syracuse University’s School of Information Studies and leader of the TRACE team, the web app will also employ crowdsourcing to supplement analysts’ problem-solving abilities and foster creative thinking.
The first step in improving analysis is to break down the complex processes analysts apply after gathering their evidence. Currently, Stromer-Galley said, the intelligence community applies a variety of “structured analytic techniques," or SATs, which take analysts through a logical process of testing hypotheses. “The whole point of the structured analytic techniques is to help analysts, when they are looking at a variety of different information, to minimize cognitive biases that might lead them to incorrect conclusions,” she said.
“SATs are not perfect, and people don't apply them perfectly,” she said. They can be “kludgy and clunky, and they are not easy to use.”
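Stromer-Galley does not name which SATs TRACE draws on; one widely documented example is the Analysis of Competing Hypotheses, which scores every hypothesis against every piece of evidence and favors the hypothesis with the least contradicting evidence. The sketch below, with invented hypotheses and scores, illustrates that general technique only -- it is not TRACE's code.

```python
# Minimal sketch of an ACH-style consistency matrix, one kind of structured
# analytic technique (SAT). Hypotheses, evidence items and scores are invented.

# How consistent each piece of evidence is with each hypothesis
# (+1 consistent, 0 neutral, -1 inconsistent).
evidence_scores = {
    "intercepted shipment manifest": {"H1": -1, "H2": +1, "H3": 0},
    "eyewitness interview":          {"H1": +1, "H2": -1, "H3": -1},
    "satellite imagery":             {"H1": 0,  "H2": +1, "H3": -1},
}

def rank_hypotheses(scores):
    """Rank hypotheses by how little evidence contradicts them -- the point is
    to counter the bias of looking only for confirming evidence."""
    inconsistency = {}
    for ratings in scores.values():
        for hypothesis, value in ratings.items():
            inconsistency[hypothesis] = inconsistency.get(hypothesis, 0) + (1 if value < 0 else 0)
    # Fewest inconsistencies first: the hypothesis that is hardest to refute.
    return sorted(inconsistency.items(), key=lambda kv: kv[1])

print(rank_hypotheses(evidence_scores))  # -> [('H1', 1), ('H2', 1), ('H3', 2)]
```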
TRACE aims to streamline the process with a web-based interface that applies logical tools -- decision trees and advanced natural language processing -- in an intuitive, game-inspired design.
But the logical processing is only part of the effort, Stromer-Galley said. Just as important is building an interface that encourages online teams to work well together.
“Let's say you have been working on a case, and you keep going back to an interview that was done with a particular person that you think is pretty relevant, so you give it a high score for relevancy,” Stromer-Galley said. When an analyst working on a related case comes across the interview and sees that it has a high score for relevancy, he or she pays more attention. That means “you can focus your cognitive efforts on other aspects,” she said.
The quality of such ratings, of course, depends on the quality of analysts. “You can get people who produce low-quality content or low-quality evaluations, or they are saboteurs in some fashion,” she said. “As we move forward, one of the things we need to think about is how to downgrade or minimize the impact the bad work could have.”
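The article does not say how TRACE will actually discount low-quality or malicious contributors. One common approach, assumed here purely for illustration, is to weight each crowd rating by the rater's track record; the analyst names and numbers below are made up.

```python
# Illustrative sketch: weight each relevancy score by the rater's reliability,
# so low-quality raters or saboteurs move the shared score very little.
# This is not TRACE's algorithm; all names and values are hypothetical.

# Reliability weights learned from past work (hypothetical), on a 0-1 scale.
rater_reliability = {"analyst_a": 0.9, "analyst_b": 0.7, "saboteur_x": 0.1}

# Relevancy scores (1-5) given to one interview document by several analysts.
ratings = {"analyst_a": 5, "analyst_b": 4, "saboteur_x": 1}

def weighted_relevance(ratings, reliability):
    """Weighted average relevancy across raters."""
    total_weight = sum(reliability.get(rater, 0.0) for rater in ratings)
    if total_weight == 0:
        return None
    return sum(score * reliability.get(rater, 0.0)
               for rater, score in ratings.items()) / total_weight

print(round(weighted_relevance(ratings, rater_reliability), 2))  # -> 4.35
```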
As the analyst moves through the reasoning process guided by the web app and receives input from other analysts in the online crowd, the software will be watching and making suggestions. The team is also considering “smart nudging, providing little indicators or cues” to help analysts with reasoning and reporting, Stromer-Galley said. “Maybe there is a step they are not thinking about that they should think about, or maybe there's some evidence that they have failed to look at carefully that could be useful.”
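Again as an illustration only: a "smart nudge" of the kind Stromer-Galley describes could be as simple as a rule-based check that flags evidence the analyst has not yet examined and workflow steps not yet completed. The case structure and field names below are hypothetical, not drawn from the TRACE design.

```python
# Toy example of rule-based nudging: flag unexamined evidence and skipped steps.
case = {
    "evidence": ["interview_032", "financial_records", "satellite_imagery"],
    "reviewed": {"interview_032"},
    "steps_done": {"list hypotheses", "rate evidence"},
    "steps_required": {"list hypotheses", "rate evidence", "consider contrary evidence"},
}

def nudges(case):
    messages = []
    for item in case["evidence"]:
        if item not in case["reviewed"]:
            messages.append(f"You haven't looked closely at '{item}' yet.")
    for step in case["steps_required"] - case["steps_done"]:
        messages.append(f"Step not yet completed: {step}.")
    return messages

for note in nudges(case):
    print(note)
```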
A team at the University of Melbourne in Australia is pursuing a strikingly different approach. The Smartly-assembled Wiki-style Argument Marshalling project is a cloud-based platform that, rather than forcing analysts into a formal logical structure of argumentation, encourages natural debate among participants.
SWARM -- a collaboration between the university’s School of Bioscience, School of Historical and Philosophical Studies and School of Engineering -- provides a platform where participants analyze and debate. The SWARM team is also developing algorithms that will generate a statistical summary of each analyst’s reasoning strengths and biases, which can then be applied to improve the quality of the group's output.
“Using the platform will be much like using a WYSIWYG wiki-type platform, similar to, say, Google Docs, but customized to support production of reasoned analyses of complex problems,” explained Tim van Gelder, a University of Melbourne associate professor and the SWARM team co-leader. “Crowds could informally collaborate on producing analyses, somewhat similar to the Wikipedia page on the disappearance of Malaysian Airlines Flight MH370.”
The emphasis of SWARM is on debate. In fact, the team calls the project’s datasets “arguwikis.”
“The main objective of the platform is not rational consensus,” van Gelder said. “Rather, it is to produce a well-reasoned analysis of a problem. In short, the SWARM approach requires ongoing disagreement.”
According to van Gelder, SWARM will also include tools running in the background that improve performance by applying what the team calls “deliberation analytics.” That means generating profiles of the participants and rating their analytic strengths and weaknesses, as well as measuring group dynamics. “The platform can use these reasoning profiles in various ways to boost the performance of groups of participants,” van Gelder said. “For example, we will be developing ways to produce ‘super-teams’ who work together especially effectively.”
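Van Gelder does not detail how the reasoning profiles are computed or applied. One simple possibility, sketched below with invented data, is to score each participant's past forecasting accuracy (a Brier-style calibration score) and weight their judgments on new questions accordingly; SWARM's actual deliberation analytics may work quite differently.

```python
# Hedged sketch of one possible "deliberation analytics" mechanism: build a
# per-participant accuracy profile from past forecasts, then weight judgments
# on a new question by that profile. All data are invented.

past = {  # (forecast probability, actual outcome 1/0) pairs per participant
    "p1": [(0.8, 1), (0.7, 1), (0.2, 0)],
    "p2": [(0.9, 0), (0.4, 1), (0.6, 0)],
}

def brier_score(history):
    """Mean squared error of past forecasts; lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in history) / len(history)

def profile_weight(history):
    """Turn a Brier score into a weight: accurate participants count for more."""
    return 1.0 - brier_score(history)

def weighted_group_judgment(judgments, past):
    weights = {p: profile_weight(past[p]) for p in judgments}
    total = sum(weights.values())
    return sum(judgments[p] * weights[p] for p in judgments) / total

# New question: each participant's probability that some hypothesis is true.
print(round(weighted_group_judgment({"p1": 0.75, "p2": 0.40}, past), 2))  # ~0.63
```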
Still another CREATE-funded team is developing Co-Arg, or Cogent Argumentation System with Crowd Elicitation, a software “cognitive assistant” for analysts that tests hypotheses, evaluates evidence and learns from the information it is presented with. The project by researchers at George Mason University allows analysts to use a web-based system called “Argupedia” to ask other experts to evaluate the various parts of a hypothesis. Their arguments are collected and rated by the software.
The fourth project team, from Monash University in Australia, is engineering a system to help intelligence analysts improve the way they build and test arguments about probable outcomes when accurate information is limited and there is uncertainty about both the causal processes at work and the effects of actions or interventions.
The system would also support counterarguments and rebuttals, identification of sources, levels of confidence and similarities. All of these features will be used to produce more effective English language arguments.
“What we’re developing is a sophisticated tool that will improve the quality of the analysts’ reasoning by enabling them to better assess the value of their evidence,” said Kevin Korb, the project’s chief investigator. “Using our interface should also increase the reliability and acceptance of their arguments, and therefore improve the decision making of the people that they report to.”
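The article does not describe the Monash system's internals, but a standard way to quantify the value of a piece of evidence under uncertainty is a Bayesian update, in which the likelihood ratio measures how strongly the evidence favors a hypothesis over its alternative. The sketch below uses invented numbers to show how a single report can shift an analyst's confidence; it is an illustration of the general idea, not the team's tool.

```python
# Illustrative Bayesian update: how much should one piece of evidence shift
# belief in hypothesis H? Numbers are invented for the example.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of H after observing the evidence."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

prior = 0.30                      # belief in H before the new report
likelihood_ratio = 0.8 / 0.2      # evidence is 4x more likely if H is true
posterior = bayes_update(prior, 0.8, 0.2)

print(f"likelihood ratio: {likelihood_ratio:.1f}")  # 4.0 -> fairly strong evidence
print(f"posterior P(H):   {posterior:.2f}")         # rises from 0.30 to ~0.63
```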
While the four five-year CREATE projects take different approaches to enhancing human decision-making, Rieber said the teams are communicating with one another, and he’s hopeful that each of the efforts may offer tools that can eventually be integrated.
“There is plenty of collaboration among the teams,” Rieber said. “We are encouraging them to develop systems or methods that will lend themselves to easy integration within existing intelligence community tools and with one another.”