Leveraging the wisdom (and ignorance) of crowds
A new IARPA project aims to improve human reasoning through large-scale, structured crowdsourcing.
To improve intelligence analysts' and decision makers' understanding of the evidence and assumptions that support -- or conflict with -- their conclusions, the Intelligence Advanced Research Projects Activity announced funding to develop and test large-scale, structured collaboration methods that strengthen reasoning.
The Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) program will improve analysts’ ability to provide accurate, timely and well-supported analyses of complex, ambiguous and often novel problems facing the intelligence community, IARPA said.
Besides marshalling facts and evidence, CREATE aims to give analysts a clearer understanding of conflicting evidence, knowledge gaps and degrees of uncertainty. CREATE systems aim to help analysts explain to decision makers why judgments were made, why seemingly plausible alternatives were rejected and where major gaps remain in what is known.
“CREATE will combine crowdsourcing with structured techniques to improve reasoning on complex analytic issues,” said IARPA Program Manager Steven Rieber. “The resulting technology will be valuable not just to intelligence analysis but also to science, law and policy -- in fact, to any domain where people must think their way through complex questions.”
Through a competitive broad agency announcement, IARPA awarded CREATE contracts to develop and test structured crowdsourcing platforms. Projects that received funding include:
SWARM -- Smartly-assembled Wiki-style Argument Marshalling. The University of Melbourne aims to develop a cloud-based platform that uses algorithms to build a statistical summary of each participant’s reasoning strengths and biases, which the platform could then use to improve the group’s collective output.
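To make the weighting idea concrete, here is a minimal sketch of skill-weighted aggregation, assuming participants are scored on a Brier-style track record. The function names and scoring rule are illustrative assumptions, not the SWARM team's actual algorithms.

```python
# Hypothetical sketch of skill-weighted crowd aggregation; the scoring
# rule and names are assumptions for illustration, not SWARM's method.
import numpy as np

def brier_weight(past_forecasts, past_outcomes):
    """Weight a participant by inverse Brier score on past questions."""
    brier = np.mean((np.array(past_forecasts) - np.array(past_outcomes)) ** 2)
    return 1.0 / (brier + 1e-6)  # better track record -> larger weight

def weighted_judgment(forecasts, weights):
    """Combine current probability judgments, favoring stronger reasoners."""
    w = np.array(weights) / np.sum(weights)
    return float(np.dot(w, forecasts))

# Three participants with different track records judge the same question.
weights = [brier_weight([0.9, 0.1], [1, 0]),   # well calibrated
           brier_weight([0.6, 0.4], [1, 0]),   # middling
           brier_weight([0.2, 0.8], [1, 0])]   # poorly calibrated
print(round(weighted_judgment([0.8, 0.7, 0.3], weights), 3))  # ~0.787
```

The combined estimate lands near the well-calibrated participant's judgment rather than the simple average, which is the point of summarizing reasoning strengths before pooling.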
TRACE -- Trackable Reasoning and Analysis for Collaboration and Evaluation. Syracuse University will develop a web-based application that uses crowdsourcing to overcome common shortcomings in intelligence work: it aims to improve the division of labor, reduce both the systematic and random errors individuals introduce and promote communication and interaction among teams.
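The statistical intuition behind targeting both error types can be shown in a few lines: averaging independent estimates drives down random error as the crowd grows, but a bias everyone shares does not average out. The toy simulation below illustrates the principle only; it is not TRACE itself.

```python
# Toy illustration: random error shrinks with crowd size, shared bias doesn't.
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0
shared_bias = 5.0    # systematic error common to the whole crowd
noise_sd = 20.0      # each individual's random error

for n in (1, 10, 100, 1000):
    estimates = truth + shared_bias + rng.normal(0, noise_sd, size=n)
    print(n, round(estimates.mean(), 2))
# The crowd average converges toward 105, not 100: random error fades,
# but removing the shared bias takes structured debate and division of
# labor -- the part a collaboration workflow is meant to address.
```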
Co-Arg -- Cogent Argumentation System with Crowd Elicitation. George Mason University is developing a software-based cognitive assistant for intelligence analysts that tests hypotheses, evaluates evidence, sorts facts from deception and provides intelligent reasoning about quickly evolving situations. It uses a web-based system called “Argupedia” that lets a lead analyst ask other experts to weigh in on small aspects of a hypothesis. Their arguments are assembled and weighted for relevance by the software.
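As a rough illustration of what assembling and relevance-weighting arguments could look like, the sketch below sums pro and con contributions into a single signed score for a hypothesis. The data model and weights are hypothetical, not Co-Arg's actual representation.

```python
# Hedged sketch of relevance-weighted argument assembly; the structure is
# an assumption for illustration, not Co-Arg's or Argupedia's design.
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str
    supports: bool      # True = favors the hypothesis, False = undercuts it
    relevance: float    # 0..1, assigned by the software or lead analyst

def net_support(arguments):
    """Sum relevance-weighted arguments into a single signed score."""
    return sum(a.relevance * (1 if a.supports else -1) for a in arguments)

args = [Argument("Source A confirms troop movement", True, 0.9),
        Argument("Imagery is two weeks old", False, 0.5),
        Argument("Source B may be a deception channel", False, 0.7)]
print(round(net_support(args), 2))  # -0.3: evidence currently leans against
```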
Awards were also given to Monash University, the University of Melbourne, Johns Hopkins University Applied Physics Lab and Good Judgment, Inc.