DARPA targets doctored images
A multidisciplinary team of university researchers is working to pull together a platform capable of scanning the massive numbers of images and videos posted online daily and verifying their authenticity.
As monitoring social media images becomes increasingly important to U.S. intelligence agencies tracking groups like the Islamic State, analysts need a way to quickly verify the authenticity of the millions of photos that are posted online daily. Last fall, the Defense Advanced Research Projects Agency launched an effort to use technology to detect manipulated images, and now a multidisciplinary team of university researchers is working to pull together a platform capable of scanning the massive stream of images and videos.
Because commercially available software can be used to manipulate visual media, DARPA's four-year, $4.4 million Media Forensics, or MediFor, initiative will underwrite research into, among other things, better algorithms for spotting fake images. Those tools would then allow analysts to conduct forensic investigations to determine precisely how and why images were manipulated.
That capability could ultimately provide insights into the "digital lineage" of doctored images and video, a field known as "multimedia phylogeny."
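The article does not detail how such a lineage is recovered, but the core idea in the multimedia phylogeny literature is to treat a set of near-duplicate images as nodes in a graph and link each one to its most plausible ancestor. The sketch below is a minimal, hypothetical illustration of that idea, assuming same-sized near duplicates, a known original and a crude pixel-difference metric; production systems use far more robust dissimilarity measures and tree-building methods.

```python
# Minimal sketch of image phylogeny: link each near-duplicate image to
# its closest ancestor in a spanning tree, so the tree approximates the
# derivation history. The pixel-difference metric and Prim's algorithm
# here are illustrative placeholders, not MediFor's actual methods.
import numpy as np

def dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared pixel difference between two same-sized images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def phylogeny_tree(images: list[np.ndarray], root: int = 0) -> list[tuple[int, int]]:
    """Grow a spanning tree outward from a presumed original image,
    returning (parent, child) edges that approximate the lineage."""
    in_tree = {root}
    edges = []
    while len(in_tree) < len(images):
        best = None  # (cost, parent, child)
        for parent in in_tree:
            for child in range(len(images)):
                if child in in_tree:
                    continue
                cost = dissimilarity(images[parent], images[child])
                if best is None or cost < best[0]:
                    best = (cost, parent, child)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges
```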
A prime example of photo fakery occurred in 2008 when Iran released a provocative image of a missile test that turned out to be doctored. The photo was widely published in U.S. newspapers before the deception was discovered.
In rolling out the program last fall, program officials said MediFor would attempt to integrate machine learning and image analysis technologies into a forensic-based platform to “detect manipulations, provide analysts and decision makers with detailed information about the types of manipulations performed, how they were performed… in order to facilitate decisions regarding the intelligence value of the image [and] video.”
The image research is divided among several U.S. universities as well as investigators in Brazil and Italy. The multidisciplinary team includes researchers from the University of Notre Dame, New York University, Purdue University and the University of Southern California.
"A key aspect of this project is its focus on gleaning useful information from massive troves of data by means of data-driven techniques instead of just developing small laboratory solutions for a handful of cases," Walter Scheirer, a principal investigator at Notre Dame, noted in a statement.
Tools already exist to scan Internet images, but not on the scale required by U.S. intelligence agencies. Researchers noted that such a capability would require specialized machine-learning platforms designed to automatically perform processes needed to verify the authenticity of millions of videos and images.
"You would like to be able to have a system that will take the images, perform a series of tests to see whether they are authentic and then produce a result," explained Edward Delp, director of Purdue's Video and Image Processing Laboratory. “Right now you have little pieces that perform different aspects of this task, but plugging them all together and integrating them into a single system is a real problem."
Hence, investigators will attempt to piece together a complete system capable of handling the massive volumes of visual media uploaded to the Internet each day. That will require deep-learning tools capable of churning through millions of images, detecting doctored pictures and producing a digital lineage that might shed light on the motivation of terror groups.
Purdue's piece of the project focuses on using tools like image analysis to determine whether media has been faked, what tools were used and what portions of an image or video were actually modified. "The biggest challenge is going to be the scalability, to go from a sort of theoretical academic tool to something that can actually be used," Delp said.
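The article does not name the specific techniques Purdue will use. One textbook method for localizing edited regions is error level analysis, which re-saves an image as a JPEG at a fixed quality and highlights areas whose recompression error differs from the rest of the picture. A minimal sketch using the Pillow imaging library:

```python
# Error level analysis (ELA), a classic localization heuristic offered
# purely as an illustration: regions pasted in from another source often
# recompress differently from the rest of the image.
import io
from PIL import Image, ImageChops

def error_level_map(path: str, quality: int = 90) -> Image.Image:
    """Return a difference image; brighter regions recompress less
    cleanly and may indicate locally edited content."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)  # rewind so the resaved copy can be read back
    resaved = Image.open(buf).convert("RGB")
    return ImageChops.difference(original, resaved)
```

ELA is easily fooled by repeated resaving and uniform textures, which is one reason scaling from single heuristics to a reliable, automated system remains the research challenge Delp describes.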
This article originally appeared on DefenseSystems, a sister site to GCN.