DARPA's disinformation detector
The SemaFor program seeks to develop semantic technologies for identifying falsified media to better defend against large-scale, automated disinformation attacks.
As it becomes more difficult to identify text, images, audio and video that have been deliberately altered to deceive, the Defense Advanced Research Projects Agency wants to find a way to automatically detect evidence of manipulation so it can better defend against large-scale, automated disinformation attacks.
In a broad agency announcement for the Semantic Forensics, or SemaFor, program, DARPA said it is looking to focus on the small but common errors produced by automated systems that manipulate media content. For example, images of a woman's face created with generative adversarial networks, which are trained on databases of real photographs to produce synthetic faces, might include mismatched earrings – a semantic error that is easier to spot than to avoid making.
Current media manipulation tools rely heavily on ingesting and processing large amounts of data, DARPA said, making them more prone to errors that can be spotted with the right algorithm.
"These semantic failures provide an opportunity for defenders to gain an asymmetric advantage," DARPA wrote. "A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies."
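The asymmetry DARPA describes can be illustrated with a minimal sketch. The detector names, inputs, and checks below are hypothetical, invented purely for illustration; they are not part of the SemaFor announcement. The point is structural: a falsifier must pass every check, while a defender needs only one detector to fire.

```python
# Illustrative sketch only: detector names and inputs are hypothetical,
# not drawn from the actual SemaFor program.

def earring_mismatch(media):
    """Hypothetical check: do the detected earrings match?"""
    return media.get("earrings_mismatch", False)

def lighting_inconsistency(media):
    """Hypothetical check: are shadows consistent with one light source?"""
    return media.get("lighting_inconsistent", False)

DETECTORS = [earring_mismatch, lighting_inconsistency]

def flag_media(media):
    """Return the names of every detector that found an inconsistency.

    A single hit is enough to flag the media as suspect, while the
    falsifier must satisfy every detector to pass unflagged.
    """
    return [d.__name__ for d in DETECTORS if d(media)]

print(flag_media({"earrings_mismatch": True}))  # ['earring_mismatch']
print(flag_media({}))                           # []
```

Each detector the suite adds raises the falsifier's burden multiplicatively, which is the asymmetric advantage the announcement describes.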
The project is divided into four technical areas: detection; attribution and characterization; explanation and integration; and evaluation and challenge curation. DARPA said it wants to make sure any algorithm developed from the project will outperform comparable manual processes and also be able to demonstrate how it reached its conclusions.
The agency also said it wants to keep a tight lid on some of the technical details of the project, saying it will treat program activities as controlled technical information (CTI). That means that even though such details are not classified, contractors would be barred from sharing or releasing information to other parties since it could "reveal sensitive or even classified capabilities and/or vulnerabilities of operating systems."
The base algorithm itself will not be categorized as CTI, as DARPA said it will "constitute advances to the state of the art" and would only potentially fall under the definition after it had been trained for a specific Defense Department or governmental purpose.
"A key goal of the program is to establish an open, standards-based, multisource, plug-and-play architecture that allows for interoperability and integration," the announcement stated. "This goal includes the ability to easily add, remove, substitute, and modify software and hardware components in order to facilitate rapid innovation by future developers and users."
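A plug-and-play architecture of the kind the announcement describes can be sketched as a simple component registry. The interface below is an assumption for illustration only; the announcement does not specify an API, and the class and method names here are invented.

```python
# Hypothetical sketch of a plug-and-play detector architecture; the
# interface is invented for illustration, not taken from SemaFor.
from typing import Callable, Dict

# A detector maps raw media bytes to an inconsistency score in [0, 1].
Detector = Callable[[bytes], float]

class DetectorRegistry:
    """Allows components to be added, removed, or substituted at runtime,
    so future developers can swap in new detectors without code changes."""

    def __init__(self) -> None:
        self._detectors: Dict[str, Detector] = {}

    def register(self, name: str, detector: Detector) -> None:
        self._detectors[name] = detector  # add or substitute a component

    def unregister(self, name: str) -> None:
        self._detectors.pop(name, None)   # remove a component

    def score(self, media: bytes) -> Dict[str, float]:
        """Run every registered detector against the media."""
        return {name: d(media) for name, d in self._detectors.items()}

registry = DetectorRegistry()
registry.register("stub_detector", lambda media: 0.0)
print(registry.score(b"..."))  # {'stub_detector': 0.0}
```

Keeping detectors behind a uniform interface like this is one conventional way to meet the "add, remove, substitute, and modify" goal, since each component depends only on the shared contract rather than on the others.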
This article was first posted to FCW, a sibling site to GCN.