The dangers of 'deep fakes'
With advanced technologies like artificial intelligence and machine learning, manipulated digital media will be easier to create and more difficult to detect.
False and doctored media can be used for misinformation campaigns, and advanced technologies like artificial intelligence and machine learning will only make them easier to create and more difficult to detect.
Deep fakes combine and superimpose different audio and visual sources to create an entirely new (and fake) image or video that can fool even digital forensics and image analysis experts. They only need to appear credible for a short window of time to affect an election, Sen. Marco Rubio (R-Fla.) warned at a recent Atlantic Council event.
"One thing the Russians have done in other countries in the past is, they've put out incomplete information, altered information and or fake information, and if it's done strategically, it could impact the outcome of an [election]," Rubio said. "Imagine producing a video that has me or Sen. [Mark] Warner [D-Va., who also spoke at the event] saying something we never said on the eve of an election. By the time I prove that video is fake -- even though it looks real -- it's too late."
Rubio, who has warned about the impact of deep-fake technology in the past, is part of a growing group of policymakers and experts worried about the effect false or doctored videos could have on electoral politics. Earlier this year, comedian Jordan Peele and BuzzFeed released a now-viral video that used deep-fake technology to depict former President Barack Obama (voiced by Peele) uttering a number of controversial statements, before warning the viewer about the inherent dangers such tools pose.
The technology is far from flawless, and in many cases a careful observer can still spot evidence of video inconsistencies or manipulation. But as Chris Meserole and Alina Polyakova noted in a May 2018 article for the Brookings Institution, "bigger data, better algorithms and custom hardware" will soon make such false videos appear frighteningly real.
"Although computers have long allowed for the manipulation of digital content, in the past that manipulation has almost always been detectable: A fake image would fail to account for subtle shifts in lighting, or a doctored speech would fail to adequately capture cadence and tone," Meserole and Polyakova wrote. "However, deep learning and generative adversarial networks have made it possible to doctor images and video so well that it's difficult to distinguish manipulated files from authentic ones."
As the authors and others have pointed out, the algorithmic tools regularly used to detect such fake or altered videos can also be turned around and used to craft even more convincing fakes. Earlier this year, researchers in Germany developed an algorithm to spot face swaps in videos. However, they found that "the same deep-learning technique that can spot face-swap videos can also be used to improve the quality of face swaps in the first place -- and that could make them harder to detect."
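The same dynamic can be shown directly. The sketch below -- a hypothetical, simplified illustration, not the German researchers' method -- assumes a trained PyTorch classifier `detector` that outputs a single logit (positive meaning "real") and nudges a generated image toward the detector's "real" decision by gradient descent on the image itself.

```python
# Sketch: using a fake/real detector's own gradients to make a fake harder
# to flag. Assumes `detector` is a trained PyTorch model returning one logit
# per image (>0 means "real"); all names and settings here are illustrative.
import torch

def refine_fake(detector: torch.nn.Module, fake_img: torch.Tensor,
                steps: int = 50, lr: float = 0.01) -> torch.Tensor:
    img = fake_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        logit = detector(img.unsqueeze(0))              # shape (1, 1)
        # Push the detector's verdict toward "real".
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            logit, torch.ones_like(logit))
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            img.clamp_(0.0, 1.0)                        # keep valid pixel range
    return img.detach()
```

Every improvement to the detector, in other words, can be folded back into the forgery pipeline -- the asymmetry the German team warned about.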
Researchers at the National Institute of Standards and Technology and the Defense Advanced Research Projects Agency have been working to develop technology that can detect deep fakes.
In its Media Forensics Challenge, NIST aims to advance image and video forensics technologies so it's easier to determine whether an image or video was modified, which section was altered and where the "donor" parts of the image came from.
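One of the simpler heuristics in that forensics toolbox is error-level analysis: re-save a JPEG at a known quality and look for regions whose re-compression error stands out, which can hint at pasted-in or retouched content. The Python sketch below, using the Pillow imaging library, is far cruder than the techniques NIST evaluates, and the file name and quality setting are arbitrary assumptions.

```python
# Illustrative error-level analysis (ELA) with Pillow.
from PIL import Image, ImageChops
import io

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-compress the image in memory at a fixed JPEG quality.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")

    # Pixel-wise difference: regions that re-compress very differently from
    # the rest of the image deserve a closer look.
    diff = ImageChops.difference(original, recompressed)
    extrema = diff.getextrema()                   # per-band (min, max)
    max_diff = max(hi for _, hi in extrema) or 1
    return diff.point(lambda p: min(255, p * 255 // max_diff))

# ela = error_level_analysis("suspect.jpg")      # hypothetical filename
# ela.save("suspect_ela.png")
```

Modern manipulations routinely defeat checks this simple, which is why the NIST challenge pushes toward localizing edits and tracing donor imagery automatically.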
DARPA's five-year Media Forensics (MediFor) program, launched in September 2015, aims "to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform."
"We're now in the early days of figuring out how to scale [the system] so we can do things quickly and accurately to stop the spread of viral content that is fake or has been manipulated," Hany Farid, a Dartmouth College digital forensics expert who is participating in the MediaFor program, said in a recent article in Communications of the ACM. "The stakes can be very, very high, and that's something we have to worry a great deal about."
This article used portions of a story that was first posted on FCW, a sibling site to GCN.