Defense against deepfakes
Researchers at Purdue University have developed a machine learning algorithm that learns to spot anomalies by examining both the metadata stored in a video's header and the pixels in the video itself.
A slow-motion video of House Speaker Nancy Pelosi circulating on social media, in which she appears to slur her words, was easy to spot as a fake. But the technology behind deepfakes – videos that use artificial intelligence to create false or misleading impressions – is rapidly improving and has great potential for political and social disruption.
In 2015, in fact, the Defense Advanced Research Projects Agency launched the MediFor (Media Forensics) program “to level the digital imagery playing field, which currently favors the manipulator.”
The project is starting to pay off. In late April, Purdue University researchers announced the publication of an algorithm that detects video tampering. The Purdue team, which received funding for its work from DARPA last year, has put the technology into the public domain so that anyone can use it without charge.
Calling deepfakes “a growing danger,” Edward Delp, director of the Video and Imaging Processing Laboratory at Purdue University, warned in a statement that “it’s possible that people are going to use fake videos to make fake news and insert these into a political election. There’s been some evidence of that in other elections throughout the world already.”
The algorithm developed by Delp's team uses machine-learning techniques to examine both the metadata stored in a video's header and the pixels in the video itself for anomalies.
“By analyzing the video, the algorithm can see whether or not the face is consistent with the rest of the information in the video,” Delp said. “If it’s inconsistent, we detect these subtle inconsistencies. It can be as small as a few pixels, it can be coloring inconsistencies, it can be different types of distortion.”
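Delp's team has not published pseudocode alongside the announcement, but the two-pronged approach is straightforward to illustrate. The Python sketch below is a rough stand-in, not the Purdue algorithm: it assumes FFmpeg's ffprobe and OpenCV are installed, uses a hypothetical input file named clip.mp4, and replaces the learned model with two crude heuristics, an encoder-tag check on the header metadata and a scan for abrupt frame-to-frame color shifts.

```python
# Minimal sketch of the two-pronged idea Delp describes: inspect container
# metadata for signs of re-encoding, then scan frames for localized pixel
# inconsistencies. This is an illustration, not the Purdue algorithm.
import json
import subprocess

import cv2          # pip install opencv-python
import numpy as np

def header_metadata(path):
    """Dump container and stream metadata with ffprobe (part of FFmpeg)."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def suspicious_metadata(meta):
    """Flag encoder tags often left behind by editing or re-encoding tools.
    The tag list here is a hypothetical heuristic, not a published one."""
    tags = meta.get("format", {}).get("tags", {})
    encoder = tags.get("encoder", "").lower()
    return any(name in encoder for name in ("lavf", "handbrake"))

def frame_color_jumps(path, threshold=25.0):
    """Return frame indices where mean color shifts abruptly between
    consecutive frames, a crude stand-in for the subtle pixel and
    coloring inconsistencies a trained model would learn to detect."""
    cap = cv2.VideoCapture(path)
    jumps, prev_mean, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mean = frame.reshape(-1, 3).mean(axis=0)
        if prev_mean is not None and np.abs(mean - prev_mean).max() > threshold:
            jumps.append(idx)
        prev_mean, idx = mean, idx + 1
    cap.release()
    return jumps

if __name__ == "__main__":
    meta = header_metadata("clip.mp4")          # hypothetical input file
    print("metadata flag:", suspicious_metadata(meta))
    print("abrupt color jumps at frames:", frame_color_jumps("clip.mp4"))
```

In the real system, machine learning replaces these hand-written rules: the model is trained on examples of authentic and tampered video and learns which header and pixel patterns signal manipulation, rather than relying on fixed thresholds.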
At the same time, Delp said his team will continue to refine the algorithm "for the foreseeable future" to detect more-subtle signs of tampering. "It's going to be an arms race," he told GCN. "People are getting more sophisticated about making these fake videos, and we want to make sure our algorithm is sophisticated enough to be able to detect those, too."
Delp added that the technologies that enable deepfakes are well within the reach of even individuals. “You could probably do it with a machine that cost $5,000 or $6,000,” he said. “It might take you a couple of hours to generate a 5-10 minute video clip. If you want to generate it faster you just buy more hardware.”
And for those who can’t afford even that investment in equipment, there’s already a website offering deepfake services -- DeepFakes Web. Customers can upload videos and images and the site will process the video for $2 per hour.
Meanwhile, Samsung's AI lab in Moscow brought the Mona Lisa to life using technology that works from a single image. After learning from a large dataset of videos how facial landmarks move when people speak, the system can animate highly realistic and personalized talking-head models from just a handful of images. A model that's been "trained on 32 images achieves perfect realism and personalization score," researchers said in their paper.
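The Samsung system itself is a complex adversarial model, but the few-shot pattern it follows can be outlined briefly. The PyTorch sketch below is a toy illustration under loose assumptions, not the published architecture: random tensors stand in for real frames and rasterized facial landmarks, and a tiny convolutional generator plays the role of a network that would, in practice, already be meta-trained on a large video corpus before being fine-tuned on a handful of images of the new face.

```python
# Toy outline of few-shot talking-head adaptation in the spirit of the
# Samsung paper: a generator pretrained on many identities is fine-tuned
# on a handful of frames of a new person. Architectures, sizes and losses
# here are placeholders, not the published model.
import torch
import torch.nn as nn

class LandmarkToFrame(nn.Module):
    """Maps a rasterized landmark image (1 channel) to an RGB frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, landmarks):
        return self.net(landmarks)

generator = LandmarkToFrame()
# In the real system this model would already be meta-trained on a large
# video corpus; here it starts from random weights for illustration.

# Few-shot set: 8 (landmark, frame) pairs of the new person (random stand-ins).
k = 8
landmarks = torch.rand(k, 1, 64, 64)
frames = torch.rand(k, 3, 64, 64)

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for step in range(100):                       # brief fine-tuning loop
    opt.zero_grad()
    loss = nn.functional.l1_loss(generator(landmarks), frames)
    loss.backward()
    opt.step()

# Animate: drive the adapted generator with landmarks from another video.
new_pose = torch.rand(1, 1, 64, 64)
talking_head_frame = generator(new_pose)      # (1, 3, 64, 64) RGB output
```

The key design point is the split between the expensive pretraining stage, done once across many identities, and the cheap per-person adaptation, which is why a convincing talking head can be produced from as few as one to 32 images.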