Deepfake porn: The ugly side of generative AI, and what states can do about it
Policymakers are scrambling to rein in runaway deepfake content featuring nonconsenting victims in explicit images and videos.
It used to be that when someone wanted to indulge their personal pleasures, a Playboy magazine tucked under the mattress, an adult site on an incognito browser or even just one’s imagination was the way to go.
But now with rapid advances in generative artificial intelligence, all it takes to create personalized pornographic material is a single photo that can be manipulated into explicit images or videos of an unsuspecting victim. The capability has become so widespread that state officials trying to control the spread of these images face an uphill battle.
Since the launch of DALL-E and ChatGPT last year, generative AI tools have been democratized, becoming more affordable and accessible to the general population, said Michelle De Mooy, director of the tech and policy program at Georgetown University’s McCourt School of Public Policy.
While generative AI has been lauded for its potential to improve productivity by synthesizing and analyzing data, writing reports or improving public-facing chatbots, its ability to concoct nonconsensual synthetic images, videos and even audio of a real person—called deepfakes—reveals an uglier side to the innovative technology.
The explosion of pornographic deepfakes, for instance, has ensnared countless victims, predominantly women and underage girls, whose bodies and likenesses were used to create explicit content, often without their consent. In fact, more than 100,000 deepfake videos were uploaded to websites designed to host such content in the first nine months of 2023 alone, compared with just 73,000 uploaded in 2022.
But in many cases, deepfakes spread without ever appearing on public websites or social media platforms. Late last year, a New Jersey high school made headlines after some students made fake, explicit images of female classmates and circulated those pictures among their peers. Teenage girls in a Seattle suburb also fell victim to deepfakes after a male student reportedly took photos of his female classmates and used AI to create nude images of them.
Even celebrities like Taylor Swift have faced the exploitation of their images, after a recent post featuring explicit images of the pop star went viral on X, formerly known as Twitter. It racked up more than 45 million views before X officials took the post down, citing the platform’s zero-tolerance policy on nonconsensual nudity.
Instances like those have turned up the heat on legislators to address the largely unregulated landscape of generative artificial intelligence.
About a dozen states have introduced legislation on pornographic deepfakes as advocates have called on policymakers to address the controversial use of generative AI, said Daniel Castro, director of the Center for Data Innovation.
Policymakers are paying more attention to AI exploitation because it affects people in schools, at work and in government, where it amounts to a form of harassment and abuse, he said.
Deepfake images, intentionally or not, often shame or humiliate the victim, making it a public health issue, De Mooy said. Those feelings impact a person’s well-being, especially when such images could also put them at risk of being stalked or harassed by those who view the manipulated image.
Leaders are particularly concerned about deepfakes that target children. In January, some states, including Arizona, Ohio and South Dakota, introduced legislation to prohibit the creation and distribution of pornographic deepfakes that depict minors. Ohio’s bill also includes restrictions against creating erotic images of digitally created children and requires products made with AI to have a watermark.
Over the summer, Louisiana Gov. John Bel Edwards signed a bill criminalizing the creation and possession of deepfakes featuring minors in sexual scenarios. The law also makes it illegal to advertise, distribute, exhibit, exchange, sell or promote nonconsensual deepfakes depicting children or adults.
But barriers remain for states looking to enforce such laws, De Mooy said.
Policies that regulate the production or dissemination of explicit fake images would also have to comply with a bevy of free speech protections, she said. For example, Section 230 of the federal Communications Decency Act shields online platforms from legal liability for content produced by their users, creating a barrier for state policymakers who want to regulate artificially generated explicit images.
Plus for law enforcement officials, “it’s hard, if not impossible, to track or identify the creators of this kind of content [because of] the anonymous nature of the way that they’re created,” De Mooy said.
Users can hide behind social media accounts under assumed names to share explicit AI-generated or AI-enhanced images, making it impossible for law enforcement to take legal action if they cannot identify the creator. Even if someone is aware that deepfakes of them are being circulated, there’s little legal action they can take, De Mooy said. Victims can’t bring a civil lawsuit against a perpetrator without having “a person with a motive who you can point to and say, ‘This is the person who did this to me.’”
States should continue pushing legislation to support the victims of AI-generated deepfakes, but policies should also be updated to make it easier to identify and prosecute bad actors, De Mooy said.
Companies that provide generative AI resources should be required to include a tool or mechanism that can identify or label content as being manufactured, she said. Otherwise, it’s hard for victims to prove that a picture of them has actually been manipulated.
Despite slow movement toward fully effective policies, De Mooy said, the more states that introduce and pass laws against explicit deepfakes, the more pressure there is on the federal government to address the issue as well.
“Companies hate when there’s a lot of different state laws,” she said, because it requires them to navigate a patchwork of regulation. That could push the federal government to implement a standard regulation, “but we’ll see.”