AI misinformation a ‘whole new area’ for elections officials to deal with
Intergovernmental collaboration will be crucial in fighting the proliferation of AI deepfakes this election cycle, especially in helping voters navigate increasingly sophisticated robocalls and political ads.
In the days before the recent New Hampshire presidential primary election, a robocall purportedly featuring President Joe Biden discouraged voters from going to the polls, and instead told them to wait until November’s general election to vote.
The call turned out to be a deepfake generated by artificial intelligence, and it falsely claimed to come from the treasurer of a political committee supporting the campaign to write in Biden's name in the state.
Complaints from residents prompted the New Hampshire Department of Justice to open an investigation into the call, which officials described as “an unlawful attempt” to disrupt the election and suppress the vote. The Federal Communications Commission also sent a cease-and-desist letter to the Texas-based company that facilitated the robocall, a company the New Hampshire DOJ said it had identified as well.
It was the latest in a string of AI-generated content related to the 2024 presidential election that threatens to overwhelm state and local elections offices, which have the primary responsibility for administering elections.
Biden has already been the subject of much AI-generated content, including a video on Facebook that made it look like he inappropriately touched his granddaughter’s chest after they voted in the 2022 midterms. The Meta Oversight Board, an independent body funded by Facebook’s parent company that makes content moderation decisions, said the video did not need to be taken down as it does not violate company policy.
Elections officials are concerned about the growth of AI-generated content and their ability to keep up, especially as this year’s races up and down the ticket heat up.
“When you start talking about AI, this is a whole new area that we have to educate ourselves on as elections administrators,” Nevada Secretary of State Francisco Aguilar said during a recent panel discussion at Columbia University. “And sometimes state governments, local governments don't have all the funding in the world to be able to deal with these new challenges.”
Concerns about election-related misinformation have prompted lawmakers to introduce bills at various levels to combat it. Federal legislators have looked to take the topic on in both the House and Senate, although they have yet to make progress.
At the state level, Michigan Gov. Gretchen Whitmer last year signed a law requiring political advertisements that have been generated wholly or substantially with AI to include a disclaimer disclosing that use. She also signed a bill that defined AI under state campaign finance law, and another law making it a crime for someone to knowingly distribute AI-generated material with the purpose of harming a candidate’s reputation or electoral prospects.
“As artificial intelligence becomes more intertwined with political advertising, it's crucial that we safeguard the truth in our elections,” Democratic state Rep. Penelope Tsernoglou said in a statement at the time.
A similar effort to regulate AI’s use in elections is underway in Colorado. The National Conference of State Legislatures noted that elections officials have always had to adapt to new technologies and that “people have always attempted to alter or misrepresent media to influence an election.” With the growth of deepfakes, policymakers and campaigners are “adjusting.”
To speed up that adjustment, Nevada’s Aguilar said during the Columbia panel discussion that it will be critical for the federal government to work closely with states and localities on AI and misinformation.
“Coming from the private sector into government is quite an awakening experience,” he said. “Things don't move as fast, so you can't be as innovative as you wish you could be in the private sector. This is going to have to be a partnership between the federal government and private sector, working with local governments to figure out what the challenges and the issues are, and then educating us on how to approach it.”
Biden’s recent executive order on AI addressed elections and misinformation, saying the administration “will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not.”
Alondra Nelson, a former acting director of the White House Office of Science and Technology Policy who is now a professor at the Institute for Advanced Study, said there is a lot that administration officials do not know about AI and the impacts it could have. But she said leaders can “move forward with the learnings that we have from past elections,” like deception and mis- and disinformation, even if evolving technology lets bad actors carry out those same tactics at far greater scale.
“We worried in the last election about bot armies on social media,” Nelson said during the Columbia panel discussion. “Now you can automate that kind of misinformation, and then also use social media to amplify and disseminate it. I think the intensification of things that we already are worried about are things that we need to keep an eye on.”
While there are concerns about Americans falling for deepfakes or other misinformation wrought by AI, other election leaders expressed their faith in the general public’s critical thinking skills.
“These things can be scary when you talk about an AI deepfake that might deter voters or discredit elections, but we're all confident in the American voter discerning fact from fiction,” said Donald Palmer, a commissioner on the U.S. Election Assistance Commission, during its 2024 Elections Summit. “Election officials will need to overcome these types of challenges in addition to the normal challenges they face with the increased attention in a presidential year.”