Misinformation, cybersecurity among top issues ahead of 2024 elections
State and local officials face a new threat from the proliferation of deepfakes and misinformation, driven by artificial intelligence.
Election Day this year mostly went off without a hitch as voters in 10 states went to the polls to decide crucial ballot initiatives or elect new governors, state legislators, mayors or council members.
But the relative peace of 2023 appears poised to give way to major technological threats next year, as voters nationwide cast ballots in the presidential race, congressional contests and a host of state and local races.
Already, state and local officials have gotten a preview of how cyberattacks and artificial intelligence may affect elections next November. Last year’s midterm elections, for instance, saw the Mississippi Secretary of State’s website crash after what officials at the time called an “abnormally large increase in traffic volume due to [denial-of-service] activity.”
Meanwhile, the 2024 presidential election has already seen an influx of AI-generated content, including an ad from the Republican National Committee, or RNC, released in April in response to President Joe Biden’s reelection announcement.
While Mississippi’s elections system was not compromised, the denial-of-service attack offers a glimpse of what may be to come. And the Brennan Center for Justice has already warned that elections are at risk from AI, with the group suggesting the technology “could fuel the rampant spread of disinformation and create other hazards to democracy.”
The threat of new types of misinformation and cyberattacks is worrying state and local officials as they prepare for next year’s elections.
The Potential Growth of Deepfakes and Other Misinformation
The RNC advertisement is one of the most high-profile examples so far of how AI may be used in the 2024 elections. It was followed in the summer by an ad supporting Florida Gov. Ron DeSantis that used an AI-generated version of former President Donald Trump’s voice. Those and similar deepfakes prompted calls from lawmakers and advocacy groups for such advertisements to disclose when AI was used, whether in whole or in part.
New York Rep. Yvette Clarke, a Democrat, introduced legislation in May to regulate AI in political advertisements, while later that month Democratic Sens. Amy Klobuchar of Minnesota, Cory Booker of New Jersey, and Michael Bennet of Colorado followed suit in the Senate.
Months later, after a petition from the advocacy group Public Citizen, the Federal Election Commission agreed to look into potentially regulating AI-generated deepfake ads that misrepresent political opponents.
Even the White House appears to be weighing in on the issue. Biden’s recent executive order on AI said the administration “will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not.”
Some states are taking action as well. The Michigan state Senate passed a package of bills that would, among other things, require a disclaimer on political advertisements containing audio, video or images generated by AI.
But despite these efforts, some are skeptical that they will be enough to deter bad actors.
Samir Jain, vice president of policy at the Center for Democracy and Technology, warned during a recent webinar of a “really difficult information environment,” given that social media platforms and governments have fewer resources to dedicate to fighting misinformation and are reluctant to be seen as infringing on the First Amendment.
“The fact that these kinds of deepfakes and generative AI content might exist also has the indirect effect of undercutting trust in authentic content, because it becomes harder for voters and others to distinguish between what's real and what's not,” Jain said. “It makes the whole information environment a little bit more difficult to navigate.”
The public is concerned too. The Institute of Governmental Studies at the University of California, Berkeley, found in a recent survey that 84% of Californians are concerned about disinformation, deepfakes and AI. More than 70% said the state government has a responsibility to act to protect voters, while 87% said tech companies and social media platforms should be required to label deepfakes and other AI-generated content.
The public cannot be left simply to educate itself about the threat of AI, said Campbell Cowie, head of policy, standards and regulatory affairs at the software company iProov.
“An important part of the narrative is encouraging media consumers to be literate as to the risks that are out there,” Cowie said. “But I don't think it's fair to place the burden on consumer education. I think it's the solution providers and social media companies [that must do more].”
Those companies should, Cowie said, do a better job of sharing intelligence and insights on deepfakes and AI-generated content produced by malicious actors. That intelligence sharing will require a “genuine, roundtable-type of engagement” among all parties, along with a prioritization of which threats have the potential to be the most damaging. That should help mitigate what Jain painted as a “nightmare” scenario.
“The nightmare is that, a couple of days before the election in November there's some deepfake video of some kind that goes around that shows one of the candidates in some kind of compromising position or saying something really bad, and there's not enough time to sort of counter that,” Jain said. “So a number of voters go to the polls believing in that deepfake, and in a very close election, could even in theory swing the election.”
State and local officials will also play a key role in combating deepfakes and misinformation.
“It is the case that state and local officials are often the best, most effective voice to provide the right accurate information for their constituents,” Eric Goldstein, executive assistant director of the Cybersecurity and Infrastructure Security Agency, said at an event. “Our goal is to enable and empower our state and local partners to be that voice for their communities to ensure that the right information gets out.”
Cybersecurity Remains a Hot Topic for Elections Officials
In addition to worries about AI-generated deepfakes, cybersecurity is top of mind for state and local officials as they prepare for next year. Their concern is shared by the federal government: the Department of Homeland Security’s Intelligence Enterprise Homeland Threat Assessment said the agency expects the 2024 election cycle to be a “key event” in which cyber criminals will look to exploit networks and data used by political parties and elections officials.
Funding for cybersecurity protection remains the biggest challenge for state and local governments, even as CISA’s cyber grants continue to trickle down. That reflects a continued trend of underfunding for state and local election administration.
At a U.S. Senate Rules Committee hearing earlier this month, Arizona Secretary of State Adrian Fontes and Rutherford County, Tennessee, Elections Administrator Alan Farley both called for beefed-up federal funding to protect voting infrastructure.
Fontes said that while funds distributed under the Help America Vote Act are helpful, they are “intermittent and wholly insufficient to provide predictable and sustained support that local jurisdictions require.” He added that it is “concerning” that there do not appear to be any funds provided under the act in the next federal budget.
In addition to pushing for more funding, observers said state and local elections offices should ensure their workers follow good cyber hygiene practices, including regular training on how to avoid phishing emails, which remain the biggest vulnerability. Gary Barlet, former chief information officer at the Office of the Inspector General for the U.S. Postal Service and now federal chief technology officer at the security company Illumio, said offices would be well served by conducting that training more frequently, especially as Election Day approaches.
“As you get closer to an election, they should be ramping up,” Barlet said. “As you get closer to elections, that's when people are potentially getting more and more targeted.”
Some elected officials have emphasized the potential for voting machines to be hacked, but Barlet said the security of networks, servers and the methods used to tally votes is of just as much concern. Sometimes those vote-tallying efforts are low-tech and could be exploited, especially if someone inadvertently grants a malicious actor access.