Proposed federal AI roadmap would fund local election offices
Amid warnings that artificial intelligence could “totally discredit our election systems,” a group of U.S. senators released a sprawling roadmap that includes grant funding to keep elections safe from AI.
Senate Majority Leader Chuck Schumer last month unveiled a roadmap for implementing and regulating artificial intelligence that included a pledge to help fund local election offices in their efforts to guard against AI and cybersecurity threats.
Schumer, a Democrat from New York, and three of his Senate colleagues, one Democrat and two Republicans, put a multibillion-dollar price tag on the roadmap, starting at $8 billion in fiscal year 2024 and rising to $32 billion by fiscal 2026.
Some of that money would be earmarked for local election offices “to support AI readiness and cybersecurity through Help America Vote Act Election Security grants.” The initiative comes as lawmakers continue to raise concerns about the potential impacts of AI on this November’s elections, especially its capacity to spread misinformation through deepfakes.
“If we are not careful, AI has the potential to jaundice or even totally discredit our election systems,” Schumer said during a Senate Rules Committee markup of three elections bills last month. “If deepfakes are everywhere and no one believes the results of the elections, woe is our democracy. This is so damn serious.”
In addition to funding election offices, the roadmap lays out two key policy priorities for lawmakers to focus on. One is advancing ways to make it easier for people to identify AI-generated or AI-augmented election content, whether through “watermarking” or by establishing rules for the provenance of digital content. It also calls for the implementation of “robust protections in advance of the upcoming election to mitigate AI-generated content that is objectively false, while still protecting First Amendment rights.”
The funding would be welcome news to local election officials across the country, who have warned of funding shortages, limited grant opportunities for election administration and, with more states banning private funds, no way to make up shortfalls.
In a unanimous February vote, the Election Assistance Commission, which administers Help America Vote Act grants, cleared the way for the grants to be used to counter AI-generated election disinformation. Commissioners said that use is permissible under a section of the law that allows for “improving the administration of elections for federal office.”
“Therefore, it would be reasonable to conclude that states may fund voter education and trusted information communications on correct voting procedures, voting rights and voting technology to counter AI-generated disinformation,” a memo from then-Acting General Counsel Camden Kelliher said after the vote.
In an email, commission Chairman Ben Hovland said the vote showed that the agency “recognized election officials’ need for support in this area.” He added that “the policy details how these funds may be used to curtail the escalation of AI-related disinformation negatively impacting election administration.”
The Senate Rules Committee ultimately advanced all three pieces of legislation under consideration at the May 15 markup: a bill to ban the use of AI to generate deceptive content falsely depicting federal candidates in political ads to influence federal elections; another to require ads created with or altered by AI to carry a disclaimer; and a bill requiring guidelines to help election administrators address AI’s impact on election administration, cybersecurity and election disinformation.
In a statement after the markup, Minnesota Sen. Amy Klobuchar, who chairs the committee, said AI “can have serious consequences for our democracy and we must work with urgency to put guardrails in place.” The bills’ prospects in the House are uncertain.
Meanwhile, the Federal Communications Commission is considering rules on the use of AI in elections. FCC Chairwoman Jessica Rosenworcel last month unveiled a proposed requirement for on-air and written disclosures in broadcasters’ political files when political ads contain AI-generated content, with the requirement applying to both candidate and issue ads.
Rosenworcel said in a statement that the intent is not to ban AI-generated content in political ads, but instead to “make sure consumers are fully informed when the technology is used.” The proposal is in its early stages and would have to go through a public comment period.
But FCC Commissioner Brendan Carr has already come out against it. In a statement, he said the FCC “can only muddy the waters” by getting involved in this issue, and “tilt the playing field.”
“Applying new regulations to candidate ads and issue ads but not to other forms of political speech just means that the government will be favoring one set of speakers over another,” Carr said. “And applying new regulations on the broadcasters the FCC regulates but not on their largely unregulated online competitors only exacerbates regulatory asymmetries. All of this confirms that the FCC is not the right entity to consider these issues.”
In the absence of firm federal regulation on AI’s use in elections, states have stepped up with their own laws, which are in various stages of enactment. But state election officials have previously urged the federal government to pass national rules.
At a March Senate Rules Committee hearing, Michigan Secretary of State Jocelyn Benson said the “biggest threat” to election security “is misinformation and disinformation designed to confuse voters and obfuscate the voting process,” with AI able to “amplify and expand exponentially these tactics and their impact.”
“The potential for malicious actors to exploit AI underscores the need to equip election officials with the essential resources and tools for effective preparation,” said Isaac Cramer, executive director of the Charleston County, South Carolina, Board of Voter Registration and Elections.