After AI’s busy 2024, expect more of the same next year
States are likely to keep trying to regulate the technology and to understand how it can benefit their governments and residents. But there are warning signs that rules seen as heavy-handed will remain unpopular.
This year, several states took bold steps toward regulating artificial intelligence, setting up a 2025 in which more are likely to follow, all while keeping an eye on what the federal government does next.
Colorado and California both took some of the biggest legislative action on the technology in 2024, to the concern of businesses in their states as well as their governors. Colorado Gov. Jared Polis in May signed sweeping legislation to create a regulatory framework for AI and set guardrails to protect consumers from harm and discrimination. But in signing the law, Polis noted he had “reservations,” and — amid opposition from the tech industry — amendments are likely.
Meanwhile, California considered legislation that would have required developers of large AI systems, those costing more than $100 million to train, to test whether their models could be used in extreme scenarios like terrorism or cyberattacks. Amid fervent industry opposition, Gov. Gavin Newsom vetoed that bill, saying it was not the “best approach to protecting the public from real threats posed by the technology.”
Those two bills represented some of the biggest attempts to regulate AI, but there were myriad other efforts too. The Business Software Alliance found that states considered almost 700 legislative proposals in 2024, while the National Conference of State Legislatures (NCSL) found that at least 30 states have issued guidance on their agencies’ use of AI.
If the past is prologue, states can expect a busy 2025 in regulating — as well as understanding and using — AI.
“National technology laws remain the best way to set clear and workable rules for high-risk uses of AI,” Business Software Alliance senior vice president of U.S. government relations Craig Albright said in a statement. “But state leaders have made clear that they are not waiting for Congress to act.”
Already, major moves are afoot. In Texas, Republican Rep. Giovanni Capriglione unveiled draft legislation aimed at preventing discrimination by AI systems, although it would exempt models used for research and testing under what he called a “sandbox program.”
Capriglione reportedly said his bill is designed to encourage innovation while protecting people, although opponents are already sharpening their knives. A letter earlier this month from a coalition of organizations said the legislation “imposes restrictive regulations and burdensome compliance costs.”
And New Jersey is looking to stake its claim as a leader in integrating AI into its government operations and training its employees on the technology. The state in November released a report from its AI Task Force, noting that thousands of its employees are already using an AI-driven assistant and training on the technology, and that the state is partnering with academia on research and workforce development, among other efforts.
The report recommended that New Jersey expand opportunities for AI training and literacy, promote a strong talent pipeline, address potential ethical issues like bias and discrimination, and foster collaboration across the public and private sectors and academia. Such efforts will take time, but state leaders said the work should start now.
Gov. Phil Murphy said in a statement that he would be implementing the task force’s various recommendations “in the coming months.” And Beth Simone Noveck, the state’s chief AI strategist, said AI “promises to be the most consequential, transformative technology since the Internet, but that promise is not a guarantee.”
Observers at other levels of government are excited about the opportunities presented by AI, too.
“There's a workforce crisis in all counties,” said Mark Ritacco, chief government affairs officer at the National Association of Counties. “But these artificial intelligence tools, when they're deployed intelligently and thoughtfully and safely, could help bridge the capacity gap when it comes to delivering government services, when it comes to applying for federal grants and so on.”
For state lawmakers and agencies, the next 12 months promise to be busy as more reports are issued, more bills are debated and the technology evolves. NCSL said it expects lawmakers to focus on balancing AI’s “risks and rewards,” and it expects states to continue studying the technology.
And Maggie Gomez, the Colorado legislative director at the nonprofit State Innovation Exchange, said the biggest progress could come in regulating deepfakes and other AI-driven misinformation, especially around elections.
Twenty states now have laws against the use of deepfakes in campaign communications, Gomez said, and more are likely to follow in what she described as an “emerging issue for legislators.”
“Congress has not been able to move quickly,” Gomez said. “With that being said, there are legislators that want to protect democracy, and they are on the front lines to do that in their states. They are in a unique position, as lawmakers and decision makers, to ensure that their states do have fair and free elections and that they're putting forth proactive policies to protect that. Often these types of policies around regulating AI, deepfakes, intimate deep fakes can often find bipartisan support.”
Meanwhile, questions about the incoming Trump administration’s plans loom large. While President Joe Biden’s executive order on AI did not directly impact states and localities, it guided some of their thinking as they assessed how agencies should embrace the technology.
But President-elect Donald Trump has vowed to repeal the sweeping executive order, with the 2024 Republican platform alleging it “hinders AI Innovation, and imposes radical leftwing ideas on the development of this technology.” With the incoming administration also appearing to favor a reduced federal role in areas like cybersecurity and broadband, some worry that rescinding the AI order could impact the technology’s development and regulation.
“AI is something of a generational shift,” said Sundaram Lakshmanan, chief technology officer at mobile cybersecurity company Lookout. “We have never seen anything like this before. We can talk about the internet and how it changed our lives, and industrial revolutions and all that, but AI is the next level. The regulations behind it need to be thought through more and invested in more.”