States must ‘keep delivering’ amid new Trump AI order

President Donald Trump after signing an executive order in the early days of his term. Trump moved quickly to rescind former President Joe Biden's executive order on artificial intelligence and issue his own. Anna Moneymaker via Getty Images
The pace at which the new administration is rescinding previous guidance and implementing its own might make leaders’ heads spin. But experts said they cannot be distracted from their own missions.
President Donald Trump got his first week back in office off to a fast start with an executive order that rescinded former President Joe Biden’s order on artificial intelligence. Then three days later, he signed his own AI order.
It made for a head-spinning few days for the tech community and for state and local governments, which had based a lot of their approach to AI on federal guidance.
But while the impact of the federal government’s shifting mandates remains unclear, states and localities have made significant moves on AI and will continue doing so, experts said. That will hold regardless of what the Trump administration and federal agencies do next, and it shows how important the technology has already become.
“At the end of the day, it's about getting the job done,” said Abhi Nemani, a former chief data officer for Los Angeles who is now senior vice president of product strategy at software company Euna Solutions. “No matter what's happening at the federal or state level, you just constantly keep delivering. In this case, governments are starting to use AI to deliver, and they're not going to stop.”
Trump’s order indicated that the federal government might take a more hands-off approach than it did under Biden. A fact sheet said the previous mandate hindered “AI innovation and impose[d] onerous and unnecessary government control over the development of AI,” and by doing so it “hampered the private sector’s ability to innovate in AI.” The order also said that AI development should be “free from ideological bias or engineered social agendas.”
Nemani said state governments’ responses to AI have already been “quite a science fair,” through both executive and legislative action, with more to follow as task forces and committees report their findings on potential use cases and legislators start to codify laws on the technology. AI has now become a major priority for state tech leaders, according to the National Association of State Chief Information Officers.
But it could be a challenge for lawmakers to find the right balance between encouraging innovation and mitigating some of AI’s biggest risks, like bias and unethical behavior, as well as its potentially harmful uses, such as terrorism and cyberattacks. Some states have already tried to address those worst-case scenarios, including in California, where Gov. Gavin Newsom vetoed sweeping legislation last year.
The California bill would have required developers of large AI systems to test whether they could be used in various extreme scenarios, and it pitted the state’s powerful technology industry, politicians and academics against each other, as one side wished to regulate while another wanted to be left alone to innovate. Newsom’s veto put an end to that debate, and opponents of the bill said it was the wrong approach in the first place.
The Texas House has a similar bill pending.
“More limited, incremental, risk-based governance approaches make far more sense both for normative and practical reasons,” Adam Thierer, a resident senior fellow at the right-leaning R Street Institute, said during a January event hosted by the Information Technology and Innovation Foundation. “The focus of technology policy should be on concrete, identifiable, real-world harms, not hypothetical worst-case scenarios and open-ended fantasies that might be pulled from the pages of science fiction novels or television shows.”
Instead of trying to use policy and legislation to protect people from the worst possible scenarios, Nemani said lawmakers should think of such guardrails more flexibly. Guardrails are only really tested, he said, once things are built and built quickly.
“In reality, the guardrails are always moving; it's a constantly evolving yardstick,” he said. “I think that's okay, but we have to look at it in a different direction. It's not like the guardrails are going to protect us from every possible misuse. They help us understand where we are and where we need to adapt.”
Given all that has happened so far, it will be interesting to see how states’ AI efforts evolve and how their leaders adapt. Biden’s order emphasized that federal agencies should hire chief AI officers, but Nemani said Trump’s rescission of it could help create a “regulatory white space” for other tech leaders to step into, especially at the state and local level.
The effects of all this upheaval will not be clear for some time. Hodan Omaar, a senior policy manager at the Center for Data Innovation, warned during the ITIF event of the “chaos of America's AI governance approach,” including as states make their own policies, which “detracts from a clear national direction on how to effectively govern these systems.”
“People often talk about the U.S. approach to technology regulation as a patchwork of red and blue states and purple states that makes it hard for businesses to operate… but when it comes to AI it's even messier than that,” Omaar said. “[There] is no clear red vs. blue divide on how to regulate AI at the state level. It's a free-for-all.”
According to Asha Palmer, senior vice president of compliance solutions at software training company Skillsoft, one thing is certain despite the two orders’ policy differences: AI will continue to be a major priority at all levels of government, regardless of ideology.
“What is encouraging is the consistency between administrations about the focus on AI,” she said. “We can all agree that the two administrations don't agree on much, but the fact that they both agree that AI is the way of the future and should be a priority for our government agencies to increase effectiveness and efficiency is a good sign.”