States explore uneven approaches to AI regulation

New York Comptroller Thomas DiNapoli (right) speaks during a recent event. DiNapoli dinged several state agencies in an audit and said they need more guidelines on how to use AI. Newsday LLC via Getty Images
New York Comptroller Thomas DiNapoli warned that his state is unprepared to use the technology and needs more guardrails, while Virginia Gov. Glenn Youngkin vetoed a bill that would have created guardrails in his state.
New York State Comptroller Thomas DiNapoli issued a startling report early this month, warning that state agencies using artificial intelligence need more guidance. His stance stands apart from that of other states dabbling in AI regulation, which have favored a more hands-off approach.
In an audit of the state’s Office for the Aging, Department of Corrections and Community Supervision, Department of Motor Vehicles and Department of Transportation, DiNapoli warned that New York’s centralized guidance on AI is “inadequate, and creates a risk that the technology could be used irresponsibly.” He also noted that the state has no inventory of the AI tools it is using.
It is DiNapoli’s second audit of New York’s use of AI in government, following a 2023 audit that found New York City’s AI governance was not effective and had led city agencies to develop their “own, divergent approaches” to the technology.
“This audit is a wake-up call,” DiNapoli said in a statement. “Stronger governance over the state’s growing use of AI is needed to safeguard against the well-known risks that come with it.”
New York’s use of AI is governed by the Office of Information Technology Services, which issued its Acceptable Use of Artificial Intelligence Technologies Policy last year. But the audit found a “disconnect” between that AI policy and how individual agencies understand AI. The overarching policy lacks detailed guidance and urges agencies to lean on federal guidelines if they need more information.
Agencies are also left to determine for themselves what responsible use of AI means, which DiNapoli called a “major problem.” The state is still developing a central inventory of AI systems and has left agencies to carry out their own risk assessments, reporting and compliance. DiNapoli said this is inadequate, as there is “no mechanism” to ensure such assessments are done properly.
While New York favors a strong, government-led approach to AI, other states are taking a different view. Montana Gov. Greg Gianforte this month signed the state’s Right to Compute Act, a first-in-the-nation law affirming residents’ rights to use computing technology and preserving their digital freedoms, including the right to use AI, cloud and other technologies without government interference.
The law does not strip the state government of its authority to regulate, but it raises the bar: any proposed regulation of those technologies must be shown to serve a “compelling government interest” in protecting public health and safety. Various groups have expressed support for the bill. Tanner Avery, policy director at the free market think tank Frontier Institute, said in a statement that Montana has “planted a flag in the ground, affirming that here, we will treat attempts to infringe on fundamental rights in the digital age with the utmost scrutiny.”
New Hampshire is considering similar legislation as the Right to Compute movement — which looks to defend individuals’ rights to use computational technology — appears to be gathering momentum.
Virginia is charting a similar course, with state leaders rejecting AI regulations they deem too onerous. Gov. Glenn Youngkin vetoed legislation in late March that would have created a regulatory framework for businesses that develop or use AI systems deemed “high risk.”
The bill would have made Virginia the second state, after Colorado, to pass a comprehensive AI law; Colorado’s law may itself be tweaked soon. Separately, California Gov. Gavin Newsom vetoed a bill last year that would have required developers of AI systems to test whether they could be used in various extreme scenarios.
Youngkin suggested that heavy government regulation of AI is not the way to proceed and talked up the executive action he has already taken. In his veto message, Youngkin said the bill “would undermine this progress, and risks turning back the clock on Virginia’s economic growth, stifling the AI industry as it is taking off.”
The contrast between states that want more AI regulation and those that prefer a hands-off approach to let businesses innovate shows how uneven state-level regulation of AI remains. Meanwhile, the federal government’s role in regulating the technology is still uncertain.
Some outside groups and elected officials believe that states and local governments will lead the way on AI regulation, and that the federal government needs to provide some guardrails to help.
In response to a request for information from the Office of Science and Technology Policy, the center-left NewDEAL Forum’s AI Task Force argued that AI can help make agencies more efficient and also called for better information sharing between the various levels of government to avoid a fragmented approach to the technology.
“What we don't want to see are a variety of different guardrails, policies and practices happening at the local level, and then something different coming from the federal government,” said Albany, New York, Chief City Auditor Dorcey Applyrs in a recent interview. “What people really want to see is this working in tandem, working in partnership, so that we are all singing from the same hymn sheet.”
But others remain unconvinced. In his veto message, Youngkin warned governments not to be too heavy-handed, lest they scare off business.
“The role of government in safeguarding AI practices should be one that enables and empowers innovators to create and grow, not one that stifles progress and places onerous burdens on our Commonwealth’s many business owners,” he wrote.