Regulate AI? Here’s what states need to know
A new report by the National Conference of State Legislatures provides a primer for lawmakers on how they might approach oversight of artificial intelligence.
An increasing number of state legislatures are grappling with what to do about the rapid rise of artificial intelligence.
In the absence of federal legislation, some states have passed laws to protect citizens from the potential harms of AI, for example by mandating assessments of whether its use is leading to discrimination or by requiring public disclosure when it is being used.
But the majority of AI measures passed by legislatures have simply created task forces to advise states as they get up to speed on the complex issues involved, according to a new report by the National Conference of State Legislatures released Thursday.
Lawmakers are wrestling with a “critical balancing act,” said California Assemblymember Jacqui Irwin on a call with reporters. AI is increasingly being used in ways that can affect people’s lives, she said, from reading people’s body language during job interviews to approving them for loans or housing.
“AI is no longer a science-fiction movie concept,” she said. State lawmakers not only need to ensure agencies are using AI responsibly, “but also protect our constituents as they engage with private sector businesses.”
States want to do all this while also fostering the nation's leadership in a technology that can help both the public and private sectors run more efficiently.
“California is the home of Silicon Valley,” Irwin said. “We want to put on guardrails. We want to protect privacy and we want to talk about equity. But we also don't want to stifle innovation.”
The Benefits of AI
Irwin and Kentucky Sen. Whitney Westerfield cited a report by PricewaterhouseCoopers that says AI could increase global GDP by 14%, or $15.7 trillion, by 2030.
Other benefits of AI include making investing, portfolio management, loan applications, mortgages and retirement planning “more efficient, less emotional and more analytic,” the NCSL report said.
AI could also be used to prevent fraud. Algorithms in medicine could lead to preventative steps that keep patients out of the hospital. In criminal justice, the use of AI for gunshot detection and crime-mapping could help police “solve crimes more quickly and keep communities safer,” the report said.
AI is already being used in the development of autonomous vehicles, helping cars brake and change lanes to reduce collisions.
Cities are using AI to improve customer service, urban and environmental planning, energy use, and crime prevention. States, too, are using the technology to handle large volumes of data and improve customer service. Georgia's labor department, for instance, is using a virtual assistant, the George AI chatbot, on its website. Some Western states, like California, Nevada and Oregon, are using AI to monitor images from cameras in forests and mountains to spot wildfires.
“We're seeing lots of really good examples, very interesting applications and uses for it,” Westerfield said. “But we're also seeing some bad applications.”
Because AI is designed by humans who “make decisions that are based on emotions,” the report said, “there is a risk that such algorithms can contain bias and inaccuracies.”
What States Can Do to Regulate AI
The report, which is aimed at informing state lawmakers about the issue, laid out a number of ways that legislatures can address the emerging technology.
Lawmakers could take a narrow approach by dealing with specific issues of concern, such as regulating the use of AI applications in hiring, according to Sorelle Friedler, formerly a member of the Biden administration’s White House Office of Science and Technology Policy. “Americans may not want employers to track movements or facial expressions,” the report quoted Friedler as saying. Instead, they may want “hiring decisions to be made by a person and not a program.”
States could hold off on creating new regulations and wait to see how current laws are affecting the use of AI in “consequential” areas like employment, education, housing, health care or criminal justice.
They could require that agencies assess whether their use of AI is leading to the “unjustified different treatment of certain populations.”
And states could pass transparency laws that require that people be told if AI is being used to make, say, hiring decisions.
A Look at the Laws States Have Already Passed
The number of bills introduced in state legislatures to address AI has been increasing, though not many have passed, according to the report. In 2020, at least 15 states introduced artificial intelligence bills and resolutions, but measures passed only in Massachusetts and Utah.
This year, however, has brought an “uptick” in bills, with at least 25 states, Puerto Rico and the District of Columbia introducing measures on AI. Of those, 14 states and Puerto Rico passed bills or resolutions.
Many of the bills that have passed over the last several years look to protect people’s rights.
A measure passed by the Illinois General Assembly in 2020 required employers to notify applicants if AI would be used to analyze a videotaped interview.
Colorado in 2021 prohibited insurers from using algorithms or predictive modeling in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.
Connecticut this year required the state's Department of Administrative Services to begin assessing state agencies' use of AI to ensure it does not lead to discrimination.
But not all legislation has been designed to restrict the use of AI.
A law in Mississippi required robotics, AI and machine learning to be taught to K-12 students. Maryland, meanwhile, created a grant program to help small and medium-sized manufacturing companies implement new AI technologies.
But the majority of the laws passed have created task forces. Given AI's complexity, many legislatures, including those in Louisiana, North Dakota, Puerto Rico, Texas and West Virginia, are looking for guidance on what to do.
Task forces are a common approach in addressing emerging technologies that lawmakers might not know much about, said Susan Frederick, senior federal affairs counsel for NCSL.
Kentucky’s Westerfield echoed that sentiment. “Legislators are rushing headlong into policymaking decisions about AI. And I fear not all of them are familiar with AI and its benefits and its risks,” he said. “I'm still learning, and I'm as big a nerd as I know.”
Westerfield cautioned lawmakers to take their time. While there are some issues that require lawmakers to act quickly, he said, this is not one of them.
“We want to make sure that people are doing this responsibly, and that the technology is able to do what we count on it to do safely,” he said.
Kery Murakami is a senior reporter for Route Fifty, covering Congress and federal policy. He can be reached at kmurakami@govexec.com.