States clash over what responsible AI looks like
While some states are still establishing task forces and preparing to take advantage of the tech, others are more hesitant, warning of job losses and federal influence on a nascent industry.
President Joe Biden’s executive order on artificial intelligence set off an initial flurry of activity as states positioned themselves to take advantage of the yet-unexplored benefits the technology could offer. But now, some are shining a light on what they see as AI’s dark side and its potential for abuse.
Earlier this month, Utah Attorney General Sean Reyes and 19 other Republican state AGs sent a letter to Commerce Secretary Gina Raimondo in response to the Commerce Department’s request for information on developing guidelines to enable deployment of safe, secure and trustworthy systems. The AGs objected primarily to Biden’s AI executive order, which they said was the basis for the RFI, arguing it “moves in the wrong direction.”
The AGs called the executive order an effort to “centralize governmental control over an emerging technology being developed by the private sector.” The executive order, they wrote, “opens the door to using the federal government’s control over AI for political ends, such as censoring responses in the name of combatting ‘disinformation.’”
They also urged Raimondo “not to attempt to centralize control over AI being developed in the U.S. or otherwise create barriers to entry in this critical and growing sector of our economy.”
The letter argued that Biden does not have the authority to mandate testing and reporting requirements on companies developing AI systems and said that efforts to ensure AI is not used for “disinformation” could be subject to political bias from those looking to stifle speech. It also said that Biden was incorrect to cite the Defense Production Act in the administration’s supervision of AI development, noting that the administration can invoke the act only for military purposes.
To solve the complex and important issues posed by AI, the AGs asked the administration to “work with Congress and states across the political spectrum to find bipartisan solutions that can help our country harness the power of AI and use it for the good of all, rather than only for one political party or specific groups of people.”
The letter comes as states attempt to parse out the potential impacts of AI. Oklahoma’s AI task force, for example, produced a report in late January that made a series of recommendations for state government, including appointing a chief AI officer and establishing an oversight committee and various other task forces to build workforce skills, recruit talented individuals and leverage the state’s AI infrastructure.
But in a contradiction to what many state leaders have been saying publicly—that AI will be used to augment, not replace, workers—the Oklahoma report also said AI’s use in performing administrative tasks and other digital services could reduce the share of the state’s population employed by federal, state and local government from 21% to close to 13%, which Gov. Kevin Stitt said in a statement is closer to “the ideal percentage.”
That calculation is based on an analysis of Japan’s economic model, after the country began integrating robots into its manufacturing workforce in the 1980s and found that one robot could do the work of three human employees.
Those efficiencies would be found by automating tasks like managing residents’ inquiries and using chatbots instead of call centers staffed by humans. “Artificial intelligence creates possibilities for more efficient employment and government services,” Stitt said.
Legislators are already flexing their regulatory muscles on AI, too. In California, long held up as a leader on innovation and new technologies, State Sen. Scott Wiener introduced a bill that he said would build on the existing executive action taken by Gov. Gavin Newsom and establish clear safety standards for those who develop the most powerful AI systems.
“Large-scale artificial intelligence has the potential to produce an incredible range of benefits for Californians and our economy—from advances in medicine and climate science to improved wildfire forecasting and clean power development,” Wiener said in a statement. “It also gives us an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding or mitigating the risks.”
The bill also would establish an initiative known as CalCompute, a public cloud computing network to allow startups, researchers and other groups to participate in developing large-scale AI systems.
Beyond legislative action, other leaders are still looking to wield the power of their executive office to harness AI. Washington, D.C., Mayor Muriel Bowser recently signed a Mayor’s Order outlining the city’s Artificial Intelligence Values Statement and Strategic Plan. The order provides a “robust roadmap for integrating generative AI into government operations and positions DC to be a nationwide leader in AI adoption,” city officials said.
That plan is to ensure that the district government’s use of AI aligns with six core values, Bowser said, which are Clear Benefit to the People, Safety & Equity, Accountability, Transparency, Sustainability, and Privacy & Cybersecurity. The order also established an advisory group on alignment with those AI values, and convened a task force to produce government policies and procedures, with a view to each agency developing a specific AI strategic plan.
“We are going to make sure DC is at the forefront of the work to use AI to deliver city services that are responsive, efficient, and proactive,” Bowser said in a statement. “With these guiding values, we will make sure that when we use AI, we are responsible and we use it in a way that aligns with our DC values.”
Alabama Gov. Kay Ivey took a similar approach in her recent executive order establishing an AI task force and providing for the responsible use of generative AI in state government. Ivey’s order said the task force will look to understand current generative AI uses in state agencies, encourage the technology’s “responsible and effective use” and recommend policies and procedures for the future. It is mandated to submit a report by Nov. 30.
Ivey’s order also calls for an inventory by May of all instances of generative AI being developed, procured or used by each state agency. It mandates that the Office of Information Technology establish the cloud infrastructure necessary for generative AI pilot projects, which all agencies should consider in consultation with private and public sector experts.
While Ivey acknowledged there is a long way to go in understanding the technology, she said it is imperative to start as soon as possible.
“I am not going to stand here and preach like I know a lick about AI,” Ivey said during her recent State of the State address. “However, I do know that new technologies can have benefits, but if not used responsibly, they can be dangerous. We are going to ensure that AI is used properly.”