AI is a ‘catalyst’ to solve ‘long-standing challenges,’ state leaders say

While it may be tempting to focus only on use cases for the technology, speakers at the Code for America Summit said this moment is also an opportunity to improve data, service design and workforce development.
Leaders remain excited about artificial intelligence’s promise to revolutionize state and local government as they push forward with use cases that automate processes, streamline services and answer residents’ questions, among other applications.
But even amid public-sector employees’ growing comfort with the technology, state officials urged governments to take advantage of this moment to shore up their broader technology foundations. By cleaning up data, rethinking how services are designed and preparing employees for the future of work, states can raise what Nishant Shah, Maryland’s senior advisor for responsible AI and head of AI enablement, described as their “AI IQ.”
“There's incredible progress, but incredible hype, and we should all be leveraging that hype and this press release pressure that comes from that in order to fix the basics,” Shah said during a panel discussion at the Code for America Summit in Washington, D.C., last week. “These are like a good privacy program, an API strategy, your data infrastructure, a good third-party risk management program, all these things that maybe weren't as sexy prior and didn't get the attention and love that they could have.”
Experts have spoken repeatedly about the need for governments to improve their data so that it is clean, standardized and ready for use by AI tools to help make decisions and recommendations. Not doing so could mean incorrect outputs, hallucinations and bias, all of which leaders want to avoid as they look to the technology to help them improve back-office processes and constituent services.
Similarly, cybersecurity and privacy remain major concerns, especially when it comes to ensuring AI tools are safe to deploy. It may be tempting to simply invest in the technology and hope it works, but states must have the right foundations in place, leaders said on stage.
“AI is really a catalyst to solve some of these long-lasting challenges that you're dealing with, as far as service design, as far as how data is being neglected,” said Nikhil Deshpande, Georgia’s chief digital and AI officer. “This is a great opportunity.”
A key part of that will be making sure states’ workforces are prepared for what is to come. Deshpande said it is crucial that employees are educated on the technology, on what it can and cannot do, and on the fact that it is not coming to take their jobs but to make them easier. That is especially true when bringing in new, younger employees.
“If you are at a point where you can actually define a junior person's career getting into your organization, I think this is a good opportunity to figure out how they coexist with AI,” Deshpande said. “It's not just enough to say, whatever a junior person was doing, now AI can do it. We have to have some pathway in, so look at the future.”
Training up staff on AI can take many forms, too. Josiah Raiche, Vermont’s chief data and AI officer, said government lawyers had staff put a disclaimer at the end of AI-generated text disclosing how and when it was generated. That disclaimer is a good reminder to employees to check AI’s work, as it is not infallible, and can help them spot where human intervention is still needed, he said.
“In three or four years, all our employees are going to be using AI, and we need them to be ready for that, because AI doesn't throw up a little red error box when something goes wrong,” Raiche said. “AI just starts making stuff up.”
Raiche said he wants his employees to view AI as a “power tool” that can help in certain areas but must be used with caution and proper supervision.
“Instead of having the idea that we're going to build autonomous robots that do swathes of our work or replace our workforce, you're instead talking about what does it look like to give somebody a tool that lets them zoom out from the mechanics of doing the thing to thinking about the outcomes,” he said. “What does it look like to have a power tool that's designed for a specific purpose and is being used in the context of a human process to make that work better.”
Policymakers, too, will have to be more agile in regulating the fast-evolving technology. Shah said that will require a sea change from how lawmakers have legislated virtually everything else, but it is necessary given how quickly the technology is changing. It may also prove difficult under a House bill that would prevent states from enforcing AI regulations for 10 years, though that measure’s future remains uncertain.
“This stuff's moving quickly,” he said. “The way we've done policy for a long time is like a big bang: Here's a policy that covers everything. You put it out, you revisit it a few years from now, and you make some updates at that time. That's just not going to work in this space. We are trying to find ways to shorten that feedback loop on policy and make that much more rapid.”