Don’t rush into AI experiments, experts say
Speakers at Route Fifty’s latest Innovation Spotlight stressed the need to balance innovation in artificial intelligence with good governance, despite pressure for adoption to happen more quickly.
Since the release of ChatGPT nearly two years ago, artificial intelligence has worked its way into many people’s everyday lives. But when government workers, in particular, started using the chatbot and other generative AI tools to save time at work, states and cities sprang into action, putting up guidance and guardrails around their use.
Today, most states and cities have policies in place regulating the use of AI in government, and a handful of states have passed broader legislation this year addressing its use in elections or establishing consumer protections.
The most recent state to do so is Colorado, where a comprehensive AI consumer protection bill was signed last month, although Gov. Jared Polis and other lawmakers have already pledged big revisions after a backlash from technology executives and others.
But despite the flurry of executive orders, policies, pilot projects and initial use cases, it may feel as if progress is at a standstill, and as if the new use cases being rolled out, like chatbots and efforts to streamline processes, are less than revolutionary.
That’s a good thing, according to speakers on an episode of Route Fifty’s Innovation Spotlight on Leadership in AI. The panel of state and local government experts said it is crucial for governments to strike a balance between adopting the technology quickly and governing it well.
As such, these less-than-revolutionary use cases matter because they demonstrate what AI is capable of in low-risk settings.
“There's this general tension between adoption and governance,” said Nishant Shah, Maryland’s senior advisor for responsible AI. “You can have heavy governance, which then slows down adoption. But there is a certain moral case to be made for adoption. If this is the quickest way to improve services, if there aren't any other levers that can do that quickly, it's important that we leverage tools to get these things in the hands of residents and improve our services.”
Maryland is one of several early movers on AI: Gov. Wes Moore issued an executive order on the technology’s use in government early this year as part of a larger overhaul of state IT. The legislature also passed its AI Governance Act, which requires agencies to inventory their AI systems and bars them from using those systems under certain circumstances.
Similarly, Washington has issued interim guidelines for the use of generative AI following an executive order by Gov. Jay Inslee in late January. Katy Ruckle, the state’s chief privacy officer, said the government has taken advantage of existing procedures, like risk assessments and other impact assessments, to weigh the effects of particular AI uses, including on data privacy.
By doing that, Ruckle said, the state avoids the feeling that government must “use a car without brakes.”
“Especially for government, it's expected that we're going to need to be more careful with how we deploy these types of tools, especially with the public and using the public's data,” she said. “I think a little patience is expected of government in a way that maybe private industry is not held to quite that same standard.”
Building trust in the technology among employees and residents is key, too. Shah said that while AI may be scary for workers who fear it is coming to replace them, showing them the good it can do in low-risk situations, and being transparent about it, can help alleviate some of those fears.
“I think the strategy is execution in a lot of ways, where if you're able to be transparent at how we use these tools, and clear on what types of oversight and use cases, and then if people start seeing services actually get better because of the use, I think that's the best trust building exercise showing government getting better because of use of these tools,” he said.
It is also incumbent on government employees to not fall into the trap of “implicitly trusting the tools,” said Santiago Garces, chief information officer in Boston. That city was among the very first to produce guidance on its employees’ use of generative AI, and is already experimenting with the technology to produce the first drafts of official communications and job descriptions.
And Garces said there are more plans afoot. He said Boston recently used AI to generate short summaries of 16 years’ worth of city council meetings, which people can read before they view full meeting documents. That initiative still requires human oversight, Garces said, to ensure accuracy in what is being summarized. But in a bid to build trust, low-risk use cases like these can be “practice arenas,” Shah said, before taking on bigger challenges.
Garces called on government to “get its hands dirty” and test AI tools, especially in working out how to address ethical concerns like bias and privacy. And while it may be tempting to think development should go faster, it is not that simple.
“I'd imagine it’s going a little bit slower than some people would think,” Garces said. “But we're trying to be diligent. We're trying to be thoughtful.”