Boston looks to boost employee productivity with generative AI guidance
The city’s chief information officer said the effort, among the first in the nation, is designed to encourage workers to “experiment responsibly” with the technology.
One city’s recently released guidelines for using generative artificial intelligence are designed to encourage employees to “experiment responsibly” with the technology while providing guardrails, a leader of the effort said this week.
Boston released an initial version of its interim generative AI guidelines last month, one of the first cities in the U.S. to do so. Seattle quickly followed suit with its own recommendations. Boston’s guidelines emphasize that generative AI is a tool, but that humans are still responsible for any outcomes it produces. “Technology enables our work,” the recommendations say, “it does not excuse our judgment nor our accountability.”
Santiago Garces, Boston’s chief information officer, said that city employees have so far used generative AI to boost their productivity on basic tasks, like writing the first draft of a letter or a job description.
“It's a tool that gets you maybe 70% or 80% of the way there, but you still need to have expertise to be able to discern whether it's giving you things that are correct, whether it makes sense in the context of the city,” Garces said in an interview during GovExec’s State and Local Government Tech Summit.
Beyond emphasizing that generative AI is a tool to supplement the work of human beings, Garces said local leaders also wanted to make sure the city is transparent with residents about where the technology is used. To that end, the guidelines are written in a jargon-free format, Garces said, as “one of the core pieces of feedback” was to ensure the technology is explained as clearly as possible.
The emphasis on generative AI being a tool whose output must be fact-checked is also meant to guard against bias, one of the biggest risks users face. Having people “on the hook for the outcomes” should encourage caution in its use, Garces said.
It also will be imperative to train staff on how generative AI can be used maliciously, such as to craft more convincing phishing emails that could be used to hack into city systems. Still, with Boston looking to be a leader on AI technology at the urging of Mayor Michelle Wu, Garces said he is hopeful that the entire city government will be aware of the associated risks and adjust accordingly.
In addition to the guidelines, the city bans employees from using confidential or personal information when experimenting with generative AI. Boston has an enterprise IT contract covering Bard, Google’s generative AI tool.
While Boston is initially using generative AI to improve employee productivity, Garces said there are several other potential use cases for the technology in the coming months and years. As AI matures, he said, it could help simulate community meetings, so that officials can prepare for the feedback they may receive from residents and businesses about projects that affect them.
“They're things that would be really complex to program, and that behavior is still not a replacement for community meetings,” he said. “It validates the idea that when you create the right space, you can have this experimentation and have people find new and novel ways that are beneficial to the city.”
While federal regulation and guardrails for AI’s use would be the ideal way to proceed, Garces said other states and cities wishing to follow Boston’s lead can do so by borrowing from its existing guidelines. Most importantly, he said, state and local leaders must build trust by educating residents on how generative AI can be useful and how it will change jobs rather than take them away.
“This technology,” Garces said, “behooves us not to ignore it.”