Report: How local governments can prioritize responsible AI adoption

Local officials can play a role in how AI tools are adopted and deployed across government agencies, even amid the federal government’s efforts to ease regulations on the tech, a new report says.
The U.S. House of Representatives recently approved a budget bill that includes a potential 10-year ban on efforts by states and their political subdivisions, such as local governments, to regulate artificial intelligence, a move critics say would further erode attempts to mitigate the technology’s potential harms.
Despite such challenges for regulation, local governments still have tools to help ensure they responsibly adopt and implement AI technology, according to a new report from the Local Progress Impact Lab and AI Now Institute.
“A lot will depend upon how courts interpret the breadth of the preemption language in the reconciliation bill if it should pass,” said Hillary Ronen, report author and former San Francisco District 9 supervisor, in an email to Route Fifty. “But the powers underlying several of the proposed areas of action in the report should remain even if preemption occurs.”
The federal government, for instance, cannot force local governments to use or procure AI products, so jurisdictions can leverage their purchasing power in the marketplace to influence which systems are deployed, she said.
“Local governments should also continue to have land use powers over if, where and under what conditions data centers are permitted within [their] jurisdictions,” Ronen said. “For those local governments that oversee municipal utilities, they should continue to have control over whether to serve new data center customers and under what conditions.”
Local leaders “need to start taking AI really seriously, because it’s impacting our constituents every which way,” from housing to public benefits administration and more, she explained in a separate interview.
Ronen pointed to a recent report from the Center for Democracy and Technology that found only 21 out of roughly 22,000 cities and counties across the nation have public-facing AI use policies, despite the technology’s rapid growth in government systems and operations.
Workplace surveillance, for instance, is one area where AI could become increasingly common in the public sector, Ronen said. In the private sector, AI systems are already used to track staff productivity, monitor how employees interact with customers and inform hiring-related decisions, according to the report.
But the risks of AI-enabled workplace management in the private sector serve as a warning for the public sector, she said.
Research shows that using AI in this way can increase employee stress, decrease job satisfaction and even reduce productivity, which can ultimately affect how government agencies serve and interact with constituents. It can even deter people from seeking public sector jobs, Ronen said.
Local governments can take several steps to help ensure the responsible adoption and implementation of AI, and doing so can help build employee and constituent trust in governments’ use of the technology, she said.
One way is to develop transparency policies governing which AI products a jurisdiction is allowed to procure and for what purposes, the report stated.
In California, the city and county of San Francisco passed an ordinance last year requiring government agencies to report the AI products they use, along with those products’ potential impacts, to a central repository for transparency.
Local governments should also, for instance, request that vendors disclose what data was used to train their AI models, potential biases in their products and what tests have been performed to ensure their accuracy and reliability. Policies can include enforcement measures, such as prohibiting agencies from procuring an AI product if a vendor does not agree to such terms, the report stated.
Directly involving agency staff in decisions regarding the use of AI is another crucial way for local leaders to support its responsible rollout, Ronen said.
“The success or failure of introducing AI in local government requires workers’ knowledge and ongoing input regarding use of the new technology and its impacts within municipal departments and upon the public at-large,” she wrote.
Government officials should consider holding hearings or forums to collect staff feedback and concerns about how AI will affect their jobs, according to the report. These conversations can also help leaders gauge their workforce’s knowledge and expertise regarding AI, which can inform how the technology is used.
Local officials can also turn to their land use authority to encourage the responsible adoption of AI technologies, particularly as the public’s concern over data centers’ environmental impacts continues to grow, according to the report.
In Virginia, Prince William County officials voted in March to update a decades-old noise ordinance to address potential noise pollution and disturbances caused by data center construction and operation in the community.
Officials can also address AI’s impact at the individual level, the report stated. In California, the city of San Jose updated its generative AI guidelines last month to direct city staff to “limit the environmental impact of your generative AI usage.”
Under the guidelines, employees are urged, for example, to use traditional search engines where possible and to prompt generative AI models to produce shorter responses, such as bullet points.
Ultimately, it’s clear that local governments “don’t want to stop this technology,” Ronen said. But officials “need to intervene and say, ‘Look, we’re excited about this technology … but we need to slow it down, and we need to make sure that it truly causes benefits to human beings, not harms.’”