How local governments are keeping AI under control

In the absence of national legislation, local governments are increasingly adopting AI governance policies and practices on their own, one expert says.
County supervisors in San Diego, California, recently directed local officials to start developing a policy and framework on artificial intelligence governance to better manage how government staff use the technology.
The county’s Board of Supervisors last week voted to require officials to study potential policy changes for reining in AI use by staff and to develop guidelines for responsible AI use in county operations, in a bid to minimize the tech’s potential harms, like biased decision-making or compromised resident data.
San Diego is far from the only locality striving to use AI more responsibly, as cities and counties across the nation push for stricter regulations amid increasing concerns about the tech’s impact on government operations and community outcomes, a new blog post from the Center for Democracy and Technology has found.
The findings are based on publicly accessible policy documents from 21 cities and counties. The researchers identified key trends among local governments’ AI governance efforts, such as encouraging transparency, prioritizing risk mitigation and emulating guidance from neighboring governments and organizations.
“AI systems can assist in increasing the efficiency and effectiveness of local governments’ provision of such services, but without proper guardrails these same tools can also harm constituents and impede the safe, dignified and fair delivery of public services,” researchers wrote.
One trend among localities is a prioritization of transparency and accountability when it comes to how agencies are using AI, according to CDT. At least a dozen localities address the need for staff to disclose their use of AI in instances such as creating public communications or deploying AI systems.
In Washington, for instance, Seattle’s generative AI policy calls for city staff to make documentation of the AI systems in use publicly available. The analysis also highlighted that Santa Cruz County, California, requires staff to include a disclosure notice on products they created with AI.
An increasing number of localities are developing publicly accessible AI inventories that list their current use cases for AI — such as a chatbot service used to answer residents’ questions about public benefits — or what specific AI systems they’re using, said Maddy Dwyer, policy analyst at the Center for Democracy and Technology and co-author of the analysis.
San Jose, California, is one jurisdiction that has adopted such an AI inventory. The city is leveraging, for instance, a waste contaminant identification system from the company Zabble to support its waste management efforts. The inventory also provides a fact sheet that discloses what kind of data is used to train the AI tool, how often the models are updated and other information.
Cities and counties are also emphasizing the importance of accountability in their adoption of AI tools, which is a “crucial mechanism” for building public trust in governments’ use of the technology, Dwyer said.
Of the 21 frameworks that CDT researchers evaluated, 14 highlighted that city and county staff were ultimately responsible for the appropriate use of AI, and Dwyer noted that some localities have established enforcement measures to back up that responsibility.
In Lebanon, New Hampshire, for instance, the city’s AI policy states that noncompliance “may result in disciplinary action or restriction of access, and possibly even termination of employment.”
Another trend CDT identified was local governments’ increasing efforts to identify and mitigate AI risks. Researchers noted that, across the 21 guidelines, three leading concerns emerged: AI bias, unreliable outputs and privacy and security issues.
Requiring city and county staff to verify information provided by AI systems is one way localities are addressing AI risks, according to CDT. In Baltimore, for instance, a 2024 executive order explicitly prohibits city staff from using generative AI content for decision-making or public communications unless they fact-check and refine what such systems produce.
Dwyer also pointed to local governments’ efforts to ensure their AI governance practices adhere to current state and federal laws surrounding issues like cybersecurity, public records and data privacy as a way to further mitigate the tech’s potential harms.
Researchers underscored that localities have largely developed their individual policies and practices by borrowing language and content from other governments and organizations. The sharing of AI-related information, policies and resources such as AI-policy templates from groups like the GovAI Coalition, for instance, can help streamline the development of AI guidelines, particularly amid a lack of legislative action at the federal level.
“In the past year, we’re seeing states and even localities playing more of a leading role in developing [AI] policies and governance practices,” Dwyer said. “Trying to create policy around AI is not easy, but … localities don’t have to go at it alone.”