AI in the Workplace: Local Officials Explore Responsible Use
As more governments experiment with artificial intelligence, local leaders are considering how to leverage AI’s benefits safely and responsibly.
Artificial intelligence is a valuable tool for agencies seeking to automate mundane tasks and conduct data analytics, but recent calls for regulation emphasize the technology’s potential privacy and security risks when it is not used responsibly.
As the Biden administration rolls out its AI frameworks and recommendations on the appropriate use and development of AI systems, local agencies have started drafting their own policies.
The National Association of Counties, for example, has formed an AI Exploratory Committee that will examine the intersection of AI and county policies and practices, workforce productivity, government services, privacy and security, and other elements, the organization announced late last month.
“We are at a unique moment in terms of artificial intelligence,” NACo Associate Legislative Director Seamus Dowdall said. With all the attention AI is getting, the committee plans “to explore the emerging policies, practices, potential applications, rules and consequences of artificial intelligence through [the] lens of county governments,” he said.
The committee’s 15 members include elected or appointed county officials, department heads and staff from state associations of counties, he said. It will be co-chaired by Florida’s Palm Beach County Commissioner Gregg Weiss and Texas’ Travis County Judge Andy Brown.
The committee will address issues such as determining appropriate use cases for AI and how it could impact the security of countywide data as well as public trust in local government, Dowdall said. “We’ve seen the federal government begin to explore how much it will approach and utilize AI at the federal level.… The same conversations are happening at the state and county level.”
Cities are also taking steps to ensure they use AI responsibly. The Seattle IT Department, for instance, recently issued an interim policy for city staff who wish to use generative AI like ChatGPT to streamline workflows or improve service delivery.
“We see the emergence of generative AI as providing both opportunities that can help us deliver our services, but it also has risks that can threaten our responsibilities.… Our interim policy is intended to minimize issues that may arise from the use of this technology while additional research and analysis are conducted,” officials said in a May 31 statement announcing the policy’s adoption.
Under the policy, the city’s IT Department must approve staff members’ access to or acquisition of new generative AI products. Employees are also required to validate information generated by AI systems, which may produce false or misleading results. This means city staff should review AI outputs for accuracy, proper attribution and biased or offensive material, officials said.
Seattle city employees are also prohibited from feeding generative AI systems “sensitive or confidential data, including personally identifiable data about members of the public.” Experts have warned that uploading code to a generative AI system could also weaken an organization’s ability to track and manage cyberthreats.
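Seattle’s policy does not prescribe any particular tooling for enforcing that prohibition. As a purely illustrative sketch of how an agency might screen prompts for obvious personal identifiers before they reach an external generative AI service, consider the minimal check below. Everything in it, including the redact_pii helper and the regex patterns, is a hypothetical example rather than anything Seattle has published.

```python
import re

# Hypothetical illustration only -- not drawn from Seattle's actual policy or tooling.
# A minimal pre-submission check that masks obvious personal identifiers
# (emails, U.S. phone numbers, SSN-like strings) before a prompt is sent
# to an external generative AI service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\(?\b\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Mask any matching identifiers; return the cleaned text and the pattern names hit."""
    hits = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, hits

if __name__ == "__main__":
    prompt = "Summarize the complaint from jane.doe@example.com, phone 206-555-0142."
    cleaned, found = redact_pii(prompt)
    if found:
        print(f"Flagged identifiers: {', '.join(found)}")
    print(cleaned)
```

Pattern matching of this kind catches only well-formed identifiers; names, addresses and other free-text details still require human judgment, which is why policies like Seattle’s keep the responsibility with the employee.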
As counties, “we are community conveners in some ways [and] data aggregators in other ways—there’s a lot of different ways we’re looking at [AI],” Dowdall said. “Using AI as a tool is one component of this conversation, but there’s really a much broader approach that we want to take to think holistically about how AI is progressing.”