Generative AI raises bias, privacy concerns
While ChatGPT presents big opportunities for efficiency, agencies must ensure they use it—and regulate it—properly, an expert advises.
Tools that rely on generative artificial intelligence like ChatGPT hold great promise for state and local governments, but agency leaders must guard against bias, privacy lapses and other ethical pitfalls, one expert said last week.
Even as the use of ChatGPT and similar tools has exploded, along with the hype surrounding them, very few states are regulating AI so far. Connecticut turned heads earlier this month after passing legislation paving the way for an “AI Bill of Rights,” but that state appears to be a rarity. Meanwhile, some cities like Boston and Seattle are experimenting with the technology and putting guardrails in place.
And as government leaders consider how to take advantage of the technology, they must ensure they eliminate any bias, use it ethically and protect residents’ privacy and data security, said Angela Kovach, senior director of public sector solutions and operations at legal services company Everlaw.
“The reality is, we are entering a new era, and it's here to stay,” Kovach said during a webinar hosted by the Digital Government Institute. “But we have to be cautious and take the right approach before we just start utilizing things right out of the box.”
Concerns about possible bias are among the biggest for public sector leaders. Kovach cited a private study of DALL-E, an AI tool that can create images from text descriptions. Forensic investigators asked DALL-E to generate sketches of suspected criminals based on written descriptions, but when those results were compared with the suspects’ actual photos, Kovach said, the AI-generated sketches frequently looked far different and skewed the suspects’ skin color darker.
Producing sketches that way could solidify a false memory among witnesses and even lead to more wrongful convictions, Kovach said. It could set up a “terrifying scenario if people are too quick to adopt something like this,” she added.
Kovach called on governments to be transparent about how models used for generative AI applications are trained and what data is being fed into them, so everyone can understand any inherent biases and work to eliminate them. There also must be a clear understanding of who is responsible for eliminating those biases, whether it be the agency itself, the people who train the model or its creator.
“We can’t just become starry eyed about the fact that ChatGPT can write our emails for us,” Kovach said, as there is plenty of uncertainty about the tools.
Without proper data privacy protections, ChatGPT risks opening a “wild west of litigation,” she said, and it is incumbent on government agencies to ensure those protections are in place.
First, agencies need to know whether customer data is being used to train generative AI models and whether training data is hosted in the cloud or elsewhere. They must also decide who can access that data and whether it is deleted or kept in storage once a model is fully trained. Vendors working with the federal government on cloud-based AI must be certified under the Federal Risk and Authorization Management Program, and Kovach said that certification, or something similar, could be necessary for companies selling generative AI services to state and local governments.
When it comes to using citizen data, agencies must write specific policies for their vendors and employees to follow on data protection, including on retention and access control, Kovach said. Governments must play a significant role in ensuring data is protected, she said. “You cannot rely on the vendor to make all the security considerations for you.”
The reputational damage to an agency from an AI-related data leak could also be huge, Kovach warned. “You don't want to end up in the news because you were one of the first adopters of ChatGPT, and something got out there on the internet,” she said.
The technology is still in the early stages of development, and Kovach said engineers working with ChatGPT have been taken aback by how the application is evolving. That even its builders have been surprised, she suggested, is all the more reason for governments to proceed with caution. “Even the makers aren’t entirely sure why it does what it does, as well as it does,” Kovach said.