This state’s ‘unsexy’ AI policy takes transparency to the next level
From AI “nutrition labels” to an inventory of artificial intelligence tools, Connecticut is embracing sweeping rules so the state can “talk to people about why we’re using it, how we’re using it.”
Connecticut is advancing sweeping legislation and rules to regulate artificial intelligence, with strong requirements not just on consumer protection, deepfakes and false political messaging, but also on transparency.
The Connecticut Senate last week approved a bill along party lines to regulate AI in the state. The legislation now goes to the House, which faces a race against time to pick it up and pass it before the state’s legislative session ends next week. State Sen. James Maroney, one of the bill’s key backers, said in a statement that it “seeks to place guardrails on AI’s uses and development.”
In addition to some headline-grabbing provisions, including limits on the malicious use of AI through deepfakes and other nefarious means, the legislation establishes a Connecticut Citizens Academy to provide professional training on the technology.
To much less fanfare, the state government released a separate policy in February governing how agencies should responsibly use the technology.
Connecticut’s Chief Information Officer Mark Raymond said that while policies and frameworks may be the “unsexy side of AI,” they are paramount for keeping residents safe while allowing governments to dabble in the emerging technology.
A key part of the state’s policy is its emphasis on transparency, Raymond said during a panel discussion on Monday hosted at the National Association of State Chief Information Officers’ mid-year conference near Washington, D.C. The policy’s transparency requirements consist of four components: an inventory, model cards, information on data, and contingencies.
Transparency in AI provides a window into the technology’s inner workings and helps ordinary people see where and how the technology is used. That is especially important in areas like decision-making, as disclosure lets residents see what role, if any, AI played. An emphasis on transparency should also help build public trust in the technology.
Best described as “nutrition labels” for technology, the model cards may be the policy’s flashiest requirement. Each card would contain information on how old an AI model or tool is, its intended uses, the metrics used to evaluate its effectiveness, the data it was trained on, ethical considerations, possible biases and any recommendations.
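In practice, a model card is structured metadata that travels with the model. As a purely hypothetical sketch of how those elements could be captured in code, with invented field names and values rather than Connecticut’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical model card; the fields mirror the elements the policy
# lists but are invented for this sketch, not Connecticut's schema.
@dataclass
class ModelCard:
    name: str
    release_date: str                     # how old the model or tool is
    intended_uses: list[str]              # what it is meant to do
    evaluation_metrics: dict[str, float]  # how effectiveness is measured
    training_data: str                    # what the model was trained on
    ethical_considerations: list[str]
    known_biases: list[str]
    recommendations: list[str]

card = ModelCard(
    name="Example constituent-email drafting assistant",
    release_date="2023-09",
    intended_uses=["Drafting replies to routine constituent questions"],
    evaluation_metrics={"draft_acceptance_rate": 0.87},
    training_data="Public program documentation; no resident records",
    ethical_considerations=["Staff review every draft before sending"],
    known_biases=["May underperform on non-English messages"],
    recommendations=["Keep a human in the loop for all outgoing mail"],
)
```

Whatever the exact format, the point is that this information exists and is on hand before conversations about a system begin, not after.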
The effort to use nutrition labels for AI echoes the Federal Communications Commission’s initiative to use labels for broadband internet service that detail speed, price, fees, discounts and other information. Required under the 2021 infrastructure law, the broadband labels are intended to provide clear, accurate information to consumers.
Raymond said Connecticut’s AI nutrition labels would do the same, and they would help inform key discussions around the technology’s uses. The labels also push the state to be more transparent about AI and require its vendors to do the same, especially on how certain systems are used and the specific use cases they are targeted for.
“Whether you're a state or a vendor partner, have this data available and bring it with you,” Raymond said. “Don't have a conversation about […] technology without understanding and being able to articulate [the details]. I know that's harder because it's less engaging. But it demonstrates that you treat our data and our processes as safely as we need to treat them.”
The information can be accessed by various stakeholders, including businesses, technology and software developers, policymakers, and those who are impacted by AI. If a vendor is reluctant to provide the state that information, a scenario Raymond said has not happened yet, Connecticut can make “different kinds of purchasing decisions” to ensure its vendors are willing to be transparent as well.
“I am hopeful that others pick up on model cards, because it will bring some consistency in the marketplace,” Raymond added.
Beyond AI nutrition labels, the bill would require the state to keep an inventory of all its AI tools and the ways it uses the technology in government operations. The inventory would detail how each system is used and whether it makes decisions independently or merely supports decisions made by humans. It would live on the state’s open data portal, where anyone can access it at any time.
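One way to picture a single entry in such a register, as a hypothetical sketch rather than the state’s published format:

```python
# Hypothetical entry in a public AI-use inventory; the keys are
# illustrative, not Connecticut's actual open data portal schema.
inventory_entry = {
    "system_name": "Example document-summarization tool",
    "agency": "Department of Motor Vehicles",
    "purpose": "Summarize lengthy public comments for staff review",
    # Distinguishes systems that decide on their own from those that
    # only support human decision-makers, per the bill's requirement.
    "decision_role": "support",  # vs. "autonomous"
    "human_review_required": True,
}
```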
“That helps to build some public trust,” Raymond said. “We're willing to talk to people about why we're using it, how we're using it.”
Connecticut also has strict guidelines on the data used to train AI, including disclosures on what data is being used, whether it is stored in the cloud or in a data center, who can access it, and whether it can be sold or monetized.
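Those disclosures, too, boil down to a handful of structured answers. A hypothetical example, with invented keys and values:

```python
# Hypothetical training-data disclosure; the fields track the questions
# the guidelines raise, not an official Connecticut format.
data_disclosure = {
    "datasets_used": ["Anonymized call-center transcripts"],
    "storage_location": "state data center",  # vs. "cloud"
    "who_can_access": ["Agency data stewards", "Vendor support staff"],
    "may_be_sold_or_monetized": False,
}
```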
Having clear contingency plans in place also promotes the transparent, responsible use of AI, Raymond said. Under state guidelines, if bias is detected in a system, the state must stop using it immediately and mitigate that bias.
But that mitigation must be done in a way that does not cripple governmental operations. Raymond said that if bias were found in spell-checking software or AI-generated smart email replies, for example, abruptly shutting those systems down could be “catastrophic” for continuity of government, which is why being forced to simply “pull the plug” with no fallback is an unthinkable scenario.
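In rough pseudocode terms, the contingency logic described above might look like the following sketch; the class, method names and flow are assumptions for illustration, not the state’s actual procedure:

```python
# Illustrative contingency flow only; names and steps are invented,
# not Connecticut's actual bias-response procedure.
class AISystem:
    def suspend(self):
        print("AI feature suspended")

    def activate_contingency_plan(self):
        print("Pre-planned fallback (e.g., manual process) activated")

    def begin_mitigation(self):
        print("Bias mitigation started")

def handle_bias_finding(system: AISystem):
    # Policy as described: stop using the system immediately...
    system.suspend()
    # ...while a pre-planned fallback keeps government services running,
    # so mitigation doesn't cripple operations or force a chaotic
    # "pull the plug" moment.
    system.activate_contingency_plan()
    system.begin_mitigation()

handle_bias_finding(AISystem())
```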
These tenets of state policy all contribute to the transparent, responsible use of AI. That way, he said, there are plenty of “safety breaks” built in to protect residents.