Shining a light on shadow AI
Employees sometimes use artificial intelligence websites like ChatGPT to get more work done. But if they do so without approval, they may be putting their agency at risk.
Under pressure to do more with less—and do it faster—government workers may be experimenting with online artificial intelligence solutions, whether their information technology teams know about it or not. The result is shadow AI, and it puts agencies at risk.
Just as shadow IT refers to devices, software and services that employees use without the IT department’s knowledge, shadow AI systems operate within an agency unbeknownst to the people responsible for managing technology use.
The explosive popularity of the generative AI tool ChatGPT has made minimizing shadow AI even harder, said Amy Glasscock, program director of innovation and emerging issues at the National Association of State Chief Information Officers.
“It used to be that if the state was using some sort of artificial intelligence product or service or tool, they would have had to go to an outside vendor and have a contract and … a set number of licenses,” Glasscock said. “But now anyone can go sign up for ChatGPT or some of these others and it’s just so accessible to everyone. It’s very easy for you not to know [whether] employees are using generative AI.”
One main reason employees turn to AI is to boost their efficiency. In fact, 41% of employees have acquired, modified or created technology without the oversight of IT departments, according to a Gartner report, which predicts that figure will hit 75% by 2027.
And the technology seems to deliver: Generative AI alone could have a productivity effect of about $480 billion in the public sector, according to McKinsey & Co.
Some workers adopt AI on their own because they feel that their agencies aren’t moving quickly enough to approve AI technologies, but others may not know that they need to go through IT, Glasscock said. “With generative AI, I think employees might just be thinking, ‘Well, this is just a website out there,’” she said.
Unsanctioned AI use runs the risk of exposing data collected by and entrusted to government. If employees feed personally identifiable and other protected information into, say, ChatGPT, agencies lose control over how that data can be used.
Additionally, by opening a shadow AI account at work, employees could agree to privacy and security terms they shouldn’t, Glasscock said. “If you’re an individual person signing up for something and you’re clicking yes [to terms and conditions], maybe those terms aren’t actually sufficient for what your state requires,” she said. “When you say yes with your employee email, you’re agreeing to those terms for your state, not just yourself.”
Although agencies might not be able to eliminate shadow AI, they can take steps to reduce it. Creating a policy around AI use is critical, Glasscock said.
“It’s very likely that people are using it … so employees probably should be aware of how it can best be used,” she said. “Policies should lay out what this should be used for, what it should not be used for, and what the data leak risks or security risks associated with it are.”
To formulate a policy, governments have plenty of guidance. In December, NASCIO issued a 12-step AI blueprint for agencies beginning to incorporate AI into their operations; it advises them to inventory and document existing AI applications and to create acquisition and development guidelines. Government leaders can also look to the National Institute of Standards and Technology’s AI Risk Management Framework and its Trustworthy and Responsible AI Resource Center.
Some states have taken legal steps to regulate their own AI use. For example, beginning Feb. 1, Connecticut’s Department of Administrative Services must perform ongoing assessments of systems that use AI, and the Office of Policy and Management must establish rules around the development, procurement, implementation, use and assessment of AI systems.
During the 2023 legislative session, at least 25 states and Washington, D.C., introduced AI bills, and 18 adopted resolutions or enacted legislation, according to the National Conference of State Legislatures. Louisiana’s Joint Committee on Technology and Cybersecurity, for example, is studying AI’s impact on operations, procurement and policy, and Texas created an AI advisory council to study and monitor AI systems developed, used or procured by state agencies. North Dakota and West Virginia have similar groups.
“Last year seemed like the year of generative AI. Everybody was like, ‘What is this new thing? We’re trying to get our head around it,’” Glasscock said. “My hope is that 2024 will be more the year of AI governance, putting in place those policies, road maps and useful legislation around it. We had a year to figure it out, and now it’s time to get organized.”