After an action-packed 2023, 2024 will be another blockbuster year for AI
State and local governments are still figuring out how to best put the technology to good use. Next year will be a critical one in getting it right.
Taylor Swift is Time magazine’s person of the year. Sam Altman is its CEO of the year. And 2023, according to the publication, was the year governments began taking artificial intelligence seriously.
And indeed, it has been a banner year for executive orders and policies mandating the technology’s responsible use. State and local governments have issued their own guidelines and action plans, dabbled in generative AI-driven pilot projects and instructed agencies to identify effective use cases.
State lawmakers across the nation have introduced legislation to regulate AI. With no congressional action on the issue, only a sweeping presidential executive order that could affect the programs states and localities administer, researchers have predicted it will be the states, not Congress, that lead the way.
All this progress means that 2024 could be just as crucial for the evolution and implementation of AI, especially as agencies figure out how it fits into their operations and impacts their workforce.
In the immediate term, observers expect governments to continue churning out policies, guidelines and best practices on AI’s responsible use. Steve Mills, global chief ethics officer at the Boston Consulting Group, said in an email that those efforts are “part of an ongoing trend toward governments taking more proactive actions to realize the benefits of AI while minimizing the risk.”
Perhaps the biggest buzz for state and local leaders surrounds the promise of generative AI, which could assist with a range of government functions, including summarizing public feedback, drafting communications, writing code and helping staff and residents find answers to questions about existing policy more quickly.
Scott Buchholz, government and public services chief technology officer at Deloitte, compared AI at its current maturity to a “digital intern” that will, in time, evolve into something of a “digital assistant.”
“You might not ask an intern to write your doctoral thesis for you,” he said. “But you might ask them to do some bounded research tasks where you specify the parameters. What we're trying to help people do is figure out what are the tasks that fit in a digital intern’s strike zone, and how do we evolve the work that people do to allow those digital interns to continue to grow into digital assistants and be even more helpful to them as time goes on.”
The National Association of State Chief Information Officers is one of many groups that believe generative AI will be crucial in the coming years, as state governments look to make their operations more efficient and free employees to focus on other tasks. NASCIO ranks AI No. 3 among its top 10 priorities for CIOs next year.
“From a government standpoint, [generative AI] really turns creators into curators,” said Ben Sebree, vice president of research and development and technology at civic experience platform CivicPlus. “It creates this really great opportunity for us to generate a first draft of something, and then apply our industry expertise, whether as a software vendor, or as an employee of a local government and turn it into a final draft of something that's ready to be published.”
Recent surveys suggest that employees across sectors, including government, are open to using AI in certain circumstances and for controlled tasks. Polling company Qualtrics found that 61% of employees prefer using AI for writing, while 51% said they are comfortable using it as a personal assistant to help manage their schedules, analyze meeting notes and prioritize emails.
But employees are less comfortable with using AI for more complex tasks. Just 37% said they are comfortable using AI for performance evaluations, while 29% said they agree with its use for hiring decisions.
They are right to be cautious given how new the technology is. But AI has arrived, and it will ultimately be up to leaders and managers to create the right environment for its adoption, according to Sydney Heimbrock, Qualtrics’ chief industry advisor for government.
“If leaders are worried about employee backlash, they should look at their engagement first and look at how they might improve the employees’ experience overall,” she said.
Some worker displacement from AI will be “inevitable,” said Arthur Maccabe, executive director of the Institute for Computation and Data-Enabled Insight at the University of Arizona. He suggested that governments work to “protect the rights of people to have a meaningful life,” or a meaningful career, especially for those who have worked to attain a certain position or status only to see it replaced by AI or other technology.
Rather than approaching AI as a technology issue that requires only technologists, Maccabe also suggested that governments engage experts from other fields, including the humanities and social sciences, as they try to mitigate any negative impacts.
“The fact is that we go through these transitions, and you can say we're on the cusp of the Fourth Industrial Revolution, whatever it is,” Maccabe said. “Revolutions actually have people harmed right in the middle of that, and so how do you make sure you get through this transition, without it becoming a revolution, or without it becoming something that really transforms our society in a way that we didn't expect, or we didn't plan, or we didn't think about?”
The cybersecurity and privacy issues surrounding AI remain a top priority, with many groups already sounding the alarm about the technology’s potential use in fraud, cyberattacks and the spread of misinformation about the upcoming elections.
Matt Waxman, senior vice president and general manager for data protection at Veritas Technologies, predicted that 2024 will see the first end-to-end, AI-driven autonomous ransomware attacks.
In a bid to prevent AI-driven cyberattacks from getting out of control, Sebree of CivicPlus said it will be critical for governments to double down on training staff to spot phishing emails, which remain a major vulnerability. “The weakest element of any cybersecurity plan is the human element,” he said.
Given the speed of change and the evolution of AI, Hannah Burn, a government industry advisor at Qualtrics, said that one strategy governments should not pursue next year with the technology is “avoidance.” Leaders cannot halt progress, she said.
“I don't think that the answer here is to say, ‘This is scary, I don't want to touch it,’ because it's going to come,” Burn said. “People are probably already in your organization using ChatGPT in some form or another, and we need policies. I wouldn't avoid and hope it goes away.”