When AI comes to the office: Managing change, building trust
Before artificial intelligence makes its way into government offices, agencies must lay the groundwork for culture change.
Although lawmakers might see artificial intelligence as a way to reduce the government’s headcount, agencies should instead treat AI as a way to supplement employees’ work, freeing them to focus on more creative and difficult tasks, said William Eggers, executive director of Deloitte’s Center for Government Insights and co-author of a recent Deloitte study on AI in government.
The goal is to automate the dull, menial and repetitive tasks so employees are freed up to do more important things, Eggers said. “There are always going to be more things for government to do than we have resources for.”
Agencies should think of AI as a new digital labor force “so we can make better decisions, so we can make faster decisions, and we can serve citizens better,” he added. “Then you can look at getting a lot of value out of this, as opposed to doing it in kind of a random way.”
Daniel Castro, vice president of the Information Technology and Innovation Foundation, agreed, saying AI can change the nature of government work, leaving employees “with the good stuff. If you can take away the pain of government bureaucracy -- and AI can do a lot of that -- you can change the culture of government.”
But that change must be managed.
Meagan Metzger, founder and CEO of government-focused IT accelerator Dcode42, said agencies that want to adopt AI for customer-facing systems must prepare their employees for the changes. “You’re not replacing staff. You just need to change their skill set,” she said. “They’re accomplishing different things.”
Another aspect to managing AI in the workplace involves getting people to trust the decisions intelligent systems make. That trust is partially based on the technology's ability to explain its choices -- or on its managers’ ability to audit those decisions.
In the European Union, regulators want people to be able to demand an explanation when an intelligent system makes a decision that affects them. A version of that right to an explanation may be part of the EU’s General Data Protection Regulation, which takes effect in 2018.
A regulated right to an explanation hasn’t gained traction in the U.S., but AI experts say intelligent systems must deliver repeatable results and provide documentation that backs up their recommendations. That’s especially important when government AI systems make major decisions that affect people’s lives, they say.
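As an illustration of what that documentation might look like, here is a minimal sketch in Python of a scoring routine that records why it flagged an applicant. The model, feature names and weights are hypothetical; the point is that every recommendation carries the evidence behind it.

    # Hypothetical eligibility-flagging model that logs per-feature evidence.
    import math

    FEATURES = ["income_ratio", "prior_claims", "years_at_address"]  # hypothetical
    WEIGHTS = {"income_ratio": -1.2, "prior_claims": 0.8, "years_at_address": -0.3}
    BIAS = 0.5

    def score_applicant(applicant: dict) -> dict:
        """Score an applicant and return per-feature contributions for audit."""
        contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
        logit = BIAS + sum(contributions.values())
        probability = 1.0 / (1.0 + math.exp(-logit))
        return {
            "flag": probability > 0.5,       # a recommendation, not a final decision
            "probability": round(probability, 3),
            "contributions": contributions,  # which inputs drove the recommendation
        }

    print(score_applicant({"income_ratio": 0.4, "prior_claims": 2, "years_at_address": 5}))

Storing the contributions alongside the flag gives managers something concrete to audit: the same inputs always yield the same record, and the record shows which factors mattered.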
In many situations, however, the technology should not be making the final decision for agencies, said Aron Ezra, CEO of OfferCraft, a vendor of machine-learning-powered marketing tools. In almost all cases, a human should review the AI system’s recommendation, whether the technology is approving an applicant for a government program or flagging tax fraud.
“The fear that, all of a sudden, everyone’s going to be sitting back and…letting computers make all the decisions for us is something that I don’t see happening for some time, if ever,” he added.
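A minimal sketch of that kind of human-in-the-loop routing, reusing the hypothetical result shape from the example above: clear-cut cases still get a human sign-off, and borderline ones go to a reviewer queue first.

    # Hypothetical triage step: the model recommends, a person decides.
    def route_recommendation(result: dict, review_band: float = 0.15) -> str:
        """Decide how much human scrutiny a recommendation needs."""
        p = result["probability"]
        if abs(p - 0.5) < review_band:
            return "manual_review"  # model is unsure: send to a caseworker first
        return "human_signoff"      # clear-cut, but a person still approves it

    print(route_recommendation({"probability": 0.52}))  # -> manual_review
    print(route_recommendation({"probability": 0.91}))  # -> human_signoff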
Furthermore, AI results improve substantially when the organization has an expert managing what’s going into the system, said Daniel Enthoven, business development manager at Domino Data Lab, a vendor of AI and data science collaboration tools.
“There’s this one approach where you throw all the data in a big hopper, turn the crank, see what comes out of the bottom, and it’s got to be right,” he said. “You’re going to have better luck and better accuracy if you don’t just turn the crank on the machine but actually understand what you’re doing.”
As with most systems, the garbage-in, garbage-out rule applies to AI. Machine learning lets such systems grow more capable over time, but only if the organization takes the time to train them and feeds them the right data.
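A minimal sketch of that “understand what you’re doing” step, with hypothetical field names and bounds: check each record before it goes into the hopper rather than simply turning the crank.

    # Hypothetical pre-training validation: keep the garbage out of the hopper.
    def validate_record(record: dict) -> list:
        """Return a list of problems; an empty list means the record is usable."""
        problems = []
        if record.get("income_ratio") is None:
            problems.append("missing income_ratio")
        elif not 0.0 <= record["income_ratio"] <= 10.0:
            problems.append("income_ratio outside plausible range")
        if record.get("prior_claims", 0) < 0:
            problems.append("negative prior_claims")
        return problems

    raw = [{"income_ratio": 0.4, "prior_claims": 2},
           {"income_ratio": -3.0, "prior_claims": 1}]
    clean = [r for r in raw if not validate_record(r)]  # train only on clean rows
    print(f"kept {len(clean)} of {len(raw)} records")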
A longer version of this article was first posted to FCW, a sibling site to GCN.