Employees need drastic reskilling to deal with generative AI’s data needs
Data analysts are just one part of the picture, observers said. Governments will also need data architects and business analysts, as well as ethicists to help ensure AI’s responsible use.
Late last year, a leading state technology official warned that without good data, state and local governments risk making artificial intelligence tools “stupid,” merely conduits for “garbage in, garbage out.”
But good-quality data depends on a public workforce with strong data literacy skills. As governments continue to churn out policies, guidelines and best practices for AI’s use in government, training and upskilling the workforce will be just as crucial.
In its May 2023 Future of Jobs Report, the World Economic Forum found that companies rank AI and big data as a top training priority between now and 2027, especially companies with more than 50,000 employees.
“Among technology skills, the ability to efficiently use AI tools now exceeds computer programming by humans, networks and cybersecurity skills, general technological literacy skills, and design and user experience by some margin,” the report said.
The forum found, in particular, that the public sector lags far behind other sectors of the economy, both in its plans to adopt AI technologies and in its plans to prioritize training in AI and big data.
One reason governments may be slow to adopt AI and prioritize training is fear. New Jersey Chief Innovation Officer Beth Noveck warned last fall of the challenge that states and localities face as they try to promote data skills among their employees.
“We all understand from a policy perspective, the importance of data and evidence, you know, basing decisions on data, but it doesn't mean any of us learned how to work with data,” she said at October’s Google Public Sector Forum.
One way to overcome these fears, Noveck said, is to make AI accessible. Successful AI training must include “examples and references that will make sense” to staff in their daily work. She even suggested that the best way for employees to learn about AI is to experiment with it themselves, especially by using publicly available generative AI tools like ChatGPT at home.
Getting employees more confident in using AI tools is important because “the lines between IT and business continue to blur,” said Orla Daly, chief information officer at Skillsoft, an online training company.
Governments don’t just need to hire data analysts; they also need to train employees to work as business analysts and data architects. Public sector employees at large will need the skills to map relevant data to the business functions they serve in government: to work out what information is consumed and how it is delivered back to the agencies and the public that use it, said Dean Johnson, a senior executive government advisor at Ensono, an IT service management company.
Data mapping in such a way can feel like “threading the needle,” Johnson said.
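To make that kind of data mapping concrete, the sketch below shows one simple way it could be represented in Python. It is purely illustrative: the dataset names, business functions, consumers and delivery channels are hypothetical, not drawn from Ensono’s practice or any real agency system.

    # Hypothetical sketch of a data map: which datasets feed which business
    # functions, who consumes the derived information, and how it is
    # delivered back. All names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class DataMapping:
        dataset: str            # source data an agency holds
        business_function: str  # the government function it supports
        consumers: list[str]    # who uses the derived information
        delivery: str           # how results flow back to users

    DATA_MAP = [
        DataMapping("permit_applications", "permitting",
                    ["applicants", "inspections office"], "online portal"),
        DataMapping("311_service_requests", "constituent services",
                    ["public", "public works"], "open-data dashboard"),
    ]

    def datasets_for(function_name: str) -> list[str]:
        """List the datasets that feed a given business function."""
        return [m.dataset for m in DATA_MAP
                if m.business_function == function_name]

    print(datasets_for("permitting"))  # -> ['permit_applications']

Even a structure this simple lets an analyst answer questions like “which datasets feed this service, and who sees the output?” before any AI tooling enters the picture.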
The best way to get employees mapping how the data fits with agencies’ needs is through hands-on training, Daly said. She cited Skillsoft’s 18th annual IT Skills and Salary Report, which found that across industries, only 15% of IT professionals said management saw no tangible benefit from training, down from 45% the previous year. Upskilling and reskilling, not just hiring data analysts, is key, she said, to getting employees comfortable using data for generative AI.
“This isn't something you can necessarily buy your way out of,” Daly said.
In addition to training, reskilling and upskilling employees to handle the vast amounts of data at their disposal, governments will also need more workers to help comply with myriad state-level data protection and privacy laws, as well as federal laws like the Health Insurance Portability and Accountability Act.
And ensuring AI is used ethically, with potential biases kept front of mind, will be key as well. Daly said governance to provide guardrails for some of these problems will be a “continued focus” and will require people to become adept in the ethical use of AI.
The potential for generative AI-driven misinformation and disinformation is troubling for government leaders, especially as they reckon with how the technology can experience so-called “hallucinations” and produce incorrect results. Training both the systems and the employees who use them will be crucial to driving instances of bad information as close to zero as possible.
“Eighty percent or 90% [success], that might be good for basketball shooting and baseball hitting and those kinds of things,” DeKalb County, Georgia, CIO John Matelski said during a webinar hosted by the National Association of Counties last year. “But when we’re talking about services and constituents and those kinds of things, that 10-20% that might not be accurate can be really devastating and embarrassing.”