Government leans into machine learning
Agencies at every level -- local, state and federal -- are increasingly using AI and machine learning to better understand data and make back-office tasks more efficient.
It was just two years ago that artificial intelligence seemed to burst onto the government agenda.
In August 2016, then-President Barack Obama guest-edited an issue of Wired magazine and spoke with Joi Ito, head of the MIT Media Lab, about AI and its implications.
"Early in a technology, a thousand flowers should bloom," Obama said. "And the government should add a relatively light touch, investing heavily in research and making sure there's a conversation between basic research and applied research."
Two months later, the Obama administration released a report on AI. A broad overview of the emerging technology, it dedicated just a few pages to how the government could benefit from AI. A second report followed just weeks before Obama left office and focused primarily on the potential economic impacts. "AI raises many new policy questions, which should be continued topics for discussion and consideration by future Administrations, Congress, the private sector, and the public," it concluded.
Since then, the Trump administration has provided additional guidance to agencies outlining machine learning and AI as research priorities. It has also set up the Select Committee on Artificial Intelligence "to improve the coordination of Federal efforts related to AI and ensure continued U.S. leadership in AI," according to a May 2018 White House report. The work of the committee would include encouraging "agency AI-related programs and initiatives," the report reads.
Government interest in -- and initiatives using -- AI and machine learning began long before 2016, of course. But over the past two years, agencies at every level -- local, state and federal -- have increasingly looked to machine learning in particular to better understand data and make back-office tasks more efficient.
Machine learning techniques developed by researchers at Oak Ridge National Laboratory have been used by the Federal Emergency Management Agency to find man-made structures swallowed by lava flows. Kansas City, Mo., has developed a machine learning algorithm to help predict when potholes will form on city streets. And the military has begun using AI to predict component failure on tanks.
If there is a common theme, it is prediction.
In machine learning, "prediction" means "you can infer something unknown given something known," said Zachary Chase Lipton, an assistant professor at Carnegie Mellon University's Tepper School of Business. "It turns out that a huge number of tasks can be expressed with predictive modeling."
These systems are given input -- whether that is satellite photos, 311 calls or sensor readings from vehicles -- and are asked to predict an output -- an airfield, a pothole or a part going bad on a tank. Machine learning models are trained on historical data to recognize patterns. The inputs and outputs, however, have to be clearly defined if machine learning is to be useful, Lipton said.
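In code, that framing is compact. The sketch below is a minimal example of supervised prediction, assuming scikit-learn and an invented pothole dataset; the features and numbers are illustrative, not Kansas City's actual model. It trains on historical input-output pairs, then infers the unknown output for a new input:

```python
# A minimal sketch of supervised prediction, assuming scikit-learn and an
# invented pothole dataset (not Kansas City's actual model or features).
from sklearn.ensemble import RandomForestClassifier

# Historical inputs: [pavement_age_years, daily_traffic, freeze_thaw_cycles]
X_train = [[12, 4500, 30], [3, 1200, 12], [20, 8000, 45], [7, 2000, 18]]
y_train = [1, 0, 1, 0]  # known outputs: 1 = pothole formed, 0 = did not

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)              # learn patterns from the known pairs

# Infer something unknown from something known: a street not yet labeled
print(model.predict([[15, 5200, 35]]))
```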
Machine learning can be considered the logical next step in analytics, Accenture CTO Dominic Delmolino told GCN.
"It's interesting, there are growth areas where government agencies who have been doing a lot of what we would call advanced analytics are starting to say, 'OK, can we start to incorporate AI and machine learning now as the next step in analyzing our data for mission decision making or mission value,'" Delmolino said.
Machine learning can be a great tool for finding non-linear relationships. Linear relationships, like the cost of a house related to its size (as one goes up, so does the other), are better explained by classic regression techniques. But sometimes relationships aren't linear, he added.
The relationship among words in a sentence is not linear, nor is the relationship between pixels in a photo. These relationships are complicated, but machine learning has proved able to find them.
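A small synthetic example illustrates the point. Below, a straight-line model and a non-linear model (a random forest, assuming scikit-learn; the data is invented) both try to fit a curved relationship:

```python
# A synthetic illustration (invented data): a straight line cannot capture
# a curved relationship, but a non-linear model can.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()                    # a clearly non-linear relationship

linear = LinearRegression().fit(X, y)
forest = RandomForestRegressor(random_state=0).fit(X, y)

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
y_test = np.sin(X_test).ravel()
print("linear R^2:", linear.score(X_test, y_test))  # misses the curve
print("forest R^2:", forest.score(X_test, y_test))  # close to 1
```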
Getting the data house in order
At the end of each year, state agencies often issue an annual report capturing what they see as their successes from the previous year and their goals going forward. In 2017, the Illinois Department of Innovation and Technology focused on accelerating the use of AI, chatbots and advanced data analytics tools to "advance Illinois' overall effort for improving citizen services in a more efficient manner through innovation," its report noted.
DoIT Chief Data Scientist Krishna Iyer said the state issued a request for information last year to get a better understanding of the machine learning and artificial intelligence landscape. From those conversations with vendors, it became clear the state wasn't taking advantage of the technologies' potential.
"There is a huge gap between what it can do and what it's being used for today," Iyer said.
But Illinois isn't diving in headfirst without a plan. It wants to streamline its data sharing and optimize its datasets for machine learning.
There are two important considerations when it comes to machine learning, Iyer explained: architecture (data engineering) and science (data science).
The state has begun looking into creating a data sharing platform to help facilitate machine learning projects. It would make datasets available through application programming interfaces, thanks to data-sharing agreements the state has worked out between agencies. Agencies would have access to Python, R and other statistical packages through a centralized platform, he said, and the machine learning models could also be made available via API.
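How a model served via API might look in miniature: the hedged sketch below exposes a trained model through a small web endpoint. The route, payload and file name are hypothetical, not the state's actual design; it assumes Flask and a scikit-learn model previously saved with joblib.

```python
# A hedged sketch of serving a trained model over an API. The endpoint,
# payload and file name are hypothetical, not Illinois' actual design.
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")      # hypothetical pre-trained model

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]   # e.g. [[12, 4500, 30]]
    return jsonify({"predictions": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run()
```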
The details of the platform are far from final, though. It still hasn't been determined whether it will be cloud-based or on-premises. "It depends on the sensitivity of the data and where the platforms are for supporting requirements," Iyer said.
The absence of a formal technology platform hasn't stopped the state from starting a couple of machine learning projects, however.
Illinois' Department of Revenue has already started using machine learning to help predict fraud, and the Department of Education is using it to better predict which students are struggling academically and may drop out.
Iyer said the tax fraud model was trained to find patterns in historical data where fraud had been found. The model assigns a fraud probability to taxpayers whose returns get flagged, making it easier for the state to identify individuals who need to supply clarification on their tax filings. It was used this past filing season.
Illinois expects to use the education model this fall. It was trained on data about students who didn't graduate from high school, including information on their school performance, demographics for the area where they live and other variables. It then assigns a low, medium or high risk score to current students, which school officials can use to target interventions to at-risk students, Iyer said.
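Both Illinois models follow the same basic pattern: a classifier trained on labeled historical records emits a probability, and simple cutoffs turn that probability into an action, such as flagging a return for review or assigning a risk tier. A minimal sketch, with cutoffs that are illustrative rather than the state's:

```python
# The shared pattern behind both Illinois models: a classifier emits a
# probability, and simple cutoffs turn it into an action. These cutoffs
# are illustrative, not the state's.
def risk_tier(probability: float) -> str:
    """Map a predicted dropout (or fraud) probability to a risk tier."""
    if probability >= 0.7:
        return "high"
    if probability >= 0.4:
        return "medium"
    return "low"

# e.g. scores taken from model.predict_proba(students)[:, 1]
for p in (0.12, 0.55, 0.83):
    print(p, "->", risk_tier(p))         # low, medium, high
```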
Self-sorting data
The Center for Innovation through Data Intelligence (CIDI) in the New York City Mayor's Office has looked extensively at using data to address homelessness, economic issues and physical health within the city.
Its most recent study analyzed young adults who successfully transitioned out of homelessness, assigning them to one of six groups: frequent jail stays, consistent supportive housing, consistent subsidized housing, earlier homeless experience, later homeless experience and minimal service use.
"It helps us to predict who could fall into that group, but it also helps us to understand what the resources of each of these groups are," CIDI Executive Director Maryanne Schretzman said.
Sorting individuals into these groups required some serious data wrangling. CIDI created profiles using real data from 8,795 individuals, which required pulling and protecting sensitive data from multiple sources: the Department of Youth and Community Development, the Department of Homeless Services, the Administration for Children's Services, jails and hospitals. It used the SAS-based Link King software to bring these datasets together.
The sensitive data never left the city government's intranet. It was transferred from one location to the next using an encrypted file transfer system, Schretzman explained.
So where does machine learning come into play? In the grouping. Those six groups were not predefined before the project started; the team used the TraMineR package in the R statistical computing environment to do the analysis.
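CIDI's analysis ran in R; the sketch below is a rough Python analogue of the idea rather than the team's actual pipeline, and the sequences are invented. Each person's month-by-month service use is encoded as a sequence, pairwise distances are computed, and hierarchical clustering lets the groups emerge from the data:

```python
# A rough Python analogue of sequence clustering (CIDI used TraMineR in R;
# this is not their pipeline). Toy sequences of monthly service use:
# 0 = no service, 1 = shelter, 2 = jail, 3 = subsidized housing.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

sequences = np.array([
    [1, 1, 1, 3, 3, 3],
    [1, 1, 3, 3, 3, 3],
    [2, 2, 2, 2, 1, 1],
    [0, 0, 0, 1, 1, 0],
])

# Condensed pairwise distance: share of months in which two sequences differ
n = len(sequences)
dists = [(sequences[i] != sequences[j]).mean()
         for i in range(n) for j in range(i + 1, n)]

tree = linkage(dists, method="average")           # hierarchical clustering
print(fcluster(tree, t=3, criterion="maxclust"))  # group labels emerge
```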
"Machine learning gives you the ability to direct the data to develop those clusters," Schretzman said. "And that's the beauty that the data itself is developing the clusters, which is very cool."
Deep neural networks
The projects outlined in New York City and Illinois use techniques that qualify as machine learning, but they sit at the simpler end of the spectrum. The Bureau of Labor Statistics, however, has been using machine learning for years and is preparing to make the leap from this "shallow machine learning" to deep neural networks.
Every year, the BLS collects massive amounts of data. The Survey of Occupational Injuries and Illnesses, for example, includes 300,000 written descriptions of how workers are injured. Those responses then have to be coded. That means, for example, making sure "reporter" and "journalist" are given the same code and that the injuries are placed in the correct category.
Until 2013, this coding was all done by hand and took about 20,000 hours to complete. "It's not a fun thing to do," said Alex Measure, a BLS economist who has been working on machine learning efforts for the agency.
But then the agency started using machine learning to lend a hand, training a model on the historical hand-coded surveys. Now, more than half of the coding is done by machine.
Each night, completed surveys are run through the autocoding model, which also produces a probability that each coding result is accurate. If that probability falls below a certain level, the survey is sent to a human employee to code.
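The routing logic is straightforward to sketch. The example below assumes scikit-learn; the toy narratives, classifier and confidence threshold are all illustrative, not BLS's actual system.

```python
# A sketch of nightly confidence-based routing. The toy training set,
# classifier and 0.6 threshold are illustrative, not BLS's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["fell from ladder", "cut hand on saw", "slipped on wet floor"]
codes = ["fall", "cut", "fall"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, codes)

def route(narrative, threshold=0.6):
    probs = model.predict_proba([narrative])[0]
    best = probs.argmax()
    if probs[best] >= threshold:                 # confident: autocode it
        return "autocode", model.classes_[best]
    return "human_review", None                  # uncertain: send to a person

print(route("fell down stairs"))
print(route("unfamiliar incident text"))
```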
The "shallow machine learning" that BLS has been using is really good at recognizing words or pairs of words, but it struggles with strings of text, Measure said. "Sometimes we have narratives where you really need to understand what a whole sequence of words mean."
In a phrase like "no sign of concussion," shallow machine learning might recognize the word concussion, maybe it will even see "sign of concussion," but it could struggle to recognize that the "no" negates the "concussion," Measure explained in an interview. Deep neural network techniques that can model complex non-linear relationships could help.
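The limitation is easy to demonstrate. Under a bag-of-words representation, sketched below with scikit-learn's CountVectorizer, Measure's two phrases share nearly all of their features:

```python
# Under a bag-of-words view, the two narratives differ by a single feature,
# so a shallow model can easily miss the negation.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["no sign of concussion", "sign of concussion"]
vec = CountVectorizer().fit(docs)
print(vec.get_feature_names_out())    # ['concussion' 'no' 'of' 'sign']
print(vec.transform(docs).toarray())  # rows differ only in the 'no' column
```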
BLS has been able to run its shallow machine learning model on existing hardware using open source software like Google's TensorFlow. As the agency makes the transition into deep neural networks, however, it has needed the processing power of an NVIDIA GPU server, Measure said.
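For a sense of what the leap looks like in code, the sketch below shows the general shape of a deep sequence classifier in TensorFlow's Keras API: an embedding feeding a recurrent layer that reads a narrative word by word. The vocabulary, layer sizes and code count are illustrative, not the agency's actual architecture; models like this are what push training onto GPU hardware.

```python
# The general shape of a deep sequence classifier in Keras. All sizes are
# illustrative; this is not the agency's actual architecture.
import tensorflow as tf

VOCAB_SIZE, NUM_CODES = 20000, 50        # illustrative sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),   # token ids -> vectors
    tf.keras.layers.LSTM(64),                    # reads the words in order
    tf.keras.layers.Dense(NUM_CODES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```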
Although this kind of computing power is also available from cloud providers, Measure said the sensitivity of the data requires BLS to manage its own hardware.
Moving forward
Machine learning might seem as if it can be thrown at almost any problem, but that's far from the case.
"Make sure that you have training data," Measure said. Machine learning "is not a solution to every problem, it's a solution to problems where you have lots of training data and you don't have an easier way to automate it."
It's most easily applied to areas like IT ticketing and call centers where there are a lot of requests and a lot of historical data, Delmolino said. Machine learning can make an impact on "anything where there's high volume, long wait times or big backlogs," he said.
Just because this technology can be used doesn't mean it should be used, Lipton warned. Applications like predictive policing have the potential to continue or exacerbate existing biases within society thanks to a feedback loop created by a model's use, he said.
If police are assigned to patrol areas based on predictions "of where crimes are going to take place … you could wind up finding crimes where you look for them and, as a result, over-policing poor, disadvantaged neighborhoods, then finding more crime," he said. A model with biased sample data could "think a disproportionate number of crimes take place in those neighborhoods and then it's going to allocate even more police officers," he said.
Delmolino seconded the need to focus on potential bias. Machine learning implementation requires active management, tweaking the model to lessen bias over time.
"You can't just buy a magic tool and deploy it," he said. "You have to be aware of these things."
One of the next big steps for machine learning could be the ability for multiple models to interact and work together, Delmolino predicted.
"So I suspect we're going to see some really fascinating requests for 'How do I make sure my models work with each other? Is there a way for models to communicate with each other?'" he said.
Another stepping stone will be the integration of machine learning and robotic process automation. RPA provides the ability to automate tasks like transferring files, moving data from one field to another and other routine computer processes.
"It's pretty dumb, really, today," Forrester Analyst Craig Le Clair said of RPA. But as machine learning becomes integrated with the technology, RPA will begin making more decisions independent of human involvement.
If machine learning is the brain, then consider RPA the limbs, reaching into different systems across an enterprise network and making changes as the brain sees fit.
"These are very general tools," Lipton said of machine learning, "and I think they find a large variety of use cases in any gigantic organization and that includes government."