Fixing ‘concept drift’: Retraining AI systems to deliver accurate insights at the edge
When results from artificial intelligence systems don’t align with what’s expected, data scientists must identify the root causes of concept drift and retrain the algorithms to ensure the systems can be trusted.
If you’re like many people, you view more streaming content now than ever. To keep you watching, content providers rely on machine-learning algorithms that recommend relevant new content.
But when the COVID-19 pandemic hit, viewing habits changed radically. Suddenly, different people were streaming different content at different times and in different ways. Were the ML algorithms now making less-relevant recommendations? And were they falsely confident in the accuracy of their less-precise predictions?
Such are the vagaries of “concept drift,” an issue few users of artificial intelligence are aware of. As government organizations leverage more AI in more far-flung locations, concept drift is a problem they’ll have to address -- particularly when deploying AI at the network’s edge.
Yet by being aware of the problem and its solutions, agencies can make sure their analysts, data scientists and systems integrators take steps to optimize the accuracy and confidence of their AI deployments.
Growth in government AI
While AI remains an emerging technology, both military and civilian government organizations increasingly deploy the capability -- particularly ML -- in a variety of situations:
- Computer vision for operating autonomous vehicles.
- Internet-of-things sensors for predictive maintenance of equipment.
- IoT sensors and radio-frequency (RF) tags in supply networks to forecast the movement of supplies from manufacturers through ports and into warehouses.
- Cybersecurity protections to identify potentially malicious activity.
- Tools for optimizing delivery of government services, such as identifying which citizens will benefit most from a vaccine.
Many of these applications operate at the edge. The edge, however, presents unique challenges, because the models must be lean enough to run with limited processing power and network bandwidth. Those constraints become bigger factors when retraining algorithms to address concept drift.
Concept drift: High confidence in low accuracy
A simple way to think about AI algorithms is to say they accept data inputs and produce predictive outputs. Inputs could include images of cars, specifications such as machine tolerances or environmental factors such as temperature. Outputs could include identification of road hazards or forecasts of when equipment will require maintenance.
Concept drift occurs when the relationship between the inputs and the outputs being predicted changes over time, so that predictions become inaccurate even for similar input data. Let’s say an ML algorithm designs shipping routes based on inputs such as the location of manufacturing sites, seasonal weather patterns, fuel costs and geopolitical realities. If the optimal shipping route changes over time, perhaps because ocean currents shift with a changing climate, the model concept will have drifted. The algorithm will then make recommendations based on an out-of-date mapping between the input data and the outputs being predicted.
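To make the drift concrete, here’s a minimal Python sketch using scikit-learn and entirely synthetic data; the two features, the thresholds and the “route” labels are invented stand-ins, not anything drawn from a real shipping model. A classifier learns one input-output mapping, the mapping then shifts while the inputs themselves stay the same, and accuracy drops even though -- as discussed below -- reported confidence barely moves.

```python
# Toy illustration of concept drift (synthetic data only). The rule that
# maps inputs to outputs changes, so a model trained on the old rule loses
# accuracy on new data drawn from the same input distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, boundary):
    # Two stand-in features (say, fuel cost and storm risk); the label says
    # which of two routes is preferable, decided by a simple threshold.
    X = rng.uniform(0, 1, size=(n, 2))
    y = (X.sum(axis=1) > boundary).astype(int)
    return X, y

X_old, y_old = make_data(2000, boundary=1.0)  # historical conditions
X_new, y_new = make_data(2000, boundary=1.3)  # the mapping has drifted

model = LogisticRegression().fit(X_old, y_old)
print("accuracy before drift:", round(model.score(X_old, y_old), 2))
print("accuracy after drift: ", round(model.score(X_new, y_new), 2))
print("mean confidence after drift:",
      round(model.predict_proba(X_new).max(axis=1).mean(), 2))
```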
Two key problems result from concept drift. First, the algorithm starts making predictions that are less accurate -- often, much less accurate. So it might recommend a shipping route that’s slow, costly or even dangerous.
Second, and more deceptive, the algorithm continues to report a high level of confidence in its predictions, even though they’re markedly less accurate. For example, the model might correctly identify anomalous network behavior only 70 times out of 100 yet report that it’s 99% confident in its identifications.
Retraining at the edge
Technology vendors are developing AI training algorithms that can both determine when a model concept has drifted and identify the new inputs that will most efficiently retrain the model. In the meantime, when AI results don’t align with what’s expected, data scientists or systems integrators should investigate whether concept drift is the cause. If so, they should take these steps:
Identify root causes. Re-establish the “ground truth” of the algorithm by checking its results against what has been established as reality. Select a few samples, manually create accurate labels for them and compare the model’s confidence against its actual accuracy.
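A rough sketch of that check, assuming a scikit-learn-style classifier that exposes predict_proba and a small batch of hand-labeled samples; everything below is synthetic stand-in data, with the model fit inline only so the example runs on its own.

```python
# Ground-truth check (synthetic stand-ins): compare the confidence the model
# reports on a handful of recent samples with how often it is actually right
# against manually assigned labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-in for the deployed model, fit here only so the sketch runs alone.
X_hist = rng.uniform(0, 1, size=(2000, 2))
y_hist = (X_hist.sum(axis=1) > 1.0).astype(int)
model = LogisticRegression().fit(X_hist, y_hist)

# A few recent inputs plus labels assigned by hand under the new reality.
X_sample = rng.uniform(0, 1, size=(50, 2))
y_manual = (X_sample.sum(axis=1) > 1.4).astype(int)

reported_confidence = model.predict_proba(X_sample).max(axis=1).mean()
actual_accuracy = (model.predict(X_sample) == y_manual).mean()
print(f"mean reported confidence: {reported_confidence:.2f}")
print(f"accuracy on hand-labeled samples: {actual_accuracy:.2f}")
# High confidence paired with low accuracy is the signature of concept drift.
```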
If confidence is high but accuracy is low, investigate how the inputs have changed. Let’s say an autonomous vehicle’s inputs have been corrupted by dirt on its camera lens. That’s a problem of data drift, not concept drift. But if the vehicle was trained in a temperate environment and is now being used in a desert landscape, concept drift might have occurred.
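One rough way to tell the two apart is to compare the distribution of recent inputs against the training inputs, sketched below with SciPy’s two-sample Kolmogorov-Smirnov test on synthetic arrays (the features, thresholds and sample sizes are invented). Shifted inputs point toward data drift; unchanged inputs combined with the falling accuracy seen in the ground-truth check point toward concept drift.

```python
# Data-drift screen (synthetic arrays): test whether recent inputs are
# distributed differently from the inputs the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 1, size=(2000, 2))   # inputs seen during training
X_recent = rng.uniform(0, 1, size=(500, 2))   # inputs seen in the field

for i in range(X_train.shape[1]):
    stat, p_value = ks_2samp(X_train[:, i], X_recent[:, i])
    verdict = "shifted" if p_value < 0.01 else "similar"
    print(f"feature {i}: distribution looks {verdict} (p={p_value:.3f})")

# If the input distributions look similar yet accuracy on relabeled samples
# has fallen, the input-output mapping itself has likely changed: concept
# drift rather than data drift.
```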
Retrain the algorithm. There are two basic approaches to retraining: continual learning and transfer learning. Continual learning makes small, regular updates to the model over time. In this case, samples are manually selected and labeled so they can be used to retrain the model to maintain accuracy.
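A minimal continual-learning sketch, using scikit-learn’s SGDClassifier because its partial_fit method supports exactly this kind of small incremental update; the data, thresholds and batch size are synthetic stand-ins.

```python
# Continual-learning sketch (synthetic data): the deployed model is nudged
# with small batches of freshly labeled samples instead of being rebuilt.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)

# Initial training on historical data, done centrally before deployment.
X_hist = rng.uniform(0, 1, size=(2000, 2))
y_hist = (X_hist.sum(axis=1) > 1.0).astype(int)
model = SGDClassifier(random_state=0)
model.partial_fit(X_hist, y_hist, classes=np.array([0, 1]))

# At the edge, periodically hand-label a small batch gathered under the new
# conditions and apply an incremental update.
X_batch = rng.uniform(0, 1, size=(50, 2))
y_batch = (X_batch.sum(axis=1) > 1.3).astype(int)  # the drifted concept
model.partial_fit(X_batch, y_batch)
```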
Transfer learning reuses the existing model as the foundation for a new model. Let’s say the initial model’s basic features are solid but its classification capability is attuned to data inputs that no longer reflect reality. Transfer learning allows the classification capability to be retrained without rebuilding the model from scratch.
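A minimal transfer-learning sketch, assuming PyTorch and torchvision are available and using an off-the-shelf pretrained image model as a stand-in for an agency’s existing model; the four-class head and the random batch are placeholders for real, freshly labeled edge data. The feature layers stay frozen and only the classification head is retrained.

```python
# Transfer-learning sketch (assumes PyTorch/torchvision; the pretrained
# ResNet, four-class head and random batch are placeholders for the real
# model and freshly labeled edge data).
import torch
import torch.nn as nn
from torchvision import models

# Stand-in for the existing model; downloads ImageNet weights on first use.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the feature extractor -- its learned representations are assumed
# to still be sound.
for param in model.parameters():
    param.requires_grad = False

# Replace only the classification head and retrain it for the drifted
# concept (a hypothetical 4-class problem).
model.fc = nn.Linear(model.fc.in_features, 4)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# In practice a data loader would yield freshly labeled (image, label)
# batches; a single random batch stands in here.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the small classification head is updated, this style of retraining stays within the compute and bandwidth budgets typical of edge deployments.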
The ability to realign without starting over is crucial at the edge. Creation of an AI algorithm typically involves large data volumes that require the processing power of a centralized data center. Limited processing power and network bandwidth dictate that edge-based updates to AI algorithms be only incremental.
Building trust in AI outputs
Ultimately, agencies want their AI to deliver accurate insights and predictions. Just as important, they want those outputs to be trusted by the people who rely on them. That’s where addressing concept drift becomes crucial.
AI is still new to many people. Government employees and citizens alike might be hesitant to trust AI analyses and recommendations. The more often AI outputs are found to be inaccurate, the more user skepticism will grow. By actively addressing concept drift, agencies can ensure the accuracy and confidence of their AI models. In particular, they can avoid false positives and false negatives that erode trust.
Content-streaming services use AI for purposes that are helpful but hardly high stakes. Government agencies will increasingly deploy AI in mission-critical use cases that can have a significant impact on personnel and citizens. Managing concept drift can make sure algorithms deliver the insights and predictions agencies need -- and drive the acceptance that maximizes their investments in AI.