Cutting communications overhead for distributed learning apps
Scientists at the Army Research Lab have shown that distributed deep learning algorithms can deliver the same performance as algorithms that run at a single, centralized location.
As the military depends more on edge devices that provide data-driven insights and allow warfighters to better collaborate, machine-learning applications face challenges sharing their data in contested, congested and bandwidth-constrained environments.
Now, scientists at the Army Research Lab have shown that distributed deep learning algorithms, meaning those deployed in the field, can deliver the same performance as typical learning algorithms that run at a single, centralized location. What's more, the ARL researchers found that learning time decreases as the number of devices or agents involved in distributed learning grows.
"Distributed learning algorithms typically require numerous rounds of communication among the agents or devices involved in the learning process to share their current model with the rest of the network," ARL’s Jemin George said. "This presents several communication challenges" on the battlefield.
The scientists gave the algorithm “a distributed supervised-learning problem, in which a set of networked agents collaboratively train their individual neural networks to recognize handwritten digits in images, while aperiodically sharing the model parameters with their one-hop neighbors,” according to the research paper. Each agent aims to train its own neural network, broadcasting its findings to its neighbors as it learns through the shared data.
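The setup the paper describes, networked agents each training a local model and periodically mixing parameters with one-hop neighbors, can be sketched roughly as follows. The ring topology, toy gradient step, and uniform mixing weights here are illustrative assumptions, not details from the ARL work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-agent ring topology: each agent's one-hop neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

# Toy "model parameters" per agent (stand-ins for neural-network weights).
params = {a: rng.normal(size=5) for a in neighbors}

def local_gradient_step(w, lr=0.1):
    """Placeholder for each agent's own SGD step on its local data."""
    return w - lr * w  # toy gradient: pulls weights toward zero

def consensus_round(params, neighbors):
    """Each agent averages its parameters with its one-hop neighbors."""
    mixed = {}
    for a, nbrs in neighbors.items():
        stack = [params[a]] + [params[n] for n in nbrs]
        mixed[a] = np.mean(stack, axis=0)
    return mixed

for _ in range(50):
    params = {a: local_gradient_step(w) for a, w in params.items()}
    params = consensus_round(params, neighbors)

# After repeated mixing, all agents' models agree closely.
spread = max(np.linalg.norm(params[a] - params[0]) for a in params)
print(spread < 1e-6)
```

The mixing step is what lets each agent benefit from data it never sees directly; the disagreement between agents shrinks at every round of neighbor averaging.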
A triggering mechanism allows the individual agents to communicate with neighboring devices only if the learning model has significantly changed since it was last transmitted. This process significantly decreases the communication among the agents, but does not affect the learning rate or the accuracy of the final learned model, George said.
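A minimal sketch of such a trigger, assuming a relative-drift threshold on the parameter vector. The threshold form and value are assumptions for illustration; the actual triggering condition in the ARL work may differ:

```python
import numpy as np

def should_broadcast(current, last_sent, threshold=0.05):
    # Event trigger: share parameters only when they have drifted more
    # than `threshold` (relative L2 norm) since the last broadcast.
    # The threshold form and value are illustrative assumptions.
    drift = np.linalg.norm(current - last_sent)
    return drift > threshold * np.linalg.norm(last_sent)

# Toy run: weights drift a little each step; count triggered broadcasts.
rng = np.random.default_rng(1)
w = rng.normal(size=10)
last_sent = w.copy()
broadcasts = 0
for step in range(100):
    w = w + 0.02 * rng.normal(size=10)  # stand-in for a local SGD update
    if should_broadcast(w, last_sent):
        last_sent = w.copy()  # here the agent would transmit to neighbors
        broadcasts += 1
print(broadcasts)  # far fewer than the 100 periodic transmissions
```

Compared with broadcasting after every update, the agent transmits only when its model has changed enough to matter, which is the source of the communication savings the article describes.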
Results indicate that the agents in the distributed trained networks perform as well as those in a centrally trained network.
The new technique significantly decreases the communications overhead, by up to 70% in some cases, without impacting the learning rate or performance accuracy, ARL said. It will enable further development of the Internet of Battlefield Things program, which investigates warfighters’ use of embedded systems and machine intelligence to improve defense capabilities.
The researchers will evaluate the technique on larger military datasets and hope to eventually run the algorithm on edge devices, George said.