Unmasking AI-assisted cyber attacks
Researchers are developing algorithms that can detect when malware uses adversarial machine learning to attack networks and evade detection.
Insider attacks. Outsider attacks. Cybersecurity pros try to protect their networks from both types of incidents using white-box and black-box testing. The former assumes the culprit has full or nearly full knowledge of the network; the latter assumes the culprit has no authorized access to the network and no understanding of its security.
But what about attacks from those who have some critical knowledge about a network's security?
Shouhuai Xu, professor of computer science at the University of Texas at San Antonio, worries that cybersecurity pros are missing “gray box” attacks, in which the hacker has limited knowledge of the target’s security and applies adversarial machine learning to evade network defenses and gain higher-level access to the network.
“If terrorists know 80 percent about how the FBI is going after them, they can change their behavior to evade the FBI with a good chance, say, 80 percent,” Xu told GCN. “The more the attacker knows, the more damaging attack they can wage.”
Cybersecurity pros often don’t even know how frequently gray box attacks take place. “This is what we are working on -- quantifying the degree of prevalence in the real world,” he said. Researchers suspect gray box attacks are taking place, “but the data is often considered sensitive and hard for academic researchers to get,” he said.
Xu has just received a $500,000 grant from the National Science Foundation to develop a machine-learning algorithm that will detect such intelligent evasion.
One of the methods existing cybersecurity programs use to identify malware is to look for “signature” behaviors. Programs operating on a network are classified as either benign or malicious and, in the latter case, are assigned to a “cluster” of malware that shares signature characteristics. “Intelligent evasion comes from the fact that the attacker knows how the defender classifies and clusters malware, and therefore the attacker can intelligently manipulate the behavior of the malware to disrupt the classification and clustering,” Xu said. In short, the attacker can change the malware to evade detection.
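How little knowledge an attacker needs is easy to demonstrate. The sketch below is a toy model, not Xu’s system: it assumes synthetic binary behavior features and uses scikit-learn’s LogisticRegression as a stand-in for the defender’s classifier. An attacker who knows only which features the defender monitors can fit its own surrogate model, then greedily flip the features that most incriminate a sample until the surrogate calls it benign.

```python
# Toy "gray box" evasion sketch -- NOT Xu's system. Features, data and
# models are all hypothetical stand-ins built with numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_features = 20  # e.g., binary flags for monitored API calls

# Synthetic training data: samples setting most of the first 5 features
# are labeled malicious (1); everything else is benign (0).
X = rng.integers(0, 2, size=(1000, n_features))
y = (X[:, :5].sum(axis=1) >= 3).astype(int)

# The attacker never sees the defender's model; it fits its own surrogate.
surrogate = LogisticRegression(max_iter=1000).fit(X, y)

def evade(sample, model, max_flips=20):
    """Greedily flip whichever feature most lowers the malicious score."""
    x = sample.copy()
    flips = 0
    while model.predict([x])[0] == 1 and flips < max_flips:
        trials = [np.where(np.arange(len(x)) == j, 1 - x, x)
                  for j in range(len(x))]
        scores = [model.predict_proba([t])[0, 1] for t in trials]
        x = trials[int(np.argmin(scores))]
        flips += 1
    return x, flips

malware = np.ones(n_features, dtype=int)  # a clearly malicious sample
_, flips = evade(malware, surrogate)
print(f"feature flips needed to evade the surrogate: {flips}")
```

Because perturbations that fool a surrogate model tend to transfer, a handful of behavior changes can often slip past the defender’s real classifier, too, even though the attacker never saw it.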
Xu’s team is designing classification and clustering algorithms that are stronger than existing ones and that will require greater changes in behaviors by malware to evade detection. “We want to make them resilient, meaning that in order to defeat the classification or clustering defense, the change has to be substantial,” Xu said. And such changes can be detected. “If we can force the attacker to make big changes, we are winning the war,” he added.
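To make the resilience idea concrete, the following sketch (again hypothetical, with synthetic data) hardens the toy classifier with adversarial training, one common hardening technique and not necessarily the team’s algorithm: the training set is augmented with randomly perturbed malware samples that keep their malicious label, so small behavioral edits no longer cross the decision boundary. The attacker’s evasion cost -- the number of feature flips needed to get a benign verdict -- can then be compared across the baseline and hardened models.

```python
# Resilience sketch -- a hedged illustration using adversarial training.
# Data, features and models are synthetic, as in the previous sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_features = 20
X = rng.integers(0, 2, size=(2000, n_features))
y = (X[:, :5].sum(axis=1) >= 3).astype(int)  # 1 = malicious

def evasion_cost(model, sample, max_flips=20):
    """Count greedy feature flips until the model says benign."""
    x = sample.copy()
    for flips in range(max_flips):
        if model.predict([x])[0] == 0:
            return flips
        trials = [np.where(np.arange(len(x)) == j, 1 - x, x)
                  for j in range(len(x))]
        scores = [model.predict_proba([t])[0, 1] for t in trials]
        x = trials[int(np.argmin(scores))]
    return max_flips

baseline = LogisticRegression(max_iter=1000).fit(X, y)

# Hardened model: augment training with randomly perturbed malware that
# keeps its malicious label, so small edits no longer flip the verdict.
mal = X[y == 1]
flip_mask = rng.random(mal.shape) < 0.15  # flip roughly 15% of bits
X_aug = np.vstack([X, np.abs(mal - flip_mask)])
y_aug = np.concatenate([y, np.ones(len(mal), dtype=int)])
hardened = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

sample = np.ones(n_features, dtype=int)
print("flips vs. baseline:", evasion_cost(baseline, sample))
print("flips vs. hardened:", evasion_cost(hardened, sample))
```

The exact counts vary with the data, but the hardened model typically demands more, and larger, changes -- exactly the kind of “substantial” manipulation Xu argues a detector can then flag.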
Cybersecurity pros, like the human body’s own defenses, are engaged in an ongoing battle. “It’s like biology,” Xu said. “When we encounter a new virus, you either defeat it or survive it. The immune system learns to recognize the virus,” he said. “We are mimicking that defense and going beyond by unmasking the disguised new threats.”