DARPA enlists bots to fight social engineering
DARPA wants to take the onus for detecting phishing schemes off the employee by using bots to detect and identify the sources of social engineering campaigns.
The Defense Advanced Research Projects Agency wants a better way to automatically sniff out social engineering attacks.
A growing problem across all sectors, social engineering tricks people into inadvertently downloading malware onto their device, from which the malicious code can make its way onto an agency network.
Currently, best practices for avoiding these kinds of attacks depend on every employee verifying links in emails, a skill that most users lack. And in government, so many users have access to privileged information that they present a target-rich environment for attackers.
DARPA, though, wants to take the onus for detecting social engineering schemes off the employee with its Active Social Engineering Defense program, which proposes to use bots to detect and identify the sources of social engineering campaigns.
ASED will use bots to mediate communications between attackers and potential victims to better identify attacks and coordinate investigations, according to the broad agency announcement. The bots would intervene when a victim appears to be under attack, validate the identity of the potential attacker and share information about the attack among themselves.
Once an attack is detected, the program envisions the use of automated, "virtual alter-ego" bots that work together to trace the attacker's identity. Each user would be assigned a set of alter-ego bots for purpose-based communication channels -- much the same way humans use several phone numbers, email addresses and social media accounts, depending on the purpose of the communications.
Monitoring multiple channels across many users offers two main advantages. First, it creates multiple vantage points for detecting broad phishing attacks. Second, to spoof the identity of someone the victim already trusts, an attacker must select the exact channel for that identity. For example, an attacker who wants to phish a bank customer must know the exact email address the victim uses to communicate with the bank. If the attacker tries to lure multiple victims with the same email, each virtual alter ego will receive similar phishing attempts, creating a detectable signature.
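The cross-channel signature described above can be illustrated with a minimal sketch: if the same (normalized) message lands in the inboxes of several distinct alter-ego channels, it is flagged as a likely broad phishing campaign. The function and channel names here are invented for illustration and are not part of the ASED program.

```python
import hashlib
from collections import defaultdict


def normalize(message):
    """Crude normalization: lowercase and collapse whitespace so
    near-identical lures hash to the same fingerprint."""
    return " ".join(message.lower().split())


def detect_broad_phish(inbox_by_alter_ego, threshold=2):
    """Flag message fingerprints that arrive at `threshold` or more
    distinct alter-ego channels -- the detectable signature a
    multi-victim phishing blast would create."""
    seen = defaultdict(set)  # fingerprint -> alter egos that received it
    for alter_ego, messages in inbox_by_alter_ego.items():
        for msg in messages:
            fp = hashlib.sha256(normalize(msg).encode()).hexdigest()
            seen[fp].add(alter_ego)
    return [fp for fp, egos in seen.items() if len(egos) >= threshold]
```

A real system would need fuzzier matching (attackers vary wording per target), but the principle is the same: multiple monitored channels turn a mass campaign into a correlated, detectable event.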
DARPA envisions each bot managing a set of resources -- sandboxed virtual machines or disposable accounts -- that it can trade to gain identifying information about the attacker. It also wants technologies that semi-automatically help victims edit, author, mediate or curate responses or distractions.
To evaluate the automated defenses, DARPA will build a test range using the email/phone systems of a real organization and wants an evaluation team to generate a series of realistic attacks so it can compare ASED technologies against existing baselines.
Abstracts are due Sept. 19. More information on ASED is available in the broad agency announcement.