Anatomy of a bot campaign
A new report digs into the behavior and strategies that guide botnet-directed campaigns.
As government agencies, election officials and private citizens wrestle with the implications of Russian-directed online influence campaigns, a new report digs into the behavior and strategies that guide botnet-directed campaigns. SafeGuard, a cybersecurity and digital risk company that sells bot detection services, examined 320,000 bot accounts and analyzed their content and metadata for behavioral patterns.
"Despite the nomenclature, bots are not a uniform army of automatons blanketing Twitter with the same tweets," the report's authors wrote. "These bot operations are far more sophisticated than the psy-ops of yesteryear."
SafeGuard's research indicated that, rather than acting as robots that parrot the same message, many bots have specific and distinct purposes. Some are designed to mimic supporters of President Donald Trump; others downplay the narrative that Russia interfered in the 2016 U.S. presidential election.
The research backs up what the U.S. intelligence community and disinformation experts have claimed: these bots largely do not create division, but rather are designed to exploit and amplify existing discord.
Additionally, the bots are often "purpose-built to connect with one another to create amplification nodes," following and retweeting each other so messages appear to be coming organically from a wide range of Americans, giving a disinformation operation "the paradoxical benefits of [both] individualized specificity and generalized scale."

While these different bot networks are always pumping out enough content to give the appearance of a real user, they tend to become more active at specific moments and in reaction to relevant news events.
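To make the "amplification node" idea concrete, here is a minimal sketch of one way such clusters could be surfaced from retweet data. It assumes an iterable of (retweeter, author) account-ID pairs, and the function name and reciprocity threshold are illustrative choices, not SafeGuard's actual methodology.

```python
# Hypothetical sketch: surfacing candidate "amplification nodes" from retweet
# records. `retweets` is assumed to be an iterable of (retweeter, author)
# account-ID pairs; `min_mutual` is an arbitrary illustrative threshold.
from collections import Counter

import networkx as nx

def amplification_clusters(retweets, min_mutual=3):
    """Group accounts that repeatedly retweet each other in both directions."""
    pair_counts = Counter(retweets)  # (retweeter, author) -> retweet count
    g = nx.Graph()
    for (a, b), n in pair_counts.items():
        # Keep only reciprocal pairs: a boosts b and b boosts a,
        # each at least `min_mutual` times.
        if n >= min_mutual and pair_counts.get((b, a), 0) >= min_mutual:
            g.add_edge(a, b)
    # Each connected component is a candidate amplification cluster.
    return [sorted(c) for c in nx.connected_components(g)]
```

Real detection systems weigh many more signals (timing, content similarity, account age), but even this toy graph view captures the report's core observation: the accounts gain reach by boosting one another rather than by attracting organic followers.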
"The spike is an attempt to intercept the news and the higher volume after the fact represents the continuing campaign to shape perception," the report stated.
Raj Samani, chief scientist for cybersecurity firm McAfee, said in an Aug. 31 post that the SafeGuard research provided further evidence of the "remarkable" effectiveness and sophistication of botnet-fueled online influence campaigns.
"Leveraging a system of amplification nodes, as well as testing of messaging (including hashtags) to determine success rates the botnet operators demonstrate a real understanding on manipulating popular opinion on critical issues," wrote Samani.
The question of whether, and how much, to police social media platforms for bot activity has at times vexed policymakers, who must first be able to accurately identify and segregate foreign-directed online content from the constitutionally protected activities of American citizens and residents.
Sen. Lindsey Graham (R-S.C.) proposed legislation to expand the definition of fraud to cover botnets and malware. At an Aug. 21 congressional hearing, Associate Deputy Attorney General Sujit Raman told Graham that the law would be "very helpful" to the Department of Justice's efforts to protect the 2018 midterm elections and future contests from similar influence campaigns.
Sen. Sheldon Whitehouse (D-R.I.), a cosponsor of that bill, encouraged DOJ to use its existing authorities to crack down on the practice, arguing that there is little societal benefit in allowing parties free rein to leverage automation in the social media sphere.
"There is no good to a botnet as far as I can tell," Whitehouse said at the same hearing. "It's like a weed in the garden; anytime you take one out, it's good."
This article was first posted to FCW, a sibling site to GCN.