Can Russian cyber meddling be stopped?
The low-cost electronic meddling that sows suspicion in the democratic process is likely to continue, a cybersecurity expert told a Senate panel.
Agencies had better get used to online campaigns that spread disinformation, a social media expert told a Senate panel.
The massive numbers of fake email comments tagged to Russian IP addresses that flooded servers at the Federal Communications Commission, for example, might also swamp the servers of other federal agencies in the future, Clint Watts, a fellow at the Foreign Policy Research Institute, told a Jan. 17 Senate Commerce, Science and Transportation Committee hearing on social media and terrorism.
The 500,000 comments submitted to the FCC's public comment system are symptomatic of a larger effort to attack the integrity of the federal government's systems and sow suspicion in the democratic process, according to Watts, a former FBI special agent who served on the Joint Terrorism Task Force.
"It's a 'you can't trust the process'" approach, he said, adding that the Russian government has used similar techniques on its own population to make them apathetic and mistrustful of their own elections.
According to Watts, the Russian government has seen success with the low-cost program, which the U.S. government has yet to firmly address.
He predicted the program will continue in the U.S. and spread to other countries, such as Mexico and Myanmar, where still-emerging technological environments mean many users are less technologically literate.
To address the problem, Watts recommended social media companies verify the authenticity of their users. "Account anonymity today allows nefarious social media personas to shout the online equivalent of 'fire' in a movie theater," Watts said in his written testimony.
He also suggested pulling the plug on "social bots" that can broadcast high volumes of misinformation. They "can pose a serious risk to public safety and when employed by authoritarians a direct threat to democracy," he said. Limits should also be developed so that non-automated accounts can make only a certain number of posts during an hour, day or week. Watts also suggested social media companies use human verification systems like CAPTCHA to reduce automated broadcasting.
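The per-hour or per-day posting caps Watts described amount to a rate limit on accounts. A minimal sketch of one common approach, a sliding-window limiter, is shown below; the account identifiers, limits and class names are illustrative assumptions, not anything the platforms have described.

```python
import time
from collections import defaultdict, deque

# Illustrative sliding-window rate limiter; account IDs and limits are hypothetical.
class PostRateLimiter:
    def __init__(self, max_posts, window_seconds):
        self.max_posts = max_posts
        self.window = window_seconds
        self.history = defaultdict(deque)  # account_id -> timestamps of recent posts

    def allow_post(self, account_id, now=None):
        now = time.time() if now is None else now
        timestamps = self.history[account_id]
        # Drop timestamps that have fallen outside the window.
        while timestamps and now - timestamps[0] >= self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_posts:
            return False  # over the cap for this window; reject or queue the post
        timestamps.append(now)
        return True

# Example: at most 20 posts per hour for a non-automated account.
limiter = PostRateLimiter(max_posts=20, window_seconds=3600)
print(limiter.allow_post("user_123"))  # True until the cap is reached
```

A limiter like this caps how fast any single account can broadcast, which blunts the volume advantage of automated posting without blocking ordinary users.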
Representatives from social media companies Facebook, Twitter and YouTube, meanwhile, told the committee they are increasingly implementing machine learning and artificial intelligence to detect terror recruitment and messaging on their platforms.
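The companies did not detail how those systems work, but automated flagging of this kind typically starts with a text classifier that scores posts for review. The sketch below, assuming scikit-learn and entirely hypothetical training examples, shows the general idea only; the platforms' production systems are proprietary and far more sophisticated.

```python
# Minimal sketch of automated content flagging with a text classifier.
# Training texts and labels are hypothetical placeholders (1 = violates policy, 0 = benign).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "join our violent cause today",
    "recruiting fighters for the front",
    "local bake sale this weekend",
    "city council meeting agenda",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Posts scoring above a review threshold would be routed to human moderators.
print(model.predict_proba(["volunteers needed for the cause"])[:, 1])
```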
For instance, YouTube Director of Public Policy and Government Relations Juniper Downs told the panel that since June, her company has removed over 160,000 violent extremist videos and terminated some 30,000 channels for violation of policies against terrorist content.
As the next election looms, those companies said they are preparing to vet political ads and other content that might be steered by questionable sources.
Downs said her company is looking for more transparency and verification about who is behind certain ads that appear on the platform, as well as a transparency report that would provide more detail on that content.
Watts warned the Russian effort may be more insidious than those of other bad actors, such as terrorist groups, because the Russian agents "operate within the rules" of the platforms and don't use the inflammatory language and terms that AI and machine learning systems are trained to discern.
A version of this article was first posted to FCW, a sibling site to GCN.