How smart are social bots?
Bot networks are clever enough to harvest personal information from social networking sites. But are these social bots smart, or are we just being dumb?
In 1950, computer pioneer Alan Turing proposed the Imitation Game, in which a person questions two unseen subjects, one a machine and the other a human, in an effort to tell them apart. The game, described in his paper "Computing Machinery and Intelligence," has become known as the Turing Test.
“I believe that in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
It is now 61 years since Turing's prediction. He set the bar pretty low, giving the machine only three chances in 10 of winning. How are we doing?
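To make that arithmetic explicit, here is a minimal sketch; the 70 percent figure is Turing's, and treating the interrogator's guess as a simple probability is a simplification for illustration:

    # Turing's benchmark: after five minutes, an average interrogator
    # identifies the machine correctly at most 70 percent of the time.
    p_correct = 0.70

    # The machine "wins" the Imitation Game whenever the guess is wrong.
    p_machine_wins = 1 - p_correct

    print(p_machine_wins)  # 0.30, i.e., three chances in 10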
Related coverage:
New cyber threats put government in the cross hairs
'Socialbot' fools Facebook, and users give up their info
With the creation of the Internet, our increasing use of remote transactions and interactions, and especially with the advent of social networking, the question has taken on more than academic importance. It is at the heart of the problem of how we know who or what we are dealing with online and how to ensure our security and our privacy.
In an interesting experiment, researchers at the University of British Columbia were recently able to infiltrate Facebook with a herd of automated "social bots" that went largely undetected by the network's defenses for eight weeks, friending thousands of users and harvesting their personal information. During a six-week stretch of that campaign, the bots sent out 3,517 friend requests to human Facebook users, and 2,079 of them, or 59 percent, were accepted. At first glance, it looks as if the social bots won the Imitation Game and passed the Turing Test with flying colors.
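As a quick check on that acceptance rate, the figures below are the study's counts as reported above:

    requests_sent = 3517      # friend requests sent by the socialbots
    requests_accepted = 2079  # requests accepted by human users

    acceptance_rate = requests_accepted / requests_sent
    print(round(acceptance_rate * 100))  # 59 percent accepted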
We need to take those results with a grain of salt, however. The bots were good at defeating defenses such as the CAPTCHA codes used to identify and block spamming bots, and at gathering and posting plausible material to create the impression that real people were behind the accounts. But the bots were never really conversing with the other Facebook users, and they didn't pass the Turing Test because the users never actually questioned them.
It turns out that the automated software bots aren't really that smart; it's the Facebook users who were acting dumb. When it comes to social networking, we are our own worst enemies.
Social networking creates online communities that can be used for socializing and collaboration, and it is increasingly being used in the workplace. That has raised a lot of questions about the security and privacy controls of these systems. But the first question we need to ask about these networks is how we behave on them. Are "friends" being collected indiscriminately as status symbols? Is personal information being posted inappropriately? It is very difficult, if not impossible, to protect a person who is determined to be his own worst enemy by palling around with semi-sentient social bots.
If he were around today, Alan Turing might be disappointed in the performance of our 21st-century computers in the Imitation Game, but he might be even more disappointed in the performance of the people playing it. Artificial intelligence won't look very impressive if it is measured against people who have lowered themselves to the level of machines.