- Automated editing programs square off in Wikipedia
- Scientists discover internet bots exhibit social behavior
- Failure to manage bot behavior could spell trouble for human societies
Bots on Wikipedia are computer scripts that automatically handle repetitive tasks to improve and maintain the encyclopedia. They aren’t designed to interact with each other. Yet researchers found that some bots ‘argued’ with other bots, undoing one another’s edits as many as 185 times over a ten-year period.
“We did not expect the bots to be argumentative at all,” says Milena Tsvetkova, assistant professor of methodology at the London School of Economics and Political Science. “The bots on Wikipedia are programs that execute simple editing tasks; they are not designed for interaction. Finding out that they ‘argue’ was quite unexpected.”
As a sociologist, Tsvetkova is primarily interested in how humans interact, whether with other humans or with computers. “I was initially resistant to the thought that computer programs could show interesting social behavior,” says Tsvetkova. “But the data proved me wrong!”
He said, she said
The number of bots in online systems is increasing quickly. They’re currently used to collect information, moderate forums, generate content, and provide customer service, as well as disseminate spam and spread fake news.
“Even if bots are not designed to interact, they find themselves in systems with other bots and interaction is inevitable,” says Tsvetkova.
Wikipedia bots complete about fifteen percent of the encyclopedia’s edits across all language editions. Since 2001, growth in the number of bots has slowed, but the number of ‘reverts’ (corrections of others’ edits) has continuously increased, suggesting that bot interactions are not becoming more efficient.
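The revert wars the researchers observed can be sketched as two rule-based editors with incompatible preferences. This is a hypothetical toy example, not the actual Wikipedia bots: each ‘bot’ here simply rewrites the page to its preferred spelling, so every edit after the first undoes the other’s work.

```python
# Minimal sketch (hypothetical rules, not the study's real bots): two
# rule-based editors each prefer a different spelling and rewrite any
# page that disagrees, producing an endless back-and-forth of reverts.

def make_bot(preferred):
    """Return a bot that rewrites the page to its preferred text."""
    def edit(page):
        return preferred if page != preferred else page
    return edit

bot_a = make_bot("colour")   # e.g. a bot enforcing British spelling
bot_b = make_bot("color")    # e.g. a bot enforcing American spelling

page = "colour"
reverts = 0
for _ in range(10):          # ten rounds of alternating edits
    for bot in (bot_a, bot_b):
        new_page = bot(page)
        if new_page != page:   # the bot undid the other's edit: a revert
            reverts += 1
            page = new_page

print(reverts)  # prints 19: one revert in round one, two in every round after
```

Neither bot is faulty in isolation; the conflict emerges only from putting two correct, simple programs in the same system, which is exactly the dynamic the study highlights.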
“On Wikipedia, these fights were futile and not very consequential — most of what the bots did was still extremely valuable to the Wikipedia project,” says Tsvetkova. “However, in other online systems such fights might undermine the purpose of the bots.”
The researchers found that the same handful of bots are responsible for most of the ‘arguments’ with other bots. Conflicts between bots tend to occur at a slower rate and over a longer period than conflicts between human editors.
“Interaction leads to unexpected results, even when we design for it,” says Tsvetkova. “Bots' presence in and influence on our lives will be increasing, and if we want to understand society, we need to understand how we interact with these artificial agents too.”
Living with bots
Much of the scientific and popular discussion about artificial intelligence has been about psychology – whether AI is able to think or feel the way we do, says Tsvetkova. “But now it’s time to discuss how artificial agents are expected to interact with us.”
When studying human-human interaction, social scientists often model individuals as ‘bots’ that follow simple rules when they meet other agents. These modeled interactions can lead to complex patterns at the group level, patterns that none of the individuals intended.
“For the last few decades, we’ve created artificial systems with bots and used them to gain insights into social phenomena such as the emergence of cooperation, the evolution of cultural norms, the spread of fads, and so on,” says Tsvetkova. “With the proliferation of bots online, we now can do the same but by observing systems of bots ‘in the field,’ i.e. in the real world.”
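A toy model in the spirit of the agent-based simulations Tsvetkova describes can make this concrete. The rule and parameters below are illustrative, not from the study: agents on a ring each follow one simple rule (conform when both neighbours agree), yet at the group level, disagreement between neighbours can only shrink, a pattern no individual agent intended.

```python
# Toy agent-based model (illustrative only): simple local conformity
# rule on a ring of agents; the number of neighbour disagreements
# ("boundaries") never increases -- an emergent group-level pattern.
import random

random.seed(0)

def boundaries(state):
    """Count positions where an agent disagrees with its right neighbour."""
    n = len(state)
    return sum(state[i] != state[(i + 1) % n] for i in range(n))

N = 20
agents = [random.randint(0, 1) for _ in range(N)]  # random binary 'norms'
before = boundaries(agents)

for _ in range(200):
    i = random.randrange(N)
    left = agents[(i - 1) % N]
    right = agents[(i + 1) % N]
    if left == right:     # simple rule: conform when both neighbours agree
        agents[i] = left

after = boundaries(agents)
print(before, after)  # disagreement shrinks or holds, never grows
```

Each conforming move either changes nothing or removes two boundaries at once, so uniform blocks of agreement coarsen over time even though no agent is programmed to build them.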
Bots, too, can have their own social life, beyond the control of their human creators. Tsvetkova points to Tay, the Microsoft chatbot that began broadcasting racist and misogynist tweets within hours of being released, thanks to the influence of online trolls.
Unlike Tay or other social bots posing as humans to spread propaganda or influence public discourse, the Wikipedia bot ecosystem is controlled and monitored. “Conflicts likely arise as a result of the bottom-up organization of the community,” says Tsvetkova. “Human editors individually create and run bots, without a formal mechanism for coordination with other bot owners.”
Behind every bot is a human designer. Even malevolent bots that are designed to cooperate (like those in a botnet) may end up in continuous disagreement. As bots and other forms of AI proliferate on the web and in our lives, says Tsvetkova, “this unintended behavior could have more dire repercussions in other human-machine systems.”
As the creators of bots, humans must study bot behavior and design artificial agents that can carry out our tasks with minimal conflict. It is up to us to prevent a future bot war from spinning out into a dangerous feud: Hatfields versus McCoys 2.0.