Internet Bots Fight Each Other Because They're All Too Human

A funny thing happens when you lock a bunch of bots in a virtual room: Sometimes they don’t get along.

No one saw the crisis coming: a coordinated vandalism campaign inserting Squidward references into articles that had nothing to do with Squidward. In 2006, Wikipedia was just hitting its stride, and it couldn’t afford to have SpongeBob SquarePants-related high jinks sullying the site's growing reputation. It was an embarrassment. Someone had to stop Squidward.

The Wikipedia community knew it couldn’t possibly mobilize human editors to face down the trolls—the onslaught was too great, the work too tedious. So instead an admin cobbled together a bot that automatically flagged errant insertions of the Cephalopod Who Shall Not Be Named. And it worked. Wikipedia beat back the Squidward threat, and in so doing fell into a powerful alliance with the bots. Today, hundreds of algorithmic assistants fight all manner of vandals, fix typos, and even create articles on their own. Wikipedia would be a mess without them.

But a funny thing happens when you lock a bunch of bots in a virtual room: Sometimes they don’t get along. Sometimes a pair of bots will descend into a slapfight, overwriting each other’s decisions thousands of times for years on end. According to a new study in PLOS ONE, it happens a lot. Why? Because no matter how cold and calculating bots may seem, they tend to act all too human. And these are the internet's nice, not-at-all racist bots. Imagine AI-powered personal digital assistants in the same room yelling at each other all day. Google Home versus Alexa, anyone?

On Wikipedia, bots handle the excruciatingly dull and monotonous work that would drive an army of human editors mad—if an army of editors could even keep up with all the work. A bot does not tire. It does not get angry—well, at least not at humans. It’s programmed for a task, and it sees to that task with a consistency and devotion humans can’t match.

While disagreements between human Wikipedia editors tend to fizzle, fights between bots can drag on for months or years. The study found that bots on the English version of Wikipedia are far more likely to argue than human editors: Over the course of a decade, each bot reverted another bot an average of 105 times, while each human editor reverted another an average of just three times. Bots get carried away because they simply don't know any better—they're just bits of code, after all.

But that doesn't mean they aren't trustworthy. Bots are handling relatively simple tasks like spellchecking, not making larger editorial decisions. Indeed, it's only because of the bots' work that human editors can concentrate on those big-picture problems at all. Still, when they disagree, they don't rationally debate like humans might. They're servants to their code. And their sheer reach—continuously scanning more than 5 million articles in the English Wikipedia alone—means they find plenty of problems to correct and potentially disagree on.

And bots do far more than their fair share of work. The number of human editors on the English Wikipedia may dwarf the number of bots—some 30,000 active meatspace editors versus about 300 active editors made purely out of code—but the bots are insanely productive contributors. “They're not even quite visible if you put them on a map among other editors,” says the University of Oxford’s Taha Yasseri, a co-author of the study. “But they do a lot. The proportion of all the edits done by robots in different languages would vary from 10 percent, up to 40 even 50 percent in certain language editions.” Yet Wikipedia hasn’t descended into a bloody bot battlefield. That’s because humans closely monitor the bots, which do far more good than harm.

Barbaric Bots

But bots inevitably collide, Yasseri contends. For example, the study found that over the course of three years, two bots that monitor for double redirects on Wikipedia had themselves quite the tiff. (A redirect happens when, for instance, a search for "UK" forwards you to the article for "United Kingdom." A double redirect is a redirect that forwards to another redirect, a big Wikipedia no-no.) Across some 1,800 articles, Scepbot reverted RussBot’s edits a total of 1,031 times, while RussBot returned the favor 906 times. This happens because of discrepancies in naming conventions—RussBot, for instance, made "Ricotta al forno" redirect to "Ricotta cheese," when previously it redirected to "Ricotta." Then Scepbot came in and reverted that change.
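
To make the mechanics concrete, here is a minimal Python sketch of the kind of double-redirect cleanup both bots perform. It is not either bot's actual code: redirects are modeled as a simple in-memory mapping from a page title to the title it forwards to, whereas the real bots read and write live wiki pages. The tiff in the study arose because the preferred final target itself kept shifting ("Ricotta" versus "Ricotta cheese"), so each bot's cleanup undid the other's.

```python
# A minimal sketch, not either bot's actual code, of the double-redirect
# cleanup that RussBot and Scepbot both perform. Redirects are modeled as
# a dict mapping each redirect page to the title it forwards to; the real
# bots read and write live wiki pages instead.

def fix_double_redirects(redirects: dict[str, str]) -> dict[str, str]:
    """Point every redirect at its final, non-redirect target."""
    fixed = dict(redirects)
    for source, target in redirects.items():
        seen = {source}
        # Follow the redirect chain until we reach a page that is not
        # itself a redirect (or we detect a loop).
        while target in fixed and target not in seen:
            seen.add(target)
            target = fixed[target]
        fixed[source] = target
    return fixed


if __name__ == "__main__":
    # "Ricotta al forno" -> "Ricotta" -> "Ricotta cheese" is a double redirect.
    redirects = {
        "Ricotta al forno": "Ricotta",
        "Ricotta": "Ricotta cheese",
        "UK": "United Kingdom",
    }
    print(fix_double_redirects(redirects))
    # {'Ricotta al forno': 'Ricotta cheese', 'Ricotta': 'Ricotta cheese', 'UK': 'United Kingdom'}
```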

For its part, Wikipedia disputes that these bots are really "fighting" at all.

"If, for example, Scepbot had performed the original double-redirect cleanup and RussBot performed the second double-redirect cleanup, then it would appear that they are 'reverting' each other," says Aaron Halfaker, principal research scientist at the Wikimedia Foundation. "But in reality, the bots are collaborating together to keep the redirect graph of the wiki clean."

Still, Halfaker acknowledges that bots reverting each other can look like conflict. “Say for example you might have an editor that wants to make sure that all the English language lists on Wikipedia use the Oxford comma, and another editor believes that we should not use the Oxford comma.” (Full disclosure: This writer believes the Oxford comma is essential and that anyone who doesn’t use it is a barbarian.) But Wikipedia has a bot approval process to catch these sorts of things. “We're perfectly aware of which bots are running right now,” he says.

Also, Wikipedians monitor their bots at all times. “People often imagine them as fully autonomous Terminator AI that are kind of floating through the Wikipedia ether and making all these autonomous decisions,” says R. Stuart Geiger, a UC Berkeley data scientist who’s worked with Wikipedia bots. “But for the most part a lot of these bots are relatively simple scripts that a human writes.”

The Human Machine

A human. Always a human. A bot expresses human ingenuity and human mistakes. The bot and its creator are, in an intimate sense, a hybrid organism. “Whenever you read about a bot in Wikipedia, think of that as a human,” says Geiger. “A human who's got a computer that they never turn off, and they've got a power tool running on that computer that they can tweak the knobs, they can fiddle the words, they can say they want to replace X with Y.”
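
For a sense of how simple such a power tool can be, here is a hypothetical, minimal sketch of the sort of replace-X-with-Y script Geiger describes. The page texts are plain strings here; an actual Wikipedia bot would fetch and save pages through the site's editing interface, with its human operator watching the edits.

```python
import re

# A hypothetical replace-X-with-Y script of the kind Geiger describes.
# Page texts are held as plain strings; a real bot would edit live wiki
# pages instead, under its operator's supervision.

def replace_everywhere(pages: dict[str, str], x: str, y: str) -> dict[str, str]:
    """Swap whole-word occurrences of x for y in every page text."""
    pattern = re.compile(rf"\b{re.escape(x)}\b")
    return {title: pattern.sub(y, text) for title, text in pages.items()}


if __name__ == "__main__":
    pages = {"Example article": "The colour of the sea depends on the colour of the sky."}
    print(replace_everywhere(pages, "colour", "color"))
    # {'Example article': 'The color of the sea depends on the color of the sky.'}
```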

On the all-too-human front, Yasseri’s study also found cultural differences among the bot communities of different Wikipedia languages. “That was really interesting, because this is the same technology being used just in different environments, and being used by different people,” says Yasseri. “Why should that lead to a big difference?” Bots on the German Wikipedia, for instance, argue relatively infrequently, while the bots on the Portuguese edition took the prize for most contentious.

Those differences may seem trivial, but such insight has profound implications as AI burrows deeper and deeper into human society. Imagine how a self-driving car that’s adapted to the insanity of the German Autobahn might interact with a self-driving car that’s adapted to the relative calm of Portugal’s roadways. The AI inside each has to make nice or risk killing the occupants. So the different ways bots interact on different versions of Wikipedia could foretell how AI-powered machines get along—or don’t—in the near future.

And imagine what happens when the AI found elsewhere on the internet, on Twitter for instance, makes its way into machines: bots that spew fake news, that imitate Donald Trump, that harass Trump supporters. Unlike the benevolent bots of Wikipedia, these fool humans into thinking they're actually people. If you think Wikipedia bots squabbling is problematic, imagine machines with heads full of malevolent AI doing battle.

For now, though, the many bots of Wikipedia collaborate, clash, and keep Squidward in his place.