Behind the Machine's Back: How Social Media Users Avoid Getting Turned Into Big Data

To prevent being tracked by algorithms, we've begun thinking like algorithms.

Social media companies constantly collect data on their users because that's how they provide customized experiences and target their advertisements. Most Twitter and Facebook users know this, and their feelings about the persistent tracking of their social relationships range widely.

What we do know, though, is that users, when they want to, can go behind the machine's back. They know how to communicate with just the humans without tipping their intentions to the algorithm.

In a new paper, University of North Carolina sociologist Zeynep Tufekci explores some of these strategies among Turkish protesters. She looks at these behaviors as analytical challenges for researchers who are trying to figure out what's going on. "Social media users engage in practices that alter their visibility to machine algorithms, including subtweeting, discussing a person’s tweets via 'screen captures,' and hate-linking," Tufekci writes. "All these practices can blind big data analyses to this mode of activity and engagement."

The same practices, though, can be understood from the user perspective as strategies for communicating without being computed. All it takes to execute them is thinking like an algorithm. Here are the three she focuses on in the paper:

The Unlinked Mention

Tufekci focuses on "subtweeting," the practice of talking about someone without referencing them explicitly with the social software. So, "@alexismadrigal is a jerk" is one thing, but "Alexis Madrigal is a jerk" is a subtweet.

This practice, however, extends far beyond the domain of tweets. Talking about someone without explicitly tagging them is a popular practice on Facebook, too. And the variations on the practice show how aware people are that machines can't easily detect spelling variations or infer a person from contextual clues. Sometimes users will misspell the name ("Madirgal"); other times they'll rely on context alone ("New story on The Atlantic from that California guy is terrible.").
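A toy illustration of why the trick works: the regex below stands in for a platform's mention parser (a hypothetical sketch, not any company's actual code), which recognizes only the explicit @handle syntax.

```python
import re

# Hypothetical exact-match mention extractor: it only recognizes the
# @handle syntax, not plain names, misspellings, or contextual references.
MENTION_PATTERN = re.compile(r"@(\w+)")

def extract_mentions(text):
    """Return the handles a naive parser would link to this post."""
    return MENTION_PATTERN.findall(text)

# An explicit mention is machine-readable...
print(extract_mentions("@alexismadrigal is a jerk"))  # → ['alexismadrigal']

# ...but a subtweet leaves the parser nothing to link.
print(extract_mentions("Alexis Madrigal is a jerk"))  # → []
print(extract_mentions("Madirgal strikes again"))     # → []
```

Human readers resolve all three posts to the same person; the parser connects only the first one to an account.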

In some cases, the reference can only be understood in context, as there is no mention of the target in any form. These many forms of subtweeting come with different implications for big data analytics.

People often read these unlinked mentions as ways of escaping the scrutiny of the person being subtweeted. But an equally important benefit is that the Facebook or Twitter algorithm cannot connect the two users more tightly together.

Screenshotting

Another way to escape the algorithmic gaze is to screenshot text instead of linking to a story or person directly. While humans can read the text of a screenshot easily, the algorithms on the major social platforms cannot. This allows for conversations that are silent or invisible to the machine, but work perfectly well for humans.

To a machine, a conversation in screenshots seems like people talking about a photograph, not commenting on the tweets, status updates, or posts of someone on the Internet.
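The difference can be sketched with two hypothetical post records, assuming the platform's analysis indexes only the text field and cannot read the words inside an attached image:

```python
import re

# Hypothetical post records: one quote-links a tweet, one screenshots it.
quote_tweet = {"text": "This take is wild https://twitter.com/user/status/123"}
screenshot_post = {"text": "This take is wild", "image": "screenshot.png"}

def machine_visible_targets(post):
    """Return what a text-only analysis could connect this post to."""
    return re.findall(r"https?://\S+", post["text"])

# The quote tweet ties the two users together in the platform's graph...
print(machine_visible_targets(quote_tweet))
# → ['https://twitter.com/user/status/123']

# ...while the screenshot version leaves nothing for the machine to link.
print(machine_visible_targets(screenshot_post))  # → []
```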

Hatelinking

If those two strategies are about invisibility to the big data collectors, hatelinking is a way of introducing noise into the system. While Facebook or Twitter would know that User A had linked to Story B, the sentiment attached to that link would probably go undetected. The algorithms can see the activity, but they cannot understand it. In cases where activists or critics want a piece of content seen while keeping its creator out of the communication loop, this is an effective method.
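A toy sketch of that asymmetry, assuming (hypothetically) that the platform counts shared links without analyzing the text around them:

```python
import re

def extract_links(text):
    """A naive link extractor of the kind a share-counting system might use."""
    return re.findall(r"https?://\S+", text)

# Two posts with opposite sentiment about the same (made-up) story:
critical = "Can't believe anyone published this garbage: https://example.com/story"
glowing = "Best thing I've read all year: https://example.com/story"

# A link-counting system records an identical share event for both posts,
# while the sentiment surrounding the link goes unread.
print(extract_links(critical))  # → ['https://example.com/story']
print(extract_links(glowing))   # → ['https://example.com/story']
```

To the counter, a hate-link and an endorsement are the same data point; only a human reader sees the difference.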

Alexis Madrigal is a contributing writer at The Atlantic and the host of KQED’s Forum.