These Brave Corporations Did What No Social Platforms Could Do, And I’m Weeping

Facebook and Twitter spent so long worrying about moderation, they forgot they could just pound the "ban" button.

There’s this cliché in crime movies where the ace FBI agent steps under the yellow caution tape surrounding the scene of a murder and tells the bumbling local police, “OK, boys, we’ll take it from here.”

For over a decade now, when it comes to content moderation, social media platforms have played the cop — accidentally shooting themselves in the dick with their own gun, letting the bad guys operate with impunity, doling out mere speeding tickets to Mafia capos, and barely bothering to dust the donut crumbs off themselves when law-abiding citizens come in to file a noise complaint.

Facebook, YouTube, and Twitter have failed over and over to stamp out hate groups, disinformation, and the QAnon mass delusion, allowing them to fester and metastasize into our politics and culture. The mob that stormed the Capitol was a manifestation of this failure: organized online, bloated on disinformation smoothies gavage-fed to them via “up next” sidebars, and whipped into a frenzy by the poster in chief everyone knew the mods wouldn't ever touch. That there were some people who were immune to the platforms' moderation was common knowledge; the companies spent years designing contorted "community standards," endlessly writing and rewriting their content moderation guidelines, and establishing supreme courts to review, approve, and legitimize each decision.

And then the FBI stepped under that yellow tape.

In the end, it was the big-money brands that had never dirtied themselves with the thankless and dismal task of moderating posts and banning users that stepped in. Capitalism drained the fever swamp.

The right to free speech is fundamental, but it is not absolute or — crucially — free from consequences. Amazon, Apple, and Google just made that definitively clear. Which makes it all the more lol that the platforms whose entire business is content have struggled with this for so long. No one wants the decisions about what we see online to be made by opaque corporations. But that is what happened, and it's where we are right now.

The companies that run the infrastructure of social media pulled out their seldom-used banhammers and swung mightily. When it became clear that Parler, a “free speech” alternative to Twitter, had been a gathering place for some who participated in the storming of the Capitol and had hosted discussions of violent threats against politicians and tech executives, Apple quickly removed it from the App Store, and Google removed it from the Google Play storefront. Soon after, Amazon terminated Parler’s cloud hosting, effectively knocking it offline. (Parler sued Amazon demanding reinstatement, but a judge denied the request.) Apple and Amazon aren’t social platforms — and while they do some light content moderation in places like product reviews, policing what people post is not their business.

Other companies that are not Facebook, Google, or Twitter quickly followed suit. Fearing its rentals might be used by insurrectionists, Airbnb blocked all stays in the DC area during Joe Biden’s inauguration. It also said its political fundraising group was halting campaign donations to lawmakers who had voted against certifying the election results. Other companies did the same: AT&T, American Express, Hallmark, Nike, Blue Cross Blue Shield, Cisco, Coca-Cola, Microsoft, and dozens more. Did you catch that? Hallmark!

Financial services firms were also quick to act after the Jan. 6 riot. Stripe, the bloodless online payment processor used by many e-commerce sites, dumped Donald Trump's official website. GoFundMe banned fundraisers for travel to Trump rallies. The payments company PayPal and the e-commerce platform Shopify booted the Trump campaign and associated sites that were promoting lies about the election. These companies may not be in the moderation game, but they are beholden to stockholders and their bottom line. Because of that, they acted swiftly, dealing a bigger body blow to Trump’s power than Facebook and Twitter managed with a thousand disclaimers about election results.

The Great Deplatforming of 2021, which saw Trump removed from social sites and Parler from app stores, isn't the first time financial firms have done in days the moderation that platforms failed to do for years. In December, it was revealed that Pornhub was hosting nonconsensual pornography, some of which included child sex abuse material. Visa and Mastercard pulled their payment processing from the site. Pornhub had *for years* done an appalling job of policing its platform for such material, complaining it was difficult to eradicate. But once the credit card companies acted, Pornhub managed it quickly, removing every video not posted by a verified account.

Tech companies that aren’t social platforms have also taken sweeping steps against extremist content in the recent past. In 2019, Cloudflare, the web infrastructure and security company, dropped 8chan as a customer after the site was linked to the gunman who killed 23 people at a Walmart in El Paso, Texas. In response to the violent far-right protest in Charlottesville, Virginia, in August 2017, Cloudflare also cut off hate sites like the Daily Stormer. Apple Pay and PayPal have terminated their services for a number of hate groups. Squarespace booted hate sites built on it, as did GoDaddy, which would later kick the social platform Gab off its service.

It’s worth noting that these companies only seem to leap into action following high-profile violence, like a killing committed by a mob of extremists. Mere public pressure or petulant, whiny news stories don’t move the dial.


That these companies have likely spent little time considering the free speech nuances of content moderation is, uh, not ideal. The implications are troubling: groups like the Electronic Frontier Foundation have warned against allowing companies like Visa or Cloudflare to have too much power over what is allowed to exist on the open internet.

The Great Deplatforming was a response to a singular and extreme event: Trump's incitement of the Capitol attack. As journalist Casey Newton pointed out in his newsletter, Platformer, it was notable how quickly the full stack of tech companies reacted. We shouldn’t assume Amazon will start taking down sites left and right just because it did this time; this was truly an unprecedented event. On the other hand, do we dare think for a moment that Bad Shit won’t keep happening? Buddy, bad things are going to happen. Worse things. Things we can’t even imagine yet!

Some of you will inevitably note that there’s a common variation on the “OK, boys, we’ll take it from here” trope:

The underappreciated but smart, highly capable, and principled town sheriff intent on solving the case on their own, FBI be damned. But that metaphor fails here because Facebook, YouTube, TikTok, and Twitter certainly haven’t proven themselves highly capable when it comes to content moderation (see the earlier description of shooting themselves in the dick with their own gun, repeatedly, as if they had many Hydra-like dicks that kept regrowing when shot). Twitter’s booting of various peddlers of hate and misinformation this past year came after more than a decade of widespread harassment and abuse. TikTok, the newest and possibly most vital platform, hasn’t quite figured out its moderation strategy yet, fluctuating between deleting videos that are critical of China and allowing sketchy ads. There’s something almost comical about YouTube issuing a “strike” on Trump’s account as if he’s Logan Paul in the Japanese “suicide forest.” And Facebook? Well, Facebook is Facebook.

The first six cases basically read like a greatest hits of Facebook content moderation controversies: hate speech, hate speech, hate speech, female nipples, Nazis and COVID health misinfo. https://t.co/JwByovVT1S

@evelyndouek / Twitter

Long before Facebook, Twitter, and YouTube were excusing their moderation failures with lines like "there’s always more work to be done" and "if you only knew about all the stuff we remove before you see it," Something Awful, the influential message board from the early internet, managed to create a healthy community by aggressively banning bozos. As the site’s founder, Rich "Lowtax" Kyanka, told the Outline in 2017, the big platforms might have had an easier time of it if they’d done the same thing, instead of chasing growth at any cost:

We can ban you if it's too hot in the room, we can ban you if we had a bad day, we can ban you if our finger slips and hits the ban button. And that way people know that if they're doing something and it's not technically breaking any rules but they're obviously trying to push shit as far as they can, we can still ban them. But, unlike Twitter, we actually have what's called the Leper's Colony, which says what they did and has their track record. Twitter just says, “You're gone.”

That it took the events of Jan. 6 and five deaths to finally ban Trump from social platforms is, frankly, shameful, especially given the elaborate and endlessly tweaked justifications from these social sites for permitting posts that are unmistakably, conspicuously malignant. They’ve created their own wonk-filled supreme courts where the judges make six figures to do 15 hours of work per week to argue over what kind of nipples are banned. They have created incomprehensible bibles of moderation rules for throngs of underpaid, outsourced workers who are treated horribly. They’ve written manifestos about plans for “healthy conversations.” They flip-flop over whether to ban neo-Nazis or remonetize an anti-gay hate-monger's channel. They respond to threats to democracy and public health with “the more you know”–style labels and information “hubs.” They have worked their heads so far up their asses that they’ve forgotten they can just smash that “ban” button.

Is it admirable that Amazon, Apple, et al., stepped in to do the moderation work that Facebook, YouTube, and Twitter have failed to do for so long? Not necessarily! Big yikes!

But that’s what happened. Drano works to unclog my shower, but my landlord tells me it ruins the whole pipe system. I don’t expect the plumbing system of the internet to improve; there will always be more monster turds clogging it up. Happy flushing! ●
