

Facebook's Fake News Detector And The Myth Of Technology As Savior

This article is more than 7 years old.

Last week Facebook formally unveiled its solution to the “fake news” epidemic: a series of tools and partnerships that will place a small group of elite organizations as the ultimate arbiters of “truth” over a population of 1.7 billion users spanning the entire globe. But will this new system magically eradicate the scourge of fake news and bring peace and wisdom to the online world?

It is a twist of irony that the very news organizations which howled in protest over the past year at Facebook’s increasing role as censor and moderator, and which protested every change Facebook made to its News Feed that impacted viewership of their articles, are now wholeheartedly embracing and indeed championing Facebook’s new role as the ultimate arbiter of “truth” and of what can be seen across its platform. Gone are the universal condemnations of Facebook’s role as editor-in-chief after it removed the Vietnam War image this past September; in their place are praise and encouragement for a system that will fully entrench absolute editorial control in a small set of hands, with no apparent recourse and no documented appeals process.

Remarkably, there has been no mention of how Facebook will arbitrate cases where journalists object to one of their articles being labeled as “fake news,” and no documented appeals process for how to overturn such rulings. Indeed, this is in keeping with Facebook’s opaque black-box approach to editorial control on its platform, and the company, as expected, did not respond to a request for comment.

Indeed, it is even more remarkable that if one steps back and looks at the system Facebook has announced, it is strikingly similar to the systems used by many repressive governments. China, for example, operates a very similar arbitration system in which a small set of elites determines what is “truth” and what is acceptable for society to consume. Similarly, the line between “fake news” and “true news” can often come down to the government’s word versus that of its citizens, especially as one looks beyond the United States to the global perspective. What happens when the Thai government demands that all reporting worldwide that criticizes its government be declared “fake news” and removed from Facebook? Between maliciously false news on the one side and satirical entertainment on the other lie a million shades of gray, where “truth” is in the eye of the beholder. Even flagging news as “fake” might actually encourage its spread, by turning the label into a badge of honor.

Who do we trust to make these decisions, given that all humans are inherently fallible? In the midst of a week focused intensely on fake news, the journalism community demonstrated that it was not quite the bastion of rigorous fact checking that it had touted, when a number of the world’s marquee news brands all ran a story about a Santa Claus actor comforting a dying boy without ever picking up the phone to verify any of the details. The academic community has not proven to be much better as it struggles with its own verification and reliability crisis. Even the professional fact checkers, it turns out, are reluctant to share any detail about their inner workings or offer the transparency necessary for external auditing of their fact checks. Thus, on closer inspection, the “big three” touted by pundits as the best solution to fake news (journalists, academics and professional fact checkers) do not present quite the envisioned panacea.

What options do we have then? Well, one might be to increase information literacy among online users, to allow them to make their own, more informed, decisions on what to trust. Media outlets might adopt increased transparency around their sourcing and verification processes to address the ways in which false and misleading news has weaponized modern journalistic practice like the inverted pyramid. Or, might a company like Facebook simply ban gullible people from their platforms?

Yet it is fascinating that Facebook, even while touting its new system for fighting “fake news” by labeling and blocking stories flagged by its team of experts, greatly restricts the ability of users themselves to take matters into their own hands. When encountering a questionable article in one’s News Feed, one can, in theory, click on the dropdown in the upper-right of the item and block all future items from that source. For example, a well-meaning friend who regularly shares links to “miracle cure” medicines or “get rich quick” schemes can easily be addressed by simply blocking links from those sites in one’s News Feed.

However, it turns out this ability to block future posts from selected pages in one’s News Feed is quite limited and is unavailable in a large number of sharing scenarios. Indeed, as an experiment I clicked on the upper-right dropdown for every item appearing in my News Feed to see how many offered the “Hide All from XYZ” option. An exceedingly large number of items offered only the option to block the sharing user entirely; there was no way to continue seeing items from that user while blocking the posts he or she (or other users) shared from a particular page.

When I asked Facebook for comment on this and noted that the limitation appeared starkly at odds with its mission to empower users to fight false and misleading news, a spokeswoman confirmed the issue and said only that the company was working on adding additional controls to allow users to block certain content. She would not comment further when pressed on why some types of sharing were not blockable, why the company was only just looking into the matter, or when additional controls might become available.

Moreover, when it comes to advertisements, the company offers only the most rudimentary controls, allowing a user to hide a single specific ad, but not to block all advertisements from a particular company. Allowing users to block all advertisements by a particular company, or those featuring its products, would give users the ultimate control over their information consumption; but given that Facebook’s entire revenue stream is provided by those advertisements, it is highly unlikely that the company will offer such blanket controls. In fact, Facebook’s own announcement of its new fake news system makes no mention of any kind of controls that would enable blocking or flagging advertisements.

Facebook’s approach, of turning to the journalism, academic and professional fact-checking worlds and laundering their results through the shimmering veneer of technological perfection and computerized infallibility, allows it to simultaneously outsource and distance itself from the truth-finding process while making that process appear objective and neutral by delivering it through the lens of algorithms and software. The red warning box appearing beneath a post, counseling a user that the information within has been disputed, offers a polished and clinical front immediately suggestive of intense research by experts in the field, confirmed by massive “big data” algorithms, rather than what might have been a single person with deep partisan biases assigning the label on a whim or because of personal conflicts of interest. We simply have no insight into the level and intensity of research that went into a particular label, the identities of the fact checkers, or the source material they used to confirm or deny an article. The result is the same form of “trust us, we know best” that the Chinese government uses in its censorship efforts.

Silicon Valley must recognize that it does not always have all of the answers and that this kind of patronizing approach does not work in all venues, as it found over the past year with its censorship efforts. Today the very news outlets that have been attacking it over claims of censorship have pivoted instead to laud and support these new efforts, but the moment Facebook begins to flag their own articles as “fake news” with no explanation of why and no appeals process to correct errors, will those same news outlets be cheering as loudly?

Putting this all together, as I have argued repeatedly over the last two weeks, the answer to false and misleading news is not to assemble a small band of (almost exclusively English-speaking American) elites and hand them the ultimate power to determine what is “true” and what is “false” for 1.7 billion people across the world. The answer is instead to shine a bright light of transparency onto the world of journalism, offering readers more insight into how a journalist came to his or her conclusions, the specific verification steps taken to confirm a story, and whether other journalists have come to alternate conclusions on the topic, while at the same time focusing on increasing information literacy to create an online citizenry capable of making educated and informed decisions based on all available facts.

At the end of the day, the Internet itself was created so that no single group of elites had total control over the world’s information. But as it has evolved, the elites have reasserted themselves, centralizing the web back into perhaps the world’s greatest walled garden and exerting perhaps the greatest power and control they have ever had over the information we access and consume. Is the future of the Internet to be one of freedom of expression, where individuals can decide for themselves what to consume and believe, or will it be one that would make George Orwell proud?