Occam’s razor needs to be applied more broadly in the debate over free speech online. If you think your political position is being suppressed by Facebook or Google or Twitter, it probably isn’t. More likely, your post was taken down because you violated a rule, and you then interpreted the takedown as a nefarious attempt at censorship.
While there are some notable exceptions, large platforms have a much stronger incentive to keep content flowing than to suppress it. More content leads to more interaction, which, in turn, means more advertising revenue.
The heightened risk of regulation also pushes platforms to limit their content moderation. On three separate occasions this year, Congress hauled tech leaders into hearings to account for content removal on their platforms. Legislators on both sides of the aisle proposed reform packages that would fundamentally alter Section 230. Meanwhile, Federal Trade Commissioner Christine Wilson wondered aloud whether coordination among Facebook, Google and Twitter to take down content might violate antitrust laws.
Yet the pressures faced by social media companies don’t always align with openness. If users are shown more spam and unwanted content than they will tolerate, they tune out. Thus begins a dance where openness leads and content moderation follows.
The real problem comes in interpreting those moments when content is taken down. People have a terrible habit of trying to make sense of the world. We see patterns where none may exist, a tendency psychologists call pareidolia or apophenia. We also ascribe intentions and internal motivations to simple inanimate objects. Similarly, every action by a platform is read as meaningful even when it is not.
Just as important, there isn’t a single Facebook, Twitter or Google algorithm so much as there are many algorithms all interacting with one another. With billions of people using social media, a one-in-a-million slip-up will occur many times a day. Every experience is personalized, meaning the same simple rule applied to different situations will yield disparate results.
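A back-of-the-envelope sketch makes the scale problem concrete. The figures below are illustrative assumptions for the sake of the arithmetic, not actual platform data:

```python
# Illustrative only: assumed daily post volume for a large platform,
# paired with the "one-in-a-million slip-up" rate from the text.
posts_per_day = 3_000_000_000
error_rate = 1 / 1_000_000

# Even a vanishingly small error rate yields thousands of wrongful
# takedowns (or wrongful approvals) every single day at this scale.
expected_errors = posts_per_day * error_rate
print(f"Expected moderation mistakes per day: {expected_errors:,.0f}")
# -> Expected moderation mistakes per day: 3,000
```

Each of those thousands of daily mistakes lands on a real person, who then has to decide whether it was an accident or a conspiracy.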
Take, for example, the pro-life Susan B. Anthony List. In late 2018, the group ran an ad showing a baby surviving a premature birth. Because the ad depicted a medical procedure and included tubing, Facebook pulled it. It is not surprising that a simple rule applied to a hotly contested issue sparked claims of bias.
Indeed, while some of the strongest criticisms have come from the right, the left has heaped scorn on social media sites as well. Bhaskar Sunkara, the publisher of the leftist Jacobin Magazine, accused Facebook of blocking videos in October and took to Twitter to ask whether the terms “Marxist,” “Bolivia,” or “Biden” triggered the takedown. The World Socialist Web Site has accused Google of outright manipulation for years, although its claims have been exceptionally difficult to substantiate. Earlier this year, it was reported that Mark Zuckerberg signed off on an algorithm change that throttled traffic to progressive news sites.
The interaction of rules applied in specific contexts, combined with these psychological biases, makes it exceedingly likely that each side will cry foul and claim social media sites are biased against it, regardless of how dedicated a platform is to principles of free speech.
We can see this happening in real time with the social media site Parler. The site has long professed a dedication to the ideals of openness, yet it was recently accused of censorship for taking down a hashtag that users were spamming. In other words, even though the site has been vocal in courting conservatives to leave Twitter and Facebook, it too stands accused of bias against conservatives.
Around this time last year, Facebook CEO Mark Zuckerberg stood before a packed crowd in Georgetown and extolled the virtues of free speech. Although his company may fail to live up to everyone’s standard, Zuckerberg is at least committed to free speech as an ideal.
When the new Congress convenes in January, leaders will again consider laws to ensure that ideals of openness and freedom of expression are applied online. The problem is that even if platforms are dedicated to such ideals, and they seem to be, outside observers will still find ways to conclude the platforms are engaged in clear bias.
The incentives for openness are already in place, and as the threat of regulation grows, social media sites will feel even more pressure to keep things open. Sadly, content moderation will always be read as an ideological choice rather than as what it actually is: the output of a bureaucratized institutional process.