At a recent hearing on disinformation, Rep. Frank Pallone (D-N.J.) slammed tech CEOs for their role in pushing misinformation: “You’re not passive bystanders. When you spread misinformation, actively promoting and amplifying it, you do it because you make more money.”
The response from Facebook’s Mark Zuckerberg hardly allayed fears: “While it may be true that people might be more likely to click on it in the short-term, it’s not good for our business or our product or our community for this content to be there.”
Facebook and other platforms are driven by user engagement. The website experience, content, and interactions are all optimized to increase engagement, which translates into advertising revenue. Some research suggests that misinformation boosts short-term engagement with platforms, so one might expect platforms to benefit from its spread.
But drawing that conclusion from the research is problematic. First, we can’t compare platforms with and without misinformation in the wild. Second, users learn and change their preferences over time, and platforms aren’t impervious to bad press: some 15 million U.S. users left Facebook after the Cambridge Analytica data scandal broke.
With these concerns in mind, our economics lab set up an experimental game to test some key questions. The game was structured so that we could cleanly determine how voters acquire and share information on a platform when partisanship is in play. Crucially, the experimental design also allowed us to sidestep thorny political controversies, such as how to define misinformation or whether particular policies are correct.
The game culminates in a real cash payout, so participants have skin in the game. Importantly, each participant was randomly assigned to one of three conditions: in some sessions, users had no access to a platform and the information it contained; in others, only accurate information could be shared; and in the rest, people were permitted to post misinformation.
The results undercut the notion that misinformation is good for the bottom line. User engagement on social media was significantly lower when misinformation was permitted: every metric that matters took a dive. Users connected with fewer people and published fewer posts. Much as Zuckerberg suggested, social media platforms focused on long-term user engagement have a clear incentive to purge misinformation from their platforms.
But the results aren’t entirely surprising. Back in 2015, researchers at Google quietly announced that the company had cut in half the number of mobile ads users saw. Motivated by a focus on long-term user experience, the company’s studies found that users responded positively when ads were higher in quality but far fewer in number. The shift toward the long term changed priorities for the company: “We estimated that the positive user response would be so great that the long-term revenue change would be a net positive.”
Our results underscore the urgency of effectively curbing misinformation on social media platforms. If misinformation flourishes, everyone is worse off: exposure to untrustworthy content leads to outcomes more detrimental than having no social media at all. At the same time, social media can be a force for good, provided misinformation is not permitted.
The world we all inhabit, the world outside the lab, lies somewhere in the gray zone between those two states of misinformation and truth. On the whole, however, the incentives of companies running social media platforms are aligned with the broader social goal of eliminating misinformation. Any regulatory approach should recognize this and treat these platforms as willing partners whose own growth depends on solving the problem.