Fletcher Reads the Newspaper | Can clamping down on digital disinformation lead to even more digital disinformation?

By Aanchal Manuja

On October 6, Fletcher’s Institute for Business in the Global Context and the Center for International Law and Governance co-hosted a new edition of “Fletcher Reads the Newspaper.” Honoring Fletcher’s tradition of interdisciplinary study, the event featured a lively dialogue between students and professors from multiple disciplines on the question: “Can clamping down on digital disinformation lead to even more digital disinformation?”

Introducing the event, Professor Bhaskar Chakravorti called it part of the “magic of Fletcher.” Since the first edition of Fletcher Reads the Newspaper, which focused on the 2013 Rana Plaza disaster in Bangladesh, faculty and alumni have gathered periodically to analyze news topics through their areas of expertise. The October talk tackled government regulation of social media and drew on Fletcher research sponsored by the Omidyar Network and the Mastercard Center for Inclusive Growth.

Professor Joel Trachtman (International Law) opened the discussion by assessing Section 230 of the U.S. Communications Decency Act, which shields online platforms from liability for third-party content. Reminding the audience of Section 230’s historical context, Trachtman described how questions of legal responsibility have led to fractious disputes over communications law. Because the section says nothing about diverging internet laws among states, the landscape remains beset by legal ambiguity.

Complicating the issue further, Trachtman said, is platforms’ reliance on algorithmic moderation. These algorithms shoulder the responsibility for identifying harmful content, while governments, hampered by discrepancies in jurisdiction and approach, are unable to regulate effectively. Trachtman argued that primary law should regain responsibility in this area and, speaking from a commercial perspective, explained how regulatory cooperation could benefit trade in the digital sector.

Professor Daniel Drezner (International Politics), however, challenged the likelihood of such coordination. Drezner argued that a country’s market size determines its power to set standards. While acknowledging various international frameworks for regulatory cooperation, he argued that in an equilibrium outcome, states will neither coordinate nor adjust: the cost of adjusting cultural norms, social mores, and political nuances outweighs any economic benefit that would accrue.

Global disagreement on censorship and regulation has grown starker with the entry of countries like China, which hold very different ideas of what counts as harmful content and how to regulate it. Other factors hindering cooperation include democratic recession, increasing calls for data protection, populism and subnational division, and the rise of monopolistic big tech companies with a specialized knowledge advantage over government regulators.

Drezner suggested that transatlantic community standards may be feasible, though some countries in the Atlantic community would likely still diverge. More importantly, he argued, analysts must look beyond platform culpability: consumers digest and multiply disinformation, and platforms are merely a means to that end. Secondary laws cannot fix primary problems.

Professor Josephine Wolff (Cybersecurity Policy), while agreeing on the lack of global consensus, shed light on nuances within disinformation itself. Platforms pursue their own content moderation guidelines separate from legal norms, simply absorbing national laws into their own rulebooks. While observing an increase in consensus across nations, she also pointed to areas of non-convergence, including rules for video platforms such as TikTok and YouTube, whose inconsistent and fluid content makes them difficult to moderate.

As seen during the 2016 U.S. elections, platforms often must navigate political landmines when arbitrating disinformation. At the time, American speech norms led platforms to counter disinformation not by removing it but by promoting factual content. A more aggressive shift, Wolff noted, came during the pandemic, when the stakes of disinformation appeared even higher. Science was easier for moderators to take a stand on than politics, leading to wider takedowns of misleading content.

Wolff pointed out that China and Russia have their own state social media platforms, making their adherence to common standards largely irrelevant. Differing national versions of Facebook, she argued, are not the worst-case scenario. Agreement among nations on censorship and hate speech is far-fetched, and absorbing national laws is a quagmire for platforms. On the question of secondary versus primary responsibility, Wolff remarked that policymakers pursue platforms simply because they are the easiest target in a complex policy landscape.

Professor Bhaskar Chakravorti, while reminding the room of the U.S. Supreme Court cases currently weighing Section 230, emphasized Facebook’s commercial interests in the U.S. market. The platform earns more per user in the U.S. than in any other nation. Thus, 87% of its content moderation resources are devoted to the U.S., despite 90% of Facebook’s users being located outside the country, and misinformation in languages other than English is not treated as a priority. With a limited budget, the U.S. focus comes at the expense of other countries, despite their greater misinformation risks. Chakravorti described this phenomenon as “the misinformation paradox,” painting a pessimistic picture for progress in combating harmful speech globally.

Chakravorti criticized the chaos of regulatory fragmentation, while noting that the pandemic opened the door to cooperation on standards. He also emphasized the importance of investing in user responsibility, advocating for digital literacy courses in school curricula.

After showing a satirical video mocking common disinformation practices, Chakravorti opened the floor to student questions. Several students argued for updating laws to match contemporary technology and for more transparency in content moderation decisions.

Student attendees left the event anticipating part two: an “Unconference on Defeating Disinformation: Towards a Globally Inclusive Approach to Regulation and Digital Platform Responsibility,” held on December 2.