Defeating Disinformation: Advancing Inclusive Growth and Democracy through Global Digital Platforms
Event Concluded (December 2022)
About the Event
With hateful and harmful content ramping up around elections and conflicts across the globe, disinformation is thriving. Global platforms allow users to generate massive volumes of content, serving as powerful conduits of commerce, communication, and community, and offering inclusive access to information and economic resources. At the same time, these platforms present novel challenges when users generate content that is false, harmful, or both, and steer platform communities toward division rather than inclusion. Left unchecked by the platforms themselves, regulators, or civil society, this gives rise to many risks: eroding societal institutions, threatening democratic and civic processes, undermining public health, endangering disadvantaged minorities and minors, and contributing to economic disparities.
The need for cross-sector action to ensure that platforms and regulation work for society, not against it, could not be more urgent. This December, we examined the implications and impact online platforms have on the global community, what we can do to improve them, and how to ensure that user-generated content contributes to civic discourse, informed societies, and inclusive growth.
The event created a unique forum for thought leaders and experts to explore and weigh in on critical challenges affecting the digital public sphere, design technical and regulatory solutions that address these issues, and answer questions including: Can a global approach be developed to address these tensions while maintaining or even enhancing the inclusive social contribution of platforms? Can we avoid the unintended consequences of an internationally fragmented approach to content moderation requirements? Are the world’s most vulnerable populations given adequate protections, even as the platforms prioritize the most powerful governments? How can we ensure that these platforms work for everyone, everywhere, and what is the role that each sector can and should play to get us there?
Defeating Disinformation: A Research and Solutions Series
Background and Approach
Disinformation is pervasive. There is hardly anyone on the internet or social media who hasn’t encountered disinformation. From political disinformation amidst crucial elections worldwide to medical disinformation during the COVID-19 pandemic, harmful content has flourished on social media platforms and resulted in an information disorder impacting civic processes, public health, and, most importantly, access to accurate information. Despite mitigation tactics such as fact-checking and content removal, disinformation continues to thrive on global platforms with little to no accountability.
These platforms serve as powerful conduits of communication, commerce, and correspondence. Their power to create and dominate the local, national, and global news cycle and to influence political, business, and consumer behavior is enormous. Their responsibility for exercising such power—for the content on their platforms—is statutorily limited by national laws, such as Section 230 of the Communications Decency Act in the United States. Efforts by civil society to demand and guide appropriate content moderation and to avoid private abuse of this power tend to be in tension with a widespread concern in liberal states to avoid excessive government regulation, especially of speech, and a separate parallel concern about not-so-liberal states using regulatory power to control or manipulate narratives for political ends. In response to such tensions, diverse and sometimes contradictory national rules threaten to splinter platforms and reduce their utility to affluent and developing countries alike.
This raises several questions: Can a global approach be developed to address these tensions while maintaining or enhancing the social contribution of platforms? Can we avoid the unintended consequences of internationally fragmented approaches to content moderation requirements? Are the world’s most vulnerable populations given adequate protections, even as the platforms prioritize the most powerful governments?
With support from Omidyar Network, we at Digital Planet and the Center for International Law and Governance at The Fletcher School at Tufts embarked on a journey to answer these questions by leveraging our interdisciplinary approach to research and building a network of academics and thinkers from diverse disciplines. We began by framing the questions around content moderation and international cooperation and coordinating expert analysis to examine the dynamics of these challenges and their possible resolutions. First, we collected ‘comparative briefs’ that outline content moderation regulations and proposed legislation across leading tech jurisdictions, such as the US, EU, China, India, and Brazil. Second, we collected ‘generative briefs’ that capture the ideas and proposals offered by a panel of multi-disciplinary subject matter experts. Together, these briefs provide policymakers, lawmakers, industry decision-makers, and civil society with actionable insights. Lastly, we drew on the comparative and generative briefs to produce ‘synthesis briefs’ using analytical approaches from disciplines such as microeconomics, international law, international politics, and technology policy.
To share our research findings and spark a public discussion and debate of the many ideas generated and critical issues raised so far, we convened our authors as well as stakeholders in the broader information ecosystem, including private industry, civil society, and government actors, to further investigate our learnings in the form of panel discussions and a solutions workshop.
The result was an ‘unconference,’ which took place on December 2, 2022, and brought collaborators and leading experts on this thematic area together to explore systemic solutions to defeat the beast called ‘disinformation’ and its accomplice ‘hateful content.’ The discussions spanned three panels, in which the panelists examined: (i) the international implications and impact online platforms have on the global community, (ii) potential policy responses – domestic, international, and multilateral – to reform intermediary liability regimes and prevent the spread of disinformation, and (iii) ways to ensure user-generated content positively contributes to civic discourse, informed societies, and inclusive growth.
The UnConference Conversations: A Summary
Our first panel focused on what we can learn from a comparative analysis of content moderation laws/policies across different yet critical tech jurisdictions, including the US, EU, India, Brazil, and China. It also delved into the degree of convergence and overlap in how countries establish global norms for content moderation and platform responsibility. Interestingly, each country has a different approach to addressing the problem.
While some areas, like child sexual abuse material, hold immense potential for convergence across regions, areas such as political disinformation and hate speech lack the same potential for convergence or cooperation, given the diverse sociopolitical and cultural environments across regions and differences in the legislative will to regulate online speech. Regions such as Asia, the United States, and Latin America have very different standards for what constitutes hate speech and for the point at which governments should intervene in, or impinge on, their citizens’ right to free speech.
Two levels of protection are available to social media platforms in the United States: freedom of speech under the First Amendment to the U.S. Constitution and Section 230 of the CDA. Section 230, among other things, states that websites and other online services are not liable for third-party content (also known as user-generated content). This is in addition to the First Amendment defense. It acts as a “procedural fast lane” to quickly and cheaply resolve litigation related to third-party content. This immunity encourages internet services to moderate content without fearing high litigation costs. However, it does not create new incentives for internet services to address lawful but offensive content, as the First Amendment covers those issues. Instead, Section 230’s immunity provides additional legal protection for internet services to address harmful speech in a way that the government cannot under the First Amendment.
However, in light of litigation pending before the U.S. Supreme Court on Section 230 and content moderation, the jurisprudence in this area is likely to change. The US now has a chance to be at the frontier of innovation in policymaking; whether the Supreme Court makes that possible remains to be seen.
· Strict content liability: Chinese platforms are explicitly required by government regulators to take primary responsibility for content governance, as they must monitor, moderate, and censor content according to the government’s requirements.
· Chinese platforms also enjoy conditional immunity for damage caused by user-generated content as long as the platform takes down such content.
· Public opinion management plays a crucial role in formulating content regulation policies as such content has the power to influence public opinion or the capacity for social mobilization.
· Regarding enforcement, China’s civil and criminal laws apply to social media platforms. These laws are specifically applied in cases such as pornography, and executives of these companies are frequently summoned by China’s online content regulators and ordered to deal with such grossly problematic content.
The critical elements of the Digital Services Act that feed into a global context:
· It is built on a risk-based / asymmetric approach, which imposes due diligence obligations that increase with online intermediaries’ size, reach, and social relevance.
· It promotes transparency and due process for content moderation decisions through most of its rules. However, it does not define what content is illegal, leaving that mainly to the discretion of member states; it focuses primarily on how platforms should deal with harmful content.
· It sets out specific rules for large online platforms, a regulatory approach described as systemic regulation. It is concerned not with individual tweets or comments but with how platforms deal with inappropriate content at a systemic level. As such, it calls for risk mitigation, external auditing, and the appointment of compliance officers, and it approaches content moderation as the moderation of speech at scale.
India now has specific rules for large platforms, which apply to ‘Significant Social Media Intermediaries’ with over 5 million users that call for more transparency in reporting.
Like the United States, India’s Constitution protects freedom of speech, and platforms enjoy conditional immunity under the Information Technology Act, 2000, and the intermediary guidelines notified thereunder. The Indian government, however, is increasingly attempting to regulate platform speech, and it is doing so through intermediary liability law. This becomes tricky because immunity is determined by courts on a case-by-case basis.
So, while the government’s intention is benign, the challenge will be balancing the users’ rights, namely, their right to privacy and free speech and expression.
In Brazil, platforms are not liable for user-generated content unless they specifically fail to comply with a court order. However, as of now, Brazil’s content moderation laws do not encompass criminal and electoral law.
There is also a jurisdictional question of whether Brazil’s legislation applies to all social media platforms or only portions concerning data protection.
In a global / BRICS context, governments should consider automated tools for content moderation. They can also borrow a page from Brazil’s Fake News Bill, under which accounts of public interest (the State, members of congress, and others the public has a right to hear from on a social media platform) would have to keep the public interest in mind. Some panelists also suggested that countries can draw inspiration from the Digital Services Act, which sets out a governing mechanism for the procedural aspects of content moderation instead of defining what qualifies as misinformation or hate speech. The risk of frameworks like the DSA being used to impinge on people’s freedom of speech will always exist. However, reaching international consensus on content assessment and what controls to impose is nearly impossible, as there is too much divergence on the substantive issues.
Our next panel took a generative approach to tackling mis- and disinformation by looking at other areas of international cooperation. The panel explored the mechanisms used in banking and international trade as analogies for regulating disinformation globally. It looked at frameworks like the Basel Accords and the possibility of using their policy-formation process to design similar frameworks for content moderation and platform responsibility. One suggestion was a coalition of multinationals, or an alliance of states that would require their platforms to participate, coming together to develop a framework. The trigger that inspired the Basel Accords was also discussed, raising a critical question: can we identify a ‘Lehman moment’ for content moderation as well? Does cyberbullying make the cut? Are we experiencing that moment with Elon Musk buying Twitter? Could this be the right momentum for international cooperation?
However, this also raised a lot of essential questions:
1. The Policy Argument: How will the issue of ‘free for all information’ be addressed?
2. Traceability Problem: How do we trace the disinformation to its source? With monetary problems, we trace the money and tax the money. How would tracing in the disinformation sphere work?
3. Moderation Problem: As established by Panels 1 and 2, this is a semantic problem, and how it may be penalized may change from country to country.
4. Procedural Problem: Do we force multinationals to report all profits and establish minimum content moderation standards?
5. International Coordination Problem: This would require coexisting content moderation and management standards between parties.
6. Rigidity Problem: How rigid can content moderation regulations be for them to be universally accepted by member states?
The ‘Misinformation Paradox’ refers to the phenomenon of regulators in different parts of the world forcing companies to moderate content on their platforms. Platforms respond to these regulations by allocating resources to those jurisdictions. Given that platforms have limited resources to allocate across regions, the two deciding factors for companies could be (i) where the regulations pinch the hardest and (ii) where commercial risk is the highest. Content moderation resources are thus directed toward countries where regulations are strictest and commercial opportunities are greatest, while parts of the world that have neither are starved of content moderation resources.
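The allocation dynamic behind the paradox can be sketched in a few lines. This is purely illustrative: the jurisdiction names, scores, and budget figure below are hypothetical assumptions, not data from the research. A platform splitting a fixed moderation budget in proportion to regulatory pressure plus commercial risk leaves low-pressure, low-revenue markets with almost nothing:

```python
# Illustrative sketch (hypothetical numbers): a platform with a fixed
# moderation budget allocates it in proportion to a score combining
# regulatory pressure and commercial risk in each jurisdiction.

# scores in [0, 1] are invented for illustration only
jurisdictions = {
    "EU":            {"regulatory_pressure": 0.9, "commercial_risk": 0.8},
    "United States": {"regulatory_pressure": 0.6, "commercial_risk": 0.9},
    "India":         {"regulatory_pressure": 0.7, "commercial_risk": 0.6},
    "Small market":  {"regulatory_pressure": 0.1, "commercial_risk": 0.1},
}

BUDGET = 1000  # moderator-hours per week, arbitrary unit

def allocate(markets: dict, budget: float) -> dict:
    """Split the budget in proportion to combined pressure + risk."""
    scores = {name: m["regulatory_pressure"] + m["commercial_risk"]
              for name, m in markets.items()}
    total = sum(scores.values())
    return {name: round(budget * s / total, 1) for name, s in scores.items()}

allocation = allocate(jurisdictions, BUDGET)
for name, hours in allocation.items():
    print(f"{name:>14}: {hours} hours")
```

Under these assumed scores, the "Small market" receives roughly 4% of the budget regardless of how much harmful content circulates there, which is the starvation effect the panel described.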
Our final panel, envisaged as a solutions workshop, dealt with how to build upon international aspects of this problem, including questions about how governments might work together to create global standards and how private sector actors might respond to this Misinformation Paradox to bring about more transparency and accountability, and effectively allocate their resources.
The panel discussed the importance of multi-stakeholder cooperation in dealing with the Misinformation Paradox and of establishing a fact-checking ecosystem for misinformation, built on an open-source system that designated experts in different regions can access, develop, and improve. The importance of fact-checkers was emphasized, especially considering how disinformation was used by political parties in the recent Philippine and Brazilian elections. While private players and governments will implement these initiatives, they must also be complemented by oversight boards that ensure users understand the rules and laws enforceable for content moderation. Despite some inspirational ideas, a chicken-and-egg dilemma remained, which the panel flagged: Do we try to prevent disinformation at its source, or block it once it is posted? Do we have enough resources for either course of action? How do we deal with cultures that perceive misinformation not as a threat but merely as entertainment? And against this backdrop, how do we uphold the integrity of the democratic system?
Key Takeaways and Avenues for Future Investigation
Our discussions raise important questions that stakeholders ought to consider going forward when thinking about solutions to combat disinformation online and exact accountability from online platforms. While the challenge of defeating disinformation is not new and has been seen in print and broadcast media as well, the unique architecture of the networked Internet, together with design features of online platforms such as algorithmic news feed curation, has not only empowered users to propagate harmful content and polarize one another but also made it difficult to prevent the proliferation of disinformation online, given the virality of such content. Moreover, while governments hold individuals accountable for online speech, more must be done to hold platforms responsible for facilitating the spread of such information. Countries that lack the institutional capacity, resources, and expertise, and in some cases the legislative will, to call on online platforms to be more transparent continue to suffer from the information disorder plaguing the new digital public sphere.

Several approaches to mitigating the risks arising from disinformation and harmful content online are being adopted, including government regulation, self-regulatory mechanisms, and co-regulatory efforts. Yet disinformation continues to thrive, with the most vulnerable populations and marginalized communities lacking access to critical, accurate information. This is stark in the Global South, where many countries have yet to establish speech norms and enforcement measures to counter harmful content online. Given the diverse sociopolitical and cultural contexts in the majority world, Western speech norms are likely to cause more harm than good when it comes to content moderation.
Therefore, a localized approach that accounts for the perspectives of all stakeholders, as well as the unique characteristics of the community, is likely a more effective solution in mitigating the risks arising from disinformation and harmful content online. Towards this approach, international cooperation is necessary to ensure that the Brussels Effect does not infringe on users’ free speech rights in the majority world.
Q: First Amendment vs. Section 230 – Will Section 230(c)(1) revisions create new legal incentives for the services to redress lawful-but-awful content?
Eric Goldman: “…Section 230(c)(1) acts like a “procedural fast lane” to resolve litigation more quickly and cheaply than would be possible with a Constitutional defense. … Section 230’s immunity provides additional legal comfort to Internet services to do the socially valuable work of cleaning up harmful speech—work that the First Amendment would not permit the government to do itself.”
Q. What is the starting point for framing content moderation laws? Can free speech provide that initial jump-off point?
Joel Trachtman: “Elon Musk, in connection with his bid to purchase Twitter, said ‘by free speech, I simply mean that which matches the law. I am against censorship that goes far beyond the law.’ … He failed to recognize that different countries in which Twitter operates have different legal formulations of free speech. Multiple countries’ laws can apply to the same platform or the same platform transaction. Perhaps transactions divided in conduct or effect through platform-based dispersal can escape any regulation at all.”
Q. How does China choose to enforce its content governance framework?
Jufang Wang: “China adopts a mixed method in enforcing regulations regarding platform responsibility. While civil and criminal laws are applicable, administrative measures are China’s main method of pressuring platforms to fulfill the “primary responsibilities” for online content governance.”
Q. Is exercising regulatory dominance essential to achieve a globally accepted regulation against disinformation?
Federico Lupo-Pasini: “…global deal on the regulation of harmful speech could probably be pushed if enough pressure was coming from US regulators and the European Commission, as they oversee the two biggest markets for platforms. Both jurisdictions, in different ways, play a prominent role in regulating the digital economy.”
Q. What is the great paradox that hovers around misinformation?
Bhaskar Chakravorti: “While we can recognize the social ills of harmful content, its moderation is hard. The reason comes down to a single critical factor: the misalignment of incentives. This misalignment creates a “misinformation paradox,” where attempts to regulate harmful content could give rise to even more of it.”
Q. Is it truly possible to achieve a global consensus on content regulation?
Daniel Drezner: “Countries have wildly divergent preferences as to which Internet content should be regulated… For this issue, there is no bargaining core among governments. The predicted outcome would be the unilateral use of national regulations to bar undesired content and the creation of sham standards at the global level.”
Q. Is international coordination genuinely impossible?
Josephine Wolff: “While enforcing different rules and policy measures for different national versions of a platform is certainly not the same as reaching international consensus on these measures, it is no small thing that so many countries have accepted this approach as a satisfactory implementation of their domestic laws. …. This widespread acceptance of a highly imperfect system of implementation and enforcement is, itself, an impressive feat of international coordination.”
Q. What is Brazil’s stance on the scope and extent of users’ rights against platforms?
Artur Monteiro: “The scope and extent of users’ rights against platforms are not defined; one possibility would be to view those rights as equivalent to those individuals hold against the state. This was arguably the view behind the provisional legislation issued by President Bolsonaro. Another possibility would be to reconcile users’ rights with platforms’ by admitting that platforms enact (and enforce) their content policies and granting users due process rights, with judicial review of the justification offered by platforms for content moderation decisions. A third possibility would look beyond fairness in content moderation and constrain the latitude of platforms in establishing content policies that restrict protected speech. Court cases where users seek to have their content or accounts reinstated — and prevail on their claims — are common, yet case law has so far not articulated a theory of users’ free speech rights against platforms, a question that is ordinarily left unaddressed.”
Graduate Analysts Arpitha Desai and Mitakshi Lakhani produced this summary of the research and the unconference under the guidance of Bhaskar Chakravorti and Ravi Shankar Chaturvedi.
Conference Papers (In Progress)
Professor of European Private and Business Law, University of Osnabrück (Germany), and Visiting Fellow at Yale ISP
Professor of Law, Santa Clara University School of Law, and Co-director of the High Tech Law Institute
Recent global estimates suggest that school closures and unequal access to technology-based educational inputs used for remote learning will aggravate the existing equity gaps in education. ASER Digital Check 2020 captures information on various dimensions such as children’s sex, their school type, and their parents’ education level to explore this widening equity gap in education in rural India.
In collaboration with ASER Centre, Pratham India
Data governance has become an essential, albeit challenging, task for policymakers. They must develop new visions, strategies, structures, policies, and processes. Governments that can accommodate a flexible approach to governing different types of data use and re-use in a responsive, accountable, ethical, and anticipatory manner are likely to build and maintain trust.
In collaboration with the Digital Trade and Data Governance Hub at The George Washington University
How can real-time social analytics provide a tool for inclusive policymaking? This report uses a dataset of over 873 million online interactions drawn from more than one hundred social and mainstream media channels to analyze public sentiment and emotion in response to the pandemic management of eight governments between January and July 2020.
In collaboration with Equiception