Fighting Virality On and Offline: Regulating Health Misinformation on Social Media

Action is needed to stop the spread of health misinformation. From the campaign against tobacco companies to debunking the myths about vaccines and autism, pushing back against health misinformation has been a perennial battle in public health. Misinformation comes in many forms: deceptive marketing, inaccurate news reporting, untrustworthy websites, or word of mouth among friends and family.  

Significant inroads have been made against many misinformation campaigns. The fight against smoking is the most well-known example to date. Government regulation has proven to be an effective tool in steering public health agendas—anti-smoking laws, taxes on sugary drinks, and government-sponsored vaccine drives are just a few examples. However, the spread of misinformation has changed drastically over the last ten years. Public health officials have struggled to contain the epidemic of misinformation spreading across the most interconnected communication platform that exists—social media. 

Social Media and the Dangers of Misinformation 

The COVID-19 pandemic has highlighted just how destructive unchecked misinformation on the internet can be. Spread largely through social media channels such as Facebook and Twitter, misleading or downright incorrect information about the pandemic has led directly to Americans getting sick and spreading the virus further. The result is an epidemic of the unvaccinated: individuals refusing to social distance or get vaccinated have prolonged the pandemic and allowed variants such as Delta and Omicron to spread. Much of this can be traced back to the misinformation people read on social media.  

Social media is largely unregulated. The main law that governs internet publishing, Section 230 of the Communications Decency Act, states that internet platforms cannot be held responsible for what their users publish on their sites. For example, Facebook cannot be sued if users post hate speech or hateful imagery on their profiles. The law is intentionally open-ended, and for the most part it has protected major internet platforms from legal repercussions when misinformation spreads on their sites.  

Sites like Facebook claim to be neutral communication platforms that merely exist to host users—a Roman forum for the 21st century. But this claim rings hollow when the algorithms built into these sites actively push the most harmful, most inflammatory misinformation to as many viewers as possible. Because these types of posts are most likely to generate engagement—clicks, views, and shares—social media platforms are incentivized to promote them. Studies show that social media algorithms can steer users toward radicalized misinformation within days of creating an account. When a platform itself promotes these sources of misinformation, it bears responsibility for the spread. These platforms have directly contributed to public health crises. This needs to be addressed.  

We Need a Call to Action 

Censorship is not the goal here—accountability is. Senator Amy Klobuchar (D-MN) has introduced legislation to combat the spread of health misinformation. Her bill would remove Section 230’s protections from platforms in the case of algorithmically promoted misinformation. Sites like Facebook and Twitter would still be protected in the case of misinformation that simply appears on a chronological feed as opposed to being promoted by the platform.  

No matter what action government takes, social media will remain a major player in our information ecosystem. The COVID-19 misinformation pandemic will not be the last of its kind. Many public health workers and health care professionals have taken to the internet to provide sources of credible health information, but pushing back individually is not enough. Public health officials must act now to develop policies to regulate misinformation on a larger scale and hold these platforms accountable for their role in public health crises. 
