
Study reveals key factors that affect people’s decisions to curb harmful misinformation

Online content moderation is a moral minefield, especially when freedom of expression clashes with preventing harm caused by misinformation. A study by a team of researchers from the Max Planck Institute for Human Development, the University of Exeter, Vrije Universiteit Amsterdam, and the University of Bristol examined how the public would deal with such moral dilemmas. They found that the majority of respondents would take action to control the spread of misinformation, particularly if it was harmful and shared repeatedly. The results of the study can be used to inform consistent and transparent rules for content moderation that the general public accepts as legitimate.

 

The issue of content moderation on social media platforms came into sharp focus in 2021, when major platforms such as Facebook and Twitter suspended the accounts of then U.S. President Donald Trump. Debates continued as platforms confronted dangerous misinformation about COVID-19 and vaccines, and after Elon Musk singlehandedly overturned Twitter’s COVID-19 misinformation policy and reinstated previously suspended accounts.

“So far, social media platforms have been the ones making key decisions on moderating misinformation, which effectively puts them in the position of arbiters of free speech. Moreover, discussions about online content moderation often run hot, but are largely uninformed by empirical evidence,” says lead author of the study Anastasia Kozyreva, Research Scientist at the Max Planck Institute for Human Development. “To deal adequately with conflicts between free speech and harmful misinformation, we need to know how people handle various forms of moral dilemmas when making decisions about content moderation,” adds Ralph Hertwig, Director at the Center for Adaptive Rationality of the Max Planck Institute for Human Development.

The majority of respondents chose to take some action to prevent the spread of harmful misinformation. On average, 66 percent of respondents said they would delete the offending posts, and 78 percent would take some action against the account. Not all misinformation was penalized equally: Climate change denial was acted on the least (58%), whereas Holocaust denial (71%) and election denial (69%) were acted on most often, closely followed by anti-vaccination content (66%).

“Our results show that so-called free-speech absolutists such as Elon Musk are out of touch with public opinion. People by and large recognize that there should be limits to free speech, namely, when it can cause harm, and that content removal or even deplatforming can be appropriate in extreme circumstances, such as Holocaust denial,” says co-author Stephan Lewandowsky, Chair in Cognitive Psychology at the University of Bristol.

The study also sheds light on the factors that affect people’s decisions regarding content moderation online. The topic, the severity of the consequences of the misinformation, and whether it was a repeat offense had the strongest impact on decisions to remove posts and suspend accounts. Characteristics of the account itself—the person behind the account, their partisanship, and number of followers—had little to no effect on respondents’ decisions.

Respondents were not more inclined to remove posts from an account with an opposing political stance, nor were they more likely to suspend accounts that did not match their political preferences. However, Republicans and Democrats tended to take different approaches to resolving the dilemma between protecting free speech and removing potentially harmful misinformation. Democrats preferred to prevent dangerous misinformation across all four scenarios, whereas Republicans preferred to protect free speech, imposing fewer restrictions.

“We hope our research can inform the design of transparent rules for content moderation of harmful misinformation. […]”, says co-author Professor Jason Reifler from the University of Exeter. “To design such rules, several factors and actors should be considered. People’s preferences are not the only benchmark for making important trade-offs on content moderation. However, ignoring their preferences altogether risks undermining the public’s trust in content moderation policies and regulations,” he adds.

Co-author and legal scholar Professor Mark Leiser of Vrije Universiteit Amsterdam added: “Effective and meaningful platform regulation requires clear and transparent rules for content moderation, and acceptance by the Internet community of those rules as legitimate constraints on the fundamental right to free expression. This important research goes a long way toward informing policymakers about what is acceptable and, more importantly, what is unacceptable user-generated content.”

 

Original Publication

Kozyreva, A., Herzog, S. M., Lewandowsky, S., Hertwig, R., Lorenz-Spreen, P., Leiser, M., & Reifler, J. (in press). Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences. Advance online publication. DOI

 

Max Planck Institute for Human Development

The Max Planck Institute for Human Development in Berlin was founded in 1963. It is an interdisciplinary research institution dedicated to the study of human development and education. The Institute belongs to the Max Planck Society for the Advancement of Science, one of the leading organizations for basic research in Europe.

 

Contact:

Dr Mark Leiser

VU-Amsterdam

ALTI Center

E-Mail: m.r.leiser@vu.nl

 

Max Planck Institute for Human Development

Public Relations Department

Maria Einhorn

Phone: +49 (0) 30 82406-211

E-Mail: einhorn@mpib-berlin.mpg.de

 

Nicole Siller

Phone: +49 (0) 30 82406-284

E-Mail: siller@mpib-berlin.mpg.de

 

Further Information:

www.mpib-berlin.mpg.de/en

www.mpg.de/en


