Fake news poses a major challenge to the cornerstone of democratic values. The emergence of social media as a key source of news has created a new ecosystem for the spread of misinformation. Because so many people consume news on social media, it is important to intervene so that these platforms show relatively less untrustworthy content. This research investigates one potential approach to curbing the spread of misinformation: having social media platform algorithms preferentially display content from sources that users rate as trustworthy. The study ran two preregistered experiments in which individuals rated their familiarity with, and trust in, 60 news sources from three categories: mainstream media outlets, hyper-partisan websites, and fake news websites.
Highlights:
- Although there were substantial partisan differences, people across the political spectrum trusted mainstream sources more than hyper-partisan or fake news sources. Despite the difference being larger for Democrats than for Republicans, every mainstream media outlet was rated as more trustworthy than the hyper-partisan and fake news sites.
- At the level of individual headlines, people who were more reflective were better at discerning between mainstream and hyper-partisan or fake news.
- Politically balanced layperson ratings were strongly correlated with the ratings provided by professional fact-checkers. Notably, excluding the ratings of participants who were unfamiliar with a given news source dramatically reduced the effectiveness of the crowd.
- The study's findings indicate that algorithms which promote content from trustworthy sources may be a promising approach to fighting misinformation on social media.