Dead Reckoning: Navigating Content Moderation After “Fake News”

Date Posted: May 29, 2019 Last Modified: May 29, 2019
Photo: TheDigitalArtist, Pixabay

After the 2016 US Presidential election, the phrase 'fake news' became a daily fixture in US political discourse. It has been appropriated by political actors to critique the "mainstream media" and taken up by a wide range of policymakers, journalists, and scholars to refer to problematic content, such as information campaigns shared on social media. This paper clarifies how the term 'fake news' is used and examines the solutions proposed by social media platforms, news media, civil society organisations, and government.

Highlights:
  • The study finds the term 'fake news' being used both to extend critiques of mainstream media and to describe the growing spread of propaganda and problematic content online.
  • Existing definitions seek to define fake news by the intent of the producers and sharers of news, and to break it down into clear, identifiable features that machines and human moderators can use to detect false content.
  • Strategies for curbing the spread of 'fake news' include trust and verification processes, disrupting economic incentives, banning accounts, and other regulatory practices.
  • Fake news content producers quickly adapt to the new standards set by platforms, using tactics such as adding satire or parody disclaimers to bypass content regulations.
  • Moderating fake news requires a better contextual understanding of both the article and its source. The automated technologies and artificial intelligence currently relied upon are not advanced enough to address this issue without human-led intervention.
  • The expectation that third-party organisations and media organisations will close the gap between platforms and media literacy falls short, because they are currently under-resourced to meet this challenge.