
Table 5. Policies adopted by leading companies to curb misinformation

Facebook: Investments in partnerships with journalists, academics, and independent fact-checkers to reduce the spread of misinformation [106]. Actions focus on: (1) removing accounts and content that violate the Community Standards [a] or advertising policies; (2) reducing the distribution of false news and of inauthentic content and accounts; and (3) giving users more context on the posts they see.

Google: Launch of the Google News Initiative [b] to fight misinformation and support journalism, based on three pillars: (1) increasing the integrity of the information displayed, especially during breaking news or crisis situations; (2) collaborating with the industry to surface accurate information; and (3) helping individuals distinguish quality content online through media literacy.

Microsoft: Creation of advertising policies that prohibit "ads for election related content, political parties and candidates, and ballot measures globally"; application of these policies across Microsoft services such as Bing and LinkedIn; partnership with NewsGuard [107] to provide a browser plug-in that warns users of untrustworthy news sites.

Twitter: Ban on political advertising [c]; interactions with the public to jointly build policies against media manipulation [108]; labeling of misleading posts and addition of warning messages that provide further explanation or clarification [109].

[a] https://www.facebook.com/communitystandards/

[b] https://newsinitiative.withgoogle.com/

[c] According to Twitter CEO Jack Dorsey, political advertising forces "highly optimized and targeted political messages on people", which brings significant risks as "it can be used to influence votes to affect the lives of millions". See: https://twitter.com/jack/status/1189634360472829952