Author manuscript; available in PMC: 2023 Nov 22.
Published in final edited form as: Eur Psychol. 2023 Jul 14;28(3):a000493. doi: 10.1027/1016-9040/a000493

Incorporating Psychological Science Into Policy Making: The Case of Misinformation

Anastasia Kozyreva 1, Laura Smillie 2, Stephan Lewandowsky 3,4,5
PMCID: PMC7615323  EMSID: EMS190996  PMID: 37994309

Abstract

The spread of false and misleading information in online social networks is a global problem in need of urgent solutions. It is also a policy problem because misinformation can harm both the public and democracies. To address the spread of misinformation, policymakers require a successful interface between science and policy, as well as a range of evidence-based solutions that respect fundamental rights while efficiently mitigating the harms of misinformation online. In this article, we discuss how regulatory and nonregulatory instruments can be informed by scientific research and used to reach EU policy objectives. First, we consider what it means to approach misinformation as a policy problem. We then outline four building blocks for cooperation between scientists and policymakers who wish to address the problem of misinformation: understanding the misinformation problem, understanding the psychological drivers and public perceptions of misinformation, finding evidence-based solutions, and co-developing appropriate policy measures. Finally, through the lens of psychological science, we examine policy instruments that have been proposed in the EU, focusing on the strengthened Code of Practice on Disinformation 2022.

Keywords: misinformation, disinformation, harmful content, regulation, policy making

Misinformation as a Policy Problem

Misinformation is a global problem in need of urgent solutions, primarily because it can encourage people to adopt false beliefs and take ill-informed action – for instance, in matters of public health (e.g., Loomba et al., 2021; Schmid et al., 2023; Tran et al., 2020). Although democratic societies can sustain some amount of unverified rumors or outright lies, there is a point at which willfully constructed alternative facts and narratives can undermine the public’s shared reality and erode trust in democratic institutions (Lewandowsky, Smillie, et al., 2020). Democracy requires a body of common political knowledge in order to enable societal coordination (Farrell & Schneier, 2018). For example, the public must be aware that the electoral system is fair and that an electoral defeat does not rule out future wins. Without that common knowledge, democracy is at risk. The seditious attempts to nullify the 2020 US election results have brought that risk into sharp focus (Jacobson, 2021).

Although not a new issue per se, misinformation has become a pressing global problem due to the rising popularity of digital media. Indeed, people around the world see the spread of false information online as a major threat, ranked just behind climate change (Poushter et al., 2022). Misinformation is therefore both a research problem – spanning the fields of network science, social sciences, and psychology – and a policy problem, defined as “a disconnection between a desired state and the current state of affairs” (Hoornbeek & Peters, 2017, p. 369).

The misinformation problem can be approached in a variety of ways, ranging from media literacy campaigns and government task forces to laws penalizing the act of sharing fake news (for an overview of actions taken across 52 countries, see Funke & Flamini, n.d.; see also Marsden et al., 2020). Worryingly, “fake news” laws in authoritarian states (e.g., Russia) are used to control the public conversation, effectively stifling opposition and freedom of the press (The Economist, 2021).

In the EU, when misinformation does not constitute outright illegal speech (e.g., consumer scams, incitement to terrorism) but is rather “legal but harmful,” choosing appropriate policy instruments is largely a matter of balancing the threat of harmful falsehoods against fundamental human rights and societal interests. This balancing act calls for evidence and expertise, which in turn require a successful interface between science and policy, including knowledge brokerage that facilitates a dialogue between the two distinct communities (Gluckman et al., 2021; Topp et al., 2018).

How Can Scientific Research Inform Misinformation Policy?

In this article, we discuss how misinformation policy can be informed by scientific research. We outline four building blocks for evidence-informed misinformation policy – understanding the policy problem, understanding psychological drivers and public perceptions, finding evidence-based solutions, and co-developing policy measures – then focus on how a specific piece of the EU co-regulatory framework, the strengthened Code of Practice on Disinformation 2022 (“the Code”; European Commission, 2022a), draws on findings and arguments from cognitive science (Figure 1).

Figure 1. Building blocks of science-based policy for the case of misinformation.


Understanding the Policy Problem

To address a policy problem, a thorough understanding of the problem in question is crucial. Policies to manage the misinformation problem should take into account an understanding of the types of false and misleading content, the harms misinformation might cause, its distribution in online networks, and the factors that contribute to its spread.

Defining Misinformation and What It Means for Policy

“Misinformation” is often used by the research community and the general public as an umbrella term for various types of false or misleading content, including “fake news” (entirely false claims masquerading as news), unconfirmed rumors, half-truths, factually inaccurate or misleading claims, conspiracy theories, organized disinformation campaigns, and state-sponsored propaganda. Misinformation can be defined along several dimensions, including degree of inaccuracy, presence of malicious intent, harmful outcomes, and risk of harmful outcomes (see, e.g., Kozyreva et al., 2020; Wardle & Derakhshan, 2017).

Determining the degree of inaccuracy of misinformation is a crucial first step. Online platforms are not in a position to independently establish whether information posted by their users is factual; they therefore generally consider accuracy only to the extent that it can be evaluated by external experts (e.g., Twitter, n.d.).1 Platforms thus outsource accuracy judgments to certified external fact-checking organizations. The credibility and expertise of organizations that correct misinformation are important to ensure the successful reduction of misperceptions (Lewandowsky, Cook, Ecker, Albarracín, et al., 2020; Vraga & Bode, 2017). Crucially, fact-checking organizations rarely work with binary definitions of truth; rather, they apply rating scales to both content and its sources. For instance, PolitiFact uses a 6-point rating scale to determine truthfulness (Drobnic Holan, 2018); Meta’s scale allows fact-checkers to classify content as false, altered, partly false, missing context, satire, or true (Meta, 2022); and NewsGuard assesses websites based on nine journalistic criteria, then assigns an overall rating on a 100-point scale of trustworthiness (e.g., Aslett et al., 2022). Unlike binary approaches, rating scales can paint a more nuanced picture of misleading content online.
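
To make the contrast with binary labels concrete, here is a minimal sketch in Python of how a multi-level content rating and a NewsGuard-style source score from 0 to 100 could be represented. The class and field names are hypothetical and chosen only for illustration; the category labels follow Meta’s scale cited above, and the trust threshold is arbitrary rather than taken from any rating organization.

```python
# Minimal illustrative sketch (hypothetical names): non-binary accuracy
# ratings for content and a graded trustworthiness score for sources,
# in contrast to a simple true/false flag.
from dataclasses import dataclass
from enum import Enum


class ContentRating(Enum):
    # Categories mirroring the Meta fact-checking scale described above
    FALSE = "false"
    ALTERED = "altered"
    PARTLY_FALSE = "partly false"
    MISSING_CONTEXT = "missing context"
    SATIRE = "satire"
    TRUE = "true"


@dataclass
class SourceAssessment:
    domain: str
    trust_score: float  # NewsGuard-style 0-100 trustworthiness score

    def is_trustworthy(self, threshold: float = 60.0) -> bool:
        # The threshold is arbitrary; real criteria are set by the raters
        return self.trust_score >= threshold


# Example: a post rated "partly false" from a low-scoring site
rating = ContentRating.PARTLY_FALSE
site = SourceAssessment(domain="example-news.com", trust_score=37.5)
print(rating.value, site.is_trustworthy())  # -> partly false False
```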

Misleading content can also be classified according to the intent behind sharing it. For instance, Wardle and Derakhshan (2017) distinguished three types of “information disorders”: misinformation (false or misleading content created and initially shared without malicious intent), disinformation (false, fabricated, or manipulated content shared with intent to deceive or cause harm), and malinformation (genuine information shared with the intent to cause harm, e.g., hate speech and leaks of private information). Although this classification establishes some useful general distinctions, in practice it is often impossible to differentiate between misinformation and disinformation because intent is difficult to infer. Furthermore, if misinformation and disinformation differ only in intent and not in content, they can have the same psychological effects on an individual, and their consequences can be equally harmful. We argue that policies that address misinformation should define intent using measurable characteristics. For instance, policies could focus on behavioral proxies of intent, such as signs of coordinated inauthentic behavior (e.g., fake accounts) and repeated sharing of falsehoods.

Another crucial dimension in defining misinformation is the severity of harm it can cause or has already caused. From the policy angle, false and misleading information is considered harmful when it undermines people’s ability to make informed choices and when it leads to adverse consequences such as threats to public health or to the legitimacy of an election (European Commission, 2020a). Research must therefore establish the specific aspects of online misinformation that threaten individuals and society and thus warrant policy attention. It should also examine how misinformation contributes to other events (e.g., elections, measures to contain pandemics) and how it affects people’s behavior and relevant antecedents of behavior (e.g., intentions, attitudes; see Schmid et al., 2023). For example, relative to factual information, exposure to misinformation can reduce people’s intention to get vaccinated against COVID-19 by more than 6 percentage points (Loomba et al., 2021). Misinformation about climate change undermines people’s confidence in the scientific consensus (van der Linden et al., 2017), and exposure to climate misinformation reduces people’s acceptance of the science more than accurate information can increase it (Rode et al., 2021).

Paying specific attention to harmful misinformation is particularly important in light of the proportionality principle. The severity of harm (represented, e.g., by the number of casualties or other adverse consequences) is one of the most impactful factors in people’s decisions to impose limits on online speech (Kozyreva et al., 2023) and hate speech (Rasmussen, 2022). Online platforms’ policies have – at least until recently – largely reflected that point. For example, Twitter’s approach to misinformation explicitly evoked the proportionality principle, claiming that actions against misinformation should be proportionate to the level of potential harm and whether the misinformation constitutes a repeated offense (Twitter, n.d.). However, the fate of such policies at Twitter and other platforms remains uncertain, highlighting the need for transparent and consistent rules for content moderation, independent of the whims of individuals with vested interests.

Risk and uncertainty posed by misinformation are also crucial for policy making. Risk refers to the likelihood of harmful consequences actually occurring, and uncertainty, in this context, refers to variance in the estimates of risk. Even small risks can warrant policy attention, especially if their potential consequences could be highly damaging. Moreover, the higher the uncertainty associated with estimates of potential harm, the more policy attention the issue might deserve. For example, greater uncertainty about climate change implies a greater probability of adverse consequences and therefore a stronger, rather than weaker, need for mitigating measures (Lewandowsky, Risbey, Smithson, & Newell, 2014; Lewandowsky, Risbey, Smithson, Newell, et al., 2014).
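
The point about uncertainty can be illustrated with a simple worked example. The Python sketch below is not the model used in the cited work; it is only a toy Monte Carlo illustration, with arbitrary numbers, of the underlying logic: when harm grows convexly with an uncertain outcome, widening the uncertainty around the same mean estimate increases expected harm.

```python
# Toy illustration (not the cited authors' model): with a convex damage
# function, greater uncertainty around the same mean estimate implies
# greater expected harm.
import random


def expected_damage(mean: float, spread: float, n: int = 100_000) -> float:
    """Monte Carlo estimate of E[damage] with damage(x) = x**2 (convex)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(mean, spread)  # uncertain outcome estimate
        total += x ** 2                 # convex damage grows faster than x
    return total / n


random.seed(1)
print(f"low uncertainty:  {expected_damage(mean=2.0, spread=0.5):.2f}")
print(f"high uncertainty: {expected_damage(mean=2.0, spread=1.5):.2f}")
# The second value is larger even though the mean estimate is identical.
```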

What Are the Causes and the Scope of the Problem?

Understanding the policy problem involves investigating underlying causes and the conditions that facilitate the spread of misinformation, as well as establishing the scope of the problem in a measurable way. Establishing causality is crucial for addressing the root of the misinformation problem. For instance, if social media were found to causally increase people’s exposure to harmful misinformation and the associated changes in behavior, it would be legitimate to expect that changes to social media could influence societal well-being. In the absence of causality, this expectation does not hold. Overall, there is sufficient evidence to suggest that misinformation has causal effects on people’s behavior and attitudes (Bursztyn et al., 2020; Loomba et al., 2021; Rode et al., 2021; Simonov et al., 2020). At the same time, the nature of these causal effects is ambiguous and dependent on cultural context. For instance, the positive effects of digital media are intertwined with serious threats to democracy, and these effects are distributed differently between established and emerging democracies (Lorenz-Spreen et al., 2022).

Efforts that contribute to understanding the scope of the problem include monitoring misinformation across platforms, tracing the problem’s origins to specific actors (e.g., political actors, foreign interference, superspreaders), and investigating how features of online environments can amplify misinformation and influence people’s behavior. Identifying the motivations of those who intentionally spread falsehoods and curbing incentive structures that facilitate the spread of misinformation are also important for controlling the sources of the problem. For instance, Global Disinformation Index staff (2019) found that online ad spending on disinformation domains amounted to $235 million a year. The Code and the Digital Services Act (DSA) of the European Union, therefore, have several provisions aimed at demonetizing such content.

Monitoring programs that track misinformation across platforms are crucial; under the DSA they are an obligation for most major platforms (European Commission, 2020b). Independent organizations also contribute to the task (e.g., the Virality Project and the EU’s COVID-19 monitoring and reporting program both monitor pandemic-related disinformation). Independent monitoring makes it possible to estimate the size of a threat and to establish which platforms and sources are hotspots for misleading content. Most studies estimate that political misinformation (or “fake news”) online constitutes anywhere from 0.15% (Allen et al., 2020) to 6% (Guess et al., 2019) of people’s news diet, but there are indicators of considerable cross-platform variation. For example, Altay and colleagues (2022) found that in 2020, “generally untrustworthy news outlets, as rated by NewsGuard, accounted for 2.28% of the web traffic to news outlets and 13.97% of the Facebook engagement with news outlets” (p. 9). Similarly, Bradshaw and Howard (2019) showed that from 2017 to 2019, the number of countries with disinformation campaigns more than doubled (from 28 to 70) and that Facebook remains the main platform for those campaigns. These findings are particularly concerning since Facebook is the most popular social media platform for news both globally and in Europe (Newman et al., 2022).

Finally, detection is an important factor in tracing the spread of misinformation and implementing measures to moderate it. As the pressure to detect problematic content proactively and at scale mounts, platforms increasingly rely on algorithmic tools (Gorwa et al., 2020) for detecting misinformation, moderating content, and attaching warning labels. However, these tools are fraught with problems, such as a lack of transparency and the inevitable occurrence of false positives (acceptable content that is removed) and false negatives (posts that violate platform policies but escape detection); for an informed overview and discussion of the topic, see Gorwa et al. (2020).
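
A back-of-the-envelope calculation shows why such errors are unavoidable at scale. The sketch below uses entirely hypothetical volumes and error rates, not any platform’s reported figures; even a detector that looks accurate on paper produces large absolute numbers of false positives and false negatives when applied to hundreds of millions of posts.

```python
# Hypothetical numbers only: absolute error counts of an automated
# misinformation detector applied at platform scale.
def moderation_errors(posts_per_day: int, misinfo_rate: float,
                      true_positive_rate: float, false_positive_rate: float):
    misinfo = posts_per_day * misinfo_rate
    benign = posts_per_day - misinfo
    false_negatives = misinfo * (1 - true_positive_rate)  # violations missed
    false_positives = benign * false_positive_rate        # acceptable posts removed
    return int(false_negatives), int(false_positives)


# Assumed values for illustration: 100M posts/day, 1% misinformation,
# 95% of violations caught, 1% of acceptable posts wrongly flagged.
fn, fp = moderation_errors(posts_per_day=100_000_000, misinfo_rate=0.01,
                           true_positive_rate=0.95, false_positive_rate=0.01)
print(f"missed misinformation posts per day: {fn:,}")       # 50,000
print(f"acceptable posts wrongly flagged per day: {fp:,}")  # 990,000
```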

Implications for Policy

Identifying key characteristics of misinformation and the aspects of misinformation that merit policy attention is especially important for defining policy objectives. Making policy decisions about truth is a notoriously difficult task, not only because people may disagree about what constitutes truth, but also because limiting speech can pose dangers to democracy (for a discussion, see Sunstein, 2021). Because the majority of false and misleading information is not classified as illegal content in the EU and its member states, EU policy measures addressing misinformation focus primarily on harmful but legal content, including bots, fake accounts, and false and misleading information that could lead to adverse consequences and societal risks. Risk and harm feature prominently in policy objectives. For instance, the DSA requires online platforms whose number of users exceeds 10% of the EU population to assess systemic risks such as risks to public health and electoral processes. The intent behind sharing misinformation is also relevant for selecting appropriate measures at the policy level. For example, in EU policy, when misinformation is spread without intent to deceive, it “can be addressed through well-targeted rebuttals and myth busting and media literacy initiatives,” whereas malicious disinformation “needs to be addressed through other means, including actions taken by governments” (European Commission, 2020a, p. 4).

Major challenges for policy include consistency, transparency, and cross-platform integration. The Code, therefore, requires its signatories to commit to developing common definitions or “a cross-service understanding of manipulative behaviors, actors and practices not permitted on their services” (European Commission, 2022a, Commitment 14, pp. 15–17) and to dedicate transparency centers and task forces to these issues.

Understanding Psychological Drivers and Public Perceptions

Policies to manage the misinformation problem should also take into account the cognitive and behavioral factors involved in exposure to and perception of misinformation. This requires a clear picture of how people interact with online platforms and what makes them particularly susceptible to misinformation, as well as which groups of people are most vulnerable.

What Are the Psychological Underpinnings of the Problem?

There are various psychological drivers of belief in misinformation (for an overview, see Ecker et al., 2022; Zmigrod et al., 2023; for an account of what makes people succumb to science denial, see Jylhä et al., 2023). For instance, people tend to accept information as true by default. Although this default makes sense given that most of an individual’s daily interactions are with honest people, it can be readily exploited. The perceived truthfulness of a message increases with variables such as repetition; motivated cognition, which can be triggered by information that is congruent with one’s political views; and failure to engage in deliberation that would have revealed the information to be unsubstantiated (i.e., the inattention-based account; see Pennycook & Rand, 2021).

Moreover, misinformation may press several psychological hot buttons (Kozyreva et al., 2020). One is negative emotions and how people express them online. For instance, Vosoughi and colleagues (2018) found that false stories that went viral were likely to inspire fear, disgust, and surprise; true stories that went viral, in contrast, triggered anticipation, sadness, joy, and trust. The ability of false news to spark negative emotions may give it an edge in the competition for human attention; moreover, digital media may encourage the expression of negative emotions like moral outrage (Crockett, 2017). In general, people are more likely to share messages featuring moral–emotional language (Brady et al., 2017). Because misinformation is not tied to factual constraints, it can be designed to trigger attentional and emotional biases that facilitate its spread.

Another factor in the dissemination of false and misleading information is the business model behind social media, which relies on immediate gratification, engagement, and attention. These goals determine the design of the algorithms that customize social media news feeds and the recommender systems that suggest content. Although not in themselves malicious, algorithmic filtering and personalization are designed to amplify the most engaging content – which is often sensational or negative news, outrage-provoking videos, or conspiracy theories and misinformation (Lewandowsky & Pomerantsev, 2022). For example, Mozilla’s (2021) study confirmed that YouTube actively recommended videos that violate its own policies on political and medical misinformation, hate speech, and inappropriate content. Problems associated with the amplification of harmful content might emerge not merely due to psychological biases and predispositions, but, crucially, because technology is designed to exploit these weaknesses. For example, the structural properties of social networks may be enough in themselves to cause challenges such as echo chambers, regardless of how rational and unbiased their users are (e.g., Madsen et al., 2018).

What Demographics Are Most Susceptible to Misinformation?

Misinformation generally makes up a small fraction of the average person’s media diet, but some demographics are disproportionately susceptible (Allen et al., 2020; Grinberg et al., 2019; Guess et al., 2019). Strong conservatism, right-wing populism, and advanced age are predictors of increased engagement with misleading content (van der Linden et al., 2020). Converging evidence across studies in several countries indicates that the propensity to believe in COVID-19 conspiracy narratives is linked to right-wing voting intentions and conservative ideologies (Leuker et al., 2022; Roozenbeek et al., 2020). A recent cross-cultural study found that supporters of political parties that are judged as extreme on either end of the political spectrum (extreme left-wing and especially extreme right-wing) have a higher conspiracy mentality (Imhoff et al., 2022).

How Does the Public Perceive the Problem, and What Are Public Attitudes Toward Its Relevant Aspects?

People’s perceived exposure to misinformation online is high: In an EU study, 51% of respondents using the internet indicated that they had been exposed to misinformation online and 37% stated that they had been exposed to “content where you could not easily determine whether it was a political advertisement or not” (Directorate-General Communication, 2021, p. 61). In a recent global survey, 54% of respondents were concerned about the veracity of online news, 49% said they had come across misinformation about COVID-19 in the last week, and 44% had encountered misinformation about politics in the last week (Newman et al., 2022). Furthermore, 70% of respondents across 19 countries see the spread of false information online as a “major threat” (Poushter et al., 2022). Although perceived exposure to misinformation may differ from actual exposure, a perceived prevalence of misinformation online can suffice to increase mistrust in media and political institutions (e.g., CIGI-Ipsos, 2019).

Given that the spread of misinformation is inextricably entangled with platforms’ algorithms, public attitudes towards algorithms and data usage are highly relevant to policymakers. Social media news feeds, viewing suggestions, and online advertising have created highly personalized environments governed by nontransparent algorithms, and users have little control over how the information they see is curated. A recent survey showed that most respondents in Germany (61%) and Great Britain (61%) and approximately half in the United States (51%) deem personalized political advertising unacceptable (Kozyreva et al., 2021). In all three countries, people also objected to the use of most personal data and sensitive information that could be collected for personalization. Nearly half (46%) of EU citizens worry about the use of personal data and information by companies or public administrations (European Commission, 2021d), and only 25% of people globally trust social media to use their data responsibly (Newman et al., 2022).

Implications for Policy

An understanding of the psychological drivers behind misinformation and public perceptions of false and misleading information is particularly relevant for regulations on platform design. Platform design is currently not free of features that might exploit human psychology for profit, such as persuasive choice architectures and attention-capturing ways of presenting information (Kozyreva et al., 2020). Policy must also take into account the cognitive implications of online technologies and protect the public against potential manipulation. Researchers have argued that protecting citizens from manipulation and misinformation and protecting democracy requires a redesign of the online architecture that has misaligned the interests of platforms and consumers (Lewandowsky & Kozyreva, 2022; Lewandowsky & Pomerantsev, 2022; Lewandowsky, Smillie, et al., 2020). It is crucial to restore the signals that make informed decision-making possible (Lorenz-Spreen et al., 2020) and to offer users more control over their data and the information they are shown. It is important to note, however, that understanding the psychology of misinformation does not always produce an actionable policy agenda. For example, the finding that extreme conservative ideology is predictive of conspiracy mentality (van der Linden et al., 2020) cannot inform impartial policies.

In order to address the imbalance between online platforms and the public and to increase transparency, the DSA introduced “wide-ranging transparency measures around content moderation and advertising” (European Commission, 2021b, p. 2). These measures include increased algorithmic accountability, in particular with regard to how information is prioritized and targeted. For instance, the DSA gives users more control over recommender systems and the power to refuse targeted advertising; it also bans targeted advertising to vulnerable groups. To minimize the risks associated with online advertising, profiling and microtargeting practices must be transparent and purveyors of disinformation must be barred from purchasing advertising space. Political advertising and microtargeting practices are also addressed in the proposal for political advertising legislation (European Commission, 2021c), which aims to provide a unifying framework for political ads online and offline.

The strengthened Code also encourages its signatories to commit to safe design practices by facilitating “user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness” (European Commission, 2022a, p. 23) and increasing the accountability of recommender systems (e.g., by “prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information”; p. 20).

Finding Evidence-Based Solutions

Psychological science can also provide evidence for interventions and solutions aimed at reducing the spread of misinformation.

What Are the Key Entry Points for Interventions?

Kozyreva et al. (2020) identified four types of entry point for policy interventions in the digital world: regulatory (e.g., legislative initiatives, policy guidelines), technological (e.g., platforms’ detection of harmful content and inauthentic behavior), educational (e.g., school curricula for digital information literacy), and socio-psychological (e.g., behavioral interventions to improve people’s ability to detect misinformation or slow the process of sharing it). Entry points can inform each other; for instance, an understanding of psychological processes can contribute to the design of interventions for any entry point, and regulatory solutions can directly constrain and inform the design of technological and educational agendas.

What Solutions Already Exist or Can Be Developed?

Misinformation research offers many ways of slowing the spread of dangerous falsehoods and improving people’s ability to identify unreliable information (see Kozyreva, Lorenz-Spreen, et al., 2022; Roozenbeek et al., 2023). There are behavioral and cognitive interventions that aim to fight misinformation by debunking false claims (Ecker et al., 2022; Lewandowsky, Cook, Ecker, Albarracín, et al., 2020), boosting people’s competencies through digital media literacy (Guess et al., 2020) and lateral reading (Breakstone et al., 2021; Wineburg et al., 2022), inoculating people against manipulation (Basol et al., 2020; Cook et al., 2017; Lewandowsky & van der Linden, 2021; Roozenbeek & van der Linden, 2019), and implementing design choices that slow the process of sharing misinformation – for instance, by highlighting the importance of accuracy (Pennycook, Epstein, et al., 2021) or introducing friction (Fazio, 2020). These tools and interventions stem from different disciplines, including cognitive science (Ecker et al., 2022; Pennycook & Rand, 2021), political and social psychology (Brady et al., 2017; Van Bavel et al., 2021), computational social science (Lazer et al., 2018), and education research (Caulfield, 2017). They also rely on different conceptual approaches (e.g., nudging, inoculation, boosting, techno-cognition; for an overview, see Kozyreva et al., 2020; Lorenz-Spreen et al., 2020) and different research methodologies to test their effectiveness (e.g., Pennycook, Binnendyk, et al., 2021; Wineburg et al., 2022). Although interventions may differ in terms of scalability, field studies have shown that accuracy prompts (Pennycook & Rand, 2021) and psychological inoculation campaigns on social media are effective at improving misinformation resilience at scale (Roozenbeek et al., 2022).

New challenges in online environments call for new competencies in information management. This might require going beyond what has traditionally been taught in schools. For instance, to efficiently navigate online information, people must be able to ignore large amounts of it and focus on that which is relevant to their goals (Kozyreva, Wineburg, et al., 2022). They must also be able to evaluate scientific information themselves (Osborne et al., 2022).

What Is the State of the Evidence?

Not all evidence is created equal. Research on behavioral interventions should be applied to policy only if the quality of the evidence and its readiness level are appropriate (IJzerman et al., 2020). For instance, some studies are run on nonrepresentative samples and might not be generalizable enough to serve a policy’s objectives. It is therefore important to consider the generalizability and replicability of empirical studies. Methodological approaches such as metascience and research synthesis for policy-relevant problems and solutions can help mitigate these challenges (Topp et al., 2018). Different types of research syntheses might be required to address the misinformation problem, including conceptual overviews and reports, systematic reviews, meta-analyses, expert reviews (Lewandowsky, Cook, & Ecker, 2020), and living reviews (Elliott et al., 2021).
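
As a concrete illustration of one synthesis tool mentioned above, the sketch below shows the core arithmetic of a simple fixed-effect meta-analysis (inverse-variance weighting of study effect sizes). The effect sizes and standard errors are invented for illustration and are not taken from the cited literature.

```python
# Minimal sketch of a fixed-effect meta-analysis via inverse-variance
# weighting; inputs are hypothetical effect sizes and standard errors.
def pooled_effect(effects, standard_errors):
    weights = [1 / se ** 2 for se in standard_errors]  # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se


effects = [0.30, 0.12, 0.22]  # hypothetical study effect sizes
ses = [0.10, 0.05, 0.08]      # hypothetical standard errors
estimate, se = pooled_effect(effects, ses)
print(f"pooled effect: {estimate:.3f} (SE = {se:.3f})")
```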

Implications for Policy

Evidence-based solutions are especially important for policies geared toward empowering citizens to deal with misinformation. Current EU policies foster digital media and information literacy through design choices and educational interventions; for example, improving media and information literacy through educational interventions is one of the priorities of the new Digital Education Action Plan (2021–2027), a policy initiative directed in part at helping EU citizens develop digital skills and competences.

The strengthened Code encourages platforms to highlight reliable information of public interest (e.g., information on COVID-19) and prioritize reliable content from authoritative sources (e.g., with information panels, banners, pop-ups, maps, or prompts) in order to empower users. It also focuses on enabling users to flag harmful false or misleading information and on introducing warnings that content has been identified as false or misleading by trusted third-party fact-checkers.

Research on interventions against misinformation has advanced considerably in recent years. In practical terms, this means that there is a toolbox of interventions that can be applied to the challenges of online environments (Kozyreva, Lorenz-Spreen, et al., 2022). In addition, platforms themselves have been implementing and testing various interventions on a large scale (e.g., Twitter Safety, 2022). However, internal reports on the effectiveness of platform interventions rarely go beyond reporting mere percentages and are not transparent methodologically. It is therefore particularly important that platforms heed the Code’s call for data sharing and cooperation with researchers (see also Pasquetto et al., 2020).

Finally, interventions aimed at fostering media literacy and empowering online users should not be regarded as a substitute for developing and implementing systemic and infrastructural solutions (see Chater & Loewenstein, 2022). Instead, as the “Swiss cheese model” for mitigating misinformation suggests (Bode & Vraga, 2021), different measures and interventions should be implemented as multiple lines of defense against misinformation.

Co-Developing Appropriate Policy Instruments

The final building block of science-based policy for the misinformation problem is co-developing evidence-informed policy instruments.

What Policy Instruments Are Appropriate?

In the Better Regulation Toolkit (European Commission, 2021a), “evidence” refers to data, information, and knowledge from multiple sources, including quantitative data (e.g., statistics and measurements), qualitative data (e.g., opinions, stakeholder input), conclusions of evaluations, and scientific and expert advice. The toolkit makes no specific reference to psychological science. However, contemporary policy-making techniques such as stakeholder and citizen dialogue can identify knowledge gaps and different ways of framing a problem, and psychological science lends itself well to these techniques (e.g., strategic foresight, citizen engagement, and workshops to establish the values and identities triggered by a policy area; Lewandowsky, Smillie, et al., 2020; Scharfbillig et al., 2021). Research in psychological science can thus help identify evidence that is critical for establishing the appropriate type of policy instrument.

What Tools Can Help Develop the Policies?

Several tools can assist with policy development. One tool is the foresight exercise, which is particularly suitable when policymakers face systemic challenges that require them to address multiple issues simultaneously while facing high uncertainty. In a foresight exercise, policymakers and key stakeholders imagine possible futures that may arise in response to their actions, without specifying links between policy decisions and outcomes. Creating, exploring, and examining radically different possible futures can help policymakers uncover evolving trends and dynamics that may have a significant impact on the situation at hand. For instance, Lewandowsky, Smillie, et al. (2020) explored four possible futures of the European online environment: The “struggle for information supremacy” scenario, which assumes that the European information space will be marked by high degrees of conflict and economic concentration; the “resilient disorder” scenario, in which the EU has fostered a competitive, dynamic, and decentralized information space with strong international interdependence, but faces threats from disinformation campaigns; the “global cutting edge” scenario, which foresees a world in which societal and geopolitical conflict have been reduced significantly, while high degrees of competition and innovation have led to the emergence of a dynamic, global information space; and the “harmonic divergence” scenario, which assumes that regulatory differences and economic protectionism between nations have resulted in a fractured global information space. From these scenarios, participants in a stakeholder workshop generated five classes of potential drivers of change that might determine the future (society, technology, environment, economy, and policy). This process highlighted the two most important drivers, which were also the most uncertain: the changing economic paradigm and conflicts and cyberattacks. Identifying those drivers is an important step toward future action by policymakers: At the very least, it suggests that resources should be allocated to further study of those drivers with a view towards reducing uncertainty.

What Are the Expected Impacts of the Policy?

Any regulation proposal needs to address potential impacts and the outcomes of the policy for relevant actors and society as a whole. In the EU, this is achieved through impact assessment, a standard methodology adopted by the EU Commission to address the potential social, economic, and environmental impacts of a policy (Adelle & Weiland, 2012; European Commission, n.d.). An impact assessment comprises a structured analysis of policy problems and corresponding policy responses. It involves developing policy objectives and policy options, as well as ascertaining the options’ subsidiarity, proportionality, and impact. The assessment also considers effective procedures for monitoring and evaluation. Impact assessments provide the evidence base for a range of policy options and may result in a preferred option.

EU Policy Approaches to Misinformation: The Strengthened Code of Practice on Disinformation

In liberal democracies such as the EU, online platforms are currently the primary regulators of speech on the internet, thus leaving the power to make and enforce rules in the hands of unelected individuals at profit-driven companies. To address this issue, the EU Commission has developed an array of policies combining stricter regulations for platform design and co-regulation guidelines for misinformation and harmful content (European Commission, 2022b; see also Helberger, 2020; Marsden et al., 2020). Both regulatory and self-regulatory policy instruments for addressing misinformation are currently under development (see Table E1 in the Electronic Supplementary Material, ESM 1, for an overview).

The DSA, the centerpiece of EU policy measures, establishes the EU-wide regulatory framework for providers of digital services. It also designates a special category of service providers – Very Large Online Platforms (VLOPs) – under the assumption that the risk of harm from the dissemination of misinformation is connected to the size and reach of the platform (Broughton Micova, 2021). VLOPs are required to manage systemic risks related to the dissemination of illegal content, potentially negative effects on fundamental rights, and intentional manipulation of their services, including “any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being.” (European Parliament, 2022, p. 75).2

The self-regulatory Code is a central piece of EU policy on harmful misinformation (European Commission, 2022a). In force since October 2018, its signatories include major online platforms such as Meta, Google, and Twitter. A strengthened version of the Code was released in 2022, with 34 signatories agreeing to 44 commitments and 128 implementable measures. It features measures designed to strengthen platforms’ approach to misinformation, including more robust commitments to defund organized disinformation, limit manipulative behaviors, increase transparency in political advertising, and empower users, researchers, and fact-checkers. The strengthened Code also has a provision to improve cross-platform policy making.

In Table 1, we summarize the Code’s commitments, their significance from the perspective of cognitive science, and the VLOP signatories. As the table shows, the user-empowerment section of the Code is the strongest from the perspective of cognitive science. For instance, psychological science provides a variety of interventions to tackle the spread of misinformation (see also the section “Finding Evidence-Based Solutions”) and underscores the importance of choice architectures that enhance user autonomy and facilitate informed decision-making (e.g., by including trustworthiness indicators). However, several VLOPs have declined to sign up for some of the commitments in the user-empowerment section (e.g., Commitments 20 and 22). Individual empowerment, which cognitive science highlights as important, thus appears to be a weak point in VLOP commitments. This will be especially significant if the Code evolves into a co-regulation instrument for the VLOPs under the DSA. Signatories of the Code published their first 6-monthly monitoring reports on February 9, 2023 (https://disinfocode.eu/reports-archive/?years=2023). There are clear discrepancies in the quality of reporting, particularly on the part of the platforms, and further work is needed to establish meaningful metrics of impact. Nevertheless, this is an important first step in increasing transparency and sharing insights and data from a broad stakeholder group.

Table 1. Cognitive foundations and signatory commitments under the strengthened Code of Practice on Disinformation 2022¹.

Commitment Summary Significance from the perspective of cognitive science Very Large Online Platform (VLOP) adopters VLOP non-adopters²
Scrutiny of ad placements (3 commitments)
1. Demonetization Disconnect ad revenues from disinformation, with independent audits Monetary incentives can be powerful motivators of behavior. Global Disinformation Index staff (2019) estimate that a quarter billion dollars’ worth of advertising globally goes to sites flagged as disseminating disinformation.
  • Google (Ads)

  • Meta (Facebook, Instagram)

  • Microsoft (Ads) = all but 1.4 (does not buy advertising)

  • Microsoft (LinkedIn)

  • TikTok

  • Twitter

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Bing)

2. Ads containing disinformation Disrupt algorithmic amplification of disinformation Algorithms prioritize engagement and can inadvertently amplify attention-grabbing content. Compared to verifiable information, misinformation is more emotional and negative (Carrasco-Farré, 2022) and more likely to inspire fear, disgust, and surprise (Vosoughi et al., 2018).
  • Google (Ads)

  • Meta (Facebook, Instagram)

  • Microsoft (Ads, LinkedIn)

  • TikTok

  • Twitter

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Bing)

3. Cooperation Cooperation with fact-checkers Fact-checking can reduce people’s beliefs in false information, especially when detailed refutations are provided. Source credibility and expertise also matter for successful corrections (Lewandowsky, Cook, Ecker, Albarracín, et al., 2020)
  • Google (Ads)

  • Meta (Facebook, Instagram)

  • Microsoft (Ads, LinkedIn)

  • TikTok

  • Twitter

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Bing)

Political advertising (10 commitments)
4. Common definition Adopt a common definition of “political and issue advertising” NA
  • Google (Ads)

  • Meta (Facebook, Instagram)

  • Microsoft (Ads, LinkedIn)

  • TikTok

  • Twitter

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Bing)

5. Consistent political ads Apply a consistent approach across political and issue advertising NA
  • Google (Ads)

  • Meta (Facebook, Instagram)

  • Microsoft (Ads, LinkedIn)

  • TikTok

  • Twitter

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Bing)

6. Efficient labeling Transparent labeling of political ads (incl. developing best practices, improving visibility, participating in research) Although users pay some attention to these disclosures, they often do not enhance users’ knowledge of who paid for a given ad (Binford et al., 2021). Labels and associated information should be easy to understand and should provide the signals that make informed decision-making possible (Lorenz-Spreen et al., 2020). Platforms must continue to improve labeling on political ads and collaborate with researchers.
  • Google (Ads) = all but 6.5

  • Meta (Facebook, Instagram) = all but 6.5

  • TikTok = all but 6.5

  • Twitter = all but 6.5

  • - not messaging apps

  • Google (Search, YouTube)

  • Meta (Messenger) = all but 6.5

  • Meta (WhatsApp)

  • Microsoft (Ads, LinkedIn, Bing - do not allow political or issue-based advertising)

7. Verification Ensure identity of ad sponsor is known Identifying the source of sponsored content can help people use strategies for verifying its credibility (e.g., lateral reading; Wineburg et al., 2022).
  • Meta (Facebook, Instagram)

  • Google (Ads)

  • Twitter

  • TikTok

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads, LinkedIn) = all but 7.3 (relates to reporting ads that may violate respective policies) Microsoft (Bing) - does not allow political or issue-based advertising

8. User-facing transparency Transparent information on political ads Transparency is an essential element of democratic governance (Fung, 2013).
  • Meta (Facebook, Instagram)

  • Google (Ads)

  • Twitter

  • TikTok

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads, LinkedIn, Bing - does not allow political or issue-based advertising)

9. Transparency of targeting Identify targeting for ads Customizing messages, including political ads, based on a receiver’s personal characteristics is known as microtargeting. In microtargeting, digital fingerprints are used to infer personal attributes such as religion, political affiliation, and sexual orientation (Hinds & Joinson, 2018; Kosinski et al., 2013). Microtargeted political ads can be used to undermine democratic discourse. Most people oppose microtargeting of certain content (e.g., political ads) and microtargeting based on certain attributes (e.g., political affiliation; Kozyreva et al., 2021).
  • Meta (Facebook, Instagram)

  • Google (Ads)

  • Twitter

  • TikTok

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads, LinkedIn, Bing - do not allow political or issue-based advertising)

10. Repositories Full historical record of ads and targeting NA
  • Google (Ads)

  • Meta (Facebook, Instagram)

  • TikTok

  • Twitter

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads, LinkedIn, Bing - do not allow political or issue-based advertising)

11. Application programming interfaces (APIs) Provide researchers and public access to data for research Researchers have identified ways for online platforms to contribute to the study of misinformation, specifying how “increased data access would enable researchers to perform studies on a broader scale, allow for improved characterization of misinformation in real-world contexts, and facilitate the testing of interventions to prevent the spread of misinformation.” (Pasquetto et al., 2020, p. 2).
  • Google (Ads)

  • Meta (Facebook, Instagram)

  • TikTok

  • Twitter

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads, LinkedIn, Bing - do not allow political or issue-based advertising)

12. Civil society Permit scrutiny of advertising during elections NA None
  • Google (Ads, Search, YouTube)

  • Meta (Facebook, Instagram, WhatsApp, Messenger)

  • Microsoft (Ads, LinkedIn, Bing)

  • TikTok

  • Twitter

  • - not civil society organizations

13. Ongoing collaboration Continue to monitor evolving risks Online misinformation and manipulation are moving targets and require consistent monitoring and updating of measures.
  • Google (Ads)

  • Meta (Facebook, Instagram)

  • TikTok

  • Twitter

  • Google (Search, YouTube)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads, LinkedIn, Bing - do not allow political or issue-based advertising)

Integrity of services (3 commitments)
14. Impermissible manipulative behaviors (common understanding) Platforms should adopt and implement publicly available policies for impermissible manipulative behaviors, maintain list of manipulative strategies Knowing common manipulative strategies can help in preemptively inoculating people against them (e.g., Lewandowsky & van der Linden, 2021; Roozenbeek et al., 2022)
  • Google (Search, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing)

  • TikTok

  • Twitter

  • Meta (WhatsApp, Messenger)

  • Google (Ads)

15. AI transparency Algorithms used for detection and content moderation should be transparent and respect user rights; manipulative practices for AI systems are prohibited Misinformation policies should be consistent across platforms. Process of establishing transparent and consistent rules for content moderation can be informed by public attitudes (e.g., Kozyreva et al., 2023; Rasmussen, 2022).
  • Google (Search, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing)

  • TikTok

  • Twitter

  • Meta (WhatsApp, Messenger)

  • Google (Ads)

16. Cooperation and transparency Cross-platform integration of efforts NA
  • Google (Search = all but 16.2 – no platform migration, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing = all but 16.2)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

Empowering users (9 commitments)
17. Media literacy Extend users’ media literacy The literature on various interventions aimed to improve users’ media literacy is extensive (e.g., Guess et al., 2020; Kozyreva, Lorenz-Spreen et al., 2022; Roozenbeek et al., 2022; Wineburg et al., 2022).
  • Google (Search, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

18. Safe design Disrupt algorithmic amplification of disinformation and increase safety of design Design of online architectures has an impact on people’s decisions and behavior. Interventions via choice architectures and cognitive design principles can contribute to safer and autonomy-promoting online environments (e.g., Lorenz-Spreen et al., 2020).
  • Google (YouTube, Search = all but 18.1 - does not allow for viral propagation)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing = all but 18.1)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

19. Transparency of recommender systems Transparent AI and recommender systems NA
  • Google (Search, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

20. Provenance tools to check authenticity and accuracy of content Authentication tools Interactive, customizable labels and warnings can empower users. Microsoft (Bing, LinkedIn)
  • Google (Ads, Search, YouTube)

  • Meta (Facebook, Instagram, WhatsApp, Messenger – other tools exist)

  • Microsoft (Ads)

  • TikTok

  • Twitter – will explore the feasibility of this measure

21. Fact-checking and flagging tools for accuracy Provide users with labels and warnings Several recent studies have confirmed the efficacy of labels and warnings in the context of misinformation, especially when corrections contained an alternative explanation (Ecker et al., 2010) and when information was identified as true or false while it was presented, or immediately afterwards (Brashier et al., 2021). When only some false information is flagged as false and true information is not affirmed, this may raise the perceived truth of other false information that is not flagged (Pennycook et al., 2020). Efficient labeling requires further research, as textual disclosure labels may go unnoticed compared to graphic labels (Dobber et al., 2021).
  • Google (Search = all but 21.2, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing = all but 21.2 – do not allow for viral propagation)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

22. Indicators of trustworthiness of sources Provide users with trustworthiness indicators and pointers to trustworthy sources Trustworthiness of content can be measured with reasonable objectivity (e.g., by checking a site’s record of reporting accuracy). Professional fact-checkers rely extensively on exogenous cues of trustworthiness: They spend very little time on the site itself, instead searching other sites in a process called “lateral reading” (Wineburg & McGrew, 2019). Users rely mainly on endogenous cues when evaluating the trustworthiness of content, which is often ineffective. People can be taught to use exogenous cues and engage in lateral reading. Identifying these cues requires further research using platform data (to select trustworthiness indicators) and behavioral experimentation (to verify that they are useful). None
  • Google (Ads)

  • Meta (Facebook, Instagram, WhatsApp, Messenger – other tools exist)

  • Microsoft (Ads)

  • Google (Search, YouTube = all but 22.7)

  • Microsoft (LinkedIn, Bing) = all but 22.1, 22.2, 22.3, and 22.7

  • TikTok = all but 22.7 – measures are unnecessary

  • Twitter (all but 22.7)

23. Flagging functionalities Allow users to flag content Users can play an important role in correcting misinformation (Bode & Vraga, 2018; Pennycook & Rand, 2019)
  • Google (Search, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

24. Transparent appeal mechanism Permit transparent appeal when content is removed NA
  • Google (YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn)

  • TikTok

  • Twitter

  • Google (Ads, Search)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

  • Microsoft (Bing) – does not post user content

25. Messaging apps Introduce friction or labels to limit viral propagation Introducing friction can slow down the spread of misinformation (e.g., Fazio, 2020). Although such prompts are used by several major social media companies (e.g., Twitter and Meta), evidence of their effectiveness for messaging apps is lacking. Meta (WhatsApp, Messenger)
  • Google (Ads, Search, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (Ads, LinkedIn, Bing)

  • TikTok

  • Twitter

  • - not messaging apps

Empowering the research community (4 commitments)
26. Automated access to non-personal data Provide researchers and public access to data for research Researchers have identified ways for online platforms to contribute to the study of misinformation, specifying how “increased data access would enable researchers to perform studies on a broader scale, allow for improved characterization of misinformation in real-world contexts, and facilitate the testing of interventions to prevent the spread of misinformation.” (Pasquetto et al., 2020, p. 2).
  • Google (Search = all but 26.2, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

27. Governance structure for access to data Create governance structure for access to data
  • Google (Search, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

28. Cooperation Cooperate with researchers
  • Google (Search, YouTube)

  • Meta (Facebook, Instagram)

  • Microsoft (LinkedIn, Bing)

  • TikTok

  • Twitter

  • Google (Ads)

  • Meta (WhatsApp, Messenger)

  • Microsoft (Ads)

29. Conducting research Platforms continue research on disinformation and how to enhance public resilience against it Platforms conduct internal research but rarely share their findings and data in a transparent way. None
  • Google (Ads, Search, YouTube)

  • Meta (Facebook, Instagram, WhatsApp, Messenger)

  • Microsoft (Ads, LinkedIn, Bing)

  • TikTok

  • Twitter

  • - not research organizations

Notes.

1

This table includes 29 out of 44 commitments. The excluded 15 commitments refer to measures aimed at empowering the fact-checking community and implementing the Code and are therefore less relevant for cognitive science. Signatories of the Code listed at: https://digital-strategy.ec.europa.eu/en/library/signatories-2022-strengthened-code-practice-disinformation.

2. Unless otherwise specified (in italics), reasons for not subscribing to commitments are either not indicated by the signatories of the Code or indicated to be not applicable.

Conclusion

Former Google executives once called the internet and its applications “the world’s largest ungoverned space” (Schmidt & Cohen, 2013, p. 3). Meaningful legislation that protects citizens and institutions online must incorporate relevant evidence in an accessible and timely manner. However, many policymakers access evidence through procurement service providers, which traditionally do not provide insights from psychological science. Civil servants could therefore consider systematically including access to evidence from psychological science in their procurement contracts.

The regulatory and co-regulatory efforts in the EU are a milestone in creating online spaces that protect democracy and people’s best interests. Nevertheless, many problems and questions remain. First, achieving a balance of rights and interests while controlling the spread of misinformation is a difficult task. Some policy choices represent genuine dilemmas – for instance, pitting freedom of expression against the need to mitigate the harms of misinformation (Douek, 2021; Kozyreva et al., 2023). Second, policy making in a rapidly evolving online environment requires flexibility and constant updating. By the time legislation is published it might already be out of date. Third, platforms must be willing to cooperate in good faith. This has proven challenging so far, at least in part because the interests of platforms and the public are not always aligned (Lewandowsky & Kozyreva, 2022) and many large platforms do not provide researchers with adequate access to data (Pasquetto et al., 2020). To move forward, platforms, regulators, and researchers need to find a way to cooperate productively and act in a timely manner.

Supplementary Material

Table E1

Acknowledgments

We thank Deb Ain for editing the manuscript and Spela Vrtovec for providing research assistance.

Funding

The article was written as part of a Volkswagen Foundation grant to S. Lewandowsky (Initiative “Artificial Intelligence and the Society of the Future”). S. Lewandowsky also acknowledges financial support from the European Research Council (ERC Advanced Grant 101020961 PRODEMINFO) and the Humboldt Foundation through a research award.

Biographies

Anastasia Kozyreva is a research scientist at the Center for Adaptive Rationality (ARC) at the Max Planck Institute for Human Development in Berlin. She is a philosopher and a cognitive scientist working on the cognitive and ethical implications of digital technologies for society. Her current research focuses on misinformation policy, attitudes toward digital technologies, and cognitive interventions that could help counteract the rising challenges of false information and online manipulation.

Stephan Lewandowsky is a Professor of Cognitive Science at the University of Bristol. His research examines people’s memory, decision-making, and knowledge structures, with a particular emphasis on how people update information in memory. Stephan Lewandowsky’s most recent research examines the persistence of misinformation and the spread of “fake news” in society.

Laura Smillie leads the European Commission’s multi-annual research program Enlightenment 2.0. The program is identifying the drivers of political decision-making – exploring the extent to which facts, values, social relations, and the online environment affect political behavior and decision-making.

Footnotes

Conflict of Interest

The authors declare no competing interests.

1. However, even in the presence of established scientific consensus, the issue of accuracy and who gets to decide what is true can be extremely divisive, especially when the topic is highly politicized and polarized – as is the case for many topics rife with misinformation (e.g., climate change denial, COVID-19 anti-vaccination).

2. Recital 106 suggests that rules on codes of conduct under this Regulation could serve as a basis for existing self-regulatory efforts at the EU level, meaning that VLOPs could become co-regulated under the DSA when the Code becomes a Code of Conduct.

References

  1. Adelle C, Weiland S. Policy assessment: The state of the art. Impact Assessment and Project Appraisal. 2012;30(1):25–33. doi: 10.1080/14615517.2012.663256. [DOI] [Google Scholar]
  2. Allen J, Howland B, Mobius M, Rothschild D, Watts DJ. Evaluating the fake news problem at the scale of the information ecosystem. Science Advances. 2020;6(14) doi: 10.1126/sciadv.aay3539. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Altay S, Kleis Nielsen R, Fletcher R. Quantifying the “infodemic”: People turned to trustworthy news outlets during the 2020 Coronavirus pandemic. Journal of Quantitative Description: Digital Media. 2022;2 doi: 10.51685/jqd.2022.020. [DOI] [Google Scholar]
  4. Aslett K, Guess AM, Bonneau R, Nagler J, Tucker JA. News credibility labels have limited average effects on news diet quality and fail to reduce misperceptions. Science Advances. 2022;8(18) doi: 10.1126/sciadv.abl3844. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Basol M, Roozenbeek J, van der Linden S. Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition. 2020;3(1) doi: 10.5334/joc.91. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Binford MT, Wojdynski BW, Lee Y-I, Sun S, Briscoe A. Invisible transparency: Visual attention to disclosures and source recognition in Facebook political advertising. Journal of Information Technology & Politics. 2021;18(1):70–83. doi: 10.1080/19331681.2020.1805388. [DOI] [Google Scholar]
  7. Bode L, Vraga E. See something, say something: Correction of global health misinformation on social media. Health Communication. 2018;33(9):1131–1140. doi: 10.1080/10410236.2017.1331312. [DOI] [PubMed] [Google Scholar]
  8. Bode L, Vraga E. The Swiss cheese model for mitigating online misinformation. Bulletin of the Atomic Scientists. 2021;77(3):129–133. doi: 10.1080/00963402.2021.1912170. [DOI] [Google Scholar]
  9. Bradshaw S, Howard PN. The global disinformation order: 2019 global inventory of organised social media manipulation (Working paper 2019.2) Project on Computational Propaganda; 2019. https://demtech.oii.ox.ac.uk/research/posts/the-global-disinformation-order-2019-global-inventory-of-organised-social-media-manipulation/ [Google Scholar]
  10. Brady WJ, Wills JA, Jost JT, Tucker JA, Van Bavel JJ. Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences. 2017;114(28):7313–7318. doi: 10.1073/pnas.1618923114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Brashier NM, Pennycook G, Berinsky AJ, Rand DG. Timing matters when correcting fake news. Proceedings of the National Academy of Sciences. 2021;118(5):e2020043118. doi: 10.1073/pnas.2020043118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Breakstone J, Smith M, Connors P, Ortega T, Kerr D, Wineburg S. Lateral reading: College students learn to critically evaluate internet sources in an online course. Harvard Kennedy School Misinformation Review. 2021;2(1):1–17. doi: 10.37016/mr-2020-56. [DOI] [Google Scholar]
  13. Broughton Micova S. What is the harm in size? Very large online platforms in the Digital Services Act. Centre on Regulation in Europe (CERRE); 2021. https://cerre.eu/wp-content/uploads/2021/10/211019_CERRE_IP_What-is-the-harm-in-size_FINAL2.pdf . [Google Scholar]
  14. Bursztyn L, Rao A, Roth CP, Yanagizawa-Drott DH. Misinformation during a pandemic (NBER working paper 27417) National Bureau of Economic Research; 2020. [DOI] [Google Scholar]
  15. Carrasco-Farré C. The fingerprints of misinformation: How deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions. Humanities and Social Sciences Communications. 2022;9:162. doi: 10.1057/s41599-022-01174-9. [DOI] [Google Scholar]
  16. Caulfield M. Web literacy for student fact-checkers. 2017. https://digitalcommons.liberty.edu/textbooks/5 .
  17. Chater N, Loewenstein G. The i-frame and the s-frame: How focusing on individual-level solutions has led behavioral public policy astray. Behavioral and Brain Sciences. 2022:1–60. doi: 10.1017/S0140525X22002023. Advance online publication. [DOI] [PubMed] [Google Scholar]
  18. CIGI-Ipsos. 2019 CIGI-Ipsos global survey on internet security and trust. Centre for International Governance Innovation; 2019. https://www.cigionline.org/internet-survey-2019 . [Google Scholar]
  19. Cook J, Lewandowsky S, Ecker UKH. Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One. 2017;12(5):e0175799. doi: 10.1371/journal.pone.0175799. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Crockett MJ. Moral outrage in the digital age. Nature Human Behaviour. 2017;1(11):769–771. doi: 10.1038/s41562-017-0213-3. [DOI] [PubMed] [Google Scholar]
  21. Directorate-General Communication. Special Eurobarometer 507: Democracy in the EU. European Commission; 2021. https://europa.eu/eurobarometer/surveys/detail/2263 . [Google Scholar]
  22. Dobber T, Kruikemeier S, Goodman EP, Helberger N, Minihold S. Assumed effectiveness of labels is not evidence-based (Working paper) 2021 September 30; https://www.uva-icds.net/wp-content/uploads/2021/03/Summary-transpar-ency-discloures-experiment_update.pdf .
  23. Douek E. Governing online speech: From “posts-as-trumps” to proportionality and probability. Columbia Law Review. 2021;121(3):759–834. https://columbialawreview.org/content/governing-online-speech-from-posts-as-trumps-to-proportionality-and-probability/ [Google Scholar]
  24. Drobnic Holan A. The principles of the Truth-O-Meter: PolitiFact’s methodology for independent fact-checking. PolitiFact; 2018. Feb 12, https://www.politifact.com/article/2018/feb/12/principles-truth-o-meter-politifacts-methodology-i/ [Google Scholar]
  25. Ecker UKH, Lewandowsky S, Cook J, Schmid P, Fazio LK, Brashier N, Kendeou P, Vraga EK, Amazeen MA. The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology. 2022;1(1):13–29. doi: 10.1038/s44159-021-00006-y. [DOI] [Google Scholar]
  26. Ecker UK, Lewandowsky S, Tang DT. Explicit warnings reduce but do not eliminate the continued influence of misinformation. Memory & Cognition. 2010;38(8):1087–1100. doi: 10.3758/MC.38.8.1087. [DOI] [PubMed] [Google Scholar]
  27. Elliott J, Lawrence R, Minx JC, Oladapo OT, Ravaud P, Tendal Jeppesen B, Thomas J, Turner T, Vandvik PO, Grimshaw JM. Decision makers need constantly updated evidence synthesis. Nature. 2021;600(7889):383–385. doi: 10.1038/d41586-021-03690-1. [DOI] [PubMed] [Google Scholar]
  28. European Commission. Impact assessments: The need for impact assessments. https://ec.europa.eu/info/law/law-making-process/planning-and-proposing-law/impact-assessments_en .
  29. European Commission. Joint Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Tackling COVID-19 disinformation-Getting the facts right (JOIN/2020/8 final) 2020a. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020JC0008&qid=1647942789060 .
  30. European Commission. Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC (COM/2020/825 final) 2020b. https://eur-lex.europa.eu/legal-content/en/TXT/?qid=1608117147218&uri=COM%3A2020%3A825%3AFIN .
  31. European Commission. Communication from the Commission to the European Parliament, the Council, the European and Social Committee and the Committee of Regions: Better Regulation-Joining forces to make better laws (COM/2021/219 final) 2021a. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2021:219:FIN .
  32. European Commission. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: European Commission Guidance on strengthening the Code of Practice on Disinformation (COM (2021) 262 final) 2021b. https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A52021DC0262 .
  33. European Commission. Proposal for a regulation of the European Parliament and of the Council on the transparency and targeting of political advertising (COM(2021) 731 final) 2021c. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12826-Political-advertising-improving-transparency_en .
  34. European Commission. Eurobarometer: Europeans show support for digital principles. 2021d. Dec 6, https://ec.europa.eu/commission/presscorner/detail/en/IP_21_6462 .
  35. European Commission. 2022 strengthened Code of Practice on Disinformation. 2022a. https://digital-strategy.ec.europa.eu/en/library/2022-strengthened-code-practice-disinformation .
  36. European Commission. Tackling online disinformation. 2022b. Jun 29, https://digital-strategy.ec.europa.eu/en/policies/online-disinformation .
  37. European Parliament. Corrigendum to the position of the European Parliament adopted at first reading on 5 July 2022 with a view to the adoption of Regulation (EU) 2022/ of the European Parliament and of the Council on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) (P9_TA(2022)0269 (COM(2020)0825-C9-0418/2020-2020/0361(COD))) 2022. Sep 7, https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/IMCO/DV/2022/09-12/p3-2020_0361COR01_EN.pdf .
  38. Farrell H, Schneier B. Common-knowledge attacks on democracy. SSRN. 2018 doi: 10.2139/ssrn.3273111. [DOI] [Google Scholar]
  39. Fazio L. Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harvard Kennedy School Misinformation Review. 2020;1(2) doi: 10.37016/mr-2020-009. [DOI] [Google Scholar]
  40. Fung A. Infotopia: Unleashing the democratic power of transparency. Politics & Society. 2013;41(2):183–212. doi: 10.1177/0032329213483107. [DOI] [Google Scholar]
  41. Funke D, Flamini D. A guide to anti-misinformation actions around the world. Poynter; https://www.poynter.org/ifcn/anti-misinformation-actions/ [Google Scholar]
  42. Global Disinformation Index staff. The quarter billion dollar question: How is disinformation gaming ad tech? Global Disinformation Index; 2019. Sep, https://disinformationindex.org/wp-content/uploads/2019/09/GDI_Ad-tech_Report_Screen_AW16.pdf .
  43. Gluckman PD, Bardsley A, Kaiser M. Brokerage at the science-policy interface: From conceptual framework to practical guidance. Humanities and Social Sciences Communications. 2021;8(1):84. doi: 10.1057/s41599-021-00756-3. [DOI] [Google Scholar]
  44. Gorwa R, Binns R, Katzenbach C. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society. 2020;7(1):2053951719897945. doi: 10.1177/2053951719897945. [DOI] [Google Scholar]
  45. Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D. Fake news on Twitter during the 2016 US presidential election. Science. 2019;363(6425):374–378. doi: 10.1126/science.aau2706. [DOI] [PubMed] [Google Scholar]
  46. Guess AM, Lerner M, Lyons B, Montgomery JM, Nyhan B, Reifler J, Sircar N. A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences. 2020;117(27):15536–15545. doi: 10.1073/pnas.1920498117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Guess A, Nagler J, Tucker J. Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances. 2019;5(1):eaau4586. doi: 10.1126/sciadv.aau4586. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Helberger N. The political power of platforms: How current attempts to regulate misinformation amplify opinion power. Digital Journalism. 2020;8(6):842–854. doi: 10.1080/21670811.2020.1773888. [DOI] [Google Scholar]
  49. Hoornbeek JA, Peters BG. Understanding policy problems: A refinement of past work. Policy and Society. 2017;36(3):365–384. doi: 10.1080/14494035.2017.1361631. [DOI] [Google Scholar]
  50. Hinds J, Joinson AN. What demographic attributes do our digital footprints reveal? A systematic review. PLoS One. 2018;13(11):e0207112. doi: 10.1371/journal.pone.0207112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. IJzerman H, Lewis NA, Jr, Przybylski AK, Weinstein N, DeBruine L, Ritchie SJ, Vazire S, Forscher PS, Morey RD, Ivory JD, Anvari F. Use caution when applying behavioural science to policy. Nature Human Behaviour. 2020;4(11):1092–1094. doi: 10.1038/s41562-020-00990-w. [DOI] [PubMed] [Google Scholar]
  52. Imhoff R, Zimmer F, Klein O, António JHC, Babinska M, Bangerter A, Bilewicz M, Blanuša N, Bovan K, Bužarovska R, Delouvée S, Cichocka A, et al. Conspiracy mentality and political orientation across 26 countries. Nature Human Behaviour. 2022;6(3):392–403. doi: 10.1038/s41562-021-01258-7. [DOI] [PubMed] [Google Scholar]
  53. Jacobson GC. Driven to extremes: Donald Trump’s extraordinary impact on the 2020 elections. Presidential Studies Quarterly. 2021;51(3):492–521. doi: 10.1111/psq.12724. [DOI] [Google Scholar]
  54. Jylhä KM, Stanley SK, Ojala M, Clarke EJR. Science denial: A narrative review and recommendations for future research and practice. European Psychologist. 2023;28(3):151–161. doi: 10.1027/1016-9040/a000487. [DOI] [Google Scholar]
  55. Kosinski M, Stillwell D, Graepel T. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences. 2013;110(15):5802–5805. doi: 10.1073/pnas.1218772110. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Kozyreva A, Herzog SM, Lewandowsky S, Hertwig R, Lorenz-Spreen P, Leiser M, Reifler J. Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences of the United States of America. 2023;120(7):e2210666120. doi: 10.1073/pnas.2210666120. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Kozyreva A, Lewandowsky S, Hertwig R. Citizens versus the Internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest. 2020;21(3):103–156. doi: 10.1177/1529100620946707. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Kozyreva A, Lorenz-Spreen P, Hertwig R, Lewandowsky S, Herzog SM. Public attitudes towards algorithmic personalization and use of personal data online: Evidence from Germany, Great Britain, and the United States. Humanities and Social Sciences Communications. 2021;8:117. doi: 10.1057/s41599-021-00787-w. [DOI] [Google Scholar]
  59. Kozyreva A, Lorenz-Spreen P, Herzog SM, Ecker UKH, Lewandowsky S, Hertwig R, Basol M, Berinsky AJ, Betsch C, Cook J, Fazio LK, et al. Toolbox of interventions against online misinformation and manipulation. PsyArXiv. 2022 doi: 10.31234/osf.io/x8ejt. [DOI] [Google Scholar]
  60. Kozyreva A, Wineburg S, Lewandowsky S, Hertwig R. Critical ignoring as a core competence for digital citizens. Current Directions in Psychological Science. 2022;32(1):81–88. doi: 10.1177/09637214221121570. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Lazer DMJ, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, Schudson M, et al. The science of fake news. Science. 2018;359(6380):1094–1096. doi: 10.1126/science.aao2998. [DOI] [PubMed] [Google Scholar]
  62. Leuker C, Eggeling LM, Fleischhut N, Gubernath J, Gumenik K, Hechtlinger S, Kozyreva A, Samaan L, Hertwig R. Misinformation in Germany during the COVID-19 pandemic: A cross-sectional survey on citizens’ perceptions and individual differences in the belief in false information. European Journal of Health Communication. 2022;3(2):13–39. doi: 10.47368/ejhc.2022.202. [DOI] [Google Scholar]
  63. Lewandowsky S, Cook J, Ecker UKH. Under the hood of The Debunking Handbook 2020: A consensus-based handbook of recommendations for correcting or preventing misinformation. Center for Climate Change Communication; 2020. https://www.climatechangecommunication.org/wp-content/uploads/2020/10/DB2020paper.pdf . [Google Scholar]
  64. Lewandowsky S, Cook J, Ecker UKH, Albarracín D, Amazeen MA, Kendeou P, Lombardi D, Newman EJ, Pennycook G, Porter E, Rand DG, et al. The debunking handbook 2020. 2020 doi: 10.17910/b7.1182. [DOI] [Google Scholar]
  65. Lewandowsky S, Kozyreva A. Algorithms, lies, and social media. Open Mind; 2022. Mar 24, https://www.openmindmag.org/articles/algorithms-lies-and-social-media . [Google Scholar]
  66. Lewandowsky S, Pomerantsev P. Technology and democracy: A paradox wrapped in a contradiction inside an irony. Memory, Mind & Media. 2022;1:e5. doi: 10.1017/mem.2021.7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Lewandowsky S, Risbey JS, Smithson M, Newell BR. Scientific uncertainty and climate change: Part II. Uncertainty and mitigation. Climatic Change. 2014;124(1-2):39–52. doi: 10.1007/s10584-014-1083-6. [DOI] [Google Scholar]
  68. Lewandowsky S, Risbey JS, Smithson M, Newell BR, Hunter J. Scientific uncertainty and climate change: Part I. Uncertainty and unabated emissions. Climatic Change. 2014;124(1-2):21–37. doi: 10.1007/s10584-014-1082-7. [DOI] [Google Scholar]
  69. Lewandowsky S, Smillie L, Garcia D, Hertwig R, Weatherall J, Egidy S, Robertson RE, O’Connor C, Kozyreva A, Lorenz-Spreen P, Blaschke Y, et al. Technology and democracy: Understanding the influence of online technologies on political behaviour and decision-making. Publications Office of the European Union; 2020. [DOI] [Google Scholar]
  70. Lewandowsky S, van der Linden S. Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology. 2021;32(2):348–384. doi: 10.1080/10463283.2021.1876983. [DOI] [Google Scholar]
  71. Loomba S, de Figueiredo A, Piatek SJ, de Graaf K, Larson HJ. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour. 2021;5(3):337–348. doi: 10.1038/s41562-021-01056-1. [DOI] [PubMed] [Google Scholar]
  72. Lorenz-Spreen P, Lewandowsky S, Sunstein CR, Hertwig R. How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour. 2020;4(11):1102–1109. doi: 10.1038/s41562-020-0889-7. [DOI] [PubMed] [Google Scholar]
  73. Lorenz-Spreen P, Oswald L, Lewandowsky S, Hertwig R. Digital media and democracy: A systematic review of causal and correlational evidence worldwide. Nature Human Behaviour. 2022;7(1):74–101. doi: 10.1038/s41562-022-01460-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Madsen JK, Bailey RM, Pilditch TD. Large networks of rational agents form persistent echo chambers. Scientific Reports. 2018;8:12391. doi: 10.1038/s41598-018-25558-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Marsden C, Meyer T, Brown I. Platform values and democratic elections: How can the law regulate digital disinformation? Computer Law & Security Review. 2020;36:105373. doi: 10.1016/j.clsr.2019.105373. [DOI] [Google Scholar]
  76. Meta. Rating options for fact-checkers. 2022. Sep 9, https://www.facebook.com/business/help/341102040382165?id=673052479947730 .
  77. Mozilla. YouTube Regrets: A crowdsourced investigation into YouTube’s recommendation algorithm. 2021. Jul, https://assets.mofoprod.net/network/documents/Mozilla_YouTube_Regrets_Report.pdf .
  78. Newman N, Fletcher R, Robertson CT, Eddy K, Kleis Nielsen R. Reuters Institute digital news report 2022. 2022. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2022-06/Digital_News-Report_2022.pdf .
  79. Osborne J, Pimentel D, Alberts B, Allchin D, Barzilai S, Bergstrom C, Coffey J, Donovan B, Kivinen K, Kozyreva A, Wineburg S. Science education in an age of misinformation. Stanford University; 2022. https://sciedandmisinfo.stanford.edu . [Google Scholar]
  80. Pasquetto IV, Swire-Thompson B, Amazeen MA, Benevenuto F, Brashier NM, Bond RM, Bozarth LC, Budak C, Ecker UKH, Fazio LK, Ferrara E, et al. Tackling misinformation: What researchers could do with social media data. Harvard Kennedy School Misinformation Review. 2020;1(8) https://misinforeview.hks.harvard.edu/article/tackling-misinformation-what-researchers-could-do-with-social-media-data/ [Google Scholar]
  81. Pennycook G, Bear A, Collins ET, Rand DG. The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science. 2020;66(11):4944–4957. doi: 10.1287/mnsc.2019.3478. [DOI] [Google Scholar]
  82. Pennycook G, Binnendyk J, Newton C, Rand DG. A practical guide to doing behavioural research on fake news and misinformation. Collabra: Psychology. 2021;7(1):25293. doi: 10.1525/collabra.25293. [DOI] [Google Scholar]
  83. Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand DG. Shifting attention to accuracy can reduce misinformation online. Nature. 2021;592(7855):590–595. doi: 10.1038/s41586-021-03344-2. [DOI] [PubMed] [Google Scholar]
  84. Pennycook G, Rand DG. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences. 2019;116(7):2521–2526. doi: 10.1073/pnas.1806781116. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Pennycook G, Rand DG. The psychology of fake news. Trends in Cognitive Sciences. 2021;25(5):388–402. doi: 10.1016/j.tics.2021.02.007. [DOI] [PubMed] [Google Scholar]
  86. Poushter J, Fagan M, Gubbala S. Climate change remains top global threat across 19-country survey. Pew Research Center; 2022. Aug 31, https://www.pewresearch.org/global/2022/08/31/climate-change-remains-top-global-threat-across-19-country-survey/ [Google Scholar]
  87. Rasmussen J. When do the public support hate speech restrictions? Symmetries and asymmetries across partisans in Denmark and the United States. PsyArXiv. 2022 doi: 10.31234/osf.io/j4nuc. [DOI] [Google Scholar]
  88. Rode JB, Dent AL, Benedict CN, Brosnahan DB, Martinez RL, Ditto PH. Influencing climate change attitudes in the United States: A systematic review and meta-analysis. Journal of Environmental Psychology. 2021;76:101623. doi: 10.1016/j.jenvp.2021.101623. [DOI] [Google Scholar]
  89. Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman ALJ, Recchia G, van der Bles AM, van der Linden S. Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science. 2020;7(10):201199. doi: 10.1098/rsos.201199. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Roozenbeek J, Suiter J, Culloty E. Countering misinformation: Evidence, knowledge gaps, and implications of current interventions. European Psychologist. 2023;28(3):189–205. doi: 10.1027/1016-9040/a000492. [DOI] [Google Scholar]
  91. Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Communications. 2019;5(1):65. doi: 10.1057/s41599-019-0279-9. [DOI] [Google Scholar]
  92. Roozenbeek J, van der Linden S, Goldberg B, Rathje S, Lewandowsky S. Psychological inoculation improves resilience against misinformation on social media. Science Advances. 2022;8(34):eabo6254. doi: 10.1126/sciadv.abo6254. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Scharfbillig M, Smillie L, Mair D, Sienkiewicz M, Keimer J, Pinho Dos Santos R, Vinagreiro Alves H, Vecchione E, Scheunemann L. Values and identities: A policymaker’s guide. Publications Office of the European Union; 2021. [DOI] [Google Scholar]
  94. Schmid P, Altay S, Scherer L. The psychological impacts and message features of health misinformation: A systematic review of randomized controlled trials. European Psychologist. 2023;28(3):162–172. doi: 10.1027/1016-9040/a000494. [DOI] [Google Scholar]
  95. Schmidt E, Cohen J. The new digital age: Reshaping the future of people, nations and business. John Murray; 2013. [Google Scholar]
  96. Simonov A, Sacher SK, Dubé J-PH, Biswas S. The persuasive effect of Fox News: Non-compliance with social distancing during the COVID-19 pandemic (Working paper 27237) National Bureau of Economic Research; 2020. [DOI] [Google Scholar]
  97. Sunstein CR. Liars: Falsehoods and free speech in an age of deception. Oxford University Press; 2021. [DOI] [Google Scholar]
  98. The Economist. Censorious governments are abusing “fake news” laws. 2021. Feb 13, https://www.economist.com/international/2021/02/11/censorious-governments-are-abusing-fake-news-laws .
  99. Topp L, Mair D, Smillie L, Cairney P. Knowledge management for policy impact: The case of the European Commission’s Joint Research Centre. Palgrave Communications. 2018;4(1):87. doi: 10.1057/s41599-018-0143-3. [DOI] [Google Scholar]
  100. Tran T, Valecha R, Rad P, Rao HR. In: Secure knowledge management in artificial intelligence era. Sahay S, Goel N, Patil V, Jadliwala M, editors. Springer; 2020. An investigation of misinformation harms related to social media during humanitarian crises; pp. 167–181. [DOI] [Google Scholar]
  101. Twitter. How we address misinformation on Twitter. https://help.twitter.com/en/resources/addressing-misleading-info .
  102. Twitter Safety. How Twitter is nudging users to have healthier conversations. 2022. Jun 1, [accessed 13.09.2022]. https://blog.twitter.com/common-thread/en/topics/stories/2022/how-twitter-is-nudging-users-healthier-conversations .
  103. Van Bavel JJ, Harris EA, Pärnamets P, Rathje S, Doell KC, Tucker JA. Political psychology in the digital (mis)information age: A model of news belief and sharing. Social Issues and Policy Review. 2021;15(1):84–113. doi: 10.1111/sipr.12077. [DOI] [Google Scholar]
  104. van der Linden S, Leiserowitz A, Rosenthal S, Maibach E. Inoculating the public against misinformation about climate change. Global Challenges. 2017;1(2):1600008. doi: 10.1002/gch2.201600008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. van der Linden S, Panagopoulos C, Azevedo F, Jost JT. The paranoid style in American politics revisited: An ideological asymmetry in conspiratorial thinking. Political Psychology. 2020;42(1):23–51. doi: 10.1111/pops.12681. [DOI] [Google Scholar]
  106. Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359(6380):1146–1151. doi: 10.1126/science.aap9559. [DOI] [PubMed] [Google Scholar]
  107. Vraga EK, Bode L. Using expert sources to correct health misinformation in social media. Science Communication. 2017;39(5):621–645. doi: 10.1177/1075547017731776. [DOI] [Google Scholar]
  108. Wardle C, Derakhshan H. Information disorder: Toward an interdisciplinary framework for research and policy making (Report DGI(2017)09) Council of Europe; 2017. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c . [Google Scholar]
  109. Wineburg S, Breakstone J, McGrew S, Smith MD, Ortega T. Lateral reading on the open Internet. A district-wide field study in high school government classes. Journal of Educational Psychology. 2022;114(5):893–909. doi: 10.1037/edu0000740. [DOI] [Google Scholar]
  110. Wineburg S, McGrew S. Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record. 2019;121(11) doi: 10.1177/016146811912101102. [DOI] [Google Scholar]
  111. Zmigrod L, Burnelli R, Hameleers M. The misinformation receptivity framework: Political misinformation and disinformation as cognitive Bayesian inference problems. European Psychologist. 2023;28(3):173–188. doi: 10.1027/1016-9040/a000498. [DOI] [Google Scholar]
