PLOS One. 2022 Jan 7;17(1):e0261811. doi: 10.1371/journal.pone.0261811

Cognitive cascades: How to model (and potentially counter) the spread of fake news

Nicholas Rabb 1,*, Lenore Cowen 1, Jan P de Ruiter 1,2, Matthias Scheutz 1
Editor: Marco Cremonini
PMCID: PMC8740964  PMID: 34995299

Abstract

Understanding the spread of false or dangerous beliefs—often called misinformation or disinformation—through a population has never seemed so urgent. Network science researchers have often taken a page from epidemiologists, and modeled the spread of false beliefs as similar to how a disease spreads through a social network. However, absent from those disease-inspired models is an internal model of an individual’s set of current beliefs, where cognitive science has increasingly documented how the interaction between mental models and incoming messages seems to be crucially important for their adoption or rejection. Some computational social science modelers analyze agent-based models where individuals do have simulated cognition, but these often lack the strengths of network science, namely empirically-driven network structures. We introduce a cognitive cascade model that combines a network science belief cascade approach with an internal cognitive model of the individual agents, as in opinion diffusion models, into a public opinion diffusion (POD) model, adding media institutions as agents which begin opinion cascades. We show that the model, even with a very simplistic belief function to capture cognitive effects cited in disinformation studies (dissonance and exposure), adds expressive power over existing cascade models. We conduct an analysis of the cognitive cascade model with our simple cognitive function across various graph topologies and institutional messaging patterns. We argue from our results that population-level aggregate outcomes of the model qualitatively match what has been reported in COVID-related public opinion polls, and that the model dynamics lend insight into how to address the spread of problematic beliefs.
The overall model sets up a framework with which social science misinformation researchers and computational opinion diffusion modelers can join forces to understand, and hopefully learn how to best counter, the spread of disinformation and “alternative facts.”

Introduction

Understanding the spread of false or dangerous beliefs through a population has never seemed so urgent. In our modern, highly networked world, societies have been grappling with widespread belief in conspiracies [1–5], increased political polarization [6–9], and distrust in scientific findings [10, 11]. Some of the most prominent in our times are conspiracies surrounding COVID-19, and starkly polarized distributions of beliefs regarding scientifically-motivated safety measures.

Throughout the course of the pandemic, much effort has been spent trying to understand why, in the face of a global pandemic, so many believed COVID-19 was a hoax, a targeted political attack, or caused by 5G cell towers, or that it was simply not dangerous and did not justify wearing a protective mask [1, 3–5, 10, 12–14]. Understanding the spread of misinformation requires a way of modeling and understanding both how this misinformation spreads in a population, and also why some individuals are more or less vulnerable.

This paper applies a class of models which capture the spread of ideas, innovations, culture, and more [15] to the spread of misinformation—including components from both social network science and cognitive science. While social network science and cognitive science have both sought to contribute to the understanding of the mechanisms that govern individuals’ acquisition and updates of beliefs, each has traditionally focused on different pieces of this puzzle. Social network science has provided interesting insights by applying techniques originally developed to model the spread of disease to modeling the spread of misinformation [2, 16–19]. Modern psychological and cognitive science has focused on the relationship of suggested beliefs to an individual’s current set of beliefs, and how this influences an individual’s likelihood of updating their beliefs in the face of confirmatory or contradictory information [3, 13, 14, 20–23]. We call our models cognitive cascade models—those that adopt a cascading network-based diffusion model from social network science [2, 24, 25], but include a more individually differentiated model of belief update and adoption that is informed by cognitive science as in cognitive contagion models [26–28]. We show that even very simple versions of cognitive cascades result in interesting network dynamics that seem to represent some of the real-world phenomena that were seen in pandemic misinformation.

Misinformation beliefs, once adopted, are so difficult to dislodge that over the course of 2020, after forming an initial opinion, the proportion of those who did not believe in the virus hardly changed [29]. In the U.S., reports circulated of nurses in states with few regulations, like South Dakota, describing patients who were dying of COVID while refusing to believe they had it. One nurse was quoted saying, “They tell you there must be another reason they are sick. They call you names and ask why you have to wear all that ‘stuff’ because they don’t have COVID and it’s not real” [30].

Modern psychological and cognitive science have made a substantive contribution by making a clear distinction between beliefs that update in accordance to the evidence one receives, and others which persist despite clear, logical contrary evidence [22, 31, 32]. This is the distinction that appears key to understanding mechanisms governing belief in misinformation. In fact, there is evidence that those who engage in manufacturing misinformation exploit this research to make their messages more potent. Wiley [33] has recently revealed that some polarized, partisan beliefs and conspiracies have been designed to persist despite contrary evidence.

While the study of individual beliefs has recently been advancing, attempting to determine how beliefs, true or otherwise, spread through an entire population adds yet another layer of complexity. Sociologists and political theorists have long studied public opinion: the theoretical mechanisms by which populations come to certain beliefs, which are notoriously difficult to verify empirically [34–36]. However, with the advent of social network research, scholars are moving toward that goal, empirically studying how information cascades through groups [2, 24, 37–39], leading to the wide-scale adoption of certain beliefs [40], and motivating theoretical models of networked opinion dynamics [25, 41–45] based on classic sociological theories [46–48]. Other lines of research develop algorithms on top of those networked opinion models which optimally manipulate network-wide measures like polarization [49], opinion difference [50], or susceptibility to persuasion [51]. This work is complemented by media studies which theorize about and test mechanisms behind polarization and selective exposure stemming from dissonance pressures [52–57], as well as the role of large media institutions in shaping public opinion [1, 5, 58, 59]. Our work seeks to take one further step: understanding how an integrated model combining individual cognitive belief models with social network and media dynamics might be employed to study how public opinion shifts.

Our work applies a class of Agent-Based Models (ABMs), called Agent Based Social Systems (ABSS) [60, 61]—more specifically, social contagion models [25]—to the study of network cascades, which measure the number of people a given story spreads to via sharing [2]. ABM is a powerful modeling paradigm that has both successes and future potential in a variety of areas, including animal behavior [46, 62–65], social sciences [66, 67], and, notably, opinion dynamics [25, 44, 68, 69].

By combining social contagion paradigms with some of the cognitive literature regarding misinformation, we propose a cognitive cascade model that captures the spread of identity-related beliefs. This model is tuned to capture, at the individual agent level, two major effects cited in the misinformation literature: dissonance [20] and exposure [10, 70], yielding what we call defensive cognitive contagion (DCC).

We then situate agents following different classes of contagion rules in a cascade model that includes institutional agents who begin the cascades by injecting messages into the network. Here, we define cascades as the spread of messages via sharing through the population, initiated by institutional agents. Where we say contagion, we mean the spread of a belief between two agents, as opposed to cascades, which describes multiple contagion events. We call this our public opinion diffusion (POD) model. This model allows media companies, who have played crucial roles in COVID misinformation [1, 5, 11], to be included in the study of opinion dynamics.

Through a simple cognitive function defined at the level of individual agents, our cognitive cascade model appears more expressive and ecologically valid than both cascade models that follow simple or complex contagion rules, and contagion models that do not simulate institution-driven cascades. Moreover, the motivations behind the DCC function demonstrate that simple and complex contagion rules cannot capture identity-related belief spread between individuals. Given findings from COVID misinformation studies, results from our POD model with DCC appear to align with population-level results reported by U.S. opinion polls—namely that beliefs about the virus remained starkly partisan [6–9], and hardly changed throughout 2020 [29]. This article is grounded in our experience with U.S. patterns of information, and the cases we cite are from the U.S. We note, however, that misinformation has been on the rise worldwide, and may in fact cross national boundaries as information is accessed on the Internet [71]. The commonalities and differences, the extent to which models of the U.S. media ecosystem may generalize to different sociopolitical structures, and how models should be contextualized for different environments are left as subjects of future research. These preliminary results, which are in alignment with other similar emerging studies [72], hint at possible interventions and offer plenty of opportunities for future studies to be conducted using these methods.

Background

Social contagion

ABMs have been widely used to model social contagion effects—those which describe the process of ideas or beliefs spreading through a population [24]. Such models attempt to explain possible processes underlying the spread of innovations [15, 73, 74], culture and ideology [27, 28, 72, 75–77], or unpopular norms [68]. These models are extensions of earlier work in sociology that theorized how social network structure and simple decisions, such as the threshold effect [47], may affect group-level behavior. With more abundant computational power, these ideas can now be simulated and their implications analyzed.

There are two popular types of social contagion models used in ABMs: simple contagion (also called independent cascade) and complex contagion (which has proportional and absolute variations). Both model the spread of behaviors, norms, or ideas through a population. For simplicity, we will refer to behaviors or norms as “beliefs” going forward, as it is plausible to argue that both are generated by beliefs that an individual holds, explicit or otherwise.

Simple contagion

The simple contagion model assumes that behaviors or norms can spread in a manner akin to a disease [24, 37, 38, 44]. Simply being connected to an individual who holds a belief engenders a probability, p, that the belief may spread to you. This can even be true given different belief strengths or polarities for the same proposition. More formally, given two nodes in the model u and v, with each having respective beliefs bu and bv at time t, when node u is playing out its decision process (i.e. u is the focal node and v is the node exposing u to a belief), the probability of adopting belief bv can be modeled as:

$$P(b_{u,(t+1)} = b_v \mid b_{u,t}) = p \qquad (1)$$

As opposed to many studies in innovation diffusion [15], our formulation of contagion assumes that every agent has a prior belief. Innovation diffusion studies often imagine that an individual has no prior opinion about a new idea until they are “infected” with it. However, as we will illustrate below, we model beliefs on a spectrum, including belief in, belief against, and uncertainty about a proposition. This departure from the epidemiological view of opinion diffusion allows us to argue for a prior, even if it is uncertainty.
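Eq (1) can be sketched in a few lines of Python (the paper's implementation is in NetLogo; this standalone function, its name, and the 0–6 belief scale used in the example are ours):

```python
import random

def simple_contagion_update(b_u, b_v, p):
    """Eq (1): the focal agent u adopts neighbor v's belief with a fixed
    probability p, regardless of u's prior belief b_u."""
    if random.random() < p:
        return b_v  # u is "infected" with v's belief strength
    return b_u      # u keeps its prior belief

# Even a strong disbeliever (0) can flip to strong belief (6) on a single
# exposure: Eq (1) permits this by construction, since p ignores the prior.
new_belief = simple_contagion_update(0, 6, 0.15)
```

Note that the prior `b_u` appears only in the fallback branch: this is exactly the property, criticized below, that makes simple contagion blind to an agent's existing beliefs.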

It is important to note that in belief contagion models, the probabilities assigned to adopting a new belief given a prior one are over an event set of only two outcomes: adopt the new belief, or keep the prior one. Thus, if there are several possible beliefs to adopt from a set B, the probabilities of adopting some bv given a prior of bu, over all values in B, do not necessarily sum to 1. Rather, because each agent interaction is only between two belief values, even if they both come from a larger set, the interaction is represented as a Bernoulli process. Each instance below in which we motivate probabilities of adopting a belief given a prior adheres to the same logic.

Of course, since beliefs are not actually transmitted through airborne pathogens that incite infected individuals to believe something, there are abundant sociological hypotheses as to why this phenomenon may appear infectious [24]. There are other contagion models that have put forward alternative explanations.

Complex contagion

Complex contagion instead imagines that the spread of beliefs is predominantly governed by a ratio of consensus among those to whom any agent is connected [25, 42]. There are two major variations of complex contagion: what is typically called proportional threshold contagion, and what we call absolute threshold contagion. Proportional threshold contagion specifies some proportion α of neighbors who must believe something for the ego agent u to believe it. Absolute threshold contagion, on the other hand, imagines some whole number η of neighbors who must believe something in order for the ego u to believe it [25, 47]. We choose to use the proportional threshold model for our examples and in-silico experiments. One of the most famous examples of this type of model, captured in a cellular automata paradigm, is Schelling’s segregation model [48]. Formally, given an ego u and set of neighbors N(u), the probability of adopting belief b can be represented as:

$$P(b_{u,(t+1)} = b \mid b_{u,t}) = \begin{cases} 1, & \frac{1}{|N(u)|} \sum_{v \in N(u)} d(v, b) \ge \alpha \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

where d(v, b) is a simple indicator function which returns 1 if bv = b—i.e. if neighbor v believes b—and where α ∈ [0, 1] is a threshold indicating the ratio of believing neighbors necessary for u to adopt the belief. In this model, the agent u is guaranteed to adopt b if a sufficient ratio of its neighbors believe b.

It may seem tempting to imagine that proportional threshold contagion could be reproduced by simple contagion, since repeated Bernoulli trials with probability p imply an expected number of exposures, and hence a “sufficient number of neighbors,” before adoption. However, that notion does not capture the ratio effect that proportional threshold contagion models. Proportional threshold contagion says nothing about the innate infectiousness of any belief, but rather about the infectiousness of the connections surrounding any agent.
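The ratio effect of Eq (2) can be made concrete with a short sketch (again in Python, with names of our choosing; the paper's implementation is in NetLogo):

```python
def proportional_threshold_update(b_u, neighbor_beliefs, b, alpha):
    """Eq (2): the ego adopts belief b exactly when the fraction of its
    neighbors holding b meets or exceeds the threshold alpha; the ego's
    own prior b_u plays no role in the decision."""
    ratio = sum(1 for b_v in neighbor_beliefs if b_v == b) / len(neighbor_beliefs)
    return b if ratio >= alpha else b_u

# Three of four neighbors hold belief 6, a ratio of 0.75:
adopted = proportional_threshold_update(0, [6, 6, 6, 2], b=6, alpha=0.5)
retained = proportional_threshold_update(0, [6, 6, 6, 2], b=6, alpha=0.8)
```

Here `adopted` is 6 (the 0.75 ratio clears α = 0.5) while `retained` stays 0 (0.75 falls short of α = 0.8), illustrating that adoption is driven by the surrounding connections rather than by any property of the belief itself.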

The focus on a “portion” of believing neighbors being required to propagate a belief spawned new questions and investigations. This type of belief contagion has been argued to explain why some norms may spread despite them being disagreed with on an individual level, such as collegiate drinking behavior [68]. It elegantly captures phenomena associated with group dynamics such as peer pressure. It also can be used to model diffusion of health information or technological innovations [78].

Cognitive contagion

Both of these contagion models can be generalized to allow heterogeneous sets of agents whose update rules differ by agent type (for example, more or less susceptible to infection). However, while these two popular types of contagion can effectively model some classes of belief contagion, other classes cannot be captured by their mechanisms—even with heterogeneous agents. Many simple and complex contagion models imagine only a few states of belief, heavily influenced by epidemiological models: susceptible, infected, and removed. Models in which an internal representation of what a given individual agent already believes dynamically affects which beliefs they spread and adopt cannot be described by either simple or proportional threshold contagion.

To address these problems, there exists a class of what Zhang & Vorobeychik [15] call “cognitive agent models.” Based on foundational work by Hegselmann & Krause [79], Deffuant et al. [80], DeGroot [43], and others, these models are often used to study group opinion dynamics when agents can influence each other in a more nuanced manner. Rather than simply allowing agents to be “infected” by a belief or not, these models often place belief in any given proposition on a continuous spectrum. Agents then influence each other through opinion diffusion processes that can be tailored to a given cognitive or social phenomenon. A generalization of the belief update process may be stated as:

$$P(b_{u,(t+1)} = b_v \mid b_{u,t}) = \beta(b_{u,t}, b_v), \qquad (3)$$

where β can implement a weighted update based on the similarity of the two agents’ beliefs [27, 77], do nothing if the beliefs are too far from each other [81], or be beholden to logical relations between beliefs [28]. Notably for our purposes, these models have been used to study polarization [27, 72, 75–77] and opinion dynamics given cognitive effects like dissonance reduction [81] or homophily [41].
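One illustrative choice of β for Eq (3) is a probability that decays with the distance between the two beliefs. To be clear, this exponential form and the function names are our own sketch for exposition, not the paper's DCC function (Eq 6, introduced later):

```python
import math
import random

def similarity_beta(b_u, b_v, scale=1.0):
    """An illustrative beta for Eq (3): the probability of adopting b_v
    decays exponentially with its distance from the prior b_u."""
    return math.exp(-scale * abs(b_u - b_v))

def cognitive_update(b_u, b_v, beta=similarity_beta):
    """Adopt b_v with probability beta(b_u, b_v); otherwise keep b_u."""
    return b_v if random.random() < beta(b_u, b_v) else b_u
```

Under this β, an agent with prior 6 adopts an identical incoming belief with probability 1, but an incoming belief of 0 with probability exp(−6) ≈ 0.002, which is the qualitative shape a dissonance-sensitive update rule should have.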

Cognitive cascade model

The goals of this work are twofold: (1) to apply empirically-grounded techniques from network and cognitive science to opinion diffusion models, and (2) to show that even the simplest resultant cognitive cascade model adds expressive power and leads to interesting dynamics of belief propagation that cannot arise in cascade models using the simple or proportional threshold contagion rules. To do so, we will lay out our cascade model and compare simulation results between using simple, proportional threshold, and cognitive contagion techniques.

Agent contagion function

First, we can lay out which contagion models should be used for the agent-to-agent interactions occurring during a cascade. If our micro interactions are grounded in literature surrounding misinformation belief, we can analyze the ensuing macro effects knowing the model is grounded soundly. In our simple example, we consider belief in a single proposition B (for example, B could be, “COVID is a hoax,” or, “mask-wearing does not help protect against spreading or contracting COVID”). In cognitive contagion models, as distinct from simple or proportional threshold contagion models, u’s probability of believing a message from v is influenced by an internal model of u’s beliefs. For our simple cognitive cascade model, we model this internal prior initially as a single variable bu, with −1 ≤ bu ≤ 1. Without loss of generality, we use -1 to indicate strong disbelief and 1 to indicate strong belief.

Theoretically, bu can be a continuous variable over the interval from strong disbelief to strong belief, or it can take on discrete values. Inspired by frequently used 7-point scales to convey belief strength in public opinion surveys (e.g. [6, 11, 82]), and justified by [83], we choose 7 discrete, equally spaced values for belief in B as follows: we represent the strength of the belief in proposition B with elements from the set $B = \{b \mid 0 \le b \le 6\}, b \in \mathbb{Z}$; 0 represents strong disbelief, 1 disbelief, 2 slight disbelief, 3 uncertainty, 4 slight belief, 5 belief, and 6 strong belief. We note that our framework allows other resolutions of belief strength, from the discrete to approaching continuous, so we also explore these alternative model choices with additional in-silico experiments at lower and higher “resolutions” of belief: with b able to take integer values between 0 and 2, 3, 5, 7, 9, 16, 32, and 64 to approach behavior over continuous beliefs. Results from those belief resolutions are shown in S9–S18 Figs. We find that the particular belief resolution of 7 was not critical, and that nearby values (e.g. 5, 9, 13, and 16) produced very similar network dynamics. On the other hand, as b approached a more continuous scale, we did not see exactly the same behavior, as some of our initial conditions changed in a way that made cascades more difficult. This implies that a realistic continuous model of beliefs might require different initial model parameters. More details can be found in S3 Text.

Importantly, this representation captures the polarity of the proposition as well: belief strength of the affirmative of B (if b ≥ 4), and the negation of B (if b ≤ 2). From here on, we will capture the idea of belief polarity—belief in or against a proposition B—by simply saying “belief strength.”

We include this cognitive model for an individual agent within a message-passing ABM: at each time step t, agents have the chance to receive messages, to believe them, and to share them with neighbors. We will further clarify the role of messages in spreading beliefs below, when we describe our diffusion model. But it should be noted upfront that, regardless of whether a belief arrives via a message or via simple network connection exposure, we can compare the beliefs of two agents, u and v, in the same way. We further note that cognitive science shows evidence that, for beliefs that are core to an individual’s identity (such as political or ideological beliefs), exposure to evidence that is too incongruous with an individual’s existing belief can cause individuals to disregard the evidence in an attempt to reduce cognitive dissonance [20]. Therefore, we later choose an update rule where an agent u is only likely to believe messages whose encoded belief values for proposition B are not too far from u’s prior beliefs.

As a simple example, this could be represented by a binary threshold function. Given an agent u with belief strength in B, bu, and an incoming belief from v with strength bv, the following equation could govern whether agent u updates its belief:

$$P(b_u = b_v \mid b_u) = \begin{cases} 1, & |b_u - b_v| \le \gamma \\ 0, & \text{otherwise} \end{cases} \qquad (4)$$

where γ is a distance threshold. Each agent has some existing belief strength in the proposition B, but will be unwilling to change their belief strength if a neighbor’s belief strength is too far from theirs. There are similar functions motivated in contagion models centering dissonance [77, 81], and some which weight positive or negative influence differently [76]. We chose to weight positive or negative influence equally in this example, and subsequent contagion functions, to simplify the model and make its results more easily analyzable. Perhaps an agent u who strongly believes the proposition (bu = 6) will not switch immediately to strongly disbelieving it without passing through an intermediary step of uncertainty. Given a neighbor v sharing belief bv = 0, agent u should not adopt this belief strength, because the difference in belief strengths is clearly greater than γ. Simple contagion would fall short because agent u may simply randomly become “infected” with belief strength 0 by v with some probability p. A proportional threshold contagion would similarly falter if agent u were entirely surrounded by alters with belief strength 0. It would inevitably switch belief strengths regardless of some threshold α as in Eq (2).
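The binary threshold rule of Eq (4) is simple enough to state directly in code (a sketch with our own function name, using the paper's 0–6 scale and γ = 1):

```python
def dissonance_threshold_update(b_u, b_v, gamma=1):
    """Eq (4): u adopts v's belief strength only if it lies within the
    distance threshold gamma of u's prior belief; otherwise the message
    is disregarded, modeling dissonance-driven rejection."""
    return b_v if abs(b_u - b_v) <= gamma else b_u

# A strong believer (6) rejects a strong disbeliever's value (0)...
assert dissonance_threshold_update(6, 0) == 6
# ...but adopts a nearby value (5), as in the text's example.
assert dissonance_threshold_update(6, 5) == 5
```

Contrast this with the two earlier rules: here the outcome depends only on the distance between the two beliefs, not on a global infection probability p or on the ratio of believing neighbors.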

As mentioned above, this manner of belief update could model the update of beliefs that are core to an individual’s identity, such as political or ideological beliefs [31, 84, 85]. Rather than updating based on evidence presented, exposure to evidence that is too incongruous with an individual’s existing belief may have no effect due to rationalization processes activated by cognitive dissonance [20].

In addition to the effects related to cognitive dissonance, there are other effects that have been reported to be involved in belief update. Pertinent to the misinformation literature, two that center the incoming belief itself are the illusory truth effect [23, 70] and the mere-exposure effect [86]. These effects emphasize that the number of exposures to a piece of information can motivate belief in it.

Regardless of the effect, it is clear that some social contagion processes cannot be captured without modeling some representation of an agent’s cognition. For these reasons, we will extend work done in cognitive contagion models—particularly those modeling dissonance [81] and selective exposure [27, 72]—to capture the above effects. In general, given an agent u with prior belief bu, the likelihood during its update step of changing its belief strength from bu to b can be captured by the cognitive contagion model in Eq 3. We graphically illustrate this process in Fig 1.

Fig 1. A graphical illustration of cognitive contagion.

Fig 1

An illustration of cognitive contagion with the DCC contagion function described in Eq 6. (a) (Top) Given an agent u with bu = 6 and v with bv = 0, the chance of contagion is < 0.001. (b) (Bottom) Given an agent u with bu = 6 and v with bv = 5, the chance of contagion is 0.982.

Because this equation is so general, there is a need to motivate a meaningful choice of β function, and analyze how its effects differ from simple and proportional threshold contagion. There are obviously many choices for such a function, but the key lies in the fact that it compares beliefs between two agents, rather than being driven by network structure or mere chance. Below, we will describe our process of choosing a β function in order to adequately model the misinformation effects that motivated our study.

Institutional cascade model

Of course, a cognitive contagion model could be implemented on top of many different ABMs, so we will describe one that captures the misinformation problem, and that we can use over multiple experiments to arrive at a cognitive cascade model suited to the problem. Our public opinion diffusion (POD) ABM will be designed to capture the effects of misinformation spread by media companies, since in the aforementioned COVID misinformation studies, media played a pivotal role in people’s belief or disbelief in safety protocols [82, 87, 88]. Moreover, in media studies, some scholars utilize frameworks such as Giddens’ [89] “theory of structuration”—which views media providers as macrolevel “structures” and individuals as microlevel “users” [52]—in order to model the ontological duality of media ecosystems.

Often, ABMs attempt to model so-called “levels” of social systems by grouping individual agents into a set—as in the case of a corporation being some hierarchical relation of individuals. Epstein persuasively argues that this individualistic method of modeling is not accurate, as, for example, a media organization’s behavior depends not only on behaviors of a hierarchy of individual employees, but also on social conditions, governmental bodies, and more [90]. The POD model addresses this by including institutional influence in the form of media agents. We argue that an institution is not just a collection of individual agents, but an entirely different ontological entity, with separate incentives and influences, that still captures elements of media scholars’ frameworks for media ecosystems. Though, as will be clear below, our media agents are highly simplified, and could be made more complex in future studies. This addition is what makes our model a cascading model as opposed to a pure contagion model. Because the spread of messages starts with an institution—analogous to a media company—our contagion closely resembles cascade models used for analyzing media ecosystems [2, 5, 59]. A visual description of the model is included in Fig 2.

Fig 2. A graphical illustration of one time step of the POD model.

Fig 2

In the left panel, (A) depicts the initial setup of a small network with institutional agent i1 with subscribers s1, s2, s3. All agents in the network are labeled with their belief strength. The right panel, (B) depicts one time step t = 0 of agent i1 sending messages M1(t = 0) = (m0, m1). (i) shows the initial sending of m0 = 4 to subscribers, and (ii) shows s1 and s3 believing the message and propagating it to their neighbors. (iii) and (iv) show the same for m1 = 3, but only s3 believes m1.

As previously mentioned, we will be using a message-passing ABM: at each time step t, agents have the chance to receive messages m from the set of all possible messages M—whose spread begins with media agents, which we call institutional agents—and to believe and share them with neighbors. We chose a message-passing model as opposed to the simple diffusion models often found in simple and proportional threshold contagion models because it allows us to capture the notion that beliefs are spread by explicit communication rather than simply by being connected to an agent.

Our model consists of N agents in a graph G = (V, E), where each agent’s initial belief strength bu, u ∈ V, is drawn from a uniform distribution over the set of possible belief strengths for proposition B, $B = \{b \mid 0 \le b \le 6\}, b \in \mathbb{Z}$. There is a separate set of institutional agents I—entirely different entities in the ontology of our model—which have directed edges to a set of “subscribers” S ⊆ V such that |bu − bi| ≤ ϵ for some parameter ϵ, u ∈ V, i ∈ I—i.e. an agent in the network will subscribe to an institution if its belief strength for B is sufficiently close to the belief strength of that institution. For all of our experiments, we will fix ϵ at 0. Institutional agents are designed to model media companies or notable public figures which begin the mass spread of ideas through the population. The belief strength of an institutional agent can be thought of as a perceived ideological “leaning” that would cause people with different prior beliefs to trust different media organizations.
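The subscription rule can be sketched as follows (a Python illustration with names of our choosing; the paper's implementation is in NetLogo):

```python
def subscribers(beliefs, b_i, epsilon=0):
    """Agents subscribe to institution i when their belief strength is
    within epsilon of the institution's perceived leaning b_i.
    The experiments in this paper fix epsilon at 0."""
    return [u for u, b_u in beliefs.items() if abs(b_u - b_i) <= epsilon]

# With epsilon = 0, only agents whose belief strength exactly matches
# the institution's leaning become subscribers.
subs = subscribers({0: 6, 1: 3, 2: 6}, b_i=6)
```

Here `subs` is `[0, 2]`: the uncertain agent (belief 3) does not subscribe to a strongly believing institution.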

At each time step t, each institution i will send a list of messages to each of its subscribers, represented by the function $M_i : t \mapsto (m_0, m_1, \ldots, m_j)$, $m_j \in M$, $j \ge 0$. In this simple example, the set of possible messages M will only encode one proposition, B, so for simplicity we can set M = B. Additionally, institutions will only send one message per time step. Whenever a message is received, an agent will “believe” it based on the contagion method being utilized, where bmj is the strength of belief in proposition B encoded by the message, and bu is the agent’s belief strength for B. If agent u believes message mj, then its belief strength is updated to bmj. When an agent believes a message, it shares the original message, mj, with all of its neighbors. It should be noted that because agent u changes its belief strength to bmj, agents will always share beliefs that are congruous with their prior belief strengths—cohering to our cognitive contagion model outlined above. After a neighbor receives a message, the cycle continues: it has a chance to believe the message, and if it believes, to spread the message to its neighbors. To avoid infinite sharing, each agent will only believe and share a given message once, based on a unique identifier assigned to the message when it is broadcast by the institutional agent.
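One cascade of this message-passing process can be sketched as a breadth-first traversal (our own Python rendering, not the paper's NetLogo code; we also make the simplifying assumption that an agent acts on a given message id only on first receipt, whether or not it believes it):

```python
from collections import deque

def run_cascade(graph, beliefs, subs, message, msg_id, seen, believe):
    """One cascade in the POD model: the message reaches the institution's
    subscribers first; each believing agent adopts the encoded belief
    strength and shares the original message with its neighbors. Tracking
    msg_id per agent prevents infinite re-sharing."""
    queue = deque(subs)
    while queue:
        u = queue.popleft()
        if msg_id in seen[u]:
            continue
        seen[u].add(msg_id)
        if believe(beliefs[u], message):
            beliefs[u] = message      # adopt the encoded belief strength
            queue.extend(graph[u])    # share the message with all neighbors

# A 3-agent line graph; "believe" uses the gamma = 1 threshold of Eq (4).
graph = {0: [1], 1: [0, 2], 2: [1]}
beliefs = {0: 4, 1: 5, 2: 0}
seen = {u: set() for u in graph}
run_cascade(graph, beliefs, [0], message=5, msg_id="m0", seen=seen,
            believe=lambda b_u, b_m: abs(b_u - b_m) <= 1)
```

After the run, `beliefs` is `{0: 5, 1: 5, 2: 0}`: agents 0 and 1 believe and pass the message along, while the cascade stalls at the strongly disbelieving agent 2.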

Each time a message makes its way through the population, via some contagion method, we are capturing one cascade. By combining this cascading behavior with belief modeling typical of cognitive contagion models, we can synthesize the advantages of both disciplines for a more expressive and grounded model.

Our model and experiments were implemented using NetLogo [91] and Python 3.5. Code is made available on GitHub (https://github.com/RickNabb/cognitive-contagion).

In-silico contagion experiments

Experiment design

We wish to show that our cognitive cascade model can capture the observed effects of identity-based belief spread better than existing models of simple or proportional threshold contagion. To do so, we will lay out a series of in-silico (computer simulated) experiments to test each contagion method given the same initial conditions.

For each contagion method, we test three conditions where one institutional agent, i1, attempts to spread different combinations of messages over time. We will refer to the first message set as single, as the institutional agent simply broadcasts one message for the entirety of the simulation; Mi(t) = (6), 1 ≤ t ≤ 100. The second set, we will call split, as the institution switches from messages of Mi(t) = (6), 1 ≤ t ≤ 50 to Mi(t) = (0), 51 ≤ t ≤ 100 halfway through the simulation. We call the final set gradual because the institution starts out spreading messages of Mi(t) = (6), but at every interval of ten time steps, switches to Mi(t) = (5), Mi(t) = (4), etc. until finishing the last 30 time steps by broadcasting Mi(t) = (0). We display these visually in Fig 3.

Fig 3. A graphical illustration of the different message sets.


A visual depiction of the different message set conditions used in our in-silico experiments: single (top), split (middle), and gradual (bottom), set against a 100-step simulation from t = 0 to t = 100.
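The three conditions can be expressed as simple schedules over t = 1…100. The exact step boundaries for gradual (ten steps per strength level, then 0 for the remainder of the run) are our reading of the description above:

```python
def single(t):
    # Broadcast strength 6 for the entire simulation.
    return 6

def split(t):
    # Strength 6 for the first half, strength 0 for the second.
    return 6 if t <= 50 else 0

def gradual(t):
    # Step down one strength level every ten time steps, then hold at 0.
    return max(0, 6 - (t - 1) // 10)
```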

These specific sets of messages were chosen to expose distinct effects given each set, specifically with agent-to-agent cognitive contagion in mind. Based on research about how identity-related beliefs update, a proper cognitive contagion function would not allow agents with a belief strength significantly different from the message to update their belief. With the single message set, we wish to provide the simplest case: only one belief strength message is being spread. We predict that simple contagion will simply spread the messages to all agents, regardless of prior belief strength, and all agents will eventually update their belief strength to that in the message. We also anticipate that proportional threshold contagion, being so reliant on the prior belief strengths of an agent's neighbors, will not be so straightforward. Assuming a nearly uniform distribution of agent neighbor belief strengths, the threshold chosen would likely make the difference between all agents updating their beliefs and no agents doing so. A proper cognitive contagion function that captures our desired effect should see belief updates only from agents whose belief strengths are already close to the belief strength encoded in the message.

The split message set, on the other hand, should have different effects. We predict that simple contagion will see all agents believe the first belief, then all agents believe the second, while proportional threshold contagion may spread the initial belief but not the second. For cognitive contagion, if the function we choose successfully models our target phenomena, only agents within the same threshold as in the single condition should believe the first message, then virtually no agents should believe the second—except a few who may switch based on exposure effects.

Finally, we predict that the gradual message set should be the only one that allows cognitive contagion to sway the entire network. Because agents will only believe messages that are relatively close to their prior beliefs, it logically follows that the only way to move beliefs from one pole to another is incrementally. We further anticipate that simple contagion will sway the entire agent population to adopt the belief strength of each message in turn, and that proportional threshold contagion may sway the entire population to one specific belief strength, but then be unable to change any agent belief strengths after such a contagion.

We also keep certain contagion variables static between experiments and conditions. In each case, we will fix the simple contagion probability, p, to be 0.15, and fix α, the proportional threshold contagion neighbor threshold, to be 0.35. Parameters are fully laid out in Table 1. The former was chosen to allow a slower spread of belief strengths, as higher values would make the spread too fast to properly analyze. The latter was likewise chosen to avoid being so low that contagion happens immediately and in all cases, or so high that it never occurs. In preliminary experiments, the chosen values best satisfied these goals. We further explain the process we underwent to choose these values in S1 Text and S1 Table.

Table 1. Parameter values for in-silico contagion experiments.

Parameter Value Description
Contagion Methods
p 0.15 The chance of spread for simple contagion.
α 0.35 The ratio for proportional threshold contagion.
β(bu,(t+1), bv) 1/(1 + e^(α(|bu,t − bv| − γ))) The cognitive contagion function (DCC, Eq (6)) we use.
α, γ α = 4, γ = 2 Scale and translation values for the DCC function, 1/(1 + e^(α(|bu,t − bv| − γ)))
POD Model
N 500 The number of networked agents.
|I| 1 The number of institutional agents.
ϵ 0 The maximum distance between agent and institution beliefs necessary to be a subscriber.
T {t, 0 ≤ t ≤ 100} The set of timesteps.

As a final experimental condition to vary independently of message set, we will run experiments on a host of different graph topologies, keeping the number of nodes and the prior distribution of belief strengths constant. We test each contagion method on four network topologies:

  • The Erdős-Rényi (ER) random graph [92]

  • The Watts-Strogatz (WS) small world network [93]

  • The Barabási-Albert (BA) preferential attachment network [94]

  • The Multiplicative Attribute Graph (MAG) [95]

We explicate rationale behind choosing each below. Running experiments across different graph topologies should allow us to determine how much graph structure affects cascades. We anticipate that cascades using simple contagion may not vary much over graph type, that those with proportional threshold contagion will vary the most because it is the most dependent on neighborhood structure, and that cognitive contagion should vary least, as it relies more on single instances of neighbor belief strengths rather than an aggregate. Additionally, because the model has stochastic elements in the initial distribution of agent beliefs, at each potential contagion step governed by the contagion function, and in the randomness of the networks constructed, experiments are run from ten to one hundred times and results are shown as averages over the total simulations, with variances displayed in supplemental materials S1 and S2 Figs where applicable. We display results aggregated over 10 simulations here, as there was not significant variation between results from 10, 50, or 100 simulations, and include results from 50 and 100 simulations with justification for using 10 in S3 Text, S2 Table, and S23 Fig.

But first, we need to choose a cognitive contagion function that best suits our empirically-based goals. After choosing such a function, it will be compared against simple and proportional threshold contagion in each condition.

Motivating choice of β functions

Unsurprisingly, depending on the choice of β, the network should display significantly different dynamics. That choice should depend on what type of phenomenon is being modeled. We will explore a variety of choices for this function and compare the outcome of their respective cognitive contagion against the qualitative target phenomena. Other works studying cognitive contagion have motivated similar dissonance-related contagion functions [76, 81], treating dissonance as a weighted update based on belief distance.

We want to follow suit, and tune the choice of function to capture the effect of updating beliefs core to an individual's identity: incoming belief strengths must be close to existing belief strengths to yield an update. If possible, it would also be useful to choose a function which captures aspects of exposure effects: beliefs do have a small chance to update, and that chance grows as the agent is exposed to a belief over time. However, the dissonance effect should take precedence: agents should only experience an exposure effect if incoming messages are already somewhat close to existing beliefs. Our function will differ from prior work in its discrete belief updating—moving from one discrete value to another rather than meeting in between.

Keeping these two target phenomena in mind, we explore three classes of functions: linear, threshold, and logistic.

Linear functions

To begin with perhaps the simplest function, an inverse linear function would capture the effect of making beliefs that are further apart have a smaller probability of updating. Moreover, we can generalize this inverse function by adding some parameters to add a bias and scalar to the denominator. The equation comes out as follows:

P(bu,(t+1) = bv | bu,t) = β(bu,t, bv) = 1 / (γ + α|bu,t − bv|) (5)

In this equation, γ becomes a parameter to add bias towards being reluctant to update beliefs, and α similarly decreases the probability of update as it increases. This turns out to be a useful formulation, because if we set γ and α to be very low, then the agent becomes relatively more “gullible.” Conversely, setting γ and α to be high would make the agent “stubborn.” We expect that the “stubborn” agent will be most desirable for our purposes of modeling cognitive dissonance.
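Eq (5) in code, with the three parameterizations compared below (the function and constant names are ours):

```python
def beta_linear(b_u, b_v, gamma, alpha):
    # Inverse linear update probability (Eq (5)): beliefs that are
    # farther apart have a smaller probability of updating.
    return 1.0 / (gamma + alpha * abs(b_u - b_v))

GULLIBLE = dict(gamma=1, alpha=0)
NORMAL   = dict(gamma=1, alpha=1)
STUBBORN = dict(gamma=10, alpha=20)
```

With the "gullible" setting the probability is 1 regardless of distance, while the "stubborn" setting caps the probability at 0.1 even for identical beliefs, which is what produces the slow, dissonance-like uptake described below.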

To compare parameterizations of the inverse linear function, we contrast a relatively "gullible" agent function (γ = 1, α = 0) with a "normal" function (γ = 1, α = 1) and a "stubborn" function (γ = 10, α = 20), on an Erdős-Rényi random graph G(N, ρ) = (V, E) with N = 250 and ρ = 0.05, in the POD model setup described above. We additionally display results only for the split message set, though results for the others can be found in S3–S8 Figs. Results are displayed in Fig 4.

Fig 4. Linear cognitive contagion function result comparisons.


The split message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the inverse linear cognitive contagion function in Eq (5). Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 1, α = 0); the middle shows “normal” agents (γ = 1, α = 1), and the right, “stubborn” agents (γ = 10, α = 20).

As expected, the results show that the "gullible" agents simply believe everything. The "normal" agents take a bit longer to update their belief strengths to that of the messages broadcast, but eventually all do. Importantly, when all agents' belief strength for B is b = 6 after the first 50 time steps, they all quickly switch over to b = 0, which does not fit the cognitive dissonance effect we are trying to model. The "stubborn" agent case is the closest to what we are seeking. Only the agents who are already closest to b = 6 believe the first message over time, with some effect on agents with more distant beliefs. After the messages switch polarity halfway through, the number of agents whose strength of belief is b = 6 does drop significantly, but less so than in the other conditions. This effect seems closer to the mix of dissonance and exposure effects that we desire: beliefs are less likely to change the farther away they are, but there is a chance of change given many messages over time.

Threshold functions

Next, we evaluate the behavior of threshold functions and compare it to our desired effect. Threshold update functions are used in opinion diffusion ABMs, most notably in the HK bounded confidence model [79, 81, 96]. We anticipate that these functions will capture the desired dissonance effect. The function can be parameterized as we already motivated in Eq (4), with γ serving as the threshold.
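Our reading of the threshold function (Eq (4) is given earlier in the paper) is a hard bounded-confidence rule: update with certainty inside the threshold, never outside it. A one-line sketch:

```python
def beta_threshold(b_u, b_v, gamma):
    # Hard bounded-confidence update: probability 1 iff the incoming
    # belief strength is within gamma of the agent's current strength.
    return 1.0 if abs(b_u - b_v) <= gamma else 0.0
```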

Using the same formulations as above, with “gullible” (γ = 6), “normal” (γ = 3), and “stubborn” (γ = 1) agents, we find results on the same graph structure as displayed in Fig 5.

Fig 5. Threshold cognitive contagion function result comparisons.


The split message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the threshold cognitive contagion function in Eq (4). Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 6); the middle shows “normal” agents (γ = 3); and the right, “stubborn” agents (γ = 1).

These results confirm our expectations, and perfectly capture the effect of only updating if incoming messages are within a certain distance of existing beliefs. However, the threshold function leaves no possibility of update that could capture the repetition effects of mere exposure or illusory truth. The probability of updating is either 0 or 1, which loses much of the nuance of the actual phenomena.

Sigmoid functions

Finally, we will test a logistic function—specifically a sigmoid function, as we are attempting to capture probabilities and the range of the sigmoid is [0, 1]. Sigmoid functions are commonly used in neural network models because they capture “activation” effects arguably akin to action-potentials in biological neurons, and rein in outputs so they do not explode while learning [97]. This property is useful for our purposes as well. We formulate our sigmoid cognitive contagion function as follows:

β(bu,(t+1), bv) = 1 / (1 + e^(α(|bu,t − bv| − γ))) (6)

In this equation, α and γ control the strictness and threshold, respectively. As α increases, the function looks more like a binary threshold function, and restricts any significant probability to center around γ − 1. γ controls the threshold value by translating the function on the x axis. Note that in a sigmoid function the value at x = γ will always be 0.5, so if one wishes to guarantee belief update given a threshold τ, it must be set as τ = γ + ϵ, where ϵ ∝ 1/α. We use this strategy to set our γ values throughout our in-silico experiments.
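Eq (6) in code (the function name is ours). The assertions mirror the strictness/threshold reading above: with the "stubborn" setting α = 4, γ = 2, a belief distance of 2 always yields probability 0.5, distance 1 is near-certain, and distance 6 is negligible:

```python
from math import exp

def beta_sigmoid(b_u, b_v, alpha, gamma):
    # Sigmoid cognitive contagion probability (Eq (6)): alpha sets
    # strictness, gamma translates the threshold along the x axis.
    return 1.0 / (1.0 + exp(alpha * (abs(b_u - b_v) - gamma)))
```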

Given the same experiments as above, the sigmoid function parameterized for different agent types ("gullible" (α = 1, γ = 7), "normal" (α = 2, γ = 3), and "stubborn" (α = 4, γ = 2)) yields the results shown in Fig 6.

Fig 6. Sigmoid cognitive contagion function result comparisons.


The split message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the sigmoid cognitive contagion function in Eq (6). Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (α = 1, γ = 7), the middle shows “normal” agents (α = 2, γ = 3), and the right, “stubborn” agents (α = 4, γ = 2).

These results seem to display characteristics of both the inverse linear function and the threshold function. The "gullible" agents, as always, believe everything, but with a softer transition than for the linear or threshold functions. "Normal" agents are the first indication that we are getting closer to our desired effect. After an initial widespread uptake of belief strength b = 6 in the first half of the simulation, some agents begin to believe b = 0, but the population that decreases the most to engender that gain seems to be agents with belief strength b = 1. Note that, given our model, some agents with strength b = 6 must have believed b = 0 messages: if they had not, the messages would never have been shared and reached b = 1 agents, as the institutional agents are only connected to b = 6 agents (ϵ = 0).

Finally, the "stubborn" agents seem to best capture our desired effect. Initially, b = 6, b = 5, and b = 4 agents are the only ones who update beliefs. There is also a small effect where some b = 3 agents update to b = 6, capturing exposure effects combined with a dissonance effect. Importantly, when messages switch to b = 0, none of the b = 6 agents update their beliefs: the exposure effect cannot operate because the dissonance effect takes primacy.

These agents act in a way akin to what we observe in the cognitive literature [20, 23], albeit in a highly simplified manner: an agent who "strongly disbelieves" in something like COVID mask-wearing will likely only be swayed by a message that "disbelieves" or is "uncertain" about the belief. On the individual level, a maximum of two relative magnitudes of belief separation, with decreasing probabilities as distance increases, seems to qualitatively match empirical work. In simulations using more than 7 points on a belief spectrum, this argument still holds: one can set equivalent belief "markers" along the spectrum and use those to scale the contagion function.

Given these initial experiments, it seems most reasonable to choose the “stubborn” sigmoid cognitive contagion function as that which best captures our desired effects. We will use this defensive cognitive contagion (DCC) function in the rest of our experiments as we compare the effects of cascades with cognitive contagion to those with simple and proportional threshold contagion.

Comparing contagion methods

Now that we have selected a cognitive contagion function that best captures the effects we wish to model, it is necessary to compare cascade results of this function to those from simple and proportional threshold contagion in our cascading POD model. We will investigate the way that these different contagion methods manifest effects on different network structures. For simplicity, we will henceforth refer to each cascade-contagion function combination as simple cascades, proportional threshold cascades, and cognitive cascades, respectively. Parameter values used in these comparisons are listed in Table 1. In addition to significant effects based on the choice of the β function, we expect that effects will also significantly differ based on the structure of the network. The structure will determine which ideas reach agents, and which do not, and thus should affect the final outcome of belief distribution over the network.

We will test each cascade method on four types of networks: the Erdős-Rényi (ER) random graph [92], the Watts-Strogatz (WS) small world network [93], the Barabási-Albert (BA) preferential attachment network [94], and the Multiplicative Attribute Graph (MAG) [95]. Each network has distinct properties that will affect how the cascading contagions play out. Additionally, we will test each message set for each network type to explore the effects of different influence strategies.

First, we will qualitatively analyze the aggregate effects on the diffusion of beliefs, and subsequent belief strength updates, across the entire network. After doing so for each network topology, we will analyze the cascading behavior itself.

Cascades on ER random networks

We will begin with cascades on Erdős-Rényi [92] random networks G(N, ρ) = (V, E) where ρ = 0.05 and N = 500. Note that ρ is not the chance of simple contagion, p, but the chance that two agents connect in the random graph. This graph type was chosen as a baseline to compare others to, as is standard in network science. Results of the simple and proportional threshold cascades are shown in S19 Fig.

Results from the simple cascade experiments show that in each message set condition, the belief strength being spread pervaded the entire network in all cases. Moreover, the strength of beliefs broadcast were adopted by the population very quickly. In the case of proportional threshold cascades, the initial distributions of belief strengths did not change in any message set condition.

Cognitive cascades on the ER graphs yielded markedly different results. As depicted in Fig 7, both the single and split message sets were only able to sway agents that started with b = 6, b = 5, or b = 4, with what appears to be a few b = 3 agents persuaded. Importantly, no agents were swayed after the messaging change in the split condition. The gradual message set is the only one that was able to sway all agents over to b = 0.

Fig 7. DCC results on ER random networks.


DCC on ER random networks with N = 500, and connection chance ρ = 0.05. Graphs show the percent of agents who believe B with strength b over time.

Cascades on WS small world networks

Our second set of experiments were conducted on Watts-Strogatz [93] small world networks, G(N, k, ρ) = (V, E), where N = 500, k = 5, and ρ = 0.5. In this formulation, k is the number of initial neighbors any node is connected to, and ρ is the chance of rewiring any edge. We chose this graph topology because small world networks exhibit some attributes of real-world social networks (low diameter and triadic closure). Results of simple and proportional threshold cascades on these WS graphs are shown in S20 Fig.

Simple cascade results showed largely the same pattern as in the ER random graph, but a significantly slower spread through the population. Interestingly, proportional threshold cascades were successful on the WS graphs, but with a high amount of variance over simulations (variance shown in S1 Fig).

Results from cognitive cascade experiments closely match those from the ER random graph experiments. These are displayed in Fig 8. The population displayed the same patterns despite a significantly different graph structure, a result that demands explanation. We will discuss this further below.

Fig 8. DCC results on WS small world networks.


DCC on Watts-Strogatz small world networks with N = 500, initial neighbors k = 5, and rewiring chance ρ = 0.5. Graphs show the percent of agents who believe B with strength b over time.

Cascades on BA preferential attachment networks

We continued our experiments by testing Barabási-Albert [94] preferential attachment networks, G(N, m) = (V, E), where N = 500 and m = 3; m represents the number of edges added with each newly added node. This network type was also chosen because of its properties that closely resemble real-world social networks (low diameter, power law degree distribution). Results are shown in S21 Fig.

Simple cascade results again fall between the previous two cases: spread is slower than in ER random graphs, but faster than in WS graphs. In this case, spread is facilitated by the power law distribution of node degree—even a few nodes with high degree believing the message can have an outsize effect in spreading quickly to the outskirts of the network [98]. Also interestingly, proportional threshold cascades appear to have slight effects on these graphs, with the gradual message set having the most effect. Proportional threshold cascade results also showed significant variance (shown in S2 Fig).

Results from cognitive cascade experiments were again similar to those of the ER random and WS graphs—most similar to results from the WS graph. Results are shown in Fig 9.

Fig 9. DCC results on BA preferential attachment networks.


DCC on a Barabási-Albert preferential attachment network with N = 500, and added edges m = 3. Graphs show the percent of agents who believe B with strength b over time.

Cascades on MAG networks

Finally, we tested cascades on the Multiplicative Attribute Graph [95] with an affinity matrix Θb that yielded a graph with very high homophily. We chose this graph topology because real world social networks are homophilic—people who have similar interests tend to connect. Testing a highly homophilic graph (higher than in a real social network) can allow us to test the extreme case of communities in silos based on their belief strength. The affinity matrix was constructed as follows:

Θb = (θij) ∈ R^(7×7), θij = 1/(6 + 50(j − i)²) (7)

     [ 0.167   0.018   0.005   0.002   0.001   0.0008  0.0006
       0.018   0.167   0.018   0.005   0.002   0.001   0.0008
       0.005   0.018   0.167   0.018   0.005   0.002   0.001
   =   0.002   0.005   0.018   0.167   0.018   0.005   0.002    (8)
       0.001   0.002   0.005   0.018   0.167   0.018   0.005
       0.0008  0.001   0.002   0.005   0.018   0.167   0.018
       0.0006  0.0008  0.001   0.002   0.005   0.018   0.167 ]

To measure homophily, we used a simple measure of the global average neighbor distance given the b value of each node, and compared against a random ER graph. The measure is detailed in Eq (9):

h(G = (V, E)) = ( Σv∈V Σu∈N(v) |bu − bv| ) / 2|E|, (9)

where N(v) is a function that returns the neighbors u ∈ V of v. Over ten ER random graphs with N = 500 and ρ = 0.05, the mean average neighbor distance was 2.30 with a mean variance of 0.349. Over ten MAG graphs generated with Θb, the mean average neighbor distance was 0.31 with a mean variance of 0.02. Results from simple and proportional threshold cascades on these homophilic MAG graphs are shown in S22 Fig.
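As we read Eq (9), the measure is the mean |bu − bv| over all ordered neighbor pairs (each undirected edge contributes twice, hence a 2|E| normalization). A sketch, with illustrative adjacency-dict inputs:

```python
def homophily(beliefs, neighbors):
    # Mean |b_u - b_v| over ordered neighbor pairs; lower values mean
    # neighbors hold more similar beliefs (stronger homophily).
    total = 0
    pairs = 0
    for v, nbrs in neighbors.items():
        for u in nbrs:
            total += abs(beliefs[u] - beliefs[v])
            pairs += 1  # over the whole loop, pairs == 2|E|
    return total / pairs
```

On a path 0–1–2 with beliefs (0, 6, 0), every neighbor pair differs by 6, so the measure is 6.0; a maximally homophilic graph scores 0.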

As it turns out, a high degree of homophily did not appear to make a significant difference to any cascade patterns. The patterns generated from simple and proportional threshold cascades appear similar to those from the ER random graphs. The same is true for cognitive cascade results, which are depicted in Fig 10.

Fig 10. DCC results on homophilic MAG networks.


DCC on homophilic MAG networks with N = 500, and Θb detailed in Eq (8). Graphs show the percent of agents who believe B with strength b over time.

Analysis of results

Belief change

To quantify the relationships we observe qualitatively between the cascading contagion patterns, we examine correlations between results. We illustrate these patterns using only data from the single messaging pattern, since differences within cascade methods appear smallest for this pattern. Arguing from the most similar-looking results is conservative: if simple and proportional threshold cascades differ across graph types even here, while DCC results remain quite similar, that further demonstrates the stability of the DCC cascade method.

Table 2 displays results within cascade methods (simple, proportional threshold, and DCC) between pairs of graph types. We measure various correlations between belief results from these pairs. Details of the methods by which we performed the tests can be found in S4 Text. Briefly, r¯ denotes an average Pearson coefficient across belief values; and χ2¯ denotes a version of the χ2 test applied to time series data, testing for independence between the two graphs’ belief distributions at each time step. For all test values, a higher value indicates stronger correlation.

Table 2. Correlation test results measuring similarity between cascades on graph type pairs, by cascade method.

Measure ER-WS ER-BA ER-MAG WS-BA WS-MAG BA-MAG
Simple
r¯ 0.751 0.869 0.999 0.973 0.750 0.868
χ2¯ 0.010 0.208 1.000 0.040 0.010 0.208
Proportional Threshold
r¯ 0.529 0.649 0.899
χ2¯ 0.020 0.040 1.000 0.020 0.020 0.040
DCC
r¯ 0.964 0.983 0.981 0.989 0.970 0.917
χ2¯ 0.327 1.000 1.000 1.000 0.327 1.000

Correlation tests within cascade method, between graph type pairs. Bolded values are the strongest correlations across cascade methods for a given test. ∅ denotes that a test could not be performed because no changes occurred from initial conditions on one or both graphs.

It is clear from the correlation test results that the DCC method yields the strongest correlations between graph topologies. This supports what appears qualitatively from the graphed results. While for simple and proportional threshold cascades correlations are weak or hardly present, almost all values for DCC are consistently strong. Intuitively, we might expect graph structure to affect any cascading contagion method, as it does for simple and proportional threshold cascades. However, this analysis reveals that when using the DCC function, effects of structure are greatly diminished in most cases.

Cascades

Results across different graph topologies further support our motivations for introducing cognitive cascade models. These results are encouraging because our model removes the structural dependencies of proportional threshold cascades, and replaces simple cascades' random chance of spread with one motivated by what agents already believe. Thus the key factors determining the spread of messages and change in beliefs in our cognitive cascade model are:

  1. Whether or not any given agent is exposed to some message;

  2. How many times an agent is exposed to similar messages; and

  3. The difference between agent beliefs and that message.

These results qualitatively match what has been observed in the misinformation literature. Even when exposed to factual or scientific evidence (e.g. that wearing masks would mitigate the COVID-19 pandemic), people who are already skeptical of mask-wearing typically cannot be swayed; they often instead rationalize their existing beliefs [2, 22, 31, 85]. Additionally, mass exposure to a given message still has a chance to sway agents in our model, proportionally to how distant that message is from agent beliefs. This captures the illusory truth [23, 70] and mere-exposure [86] effects.

We can also quantify this result by analyzing the cascading behavior for any given message. For any message mj from a list Mi : t ↦ (m0, m1, …, mj), mj ∈ M, j ≥ 0, sent by an institution i at time step t, there will be a probability that any agent u will believe the message, which depends on the factors listed above. In terms of our model,

(1) becomes the probability of mj being received and believed by any neighbor in N(u) of u;

To travel from institution i to agent u, a message must follow a directed path through the graph, wiu = (v1, v2, …, vn) where v1 = i and vn = u. The probability of a message being passed down the entire path can be expressed as:

P(mj, w) = Πv∈w β(v, mj). (10)
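Eq (10) as code: the chance a message survives a whole chain is the product of each agent's chance of believing it, so a single distant agent collapses the chain. The DCC parameters and the example belief chains are illustrative:

```python
from math import exp

def dcc(b_u, b_m, alpha=4, gamma=2):
    # DCC belief-update probability, as in Eq (6).
    return 1.0 / (1.0 + exp(alpha * (abs(b_u - b_m) - gamma)))

def path_probability(path_beliefs, b_message):
    # Eq (10): product of per-agent belief probabilities along a path.
    p = 1.0
    for b in path_beliefs:
        p *= dcc(b, b_message)
    return p
```

A chain of belief-6 agents passes a strength-6 message almost surely, while inserting one belief-3 agent (distance 3) multiplies the whole product by roughly 0.018.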

To then properly represent (1), we can limit wiu to end at neighbors of u.

The next step requires us to determine how many neighbors N(u) of u believe mj—as they would then subsequently propagate the message to u. Therefore,

(2) becomes |Nβ(i, u, mj)|, the number of neighbors of u who are likely to believe mj coming from i.

However, we do not know ahead of time which neighbors will actually believe mj as the model is stochastic. We can argue that a probability within some δ will suffice. We can represent Nβ(i, u, mj) as:

Nβ(i, u, mj) = { v | v ∈ N(u) ∧ P(wiv, mj) ≥ 1 − δ }, (11)

Because in the POD model, agents can only believe and share mj once, to determine how many neighbors of u would share the message, we must choose a set of non-overlapping paths W* from i to neighbors of u—moreover, one that maximizes total path probabilities:

Wiu* = argmaxW Σw∈W P(mj, w), W ⊆ { wiv | v ∈ N(u) }, w ∩ w′ = ∅ for all distinct w, w′ ∈ W (12)

The algorithmic formulation of such a process would best be captured in future work.

Regardless of methods, it stands that P(mj,w) is crucial in determining whether any agent u will have a chance of receiving a message and believing it. This result can help explain why simple and proportional threshold cascades showed such variation across graph types, but cognitive cascades did not. Given the probabilities of adopting belief strengths bu given a prior belief of bv for DCC shown in Table 3, it becomes clear that if any β(v, mj) is given a belief strength difference of 3 or higher, then the entire chain’s probability P(mj,w) will collapse to very close to 0.

Table 3. Probabilities of adopting beliefs bu given prior bv using DCC.

bu/bv 0 1 2 3 4 5 6
0 0.999 0.982 0.500 0.018 <0.001 <0.001 <0.001
1 0.982 0.999 0.982 0.500 0.018 <0.001 <0.001
2 0.500 0.982 0.999 0.982 0.500 0.018 <0.001
3 0.018 0.500 0.982 0.999 0.982 0.500 0.018
4 <0.001 0.018 0.500 0.982 0.999 0.982 0.500
5 <0.001 <0.001 0.018 0.500 0.982 0.999 0.982
6 <0.001 <0.001 <0.001 0.018 0.500 0.982 0.999

Probabilities given by β(bu, bv) for the DCC function, described in Eq (6).

To satisfy (1), a path of agents with belief strengths at most distance 1 away from the message would reliably transmit it with high probability. If any agents in the path have a belief of distance 2 from the message, the transmission probability would decrease; halving the probability with each agent of distance 2. Compare this with simple cascades, where every agent has a flat 0.15 probability of sharing, and the path probability converges close to 0 in only two steps; or proportional threshold cascades where if any agent in the path does not meet the threshold of 0.35, the path probability immediately collapses to 0.

Taking these criteria for (1) into account, Table 3 also makes clear that to satisfy (2) and (3), a single path may suffice, i.e. |W_{iu}^*| = 1. If the message belief strength is at distance 0 or 1 from u's, then u is nearly guaranteed to believe the message after receiving it only once. Conversely, in any quantitative analysis of which agents may believe a message, we can exclude with high confidence all agents whose belief strength differs from the message's by 3 or more, as their chances of believing the message, even if it reached them, would be near zero.

Therefore, to quantitatively demonstrate why the cognitive cascade results were so stable across random graph types, we can show, for a randomly selected agent u with belief strength b_u, the proportion of 100 random generations of the graph that yield at least one path consisting entirely of agents v whose beliefs b_v lie within distance τ of a message with belief strength b_{m_j}. We show this for all potential values of b_u, holding the message belief strength b_{m_j} constant at 6 (quantifying the single message condition). Importantly, this path does not include u itself, because for b_u = 3 and below the distance from b_u to b_{m_j} would always be too high; thus our paths lead to neighbors of u. These paths were found by assigning each directed edge a weight equal to the distance between the message and the belief strength of the edge's source node, then running Dijkstra's weighted shortest path algorithm with i as the source and u as the target. Results are displayed in Table 4.

Table 4. Proportion of random graphs in which an agent with belief b_u has a strongly contagious DCC path from the institutional agent.

(τ = 1) bu = 0 bu = 1 bu = 2 bu = 3 bu = 4 bu = 5 bu = 6
ER 0.91 0.9 0.97 0.95 1.0 1.0 1.0
WS 0.51 0.56 0.59 0.7 0.64 0.62 1.0
BA 0.73 0.66 0.66 0.75 0.63 0.75 1.0
MAG 0.13 0.2 0.37 0.54 0.8 1.0 1.0
(τ = 2) bu = 0 bu = 1 bu = 2 bu = 3 bu = 4 bu = 5 bu = 6
ER 0.98 0.99 0.97 1.0 1.0 1.0 1.0
WS 0.65 0.8 0.76 0.72 0.76 0.81 1.0
BA 0.84 0.89 0.84 0.82 0.89 0.88 1.0
MAG 0.29 0.29 0.45 0.84 0.99 1.0 1.0

Proportion of 100 random graphs (N = 500) with at least one path leading from the institutional agent i to a randomly selected node u with belief strength b_u, where each agent v in the path has belief strength |b_v − b_{m_j}| ≤ τ, and b_{m_j} = 6.

When τ is 1, such a path yields an almost guaranteed probability of the message reaching u. When τ is 2, the path yields a range of chances to reach u, depending on how many distance-2 hops it contains, each contributing a factor of 0.5 to the total product. Compared with the path probabilities for cascades using simple or proportional threshold contagion, where the former multiplies a flat 0.15 per hop and the latter requires every agent in the path to meet the 0.35 threshold to have any non-zero chance, both path types under DCC yield significantly higher probabilities of messages reaching target agents.
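The gap between contagion types can be illustrated with some simple arithmetic using the Table 3 hop probabilities; the five-hop path length here is an arbitrary illustrative choice.

```python
# Five-hop path probabilities under each contagion type (Table 3 hop values).
simple = 0.15 ** 5                # flat 0.15 per hop: collapses almost immediately
dcc_tau1 = 0.982 ** 5             # all hops within distance 1: stays high
dcc_tau2 = 0.982 ** 3 * 0.5 ** 2  # two distance-2 hops each halve the product
threshold_fail = 0.0              # any agent below the 0.35 threshold kills the path
```

Even with two distance-2 hops, the DCC path retains roughly a one-in-four chance of delivering the message, versus a vanishing chance under simple contagion.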

Moreover, the analysis shows that in graph topologies where both path types are present in high proportion, messages are highly likely to reach agents of all belief strengths. In Erdős-Rényi random graphs in particular, both path types are almost always present. Watts-Strogatz and Barabási-Albert networks show lower, but still high, proportions of both path types, which likely accounts for the slight variations in the cognitive cascade results displayed above. Predictably, homophilic MAG graphs show a decreasing likelihood of both path types as the distance between b_u and b_{m_j} increases. If any of these paths delivered a message to agent u, then combined with the probabilities in Table 3, target agents with belief strength within distance 2 of the message will likely believe it and update accordingly. This is exactly what we see qualitatively in the results above: regardless of graph topology, agents with belief strength 4 or higher quickly adopt the stronger belief of 6 from the message, and agents with belief strength 3 eventually update after enough messages reach them.
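The path-finding procedure described above can be sketched with networkx. This is a sketch of our reading of the method, with illustrative names and data structures: each directed edge is weighted by its source node's distance to the message, Dijkstra finds a minimum-total-weight path from i to u, and every agent on the path except u is then checked against τ.

```python
import networkx as nx

def has_contagious_path(G, beliefs, i, u, b_mj, tau):
    """Check for a path from institutional agent i to target u whose agents
    (excluding u itself) all hold beliefs within tau of message strength b_mj."""
    D = nx.DiGraph()
    for v, w in G.edges():
        # Weight each directed edge by the *source* node's distance to the message.
        D.add_edge(v, w, weight=abs(beliefs[v] - b_mj))
        D.add_edge(w, v, weight=abs(beliefs[w] - b_mj))
    try:
        path = nx.dijkstra_path(D, i, u)
    except nx.NetworkXNoPath:
        return False
    # u is excluded: the path only needs to deliver the message to u.
    return all(abs(beliefs[v] - b_mj) <= tau for v in path[:-1])
```

Note that minimizing the summed distance is a proxy: Dijkstra is not guaranteed to return the path whose worst single hop is smallest, though low-sum paths tend to avoid large hops.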

Discussion

From our in-silico experiments, it is clear that the three types of social contagion affect populations differently given the same initial conditions and beliefs to spread. We showed, as predicted, that at the individual level, simple and proportional threshold contagion change agent belief strengths in a manner that does not depend on what agents previously believed. In the single and split message set conditions, many agents were swayed from value to value regardless of their initial beliefs. Thus, these contagion models do not capture the cognitive phenomena that motivated our experiments.

On the other hand, our simple individual cognitive contagion and cascade models also performed as expected, and the results were fairly robust across several graph topologies. In the single and split message set conditions, most agents following the DCC function did not change their belief strength over time. This fits the underlying social theory: the messages were too far from what agents initially believed, so not updating accurately models the defensive or entrenching effects observed when people are exposed to identity-related beliefs they disagree with [23, 31]. The only message set that swayed the entire population under cognitive cascades was the gradual set.

The POD model with DCC also appears to capture the population-level trends in opinion data that originally motivated our study. Results match observed partisan polarization phenomena [69, 29], as agents who update their belief strengths in this manner are highly unlikely to be swayed by belief strengths too far from their own. Once swayed in one direction or another, our agents could not adopt significantly differing belief strengths without being nudged along. Moreover, our results appear convergent with emerging work using ABMs with similar polar belief assumptions. Sikder et al. demonstrate in their ABM of biased agents, those accepting only congruent information, that such agents engender a mix of final opinions in a manner that appears robust to graph topology [72]. This also seems in line with work on polarization arguing that biased agents are key to bringing about polarized beliefs, regardless of homophilic network structure [75–77]. Our results, though, are convergent under a different model: the cited works employ typical social contagion models rather than one driven by an institutional spreader leading to cascades, as ours is. The questions asked in the cited works could also be asked of cascading contagion models in future study.

Given these similarities in results, even our very simple model may lend insight into potential ways to address the spread of conspiracy beliefs, though we in no way mean for these to be taken as policy recommendations. Our model and experiments would need to be modified and parameterized to fit a given narrative spread scenario before being grounded enough to draw conclusions from. This may be appropriate for a follow-up study applying the model, in more detail, to the spread of a COVID-related belief such as belief in mask-wearing.

Our analysis revealed that the three most important factors in swaying any agent's belief were (1) whether or not an agent is exposed to a message, (2) the number of exposures, and (3) the difference between the agent's prior beliefs and those expressed in the message. Even if (1) and (2) are met, as in some attempts to debunk misinformation [5], (3) would prevent staunch conspiracy believers from changing beliefs when exposed to a contradictory message. Some analyses focus on network structure [2, 16, 99], i.e. (1) and (2), without acknowledging that individual psychology, as in (3), is just as important. Our model, which captures both network effects and individual effects, therefore offers insights toward a more holistic intervention. Our analysis showed that in a highly homophilic network (a trait present in real social networks), certain messages have a slim chance of reaching those with certain beliefs. These theoretical results were confirmed in supplemental experiments (results in S15–S18 Figs) where graphs with high homophily and low node degree yielded smaller cascades and less contagion, even with the DCC function. The interplay between network structure and cognitive function (e.g. "stubbornness") on cascade results could benefit from further study, as it likely has empirical analogues crucial to understanding reality. Any intervention would need to take these factors into account and consider what types of messages would be most likely to reach certain populations.

Moreover, at the individual level, an intervention under our model would have to gradually nudge the agent away from an undesirable belief strength; this brings the individual into the debate over interventions. The only message sets in our experiments that successfully swayed all agents in the population were those which gradually eased agents from one belief polarity to another. It should be noted that belief change tactics aimed at individuals' psychology are already widely considered and used by private and public institutions to manipulate target population beliefs, though not yet in the case of domestic misinformation [33, 100, 101]. The ethics of these interventions, often targeted at anti-radicalization, voter manipulation, or behavior change for economic gain, are clearly fraught. This calls for more analysis of the ethics of intervention techniques, but such an endeavor is outside the scope of this report.

Limitations and future work

Our models are in the early stages of development. While the motivation for models of cognitive cascades exists, several steps must still be taken to flesh the model out further. For instance, the social science literature behind misinformation and identity-based reasoning documents belief effects that rely on in-group/out-group dynamics [14, 22, 84], trust in message sources [10], more nuanced belief structures [31, 102–104], and effects of emotion on social contagion [14, 32]. These theories and findings could motivate agent cognitive models more complex than either the simple sigmoid distance function or the singular proposition used in our model.

Our cognitive cascade model could also be made more complex in ways that lend themselves to interesting analysis. For one, complexity could be added to agent prior beliefs or to the contagion functions. For the former, agent priors could be drawn from distributions other than the uniform distribution. For the latter, parameters in the contagion functions could themselves be distributed to yield varying levels of "gullible" versus "stubborn" agents, rendering the graph even more heterogeneous. These techniques are common in opinion diffusion models [15]. Prior agent beliefs or contagion function parameters could otherwise be initialized from empirical data.

Moreover, there are potential model features that could be added to introduce a dynamic quality to the structure of the network. Some diffusion models we reviewed incorporate the ability for agents to switch information sources [72] and neighbor connections [81] based on the degree of agreement between their modeled beliefs. In media studies, however, there is no clear consensus on the behavior of individuals when it comes to choosing or switching media sources [52, 57]. Media studies dub consumer preference for bias-confirming media “selective exposure,” and studies have shown both evidence of its existence [5355] (often with small effects), and against it [8, 52, 56]. A more thorough synthesis of this literature could lead to another model layer that captures meaningful aspects of reality.

Further, a more rigorous application of the model to empirical population-level belief changes would help verify the legitimacy of the model's results. Using both real network structures, such as snapshots of social media networks, and real institutional message data, such as tweets or posts from notable figures, would be steps toward this goal. While ABMs have been used to model the spread of misinformation [16], social media messages [99, 105], and value-laden topics [106], it appears that few have verified outcomes against ground truth, likely because such data is difficult to obtain.

There is great promise for computational social scientific tools like ABM to leverage computational power on complex social problems previously limited to thought experiments and small-scale studies, such as modeling misinformation spread. If that promise is to be fulfilled, however, more work must be done to motivate and empirically ground both individual agent models and global network structures and population models. This work is a step toward establishing fruitful collaboration between the computational modeling community and social scientists to tackle one of the greatest political challenges of our time.

Conclusion

This paper lays out what we call a cognitive cascade model: an individual cognitive contagion model for identity-related belief spread, embedded in a Public Opinion Diffusion (POD) model in which external, institutional agents (modeling media companies) influence internal agent beliefs. By giving each individual agent a cognitive model to direct belief updates, the cognitive cascade model allows a level of expressiveness above the simple and proportional threshold contagion models typically used in network cascade analysis. Moreover, adding institutional agents to drive belief cascades across various network topologies brings insights from network science to typical opinion diffusion studies. After proposing the cognitive cascade model, we compared potential cognitive contagion functions to arrive at one capturing misinformation spread, which we called a Defensive Cognitive Contagion (DCC) function, and which adequately captures the cognitive dissonance and exposure effects referenced in the empirical literature. This allowed us to run simulations of networked populations of agents whose belief strength in a given proposition is influenced by an external agent. Across several graph topologies, keeping the cascade model consistent, we compared simulation results for cascades using simple contagion, proportional threshold contagion, and our DCC function. Analysis revealed that our DCC function is much less sensitive to graph topology than the other cascade methods, and that the crucial factor in belief change is not only who surrounds a given agent but also the content of any message and its relation to the agent's prior belief. We concluded by briefly motivating potential interventions to correct misinformation and conspiracy beliefs that address the individual and the network holistically, rather than the network alone.

Supporting information

S1 Text. Process for choosing contagion parameters.

An outline of our decision process for choosing simple and proportional threshold parameter values of p = 0.15 and γ = 0.35 for in-silico experiments.

(TEX)

S2 Text. Process for testing different belief resolutions and analysis of results.

An outline of our process for setting up and running versions of our main contagion experiments with different belief resolutions (2, 3, 5, 9, 16, 32, and 64), including an analysis of some of the results.

(TEX)

S3 Text. Process for comparing contagion results with differing simulation run counts.

An outline of our process for setting up, running, and analyzing versions of our main contagion experiments with different simulation run counts (10, 50, and 100).

(TEX)

S4 Text. Process for running correlation analyses between contagion results.

An in-depth explanation of the correlation measures we used to measure similarity between contagion result data.

(TEX)

S1 Table. Results of parameter sweeping simple contagion value p.

A table of timestep t at which simple contagion with probability p spread b = 6 to at least 90% of agents in different graph topologies (at a belief resolution of 7). ∅ indicates the contagion never reached at least 90% of agents.

(TEX)

S2 Table. Correlation test results with low scores between run count combinations.

Correlation tests that yielded low scores for certain experimental combinations of graph type and message set.

(TEX)

S1 Fig. Proportional threshold contagion variance on WS small world networks.

Proportional threshold contagion results over ten iterations of a Watts-Strogatz small world network with N = 500, initial neighbors k = 5, and rewiring chance ρ = 0.5. Graphs show the mean percent of agents who believe b ∈ B, color coded by b value, plotted against time step. Shaded portions show variance over iterations.

(PNG)

S2 Fig. Proportional threshold contagion variance on BA preferential attachment networks.

Proportional threshold contagion results over ten iterations of a Barabási-Albert preferential attachment network with N = 500, and added edges m = 3. Graphs show the mean percent of agents who believe b ∈ B, color coded by b value, plotted against time step. Shaded portions show variance over iterations.

(PNG)

S3 Fig. Additional results from single message set contagion with linear cognitive contagion functions.

The single message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the inverse linear cognitive contagion function in Eq (5) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 1, α = 0); the middle shows “normal” agents (γ = 1, α = 1), and the right, “stubborn” agents (γ = 10, α = 20).

(PNG)

S4 Fig. Additional results from single message set contagion with threshold cognitive contagion functions.

The single message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the threshold cognitive contagion function in Eq (2) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 6); the middle shows “normal” agents (γ = 3); and the right, “stubborn” agents (γ = 1).

(PNG)

S5 Fig. Additional results from single message set contagion with sigmoid cognitive contagion functions.

The single message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the sigmoid cognitive contagion function in Eq (6) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (α = 1, γ = 7), the middle shows “normal” agents (α = 2, γ = 3), and the right, “stubborn” agents (α = 4, γ = 2).

(PNG)

S6 Fig. Additional results from gradual message set contagion with linear cognitive contagion functions.

The gradual message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the inverse linear cognitive contagion function in Eq (5) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 1, α = 0); the middle shows “normal” agents (γ = 1, α = 1), and the right, “stubborn” agents (γ = 10, α = 20).

(PNG)

S7 Fig. Additional results from gradual message set contagion with threshold cognitive contagion functions.

The gradual message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the threshold cognitive contagion function in Eq (2) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 6); the middle shows “normal” agents (γ = 3); and the right, “stubborn” agents (γ = 1).

(PNG)

S8 Fig. Additional results from gradual message set contagion with sigmoid cognitive contagion functions.

The gradual message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the sigmoid cognitive contagion function in Eq (6) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (α = 1, γ = 7), the middle shows “normal” agents (α = 2, γ = 3), and the right, “stubborn” agents (α = 4, γ = 2).

(PNG)

S9 Fig. Contagion result comparisons using a belief resolution of 2.

The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 2}, b ∈ ℤ.

(PNG)

S10 Fig. Contagion result comparisons using a belief resolution of 3.

The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 3}, b ∈ ℤ.

(PNG)

S11 Fig. Contagion result comparisons using a belief resolution of 5.

The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 5}, b ∈ ℤ.

(PNG)

S12 Fig. Contagion result comparisons using a belief resolution of 9.

The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 9}, b ∈ ℤ.

(PNG)

S13 Fig. Contagion result comparisons using a belief resolution of 16.

The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 16}, b ∈ ℤ.

(PNG)

S14 Fig. Contagion result comparisons on BA preferential attachment networks using a belief resolution of 32.

The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 32}, b ∈ ℤ.

(PNG)

S15 Fig. Contagion result comparisons on MAG networks using a belief resolution of 32.

The single, split, and gradual message sets on a Multiplicative Attribute Graph, N = 500, and Θ generated from the same formula in Eq (8) in the main paper—i.e. one that brings about high levels of homophily so agents would rarely connect to agents more than 3 belief values away from them. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 32}, b ∈ ℤ.

(PNG)

S16 Fig. Contagion result comparisons on WS small world networks using a belief resolution of 64.

The single, split, and gradual message sets on a Watts-Strogatz small world network with N = 500, initial neighbors k = 5, and rewiring chance ρ = 0.5. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 64}, b ∈ ℤ.

(PNG)

S17 Fig. Contagion result comparisons on BA preferential attachment networks using a belief resolution of 64.

The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 64}, b ∈ ℤ.

(PNG)

S18 Fig. Contagion result comparisons on MAG networks using a belief resolution of 64.

The single, split, and gradual message sets on a Multiplicative Attribute Graph, N = 500, and Θ generated from the same formula in Eq (8) in the main paper—i.e. one that brings about high levels of homophily so agents would rarely connect to agents more than 3 belief values away from them. Graphs display percent of agents who believe some b in B = {b, 0 ≤ b < 64}, b ∈ ℤ.

(PNG)

S19 Fig. Simple and proportional threshold contagion results on ER random networks.

Simple (top row) and proportional threshold (bottom row) contagion on ER random networks with N = 500, and connection chance ρ = 0.05. Graphs show the percent of agents who believe B with strength b over time.

(PNG)

S20 Fig. Simple and proportional threshold contagion results on WS small world networks.

Simple (top row) and proportional threshold (bottom row) contagion on a Watts-Strogatz small world network with N = 500, initial neighbors k = 5, and rewiring chance ρ = 0.5. Graphs show the percent of agents who believe B with strength b over time. Asterisks (*) denote these contagions had significant variance over simulation iterations.

(PNG)

S21 Fig. Simple and proportional threshold contagion results on BA preferential attachment networks.

Simple (top row) and proportional threshold (bottom row) contagion on Barabási-Albert preferential attachment networks with N = 500, and added edges m = 3. Graphs show the percent of agents who believe B with strength b over time. Asterisks (*) denote these contagions had significant variance over simulation iterations.

(PNG)

S22 Fig. Simple and proportional threshold contagion results on homophilic MAG networks.

Simple (top row) and proportional threshold (bottom row) contagion on homophilic MAG networks with N = 500, and Θ_b detailed in Eq (8). Graphs show the percent of agents who believe B with strength b over time.

(PNG)

S23 Fig. Graphical contagion results for low correlation combinations across simulation run counts.

Graphical results of contagion cascades across 10, 50, and 100 simulation runs for specific graph-message set combinations. Results displayed are those which yielded the lowest correlation scores between simulation run counts.

(PNG)

Data Availability

All data is found within the paper and its Supporting information files. In addition, we provide all code and documentation at: https://github.com/RickNabb/cognitive-contagion.

Funding Statement

We thank the Tufts Data Intensive Studies Center (DISC) and National Science Foundation grant NSF-NRT 2021874, for supporting this research. LC and NR also thank NSF CCF-1934553 for additional support.

References

  • 1.Bursztyn L, Rao A, Roth C, Yanagizawa-Drott D. Misinformation during a pandemic. University of Chicago, Becker Friedman Institute for Economics Working Paper. 2020;(2020-44).
  • 2. Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, et al. The spreading of misinformation online. Proceedings of the National Academy of Sciences. 2016;113(3):554–559. doi: 10.1073/pnas.1517441113 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Uscinski JE, Enders AM, Klofstad C, Seelig M, Funchion J, Everett C, et al. Why do people believe COVID-19 conspiracy theories? Harvard Kennedy School Misinformation Review. 2020;1(3). [Google Scholar]
  • 4. Kouzy R, Abi Jaoude J, Kraitem A, El Alam MB, Karam B, Adib E, et al. Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on Twitter. Cureus. 2020;12(3). doi: 10.7759/cureus.7255 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Brennen JS, Simon F, Howard PN, Nielsen RK. Types, sources, and claims of Covid-19 misinformation. Reuters Institute. 2020;7:3–1. [Google Scholar]
  • 6.Schaeffer K. A look at the Americans who believe there is some truth to the conspiracy theory that COVID-19 was planned; 2020. Available from: https://www.pewresearch.org/fact-tank/2020/07/24/a-look-at-the-americans-who-believe-there-is-some-truth-to-the-conspiracy-theory-that-covid-19-was-planned/.
  • 7.Clinton J, Cohen J, Lapinski JS, Trussler M. Partisan Pandemic: How Partisanship and Public Health Concerns Affect Individuals’ Social Distancing During COVID-19. Available at SSRN 3633934. 2020. [DOI] [PMC free article] [PubMed]
  • 8. Bakshy E, Messing S, Adamic LA. Exposure to ideologically diverse news and opinion on Facebook. Science. 2015;348(6239):1130–1132. doi: 10.1126/science.aaa1160 [DOI] [PubMed] [Google Scholar]
  • 9.Conover MD, Ratkiewicz J, Francisco M, Gonçalves B, Menczer F, Flammini A. Political polarization on twitter. In: Fifth International AAAI conference on Weblogs and Social Media; 2011.
  • 10. Swire-Thompson B, Lazer D. Public health and online misinformation: challenges and recommendations. Annual Review of Public Health. 2020;41:433–451. doi: 10.1146/annurev-publhealth-040119-094127 [DOI] [PubMed] [Google Scholar]
  • 11.Jurkowitz M, Mitchell A. Fewer Americans now say media exaggerated COVID-19 risks, but big partisan gaps persist; 2020. Available from: https://www.journalism.org/2020/05/06/fewer-americans-now-say-media-exaggerated-covid-19-risks-but-big-partisan-gaps-persist/.
  • 12. Bago B, Rand DG, Pennycook G. Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General. 2020. doi: 10.1037/xge0000729 [DOI] [PubMed] [Google Scholar]
  • 13. Pennycook G, Rand DG. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition. 2019;188:39–50. doi: 10.1016/j.cognition.2018.06.011 [DOI] [PubMed] [Google Scholar]
  • 14. Van Bavel JJ, Baicker K, Boggio PS, Capraro V, Cichocka A, Cikara M, et al. Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour. 2020; p. 1–12. [DOI] [PubMed] [Google Scholar]
  • 15. Zhang H, Vorobeychik Y. Empirically grounded agent-based models of innovation diffusion: a critical review. Artificial Intelligence Review. 2019;52(1):707–741. doi: 10.1007/s10462-017-9577-z [DOI] [Google Scholar]
  • 16.Brainard J, Hunter P, Hall IR. An agent-based model about the effects of fake news on a norovirus outbreak. Revue d’Épidémiologie et de Santé Publique. 2020. [DOI] [PubMed]
  • 17. Kopp C, Korb KB, Mills BI. Information-theoretic models of deception: Modelling cooperation and diffusion in populations exposed to “fake news”. PloS One. 2018;13(11). doi: 10.1371/journal.pone.0207383
  • 18. Ehsanfar A, Mansouri M. Incentivizing the dissemination of truth versus fake news in social networks. In: 2017 12th System of Systems Engineering Conference (SoSE). IEEE; 2017. p. 1–6.
  • 19. Maghool S, Maleki-Jirsaraei N, Cremonini M. The coevolution of contagion and behavior with increasing and decreasing awareness. PloS One. 2019;14(12):e0225447. doi: 10.1371/journal.pone.0225447
  • 20. Festinger L. A theory of cognitive dissonance. vol. 2. Stanford University Press; 1957.
  • 21. Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science. 2020;31(7):770–780. doi: 10.1177/0956797620939054
  • 22. Van Bavel JJ, Pereira A. The partisan brain: An identity-based model of political belief. Trends in Cognitive Sciences. 2018;22(3):213–224. doi: 10.1016/j.tics.2018.01.004
  • 23. Swire-Thompson B, DeGutis J, Lazer D. Searching for the backfire effect: Measurement and design considerations. 2020.
  • 24. Christakis NA, Fowler JH. Social contagion theory: examining dynamic social networks and human behavior. Statistics in Medicine. 2013;32(4):556–577. doi: 10.1002/sim.5408
  • 25. Centola D, Macy M. Complex contagions and the weakness of long ties. American Journal of Sociology. 2007;113(3):702–734. doi: 10.1086/521848
  • 26. Secchi D, Gullekson NL. Individual and organizational conditions for the emergence and evolution of bandwagons. Computational and Mathematical Organization Theory. 2016;22(1):88–133. doi: 10.1007/s10588-015-9199-4
  • 27. Goldberg A, Stein SK. Beyond social contagion: Associative diffusion and the emergence of cultural variation. American Sociological Review. 2018;83(5):897–932. doi: 10.1177/0003122418797576
  • 28. Friedkin NE, Proskurnikov AV, Tempo R, Parsegov SE. Network science on belief system dynamics under logic constraints. Science. 2016;354(6310):321–326. doi: 10.1126/science.aag2624
  • 29. Gramlich J. 20 striking findings from 2020; 2020. Available from: https://www.pewresearch.org/fact-tank/2020/12/11/20-striking-findings-from-2020/.
  • 30. Shannon J. ‘It’s not real’: In South Dakota, which has shunned masks and other COVID rules, some people die in denial, nurse says. USA Today.
  • 31. Porot N, Mandelbaum E. The science of belief: A progress report. Wiley Interdisciplinary Reviews: Cognitive Science. 2020; p. e1539.
  • 32. Brady WJ, Crockett M, Van Bavel JJ. The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralized content online. Perspectives on Psychological Science. 2020;15(4):978–1010. doi: 10.1177/1745691620917336
  • 33. Wiley C. Mindf*ck: Cambridge Analytica and the Plot to Break America. Random House/Penguin Random House LLC; 2019.
  • 34. Lippmann W. Public opinion. vol. 1. Transaction Publishers; 1946.
  • 35. Bernays EL. Propaganda. Ig Publishing; 2005.
  • 36. Chomsky N, Herman ES. Manufacturing consent: The political economy of the mass media. London: Vintage Books; 1994.
  • 37. Kramer AD, Guillory JE, Hancock JT. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences. 2014;111(24):8788–8790. doi: 10.1073/pnas.1320040111
  • 38. Fowler JH, Christakis NA. Cooperative behavior cascades in human social networks. Proceedings of the National Academy of Sciences. 2010;107(12):5334–5338. doi: 10.1073/pnas.0913149107
  • 39. Adamic LA, Lento TM, Adar E, Ng PC. Information evolution in social networks. In: Proceedings of the Ninth ACM International Conference on Web Search and Data Mining; 2016. p. 473–482.
  • 40. Kearney MD, Chiang SC, Massey PM. The Twitter origins and evolution of the COVID-19 “plandemic” conspiracy theory. Harvard Kennedy School Misinformation Review. 2020;1(3).
  • 41. Axelrod R. The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution. 1997;41(2):203–226. doi: 10.1177/0022002797041002001
  • 42. Kempe D, Kleinberg J, Tardos É. Maximizing the spread of influence through a social network. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2003. p. 137–146.
  • 43. DeGroot MH. Reaching a consensus. Journal of the American Statistical Association. 1974;69(345):118–121. doi: 10.1080/01621459.1974.10480137
  • 44. Goffman W, Newill VA. Generalization of epidemic theory: An application to the transmission of ideas. Nature. 1964;204(4955):225–228. doi: 10.1038/204225a0
  • 45. Rosenkopf L, Abrahamson E. Modeling reputational and informational influences in threshold models of bandwagon innovation diffusion. Computational & Mathematical Organization Theory. 1999;5(4):361–384. doi: 10.1023/A:1009620618662
  • 46. Reynolds CW. Flocks, herds and schools: A distributed behavioral model. In: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques; 1987. p. 25–34.
  • 47. Granovetter M. Threshold models of collective behavior. American Journal of Sociology. 1978;83(6):1420–1443. doi: 10.1086/226707
  • 48. Schelling TC. Dynamic models of segregation. Journal of Mathematical Sociology. 1971;1(2):143–186. doi: 10.1080/0022250X.1971.9989794
  • 49. Musco C, Musco C, Tsourakakis CE. Minimizing polarization and disagreement in social networks. In: Proceedings of the 2018 World Wide Web Conference; 2018. p. 369–378.
  • 50. Anunrojwong J, Candogan O, Immorlica N. Social Learning Under Platform Influence: Extreme Consensus and Persistent Disagreement. Available at SSRN. 2020.
  • 51. Abebe R, Chan THH, Kleinberg J, Liang Z, Parkes D, Sozio M, et al. Opinion Dynamics Optimization by Varying Susceptibility to Persuasion via Non-Convex Local Search. ACM Transactions on Knowledge Discovery from Data (TKDD). 2021;16(2):1–34. doi: 10.1145/3466617
  • 52. Webster JG, Ksiazek TB. The dynamics of audience fragmentation: Public attention in an age of digital media. Journal of Communication. 2012;62(1):39–56. doi: 10.1111/j.1460-2466.2011.01616.x
  • 53. Iyengar S, Hahn KS. Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication. 2009;59(1):19–39. doi: 10.1111/j.1460-2466.2008.01402.x
  • 54. Cardenal AS, Aguilar-Paredes C, Galais C, Pérez-Montoro M. Digital technologies and selective exposure: How choice and filter bubbles shape news media exposure. The International Journal of Press/Politics. 2019;24(4):465–486. doi: 10.1177/1940161219862988
  • 55. Arendt F, Northup T, Camaj L. Selective exposure and news media brands: Implicit and explicit attitudes as predictors of news choice. Media Psychology. 2019;22(3):526–543. doi: 10.1080/15213269.2017.1338963
  • 56. Messing S, Westwood SJ. Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Communication Research. 2014;41(8):1042–1063. doi: 10.1177/0093650212466406
  • 57. Karlsen R, Beyer A, Steen-Johnsen K. Do high-choice media environments facilitate news avoidance? A longitudinal study 1997–2016. Journal of Broadcasting & Electronic Media. 2020;64(5):794–814. doi: 10.1080/08838151.2020.1835428
  • 58. Benkler Y, Faris R, Roberts H. Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press; 2018.
  • 59. Goel S, Anderson A, Hofman J, Watts DJ. The structural virality of online diffusion. Management Science. 2016;62(1):180–196.
  • 60. Pfeffer J, Malik MM. Simulating the dynamics of socio-economic systems. In: Networked Governance. Springer; 2017. p. 143–161.
  • 61. Macal CM, North MJ. Agent-based modeling and simulation. In: Proceedings of the 2009 Winter Simulation Conference (WSC). IEEE; 2009. p. 86–98.
  • 62. Scheutz M. Artificial Life Simulations: Discovering and Developing Agent-Based Models. In: Model-Based Approaches to Learning. Brill Sense; 2009. p. 261–292.
  • 63. Scheutz M, Schermerhorn P, Connaughton R, Dingler A. SWAGES: an extendable distributed experimentation system for large-scale agent-based alife simulations. Proceedings of Artificial Life X. 2006; p. 412–419.
  • 64. Ferreira GB, Scheutz M. Accidental encounters: can accidents be adaptive? Adaptive Behavior. 2018;26(6):285–307. doi: 10.1177/1059712318798601
  • 65. Ferreira GB, Scheutz M, Levin M. Modeling Cell Migration in a Simulated Bioelectrical Signaling Network for Anatomical Regeneration. In: Artificial Life Conference Proceedings. MIT Press; 2018. p. 194–201.
  • 66. Gilbert N, Troitzsch K. Simulation for the social scientist. McGraw-Hill Education (UK); 2005.
  • 67. Gilbert N. Agent-based models. vol. 153. Sage Publications; 2019.
  • 68. Centola D, Willer R, Macy M. The emperor’s dilemma: A computational model of self-enforcing norms. American Journal of Sociology. 2005;110(4):1009–1040. doi: 10.1086/427321
  • 69. Centola D, Eguíluz VM, Macy MW. Cascade dynamics of complex propagation. Physica A: Statistical Mechanics and its Applications. 2007;374(1):449–456. doi: 10.1016/j.physa.2006.06.018
  • 70. Begg IM, Anas A, Farinacci S. Dissociation of processes in belief: Source recollection, statement familiarity, and the illusion of truth. Journal of Experimental Psychology: General. 1992;121(4):446. doi: 10.1037/0096-3445.121.4.446
  • 71. Epstein B. Agent-based modeling and the fallacies of individualism. Models, Simulations, and Representations. 2012;9:115–144.
  • 72. Islam MS, Sarkar T, Khan SH, Kamal AHM, Hasan SM, Kabir A, et al. COVID-19–related infodemic and its impact on public health: A global social media analysis. The American Journal of Tropical Medicine and Hygiene. 2020;103(4):1621. doi: 10.4269/ajtmh.20-0812
  • 73. Sikder O, Smith RE, Vivo P, Livan G. A minimalistic model of bias, polarization and misinformation in social networks. Scientific Reports. 2020;10(1):1–11. doi: 10.1038/s41598-020-62085-w
  • 74. Kiesling E, Günther M, Stummer C, Wakolbinger LM. Agent-based simulation of innovation diffusion: a review. Central European Journal of Operations Research. 2012;20(2):183–230. doi: 10.1007/s10100-011-0210-y
  • 75. Ryan B, Gross NC. The diffusion of hybrid seed corn in two Iowa communities. Rural Sociology. 1943;8(1):15.
  • 76. Baldassarri D, Bearman P. Dynamics of political polarization. American Sociological Review. 2007;72(5):784–811. doi: 10.1177/000312240707200507
  • 77. DellaPosta D, Shi Y, Macy M. Why do liberals drink lattes? American Journal of Sociology. 2015;120(5):1473–1511. doi: 10.1086/681254
  • 78. Dandekar P, Goel A, Lee DT. Biased assimilation, homophily, and the dynamics of polarization. Proceedings of the National Academy of Sciences. 2013;110(15):5791–5796. doi: 10.1073/pnas.1217220110
  • 79. Guilbeault D, Becker J, Centola D. Complex contagions: A decade in review. In: Complex Spreading Phenomena in Social Systems. Springer; 2018. p. 3–25.
  • 80. Hegselmann R, Krause U, et al. Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation. 2002;5(3).
  • 81. Deffuant G, Neau D, Amblard F, Weisbuch G. Mixing beliefs among interacting agents. Advances in Complex Systems. 2000;3(01n04):87–98. doi: 10.1142/S0219525900000078
  • 82. Li K, Liang H, Kou G, Dong Y. Opinion dynamics model based on the cognitive dissonance: An agent-based simulation. Information Fusion. 2020;56:1–14.
  • 83. Mitchell A, Jurkowitz M, Oliphant JB, Shearer E. Three Months In, Many Americans See Exaggeration, Conspiracy Theories and Partisanship in COVID-19 News; 2020. Available from: https://www.journalism.org/2020/06/29/three-months-in-many-americans-see-exaggeration-conspiracy-theories-and-partisanship-in-covid-19-news/.
  • 84. Cox EP III. The optimal number of response alternatives for a scale: A review. Journal of Marketing Research. 1980;17(4):407–422. doi: 10.2307/3150495
  • 85. Pereira A, Van Bavel J. Identity concerns drive belief in fake news. 2018.
  • 86. Bail CA, Argyle LP, Brown TW, Bumpus JP, Chen H, Hunzaker MF, et al. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences. 2018;115(37):9216–9221. doi: 10.1073/pnas.1804840115
  • 87. Zajonc RB. Attitudinal effects of mere exposure. Journal of Personality and Social Psychology. 1968;9(2, Pt. 2):1. doi: 10.1037/h0025848
  • 88. Talev M. Axios-Ipsos poll: The skeptics are growing; 2020. Available from: https://www.axios.com/axios-ipsos-poll-gop-skeptics-growing-deaths-e6ad6be5-c78f-43bb-9230-c39a20c8beb5.html.
  • 89. Gertz M. Six different polls show how Fox’s coronavirus coverage endangered its viewers; 2020. Available from: https://www.mediamatters.org/fox-news/six-different-polls-show-how-foxs-coronavirus-coverage-endangered-its-viewers.
  • 90. Giddens A. The constitution of society: Outline of the theory of structuration. Univ of California Press; 1984.
  • 91. Wilensky U. NetLogo; 1999. http://ccl.northwestern.edu/netlogo/.
  • 92. Erdős P, Rényi A. On the evolution of random graphs. Publ Math Inst Hung Acad Sci. 1960;5(1):17–60.
  • 93. Watts DJ, Strogatz SH. Collective dynamics of ‘small-world’ networks. Nature. 1998;393(6684):440–442. doi: 10.1038/30918
  • 94. Barabási AL, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–512. doi: 10.1126/science.286.5439.509
  • 95. Kim M, Leskovec J. Modeling social networks with node attributes using the multiplicative attribute graph model. arXiv preprint arXiv:1106.5053. 2011.
  • 96. Grim P, Singer D. Computational Philosophy. 2020.
  • 97. Ding B, Qian H, Zhou J. Activation functions and their characteristics in deep neural networks. In: 2018 Chinese Control And Decision Conference (CCDC). IEEE; 2018. p. 1836–1841.
  • 98. Ebrahimi R, Gao J, Ghasemiesfeh G, Schoenbeck G. How complex contagions spread quickly in preferential attachment models and other time-evolving networks. IEEE Transactions on Network Science and Engineering. 2017;4(4):201–214. doi: 10.1109/TNSE.2017.2718024
  • 99. Ross B, Pilz L, Cabrera B, Brachten F, Neubaum G, Stieglitz S. Are social bots a real threat? An agent-based model of the spiral of silence to analyse the impact of manipulative actors in social networks. European Journal of Information Systems. 2019;28(4):394–412. doi: 10.1080/0960085X.2018.1560920
  • 100. Matz SC, Kosinski M, Nave G, Stillwell DJ. Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences. 2017;114(48):12714–12719. doi: 10.1073/pnas.1710966114
  • 101. Thaler RH, Sunstein CR. Nudge: Improving decisions about health, wealth, and happiness. Penguin; 2009.
  • 102. Jost JT, Glaser J, Kruglanski AW, Sulloway FJ. Political conservatism as motivated social cognition. Psychological Bulletin. 2003;129(3):339. doi: 10.1037/0033-2909.129.3.339
  • 103. Jost JT, Federico CM, Napier JL. Political ideology: Its structure, functions, and elective affinities. Annual Review of Psychology. 2009;60:307–337. doi: 10.1146/annurev.psych.60.110707.163600
  • 104. Jost JT, Amodio DM. Political ideology as motivated social cognition: Behavioral and neuroscientific evidence. Motivation and Emotion. 2012;36(1):55–64. doi: 10.1007/s11031-011-9260-7
  • 105. Nasrinpour HR, Friesen MR, et al. An agent-based model of message propagation in the Facebook electronic social network. arXiv preprint arXiv:1611.07454. 2016.
  • 106. Shugars S. Good Decisions or Bad Outcomes? A Model for Group Deliberation on Value-Laden Topics. Communication Methods and Measures. 2020; p. 1–19.

Decision Letter 0

Marco Cremonini

18 Jun 2021

PONE-D-21-06938

Cognitive contagion: How to model (and potentially counter) the spread of fake news

PLOS ONE

Dear Dr. Rabb,

Thank you for submitting your manuscript to PLOS ONE. Your manuscript has been carefully reviewed and overall regarded as a valuable contribution by the referees. I personally share the appreciation expressed by the referees and consider this work a valuable contribution to a class of models whose importance is rising and that are still open to many important developments. However, some important aspects in need of further consideration have been commented on. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Among the others, some comments need particular consideration and should be thoroughly addressed in the revised version of the manuscript: 

  • Referee 2 and Referee 3 provided specific and detailed comments about several important references useful to better contextualize your work and make more accurate statements about its contribution;

  • Referee 3 provided detailed comments (see attached file) on agent-based modelling that could greatly improve the analysis and presentation of the work.

Please submit your revised manuscript by Aug 02 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Marco Cremonini, Ph.D.

University of Milan

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please reupload your manuscript as a .docx or editable .pdf file

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper puts forward an agent-based model of opinion dynamics. The main innovation of the paper lies in its assumptions on the behaviour of the agents. Namely, in addition to the mechanistic aspects of information diffusion that are traditionally incorporated in most opinion dynamics models, the paper also makes specific assumptions on the behaviour of the agents, and explicitly models their likelihood to be receptive to new information depending on their current beliefs. This assumption is aimed at modelling phenomena of collective misinformation, such as the refusal to believe in the reality of covid-19.

I find the paper excellent in every possible way. I really like this modelling approach, and I think it's the only one with some hope of yielding empirically testable predictions, something which is sorely lacking in most of the opinion dynamics literature. I think the results presented by the authors are convincing, and I find their analysis (pages 19-22) very thorough and instructive. I basically have no criticisms or major comments, and I believe the paper should be published in a form very close to its current one. I only have a couple of minor suggestions, listed in the following:

1. The only aspects of the model I didn't find entirely convincing are related to its assumptions on institutional agents (let's call them news sources) and their relationships with their subscribers. Is it realistic to assume that a subscriber should keep listening to (as in maintaining their link with) a news source he/she consistently doesn't agree with? If I understand this aspect of the model correctly, it seems that non-institutional agents are not allowed to re-evaluate their links to news sources. Could this be accommodated, e.g., by expanding the number of news sources in order to simulate the effects of a competitive news market? I fully realise these points are probably way beyond the scope of the current paper, but I still think it could be interesting to see them acknowledged/discussed in the final section of the paper.

2. Have the authors considered the scenario in which the parameters in Eq. (6) are drawn from a suitable distribution? I think this would be a very interesting extension of the model towards a more realistic description of a heterogeneous audience. Again, like in the previous point I don't think it's necessary to include additional results in this respect, and I only recommend to include this aspect as a point of discussion.

3. I think the authors should take a look at Sikder et al., "A minimalistic model of bias, polarization and misinformation in social networks", Scientific Reports (2020). As they will see, that paper starts from very similar premises to those of their study, and also makes explicit assumptions about the behaviour of agents with respect to new information depending on their beliefs. The two models are ultimately quite different and focus on different aspects, but it's quite interesting to see how both achieve remarkable consistency in their results across very different network topologies (see, e.g., Fig. 2 in the Scientific Reports paper).

Reviewer #2: Dear Nicholas, Lenore, Jan, and Matthias,

Thank you for giving me the opportunity to review your paper “Cognitive contagion: How to model (and potentially counter) the spread of fake news”. You are tackling an important and timely topic with a rigorous approach, and you reach an insightful conclusion. You point out (correctly, in my view) that traditional models of social contagion generally don’t account for the internal state of the individual, and fail to consider how this internal state influences their adoption decisions. With some minor modifications, I would like to see your paper in print.

pg 4, eqn 1 (and other equations describing changes in beliefs)

- I tripped up here at first because I didn’t recognize that this was an equation describing changes to the state at `t+1` as a function of the state at `t`. It might clarify things to include a subscript for the timestep in these equations, i.e.: p(b_{u,t+1} = … | b_{u,t}) or p(b_{u}(t+1) = … | b_{u}(t)). Also here, can you clarify in the text that ‘u’ is the focal adopter and ‘v’ is the focal exposer?

Pg 5: Complex contagion

There is some ambiguity in the literature about the precise meaning of “complex contagion”, and how it captures the need for social reinforcement.

Kempe, Kleinberg and Tardos (2003) (https://www.cs.cornell.edu/home/kleinber/kdd03-inf.pdf) articulate the difference between a (proportional) threshold model, and an independent cascade model (which most folks would call ‘simple contagion’). Their description of a threshold model is what you (and a lot of other papers) describe as complex contagion.

Centola and Macy actually have a different requirement for “complex” contagion than either mentioned by KKT: that a minimum absolute number (not fraction!) of individuals need to expose you to the belief before you adopt. This is important to their argument because they are making the claim that small world networks don’t lead to fast diffusion unless you have “wide bridges” across the network. (Of course, there are some new papers suggesting this falls apart if you have a stochastic decision rule, so maybe we should take it with a grain of salt.)

- I think if you want to use the term “complex contagion”, that’s probably ok (given the ambiguity in common usage); just note that you’re using the proportional threshold interpretation, not the strict requirement from the Centola and Macy paper, which you’re citing at the moment. Even better, contribute to making the term less ambiguous by differentiating between Centola’s complex contagion and what’s in the Schelling model.

- The other place it will be relevant is in your section on complex contagion in WS networks (pg 15). You say that “interestingly, complex contagion was successful on the WS graphs”. I’m guessing that this is interpreted as surprising because of the Centola and Macy finding that complex contagion should be slower than simple contagion in small-world networks. I think the resolution is that you’re using a different interpretation of complex contagion (and that WS networks are not degree regular). I would suggest you choose one of the two usages and stick with it throughout.
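For concreteness, the two threshold readings at issue can be contrasted in a minimal sketch (illustrative Python with hypothetical function names; neither rule is code from the manuscript):

```python
def adopts_fractional(believing_neighbors, total_neighbors, phi=0.25):
    """Proportional threshold (the Kempe-Kleinberg-Tardos-style reading):
    adopt once at least a fraction phi of neighbors hold the belief."""
    return believing_neighbors / total_neighbors >= phi

def adopts_absolute(believing_neighbors, k=2):
    """Centola & Macy's complex contagion: adopt only after an absolute
    number k of distinct neighbors provide reinforcement."""
    return believing_neighbors >= k

# A low-degree node with 1 of 3 believing neighbors illustrates the gap:
# the fractional rule fires (1/3 >= 0.25) while the absolute rule does not.
fires_fractional = adopts_fractional(1, 3)
fires_absolute = adopts_absolute(1)
```

On graphs with heterogeneous degrees (WS rewiring, BA hubs), the two rules diverge most for low-degree nodes, which is where choosing one interpretation and sticking with it matters for the WS result.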

Pg. 6: Cognitive contagion model

DeGroot 1974 (https://www.jstor.org/stable/2285509) has a classic model of updating degrees of belief in response to neighbors’ beliefs, with some weighting on neighbors. Your model could be considered an extension of the DeGroot model that makes the weights on individuals an increasing function of the similarity between individuals, i.e. accounting for homophily. In this vein, you probably also want to look at Dandekar, Goel and Lee 2013 (https://www.pnas.org/content/110/15/5791) and see whether you agree with their conclusions, and if not, why not.
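A sketch of this kind of extension (illustrative Python; the exponential similarity weighting is an assumption chosen for the example, not the manuscript's Eq. (6)):

```python
import math

def homophily_weights(beliefs, sharpness=4.0):
    """Hypothetical weighting: trust in neighbor j decays exponentially
    with belief distance |b_i - b_j|; each row is renormalized to sum to 1."""
    rows = []
    for bi in beliefs:
        raw = [math.exp(-sharpness * abs(bi - bj)) for bj in beliefs]
        total = sum(raw)
        rows.append([w / total for w in raw])
    return rows

def degroot_step(beliefs, trust):
    """Classic DeGroot update: each agent adopts the trust-weighted
    average of everyone's current beliefs (self included)."""
    return [sum(w * bj for w, bj in zip(row, beliefs)) for row in trust]

# Two like-minded agents and one distant agent on a [0, 1] belief scale:
b = [0.0, 0.2, 1.0]
trust = homophily_weights(b)
b1 = degroot_step(b, trust)
# The close pair average toward each other; the distant agent barely moves.
```

With constant weights this reduces to plain DeGroot consensus; making the weights similarity-dependent is what lets clusters persist, which is the comparison suggested above.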

Axelrod 1997 (https://journals.sagepub.com/doi/abs/10.1177/0022002797041002001) has a model that accounts for individuals paying more attention to similar individuals, as does DellaPosta 2015 (https://www.jstor.org/stable/10.1086/681254) and Baldassarri and Bearman (https://www.jstor.org/stable/25472492). In these cases, homophily is conceptualized as similarity on other belief statements, rather than just on a single belief. The homophily element of their work is similar to what you’re doing here with a single belief. If homophily itself is the driving factor, your simulations should give the same results. If it’s about gradual belief updating, you may get different results.

There are a couple folks looking at within-person interactions between beliefs (a third way to conceptualize the interaction between existing beliefs and adoption decisions!). Goldberg and Stein 2018 (https://journals.sagepub.com/doi/abs/10.1177/0003122418797576) is a good start, as is Friedkin et al 2016 (https://science.sciencemag.org/content/354/6310/321/tab-figures-data) - although Friedkin’s model suffers from an ambiguity in whether the outcome is due to social contagion or to the assumed logic constraints. (I’ve done some work on this too (https://arxiv.org/abs/2010.02188) but please don’t read this as grubbing for citations - if you want to cite someone on those ideas, go with the folks above…)

- Please check that your results are insensitive to the number of levels in your model. 7 points is arbitrary (which is fine) but we don’t expect it to be a faithful representation of what goes on in people’s heads (regardless of what Likert thinks). Sometimes these things matter - just make sure that it doesn’t matter here. Make a 100-point model or something that approximates a continuous scale, just to be sure. You can put it in the supplement, and put a note in the main body to say that you checked.

- Another consequential parameter choice is the ’stubbornness’ of individuals. You do a good job exploring different options here, but your justification for why the stubborn parameters are the ones you carry forward is grounded in the outcomes you want to see. Again, it’s not a problem to do so if the purpose of the model is to highlight a possible outcome and describe when it is likely to occur. But, it does seem a bit like sampling on the dependent variable. Can you either justify why we should expect this to be the right choice without reference to the simulation outcomes (you could use a micro-level analysis with a single agent) or just make this assumption explicit? Something like “sometimes, people are stubborn. When this is true, here’s what we expect to happen”. Then you’re just making a micro-level assumption and exploring its consequences, rather than trying to say “the world is this way…”.
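Such a single-agent micro-level probe might look like the following (a generic logistic acceptance function of belief distance; the functional form and parameter names are illustrative, not the manuscript's exact Eq. (6)):

```python
import math

def accept_prob(own_belief, message_belief, gamma=1.0, offset=2.0):
    """Probability of adopting a message falls logistically with the
    distance between current belief and message belief; larger gamma
    means a steeper, more 'stubborn' agent."""
    distance = abs(own_belief - message_belief)
    return 1.0 / (1.0 + math.exp(gamma * (distance - offset)))

# Micro-level probe on a 7-point (0-6) scale: a stubborn agent (gamma=5)
# still accepts nearby messages but essentially never accepts distant
# ones, independent of any population-level outcome.
flexible = accept_prob(0, 6, gamma=0.5)
stubborn = accept_prob(0, 6, gamma=5.0)
```

Framing the parameter choice this way ("when agents behave like this curve, here is what follows") states the micro-level assumption without appealing to the simulation outcomes.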

- I’m surprised 10 simulation runs is enough to get a stable result. I usually end up increasing the number of runs (10, 20, 50, 100, 200, etc) until I don’t see any difference in the resulting averages, and then do 2-10 times as many as that. If you have done thousands and found the same results, you can say that you did them, but that the results only show the results of 10 because the effects are so robust. If you haven’t done a large number, it might be a good idea, as they aren’t expensive.
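The doubling procedure described above can be operationalized as a simple stopping heuristic (illustrative Python; the tolerance, cap, and noisy stand-in simulation are assumptions for the example):

```python
import random
import statistics

def stable_run_count(simulate, start=10, tol=0.01, max_runs=100_000):
    """Double the batch size until the batch mean shifts by less than
    tol between consecutive batches, then report batch size and mean."""
    n = start
    prev = statistics.mean(simulate() for _ in range(n))
    cur = prev
    while n < max_runs:
        n *= 2
        cur = statistics.mean(simulate() for _ in range(n))
        if abs(cur - prev) < tol:
            return n, cur
        prev = cur
    return n, cur  # hit the cap without stabilizing

# Example with a noisy stand-in for a simulation outcome near 0.5:
random.seed(0)
n, mean = stable_run_count(lambda: random.gauss(0.5, 0.1))
```

A final batch of 2 to 10 times the stabilizing size, as suggested, then gives comfortable headroom.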

- Do you pick a random agent to be the broadcaster, or are they different entities in your model? I had trouble working that out.

- Fig 3. beliefs don’t match what’s in the text (in the text, u is 1, but they are exposed to 6, in the fig, they believe 6 but are exposed to 0). This had me confused for a bit as I wasn’t sure I was reading the figure correctly.

Pg. 9: Contagion experiments

- I’m aware that the term “experiments” is sometimes used to describe running simulations under different conditions, and in the broadest sense (trying something to see what happens) it’s an appropriate term. However, I do feel that as a community it would be useful to distinguish between computer-assisted “gedanken-experiments” and experiments that make manipulations in a lab or in the real world with human participants. The first is for theory building - an essential part of the scientific division of labor - and the second for theory testing. I certainly won’t twist your arm, but you may find it helpful to be more explicit that you are exploring the macro-level consequences of a micro-level assumption, using a simulation to overcome the brain’s inability to see these consequences on its own. This will clarify your contribution for readers, as they’ll know what to expect in the next section.

- Do your behavior over time charts (Fig 6. - 16.) show a true T0? I.e. the distribution of individual beliefs *before* any adoption has taken place? I would imagine that I should see the T0 for all charts to be identical (same starting conditions) and fairly similar to the bottom charts in Fig 9. If I’m not mistaken, please include those t0 belief distributions in your plots - it would be helpful to know where individuals are coming from.

pg 14. Section on comparing contagion models

- You vary social network structure and then describe the qualitative differences of your model in each of these graphs. We know that different network structures yield differently shaped diffusion curves, and so this isn’t your point. Instead, I believe you are suggesting that the differences between network structures under the traditional contagion models are not the same as the differences under your own model. This is a pretty complicated comparison to make, and as the text currently reads, I personally (with a reviewer’s normal cognitive impairments…) have trouble understanding what I should take away. Specifically, I’m having a hard time distinguishing the effect of your model from the effect of the changes to the network structure, in this section in particular. One idea might be to make a table with different adoption rules on one axis, and different social networks on the other, and for a specific metric compare the differences across conditions. Then you can highlight why your model changes our expectation about the effect of network structure. You could have multiple tables for different metrics.

pg. 19 Section on Analysis of Results

- Your analysis suggests that because there is almost always a path for information to reach an individual from the broadcast node, then the internal logic of an individual’s decision rule is more determinant of the outcome than the social network structure (if I understand it correctly!). A good comparison to make might be a case where there is no social influence at all, essentially an individual adoption model from the “broadcast” source, i.e. a star network with the broadcast source at the center. Then you can compare the effects of the other network structures to see which components of the outcome are due to the characteristics exemplified in each of the networks (clustering, short path lengths, high/low degree, etc).

The challenge here is that this sets up a horse race between the effects of individual cognition, and the effects of network structure. You’re not in a position to really adjudicate between these two effects, as in your simulation the relative magnitudes will be entirely dependent on the parameters of “stubbornness” that get selected in the model. You could turn this into an opportunity, however, by describing (qualitatively) the conditions under which we should expect one of the effects to dominate over the other. Then you get to say something like “and so an important piece of empirical research will be to determine which of the two regimes the types of social contagion we care about fall into”.

Pg. 22 Discussion

- I appreciate your disclaimer that you are not making policy recommendations based on your results. It is fully appropriate to acknowledge that you are building theory that we as a community should subject to test before using it to derive policy. I also understand how (academically) we feel pressure to say that our work is “policy relevant”. Unfortunately, I think our community has a tendency to try and have it both ways, and try and both make policy claims and then hedge them at the same time, and it’s never really that clean. I think you may find it more comfortable to move fully away from the “policy recommendation” language, and instead describe how your work sets up what is arguably a very interesting follow-on study to confirm your understanding. (Plos one gives you this luxury - take advantage of it!) Then your description of an intervention can be wholeheartedly about an intervention in an experimental context, without the need for the disclaimer. You can describe how an experiment would differ from individual-level behavioral interventions in the psychology literature, and what that would add to our understanding. Best yet, you’d have your next paper lined up nicely.

Thank you again for reading through what has become an inexcusably long review. I’m certain that you can address the questions I have without too much effort, and I look forward to seeing the revisions.

James Houghton

Reviewer #3: This paper presents a model of diffusion that is grounded in the cognition of each agent. By comparing different graph types under exposure to information/beliefs coming from an institutional source, the authors try to show how adding complexity to the definition of an agent may map more closely onto what happened, for example, with beliefs around COVID-19.

While certainly interesting and timely, the article presents a series of concerns that make the message a bit less effective than it could be. The most relevant concerns I had are those around (a) the claim that more complex agents are something “new” in this model, while the ABM community has been discussing them since its beginnings, (b) the use of ABM (especially the “why” of ABM), (c) the complete lack of reference to description and reporting standards in presenting the ABM, and (d) the report of results, which shows that you are probably trying to do too much with just one paper.

I have detailed what I mean by this in the attached file; I hope you find the comments useful. I enjoyed reading your paper. Best of luck with your research!

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Giacomo Livan

Reviewer #2: Yes: James Houghton

Reviewer #3: Yes: Davide Secchi

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Review of PONE-D-21-06938.pdf

PLoS One. 2022 Jan 7;17(1):e0261811. doi: 10.1371/journal.pone.0261811.r002

Author response to Decision Letter 0


25 Aug 2021

We want to thank all three reviewers for comments, suggested revisions, and references that greatly improved our revised draft of the paper. We would like to point out that some comments from reviewers 2 and 3 have led us to change our title and a main term in the paper: in particular, we now call our method a “cognitive cascade” instead of “cognitive contagion” (as they have pointed us to literature that also uses simple cognitive models in agents), and we have renamed some of the contagion models we compare against to be more consistent with the terminology of past work (see our comments to reviewer 2 below regarding “complex” versus proportional threshold contagion models). More detailed comments to individual reviewers appear next.

==========

REVIEWER 1

==========

Reviewer #1: The paper puts forward an agent-based model of opinion dynamics. The main innovation of the paper lies in its assumptions on the behaviour of the agents. Namely, in addition to the mechanistic aspects of information diffusion that are traditionally incorporated in most opinion dynamics models, the paper also makes specific assumptions on the behaviour of the agents, and explicitly models their likelihood to be receptive to new information depending on their current beliefs. This assumption is aimed at modelling phenomena of collective misinformation, such as the refusal to believe in the reality of covid-19.

I find the paper excellent in every possible way. I really like this modelling approach, and I think it's the only one with some hope of yielding empirically testable predictions, something which is sorely lacking in most of the opinion dynamics literature. I think the results presented by the authors are convincing, and I find their analysis (pages 19-22) very thorough and instructive. I basically have no criticisms or major comments, and I believe the paper should be published in a form very close to its current one. I only have a couple of minor suggestions, listed in the following:

==== OUR RESPONSE ====

Thank you very much for the kind words!

==== REVIEWER ====

1. The only aspects of the model I didn't find entirely convincing are related to its assumptions on institutional agents (let's call them news sources) and their relationships with their subscribers. Is it realistic to assume that a subscriber should keep listening to (as in maintaining their link with) a news source he/she consistently doesn't agree with? If I understand this aspect of the model correctly, it seems that non-institutional agents are not allowed to re-evaluate their links to news sources. Could this be accommodated, e.g., by expanding the number of news sources in order to simulate the effects of a competitive news market? I fully realise these points are probably way beyond the scope of the current paper, but I still think it could be interesting to see them acknowledged/discussed in the final section of the paper.

==== OUR RESPONSE ====

We agree that a more realistic model would include a more complex relationship between how individuals link to new sources, and allow multiple news sources and include switching dynamics. We have now added the requested paragraph (along with some relevant references we found to cite) in the discussion.

==== REVIEWER ====

2. Have the authors considered the scenario in which the parameters in Eq. (6) are drawn from a suitable distribution? I think this would be a very interesting extension of the model towards a more realistic description of a heterogeneous audience. Again, like in the previous point I don't think it's necessary to include additional results in this respect, and I only recommend to include this aspect as a point of discussion.

==== OUR RESPONSE ====

We added language advocating for this as a potential extension of the model in our discussion section:

“Our cognitive contagion could also be made more complex in ways that would lend themselves to interesting analysis. For one, there could be added complexity when it comes to agent prior beliefs, or cognitive contagion functions. For the former, agent priors could be drawn from distributions other than the uniform distribution. Concerning the latter, parameters in the contagion functions themselves could be distributed to lead to varying levels of ``gullible'' versus ``stubborn'' agents -- rendering the graph even more heterogeneous. These techniques are common in opinion diffusion models (Zhang & Vorobeychik, 2019). Prior agent beliefs or contagion function parameters could otherwise be initialized from empirical data.”

==== REVIEWER ====

3. I think the authors should take a look at Sikder et al., "A minimalistic model of bias, polarization and misinformation in social networks", Scientific Reports (2020). As they will see, that paper starts from very similar premises to those of their study, and also makes explicit assumptions about the behaviour of agents with respect to new information depending on their beliefs. The two models are ultimately quite different and focus on different aspects, but it's quite interesting to see how both achieve remarkable consistency in their results across very different network topologies (see, e.g., Fig. 2 in the Scientific Reports paper).

==== OUR RESPONSE ====

This is a great reference -- thank you for the suggestion. We now cite this in both the end of the introduction and in discussion and agree that it is very related and relevant work.

==========

REVIEWER 2

==========

Reviewer #2: Dear Nicholas, Lenore, Jan, and Matthias,

Thank you for giving me the opportunity to review your paper “Cognitive contagion: How to model (and potentially counter) the spread of fake news”. You are tackling an important and timely topic with a rigorous approach, and yield an insightful conclusion. You point out (correctly, in my view) that traditional models of social contagion generally don’t account for the internal state of the individual, and fail to consider how this internal state influences their adoption decisions. With some minor modifications, I would like to see your paper in print.

==== OUR RESPONSE ====

Thank you for the kind words!

==== REVIEWER ====

pg 4, eqn 1 (and other equations describing changes in beliefs)

- I tripped up here at first because I didn’t recognize that this was an equation describing changes to the state at `t+1` as a function of the state at `t`. It might clarify things to include a subscript for the timestep in these equations, i.e.: p(b_{u,t+1} = … | b_{u,t}) or p(b_u(t+1) = … | b_u(t)). Also here, can you clarify in the text that ‘u’ is the focal adopter and ‘v’ is the focal exposer?

==== OUR RESPONSE ====

We agree that this suggested change in notation would improve clarity. We updated equations (1), (3), (4), (5), (6), and (7) accordingly, and added some clarifying language when the terms are introduced before equation (1).

==== REVIEWER ====

Pg 5: Complex contagion

There is some ambiguity in the literature about the precise meaning of “complex contagion”, and how it captures the need for social reinforcement.

Kempe, Kleinberg and Tardos (2003) (https://www.cs.cornell.edu/home/kleinber/kdd03-inf.pdf) articulate the difference between a (proportional) threshold model, and an independent cascade model (which most folks would call ‘simple contagion’). Their description of a threshold model is what you (and a lot of other papers) describe as complex contagion.

Centola and Macy actually have a different requirement for “complex” contagion than either mentioned by KKT: that a minimum absolute number (not fraction!) of individuals need to expose you to the belief before you adopt. This is important to their argument because they are making the claim that small world networks don’t lead to fast diffusion unless you have “wide bridges” across the network. (Of course, there are some new papers suggesting this falls apart if you have a stochastic decision rule, so maybe we should take it with a grain of salt.)

- I think if you want to use the term “complex contagion”, that’s probably ok, (given the ambiguity in common usage) just note that you’re using the proportional threshold interpretation, not the strict requirement from the Centola and Macy paper, which you’re citing at the moment. Even better, contribute to making the term less ambiguous by differentiating between Centola’s complex contagion and what’s in the Schelling model.

- The other place it will be relevant is in your section on complex contagion in WS networks (pg 15). You say that “interestingly, complex contagion was successful on the WS graphs”. I’m guessing that this is interpreted as surprising because of the Centola and Macy finding that complex contagion should be slower than simple contagion in small-world networks. I think the resolution is that you’re using a different interpretation of complex contagion (and that WS networks are not degree regular). I would suggest you choose one of the two usages and stick with it throughout.

==== OUR RESPONSE ====

We agree that it would be valuable to tease apart these two types of contagion more clearly in our description. We added language in several places to address this. In the second paragraph of the Simple Contagion section we now write: “There are two popular types of social contagion models used in ABMs: simple contagion -- also called independent cascade -- and complex contagion -- which has proportional and absolute variations.”

In the first paragraph of the Complex Contagion section: “There are two major variations of complex contagion: what is typically called a proportional threshold contagion, and what we call an absolute threshold contagion. Proportional threshold contagion requires that some $\\alpha$ proportion of neighbors believe something for the ego agent $u$ to believe it. Absolute threshold contagion, on the other hand, requires some whole number $\\eta$ of neighbors who must believe something in order for the ego $u$ to believe it (Centola & Macy 2007; Granovetter, 1978). We choose to use the proportional threshold model for our examples and in-silico experiments.”

Additionally, we changed our use of “complex contagion” throughout the paper to “proportional threshold contagion.”
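To make the distinction in the quoted text concrete, here is a minimal sketch of the two threshold rules. Only the parameters $\alpha$ (proportion) and $\eta$ (whole number of neighbors) come from the text above; the function names and the example neighborhood are our own illustration, not the paper's actual implementation.

```python
def proportional_threshold_adopts(neighbor_beliefs, belief, alpha):
    """Ego adopts iff at least an alpha *fraction* of neighbors hold the belief."""
    holders = sum(1 for b in neighbor_beliefs if b == belief)
    return holders / len(neighbor_beliefs) >= alpha

def absolute_threshold_adopts(neighbor_beliefs, belief, eta):
    """Ego adopts iff at least eta neighbors (a whole *number*) hold the belief."""
    holders = sum(1 for b in neighbor_beliefs if b == belief)
    return holders >= eta

# The same neighborhood can trigger one rule but not the other:
neighbors = [6, 6, 0, 0, 0, 0, 0, 0, 0, 0]  # 2 of 10 neighbors hold belief 6
print(proportional_threshold_adopts(neighbors, 6, alpha=0.5))  # False (0.2 < 0.5)
print(absolute_threshold_adopts(neighbors, 6, eta=2))          # True (2 >= 2)
```

The example shows why the distinction matters for the Centola and Macy argument mentioned by the reviewer: in a high-degree neighborhood, a fixed absolute threshold is much easier to meet than a proportional one.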

==== REVIEWER ====

Pg. 6: Cognitive contagion model

DeGroot 1974 (https://www.jstor.org/stable/2285509) has a classic model of updating degrees of belief in response to neighbors beliefs, with some weighting on neighbors. Your model could be considered an extension of the DeGroot model that makes the weights on individuals an increasing function of difference between individuals, ie. accounting for homophily. In this vein, you probably also want to look at Dandekar, Goel and Lee 2013 (https://www.pnas.org/content/110/15/5791) and see whether you agree with their conclusions, and if not, why not.

==== OUR RESPONSE ====

We now cite these valuable references. In addition, we added a new section to the background, called Cognitive Contagion, outlining contributions from these models. We also added language in the discussion to comment on how our results align with others, kindly provided by the reviewers.

==== REVIEWER ====

Axelrod 1997 (https://journals.sagepub.com/doi/abs/10.1177/0022002797041002001) has a model that accounts for individuals paying more attention to similar individuals, as does DellaPosta 2015 (https://www.jstor.org/stable/10.1086/681254) and Baldassarri and Bearman (https://www.jstor.org/stable/25472492). In these cases, homophily is conceptualized as similarity on other belief statements, rather than just on a single belief. The homophily element of their work is similar to what you’re doing here with a single belief. If homophily itself is the driving factor, your simulations should give the same results. If it’s about gradual belief updating, you may get different results.

There are a couple folks looking at within-person interactions between beliefs (a third way to conceptualize the interaction between existing beliefs and adoption decisions!). Goldberg and Stein 2018 (https://journals.sagepub.com/doi/abs/10.1177/0003122418797576) is a good start, as is Friedkin et al 2016 (https://science.sciencemag.org/content/354/6310/321/tab-figures-data) - although Friedkin’s model suffers from an ambiguity in whether the outcome is due to social contagion or to the assumed logic constraints. (I’ve done some work on this too (https://arxiv.org/abs/2010.02188) but please don’t read this as grubbing for citations - if you want to cite someone on those ideas, go with the folks above…)

==== OUR RESPONSE ====

Again, thanks for the treasure trove of helpful references, which we now cite. It definitely helps to put our work in better context.

==== REVIEWER ====

- Please check that your results are insensitive to the number of levels in your model. 7 points is arbitrary (which is fine) but we don’t expect it to be a faithful representation of what goes on in people’s heads (regardless of what Likert thinks). Sometimes these things matter - just make sure that it doesn’t matter here. Make a 100-point model or something that approximates a continuous scale, just to be sure. You can put it in the supplement, and put a note in the main body to say that you checked.

==== OUR RESPONSE ====

As suggested, we have conducted further experiments to ensure that the behavior of the cascades and contagion was similar for different resolutions of belief: with $b$ divided into 2, 3, 5, 7, 9, 16, 32, or 64 discrete points. We find the results are stable up until 16 points, but once the model is closer to continuous (even 64 points), we find that our way of modeling news sources is overwhelmed and we don’t see the same dynamics. We included resultant graphs in the Supplemental Materials (S11 Text & S12-S21 Fig) and made note of them in the main text: “We additionally ran our later in-silico experiments with lower and higher ``resolutions'' of belief: with $b$ able to take integer values between 0 and 2, 3, 5, 7, 9, 16, 32, and 64. Results from those belief resolutions are shown in S12-S21 Figs.”

==== REVIEWER ====

- Another consequential parameter choice is the ‘stubbornness’ of individuals. You do a good job exploring different options here, but your justification for why the stubborn parameters are the ones you carry forward is grounded in the outcomes you want to see. Again, it’s not a problem to do so if the purpose of the model is to highlight a possible outcome and describe when it is likely to occur. But, it does seem a bit like sampling on the dependent variable. Can you either justify why we should expect this to be the right choice without reference to the simulation outcomes (you could use a micro-level analysis with a single agent) or just make this assumption explicit? Something like “sometimes, people are stubborn. When this is true, here’s what we expect to happen”. Then you’re just making a micro-level assumption and exploring its consequences, rather than trying to say “the world is this way…”.

==== OUR RESPONSE ====

We have now added language in the second to last paragraph of the “Sigmoid Function” subsection of “Motivating Choice of $\\beta$ Functions” in response to this critique, as follows: “These agents act in a way akin to what we observe in the cognitive literature (Swire-Thompson et al., 2020; Festinger, 1957), albeit in a highly simplified manner: an agent who ``strongly disbelieves'' something like COVID mask-wearing will likely only be swayed by a message that ``disbelieves'' or is ``uncertain'' about the belief. On the individual level, a maximum of two relative magnitudes of belief separation, with decreasing probabilities as distance increases, seems to qualitatively match empirical work. In our simulations using more than 7 points on a belief spectrum, this argument can still be held by setting equivalent belief ``markers'' along the spectrum and using those to scale the contagion function.”

==== REVIEWER ====

- I’m surprised 10 simulation runs is enough to get a stable result. I usually end up increasing the number of runs (10, 20, 50, 100, 200, etc) until I don’t see any difference in the resulting averages, and then do 2-10 times as many as that. If you have done thousands and found the same results, you can say that you did them, but that you only show the results of 10 because the effects are so robust. If you haven’t done a large number, it might be a good idea, as they aren’t expensive.

==== OUR RESPONSE ====

To answer this point, we repeated all the in-silico experiments in the “Comparing Contagion Methods” section using 50 and 100 iterations, instead of 10. As expected, we found that results did not change significantly. We now include these results in the Supplemental materials (S22 Text, S23 Table, and S24 Fig). We also added text acknowledging this in the second to last paragraph in the “Experiment Design” subsection of the main text: “We display results aggregated over 10 simulations here, with results from 50 and 100 simulations and justification for using 10 in S22 Text, S23 Table, and S24 Fig.”
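The stopping rule the reviewer describes (grow the run count until the averages stop moving) can be sketched as follows. The simulation stub, the run-count schedule, and the tolerance are placeholders of our own, not the paper's actual model or values.

```python
import random
import statistics

def run_simulation(seed):
    # Hypothetical stand-in for one full cascade simulation; imagine it
    # returns a summary metric such as the final fraction of believing agents.
    return random.Random(seed).gauss(0.6, 0.05)

def stable_run_count(schedule=(10, 20, 50, 100, 200), tol=0.01):
    """Increase the number of runs until the mean metric moves by less than tol."""
    prev = None
    for n in schedule:
        mean = statistics.mean(run_simulation(s) for s in range(n))
        if prev is not None and abs(mean - prev) < tol:
            return n  # averages have stabilized at this run count
        prev = mean
    return schedule[-1]

print(stable_run_count())
```

In practice one would then run a safety multiple (the reviewer suggests 2-10x) of the count this procedure returns.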

==== REVIEWER ====

- Do you pick a random agent to be the broadcaster, or are they different entities in your model? I had trouble working that out.

==== OUR RESPONSE ====

We agree that this was confusing in the original draft. We now clarify that “There is a separate set of institutional agents $I$ -- entirely different entities in the ontology of our model -- which have directed edges to a set of ``subscribers’’ $S \\subseteq V$...”

==== REVIEWER ====

- Fig 3. beliefs don’t match what’s in the text (in the text, u is 1, but they are exposed to 6, in the fig, they believe 6 but are exposed to 0). This had me confused for a bit as I wasn’t sure I was reading the figure correctly.

==== OUR RESPONSE ====

We have modified the text and hopefully made it less confusing, as follows: ”Perhaps an agent $u$ who strongly believes the proposition ($b_u = 6$) will not switch immediately to strongly disbelieving it without passing through an intermediary step of uncertainty. Given a neighbor $v$ sharing belief $b_v = 0$, agent $u$ should not adopt this belief strength, because the difference in belief strengths is clearly greater than $\\gamma$. Simple contagion would fall short because agent $u$ may simply randomly become ``infected'' with belief strength 0 by $v$ with some probability $p$. A proportional threshold contagion would similarly falter if agent $u$ were entirely surrounded by alters with belief strength 0.”
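The scenario in the quoted text can be sketched as a minimal adoption rule. The belief values 0 and 6 and the distance threshold $\gamma$ come from the text above; the function names, the default $\gamma = 1$, and the simple-contagion contrast are our own illustrative assumptions.

```python
import random

def cognitive_adopts(b_u, b_v, gamma=1):
    """Ego u adopts neighbor v's belief only if it lies within gamma of u's own."""
    return abs(b_u - b_v) <= gamma

def simple_contagion_adopts(p, rng):
    """Simple contagion: infection with probability p, ignoring u's current belief."""
    return rng.random() < p

# The quoted scenario: u strongly believes (b_u = 6), a neighbor shares
# b_v = 0, and the belief distance |6 - 0| > gamma blocks adoption.
print(cognitive_adopts(6, 0))  # False: the belief gap exceeds gamma
print(cognitive_adopts(6, 5))  # True: an adjacent belief can still spread
```

Under `simple_contagion_adopts`, by contrast, the same agent could flip from 6 to 0 in one step with probability `p`, which is exactly the behavior the revised passage argues against.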

==== REVIEWER ====

Pg. 9: Contagion experiments

- I’m aware that the term “experiments” is sometimes used to describe running simulations under different conditions, and in the broadest sense (trying something to see what happens) it’s an appropriate term. However, I do feel that as a community it would be useful to distinguish between computer-assisted “gedanken-experiments” and experiments that make manipulations in a lab or in the real world with human participants. The first is for theory building - an essential part of the scientific division of labor - and the second for theory testing. I certainly won’t twist your arm, but you may find it helpful to be more explicit that you are exploring the macro-level consequences of a micro-level assumption, using a simulation to overcome the brain’s inability to see these consequences on its own. This will clarify your contribution for readers, as they’ll know what to expect in the next section.

==== OUR RESPONSE ====

Thank you for the suggestion -- we agree. We changed the term “experiment” to “in-silico experiment” throughout the text in appropriate areas, notably in the section header “In-Silico Contagion Experiments.”

==== REVIEWER ====

- Do your behavior over time charts (Fig 6. - 16.) show a true T0? I.e. the distribution of individual beliefs *before* any adoption has taken place? I would imagine that I should see the T0 for all charts to be identical (same starting conditions) and fairly similar to the bottom charts in Fig 9. If I’m not mistaken, please include those t0 belief distributions in your plots - it would be helpful to know where individuals are coming from.

==== OUR RESPONSE ====

This is a very useful suggestion. We updated all of our graphs to display the equivalent of 10 time steps of the initial belief distribution before it begins to change at t0. We think this is substantially clearer.

==== REVIEWER ====

pg 14. Section on comparing contagion models

- You vary social network structure and then describe the qualitative differences of your model in each of these graphs. We know that different network structures yield differently shaped diffusion curves, and so this isn’t your point. Instead, I believe you are suggesting that the differences between network structures under the traditional contagion models are not the same as the differences under your own model. This is a pretty complicated comparison to make, and as the text currently reads, I personally (with a reviewer’s normal cognitive impairments…) have trouble understanding what I should take away. Specifically, I’m having a hard time distinguishing the effect of your model from the effect of the changes to the network structure, in this section in particular. One idea might be to make a table with different adoption rules on one axis, and different social networks on the other, and for a specific metric compare the differences across conditions. Then you can highlight why your model changes our expectation about the effect of network structure. You could have multiple tables for different metrics.

==== OUR RESPONSE ====

We agree that the takeaway at the end of the comparison was confusing. We have added text in the “Analysis of Results” section under a subsection called “Belief Contagion” which lays out a statistical analysis of the results. Details about the analysis can be found in Supplemental Materials (S25 Text).

==== REVIEWER ====

pg. 19 Section on Analysis of Results

- Your analysis suggests that because there is almost always a path for information to reach an individual from the broadcast node, then the internal logic of an individual’s decision rule is more determinant of the outcome than the social network structure (if I understand it correctly!). A good comparison to make might be a case where there is no social influence at all, essentially an individual adoption model from the “broadcast” source, i.e. a star network with the broadcast source at the center. Then you can compare the effects of the other network structures to see which components of the outcome are due to the characteristics exemplified in each of the networks (clustering, short path lengths, high/low degree, etc).

The challenge here is that this sets up a horse race between the effects of individual cognition, and the effects of network structure. You’re not in a position to really adjudicate between these two effects, as in your simulation the relative magnitudes will be entirely dependent on the parameters of “stubbornness” that get selected in the model. You could turn this into an opportunity, however, by describing (qualitatively) the conditions under which we should expect one of the effects to dominate over the other. Then you get to say something like “and so an important piece of empirical research will be to determine which of the two regimes the types of social contagion we care about fall into”.

==== OUR RESPONSE ====

We agree with this framing, and have added language in our discussion to address it at the end of the second to last paragraph: “These theoretical results were confirmed in supplemental experiments where graphs with high homophily and low node degree yielded less contagion -- even with the DCC function. The interplay between network structure and cognitive function (e.g. ``stubbornness'') on contagion results could benefit from further study, as it likely has empirical analogues that are crucial to understanding reality.”
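To make the suggested baseline concrete, the following sketch implements a star network with the broadcast source at the center and no peer-to-peer edges. The probabilistic adoption rule and all names here are illustrative assumptions for this sketch, not the contagion functions used in the paper.

```python
import random

def star_network(n):
    """Star graph: node 0 is the broadcast source; nodes 1..n connect only to it."""
    graph = {0: set(range(1, n + 1))}
    for i in range(1, n + 1):
        graph[i] = {0}
    return graph

def broadcast_round(graph, beliefs, source_belief, accept_prob, rng):
    """One broadcast step: each leaf independently adopts the source's belief
    with probability accept_prob. With no peer-to-peer edges, any adoption is
    attributable to the broadcast alone (the 'no social influence' baseline)."""
    for node in graph[0]:
        if rng.random() < accept_prob:
            beliefs[node] = source_belief
    return beliefs

g = star_network(100)
beliefs = {i: 0 for i in g}  # all agents start at strong disbelief (0)
beliefs = broadcast_round(g, beliefs, source_belief=6, accept_prob=0.5,
                          rng=random.Random(0))
adopters = sum(1 for i in range(1, 101) if beliefs[i] == 6)
```

Comparing adoption under this star baseline to adoption under the other topologies would help isolate the contribution of clustering, path length, and degree, as the reviewer suggests.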

==== REVIEWER ====

Pg. 22 Discussion

- I appreciate your disclaimer that you are not making policy recommendations based on your results. It is fully appropriate to acknowledge that you are building theory that we as a community should subject to test before using it to derive policy. I also understand how (academically) we feel pressure to say that our work is “policy relevant”. Unfortunately, I think our community has a tendency to try and have it both ways, and try and both make policy claims and then hedge them at the same time, and it’s never really that clean. I think you may find it more comfortable to move fully away from the “policy recommendation” language, and instead describe how your work sets up what is arguably a very interesting follow-on study to confirm your understanding. (Plos one gives you this luxury - take advantage of it!) Then your description of an intervention can be wholeheartedly about an intervention in an experimental context, without the need for the disclaimer. You can describe how an experiment would differ from individual-level behavioral interventions in the psychology literature, and what that would add to our understanding. Best yet, you’d have your next paper lined up nicely.

==== OUR RESPONSE ====

We appreciate this suggestion, and have slightly changed our language in this section (in the same paragraph you comment on) as follows: “Our model and experiments would need to be modified and parameterized to fit a given narrative spread scenario in order to be grounded enough to draw conclusions from. This may be appropriate for a follow-up study which applies this model, with more detail, to the spread of a COVID-related belief such as belief in mask-wearing.”

==== REVIEWER ====

Thank you again for reading through what has become an inexcusably long review. I’m certain that you can address the questions I have without too much effort, and I look forward to seeing the revisions.

James Houghton

==========

REVIEWER 3

==========

Reviewer #3: This paper presents a model of diffusion that is grounded in the cognition of each agent. By comparing different graph types to the exposure to information/beliefs coming from an institutional source, the authors try to show how adding complexity to the definition of an agent may more closely map what happened, for example, with beliefs around COVID-19.

While certainly interesting and timely, the article presents a series of concerns that make the message a bit less effective than it could be. The most relevant concerns I had are those around (a) the claim that more complex agents are something “new” to this model, while the ABM community has been discussing them since its beginnings, (b) the use of ABM (especially the “why” of ABM), (c) the complete lack of reference to description and reporting standards in presenting the ABM, and (d) the report of results, which shows that you are probably trying to do too much with just one paper.

I have detailed what I mean by this in the attached file; I hope you find the comments useful. I enjoyed reading your paper. Best of luck with your research!

1. You may want to make sure that the abstract does not lean too much on text that is already in the paper. Sometimes it makes sense to express the same concepts differently.

==== OUR RESPONSE ====

We have changed the abstract due to our changes to the rest of the paper, and we hope that we did not lean too much on the text this time.

==== REVIEWER ====
2. Your introduction reads well and I agree with you on the excessive price that diffusion models pay to epidemiology, especially the spread of diseases. In fact, most models of diffusion even use the same categories — e.g., immune, susceptible, infected — even though these may not make much sense in a social environment. I believe the broader category for these models is that of “threshold models of diffusion” (from Granovetter 1978 to Rosenkopf & Abrahamson 1999) and it would be a good idea for you to also acknowledge this literature more than you already do.

Granovetter M (1978) Threshold models of collective behavior. Am J Sociol 83(6):1420–1443.

Rosenkopf L, Abrahamson E (1999) Modeling reputational and informational influences in threshold models of bandwagon innovation diffusion. Comput Math Organiz Theory 5(4):361–384.

==== OUR RESPONSE ====

Thank you for these references. We added them into the introduction, as well as reference to the underlying sociological work.

==== REVIEWER ====

3. I have been working on diffusion models for quite some time now and I have found the literature review by Kiesling et al. (2012) — a work that appears in your reference list — very informative and well researched. They present and categorize diffusion models by considering what agent-based modeling brings to the theory. Even though they do not explicitly mention cognition, they do refer to individual characteristics (e.g., psychological, social) that affect the diffusion process.

Moreover, there have been agent-based models (ABMs) of diffusion with a cognitive backbone (Secchi & Gullekson 2016). I am writing this because I do not think you can claim that you are introducing “a new class of models” (p.2, line 25). In fact, this class of agent-based models, which focus on individual attitudes, dispositions, and perceptions as a way to study diffusion, is very much the reason why scientists have turned to agent-based models. Besides this point, it is my understanding that PLOS is not interested in novelty as such; scientific robustness would be enough to grant publication. In short, I do agree with your approach and I think it is sound but, at the same time, I do not see the need to oversell. This is both because what you claim as unique is part of the ABM approach and because PLOS journals are not the place where these claims make a difference.

Kiesling E, Günther M, Stummer C, Wakolbinger LM (2012) Agent-based simulation of innovation diffusion: a review. CEJOR 20(2):183–230.

Secchi D, Gullekson NL (2016) Individual and organizational conditions for the emergence and evolution of bandwagons. Comput Math Organiz Theory 22(1):88–133.

==== OUR RESPONSE ====

We very much value the clarification of our claims of novelty. We agree, given more background literature, that our cognitive function is not a novel contribution. We added text throughout the main paper to indicate that we rather view our contribution as an application of existing contagion techniques to the disinformation problem -- centering cascading contagion started by institutional agents. That reframing also culminated in changing the title and name of our model from a “cognitive contagion” model to a “cognitive cascade” model.

==== REVIEWER ====

4. Your reference for agent-based social systems [37] is not what I was expecting. I was almost certain to find one of the publications from Macal and North... I may be mistaken here.

==== OUR RESPONSE ====

We added in a reference to a paper we found helpful (Macal & North 2009) where we introduce agent-based social systems in the introduction:

Macal, Charles M., and Michael J. North. "Agent-based modeling and simulation." Proceedings of the 2009 Winter Simulation Conference (WSC). IEEE, 2009.

==== REVIEWER ====

5. Your call for simplicity in ABM (p.3, lines 67-75) is a bit puzzling, perhaps because you cite a cellular automaton model (not an ABM) to make your point. There has been much discussion in the social simulation modeling community around descriptive vs. simple models. What is now a widely recognized feature of ABM is the fact that they need not be limited to simple rules, agents, or environments. Quite the contrary, a strong point of using these models is the capacity to generate complex systems. It was Moss and Edmonds (2005) who first popularized the idea that one not only could but should exploit the more descriptive features of ABM. These ideas bear on the various modeling purposes (Edmonds et al. 2019) in the sense that the intensity of description may depend on the general aim of the model. In short, I believe the picture is much more nuanced than what you hint at in the paragraph mentioned at the beginning of this point.

Edmonds B, Moss S (2005) From KISS to KIDS—an ‘anti-simplistic’ modelling approach. Lecture Notes in Artificial Intelligence. In: Davidson P (ed) Multi agent based simulation, vol 3415. Springer, New York, pp 130–144.

Edmonds B, Le Page C, Bithell M, Chattoe-Brown E, Grimm V, Meyer R, Montañola-Sales C, Ormerod P, Root H, Squazzoni F (2019) Different modelling purposes J Artif Soc Soc Simul 22(3):6.

==== OUR RESPONSE ====

We appreciate the references, but chose to keep our model aimed at simplicity. Because this argument regarding simplicity vs complexity is not crucial to the points made in our main paper, we removed the introduction paragraph advocating for it.

==== REVIEWER ====

6. You do a good job of succinctly explaining the logic of a simple diffusion model. At the same time, I was wondering why you have decided to assume that one of the two individuals in the example already holds a belief. This is a slightly more complicated case than the one where the recipient has no prior beliefs. In that case there is still no guaranteed ‘contagion’, but the likelihood of it happening is based entirely on a probability (the probability of ‘meeting’ this belief) without any given prior. That is more consistent with the disease analogy, since one does not necessarily have antibodies (i.e., the prior), and the current pandemic is a sad reminder of this. Moreover, in diffusion of innovation studies — I keep mentioning them because they have been the benchmark for diffusion research for a long time — there is typically no prior, otherwise the innovation would not be innovative. In short, I think you should at least mention that you are keeping the exemplification closer to your approach and model rather than closer to what most equation-based models have done in the past to cover simple diffusion.

==== OUR RESPONSE ====

Given this useful background on innovation diffusion practices, we added language addressing our departure from these conventions in the section on Simple Contagion: “As opposed to many studies in innovation diffusion (Zhang & Vorobeychik, 2019), we choose to argue in our formulation of contagion that every agent has a prior belief. Innovation diffusion studies often imagine any individual has no prior opinion about a new idea until they are ``infected'' with it. However, as we will illustrate below, we model beliefs on a spectrum, including belief in, against, and uncertainty about, a proposition. This departure from the epidemiological view of opinion diffusion allows us to argue for a prior, even if it is uncertainty.”

==== REVIEWER ====

7. There are two points that came to mind as I was reading page 6:

7.1. I follow your reasoning and it makes intuitive sense, but I am not sure a model — especially after the claims you make on cognition — should be based on intuition alone. What is the theory behind the assumption that individuals would hold beliefs and be more or less prone to accept advice, recommendations, or information coming from others? This is a very important aspect that should motivate the way you model a computational simulation, especially when approaching ABM. Again, the point is not about the logic behind your modeling, but the justification of your assumptions. In my models, I use something very similar to this to justify individual dispositions towards others, and that is a version of Simon’s concept of docility (1993). I usually frame this as a behavioral aspect of distributed cognitive processes.

Simon HA (1993) Altruism and economics. Am Econ Rev 83(2):156–161.

==== OUR RESPONSE ====

We hope that we have sufficiently motivated our DCC model from relevant literature on cognitive dissonance in relation to political or ideological beliefs (Van Bavel & Pereira, 2018; Bail et al., 2018; Swire & Lazar, 2020; Porot & Mandelbaum, 2020; Jost, 2009; Iyengar, 2009), and articulated that our model is similar to others who also model dissonance (Li et al., 2020; Goldberg & Stein, 2018; Baldassarri & Bearman, 2007).

==== REVIEWER ====

7.2. The other comment is more technical. At one point, you mention that you have coded strong/weak beliefs in a way that resembles a Likert scale. This is a justification that I have also used myself several times, especially at conferences, to justify why certain variables were programmed the way they were. The problem I see with your interpretation is that you are being too literal. Psychometric scales are almost never measured with a single item; hence you never have discrete values, but a pseudo-continuous set of values for each individual, derived from summing or averaging the scores on the various items. Hence, either you are following standard measurement in cognitive psychology (using scales, not items) or you are doing something else (using discrete values). A minor point — irrelevant for the calculations — is that 7-point Likert scales are coded from ‘1’ to ‘7’, not from ‘0’ to ‘6’.

==== OUR RESPONSE ====

Thank you for this framing. In response to reviewer 2’s related comments, we conducted additional experiments using belief resolutions of 2, 3, 5, 9, 16, 32, and 64, whose results are included in the supplemental materials (S11 Text, S12-S21 Fig). We hope that these different belief scales break us away from overreliance on a 7-point Likert scale.
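For concreteness, here is one minimal way to express the resolution sweep; the helper names are illustrative, and the normalization onto [0, 1] is an assumption of this sketch rather than necessarily our exact comparison procedure.

```python
def belief_levels(resolution):
    """Discrete belief values 0 .. resolution-1, with 0 the strongest
    disbelief and resolution-1 the strongest belief, matching the
    resolutions tested (2, 3, 5, 9, 16, 32, 64, plus the 7 used in
    the main paper)."""
    return list(range(resolution))

def to_unit(b, resolution):
    """Map a discrete level onto [0, 1] so outcomes at different
    resolutions can be compared on a common scale; this normalization
    is an assumption of the sketch."""
    return b / (resolution - 1)
```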

==== REVIEWER ====

8. As I keep reading the description of your model, I really cannot understand your claim of novelty insofar as agents hold heterogeneous beliefs (and dispositions) in an ABM. What you describe seems like standard bounded rationality assumptions for heterogeneous agents in a computational simulation model of a complex system. See the introduction of Edmonds B, Meyer R (2017). Simulating social complexity. A handbook. Springer.

==== OUR RESPONSE ====

We agree, and hope that our reframing of our contributions is now more in line with the extant literature.

==== REVIEWER ====

9. In your formula for a binary belief update function, you assume that positive and negative distances from one’s own belief should be treated equally. However, we know that the sign of the distance is probably relevant, as one may infer from people’s understanding of losses and gains in famous experiments such as those by Kahneman and Tversky (1979). You may want to argue why it makes sense to take the absolute value of the difference in Eq. 4.

Kahneman D, Tversky A (1979) Prospect theory: an analysis of decision under risk. Econometrica 47(2):263–292.

==== OUR RESPONSE ====

In response to this and other literature recommended by reviewers that also utilized this notion (Dellaposta et al., 2013), we made a small addition to address this as we discuss the update function in Eq. 4 (displayed as Eq. 5 in the track-changes version): “There are similar functions motivated in contagion models centering dissonance (Li et al., 2020; Dandekar et al., 2013), and some which weight positive or negative influence differently (DellaPosta et al., 2016). We chose to weight positive or negative influence equally in this example, and subsequent contagion functions, to simplify the model and make its results more easily analyzable.”
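To illustrate the distinction the reviewer raises, below is the symmetric treatment alongside a hypothetical sign-sensitive variant. The exponential form loosely echoes our dissonance-style functions, but the exact expressions and parameter names are assumptions of this sketch rather than our Eq. 4 verbatim.

```python
def accept_symmetric(b_u, b_v, gamma=1.0):
    """Acceptance weight decays with |b_u - b_v|: positive and negative
    distances are treated identically (the absolute value in Eq. 4)."""
    return 2.0 ** (-gamma * abs(b_u - b_v))

def accept_asymmetric(b_u, b_v, gamma_up=1.0, gamma_down=2.0):
    """Hypothetical loss-aversion variant: messages pulling the belief
    down (a 'loss') are discounted more steeply than messages pulling
    it up, in the spirit of Kahneman & Tversky (1979). Parameter names
    are illustrative."""
    gamma = gamma_up if b_v >= b_u else gamma_down
    return 2.0 ** (-gamma * abs(b_u - b_v))
```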

==== REVIEWER ====

10. Eq. 5 is, as you explicitly state, very vague and there is a need to justify beta:

10.1. I am not sure why you have lost the ‘v’ in your reference belief, as if you are now treating the belief to which one is exposed as asocial or not coming from a source of any kind. Whether a belief spreads from a family member, a friend, social media, or the news, it still comes from someone. I am not sure why you turn to an ‘objective’ belief as opposed to one anchored to another agent — i.e., the model you have used thus far. Obviously, I would have some problems with the “objectivization” of beliefs.

==== OUR RESPONSE ====

Thank you for pointing this out. We updated equations (7) and (8) to include $b_v$ rather than just $b$.

==== REVIEWER ====

10.2. I have the impression that everything that has been discussed thus far is instrumental to presenting beta and Eq. 5. If this is the case, then you could probably re-organize the section and do it more straightforwardly, perhaps being more explicit about the quest for a beta function that captures cognition involved in misinformation spreading, to some extent.

==== OUR RESPONSE ====

With our new reorganization, we believe that as the cognitive contagion motivations are more contained in the background section, citing more works, we are less centering a build-up toward the beta function. We hope that now, our argument is more clearly aimed at the cascade model with a defensive contagion function and institutionally-driven cascades.

==== REVIEWER ====

11. From the visualization of your model, it seems that you have not created another “level” or layer (i.e., the institution) but another agent in the system. I mean, technically, this is what it is — you have two different types of agents, one to represent an individual (cognizer) and the other to represent a news outlet (an institution). Of course, you can interpret this as two different layers, but the programming does not necessarily indicate that this is the case. This means that a modeler reading your article would need a more detailed description of why this two-agent strategy actually identifies two layers. I do not think this is too difficult, as I have done it in some of my models — it just needs a more substantial justification.

12. As I read this section, I noticed that you state that ABM do not often model “levels”. I believe this depends on which field you are looking at. For models in economics, for example, it is customary to model a market and its agents. For models in organizational research, this also happens quite often. This is related to the ability of ABM to allow for the modeling of fast and slow timescales or, at least, this is the explanation that I find the most valuable. You could have a look at Neumann M, Cowley SJ (2016). Modeling social agency using diachronic cognition: learning from the Mafia. In Agent-Based Simulation of Organizational Behavior (pp. 289-310). Springer, Cham.

==== OUR RESPONSE ====

We revisited the work we are citing to more clearly make the argument we attempted. Instead of claiming that ABM do not model levels, we should have claimed that they _do_, but we are trying to get away from the typical way they are modeled. We added text in the first two paragraphs of the “Institutional Cascades” subsection to clarify.

==== REVIEWER ====

13. The presentation of the model does not seem to follow any of the known protocols or schemes found in specialized journals for ABM such as, for example, the Journal of Artificial Societies and Social Simulation (JASSS). I am referring to the good practice of showing, for example, a flow chart summarizing the processes, and of splitting the description of agents and their characteristics from the processes of the model. You may want to add a table where you define each parameter, its notation, values, experimental values, and a short description. While published articles do not include the entire ODD protocol (see Grimm et al. 2020 for the latest version), it is good to include some of it.

Grimm V, Railsback SF, Vincenot CE, Berger U, Gallagher C, DeAngelis DL, Edmonds B, Ge J, Giske J, Groeneveld J, Johnston AS (2020). The ODD protocol for describing agent-based and other simulation models: A second update to improve clarity, replication, and structural realism. Journal of Artificial Societies and Social Simulation 23(2):7.

==== OUR RESPONSE ====

We agree with this suggestion, and have added a table under the section “Comparing Contagion Methods” which details our parameter selection.

==== REVIEWER ====

14. One of the aspects that characterizes ABM research is that these simulations lean (more or less) heavily on stochastic components. From your presentation of the model, I am not sure I have a clear understanding of where exactly the stochastic components of the model are. Of course, a limited role for these components would suggest that, perhaps, ABM may not be the preferred choice for a computational simulation.

==== OUR RESPONSE ====

The stochastic elements are in the distribution of initial beliefs, in the probabilities guiding the spread of beliefs, and in the network topology, randomly constructed for each simulation.
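In outline, these three stochastic components can be sketched as follows. The uniform belief draw and the Erdős–Rényi-style topology here are simplifications for illustration (the experiments also use other distributions and graph types), and the names are not taken from our implementation.

```python
import random

def init_simulation(n, resolution=7, edge_prob=0.05, seed=None):
    """Three sources of randomness per run:
    (1) initial beliefs drawn over the discrete levels,
    (2) a random topology rebuilt for every simulation,
    (3) a run-specific RNG later used for probabilistic spread."""
    rng = random.Random(seed)
    beliefs = {i: rng.randrange(resolution) for i in range(n)}   # (1)
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < edge_prob}                        # (2)
    return beliefs, edges, rng                                   # (3)

beliefs_a, edges_a, _ = init_simulation(40, seed=7)
beliefs_b, edges_b, _ = init_simulation(40, seed=7)  # same seed reproduces the run
```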

==== REVIEWER ====

15. The three conditions (p.9) you describe for the institutional distribution of messages are interesting and make sense from a computational/technical point of view, because you want to understand the impact of a variety of messages. At the same time, they should reflect conditions that can be observed in an actual institutional environment, unless the simulation is meant to be a theoretical (in principle) case. Given the topicality of your subject, I do not think the latter is the case, and so I would ask if you could provide some examples of what these conditions refer to.

==== OUR RESPONSE ====

In response to similar questions by reviewer 1, we added language in the Future Work section to address the possibility of more realistically modeling the media ecosystem and message sets.

==== REVIEWER ====

16. I am not sure it is standard practice in ABM to write hypotheses (p.10) as a way to present expectations of what to find. However, if you must, then you should at least (a) ground them in the literature and (b) explicitly formulate hypotheses. I would prefer that you not be explicit about hypotheses since, if you are, then formal testing needs to be performed, and that would probably take the manuscript a bit far from what you intend to do.

==== OUR RESPONSE ====

Though we appreciate the comment, we chose to leave the language as is. We hope that our use of “hypothesis” helps to clarify our thought process in conducting the experiments.

==== REVIEWER ====

17. When you write about “preliminary experiments” (p.10, line 341) to find the values of the threshold and the contagion probability, what is it exactly that you did? Did you perform a sensitivity analysis? And how is it that a constant, rather than a range, is the outcome of such an analysis? ABMs usually provide a range of values for their parameters.

==== OUR RESPONSE ====

We now detail the process by which we chose our simple and proportional threshold parameters in the supplemental material (S1 Text & S2 Table). We hope that this helps to explain our decisions.

==== REVIEWER ====

18. If I understand what you write about the model, and your intention is to test the effect of different graph types, I am not sure why you have decided to go with ABM as opposed to more classic social network analysis. This point, in connection with the one on stochasticity above, needs further (better) explanation.

==== OUR RESPONSE ====

We believe that the language that we added concerning our contribution being a cascade model that leverages both network science techniques and ABM techniques helps explain our work. We hope that our work borrowing from both disciplines allows both to benefit from the strengths of each other.

==== REVIEWER ====

19. Also, the determination of the number of runs for each configuration of parameters seems very vague. You write that you settled on 10 without providing any detail about the procedure you followed. This is a very important aspect for understanding whether the results are valid.

==== OUR RESPONSE ====

We agree, as this point also mirrors one made by reviewer 2. To address this, we chose to aggregate results for 10, 50, and 100 simulation runs for each configuration, and do correlation analyses to determine whether the results differed significantly. Results are in the supplemental material (S22 Text, S23 Table, and S24 Fig), and we use them to justify using 10 runs in the main paper.
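In outline, the comparison averages each metric's time series over N runs and then correlates the aggregate curves across run counts. The sketch below shows only the core computation (a plain Pearson correlation over mean curves), not the full analysis reported in S22 Text.

```python
from statistics import mean

def mean_curve(runs):
    """Average a per-timestep metric across runs (equal-length series)."""
    return [mean(step) for step in zip(*runs)]

def pearson(xs, ys):
    """Pearson correlation between two aggregate curves; values near 1
    indicate that adding more runs barely changes the mean trajectory."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```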

==== REVIEWER ====

20. Results are overall well presented and described. I have only two comments:

20.1. The first impression is that you are trying to do too much with just one paper. There are many conditions that you want to test with this simulation, and the results are rich. However, too much information is probably as dangerous as its paucity. Maybe there is a way to limit the number of graphs and comments, and this could be done with a more structured, systematic approach to the findings. You could start by presenting a table where the computational experiments are classified, indicating the main finding for each of them. Some results are similar and you could skip them, unless significant differences appear in the configuration tested. You may want to merge the presentation of results and their analysis, where some of these summary tables already appear. Given these tables, I am not sure whether the many graphs provide additional information.

20.2. The second comment is about the uncharacteristic form that most of your graphs show in relation to the type of simulation modeling you have conducted (ABM). This points at the minimal stochasticity that seems to be embedded in your model. I may lean too much towards models with a large stochastic component, but linear results such as those that most of your figures show require an explanation or, maybe, just a more straightforward presentation of the ABM.

==== OUR RESPONSE ====

We hope that we have now sufficiently articulated how our model does have stochastic elements, and that some of the supplemental materials (S3 & S4 Fig) demonstrate the significant variation in results for some experimental combinations.

==== REVIEWER ====

21. I think I can stop here with my comments and come back to the final part of the paper once you have addressed the above.

Attachment

Submitted filename: review-responses.pdf

Decision Letter 1

Marco Cremonini

19 Oct 2021

PONE-D-21-06938R1
Cognitive cascades: How to model (and potentially counter) the spread of fake news
PLOS ONE

Dear Dr. Rabb,

Thank you for submitting your manuscript to PLOS ONE. The review process has taken longer than it usually does, but the reviewers have been thorough and overall they clearly appreciated the improvements following the first version of the manuscript. The work is now close to fully meeting publication criteria, with some aspects that still require additional consideration. However, the Minor Revision status does not imply that the reviewers' last comments can be considered lightly. They are comments that could further improve the manuscript's quality, so appropriate consideration is required.

Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. In particular:

Reviewer 2 pointed to a single issue regarding test robustness and the validity of conclusions, which seems to need further explanation to be completely convincing. This is an important point.

Reviewer 3, instead, suggests a list of minor modifications or, in some cases, to add to the manuscript some explanations that were only given in the answers to reviewers.

Please submit your revised manuscript by Dec 03 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Marco Cremonini, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: Thank you for responding so thoroughly to my previous comments. The only area of outstanding concern for me is the sensitivity test to the number of levels in your model. In my previous review, I asked that you check that the conclusions of your model are not sensitive to the arbitrary number of levels you chose. In theory, if the results are rigorous they should hold equally well regardless of the number of levels. Thank you for conducting this sensitivity test and reporting the results.

I am worried, however, that your tests did not find that your conclusions were truly robust to the number of levels in the model. In your response and supplement you state this, but in the paper itself you merely say that you ran a sensitivity test. The main body of the paper makes no mention of the fact that the results seem to be strongly influenced by the discretization assumption. In fact, the sentence "We additionally ran in-silico experiments with lower and higher resolutions..." seems to imply that you didn't see any difference by going to more continuous levels of belief, especially as you had previously said "b_u can be a continuous variable within the interval from strong disbelief to strong belief, or it can take on discrete values". At the very minimum, you have a disconnect between the scope condition you are claiming (continuous levels of belief) and the domain over which your model predicts the outcome you claim (discrete belief levels). To be completely honest with your readers, I think more of this discussion of the reliance on discrete belief levels belongs in your main text. Don't let anyone suspect you're hiding anything.

What your discretization assumption seems to be doing is acting as a coarse proxy for similarity between neighbors, so that you don't have to justify a rule for who individuals pay attention to that works in the continuous domain. As your results are dependent on this, you need to be more explicit about it. Otherwise, you can update your model to allow for similarity given a continuous measure of belief, and see if your results still hold. This would be a more robust result, and easier for you to justify if it doesn't add too much modeling complexity.

Thanks,

James

Reviewer #3: My report is in the file attached to this form. Please open that file to access my comments to the paper.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Giacomo Livan

Reviewer #2: Yes: James Houghton

Reviewer #3: Yes: Davide Secchi

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Review of PONE-D-21-06938.R1.pdf

PLoS One. 2022 Jan 7;17(1):e0261811. doi: 10.1371/journal.pone.0261811.r004

Author response to Decision Letter 1


3 Dec 2021

We would like to thank all reviewers for their comments and suggestions, which we believe again improved our revised draft of the paper. Specific responses to individual questions from reviewers can be found below.

==========

REVIEWER 1

==========

N/A

==========

REVIEWER 2

==========

Reviewer 2: Thank you for responding so thoroughly to my previous comments. The only area of outstanding concern for me is the sensitivity test to the number of levels in your model. In my previous review, I asked that you check that the conclusions of your model are not sensitive to the arbitrary number of levels you chose. In theory, if the results are rigorous they should hold equally well regardless of the number of levels. Thank you for conducting this sensitivity test and reporting the results.

I am worried, however, that your tests did not find that your conclusions were truly robust to the number of levels in the model. In your response and supplement you state this, but in the paper itself you merely say that you ran a sensitivity test. The main body of the paper makes no mention of the fact that the results seem to be strongly influenced by the discretization assumption. In fact, the sentence "We additionally ran in-silico experiments with lower and higher resolutions..." seems to imply that you didn't see any difference by going to more continuous levels of belief, especially as you had previously said "b_u can be a continuous variable within the interval from strong disbelief to strong belief, or it can take on discrete values". At the very minimum, you have a disconnect between the scope condition you are claiming (continuous levels of belief) and the domain over which your model predicts the outcome you claim (discrete belief levels). To be completely honest with your readers, I think more of this discussion of the reliance on discrete belief levels belongs in your main text. Don't let anyone suspect you're hiding anything.

==== OUR RESPONSE ====

Thank you for this point; we agree. We have added language to clarify that while we can model belief states that approach continuity, for some of our experiments the dynamics became different when the number of "belief levels" became large enough (while still showing there is nothing magic about 7, and that slightly fewer or more, but still discrete, belief states gave similar network dynamics). We suspect this is a limit of the way we set initial model parameters during experiments, rather than something intrinsically different about truly continuous belief states, but we now make this difference very clear in the main paper.

==== REVIEWER ====

What your discretization assumption seems to be doing is acting as a coarse proxy for similarity between neighbors, so that you don't have to justify a rule for who individuals pay attention to that works in the continuous domain. As your results are dependent on this, you need to be more explicit about it. Otherwise, you can update your model to allow for similarity given a continuous measure of belief, and see if your results still hold. This would be a more robust result, and easier for you to justify if it doesn't add too much modeling complexity.

==== OUR RESPONSE ====

Completely agree – see above.

==========

REVIEWER 3

==========

Reviewer 3: Thank you very much for your hard work on the manuscript; I think the revision has improved it significantly from the previous version. I have enjoyed reading this version as well as the previous one, and I believe it makes an important contribution to the literature. As an agent-based modeler, I am still slightly uncomfortable with some of your modeling choices and their reporting in the paper. However, most of my concerns have been addressed and I am more comfortable with this version of the paper. There are a few open points that you may want to address before submitting the paper in its final version. I have reported them below; some refer to previous comments that require additional attention, while others are new. I assess these comments as requiring minor effort on your part.

31. Former point 3. I think your revised introduction clarifies the scope and objectives of the paper well, much better than before. One point that I think still requires your attention is the definition of a cognitive ‘cascade’. In fact, I cannot find a place where you define what you mean by this. You simply write that cognitive cascade models are “those that adopt a cascading network-based diffusion model from social network theory” (p. 2, lines 26-27). Fine, but what is the cascade in a cascading network-based model? It seems you define cascading models through cascading networks. The result is a definition that is very unclear to those who do not know the meaning of a cascade.

==== OUR RESPONSE ====

We now define “cascades” explicitly in the introduction, and contrast the term with how we intend to use the word “contagion.”

==== REVIEWER ====

32. Former point 5. When you refer to ABM in the social sciences you still refer to Schelling’s cellular automata model [48]. My comment was probably unclear on this point, since I was also trying to make another point (which you addressed). Apologies for repeating myself, but [48] is not an ABM. You can cite any ABM published in JASSS to make your point, or refer to the classic books by Gilbert, Edmonds, Troitzsch, etc. Here are some of them:

Edmonds, B., & Meyer, R. (2017). Simulating social complexity: A handbook. Springer.

Gilbert, N. (2019). Agent-based models (Vol. 153). Sage Publications.

Gilbert, N., & Troitzsch, K. (2005). Simulation for the social scientist. McGraw-Hill Education (UK).

==== OUR RESPONSE ====

We have now replaced the Schelling 1971 reference with the two references you provided from Gilbert.

==== REVIEWER ====

33. Former point 7.2. I have mentioned the use of the Likert scale before, and the fact that I find this approach very much in line with how modelers reason about numerical representations of beliefs. I do not quite understand your reply, though. To my claim that you have been too literal and used a discrete-value approach rather than mimicking the use of a Likert scale, you replied by adding a sensitivity analysis of the values of parameter b. I understand you performed those checks in reply to a point from another reviewer, but that does not address my point. Please do let me know if I am missing something here. Do not get me wrong, I do think that your approach is fine; it is just that your claim is not fully consistent with the use of the Likert scale in applied psychology.

==== OUR RESPONSE ====

By showing that the model is somewhat robust to whether a 7-, 5-, or 9-point scale is used (where we indeed chose 7 as inspired by Likert scales), we hoped to dissuade any interpretation of our choice of 7 points as an attempt to exactly emulate Likert scales. Hopefully we have removed any indication that we are using the Likert scale exactly the way it is used in applied psychology.

==== REVIEWER ====

34. Former point 9. I could not find your response in the text of the article. I think it is a convincing one; please go ahead and find room for it.

==== OUR RESPONSE ====

We added language after Eq (4) to capture what we wrote in response to the previous comments.

==== REVIEWER ====

35. Former point 14. I appreciate your reply, and it may be a good idea to specify these points somewhere in the paper.

==== OUR RESPONSE ====

Thank you. We captured this now at the end of the “Experimental Design” subsection.

==== REVIEWER ====

36. Former point 16. I respect your choice of keeping hypotheses, but then I would expect them to be accepted or rejected following some more formal testing procedure. Another way to do this would be to call them something different (e.g., assumptions, propositions) so that a reader does not expect formal testing. Now, the test you have used is correlation (I could see Chi-square tests in some of your tables as well). You may want to use that to test your hypotheses, although it is a test of association, not one that would help you understand whether the spread (or lack thereof) was determined to some extent by the conditions of the simulation. A test where you can actually show dependence between conditions and output would probably make your point stronger.

==== OUR RESPONSE ====

To be less confusing, we have changed the words “hypothesis” and “hypothesize” to “we expect” or “we anticipate,” which do not imply that we will conduct statistical tests of those hypotheses.

==== REVIEWER ====

37. Former point 19. I have read your supplementary materials files and I am afraid I was not able to find a justification for the 10 repetitions. In one of these files you state “We varied p, the contagion probability, from 0.05 to 0.95 in increments of 0.05, and took averages over 10 simulation runs”. This means that 10 was a given. “Why 10,” and not 9, 13, 57, or 9762, was my question.

==== OUR RESPONSE ====

Thank you for catching this. We agree that our previous answer did not address that aspect. We have run additional correlation tests and included a write-up of the results in supplementary material S1 Text.

==== REVIEWER ====

38. Former point 20.1. I still think that you can find a better, more succinct way of presenting your results. There are currently 14 figures in the paper, and these are probably too many. Unless you convincingly argue that they each have something unique to tell about your results, I would consider moving some to the supplementary materials. Another way to deal with this is to present results together with their analysis. You do not need to describe and then analyze; you can describe as you analyze results. I appreciate that you do the analysis a bit more precisely, but some graphs could be made to compare results already, and differences could be shown as a comment on those graphs.


==== OUR RESPONSE ====

We agree that this was a verbose way to present our results. In our presentation of results under the section “Comparing Contagion Methods,” we have removed some of the superfluous description, and moved the simple and proportional threshold cascade results to the supplementary materials.

==== REVIEWER ====

39. A minor point: In the introduction you write about social network theory and cognition theory. I believe you want to refer to social network research (or analysis) and cognitive science. In particular, the latter (i.e., “cognition theory”) does not exist, although there are many cognition theories that belong to the different domains of cognitive science. I suggest you make the reference to theories in the plural or change the terms accordingly.

==== OUR RESPONSE ====

We have changed “cognitive theory” to simply say “cognitive science” instead to avoid confusion.

==== REVIEWER ====

40. If you are mainly referring to the status of misinformation in the US, you should explicitly state it at the beginning of the article, when you bring in the first example. If you do not, the problem is that you assume readers implicitly think of the context as the US when, in reality, the article can be read from anywhere in the world. Using the US as a reference may have its advantages, in the sense that misinformation seems to have reached particularly concerning levels there compared to other countries. [I am using vaccination rates as a proxy, and the numbers are alarming if one compares the US with, say, Spain or the European Union as a whole.] So, please, try to be less US-centric and inform readers that you are taking this perspective.

==== OUR RESPONSE ====

This is a welcome point; thank you for bringing it to our attention. We have added explanations that we are primarily focusing on evidence from the U.S., even though COVID misinformation appears in many countries. We now make this clearer in the introduction.

==== REVIEWER ====

41. I appreciate the new focus on cognitive cascade models, although you keep swinging between contagion and cascade (pp. 5f). It may be a good idea to clarify the connection between the two and, at the same time, try to be more consistent in the use of terminology throughout the paper.

==== OUR RESPONSE ====

Thank you for catching the inconsistencies. We now consistently use the language of “cascades” where applicable, and when mentioning “contagion,” make clear the difference, as we define it, between individual-level contagion and macro-level cascades.

==== REVIEWER ====

42. Continuing from the point above, on page 6 you give the impression that the cascade is an event (“during a cascade”, line 196). This reminds me that you still have not defined what a cascade is. Something close to a definition comes much later (p. 8).

==== OUR RESPONSE ====

As with point 31 above, we now define cascades explicitly in the introduction and differentiate them from contagion.

Attachment

Submitted filename: reviewers-response.pdf

Decision Letter 2

Marco Cremonini

13 Dec 2021

Cognitive cascades: How to model (and potentially counter) the spread of fake news

PONE-D-21-06938R2

Dear Dr. Rabb,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Marco Cremonini, Ph.D.

University of Milan

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Marco Cremonini

28 Dec 2021

PONE-D-21-06938R2

Cognitive cascades: How to model (and potentially counter) the spread of fake news

Dear Dr. Rabb:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Marco Cremonini

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Text. Process for choosing contagion parameters.

    An outline of our decision process for choosing simple and proportional threshold parameter values of p = 0.15 and γ = 0.35 for in-silico experiments.

    (TEX)

    S2 Text. Process for testing different belief resolutions and analysis of results.

    An outline of our process for setting up and running versions of our main contagion experiments with different belief resolutions (2, 3, 5, 9, 16, 32, and 64), including an analysis of some of the results.

    (TEX)

    S3 Text. Process for comparing contagion results with differing simulation run counts.

    An outline of our process for setting up, running, and analyzing versions of our main contagion experiments with different simulation run counts (10, 50, and 100).

    (TEX)

    S4 Text. Process for running correlation analyses between contagion results.

    An in-depth explanation of the correlation measures we used to measure similarity between contagion result data.

    (TEX)

    S1 Table. Results of parameter sweeping simple contagion value p.

    A table of timestep t at which simple contagion with probability p spread b = 6 to at least 90% of agents in different graph topologies (at a belief resolution of 7). ∅ indicates the contagion never reached at least 90% of agents.

    (TEX)

    S2 Table. Correlation test results with low scores between run count combinations.

    Correlation tests that yielded low scores for certain experimental combinations of graph type and message set.

    (TEX)

    S1 Fig. Proportional threshold contagion variance on WS small world networks.

    Proportional threshold contagion results over ten iterations of a Watts-Strogatz small world network with N = 500, initial neighbors k = 5, and rewiring chance ρ = 0.5. Graphs show the mean percent of agents who believe b ∈ B, color-coded by b value, plotted against time step. Shaded portions show variance over iterations.

    (PNG)

    S2 Fig. Proportional threshold contagion variance on BA preferential attachment networks.

    Proportional threshold contagion results over ten iterations of a Barabási-Albert preferential attachment network with N = 500 and added edges m = 3. Graphs show the mean percent of agents who believe b ∈ B, color-coded by b value, plotted against time step. Shaded portions show variance over iterations.

    (PNG)

    S3 Fig. Additional results from single message set contagion with linear cognitive contagion functions.

    The single message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the inverse linear cognitive contagion function in Eq (5) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 1, α = 0); the middle shows “normal” agents (γ = 1, α = 1), and the right, “stubborn” agents (γ = 10, α = 20).

    (PNG)

    S4 Fig. Additional results from single message set contagion with threshold cognitive contagion functions.

    The single message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the threshold cognitive contagion function in Eq (2) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 6); the middle shows “normal” agents (γ = 3); and the right, “stubborn” agents (γ = 1).

    (PNG)

    S5 Fig. Additional results from single message set contagion with sigmoid cognitive contagion functions.

    The single message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the sigmoid cognitive contagion function in Eq (6) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (α = 1, γ = 7), the middle shows “normal” agents (α = 2, γ = 3), and the right, “stubborn” agents (α = 4, γ = 2).

    (PNG)

    S6 Fig. Additional results from gradual message set contagion with linear cognitive contagion functions.

    The gradual message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the inverse linear cognitive contagion function in Eq (5) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 1, α = 0); the middle shows “normal” agents (γ = 1, α = 1), and the right, “stubborn” agents (γ = 10, α = 20).

    (PNG)

    S7 Fig. Additional results from gradual message set contagion with threshold cognitive contagion functions.

    The gradual message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the threshold cognitive contagion function in Eq (2) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (γ = 6); the middle shows “normal” agents (γ = 3); and the right, “stubborn” agents (γ = 1).

    (PNG)

    S8 Fig. Additional results from gradual message set contagion with sigmoid cognitive contagion functions.

    The gradual message set on an ER random graph, N = 250, ρ = 0.05, for agents updating their beliefs based on the sigmoid cognitive contagion function in Eq (6) in the main paper. Graphs display percent of agents who believe B with strength b over time. The left graph shows agents parameterized to be “gullible” (α = 1, γ = 7), the middle shows “normal” agents (α = 2, γ = 3), and the right, “stubborn” agents (α = 4, γ = 2).

    (PNG)

    S9 Fig. Contagion result comparisons using a belief resolution of 2.

    The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 2}, b ∈ ℤ.

    (PNG)

    S10 Fig. Contagion result comparisons using a belief resolution of 3.

    The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 3}, b ∈ ℤ.

    (PNG)

    S11 Fig. Contagion result comparisons using a belief resolution of 5.

    The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 5}, b ∈ ℤ.

    (PNG)

    S12 Fig. Contagion result comparisons using a belief resolution of 9.

    The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 9}, b ∈ ℤ.

    (PNG)

    S13 Fig. Contagion result comparisons using a belief resolution of 16.

    The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 16}, b ∈ ℤ.

    (PNG)

    S14 Fig. Contagion result comparisons on BA preferential attachment networks using a belief resolution of 32.

    The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 32}, b ∈ ℤ.

    (PNG)

    S15 Fig. Contagion result comparisons on MAG networks using a belief resolution of 32.

    The single, split, and gradual message sets on a Multiplicative Attribute Graph, N = 500, and Θ generated from the same formula in Eq (8) in the main paper—i.e. one that brings about high levels of homophily so agents would rarely connect to agents more than 3 belief values away from them. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 32}, b ∈ ℤ.

    (PNG)

    S16 Fig. Contagion result comparisons on WS small world networks using a belief resolution of 64.

    The single, split, and gradual message sets on a Watts-Strogatz small world network with N = 500, initial neighbors k = 5, and rewiring chance ρ = 0.5. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 64}, b ∈ ℤ.

    (PNG)

    S17 Fig. Contagion result comparisons on BA preferential attachment networks using a belief resolution of 64.

    The single, split, and gradual message sets on a Barabási-Albert preferential attachment graph, N = 500, and added edges m = 3. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 64}, b ∈ ℤ.

    (PNG)

    S18 Fig. Contagion result comparisons on MAG networks using a belief resolution of 64.

    The single, split, and gradual message sets on a Multiplicative Attribute Graph, N = 500, and Θ generated from the same formula in Eq (8) in the main paper—i.e. one that brings about high levels of homophily so agents would rarely connect to agents more than 3 belief values away from them. Graphs display percent of agents who believe some b in B = {b : 0 ≤ b < 64}, b ∈ ℤ.

    (PNG)

    S19 Fig. Simple and proportional threshold contagion results on ER random networks.

    Simple (top row) and proportional threshold (bottom row) contagion on ER random networks with N = 500, and connection chance ρ = 0.05. Graphs show the percent of agents who believe B with strength b over time.

    (PNG)

    S20 Fig. Simple and proportional threshold contagion results on WS small world networks.

    Simple (top row) and proportional threshold (bottom row) contagion on a Watts-Strogatz small world network with N = 500, initial neighbors k = 5, and rewiring chance ρ = 0.5. Graphs show the percent of agents who believe B with strength b over time. Asterisks (*) denote contagions that had significant variance over simulation iterations.

    (PNG)

    S21 Fig. Simple and proportional threshold contagion results on BA preferential attachment networks.

    Simple (top row) and proportional threshold (bottom row) contagion on Barabási-Albert preferential attachment networks with N = 500, and added edges m = 3. Graphs show the percent of agents who believe B with strength b over time. Asterisks (*) denote contagions that had significant variance over simulation iterations.

    (PNG)

    S22 Fig. Simple and proportional threshold contagion results on homophilic MAG networks.

    Simple (top row) and proportional threshold (bottom row) contagion on homophilic MAG networks with N = 500, and Θ_b detailed in Eq (8). Graphs show the percent of agents who believe B with strength b over time.

    (PNG)

    S23 Fig. Graphical contagion results for low correlation combinations across simulation run counts.

    Graphical results of contagion cascades across 10, 50, and 100 simulation runs for specific graph-message set combinations. Results displayed are those which yielded the lowest correlation scores between simulation run counts.

    (PNG)

    Attachment

    Submitted filename: Review of PONE-D-21-06938.pdf

    Attachment

    Submitted filename: review-responses.pdf

    Attachment

    Submitted filename: Review of PONE-D-21-06938.R1.pdf

    Attachment

    Submitted filename: reviewers-response.pdf

    Data Availability Statement

    All data is found within the paper and its Supporting information files. In addition, we provide all code and documentation at: https://github.com/RickNabb/cognitive-contagion.


    Articles from PLoS ONE are provided here courtesy of PLOS

    RESOURCES