Abstract
In pairwise interactions, where two individuals meet and play a social game with each other, assortativity in cognition means that pairs in which both decision-makers use the same cognitive process are more likely to occur than under random matching. In this paper, we show theoretically that assortativity in cognition may arise as a consequence of assortativity in other dimensions. Moreover, we analyze an applied model where we investigate the effects of assortativity in cognition on the emergence of cooperation and on the degree of prosociality of intuition and deliberation, which are the typical cognitive processes postulated by the dual process theory in psychology. In particular, with assortativity in cognition, deliberation is able to shape the intuitive heuristic toward cooperation, increasing the degree of prosociality of intuition, and ultimately promoting overall cooperation. Our findings rely on agent-based simulations, although analytical results are also obtained in a special case. We conclude with examples involving different payoff matrices of the underlying social games, showing that assortativity in cognition can have non-trivial implications in terms of its societal desirability.
Subject terms: Evolution, Human behaviour, Social behaviour
Introduction
This paper investigates a concept of assortativity that operates at the cognitive level, where we posit the existence of two cognitive modes according to the dual-process theory of cognition. Applying this framework to the issue of cooperation, we show that assortativity in cognition can play a relevant role in determining the emerging average cooperation.
Assortativity is a broad concept that can be applied to different contexts. In general, assortativity means that individuals are more likely to be engaged in interactions with people who are similar to them along some dimensions. It is related to homophily: the tendency of individuals to associate and bond with similar others (from Ancient Greek: homoû + philíē, ‘love of the same’)1,2. Assortativity is a widespread phenomenon. A large amount of evidence has been collected showing that individuals often stay and interact with similar others, in some form or another: similarities may refer to belonging to the same cultural group, the same social or ethnic group, or the same religion3,4. In network theory, the assortativity coefficient measures the correlation between nodes of similar degree5. The effects of assortativity have also been studied extensively, e.g., in genetics6,7 or for the evolution of cooperation8,9. If we think of agents as divided into groups according to some characteristic or action, an index of assortativity can be formalized as the difference in the probability of matching with an individual of a group conditional on belonging to that same group rather than to a different one10. Preferences may be used to rationalize different types of assortativity11–14.
The dual process theory is a paradigm that has become prominent in cognitive psychology and social psychology in the last thirty years. In the dual process framework, decision making is described as an interaction between an intuitive cognitive process and a deliberative one. Although different approaches emerge from the literature15–17, some common characteristics of the two processes are well established. The intuitive process, also called system 1 or type 1, is fast, automatic, and unconscious, while the deliberative process, also called system 2 or type 2, is slow, effortful and conscious. In evolutionary terms, the intuitive cognitive process is older than the deliberative one, and it is shared with other animals18. The existence of two systems in reasoning and decision making extends to the domain of learning, with associative implicit processes and rule-based explicit processes19,20.
Cooperation is a central feature of human behavior that differentiates Homo sapiens from other species21,22. When people are cooperative they pay a cost to benefit others. The emergence of cooperation as a persistent phenomenon is a major focus of research across different subjects, such as social sciences23 and biology24. Indeed, the wide empirical evidence on cooperation is puzzling. For social scientists, it is at variance with the paradigmatic rational self-interested individual known as Homo economicus, even if other-regarding individuals can have reasons to cooperate25. For biologists, competition among individuals is at the basis of natural selection, which is likely to wipe out cooperators, even though this is not necessarily the case26. In the literature on evolutionary game theory, great attention has been devoted to the mechanisms through which selection can favor the evolution of cooperation27–31. Recently, the cognitive basis of cooperative decision-making has also been explored, both experimentally32–34 and through theoretical modeling35,36.
In the following we show that cognition can play an important role for the evolution of cooperation through the channel of assortativity in cognition. By doing so, we exemplify how assortativity in cognition can be incorporated in a fully-fledged model, giving insights on the phenomenon under analysis, namely the emergence of cooperation and the degree of prosociality of intuition and deliberation. To do so, we describe a setting in which agents interact repeatedly in random pairs in two possible types of interaction, the one-shot prisoner dilemma, which occurs with probability 1-p, and the repeated prisoner dilemma, which occurs with probability p. As in the previous literature35, by repeated prisoner dilemma we mean a stylized representation of an interaction in which there are reciprocal consequences over time: the payoff structure is given by the average payoffs in an infinitely repeated prisoner dilemma in which players can choose between the tit-for-tat and always-defect strategies. Each agent is able to remember the rewards obtained in the past when playing the two different actions, cooperation and defection. This information is stored in the memory of agents. The process of memory update is a form of reinforcement learning: it can be seen as myopic Q-learning37, i.e., the case in which agents are not able to make any prediction about the future. The process of memory update is characterized by the learning rate α, which represents the weight given to the last reward. We assume that an agent adopts intuition or deliberation depending on the realization of a random variable. In particular, we let K denote the probability that an agent responds intuitively, so that 1-K denotes the probability of deliberation. The cognitive processes adopted by two agents interacting together exhibit assortativity, as measured by the parameter A. Indeed, with probability A there is a single draw of the random variable, which means that the two agents are forced to use the same cognitive process.
With probability 1-A, there are two independent draws of the random variable, one for each agent, whose cognitive processes will be the same or different depending on the realized draws. An overview of the notation is provided in Fig. 1. The details of the model are clarified in “Model” section.
Results
The results are organized in three parts. In the first part, we provide two theoretical mechanisms that generate assortativity in cognition (“Sources of assortativity in cognition” section). In the second part, we show the simulation results of an applied model on cooperation (“Learning intuitive cooperation through deliberation” section). Finally, in the third part, we present the simulation results of two applied models in which small variations from the previous model generate qualitatively different results (“Bivalence of assortativity in cognition on payoffs” section).
Sources of assortativity in cognition
Assortativity in cognition may arise as a consequence of assortativity on other dimensions, such as the characteristics of the interaction or the characteristics of the interacting agents.
Let p(D|D) be the probability, for a given agent, of interacting with a deliberating agent given that the agent is deliberating as well. Following the same notation, p(D|I) is the probability of interacting with a deliberating agent given that the agent is deciding intuitively. Let p(I|I) and p(I|D) be defined analogously. There is assortativity in cognition if p(D|D) > p(D|I), which implies, and is implied by, p(I|I) > p(I|D).
The first source of assortativity in cognition that we examine is state-based assortativity. The characteristics of an interaction (e.g., payoffs, information, complexity of choice) vary across interactions but are often the same, or at least similar, for the individuals in the same interaction. When such characteristics determine the likelihood of deliberation, assortativity in cognition emerges. To fix ideas, consider a case with two states of the world, A and B, that differ in the likelihood that deliberation and intuition are used by agents. State A and state B occur with probabilities p(A) and 1-p(A), respectively. Agents involved in the same interaction make decisions in the same state. In state A an agent decides intuitively with probability K_A while she deliberates with probability 1-K_A. Analogously, in state B an agent decides intuitively with probability K_B while she deliberates with probability 1-K_B.
In this setting, assortativity in cognition emerges if and only if the likelihood of intuition differs in the two states, i.e., K_A ≠ K_B (for the proof see SI Appendix, Subsection 1.1).
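This condition can be checked numerically. The snippet below is a minimal sketch (the function and the parameter names p_a, k_a, k_b are ours, not the paper's): it computes p(I|I) and p(I|D) when both partners always share the state, and shows that they differ exactly when the likelihood of intuition differs across states.

```python
def conditional_intuition(p_a, k_a, k_b):
    """Return (p(I|I), p(I|D)) under state-based matching.

    p_a: probability of state A; k_a, k_b: probability of intuition
    in states A and B, respectively (hypothetical parameter names).
    Both partners always decide in the same state.
    """
    p_i = p_a * k_a + (1 - p_a) * k_b             # marginal prob. of intuition
    p_ii = p_a * k_a**2 + (1 - p_a) * k_b**2      # both agents intuitive
    # partner intuitive, self deliberative (ordered pair)
    p_id = p_a * k_a * (1 - k_a) + (1 - p_a) * k_b * (1 - k_b)
    return p_ii / p_i, p_id / (1 - p_i)

# different likelihoods of intuition across states -> assortativity
hi, lo = conditional_intuition(0.5, 0.9, 0.2)
assert hi > lo
# equal likelihoods -> no assortativity in cognition
same = conditional_intuition(0.5, 0.6, 0.6)
assert abs(same[0] - same[1]) < 1e-9
```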
The second source of assortativity in cognition that we examine is type-based assortativity. Agents can have heterogeneous characteristics (e.g., skills, abilities, preferences, knowledge) which may determine the likelihood of deliberation. In this case, when the agents participating in the same interaction tend to share the same characteristics, assortativity in cognition emerges. To fix ideas, consider the case where the population is composed of two types of agents, X and Y, that differ in the likelihood of resorting to deliberation and intuition. The fraction of X agents is equal to p(X) and consequently 1-p(X) is the fraction of Y agents. Type X agents and type Y agents decide intuitively with probability K_X and K_Y, respectively, while they deliberate with the remaining probability 1-K_X and 1-K_Y. Let p(X|X) and p(X|Y) be the probability of interaction with a type X for an agent of type X and type Y, respectively. There is assortativity in types if p(X|X) > p(X|Y), which implies, and is implied by, p(Y|Y) > p(Y|X).
In this setting, if we assume assortativity in types, then assortativity in cognition emerges if and only if the likelihood of intuition is different for the two types, i.e., K_X ≠ K_Y (for the proof see SI Appendix, Subsection 1.2).
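The type-based mechanism can be sketched analogously (again a hypothetical sketch with our own parameter names): the matching probabilities p(X|X) and p(X|Y) are passed explicitly, and Bayes' rule recovers the distribution of one's own type conditional on one's own cognitive process.

```python
def conditional_intuition(p_x, k_x, k_y, pxx, pxy):
    """Return (p(I|I), p(I|D)) under type-based matching.

    p_x: fraction of type X; k_x, k_y: probability of intuition by
    type; pxx = p(X|X) and pxy = p(X|Y) are the matching
    probabilities (hypothetical parameter names).
    """
    p_i = p_x * k_x + (1 - p_x) * k_y            # marginal prob. of intuition
    # probability the partner is intuitive, given one's own type
    part_i_x = pxx * k_x + (1 - pxx) * k_y
    part_i_y = pxy * k_x + (1 - pxy) * k_y
    # Bayes: own type conditional on own cognitive process
    px_given_i = p_x * k_x / p_i
    px_given_d = p_x * (1 - k_x) / (1 - p_i)
    p_i_given_i = px_given_i * part_i_x + (1 - px_given_i) * part_i_y
    p_i_given_d = px_given_d * part_i_x + (1 - px_given_d) * part_i_y
    return p_i_given_i, p_i_given_d

# assorted types (pxx > pxy) and k_x != k_y -> assortativity in cognition
hi, lo = conditional_intuition(0.5, 0.8, 0.3, pxx=0.7, pxy=0.3)
assert hi > lo
# equal intuition likelihoods -> no assortativity, even with assorted types
same = conditional_intuition(0.5, 0.5, 0.5, pxx=0.7, pxy=0.3)
assert abs(same[0] - same[1]) < 1e-9
```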
Learning intuitive cooperation through deliberation
Results in this Subsection are based on simulations of the model, where the two interactions are represented by the payoff matrices (I, a) and (I, b) in Fig. 2, for given values of the payoff parameters b and c with b > c > 0.
A first result is that the average cooperation rate increases monotonically in the level of assortativity in cognition. The result is depicted in Fig. 3 where solid lines represent the average cooperation rate under intuition as the assortativity in cognition varies. Since the cooperation rate under deliberation is constant and equal to p, which is depicted with dashed lines in the figure, the result is driven by the increase in cooperation rate under intuition.
A second result points to the existing interaction effect between assortativity in cognition and other parameters in the model. In particular, Fig. 3 suggests that assortativity in cognition can be a substitute for both the likelihood of repeated interactions, i.e., p, and the recourse to deliberation, i.e., 1-K. Indeed, when p is quite large there is no room for a significant effect of assortativity in cognition, because repeated interactions are frequent and this, in itself, sustains high rates of intuitive cooperation. Also, when K is small every agent frequently deliberates, which implies that often both the agents in an interaction are deliberative, even in the absence of assortativity in cognition.
A third result is an observation that is independent of assortativity in cognition. The average cooperation rate under intuition, for given p and A, increases as K decreases, i.e., the more frequently agents resort to deliberation. Deliberation is able to shape the intuitive heuristic toward cooperation or, in other words, agents learn intuitive cooperation through deliberation.
Finally, a fourth result is about the role of assortativity in cognition in determining whether intuition is more cooperative than deliberation, which is a theme that has been intensely debated in the literature34,38. In our model, intuition can be more cooperative than deliberation, or the opposite can happen, and assortativity in cognition plays a role in this. By looking at Fig. 3, we observe that the average cooperation rate is always higher under intuition than under deliberation when K is quite small or p is quite large. When K is large and p is small, assortativity in cognition matters: indeed, it is often the case that intuition is still more cooperative than deliberation for high values of assortativity, while deliberation turns out to be more cooperative than intuition when assortativity in cognition is small. In this sense, assortativity in cognition helps intuition to be more cooperative than deliberation, in that it enlarges the region in the set of parameters where this holds.
Bivalence of assortativity in cognition on payoffs
Drawing from the results in “Learning intuitive cooperation through deliberation” section, one may be tempted to conclude that assortativity in cognition is socially desirable, in that a higher level of assortativity in cognition always leads to a superior societal outcome. In this subsection, we show that this conclusion would be an overstatement: indeed, the effects of assortativity in cognition on welfare, i.e., the sum of payoffs over the whole population, are complicated in general, and hence must be evaluated case by case.
In the previous section we focused on the cooperation rate since the total reward of agents is increasing in it. In the following examples we do not have an action that is always more cooperative than the other action, hence we focus on the average total reward, i.e., the average reward over the whole population along the entire time span.
We replicate the simulations of the previous section changing the types of interaction in which the agents are involved. For simplicity, we consider each of the two interactions in subplot (I) Fig. 2 combined with a variant of it, in which the two actions are permuted, i.e., the actions have inverted payoff consequences in the two types of interaction.
Firstly, we consider two one-shot prisoner dilemmas, subplot (II) Fig. 2. Under deliberation, agents choose the dominant action, S in game (a) (subplot (II), Fig. 2) and F in game (b) (subplot (II), Fig. 2). Let p be the probability of game (b) (subplot (II), Fig. 2). In this setting, playing the dominated action increases the overall payoff, with the result that miscoordination in behaviors can be beneficial with respect to coordination in the dominant action.
Figure 4 shows in (IV) that an increase in assortativity is welfare increasing when K is low and welfare decreasing when K is high. To grasp the learning effects contributing to this result, we can focus on pairs with one agent intuitive and the other deliberative, given that the main effect of assortativity is to reduce the likelihood of such pairs. Consider p > 1/2 (everything remains the same when p < 1/2, with F and S switched). As K increases, i.e., agents are more often intuitive, the probability to choose action F gets larger under intuition (Fig. 4, II). Suppose first that the intuitive agent chooses F. With probability p both agents play F, since F is dominant and hence surely chosen by the deliberative agent, yielding no substantial effects on learning. With probability 1-p, the deliberative agent chooses S because it is dominant, with the result that S performs well and F performs poorly, which makes S more likely to be adopted in the future by both agents. Suppose now that the intuitive agent chooses S. Analogously, with probability 1-p both agents play S, with no substantial effect on learning, while with probability p the deliberative agent chooses F since it is dominant, which triggers a learning effect. Indeed, in the latter case F performs well and S performs poorly, which makes F more likely to be adopted in the future by both agents. Please note that S is the welfare-enhancing action when p > 1/2. To complete the reasoning, we make two observations. A first observation is that the two learning effects described above, one favoring S and the other favoring F, get weakened when assortativity in cognition increases, due to the reduction in the likelihood that a pair occurs with one agent intuitive and the other deliberative. The second observation is that an increase in K raises the likelihood of the learning effect favoring S and decreases the likelihood of the learning effect favoring F. This is so because a larger K makes the intuitive player more often choose F (Fig. 4, II), and the intuitive agent has to play F for the former effect and S for the latter effect. Therefore, an increase in assortativity reduces the likelihood of playing the dominant action when K is low and increases it when K is high (Fig. 4, III). Since the dominated action is socially optimal, this leads us to conclude that assortativity in cognition is welfare enhancing for low values of K and welfare decreasing for high values of K (Fig. 4, IV).
Secondly, we consider two repeated prisoner dilemmas, subplot (III) Fig. 2. Under deliberation, agents choose the weakly dominant action, S in game (a) (subplots, Fig. 2) and F in game (b) (subplots, Fig. 2). Let p be the probability of game (b) (subplots, Fig. 2). In this setting average payoffs are maximized when both agents choose the weakly dominant action, while other outcomes pay the same.
Intuitively, greater deliberation, i.e., a lower K, is beneficial because it makes agents choose the weakly dominant action (Fig. 5, I). The average payoff also increases for extreme values of p, close to either 0 or 1 (again Fig. 5, I), because also intuitive agents choose the weakly dominant action most of the time (Fig. 5, II). As already pointed out in the previous subsection, assortativity in cognition decreases the probability of interaction between an intuitive agent and a deliberative one, thus increasing the probability of interaction between two intuitive agents and between two deliberative agents. On the one hand, an increase of assortativity yields a direct effect on payoffs in that the increased likelihood of two deliberative agents interacting together allows an easier coordination on the weakly dominant action. On the other hand, there are other effects triggered by learning. To grasp these learning effects, we focus again on pairs with an intuitive agent and a deliberative one. Consider (everything remains the same when , with F and S switched). The most likely occurrence here is that agents play game (b) (Fig. 5), which happens with probability p, and that the intuitive agent plays action F (Fig. 5, II). Since the deliberative agent surely chooses F as well, they obtain the highest payoff b, which increases the likelihood of playing action F in the future. The least likely occurrence is that agents play game (a) (Fig. 5), which happens with probability , and that the intuitive agent plays action S (Fig. 5, II). Since the deliberative agent surely chooses S as well, they obtain the highest payoff b, which increases the likelihood of playing action S in the future. Since action F is more often the weakly dominant action, given , the former effect is stronger than the latter. 
To complete the picture, there are two other cases, in which the intuitive agent plays the dominated action, yielding no substantial effect on learning because both agents earn a payoff equal to c, even if for different actions. Overall, an increase in assortativity in cognition leads to a decrease in the rate at which intuitive agents play the action that is more often dominant (Fig. 5, III). In turn, this has a negative impact on average payoffs, and this impact is greater for extreme values of p, close to either 0 or 1 (Fig. 5, III). It turns out that, for extreme values of p, close to either 0 or 1, this negative indirect effect through learning more than offsets the positive direct effect on payoffs, resulting in the blue areas in Fig. 5, IV.
Discussion
Assortativity is a phenomenon characterizing social interactions in many contexts and along different dimensions. Our work explores a new dimension of assortativity, occurring at the cognitive level: the agents involved in the same interaction often exhibit similar degrees of cognitive effort. To the best of our knowledge, assortativity in cognition has not been considered and analyzed by the literature so far. In some cases, it is involved or even implied, but the focus was never on it. For instance, priming has been shown to affect the activation of cognitive processes39, hence interacting partners who are exposed to the same priming are more likely to rely on the same cognitive process. Recently, the connection between cognitive reflection and behavior in social media platforms was investigated40, identifying the existence of cognitive echo chambers in which users with similar cognitive reflection tend to cluster. Also, assortativity in actions often implies assortativity in cognition as a byproduct35. Assortativity in cognition along the temporal dimension emerges in evolutionary game theoretic models where cognitive processing and the environment in which agents interact affect each other41,42.
When assortativity in cognition emerges through assortativity in types, it also comes with assortativity in behavior35, at least if types are defined to include actions. When this is the case, it is impossible to disentangle the effect of assortativity in cognition from the effect of assortativity in behavior. Our result in “Learning intuitive cooperation through deliberation” section suggests that assortativity in cognition is able to promote cooperation per se, also in the absence of other forms of assortativity. This result is robust to changes, when we consider different entries in the payoff matrix (SI Appendix, Subsection 3.1) and different learning rates (SI Appendix, Subsection 3.2). The findings in “Learning intuitive cooperation through deliberation” section are based on simulations over 5000 periods, with 500 agents, for given values of the payoffs b and c and of the learning rate α. A greater value of b makes cooperation more profitable in the repeated interaction while, in the one-shot interaction, it has the same effect on cooperation and defection. Thus, greater values of b promote intuitive cooperation. In the SI Appendix (Subsection 3.3) we consider a variant in which deliberative decisions are based on myopic Q-learning with finer information, distinguishing between past performance of cooperation and defection under deliberation in the two types of interaction. We show that qualitatively similar results hold in that case as well: after relatively few periods agents learn to play the dominant strategy under deliberation, thus yielding substantially equivalent simulations once learning has occurred.
We stress that K is homogeneous and exogenous in our model. This is so because our aim is not to study the evolution of dual process reasoning, rather we want to focus on the effects of assortativity in cognition given dual process reasoning, for which the literature has already provided evolutionary arguments35,43,44. The exogeneity of K is also the reason why, differently from previous contributions in the literature35, we do not consider any cost of deliberation, in that deliberation is not modeled as a choice. Quite interestingly, we find that in our model the value of K that maximizes the average payoff is often strictly in between 0 and 1 (SI Appendix, Section 4).
In conclusion, assortativity in cognition rests on sound theoretical reasons and yields relevant consequences, in that it allows internalizing the external effects of one’s own cognition: the partner exerts a similar cognitive effort and hence behaves in a similar way. The evolution of behaviors is significantly affected by assortativity in cognition, with consequences on overall welfare that should be carefully evaluated case-by-case.
Model
Time is discrete and agents are randomly matched pairwise in each period to play one of two possible types of interaction. We consider three applications of the model that are based on different pairs of interaction, as represented in the three subplots of Fig. 2. In the following we describe the functioning of the model when payoffs are those in subplot (I), stressing that the only difference with the other applications is given by the underlying payoff matrices.
The two types of interaction are the one-shot prisoner dilemma, game (a), Fig. 2 subplot (I), that occurs with probability 1-p, and the repeated interaction, game (b), Fig. 2 subplot (I), that occurs with probability p. Two actions are available in both interactions, namely cooperation, C, and defection, D. When the two agents in an interaction play C, they both earn b irrespectively of the type of interaction. Similarly, when the two agents in an interaction play D, they both earn c irrespectively of the type of interaction. When the two agents choose different actions, the payoffs depend on the type of interaction: in the one-shot prisoner dilemma, the defecting player earns b+c and the cooperating agent earns 0; in the repeated prisoner dilemma, both agents earn c. We assume that b > c > 0, which makes D strictly dominant in the one-shot interaction, and C weakly dominant in the repeated interaction. This payoff structure has already been used in the literature35, with the only difference that c is added in every cell to avoid negative values.
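The payoff structure just described can be written compactly as follows. This is a sketch with illustrative values b = 4 and c = 1, chosen only to satisfy b > c > 0; the values used in the paper's simulations may differ.

```python
def one_shot_payoff(a1, a2, b=4.0, c=1.0):
    """Row player's payoff in the one-shot prisoner dilemma
    (b and c are illustrative values, not necessarily the paper's)."""
    if a1 == "C":
        return b if a2 == "C" else 0.0
    return b + c if a2 == "C" else c

def repeated_payoff(a1, a2, b=4.0, c=1.0):
    """Row player's payoff in the stylized repeated prisoner dilemma:
    mutual cooperation pays b, every other outcome pays c."""
    return b if a1 == "C" and a2 == "C" else c

# D strictly dominates in the one-shot interaction...
assert all(one_shot_payoff("D", a) > one_shot_payoff("C", a) for a in "CD")
# ...while C weakly dominates in the repeated interaction
assert all(repeated_payoff("C", a) >= repeated_payoff("D", a) for a in "CD")
```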
Agents are characterized by their memory, in which information about the past rewards obtained choosing the two different actions is stored. In each period, every agent updates the information about the past rewards obtained with the action played in that period, keeping unchanged the information about the past rewards obtained with the other action. Indeed, the memory of a generic agent i at time t, M_i(t), is made of two elements, the information about the past rewards obtained in the previous periods when playing cooperation, M_i^C(t), and the information about the past rewards obtained in the previous periods when playing defection, M_i^D(t):

M_i(t) = (M_i^C(t), M_i^D(t)).
In particular, if agent i plays cooperation at time t, then the agent’s memory is updated in the following way:

M_i^C(t+1) = (1-α) M_i^C(t) + α r_i(t),  M_i^D(t+1) = M_i^D(t),

with α ∈ (0,1] measuring the learning rate and r_i(t) being the reward obtained in the last period. Analogously, if agent i plays defection at time t, then the agent’s memory is updated in the following way:

M_i^D(t+1) = (1-α) M_i^D(t) + α r_i(t),  M_i^C(t+1) = M_i^C(t).
We note that, when the learning rate is equal to one, only the last reward obtained for each action matters.
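The memory update can be sketched as a one-line exponential smoothing rule. This is a minimal illustration; the function name and the symbol alpha for the learning rate are our notation.

```python
def update_memory(memory, action, reward, alpha=0.1):
    """Myopic Q-learning update: only the entry of the action just
    played moves, by a step of size alpha toward the last reward."""
    m = dict(memory)
    m[action] = (1 - alpha) * m[action] + alpha * reward
    return m

mem = {"C": 2.0, "D": 2.0}
mem = update_memory(mem, "C", 4.0, alpha=0.5)
assert mem == {"C": 3.0, "D": 2.0}   # D entry untouched
# with alpha = 1 only the last reward matters
assert update_memory(mem, "C", 0.0, alpha=1.0)["C"] == 0.0
```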
The decision process used by agents relies on either intuition or deliberation, with the latter following a more consequentialist rule (based on best reply) than the former (based on reinforcement learning).
Under intuition the agent is not able to recognize the type of occurring interaction. The intuitive decision is based on the information saved in memory: the action with the highest past reward is chosen, i.e., cooperation is chosen when M_i^C(t) > M_i^D(t), conversely defection is chosen when M_i^C(t) < M_i^D(t). In case of a tie, i.e., when M_i^C(t) = M_i^D(t), each action is chosen with one-half probability.
Under deliberation the agent is able to recognize the type of occurring interaction. The deliberative decision is driven by best-response. Defection is chosen in the one-shot prisoner dilemma because strictly dominant, while cooperation is chosen in the repeated prisoner dilemma, because weakly dominant.
Assortativity in cognition is measured with the parameter A ∈ [0,1]. Given each pair, with probability A the two agents are forced to use the same cognitive process, while with probability 1-A the cognitive processes of the two agents are independent. Each agent has probability K to rely on intuition and probability 1-K to rely on deliberation. The possible occurrences, with the associated probabilities, are represented in Fig. 6.
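The matching-and-decision stage of a single period can be sketched as follows. This is a minimal illustration under the assumptions above; the function names are ours, not the paper's.

```python
import random

def draw_cognition(K, A, rng=random):
    """Cognitive processes of a matched pair: with probability A a
    single draw is shared by both agents, otherwise two independent
    draws ("I" = intuition, "D" = deliberation)."""
    if rng.random() < A:                          # single shared draw
        mode = "I" if rng.random() < K else "D"
        return mode, mode
    return tuple("I" if rng.random() < K else "D" for _ in range(2))

def choose(mode, memory, game):
    """Intuition best-replies to memory; deliberation recognizes the
    game and plays D in the one-shot, C in the repeated interaction."""
    if mode == "D":
        return "D" if game == "one_shot" else "C"
    if memory["C"] != memory["D"]:
        return max(memory, key=memory.get)        # highest past reward
    return random.choice("CD")                    # tie: fair coin

# full assortativity forces identical processes in every pair
random.seed(0)
assert all(a == b for a, b in (draw_cognition(0.5, 1.0) for _ in range(100)))
```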
Markov process
When the learning rate is equal to one, the behavior of one agent i in the model, given the behavior of all the other agents, can be described through a discrete-time Markov process P, defined on a finite state space S and characterized by a transition matrix T. The state space is made of all the feasible memories of agent i, i.e., all the pairs (M_i^C, M_i^D). The transition matrix describes the probabilities of moving from each state to any other. Transition probabilities depend on the current memory, i.e., the state, the parameters K and p, and the probability of intuitive cooperation of the rest of the population, denoted by ρ. A probability distribution defined on S is a vector of probabilities μ = (μ_m), with m in S, such that the μ_m sum to one, where m denotes a memory and μ_m the probability that the agent has memory m. A probability distribution μ is said to be invariant if:

μ = μT.
In words, an invariant distribution remains unchanged in the Markov process as time progresses. Since the Markov process has a unique recurrent class, the invariant distribution exists and is unique. Once the invariant distribution is obtained, the probability of cooperation under intuition for agent i is the sum of the probabilities, in the invariant distribution, of the states in which M_i^C > M_i^D, plus half of the sum of the probabilities of the states in which M_i^C = M_i^D. Indeed, when M_i^C > M_i^D agents cooperate under intuition, while they randomly choose the intuitive response in the cases in which M_i^C = M_i^D. We denote with ρ_i the probability of intuitive cooperation in the invariant distribution for agent i. Finally, we introduce the consistency condition: in the long-run equilibrium of the model, the probability of intuitive cooperation of agent i is equal to that of the rest of the population, i.e., ρ_i = ρ.
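Once the transition matrix T is assembled, the invariant distribution can be computed by standard means. Below is a generic power-iteration sketch (not the paper's code; the state space here is an arbitrary two-state example, whereas the paper's states are the feasible memories):

```python
import numpy as np

def invariant_distribution(T, tol=1e-12):
    """Invariant distribution mu of a row-stochastic transition
    matrix T, i.e. mu = mu T, found by power iteration. Assumes a
    unique recurrent class (as in the model), so the limit is unique."""
    n = T.shape[0]
    mu = np.full(n, 1.0 / n)          # start from the uniform distribution
    while True:
        nxt = mu @ T                  # one step of the chain
        if np.abs(nxt - mu).max() < tol:
            return nxt
        mu = nxt

# two-state example with a unique recurrent class
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])
mu = invariant_distribution(T)
assert np.allclose(mu, mu @ T)        # mu is invariant
assert np.isclose(mu.sum(), 1.0)      # mu is a probability distribution
```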
In the SI Appendix (Subsection 2.1) we develop the analysis in detail for the simplifying case of full assortativity, i.e., A = 1.
Figure 7 represents the cooperation rate under intuition, distinguishing between the empirical frequencies obtained through simulations and the theoretical frequencies resulting from the long-run Markov chain analysis. For most values of p and K, the theoretical analysis overlaps with the simulations, with perceptible differences only for cooperation rates that are very close to one. See the SI Appendix (Subsection 2.2) for more details on this.
Acknowledgements
This paper was presented at the Economics Department seminars in Florence, at the “Learning, Evolution and Games” 2022 Conference held in Lucca, as well as at a number of informal meetings. All mistakes remain ours. We also gratefully acknowledge financial support from the Italian Ministry of Education, University and Research (MIUR) through the PRIN project Co.S.Mo.Pro.Be. “Cognition, Social Motives and Prosocial Behavior” (grant n. 20178293XT) and from the IMT School for Advanced Studies Lucca through the PAI project Pro.Co.P.E. “Prosociality, Cognition, and Peer Effects”.
Author contributions
E.B., L.B., and E.V. designed research, performed research, analyzed simulation data, and wrote the paper.
Data availability
The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.
Code availability
Accession codes: The code in Python is available at https://github.com/EugenioVicario/Assortativity_in_Cognition.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Ennio Bilancini, Leonardo Boncinelli and Eugenio Vicario.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-30301-y.
References
- 1. Currarini S, Jackson MO, Pin P. An economic model of friendship: Homophily, minorities, and segregation. Econometrica. 2009;77:1003–1045. doi: 10.3982/ECTA7528.
- 2. Fu F, Nowak MA, Christakis NA, Fowler JH. The evolution of homophily. Sci. Rep. 2012;2:1–6. doi: 10.1038/srep00845.
- 3. McPherson M, Smith-Lovin L, Cook JM. Birds of a feather: Homophily in social networks. Annu. Rev. Sociol. 2001;27:415–444. doi: 10.1146/annurev.soc.27.1.415.
- 4. Domingue BW, Fletcher J, Conley D, Boardman JD. Genetic and educational assortative mating among US adults. Proc. Natl. Acad. Sci. 2014;111:7996–8000. doi: 10.1073/pnas.1321426111.
- 5. Newman ME. Assortative mixing in networks. Phys. Rev. Lett. 2002;89:208701. doi: 10.1103/PhysRevLett.89.208701.
- 6. Jennings HS. The numerical results of diverse systems of breeding. Genetics. 1916;1:53. doi: 10.1093/genetics/1.1.53.
- 7. Wright S. Systems of mating. I. The biometric relations between parent and offspring. Genetics. 1921;6:111. doi: 10.1093/genetics/6.2.111.
- 8. Bergstrom TC. The algebra of assortative encounters and the evolution of cooperation. Int. Game Theory Rev. 2003;5:211–228. doi: 10.1142/S0219198903001021.
- 9. Bilancini E, Boncinelli L, Wu J. The interplay of cultural intolerance and action-assortativity for the emergence of cooperation and homophily. Eur. Econ. Rev. 2018;102:1–18. doi: 10.1016/j.euroecorev.2017.12.001.
- 10. Bergstrom TC. Measures of assortativity. Biol. Theory. 2013;8:133–141. doi: 10.1007/s13752-013-0105-3.
- 11. Alger I, Weibull JW. Homo moralis: Preference evolution under incomplete information and assortative matching. Econometrica. 2013;81:2269–2302. doi: 10.3982/ECTA10637.
- 12. Alger I, Weibull JW. Evolution and Kantian morality. Games Econ. Behav. 2016;98:56–67. doi: 10.1016/j.geb.2016.05.006.
- 13. Newton J. The preferences of homo moralis are unstable under evolving assortativity. Int. J. Game Theory. 2017;46:583–589. doi: 10.1007/s00182-016-0548-4.
- 14. Xie Y, Cheng S, Zhou X. Assortative mating without assortative preference. Proc. Natl. Acad. Sci. 2015;112:5974–5978. doi: 10.1073/pnas.1504811112.
- 15. Evans JSB. Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev. Psychol. 2008;59:255–278. doi: 10.1146/annurev.psych.59.103006.093629.
- 16. Kahneman D. A perspective on judgment and choice: Mapping bounded rationality. Am. Psychol. 2003;58:697. doi: 10.1037/0003-066X.58.9.697.
- 17. Sloman SA. The empirical case for two systems of reasoning. Psychol. Bull. 1996;119:3. doi: 10.1037/0033-2909.119.1.3.
- 18. Evans JSB. In two minds: Dual-process accounts of reasoning. Trends Cogn. Sci. 2003;7:454–459. doi: 10.1016/j.tics.2003.08.012.
- 19. Reber AS. Implicit learning and tacit knowledge. J. Exp. Psychol. Gen. 1989;118:219. doi: 10.1037/0096-3445.118.3.219.
- 20. Sun R, Merrill E, Peterson T. From implicit skills to explicit knowledge: A bottom-up model of skill learning. Cogn. Sci. 2001;25:203–244. doi: 10.1207/s15516709cog2502_2.
- 21. Melis AP, Semmann D. How is human cooperation different? Philos. Trans. R. Soc. B Biol. Sci. 2010;365:2663–2674. doi: 10.1098/rstb.2010.0157.
- 22. Harari YN. Sapiens: A Brief History of Humankind. Random House; 2014.
- 23. Bowles S, Gintis H. A Cooperative Species. Princeton University Press; 2011.
- 24. Hamilton WD. The genetical evolution of social behaviour. II. J. Theor. Biol. 1964;7:17–52. doi: 10.1016/0022-5193(64)90039-6.
- 25. Bowles S. The Moral Economy. Yale University Press; 2016.
- 26. Koduri N, Lo AW. The origin of cooperation. Proc. Natl. Acad. Sci. 2021;118:e2015572118. doi: 10.1073/pnas.2015572118.
- 27. Nowak MA. Five rules for the evolution of cooperation. Science. 2006;314:1560–1563. doi: 10.1126/science.1133755.
- 28. Axelrod R, Hamilton WD. The evolution of cooperation. Science. 1981;211:1390–1396. doi: 10.1126/science.7466396.
- 29. Rand DG, Nowak MA. Human cooperation. Trends Cogn. Sci. 2013;17:413–425. doi: 10.1016/j.tics.2013.06.003.
- 30. Traulsen A, Nowak MA. Evolution of cooperation by multilevel selection. Proc. Natl. Acad. Sci. 2006;103:10952–10955. doi: 10.1073/pnas.0602530103.
- 31. Imhof LA, Fudenberg D, Nowak MA. Evolutionary cycles of cooperation and defection. Proc. Natl. Acad. Sci. 2005;102:10797–10800. doi: 10.1073/pnas.0502589102.
- 32. Rand DG, Greene JD, Nowak MA. Spontaneous giving and calculated greed. Nature. 2012;489:427–430. doi: 10.1038/nature11467.
- 33. Rand DG, et al. Social heuristics shape intuitive cooperation. Nat. Commun. 2014;5:1–12. doi: 10.1038/ncomms4677.
- 34. Alós-Ferrer C, Garagnani M. The cognitive foundations of cooperation. J. Econ. Behav. Organ. 2020;175:71–85. doi: 10.1016/j.jebo.2020.04.019.
- 35. Bear A, Rand DG. Intuition, deliberation, and the evolution of cooperation. Proc. Natl. Acad. Sci. 2016;113:936–941. doi: 10.1073/pnas.1517780113.
- 36. Jagau S, van Veelen M. A general evolutionary framework for the role of intuition and deliberation in cooperation. Nat. Hum. Behav. 2017;1:1–6. doi: 10.1038/s41562-017-0152.
- 37. Watkins CJ, Dayan P. Q-learning. Mach. Learn. 1992;8:279–292. doi: 10.1007/BF00992698.
- 38. Zaki J, Mitchell JP. Intuitive prosociality. Curr. Dir. Psychol. Sci. 2013;22:466–470. doi: 10.1177/0963721413492764.
- 39. Dijksterhuis A, et al. Seeing one thing and doing another: Contrast effects in automatic behavior. J. Pers. Soc. Psychol. 1998;75:862. doi: 10.1037/0022-3514.75.4.862.
- 40. Mosleh M, Pennycook G, Arechar AA, Rand DG. Cognitive reflection correlates with behavior on Twitter. Nat. Commun. 2021;12:1–10. doi: 10.1038/s41467-020-20043-0.
- 41. Mosleh M, Kyker K, Cohen JD, Rand DG. Globalization and the rise and fall of cognitive control. Nat. Commun. 2020;11:1–10. doi: 10.1038/s41467-020-16850-0.
- 42. Rand DG, Tomlin D, Bear A, Ludvig EA, Cohen JD. Cyclical population dynamics of automatic versus controlled processing: An evolutionary pendulum. Psychol. Rev. 2017;124:626. doi: 10.1037/rev0000079.
- 43. Sherry DF, Schacter DL. The evolution of multiple memory systems. Psychol. Rev. 1987;94:439. doi: 10.1037/0033-295X.94.4.439.
- 44. Carruthers P. The Architecture of the Mind. Oxford University Press; 2006.