Abstract
Reputational concerns are believed to play a crucial role in explaining cooperative behaviour among non-kin humans. Individuals cooperate to avoid a negative social image if being branded as a defector reduces pay-offs from future interactions. Similarly, individuals sanction defectors to gain a reputation as a punisher, prompting future co-players to cooperate. But reputation can only effectively support cooperation if a sufficient number of individuals condition their strategies on their co-players' reputation, and if a sufficient number of group members are willing to record and transmit the relevant information about past actions. Using computer simulations, this paper argues that, starting from a pool of non-cooperative individuals, a reputation system based on punishment is likely to emerge and to drive the initial evolution of cooperative behaviour. However, once cooperation is established in a group, it is sustained mainly through a reputation mechanism based on cooperative actions.
Keywords: evolution of cooperation, indirect reciprocity, reputation, punishment
1. Introduction
Reputational considerations can promote cooperative behaviour in various ways. In indirect reciprocity with generosity image scoring [1,2], a player's reputation reflects past cooperation and defection. If defectors face some form of social sanctioning, typically in the form of withheld future cooperation or avoidance of interactions and thus exclusion from the group, individuals have incentives to avoid gaining a reputation as a defector [3–6]. Alternatively, individuals may seek to gain a reputation as a punisher of uncooperative behaviour [7–11]. Punishment of defection has been shown to promote cooperation [12–14] and is widespread across human societies [15]. Players are more likely to cooperate with a known punisher in order to avoid sanctions, and individuals are more likely to punish if they are being observed and thus have a possibility to build a reputation [16,17]. A prominent example of cooperation sustained through punishment-based reputation is found in societies with a culture of honour, where individuals seek revenge in order to defend a reputation of strength [18–20].
One limitation of a reputation mechanism is that players may not know the past behaviour of their opponent, since they do not necessarily observe every bilateral interaction in their group [21]. As a substitute for direct observation, individuals may rely on gossip to learn about a co-player's reputation [22–24]. While gossip is viewed as a low-cost way of information acquisition, it is not entirely cost-free: it involves costs in the form of time, effort spent evaluating gossip veracity, and memory devoted to retaining information [25,26]. These costs can prevent indirect reciprocity from emerging, as players will only participate in gossiping if there is a strictly positive pay-off, reducing the role of genetic drift [27,28]. Additionally, gossip can only be informative if some members of the group have knowledge about the actions of other players [29]. For this, players must be willing to share their experiences—who cooperated and who cheated, who punished and who did not—within the group, and group members must be willing to collect, retain and further spread this information [30–32]. The more group members intend to condition their strategies on an opponent's reputation, and thus have incentives to gain and retain reputational information, the more informative gossip will be [33]. This paper explicitly considers reputation as a social institution that can only arise and be effectively used if sufficient group members participate in it [34].
How a given reputation system can sustain cooperative actions has been widely studied. Cooperation is believed to be more likely to emerge when reputation is based on punishment rather than cooperation [7,35], yet cooperation based on social image and generosity scoring is an important factor in maintaining cooperation within groups, both in the laboratory [36–38] and in the field [39,40], and more so than punishment alone [41]. This paper explains this pattern and, at the same time, explains how a reputation system itself develops, a question less studied in the literature. It considers the joint emergence of both reputation and cooperation, and models the interdependence between the two: reputation can sustain cooperative behaviour, and a reputation system is itself sustained by individuals who condition their cooperative behaviour on a co-player's reputation.
2. Methods
I consider a Prisoner's Dilemma game with a punishment phase. Players build reputations over their lifetime, both for cooperative and for punitive behaviour, and may condition their actions on either type of reputation of their opponent. Knowledge about an opponent's reputation is acquired by gossiping with other group members. This process is costly and its success depends on the knowledge that others have. This paper studies which type of reputational information, if any, players seek to acquire, which type of reputation system thus emerges in the group, and how this evolves over time. The existence of costs in information gathering implies that players will only seek out information that is sufficiently valuable to them. Because the availability of information is endogenous, a reputation system will only emerge if a sufficient number of players actively participate.
A generation of size N plays a total of M = mN/2 rounds. In each round, two individuals are randomly paired to interact, so that each individual faces an expected m interactions during their lifetime. All interactions consist of a Prisoner's Dilemma game in the first stage, followed by a punishment stage. Individuals can condition strategies on their opponent's reputation, but not on the number of interactions that have occurred within a generation, since the latter is unknown. During the first stage of each round, individuals decide whether or not to contribute y to a common project. The sum of individual contributions is multiplied by r, 1 < r < 2, and subsequently divided equally within each pair. While total pay-offs are maximized if both players cooperate and contribute, non-contribution, i.e. defection, is the individually optimal strategy. In the second stage, players can choose to punish defection at a cost k > 0, inflicting damage of p > k on the co-player. Additionally, before the first stage and after learning the identity of the co-player, either player may choose to avoid the interaction at a cost φ, in which case no first or second stage is played by the pair. A constant is added to all outcomes to ensure positive pay-offs. Table 1 summarizes all parameters of the model as well as their corresponding values in the baseline specification.
Table 1.
Description of symbols used in the model and parameter values used in the baseline specification.
| parameter | description | value |
|---|---|---|
| r | public good multiplier | 1.6 |
| k | cost of punishing | 0.2 |
| p | impact of punishment | 0.5 |
| ρ | information acquisition cost | 0.01 |
| φ | avoidance cost | 0.05 |
| γ | relevance of active gossiping channel | 0.3 |
| μ | mutation rate | 0.01 |
| m | expected lifetime number of interactions | 30 |
| N | size of generation | 100 |
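The two-stage pay-offs described above can be sketched as follows. This is a minimal illustration in the spirit of the model; function and variable names are my own, not taken from the published simulation code.

```python
# Illustrative sketch of one interaction's pay-offs (names are hypothetical).
# First stage: a two-player public good; second stage: optional punishment
# of a defecting co-player. Defaults follow the baseline values in table 1.
def round_payoffs(coop1, coop2, pun1=False, pun2=False,
                  r=1.6, y=1.0, k=0.2, p=0.5):
    pot = r * y * (coop1 + coop2)          # contributions multiplied by r
    pay1 = pot / 2 - (y if coop1 else 0)   # equal split minus own contribution
    pay2 = pot / 2 - (y if coop2 else 0)
    if pun1 and not coop2:                 # punishing costs k, inflicts p > k
        pay1, pay2 = pay1 - k, pay2 - p
    if pun2 and not coop1:
        pay2, pay1 = pay2 - k, pay1 - p
    return pay1, pay2
```

With the baseline values, mutual cooperation yields 0.6 each while unilateral defection yields 0.8, so defection dominates in the first stage; a punished defector earns only 0.8 − p = 0.3, less than under mutual cooperation.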
Individuals build a reputation based on whether they cooperate or defect in the first stage (coop-rep) and on whether they punish in response to defection of their co-player (pun-rep). Both types of reputation are represented by a scoring system that takes on one of three values (−1, 0, 1). Contributing to a project increases an individual's coop-rep by one, while refusal to do so reduces it [1]. The pun-rep of an individual depends on second stage choices and only changes if the co-player defected during the first stage. Punishing the defector increases the pun-rep by one, failure to do so reduces it. Strategies can be made conditional on an opponent's reputation of either type, but reputational information is not freely available, and efforts to acquire this information imply a cost. These costs can include transaction costs from asking other group members about an individual's reputation, as well as memory capacity to store relevant information. Since gossip is more likely to be informative if other group members are also interested in this type of information [42], the probability of successful information acquisition is determined by the share of other individuals in the group that also condition their strategies on the same type of reputation. After paying the search cost ρ, an individual learns a co-player's reputation score of type j, j∈{coop-rep, pun-rep} with probability (1 − γ) + γπj where 0 ≤ γ ≤ 1 and πj is the share of individuals in the population who condition their strategy on reputation j, and thus have an interest in actively participating in gossip about reputational information. Parameter γ then denotes the relevance of this active gossip channel in information acquisition, while (1 − γ) represents alternative forms of gaining information, such as direct observation and casual gossiping by group members without personal involvement. γ = 0 corresponds to a benchmark where all interactions are publicly observable.
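The scoring rules and the gossip channel just described can be sketched directly from the text; the names below are illustrative, and only the update rule and the acquisition probability follow the model's definitions.

```python
import random

def update_scores(coop_rep, pun_rep, contributed, punished, opponent_defected):
    """Both scores take values in {-1, 0, 1}. coop-rep tracks first-stage
    actions; pun-rep changes only if the co-player defected."""
    coop_rep = max(-1, min(1, coop_rep + (1 if contributed else -1)))
    if opponent_defected:
        pun_rep = max(-1, min(1, pun_rep + (1 if punished else -1)))
    return coop_rep, pun_rep

def learns_reputation(pi_j, gamma=0.3):
    """After paying the search cost rho, the type-j score is learned with
    probability (1 - gamma) + gamma * pi_j, where pi_j is the share of
    players who condition their strategy on reputation type j."""
    return random.random() < (1 - gamma) + gamma * pi_j
```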
Individuals are characterized by three traits, A, B and C. Trait A ∈ {d, c, a} determines the first stage action when facing a co-player of unknown or neutral reputation, where A = d denotes defection, A = c indicates cooperation and a contribution to the common project and A = a indicates avoidance of the interaction. Trait B determines whether an individual intends to use reputational information in the first stage of the game. Unless B = none, players seek to acquire information on the co-players' reputation and, if available, condition their strategies on it. The value of B is of the form rep-type( ± )action and determines whether a player seeks to learn the coop-rep or pun-rep of the co-player, as well as the corresponding action in case the reputation is positive or negative. For example, a player with B = pun(+)c will cooperate with a co-player who has a reputation as a punisher, while a player with B = coop(−)a will avoid interacting with a known defector. In total, there are 13 possible values for B, resulting in 39 combinations of traits A and B that determine first stage behaviour. I remove the 12 combinations for which the conditional action determined by B is the same as the action prescribed by A. Finally, C is an indicator of second stage punishment actions, taking on one of four values: C = np for individuals who never punish, C = pd for punishers of defection (responsible punishment), C = pc for punishers of cooperation (anti-social) and C = p for players who punish everybody (spiteful).
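The trait space can be enumerated directly, which confirms the counts given in the text (13 values of B, 39 combinations of A and B, 27 after removing redundant ones); the string encoding of trait values is my own.

```python
from itertools import product

A_values = ['d', 'c', 'a']                        # defect / cooperate / avoid
B_values = ['none'] + [f'{rep}({sign}){act}'
                       for rep in ('coop', 'pun')
                       for sign in ('+', '-')
                       for act in ('d', 'c', 'a')]    # 1 + 2*2*3 = 13 values
C_values = ['np', 'pd', 'pc', 'p']                # second-stage punishment types

# drop the combinations where B's conditional action merely repeats A
first_stage = [(a, b) for a, b in product(A_values, B_values)
               if b == 'none' or b[-1] != a]      # 39 - 12 = 27 combinations
```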
After M rounds of interactions, a generation is replaced by a new generation of offspring. Each generation is of equal size, and offspring inherit traits A, B and C from their parent through a cultural transmission process akin to genetic transmission. The parent–child transmission of traits fails with probability μ, in which case the child's characteristics are drawn randomly from a distribution with equal probability for all combinations of traits. This process of random mutation introduces new strategies into a group. The parent of each offspring is drawn with replacement from members of the old generation, with probabilities proportional to their accumulated life time pay-offs. Pay-offs from interactions thus translate to reproductive success.
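The transmission step amounts to fitness-proportional resampling with mutation, which can be sketched as follows (illustrative names; the sketch assumes pay-offs have already been shifted to be positive, as the model guarantees).

```python
import random

def next_generation(types, payoffs, all_types, mu=0.01):
    """Each child copies a parent drawn with replacement in proportion to
    accumulated (positive) lifetime pay-offs; with probability mu the
    transmission fails and traits are drawn uniformly at random."""
    parents = random.choices(types, weights=payoffs, k=len(types))
    return [random.choice(all_types) if random.random() < mu else t
            for t in parents]
```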
3. Results
(a). Simulation results
All results are averages over 20 independent simulations of 20 000 generations, starting with an initial generation consisting only of non-punishing defectors who do not use reputational information (d–none–np), and retaining only the last 5000 generations of each simulation. I vary the parameters that determine the costs of information gathering (ρ) and gossip relevance (γ). Unless otherwise stated, N = 100, m = 30, μ = 0.01, r = 1.6, k = 0.2, p = 0.5, φ = 0.05. I consider three scenarios, which differ in the type of reputation systems that are potentially available: in Sc players may use only coop-rep, in Sp only pun-rep can be used, while in Scp both reputational types are theoretically available.
Figure 1 shows average rates of cooperation for all three scenarios, as a function of information acquisition costs (a) and the relevance of gossip (b). Figure 1 in the electronic supplementary material presents further results for varying levels of the public good multiplier (r), impact of punishment (p), mutation rate (μ) and expected number of lifetime interactions (m). Across all parametrizations, cooperation rates are lowest in Sc when only coop-rep is available, rarely exceeding 20%. Cooperation is significantly more common in Sp, confirming the result in [35] that punishment-based reputation systems are a more likely driver of the emergence of cooperation than reputation systems based on cooperative actions. Cooperation rates are by far the highest when both reputation systems are available (Scp), indicating that while typically failing to establish cooperation on its own, a cooperation-based reputation system acts as an important stabilizer when paired with punishment-based reputation.
Figure 1.
Share of cooperative acts as a function of information acquisition costs (a) and relevance of active gossiping channel (b). Cooperation rates reach high levels only when punishment scoring (pun-rep) is available (short-dashed line). While generosity scoring (coop-rep) fails to establish cooperation in significant numbers (long-dashed line), it substantially increases cooperation rates over pun-rep alone when both reputation types are available (solid line). γ = 0.3 in (a) and ρ = 0.01 in (b). (Online version in colour.)
A similar picture emerges when examining the type of reputation system used in each case. The share of players who condition their strategy on an opponent's reputation is shown in figure 2. In scenario Sc, which only considers coop-rep, a reputation system fails to emerge on a large scale out of an initial pool of defectors when information costs are non-zero or when gossiping is relevant. Punishment-based reputation systems are prominent in both Scp and Sp. Interestingly, in Scp with both reputation systems available, groups rely as much on coop-rep as they do on pun-rep, despite the fact that coop-rep fails to emerge in significant numbers by itself. This result indicates that while a punishment-based reputation is crucial for establishing cooperation, cooperation is likely to be maintained in the long run by a reputation system based on a positive social image through cooperative actions [41].
Figure 2.
Share of players who condition their actions on a reputation type, as a function of information acquisition costs (a) and relevance of active gossiping channel (b). When both scoring types are available (solid lines), generosity scoring (blue lines) is used as much as punishment scoring (red lines). γ = 0.3 in (a) and ρ = 0.01 in (b).
Table 2 provides an overview of the most frequent types across the three scenarios found in the baseline specification with γ = 0.3 and ρ = 0.01. Strict defectors (d–none) are the most frequent types in both Sc (76.8%) and Sp (50.9%). Strict cooperators (c–none) appear in substantial numbers in Sp (21.5%), indicating that they invade groups of players that use punishment-based reputation. In Scp, reputational information is widely used, with the most frequent types being cooperators who defect against non-punishers (18.2%), who avoid players with a reputation as defectors (16.6%), or who defect against known defectors (15.2%). Non-punishing types are frequent in both Sc (54.1%) and Sp (40.5%). Anti-social punishers constitute a substantial share in Sc with 37.4%, presumably because they can drift in when defection is the near-universal first-stage action. Responsible punishers are widespread in both Sp and Scp, and represent the most frequent punishing type in the latter scenario, where both reputational systems can be used (55.7%).
Table 2.
Ten most frequent types (with respect to first stage strategies) across the three scenarios in the top panel, and frequency of punishing types during second stage in the bottom panel, both using the baseline parametrization. (The left columns report results when only cooperation-based reputation is available, and in the middle columns only punishment-based reputation. Both reputational types are available in the scenario reported in the right columns.)
| coop-rep only | | pun-rep only | | both rep types | |
|---|---|---|---|---|---|
| type | share | type | share | type | share |
| d–none | 0.768 | d–none | 0.509 | c–pun(−)d | 0.182 |
| c–none | 0.055 | c–none | 0.215 | c–coop(−)a | 0.166 |
| c–coop(−)a | 0.036 | c–pun(−)d | 0.099 | c–coop(−)d | 0.152 |
| a–none | 0.034 | d–pun(+)a | 0.055 | c–none | 0.150 |
| c–coop(−)d | 0.034 | a–none | 0.035 | d–none | 0.077 |
| d–coop(+)c | 0.020 | d–pun(+)c | 0.032 | d–pun(+)a | 0.056 |
| d–coop(+)a | 0.014 | a–pun(−)d | 0.019 | c–pun(−)a | 0.054 |
| a–coop(+)c | 0.010 | c–pun(−)a | 0.013 | d–pun(+)c | 0.036 |
| d–coop(−)a | 0.010 | d–pun(−)a | 0.005 | d–coop(−)a | 0.022 |
| a–coop(−)d | 0.010 | a–pun(+)c | 0.004 | d–coop(+)c | 0.014 |
| no punishing | 0.541 | no punishing | 0.405 | responsible | 0.557 |
| anti-social | 0.374 | responsible | 0.299 | no punishing | 0.337 |
| responsible | 0.073 | anti-social | 0.267 | anti-social | 0.072 |
| spiteful | 0.013 | spiteful | 0.030 | spiteful | 0.034 |
How does cooperation emerge when both reputational systems are available? This is illustrated in figure 3a, which tracks the evolution of types in the 500 generations before and 1500 generations after the first widespread emergence of cooperation in a simulation. Round zero in figure 3 is defined as the first generation in a given simulation with a share of cooperative actions of at least 50%. All depicted series are averages across 20 simulations. The initial emergence of a cooperative society is preceded by a surge in players who use pun-rep followed by a surge in punishers. Players using coop-rep start to increase together with cooperation rates, while pun-rep peaks soon after t = 0, after which its use starts to decline rapidly. As cooperation becomes established in the group, coop-rep is the main reputation mechanism used. On average, both types of reputation keep being used by a significant number of players in the long run (around 40% each), but they rarely co-exist in large numbers within the same generation. This can be seen in the lower panel of figure 3, which tracks the share of simulations in which a substantial portion of all players (greater than or equal to 25%) condition their action on a specific type of reputational information. In the vast majority of cases, one of the two types of reputation is used by at least a quarter of all members of a generation, yet it is rarely the case that within a generation, both reputational types are used by at least 25% of all players.
Figure 3.
(a) Evolution of share of cooperation, punishers, and of players who condition their actions on a reputation type, before and after the initial emergence of cooperation, averaged across simulations. (b) Share of simulations where a reputational type is widely used (by at least 25% of players): pun-rep only (red), coop-rep only (blue) or both (grey). In both panels, round 0 denotes the first generation in a given simulation with at least 50% cooperation.
(b). Analytical results
To provide a better understanding of the mechanism at work, this section discusses analytical results based on a restricted version of the model. The set of first-stage strategies is limited to the five that appear most frequently in the simulations (frequencies of types can be found in table 2, right column): strict defectors (d–none), strict cooperators (c–none), exploiters of non-punishers (c–pun(−)d), avoiders of defectors (c–coop(−)a), and defect-against-defectors (c–coop(−)d). I consider both non-punishers and responsible punishers, and abstract from limited availability of reputational information (γ = 1). Since all simulations start from a pool of non-punishing strict defectors, I analyse the conditions under which other strategies can invade, and whether any strategy that supports cooperation is evolutionarily stable (an ESS). I assume that a player's reputation corresponds to their action when playing against the predominant type of the current generation. For example, if strict defectors are the predominant type, then a defect-against-defectors type will defect in all interactions and thus gain a reputation as a defector. Similarly, a responsible punisher will punish when playing against the predominant type and have a reputation as a punisher. All relevant pay-offs are given in the electronic supplementary material, table S1.
A pool of non-punishing strict defectors (d–none–np) is an ESS with respect to c–none and c–coop(−)a, but as long as informational costs are zero, both c–pun(−)d and c–coop(−)d can invade through random drift. Thus, both a punishment-based and a cooperation-based reputation system can theoretically evolve out of the initial pool. Exploiters of non-punishers (c–pun(−)d) are neutral to strict defectors when playing against any type except all punishers, where they are at an advantage as long as p > 1 − r/2, i.e. punishment is sufficiently severe. This supports the invasion of pun-rep players even if small informational costs exist, as long as the mutation rate μ is not too small and occasional punishers thus not too rare. Once sufficient players using pun-rep are present, punishing becomes a profitable strategy and c–pun(−)d–pd can fully invade. Alternatively, cooperation-based reputation can emerge, since c–coop(−)d can invade a pool of non-punishing strict defectors through random drift if informational costs are zero, but c–coop(−)d is strictly inferior to d–none when playing against a non-punishing strict cooperator, making an invasion supported by cooperation-based reputation less likely than by a punishment-based one.
But punishing exploiters of non-punishers are themselves not an ESS. When c–pun(−)d–pd is the dominant type, cooperation is the prevailing first-stage action and non-punishers thus go undetected. Informational costs support the invasion by strict cooperators (c–none), which can in turn be invaded by strict defectors. A pool of players using pun-rep can also be invaded by punishing avoiders of defectors (c–coop(−)a–pd) through random drift, since they receive the same pay-off as punishing exploiters of non-punishers when playing any type except rare strict defectors. In this case, coop-rep players are at an advantage as long as avoidance costs are not too large (φ < k + 1 − r/2). A cooperation-based reputation is thus more likely to emerge out of a group where cooperation has already been established through pun-rep than directly from a pool of defectors.
Non-punishing avoiders of defectors are an ESS with respect to strict defectors, but they are neutral to strict cooperators. They are at an advantage compared to the latter when playing against rare strict defectors as long as φ < 1 − r/2. Thus, provided that avoiding interactions is not too costly, cooperation-based reputation appears to be the more stable reputational system once established as the dominant strategy. Nevertheless, either reputational system can potentially be invaded by strict cooperators and, subsequently, by strict defectors.
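Under the baseline parametrization of table 1, each of the three inequalities derived above holds, which can be checked directly:

```python
# Check the invasion/stability conditions from the analysis against the
# baseline parameters (r, k, p, phi) of table 1.
r, k, p, phi = 1.6, 0.2, 0.5, 0.05

assert p > 1 - r / 2         # punishment severe enough: 0.5 > 0.2
assert phi < k + 1 - r / 2   # avoidance cheap enough vs pun-rep: 0.05 < 0.4
assert phi < 1 - r / 2       # avoiders beat strict cooperators: 0.05 < 0.2
```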
4. Discussion
For an individual that intends to be uncooperative in a public goods game, it is of first-order importance to know if this behaviour will be sanctioned, and thus to learn if co-players have punished anti-social behaviour in the past. For such individuals, it is beneficial to learn the punishment reputation of co-players, even if it is costly to do so. Reputations spread through gossip, and people gossip more about topics that they are personally interested in [43], and topics that they are particularly anxious about [44]. In a group with a large share of uncooperative individuals, there is widespread interest in learning about potential punishers, and a reputation mechanism based on punitive action is thus likely to emerge. Figure 4 shows that among players that defect when matched with an unknown co-player, 44.1% condition their strategies on a co-player's punitive reputation when possible.
Figure 4.
Share of individuals who condition their strategy on a co-player's reputation, whenever available. Defectors and cooperators are classified as such by their respective strategies against an unknown player. Defectors mainly use punishment-based reputation (pun-rep), while cooperators rely more heavily on cooperation-based reputation (coop-rep). The numbers shown are averages from 20 independent simulations with parameters set at γ = 0.3 and ρ = 0.01. (Online version in colour.)
This reputation mechanism allows for the appearance of punishers and of conditional cooperators, and thus of the initial evolution of cooperation. Punishment of uncooperative behaviour pays off if it results in gaining a reputation as a punisher, and if sufficient co-players cooperate conditional on this reputation. On the other hand, if acquiring reputational information is costly, and if the share of punishing individuals in the group is sufficiently large, it can be beneficial to ignore reputations and to treat any co-player as a punisher, and thus to cooperate unconditionally. Information about a co-player's prior punishing behaviour is of little value for unconditional cooperators, and they are thus unlikely to learn and further spread the relevant gossip. As unconditional cooperators invade a population, punishment-based reputation will cease to be of importance.
Instead, and unlike defectors, cooperators do have a vital interest in learning about prior uncooperative behaviour of their co-players, in order to avoid being left with the sucker's pay-off. Cooperation-based reputation is used more often than the punishment-based one among individuals who cooperate with unknown co-players (figure 4, 44.2% compared to 35.5%). In groups with a sufficiently large share of cooperators, a reputation mechanism based on cooperative behaviour can arise. Punishing individuals may still exist in such a society, but punishment no longer pays off, as it is not supported by a punishment-based reputation mechanism, and it is not crucial for sustaining cooperative behaviour, since the latter relies instead on a system of cooperation-based reputation.
These results indicate that the emergence of cooperation in humans may not have been monotonic and may have been driven by more than just one factor. A reputation system that supported the initial appearance of cooperative interactions does not have to be the same as the one that maintains cooperation in a group. These findings can also help reconcile two lines of contradictory evidence: on the one hand the evidence of punishment of defectors as a crucial initial driver of the evolution of cooperation, and on the other hand the experimental findings indicating that punishment does not necessarily increase cooperation [45], that punishers can end up being worse off than non-punishers [46], and that antisocial punishment, i.e. the punishment of cooperators, is widespread [47].
Supplementary Material
Acknowledgments
I am grateful to two anonymous reviewers for providing very thoughtful comments on this manuscript.
Data accessibility
All Python code used to run simulations, as well as simulation output data using the benchmark specification are available in the Dryad Digital Repository at http://dx.doi.org/10.5061/dryad.v046c7g [48].
Competing interests
The author of this manuscript has no competing interests.
Funding
No funding has been received for this article.
References
- 1. Nowak MA, Sigmund K. 1998. Evolution of indirect reciprocity by image scoring. Nature 393, 573–577. (doi:10.1038/31225)
- 2. Milinski M, Semmann D, Krambeck H-J. 2002. Reputation helps solve the ‘tragedy of the commons’. Nature 415, 424–426. (doi:10.1038/415424a)
- 3. Gintis H, Smith EA, Bowles S. 2001. Costly signaling and cooperation. J. Theoret. Biol. 213, 103–119. (doi:10.1006/jtbi.2001.2406)
- 4. Panchanathan K, Boyd R. 2004. Indirect reciprocity can stabilize cooperation without the second-order free rider problem. Nature 432, 499–502. (doi:10.1038/nature02978)
- 5. Dal Bó P. 2005. Cooperation under the shadow of the future: experimental evidence from infinitely repeated games. Am. Econ. Rev. 95, 1591–1604. (doi:10.1257/000282805775014434)
- 6. Sylwester K, Roberts G. 2013. Reputation-based partner choice is an effective alternative to indirect reciprocity in solving social dilemmas. Evol. Hum. Behav. 34, 201–206. (doi:10.1016/j.evolhumbehav.2012.11.009)
- 7. Sigmund K, Hauert C, Nowak MA. 2001. Reward and punishment. Proc. Natl Acad. Sci. USA 98, 10 757–10 762. (doi:10.1073/pnas.161155698)
- 8. Brandt H, Hauert C, Sigmund K. 2003. Punishment and reputation in spatial public goods games. Proc. R. Soc. Lond. B 270, 1099–1104. (doi:10.1098/rspb.2003.2336)
- 9. Hilbe C, Sigmund K. 2010. Incentives and opportunism: from the carrot to the stick. Proc. R. Soc. B 277, 2427–2433. (doi:10.1098/rspb.2010.0065)
- 10. dos Santos M, Rankin DJ, Wedekind C. 2011. The evolution of punishment through reputation. Proc. R. Soc. B 278, 371–377. (doi:10.1098/rspb.2010.1275)
- 11. Hilbe C, Traulsen A. 2012. Emergence of responsible sanctions without second order free riders, antisocial punishment or spite. Sci. Rep. 2, 458. (doi:10.1038/srep00458)
- 12. Fehr E, Gächter S. 2000. Cooperation and punishment in public goods experiments. Am. Econ. Rev. 90, 980–994. (doi:10.1257/aer.90.4.980)
- 13. Fehr E, Gächter S. 2002. Altruistic punishment in humans. Nature 415, 137–140. (doi:10.1038/415137a)
- 14. Rockenbach B, Milinski M. 2006. The efficient interaction of indirect reciprocity and costly punishment. Nature 444, 718–723. (doi:10.1038/nature05229)
- 15. Henrich J et al. 2006. Costly punishment across human societies. Science 312, 1767–1770. (doi:10.1126/science.1127333)
- 16. Fehr E, Fischbacher U. 2003. The nature of human altruism. Nature 425, 785–791. (doi:10.1038/nature02043)
- 17. Kurzban R, DeScioli P, O'Brien E. 2007. Audience effects on moralistic punishment. Evol. Hum. Behav. 28, 75–84. (doi:10.1016/j.evolhumbehav.2006.06.001)
- 18. Nisbett RE, Cohen D. 1996. Culture of honor: the psychology of violence in the South. Boulder, CO: Westview Press.
- 19. McElreath R. 2003. Reputation and the evolution of conflict. J. Theoret. Biol. 220, 345–357. (doi:10.1006/jtbi.2003.3166)
- 20. Nowak A, Gelfand MJ, Borkowski W, Cohen D, Hernandez I. 2016. The evolutionary basis of honor cultures. Psychol. Sci. 27, 12–24. (doi:10.1177/0956797615602860)
- 21. Nowak MA, Sigmund K. 1998. The dynamics of indirect reciprocity. J. Theoret. Biol. 194, 561–574. (doi:10.1006/jtbi.1998.0775)
- 22. Sommerfeld RD, Krambeck H-J, Semmann D, Milinski M. 2007. Gossip as an alternative for direct observation in games of indirect reciprocity. Proc. Natl Acad. Sci. USA 104, 17 435–17 440. (doi:10.1073/pnas.0704598104)
- 23. Gallo E, Yan C. 2015. The effects of reputational and social knowledge on cooperation. Proc. Natl Acad. Sci. USA 112, 3647–3652. (doi:10.1073/pnas.1415883112)
- 24. Giardini F, Vilone D. 2016. Evolution of gossip-based indirect reciprocity on a bipartite network. Sci. Rep. 6, 37931. (doi:10.1038/srep37931)
- 25. Hess NH, Hagen EH. 2006. Psychological adaptations for assessing gossip veracity. Hum. Nat. Interdisc. Biosoc. Persp. 17, 337–354. (doi:10.1007/s12110-006-1013-z)
- 26. Sasaki T, Okada I, Nakai Y. 2016. Indirect reciprocity can overcome free-rider problems on costly moral assessment. Biol. Lett. 12, 20160341. (doi:10.1098/rsbl.2016.0341)
- 27. Leimar O, Hammerstein P. 2001. Evolution of cooperation through indirect reciprocity. Proc. R. Soc. Lond. B 268, 745–753. (doi:10.1098/rspb.2000.1573)
- 28. Suzuki S, Kimura H. 2013. Indirect reciprocity is sensitive to costs of information transfer. Sci. Rep. 3, 1435. (doi:10.1038/srep01435)
- 29. Mohtashemi M, Mui L. 2003. Evolution of indirect reciprocity by social information: the role of trust and reputation in evolution of altruism. J. Theoret. Biol. 223, 523–531. (doi:10.1016/S0022-5193(03)00143-7)
- 30. Ben-Porath E, Kahneman M. 1996. Communication in repeated games with private monitoring. J. Econ. Theory 70, 281–297. (doi:10.1006/jeth.1996.0090)
- 31. Greif A, Kingston C. 2011. Institutions: rules or equilibria? In Political economy of institutions, democracy and voting (eds N Schofield, G Caballero), pp. 13–43. Berlin, Germany: Springer.
- 32. Feinberg M, Willer R, Stellar J, Keltner D. 2012. The virtues of gossip: reputational information sharing as prosocial behavior. J. Person. Soc. Psychol. 102, 1015–1030. (doi:10.1037/a0026650)
- 33.Sommerfeld RD, Krambeck H-J, Milinski M. 2008. Multiple gossip statements and their effect on reputation and trustworthiness. Proc. R. Soc. B 275, 2529–2536. ( 10.1098/rspb.2008.0762) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Craik KH. 2008. Reputation: a network interpretation. Oxford, UK: Oxford University Press. [Google Scholar]
- 35.dos Santos M, Wedekind C. 2015. Reputation based on punishment rather than generosity allows for evolution of cooperation in sizable groups. Evol Hum. Behav. 36, 59–64. ( 10.1016/j.evolhumbehav.2014.09.001) [DOI] [Google Scholar]
- 36.Wedekind C, Milinski M. 2000. Cooperation through image scoring in humans. Science 288, 850–852. ( 10.1126/science.288.5467.850) [DOI] [PubMed] [Google Scholar]
- 37.Milinski M, Semmann D, Bakker TC, Krambeck H-J. 2001. Cooperation through indirect reciprocity: image scoring or standing strategy? Proc. R. Soc. Lond. B 268, 2495–2501. ( 10.1098/rspb.2001.1809) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Seinen I, Schram A. 2006. Social status and group norms: indirect reciprocity in a repeated helping experiment. Eur. Econ. Rev. 50, 581–602. ( 10.1016/j.euroecorev.2004.10.005) [DOI] [Google Scholar]
- 39.Yoeli E, Hoffman M, Rand DG, Nowak MA. 2013. Powering up with indirect reciprocity in a large-scale field experiment. Proc. Natl Acad. Sci. USA 110, 10 424–10 429. ( 10.1073/pnas.1301210110) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.van Apeldoorn J, Schram A. 2016. Indirect reciprocity; a field experiment. PLoS ONE 11, e0152076 ( 10.1371/journal.pone.0152076) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Grimalda G, Pondorfer A, Tracer D. 2016. P: Social image concerns promote cooperation more than altruistic punishment. Nat. Commun. 7, 12288 ( 10.1038/ncomms12288) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Denti T. 2017. Network effects in information acquisition. Working paper. Princeton University, Princeton, NJ, USA. [Google Scholar]
- 43.Dunbar RI, Marriott A, Duncan ND. 1997. Human conversational behavior. Hum. Nat. 8, 231–246. ( 10.1007/BF02912493) [DOI] [PubMed] [Google Scholar]
- 44.Anthony S. 1973. Anxiety and rumor. J. Soc. Psychol. 89, 91–98. ( 10.1080/00224545.1973.9922572) [DOI] [PubMed] [Google Scholar]
- 45.Wu J-J, Zhang B-Y, Zhou Z-X, He Q-Q, Zheng X-D, Cressman R, Tao Y. 2009. Costly punishment does not always increase cooperation. Proc. Natl Acad. Sci. USA 106, 17 448–17 451. ( 10.1073/pnas.0905918106) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Dreber A, Rand DG, Fudenberg D, Nowak MA. 2008. Winners don't punish. Nature 452, 348–351. ( 10.1038/nature06723) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Herrmann B, Thöni C, Gächter S. 2008. Antisocial punishment across societies. Science 319, 1362–1367. ( 10.1126/science.1153808) [DOI] [PubMed] [Google Scholar]
- 48.Schlaepfer A. 2018. Data from: The emergence and selection of reputation systems that drive cooperative behaviour Dryad Digital Repository. ( 10.5061/dryad.v046c7g) [DOI] [PMC free article] [PubMed]
Data Availability Statement
All Python code used to run the simulations, as well as the simulation output data for the benchmark specification, are available in the Dryad Digital Repository at http://dx.doi.org/10.5061/dryad.v046c7g [48].



