PLOS ONE. 2014 Aug 19;9(8):e105126. doi: 10.1371/journal.pone.0105126

Punishment Based on Public Benefit Fund Significantly Promotes Cooperation

Xiuling Wang 1, Jie Wu 1, Gang Shu 2, Ya Li 1,*
Editor: Matjaz Perc
PMCID: PMC4138163  PMID: 25137051

Abstract

In the prisoner's dilemma game (PD game), punishment is the most frequently used tool to promote cooperation. However, the outcome varies when different punishment approaches are applied. Here the PD game is studied on a square lattice under different punishment patterns. Taxation, a common tool for regulating the economy, is widely used in human society. Inspired by this idea, players in this study pay taxes in proportion to their payoff level. A public benefit fund is thereby established and used to punish defectors. There are two main punishment methods: slight intensity of punishment (SLP) and severe intensity of punishment (SEP). When the total public benefit fund is kept relatively fixed, SLP extends further, meaning that more defectors are punished; by contrast, SEP covers a smaller range. It is of interest to verify whether these two measures can promote cooperation and which one is more efficient. Simulation results reveal that both of them promote cooperation remarkably. Specifically, SLP shows a constant advantage in terms of both the fraction of cooperators and the average payoff.

Introduction

As early as primitive society, humans learned to work in groups to capture prey. Even nowadays, there is no doubt that cooperative behavior exists widely in biological, social and economic systems [1]. Understanding the evolution of cooperation among unrelated individuals is still a major challenge for many natural and social scientists [2]. Thus far, evolutionary game theory [3]–[9] has provided a common mathematical framework for this problem. In particular, the classical prisoner's dilemma (PD) game describes the conflict between individuals; it is commonly employed in biology and applied to many non-human species, and its extensions have therefore been researched widely [10]–[12].

Since the pioneering work of Nowak and May [10], evolutionary games have been widely studied on lattices [13], [14] and complex networks [15]–[18]. Previous studies found that mechanisms such as kin selection [2], [19], the time scale of strategy updating [20], [21] and spatial topology [22]–[24] play an important role in the emergence of cooperation. Recently, a simple strategy-changing rule based on the value of a single parameter, which influences the selection of players that are viewed as potential sources of the new strategy, was adopted [25]. Results revealed that increasing the probability of adopting the strategy from the fittest player within reach (setting this parameter positive) promotes cooperation. Ref. [26] studied the correlation between payoff and the increasing age of players and found that moderate values of this correlation allow cooperators to outcompete defectors. In [27], the time course of cooperation evolution under different evolution rules was studied. It was found that the formation of the perfect C cluster at the end of the enduring period and the expanding fashion of the perfect C cluster during the expanding period are the two factors that determine the final cooperation level. Ref. [28] studied the evolution of cooperation under two different evolutionary games in which a fraction of each player's payoff gained from direct game interactions is shared with neighbors, where this fraction determines the degree of relatedness among the neighboring players. It found that closer relatedness can remarkably promote cooperation in the context of both games. Moreover, Ref. [29] investigated the emergence of cooperation on a square lattice when adopting Dempster-Shafer theory, an important tool for decision analysis and prediction [30]–[33], to combine evaluations from the points of view of payoff and environment. Simulation results revealed that this comprehensive strategy-updating method promotes cooperation significantly. Most recently, evolutionary games have also been studied on interdependent networks [34]–[37]. Ref. [34] focused on the evolution of public cooperation on two interdependent networks that are connected through a utility function, which determines to what extent payoffs in one network influence the players in the other network. Results indicated that the stronger the bias in the utility function, the higher the level of public cooperation. Ref. [35] revealed that only an intermediate density of sufficiently strong interactions between networks warrants an optimal resolution of social dilemmas. In [36], two-layer scale-free networks with all possible combinations of degree mixing were studied, where one layer is used for the accumulation of payoffs and the other for strategy updating. It turned out that breaking this symmetry impedes the evolution of cooperation. Ref. [37] showed that self-organized interdependence between networks helps to yield optimal conditions for the evolution of cooperation.

Previous studies remind us that defection may lead to the tragedy of the commons [38]. To overcome this unfavorable outcome, a great number of measures have been identified to promote cooperation. Typical measures include reward [39], [40] and punishment [41]–[45]. Here, we focus on punishment. Nevertheless, punishment is costly. Unlike most studies, in which the cost of punishment is paid by cooperators [44], [46]–[50], we collect a public benefit fund by charging players taxes according to their payoff level, and this fund is used to cover the cost of punishment. Furthermore, we take both the punishment intensity and the punishment range into consideration. However, the two cannot be maximized at the same time: limited by resources, a severe inspection system usually works only on a small scope, because establishing a severe system consumes considerable resources, and vice versa. Hence there are mainly two measures for punishment: slight intensity of punishment (SLP) and severe intensity of punishment (SEP). In this paper, we investigate whether this new scheme has a positive impact on the emergence of cooperation and which pattern is more effective. Simulation results indicate that SLP is more efficient.

Model

The PD game is conducted on a square lattice of size L×L with periodic boundary conditions. As a matter of routine [51], the payoffs are as follows: T = b, where T is the temptation to defect and 1 < b < 2; R = 1, where R is the reward for mutual cooperation; and P = S = 0, where P is the punishment for mutual defection and S is the sucker's payoff. Although this weak PD game has P = S rather than P > S, it still captures the essential social dilemma.

Initially, each player x is designated as a cooperator or defector with probability one half. At every time step, each player on the square lattice plays the PD game with all four nearest neighbors and obtains an accumulated payoff U. Our study is then conducted in two different situations: the PD game without punishment and the PD game with punishment.
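The following minimal Python sketch illustrates this setup under the stated payoffs (R = 1, T = b with 1 < b < 2, P = S = 0). It is not the authors' code; names such as accumulated_payoffs and the example value of b are illustrative assumptions.

import numpy as np

L = 100                     # lattice side length (as in the Results section)
b = 1.5                     # temptation to defect, 1 < b < 2 (example value)

rng = np.random.default_rng(0)
strategies = rng.integers(0, 2, size=(L, L))   # 1 = cooperator, 0 = defector, each with probability 1/2

def accumulated_payoffs(strategies, b):
    """Accumulated payoff U of every player against its four nearest
    neighbours on the periodic square lattice (weak PD: R = 1, T = b, S = P = 0)."""
    U = np.zeros_like(strategies, dtype=float)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        neigh = np.roll(strategies, shift, axis=axis)
        U += np.where((strategies == 1) & (neigh == 1), 1.0, 0.0)   # C meets C: R = 1
        U += np.where((strategies == 0) & (neigh == 1), b, 0.0)     # D meets C: T = b
        # C meets D (S = 0) and D meets D (P = 0) contribute nothing
    return U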

The PD game without punishment

Each player x chooses one of its neighbors y at random and revises its strategy according to the following Fermi rule [14]. Let U_x and U_y denote the accumulated payoffs of player x and player y obtained from the previous round, respectively. Player x adopts the neighbor's strategy with the probability W(s_x ← s_y),

W(s_x \leftarrow s_y) = \frac{1}{1 + \exp\left[(U_x - U_y)/K\right]}   (1)

where K represents the amplitude of noise: K → 0 corresponds to deterministic imitation, while K → ∞ indicates random imitation. In our study we do not examine the effects of K, so we set it to the constant value K = 0.1. Each simulation is started from a random initial state and repeated many times.
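A sketch of this imitation step is given below. Whether updating is synchronous or asynchronous is not specified above, so a synchronous sweep is assumed here; the function name is illustrative.

NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def fermi_sweep(strategies, U, K=0.1, rng=None):
    """One synchronous sweep of the Fermi rule in Eq. (1): every player x
    picks one random neighbour y and adopts s_y with probability
    W = 1 / (1 + exp((U_x - U_y) / K))."""
    rng = np.random.default_rng() if rng is None else rng
    L = strategies.shape[0]
    new = strategies.copy()
    for i in range(L):
        for j in range(L):
            di, dj = NEIGHBOURS[rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L          # periodic boundaries
            w = 1.0 / (1.0 + np.exp((U[i, j] - U[ni, nj]) / K))
            if rng.random() < w:
                new[i, j] = strategies[ni, nj]
    return new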

The PD game with punishment

At every time step, after obtaining accumulated payoffs by playing games with all neighbors, players are sorted by payoff in descending order. As Table 1 shows, players whose payoff ranks in the top 25% pay 10% of their payoff as individual tax, and players whose payoff ranks between 25% and 50% pay 5% of their payoff. In accordance with reality, no tax is charged from players whose payoff ranks in the bottom 50%. In this way, a certain quantity of public benefit fund is collected, which is used to cover the cost of punishment. At each round we set a punishment intensity p, where 0 < p < 1; the fine each punished defector suffers is p×b, where b denotes the temptation, so the higher b is, the larger the penalty. We also set a punishment range q, where 0 < q < 1: the number of defectors to be punished is d×q, where d denotes the total number of defectors. As mentioned above, the intensity p is inversely related to the punishment range q. Here the simplest linear relationship is adopted: q = 1−p. The penalty with the corresponding intensity is imposed over the given range at every time step, after which players' payoffs are updated. Next, each player x chooses one of its neighbors y at random and revises its strategy according to the Fermi rule in Eq. (1), and a new round of the game begins. A sketch of this tax-and-punish step follows Table 1.

Table 1. Public benefit fund collected according to the rank of payoffs.

Payoff ranking       0–25%     25%–50%    50%–100%
Corresponding tax    10%×U     5%×U       0
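The sketch below illustrates this step. The tax brackets follow Table 1; how the punished defectors are selected and whether fines stop once the fund is exhausted are not fully specified above, so random selection and a budget check are assumptions of this illustration.

def tax_and_punish(strategies, U, p, b, rng=None):
    """Collect the public benefit fund by payoff rank (Table 1) and fine a
    fraction q = 1 - p of the defectors by p*b each, as far as the fund allows."""
    rng = np.random.default_rng() if rng is None else rng
    flat_U = U.ravel().copy()
    n = flat_U.size
    order = np.argsort(-flat_U)                 # players ranked by payoff, descending
    tax_rate = np.zeros(n)
    tax_rate[order[: n // 4]] = 0.10            # top 25%: 10% of payoff
    tax_rate[order[n // 4 : n // 2]] = 0.05     # 25%-50%: 5% of payoff
    fund = float(np.sum(tax_rate * flat_U))     # collected public benefit fund
    flat_U -= tax_rate * flat_U                 # payoffs after taxation

    defectors = np.flatnonzero(strategies.ravel() == 0)
    n_punished = int((1.0 - p) * defectors.size)                      # punishment range q = 1 - p
    punished = rng.choice(defectors, size=n_punished, replace=False)  # assumption: punished defectors chosen at random
    fine = p * b                                                      # fine per punished defector
    for idx in punished:
        if fund < fine:                                               # assumption: stop when the fund runs out
            break
        flat_U[idx] -= fine
        fund -= fine
    return flat_U.reshape(U.shape)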

Results and Analysis

The game is played on a square lattice of size L = 100. The impact of punishment on the outcome of the game can be fully understood only if the same experiments are also carried out without punishment. Therefore we first conduct the experiments in the absence of punishment to obtain a baseline scenario, in particular to estimate the cooperator density ρ_C at different values of b. We then investigate how the temptation to defect b and the punishment intensity p affect the evolution of cooperation. The simulation results for the fraction of cooperators ρ_C with four values of p are shown in Fig. 1. To investigate the effectiveness of punishment under different intensities p, we focus on p = 0.2, p = 0.5 and p = 0.8, which represent SLP, suitable intensity with suitable range (SUP), and SEP, respectively.
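As an illustration only (not the authors' code), the sketches above can be combined into a driver that records ρ_C over time; the number of steps and the seed are arbitrary choices.

def run(L=100, b=1.5, p=0.0, steps=1000, K=0.1, seed=0):
    """Return the time series of the cooperator density rho_C for one run;
    p = 0 reproduces the baseline without punishment."""
    rng = np.random.default_rng(seed)
    strategies = rng.integers(0, 2, size=(L, L))
    rho_C = []
    for _ in range(steps):
        U = accumulated_payoffs(strategies, b)
        if p > 0.0:
            U = tax_and_punish(strategies, U, p, b, rng)
        strategies = fermi_sweep(strategies, U, K, rng)
        rho_C.append(strategies.mean())
    return rho_C

baseline = run(b=1.5, p=0.0)     # no punishment
slp      = run(b=1.5, p=0.2)     # slight intensity, wide range
sup      = run(b=1.5, p=0.5)     # moderate intensity and range
sep      = run(b=1.5, p=0.8)     # severe intensity, narrow range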

Figure 1. Temporal evolution of the cooperator density ρ_C towards its stationary state for different values of b and different punishment intensities p.

SUP leads to the most effective outcome and SLP outperforms SEP. As b increases, the punishment becomes more efficient. Employed parameter value: L = 100.

Figure 1 shows that for nonzero values of p the fraction of cooperators ρ_C rises significantly compared with p = 0 (no punishment), which indicates that the punishment measure has a positive impact on the emergence of cooperation. However, ρ_C differs greatly for different values of p. When p = 0.5, that is, when both the punishment intensity and the punishment range are balanced, ρ_C always stays at a high level despite the increase of b. More interestingly, for p = 0.2 and p = 0.8, ρ_C in the former condition is always higher than in the latter, even as b keeps increasing. The results presented thus far indicate that punishment funded by the public benefit fund promotes cooperation. The highest ρ_C emerges when adopting SUP. However, in real-world applications it is difficult to realize a moderate p. Under such circumstances SLP is a good choice; in other words, SLP is more suitable for promoting cooperation than SEP.

Figure 2 provides a quantitative assessment of different values of p under different levels of the temptation to defect b. Obviously, ρ_C is close to 0 when p = 0 and close to 1 when p = 0.5. We mainly focus on p = 0.2 and p = 0.8, because these two situations are closer to reality and easier to implement. It is not difficult to find that for p = 0.2 and p = 0.8, ρ_C varies greatly with the value of the temptation b. Namely, the higher b is, the more efficiently the punishment works, because the penalties increase with the value of b, which produces a stronger effect in promoting cooperation.

Figure 2. Temporal evolution of the cooperator density ρ_C towards its stationary state for different values of b and different punishment intensities p.

SUP leads to the most effective outcome for all values of b. According to Fig. 2(b) and Fig. 2(d), ρ_C with SLP is higher than ρ_C with SEP. As b increases, the punishment becomes more efficient. Employed parameter value: L = 100.

Previous results have shown that the maximal ρ_C is reached at p = 0.5. In order to figure out whether slight intensity is still more efficient than severe intensity in the neighborhood of a moderate p, we investigate ρ_C for four typical values of the temptation (b = 1.1, b = 1.3, b = 1.5, b = 1.7). As Fig. 3 shows, punishment with p = 0.4 is most efficient, which indicates that SLP is indeed better than SEP. To analyze the relationship between ρ_C, b and p, we plot the stationary value of ρ_C in dependence on b and p in Fig. 4. The area occupied by red increases dramatically with the rise of b, which indicates that ρ_C becomes higher. Moreover, for low values of p the red area is wider than for high values of p. This is due to the fact that, as b rises, the penalties also increase, so defectors pay larger fines. All these findings confirm the conclusions above: SLP is more efficient than SEP, and punishment works more effectively as b increases.
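Using the driver sketched earlier, a map like the one in Fig. 4 can be approximated by averaging the stationary ρ_C over a grid of (b, p) values; the grid resolution and the averaging window below are arbitrary choices of this sketch, not the authors' settings.

b_values = np.linspace(1.0, 2.0, 21)
p_values = np.linspace(0.0, 1.0, 21)
rho_map = np.zeros((p_values.size, b_values.size))
for i, p_val in enumerate(p_values):
    for j, b_val in enumerate(b_values):
        series = run(L=100, b=b_val, p=p_val, steps=500)
        rho_map[i, j] = np.mean(series[-100:])   # average rho_C over the final 100 steps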

Figure 3. Temporal evolution of the cooperator density ρ_C towards its stationary state for various values of the punishment intensity p at different values of the temptation b.

ρ_C with low p is always higher than ρ_C with higher p. Employed parameter value: L = 100.

Figure 4. Fraction of cooperators ρ_C in dependence on b and p.

When p is moderate, ρ_C stays at a high level. Moreover, the average fraction of cooperators ρ_C grows with increasing b. When p is too high or too low, cooperation gradually goes extinct. Employed parameter value: L = 100.

Moreover, we also investigate the average payoff in different situations. As shown in Fig. 5(a), when applying SLP the average payoff decreases initially and then rises slowly before reaching the steady state. The initial drop occurs because players are charged for the public benefit fund, which lowers the average payoff; however, as ρ_C increases, the average payoff rises dramatically. When applying SEP, as shown in Fig. 5(b), the average payoff drops to a certain level and then stays steady. It is interesting that the average payoff goes up when b is set to a high value, which confirms that punishment is more effective under high temptation. Meanwhile, the average payoff with SLP (Fig. 5(a)) is obviously higher than that with SEP, because more defectors are punished and, as a result, cooperation is promoted. When ρ_C stays at a high level, most cooperators receive high payoffs, which raises the overall average payoff. Under SEP, in contrast, only a few defectors are punished, so players tend to defect for higher payoffs and ρ_C is relatively lower; only a few defectors get high payoffs while most cooperators earn relatively little, which is why the average payoff is lower than with SLP. It can be concluded that, judged by the average payoff, SLP is better than SEP.

Figure 5. The average payoff for different values of b and p.

Owing to the application of punishment, the average payoff rises significantly with the increase of b. Furthermore, the average payoff with SLP is always higher than that with SEP. Employed parameter value: L = 100.

More detailed studies of the payoffs of cooperators and defectors are presented in Fig. 6. Overall, the payoff of cooperators is higher than that of defectors, because cooperators form cooperative clusters and obtain relatively high payoffs, while defectors can only rely on the profit obtained from cooperative neighbors; their payoff becomes zero when they meet other defectors. This is exactly why the payoff of cooperators is higher than that of defectors. Furthermore, comparing Fig. 6(a) with Fig. 6(c) and Fig. 6(b) with Fig. 6(d), the payoff with SLP is higher than the payoff with SEP for cooperators and defectors alike. Consequently, judging from the payoffs of cooperators and defectors, SLP is still more effective than SEP.

Figure 6. Payoffs of cooperators and defectors in dependence on b and p.

Cooperators are represented by blue, defectors by red. Owing to the application of punishment and the public benefit fund, the average payoff of cooperators is higher than that of defectors. What is more, for the same value of b, the payoff with SLP is correspondingly higher than that with SEP. Meanwhile, with the increase of b, the payoff also rises significantly. Employed parameter value: L = 100.

In order to test the robustness of these observations against a change of interaction topology, the same experiment is conducted on a small-world network. The mean degree is set to four so as to compare with the square lattice. The simulation results largely accord with the conclusions of this paper. The fraction of cooperators ρ_C rises significantly when different values of p are applied, compared with p = 0 (no punishment), so the present mechanism is also able to motivate cooperation in the small-world network. The highest ρ_C still emerges when adopting SUP. Nevertheless, for higher values of the temptation b, ρ_C differs from that on the square lattice: regardless of the value of p, cooperation almost goes extinct, which means that in the small-world network players prefer to defect for a higher payoff even at the risk of being punished.
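A sketch of the corresponding network setup is given below, assuming a Watts-Strogatz graph with mean degree four; the rewiring probability (0.1 here) and the helper name network_payoffs are assumptions of this illustration, not details given in the text.

import networkx as nx

def network_payoffs(G, strategies, b):
    """Accumulated weak-PD payoff of every node against all of its neighbours
    (R = 1, T = b, S = P = 0); strategies maps node -> 1 (C) or 0 (D)."""
    U = {}
    for x in G.nodes():
        u = 0.0
        for y in G.neighbors(x):
            if strategies[x] == 1 and strategies[y] == 1:
                u += 1.0        # mutual cooperation: R = 1
            elif strategies[x] == 0 and strategies[y] == 1:
                u += b          # defecting against a cooperator: T = b
        U[x] = u
    return U

N = 100 * 100                                        # same number of players as the lattice
G = nx.watts_strogatz_graph(N, k=4, p=0.1, seed=0)   # mean degree four; rewiring probability assumed
rng = np.random.default_rng(0)
strategies = {x: int(rng.integers(0, 2)) for x in G.nodes()}
U = network_payoffs(G, strategies, b=1.5)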

Conclusions

The evolutionary emergence of cooperation in social dilemmas has long been an important topic, and punishment is a commonly used tool to promote cooperation. In order to provide a detailed analysis of this phenomenon, the classical prisoner's dilemma game is commonly used as a basic model. In this paper, imitating a taxation system, a public benefit fund is collected to cover the cost of punishment. Moreover, we take both the punishment intensity and the punishment range into consideration. On careful observation we find that, most of the time, punishment intensity is inversely related to punishment range, which means there are mainly two measures for punishment: slight intensity of punishment (SLP) and severe intensity of punishment (SEP). As expected, the results show that this mechanism is effective, both in terms of the cooperation level and in terms of the average payoff. Further study also reveals that, if the most suitable intensity cannot be determined, slight intensity of punishment rather than severe intensity of punishment should be adopted. We also conduct the same experiment on a small-world network and the results follow a similar trend, showing that the present mechanism is robust against changes of interaction topology. A specific point that merits further research is how to model the relation between the punishment intensity p and the punishment range q. In this paper, the simplest linear formula q = 1−p is adopted to represent the inverse relation; whether there exists a better model for the restrictive correlation between the two parameters needs further research. We hope that our findings may provide a reference for establishing punishment systems in the real world.

Data Availability

The authors confirm that all data underlying the findings are fully available without restriction. All relevant data are within the paper. All initial data are generated randomly by computer, so the final data differ from run to run; the same tendency and conclusions can be reached by applying the method described here.

Funding Statement

This study was supported by the Fundamental Research Funds for the Central Universities, Grant Nos. XDJK2013B029, XDJK2014C082. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Vukov J, Szolnoki A, Szabó G (2013) Diverging fluctuations in a spatial five-species cyclic dominance game. Physical Review E 88: 022123. [DOI] [PubMed] [Google Scholar]
  • 2. Nowak MA (2006) Five rules for the evolution of cooperation. Science 314: 1560–1563. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Liu Y, Zhang L, Chen X, Ren L, Wang L (2013) Cautious strategy update promotes cooperation in spatial prisoners dilemma game. Physica A: Statistical Mechanics and its Applications 392: 3640–3647. [Google Scholar]
  • 4. Perc M, Szolnoki A (2010) Coevolutionary games - A mini review. BioSystems 99: 109–125. [DOI] [PubMed] [Google Scholar]
  • 5. Lee S, Holme P, Wu ZX (2011) Emergent hierarchical structures in multiadaptive games. Physical Review Letters 106: 028702. [DOI] [PubMed] [Google Scholar]
  • 6. Stewart AJ, Plotkin JB (2013) From extortion to generosity, evolution in the iterated prisoners dilemma. Proceedings of the National Academy of Sciences 110: 15348–15353. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Szolnoki A, Perc M, Szabó G (2012) Accuracy in strategy imitations promotes the evolution of fairness in the spatial ultimatum game. EPL (Europhysics Letters) 100: 28005. [Google Scholar]
  • 8. Rong Z, Yang HX, Wang WX (2010) Feedback reciprocity mechanism promotes the cooperation of highly clustered scale-free networks. Physical Review E 82: 047101. [DOI] [PubMed] [Google Scholar]
  • 9. Szolnoki A, Perc M, Szabó G (2012) Defense mechanisms of empathetic players in the spatial ultimatum game. Physical Review Letters 109: 078701. [DOI] [PubMed] [Google Scholar]
  • 10. Nowak MA, May RM (1992) Evolutionary games and spatial chaos. Nature 359: 826–829. [Google Scholar]
  • 11. Chen X, Fu F, Wang L (2009) Social tolerance allows cooperation to prevail in an adaptive environment. Physical Review E 80: 051104. [DOI] [PubMed] [Google Scholar]
  • 12. Hauert C, Doebeli M (2004) Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature 428: 643–646. [DOI] [PubMed] [Google Scholar]
  • 13. Szabo G, Szolnoki A, Vukov J (2009) Selection of dynamical rules in spatial prisoner's dilemma games. EPL (Europhysics Letters) 87: 18007. [Google Scholar]
  • 14. Szabó G, Tőke C (1998) Evolutionary prisoner's dilemma game on a square lattice. Physical Review E 58: 69–73. [Google Scholar]
  • 15. Poncela J, Gómez-Gardeñes J, Traulsen A, Moreno Y (2009) Evolutionary game dynamics in a growing structured population. New Journal of Physics 11: 083031. [Google Scholar]
  • 16. Gracia-Lázaro C, Floría LM, Gómez-Gardeñes J, Moreno Y (2013) Cooperation in changing environments: Irreversibility in the transition to cooperation in complex networks. Chaos, Solitons & Fractals 56: 188–193. [Google Scholar]
  • 17. Perc M, Wang Z (2010) Heterogeneous aspirations promote cooperation in the prisoner's dilemma game. PLoS ONE: e15117. [DOI] [PMC free article] [PubMed]
  • 18. Gómez-Gardeñes J, Poncela J, Mario Floría L, Moreno Y (2008) Natural selection of cooperation and degree hierarchy in heterogeneous populations. Journal of theoretical biology 253: 296–301. [DOI] [PubMed] [Google Scholar]
  • 19. Foster KR, Wenseleers T, Ratnieks FL (2006) Kin selection is the key to altruism. Trends in Ecology & Evolution 21: 57–60. [DOI] [PubMed] [Google Scholar]
  • 20. Rong Z, Wu ZX, Wang WX (2010) Emergence of cooperation through coevolving time scale in spatial prisoners dilemma. Physical Review E 82: 026101. [DOI] [PubMed] [Google Scholar]
  • 21. Wu ZX, Rong Z, Holme P (2009) Diversity of reproduction time scale promotes cooperation in spatial prisoners dilemma games. Physical Review E 80: 036106. [DOI] [PubMed] [Google Scholar]
  • 22. Szolnoki A, Perc M, Szabó G (2009) Topology-independent impact of noise on cooperation in spatial public goods games. Physical Review E 80: 056109. [DOI] [PubMed] [Google Scholar]
  • 23. Wang Z, Szolnoki A, Perc M (2012) If players are sparse social dilemmas are too: Importance of percolation for evolution of cooperation. Scientific Reports 2: 369. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Wang Z, Szolnoki A, Perc M (2013) Interdependent network reciprocity in evolutionary games. Sci Rep 3: 1183. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Wang Z, Perc M (2010) Aspiring to the fittest and promotion of cooperation in the prisoners dilemma game. Physical Review E 82: 021115. [DOI] [PubMed] [Google Scholar]
  • 26. Wang Z, Zhu X, Arenzon JJ (2012) Cooperation and age structure in spatial games. Physical Review E 85: 011149. [DOI] [PubMed] [Google Scholar]
  • 27. Wang Z, Kokubo S, Tanimoto J, Fukuda E, Shigaki K (2013) Insight into the so-called spatial reciprocity. Physical Review E 88: 042145. [DOI] [PubMed] [Google Scholar]
  • 28. Wu ZX, Yang HX (2014) Social dilemma alleviated by sharing the gains with immediate neighbors. Physical Review E 89: 012109. [DOI] [PubMed] [Google Scholar]
  • 29. Li Y, Lan X, Deng X, Sadiq R, Deng Y (2014) Comprehensive consideration of strategy updating promotes cooperation in the prisoners dilemma game. Physica A: Statistical Mechanics and its Applications 403: 284–292. [Google Scholar]
  • 30. Deng X, Hu Y, Deng Y, Mahadevan S (2014) Supplier selection using AHP methodology extended by D numbers. Expert Systems with Applications 41: 156–167. [Google Scholar]
  • 31. Deng X, Hu Y, Deng Y, Mahadevan S (2014) Environmental impact assessment based on D numbers. Expert Systems with Applications 41: 635–643. [Google Scholar]
  • 32. Zhang X, Deng Y, Chan FT, Xu P, Mahadevan S, et al. (2013) IFSJSP: A novel methodology for the job-shop scheduling problem based on intuitionistic fuzzy sets. International Journal of Production Research 51: 5100–5119. [Google Scholar]
  • 33. Kang B, Deng Y, Sadiq R, Mahadevan S (2012) Evidential cognitive maps. Knowledge-Based Systems 35: 77–86. [Google Scholar]
  • 34. Wang Z, Szolnoki A, Perc M (2012) Evolution of public cooperation on interdependent networks: The impact of biased utility functions. EPL (Europhysics Letters) 97: 48001. [Google Scholar]
  • 35. Wang Z, Szolnoki A, Perc M (2013) Optimal interdependence between networks for the evolution of cooperation. Scientific Reports 3: 2470. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Wang Z, Wang L, Perc M (2014) Degree mixing in multilayer networks impedes the evolution of cooperation. Physical Review E 89: 052813. [DOI] [PubMed] [Google Scholar]
  • 37. Wang Z, Szolnoki A, Perc M (2014) Self-organization towards optimally interdependent networks by means of coevolution. New Journal of Physics 16: 033041. [Google Scholar]
  • 38. Hardin G (1968) The tragedy of the commons. Science 162: 1243–1248. [PubMed] [Google Scholar]
  • 39. Szolnoki A, Perc M (2010) Reward and cooperation in the spatial public goods game. EPL (Europhysics Letters) 92: 38003. [Google Scholar]
  • 40. Wang Z, Szolnoki A, Perc M (2014) Rewarding evolutionary fitness with links between populations promotes cooperation. Journal of Theoretical Biology 349: 50–56. [DOI] [PubMed] [Google Scholar]
  • 41. Wang Z, Xia CY, Meloni S, Zhou CS, Moreno Y (2013) Impact of social punishment on cooperative behavior in complex networks. Scientific Reports 3: 3055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42. Sigmund K (2007) Punish or perish? retaliation and collaboration among humans. Trends in Ecology & Evolution 22: 593–600. [DOI] [PubMed] [Google Scholar]
  • 43. Cremene M, Dumitrescu D, Cremene L (2014) A strategic interaction model of punishment favoring contagion of honest behavior. PLoS ONE 9: e87471. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44. Szolnoki A, Perc M (2013) Effectiveness of conditional punishment for the evolution of public cooperation. Journal of Theoretical Biology 325: 34–41. [DOI] [PubMed] [Google Scholar]
  • 45. Jin Q, Wang Z, Wang Z, Wang YL (2012) Strategy changing penalty promotes cooperation in spatial prisoners dilemma game. Chaos, Solitons & Fractals 45: 395–401. [Google Scholar]
  • 46. Fehr E, Gächter S (2002) Altruistic punishment in humans. Nature 415: 137–140. [DOI] [PubMed] [Google Scholar]
  • 47. Mussweiler T, Ockenfels A (2013) Similarity increases altruistic punishment in humans. Proceedings of the National Academy of Sciences 110: 19318–19323. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48. Jiang LL, Perc M, Szolnoki A (2013) If cooperation is likely punish mildly: insights from economic experiments based on the snowdrift game. PloS one 8: e64677. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49. Amor DR, Fort J (2011) Effects of punishment in a mobile population playing the prisoner's dilemma game. Physical Review E 84: 066115. [DOI] [PubMed] [Google Scholar]
  • 50. Boyd R, Gintis H, Bowles S (2010) Coordinated punishment of defectors sustains cooperation and can proliferate when rare. Science 328: 617–620. [DOI] [PubMed] [Google Scholar]
  • 51. Nowak MA, May RM (1993) The spatial dilemmas of evolution. International Journal of bifurcation and chaos 3: 35–78. [Google Scholar]
