Abstract
Many specific models have been proposed to study evolutionary game dynamics in structured populations, but most analytical results so far describe the competition of only two strategies. Here we derive a general result that holds for any number of strategies, for a large class of population structures under weak selection. We show that for the purpose of strategy selection any evolutionary process can be characterized by two key parameters that are coefficients in a linear inequality containing the payoff values. These structural coefficients, σ1 and σ2, depend on the particular process that is being studied, but not on the number of strategies, n, or the payoff matrix. For calculating these structural coefficients one has to investigate games with three strategies, but more are not needed. Therefore, n = 3 is the general case. Our main result has a geometric interpretation: Strategy selection is determined by the sum of two terms, the first one describing competition on the edges of the simplex and the second one in the center. Our formula includes all known weak selection criteria of evolutionary games as special cases. As a specific example we calculate games on sets and explore the synergistic interaction between direct reciprocity and spatial selection. We show that for certain parameter values both repetition and space are needed to promote evolution of cooperation.
Evolutionary games arise whenever the fitness of individuals is not constant, but depends on the relative abundance of strategies in the population (1–7). Evolutionary game theory is a general theoretical framework that can be used to study many biological problems including host–parasite interactions, ecosystems, animal behavior, social evolution, and human language (8–18). The traditional approach of evolutionary game theory uses deterministic dynamics describing infinitely large, well-mixed populations. More recently the framework was expanded to deal with stochastic dynamics, finite population size, and structured populations (19–32).
Here we consider a mutation–selection process acting in a population of finite size. The population structure determines who interacts with whom to accumulate payoff and who competes with whom for reproduction. Individuals adopt one of n strategies. The payoff for an interaction between any two strategies is given by the n × n payoff matrix A = [aij]. The rate of reproduction is proportional to payoff: Individuals that accumulate higher payoff are more likely to reproduce. Reproduction is subject to symmetric mutation: With probability 1 − u the offspring inherits the strategy of the parent, but with probability u a random strategy is chosen. Our process leads to a stationary distribution characterizing the mutation–selection equilibrium. Important questions are the following: What is the average frequency of a strategy in the stationary distribution? Which strategies are more abundant than others?
To make progress, we consider the limit of weak selection. One way to obtain this limit is as follows: The rate of reproduction of each individual is proportional to 1 + w × payoff, where w is a constant that measures the intensity of selection; the limit of weak selection is then given by w → 0. Weak selection is not an unnatural situation; it can arise in different ways: (i) payoff differences are small, (ii) strategies are similar, or (iii) individuals are confused about payoffs when updating their strategies. In such situations, the particular game makes only a small contribution to the overall reproductive success of an individual.
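The analytical results below do not rely on simulation, but the process just described is easy to simulate in the well-mixed special case. The following minimal Monte Carlo sketch is our own illustration; the function name, parameter choices, and the particular birth–death update are assumptions, not the paper's specification. It estimates the average strategy frequencies in the mutation–selection stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_frequencies(A, N, u, w, steps=200_000):
    """Monte Carlo estimate of average strategy frequencies under a
    frequency-dependent Moran-type process with mutation (well-mixed case).

    A : n x n payoff matrix; N : population size; u : mutation probability;
    w : selection intensity (fitness = 1 + w * average payoff).
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    pop = rng.integers(n, size=N)          # strategy of each individual
    counts = np.zeros(n)
    for _ in range(steps):
        freq = np.bincount(pop, minlength=n)
        # average payoff of each individual against all others (self excluded)
        payoff = (A[pop] @ freq - A[pop, pop]) / (N - 1)
        fitness = 1.0 + w * payoff
        parent = rng.choice(N, p=fitness / fitness.sum())  # birth proportional to fitness
        dead = rng.integers(N)                             # uniform death
        if rng.random() < u:
            pop[dead] = rng.integers(n)                    # mutation: random strategy
        else:
            pop[dead] = pop[parent]                        # inherit parent's strategy
        counts += np.bincount(pop, minlength=n) / N
    return counts / steps

# Example: with weak selection all frequencies stay close to 1/n;
# the criterion derived below predicts which ones lie (slightly) above it.
A = [[1.0, 0.2], [0.5, 0.8]]
print(stationary_frequencies(A, N=50, u=0.02, w=0.01, steps=50_000))
```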
For weak selection, all strategies have roughly the same average frequency, 1/n, in the stationary distribution. A strategy is favored by selection if its average frequency is >1/n; otherwise it is opposed by selection. Our main result is the following: Given some mild assumptions (specified in SI Text), strategy k is favored by selection if
$$\sigma_1 \left(a_{kk} - \bar a_{**}\right) + \bar a_{k*} - \bar a_{*k} + \sigma_2 \left(\bar a_{k*} - \bar a\right) > 0. \qquad [1]$$

Here $\bar a_{**} = \frac{1}{n}\sum_i a_{ii}$ is the average payoff when both individuals use the same strategy, $\bar a_{k*} = \frac{1}{n}\sum_i a_{ki}$ is the average payoff of strategy k, $\bar a_{*k} = \frac{1}{n}\sum_i a_{ik}$ is the average payoff when playing against strategy k, and $\bar a = \frac{1}{n^2}\sum_{i,j} a_{ij}$ is the average payoff in the population. The parameters σ1 and σ2 are structural coefficients that need to be calculated for the specific evolutionary process that is investigated. These parameters depend on the population structure, the update rule, and the mutation rate, but they do not depend on the number of strategies or on the entries of the payoff matrix.
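For readers who want to apply the criterion directly, here is a minimal Python sketch (ours, not part of the paper) that evaluates inequality [1] for an arbitrary payoff matrix once σ1 and σ2 are known; the function name and interface are illustrative.

```python
import numpy as np

def favored_by_selection(A, k, sigma1, sigma2):
    """Return True if strategy k satisfies inequality [1] under weak selection.

    A        -- n x n payoff matrix; A[i, j] is the payoff of strategy i against j
    k        -- index of the focal strategy
    sigma1,
    sigma2   -- structural coefficients of the evolutionary process
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    a_kk   = A[k, k]               # payoff of k against itself
    a_diag = np.trace(A) / n       # average payoff when both players use the same strategy
    a_krow = A[k, :].mean()        # average payoff of strategy k
    a_kcol = A[:, k].mean()        # average payoff obtained against strategy k
    a_bar  = A.mean()              # average payoff in the population
    lhs = sigma1 * (a_kk - a_diag) + (a_krow - a_kcol) + sigma2 * (a_krow - a_bar)
    return lhs > 0

# Example: a 3-strategy game in a large well-mixed population with low mutation
# (sigma1 = 1, sigma2 -> 0); the payoff values below are arbitrary illustrations.
A = [[3, 0, 5],
     [5, 1, 1],
     [0, 4, 2]]
print([favored_by_selection(A, k, sigma1=1.0, sigma2=0.0) for k in range(3)])
```

Because the condition is linear in the payoffs, the same two coefficients can be reused for any number of strategies and any payoff matrix, as stated above.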
How can we interpret this result? Let xi denote the frequency of strategy i. The configuration of the population (just in terms of frequencies of strategies) is given by a point in the simplex Sn, which is defined by $\sum_{i=1}^{n} x_i = 1$ with $x_i \ge 0$. The vertices of the simplex correspond to population states where only one strategy is present. The edges of the simplex correspond to states where two strategies are present. In the interior of the simplex all strategies are present. Inequality [1] is the sum of two terms, both of which are linear in the payoff values. The first term, $\sigma_1 (a_{kk} - \bar a_{**}) + \bar a_{k*} - \bar a_{*k} = \frac{1}{n}\sum_i \left(\sigma_1 a_{kk} + a_{ki} - a_{ik} - \sigma_1 a_{ii}\right)$, describes competition on the edges of the simplex that include strategy k (Fig. 1A). In particular, it is an average over all pairwise comparisons between strategy k and each other strategy, weighted by the structural coefficient, σ1. The second term, $\sigma_2 (\bar a_{k*} - \bar a)$, evaluates the competition between strategy k and all other strategies in the center of the simplex, where all strategies have the same frequency, 1/n (Fig. 1B).
Fig. 1.
Our main result has a simple geometric interpretation, which is illustrated here for the case of n = 3 strategies. (A) The first term of inequality 1 describes competition on the edges of the simplex. (B) The second term of inequality 1 describes competition in the center of the simplex. In general, the selective criterion for strategy 1 is the sum of the two terms.
Therefore, the surprising implication of our main result (Eq. 1) is that strategy selection (in a mutation–selection process in a structured population) is simply the sum of two competition terms, one that is evaluated on the edges of the simplex and the other one in the center of the simplex. The simplicity of this result is surprising because an evolutionary process in a structured population has a very large number of possible states; to describe a particular state it is not enough to list the frequencies of strategies but one also has to specify the population structure.
Further intuition for our main result is provided by the concept of risk dominance. The classical notion of risk dominance for a game with two strategies in a well-mixed population is as follows: Strategy i is risk dominant over strategy j if aii + aij > aji + ajj. If i and j are engaged in a coordination game, given by aii > aji and ajj > aij, then the risk-dominant strategy has the bigger basin of attraction. In a structured population the risk-dominance condition is modified to σaii + aij > aji + σajj, where σ is the structural coefficient (31). Therefore, the first term in inequality [1] represents the average over all pairwise risk-dominance comparisons between strategy k and each other strategy (taking into account population structure). The second term in inequality [1] measures the risk dominance of strategy k when simultaneously compared with all other strategies in a well-mixed population; it is the generalization of the concept of risk dominance to multiple strategies, $\bar a_{k*} > \bar a$.
In SI Text we show that the structural coefficients, σ1 and σ2, do not depend on the number of strategies. To calculate σ1 and σ2 for any particular evolutionary process, we need to consider games with n = 3 strategies. More than three strategies are not needed. Therefore, n = 3 is the general case. An important practical implication of our result is the following: If we want to calculate the competition of multiple strategies in a structured population for weak selection but any mutation rate, then all we have to do is to calculate two parameters, σ1 and σ2. This calculation can be done for a very simple payoff matrix and n = 3 strategies. Once σ1 and σ2 are known, they can be applied to any payoff matrix and any number of strategies.
For n = 2 strategies, inequality [1] leads to (a11 − a22)(2σ1 + σ2) + (a12 − a21)(2 + σ2) > 0. If 2 + σ2 ≠ 0, we obtain the well-known condition σa11 + a12 > a21 + σa22 with σ = (2σ1 + σ2)/(2 + σ2). Many σ-values have been calculated characterizing evolutionary games with two strategies in structured populations (31).
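This reduction can be checked by direct substitution. The following symbolic verification (our own, using sympy) confirms that for n = 2 and k = 1 the left-hand side of inequality [1] equals the expression above up to a positive factor of 4.

```python
import sympy as sp

a11, a12, a21, a22, s1, s2 = sp.symbols('a11 a12 a21 a22 sigma1 sigma2')
A = sp.Matrix([[a11, a12], [a21, a22]])
n = 2

# Left-hand side of inequality [1] for strategy k = 1 (index 0)
a_kk   = A[0, 0]
a_diag = sum(A[i, i] for i in range(n)) / n
a_krow = sum(A[0, j] for j in range(n)) / n
a_kcol = sum(A[i, 0] for i in range(n)) / n
a_bar  = sum(A[i, j] for i in range(n) for j in range(n)) / n**2
lhs = s1 * (a_kk - a_diag) + (a_krow - a_kcol) + s2 * (a_krow - a_bar)

# Expected two-strategy form; it differs from lhs only by a positive factor of 4
expected = (a11 - a22) * (2 * s1 + s2) + (a12 - a21) * (2 + s2)
assert sp.simplify(4 * lhs - expected) == 0
```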
For a large, well-mixed population we know that σ1 = 1 and σ2 = μ, where μ = Nu is the product of population size and mutation rate (30). Therefore, if the mutation rate is low, μ → 0, then the evolutionary success of a strategy is determined by average pairwise risk dominance, $\frac{1}{n}\sum_i \left(a_{kk} + a_{ki} - a_{ik} - a_{ii}\right) > 0$. If the mutation rate is high, μ → ∞, then the evolutionary success depends on risk dominance, $\bar a_{k*} > \bar a$.
For any population structure, we can show that low mutation, μ → 0, implies σ2 → 0. Therefore, in the limit of low mutation, the condition for strategy k to be selected becomes $\frac{1}{n}\sum_i \left(\sigma_0 a_{kk} + a_{ki} - a_{ik} - \sigma_0 a_{ii}\right) > 0$, where σ0 is the low mutation limit of the structure coefficient σ = (2σ1 + σ2)/(2 + σ2). Hence, for low mutation it suffices to study two-strategy games, and all known σ results (31) carry over to the multiple-strategy case.
In the limit of high mutation, μ → ∞, we conjecture (but cannot prove) that, for a large class of processes, σ2 becomes much larger than both σ1 and 1. In that case the selection condition is simply risk dominance, $\bar a_{k*} > \bar a$, which is also the high mutation limit for a well-mixed population. Thus, if the mutation rate is large enough, then the effect of population structure on strategy selection is destroyed.
In SI Text we give a computational formula for how to calculate σ1 and σ2 for any process with global updating (which means all individuals compete globally for reproduction).
Let us now study a specific evolutionary process, where the individuals of a population of size N are distributed over M sets (32). These sets can be geographic islands, social institutions, or tags (32–35). At any one time each individual belongs to one set and adopts one of n strategies. Individuals interact with others in the same set and thereby obtain payoff. Individuals reproduce proportional to payoff. Offspring inherit their parent's strategy, subject to a strategy mutation rate, u, and their parent's set, subject to a set mutation rate, v. We use rescaled mutation rates μ = Nu and ν = Nv. In SI Text we calculate σ1 and σ2 for this process and provide analytic results for large population size, N, but for any number of sets, M, and for any mutation rates. For large μ we obtain σ1 ∼ M(1 + ν)/(M + ν) and σ2 ∼ μ. Note that a large strategy mutation rate, μ, destroys the effect of population structure, as expected.
In Fig. 2, we show the dependence of σ1 and σ2 on the strategy mutation rate, μ. We choose M = 100 sets and show different values of the set mutation rate, ν. For ν → 0 and ν → ∞ we obtain the same behavior, because both cases correspond to a well-mixed population. A particular strategy mutation rate, μ*, exists for which σ1 = σ2. For μ < μ* structural effects prevail over mutation, because σ1 > σ2. For μ > μ* mutation destroys the effect of population structure, because σ1 < σ2. For large M, the critical mutation rate is given by μ* ∼ 1 + ν.
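To see where the large-M estimate of μ* comes from, one can equate the two asymptotic forms quoted above; this is our own back-of-the-envelope step and assumes those forms remain adequate approximations near the crossing point:

$$\sigma_1 \approx \frac{M(1+\nu)}{M+\nu}, \qquad \sigma_2 \approx \mu \quad\Longrightarrow\quad \mu^{*} \approx \frac{M(1+\nu)}{M+\nu} \xrightarrow{\,M\to\infty\,} 1+\nu.$$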
Fig. 2.
The dependence of σ1 and σ2 on the strategy mutation rate, μ. We choose M = 100 sets and show different values of the set mutation rate: (A) ν = 0, (B) ν = 3, (C) ν = 10, (D) ν = 100, (E) ν = 1,000, and (F) ν = ∞. We observe that σ2 ∼ μ. For ν → 0 and ν → ∞ we obtain the same behavior, because both cases correspond to a well-mixed population. For a particular strategy mutation rate, μ*, we have σ1 = σ2. For μ < μ* structural effects prevail over mutation, because σ1 > σ2. For μ > μ* mutation destroys the effect of population structure, because σ1 < σ2.
We now use these results to study a particular game on sets. Our game has three strategies, always cooperate (AllC), always defect (AllD), and tit-for-tat (TFT), and is meant to describe the essential problem of the evolution of cooperation under direct reciprocity. We assume repeated interactions between any two players, subject to a certain continuation probability; the average number of rounds is m. In any one round, cooperation has a cost, c, and yields a benefit, b, for the other player, where b > c > 0. Defection has no cost and yields no benefit. The entries of the payoff matrix are average payoffs per round:
$$\begin{array}{c|ccc} & \text{AllC} & \text{AllD} & \text{TFT} \\ \hline \text{AllC} & b-c & -c & b-c \\ \text{AllD} & b & 0 & b/m \\ \text{TFT} & b-c & -c/m & b-c \end{array}$$
AllD is the only strict Nash equilibrium. If b − c ≥ b/m, then TFT is a Nash equilibrium, but not an evolutionarily stable strategy.
We are interested in calculating the condition for natural selection to oppose AllD, which means that its frequency is <1/3 in the stationary distribution. We observe that selection opposes AllD for small strategy mutation rates and intermediate set mutation rates (Fig. 3). For high strategy mutation rate and for low or high set mutation rate the structure behaves like a well-mixed population, which is detrimental to cooperation. There is an optimum set mutation rate that maximally supports evolution of cooperation (32).
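As an illustration (ours; the structural coefficients below are hypothetical placeholders standing in for the set-structured values derived in SI Text), the following sketch builds the repeated-game payoff matrix above with the Fig. 3 parameters and asks whether the left-hand side of inequality [1] is negative for AllD, i.e., whether selection opposes it.

```python
import numpy as np

def lhs_of_condition(A, k, sigma1, sigma2):
    """Left-hand side of inequality [1]; positive means strategy k is favored."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return (sigma1 * (A[k, k] - np.trace(A) / n)
            + A[k, :].mean() - A[:, k].mean()
            + sigma2 * (A[k, :].mean() - A.mean()))

def repeated_game_payoffs(b, c, m):
    """Average payoff per round; rows and columns ordered AllC, AllD, TFT."""
    return np.array([[b - c, -c,     b - c],
                     [b,      0.0,   b / m],
                     [b - c, -c / m, b - c]])

A = repeated_game_payoffs(b=2.0, c=1.0, m=7)   # parameter values of Fig. 3
ALLD = 1
sigma1, sigma2 = 2.5, 0.1                      # hypothetical placeholder coefficients
print("selection opposes AllD:", lhs_of_condition(A, ALLD, sigma1, sigma2) < 0)
```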
Fig. 3.
The effect of strategy and set mutations on the condition to select against AllD. Selection opposes AllD for small strategy mutation rates and intermediate set mutation rates. For high strategy mutation rate and for low and high set mutation rate the structure behaves like a well-mixed population. There is an optimum set mutation rate. Parameters: b = 2, c = 1, m = 7, and M = 8.
Next we study how the condition for selecting against AllD depends on repetition and structure (Fig. 4). We make the following observations. For b/c < 3, even if the game is infinitely repeated, m → ∞, we still need population structure to oppose AllD. In this parameter region repetition alone is not enough. For b/c < 1 + (ν + 3)/(ν (ν + 2)), even if there are infinitely many sets (M → ∞), we still need repetition to oppose AllD. Hence, for certain parameter choices both repetition and spatial structure must work together to promote evolution of cooperation (36, 37). This example demonstrates the need for synergistic interactions between various mechanisms for the evolution of cooperation (38). In particular it is of interest that unless the benefit-to-cost ratio is substantial, b/c > 3, repetition alone does not provide enough selection pressure to oppose AllD.
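The b/c > 3 threshold for repetition alone can be recovered (our own check, not spelled out in the main text) from the well-mixed, low-mutation limit of inequality [1] (σ1 = 1, σ2 = 0): applying the average pairwise risk-dominance condition to AllD in the infinitely repeated game (m → ∞) gives

$$\frac{1}{3}\Big[\underbrace{0 + b - (-c) - (b-c)}_{\text{vs. AllC}} + \underbrace{0}_{\text{vs. AllD}} + \underbrace{0 + 0 - 0 - (b-c)}_{\text{vs. TFT}}\Big] = \frac{3c-b}{3},$$

which is positive (AllD favored) whenever b < 3c; hence without population structure, repetition alone can select against AllD only if b/c > 3.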
Fig. 4.
The synergistic interaction of direct reciprocity and spatial selection. For certain parameter choices neither repetition nor structure alone can select against AllD. (A) c = 1, b = 3, μ = 0, and ν = 0.5. Either repetition or structure is sufficient. (B) c = 1, b = 2, μ = 0, and ν = 5. A minimum number of sets is needed. (C) c = 1, b = 3, μ = 0, and ν = 0.05. A minimum number of rounds is needed. (D) c = 1, b = 2, μ = 0, and ν = 0.5. Both a minimum number of rounds and a minimum number of sets are needed to select against AllD.
In summary, we have derived a simple, general condition that characterizes strategy selection when multiple strategies compete in a structured population under weak selection. The condition is linear in the payoff values and includes two structural coefficients, σ1 and σ2, which depend on the population structure, the update rule, and the mutation rates, but not on the number of strategies or on the entries of the payoff matrix. The condition is the sum of two terms: one describes competition on the edges of the simplex and the other describes competition in the center. Future research directions suggested by this result include (i) a classification of population structures and update rules based on the two structural parameters, (ii) numerical and analytic explorations of how the weak selection result carries over to stronger selection intensities in specific cases, and (iii) extending our theory from pairwise interactions to multiplayer games. Finally, our general result can be used to guide the exploration of many specific evolutionary processes.
Supplementary Material
Acknowledgments
C.E.T. and M.A.N. gratefully acknowledge support from the John Templeton Foundation, the National Science Foundation/National Institutes of Health joint program in mathematical biology (National Institutes of Health Grant R01GM078986), the Bill and Melinda Gates Foundation (Grand Challenges Grant 37874), and J. Epstein.
Footnotes
The authors declare no conflict of interest.
*This Direct Submission article had a prearranged editor.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1016008108/-/DCSupplemental.
References
- 1. Maynard Smith J. Evolution and the Theory of Games. Cambridge, UK: Cambridge Univ Press; 1982.
- 2. Hofbauer J, Sigmund K. The Theory of Evolution and Dynamical Systems. Cambridge, UK: Cambridge Univ Press; 1988.
- 3. Weibull JW. Evolutionary Game Theory. Cambridge, MA: MIT Press; 1995.
- 4. Samuelson L. Evolutionary Games and Equilibrium Selection. Cambridge, MA: MIT Press; 1997.
- 5. Cressman R. Evolutionary Dynamics and Extensive Form Games. Cambridge, MA: MIT Press; 2003.
- 6. Hofbauer J, Sigmund K. Evolutionary game dynamics. Bull Am Math Soc. 2003;40:479–519.
- 7. Nowak MA, Sigmund K. Evolutionary dynamics of biological games. Science. 2004;303:793–799. doi:10.1126/science.1093411.
- 8. Maynard Smith J, Price GR. The logic of animal conflict. Nature. 1973;246:15–18.
- 9. May RM, Leonard W. Nonlinear aspects of competition between three species. SIAM J Appl Math. 1975;29:243–252.
- 10. Nowak MA, May RM. Superinfection and the evolution of parasite virulence. Proc Biol Sci. 1994;255:81–89. doi:10.1098/rspb.1994.0012.
- 11. Skyrms B. Evolution of the Social Contract. Cambridge, UK: Cambridge Univ Press; 1996.
- 12. Doebeli M, Knowlton N. The evolution of interspecific mutualisms. Proc Natl Acad Sci USA. 1998;95:8676–8680. doi:10.1073/pnas.95.15.8676.
- 13. McNamara JM, Gasson CE, Houston AI. Incorporating rules for responding into evolutionary games. Nature. 1999;401:368–371. doi:10.1038/43869.
- 14. Turner PE, Chao L. Prisoner's dilemma in an RNA virus. Nature. 1999;398:441–443. doi:10.1038/18913.
- 15. Pfeiffer T, Schuster S, Bonhoeffer S. Cooperation and competition in the evolution of ATP-producing pathways. Science. 2001;292:504–507. doi:10.1126/science.1058079.
- 16. Hauert C, De Monte S, Hofbauer J, Sigmund K. Volunteering as Red Queen mechanism for cooperation in public goods games. Science. 2002;296:1129–1132. doi:10.1126/science.1070582.
- 17. Nowak MA, Komarova NL, Niyogi P. Computational and evolutionary aspects of language. Nature. 2002;417:611–617. doi:10.1038/nature00771.
- 18. Bshary R, Grutter AS, Willener AS, Leimar O. Pairs of cooperating cleaner fish provide better service quality than singletons. Nature. 2008;455:964–966. doi:10.1038/nature07184.
- 19. Nowak MA, May RM. Evolutionary games and spatial chaos. Nature. 1992;359:826–829.
- 20. Killingback T, Doebeli M. Spatial evolutionary game theory: Hawks and doves revisited. Proc Biol Sci. 1996;263:1135–1144.
- 21. Nowak MA, Sasaki A, Taylor C, Fudenberg D. Emergence of cooperation and evolutionary stability in finite populations. Nature. 2004;428:646–650. doi:10.1038/nature02414.
- 22. Hauert C, Doebeli M. Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature. 2004;428:643–646. doi:10.1038/nature02360.
- 23. Lieberman E, Hauert C, Nowak MA. Evolutionary dynamics on graphs. Nature. 2005;433:312–316. doi:10.1038/nature03204.
- 24. Ohtsuki H, Hauert C, Lieberman E, Nowak MA. A simple rule for the evolution of cooperation on graphs and social networks. Nature. 2006;441:502–505. doi:10.1038/nature04605.
- 25. Szabó G, Fath G. Evolutionary games on graphs. Phys Rep. 2007;446:97–216.
- 26. Taylor PD, Day T, Wild G. Evolution of cooperation in a finite homogeneous graph. Nature. 2007;447:469–472. doi:10.1038/nature05784.
- 27. Helbing D, Yu W. Migration as a mechanism to promote cooperation. Adv Complex Syst. 2008;11:641–652.
- 28. Santos FC, Santos MD, Pacheco JM. Social diversity promotes the emergence of cooperation in public goods games. Nature. 2008;454:213–216. doi:10.1038/nature06940.
- 29. Antal T, Ohtsuki H, Wakeley J, Taylor PD, Nowak MA. Evolutionary game dynamics in phenotype space. Proc Natl Acad Sci USA. 2009;106:8597–8600. doi:10.1073/pnas.0902528106.
- 30. Antal T, Traulsen A, Ohtsuki H, Tarnita CE, Nowak MA. Mutation-selection equilibrium in games with multiple strategies. J Theor Biol. 2009;258:614–622. doi:10.1016/j.jtbi.2009.02.010.
- 31. Tarnita CE, Ohtsuki H, Antal T, Fu F, Nowak MA. Strategy selection in structured populations. J Theor Biol. 2009;259:570–581. doi:10.1016/j.jtbi.2009.03.035.
- 32. Tarnita CE, Antal T, Ohtsuki H, Nowak MA. Evolutionary dynamics in set structured populations. Proc Natl Acad Sci USA. 2009;106:8601–8604. doi:10.1073/pnas.0903019106.
- 33. Riolo RL, Cohen MD, Axelrod R. Evolution of cooperation without reciprocity. Nature. 2001;414:441–443. doi:10.1038/35106555.
- 34. Traulsen A, Claussen JC. Similarity-based cooperation and spatial segregation. Phys Rev E Stat Nonlin Soft Matter Phys. 2004;70:046128. doi:10.1103/PhysRevE.70.046128.
- 35. Jansen VAA, van Baalen M. Altruism through beard chromodynamics. Nature. 2006;440:663–666. doi:10.1038/nature04387.
- 36. Ohtsuki H, Nowak MA. Direct reciprocity on graphs. J Theor Biol. 2007;247:462–470. doi:10.1016/j.jtbi.2007.03.018.
- 37. van Veelen M, Garcia J. Repeated games in structured populations: A recipe for cooperation. 2010, in press.
- 38. Nowak MA. Five rules for the evolution of cooperation. Science. 2006;314:1560–1563. doi:10.1126/science.1133755.