Abstract
The connected uniformly hyperstable sets of a finite game are shown to be precisely the essential components of Nash equilibria.
Keywords: game theory, equilibrium refinement, stability
The concept of equilibrium proposed by Nash (1, 2) is a cornerstone of game theory. He defined an equilibrium as a profile of players' strategies such that each is an optimal reply to the others' strategies. Most games have multiple equilibria, so his definition is not a complete theory of rational play. Hillas and Kohlberg (3) survey refinements that impose additional decision-theoretic criteria. Nash also showed that a game's equilibria are the fixed points of an associated map (i.e., a continuous function) from the space of strategies into itself, and in algebraic topology, too, refinements select fixed points with stronger properties.
Here, we establish for finite games an exact equivalence between a game-theoretic refinement and a topological refinement. The game-theoretic refinement is the uniform variant of Kohlberg and Mertens' (4) definition of a hyperstable component of equilibria of a game (this and other technical terms are defined below). The topological refinement is O'Neill's (5) definition of an essential component of fixed points of a map. Basically, Theorem 1 below shows that the space of perturbed games considered in the definition of hyperstability is as rich as the class of perturbed maps considered in the definition of essentiality.
Any map whose fixed points are the equilibria is called a Nash map for the game. We (6) prove for two-player games that, within the set of Nash maps that are continuous in payoffs as well as strategies, any two Nash maps are connected by a homotopy that preserves fixed points; our extension to N-player games has not been published. Similarly, DeMichelis and Germano (7) show that the topological index, and thus also the topological essentiality, of an equilibrium component is independent of the map used. Here, in Formulation and Appendix A we show that, in fact, the index depends only on the local degree of the projection map from the equilibrium graph to the space of games. Therefore, hereafter we say that an equilibrium component is essential if it is an essential component of the fixed points of some Nash map, and thus of all Nash maps. The restriction to components of equilibria is immaterial for extensive-form games with perfect recall and generic payoffs, since for such games all equilibria in a component induce the same probability distribution over outcomes [Kreps and Wilson (8); Govindan and Wilson (9)]. The following paragraphs review the definitions of the two refinements.
Essential Components of Fixed Points
Let the space of maps f: X → X from a topological space X into itself be endowed with the compact-open topology. Given such a map f, a component is a maximal connected set of its fixed points. A component K is topologically essential if for each neighborhood U of K there is a neighborhood V of f such that each map in V has a fixed point in U. In this article the focus is on the case that X is the space of profiles of players' mixed strategies and f is a Nash map of a game. (In Formulation the space X is denoted Σ; it is a compact convex subset of a Euclidean space with the l∞ norm.)
Uniformly Hyperstable Components of Equilibria
Hyperstability invokes two principles. “Hyper” refers to the axiom of invariance, which requires that a refinement should be immune to treating a mixed strategy as an additional pure strategy. This excludes presentation effects by ensuring that equivalent equilibria are selected in equivalent games. Stability requires that every nearby game has a nearby equilibrium. Here, a nearby game is one with players' payoffs in a neighborhood of those of the given game, represented as a point in Euclidean space with the l∞ norm (we use this norm throughout). Invariance and stability are applied as follows.
Equivalence of Games and Strategies. Two strategies of one player are equivalent if they yield every player the same expected payoff for each profile of others' strategies. A pure strategy is redundant if the player has another strategy that is equivalent. From a game G one obtains its reduction G* by deleting redundant pure strategies until none remain; the reduction is unique (apart from names of pure strategies). Two games are equivalent if their reductions are the same. If σ is a profile of players' strategies in G then its reduction σ* is the profile of equivalent strategies of G*. For each set C of strategy profiles for game G the corresponding set C′ for an equivalent game G′ consists of the profiles of equivalent strategies.
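For a concrete illustration of equivalence and reduction, the following sketch (a hypothetical bimatrix example, assuming numpy) gives player 1 a third pure strategy that duplicates the 50-50 mixture of his first two strategies; it is redundant because it yields every player the same expected payoff against every strategy of the opponent, so deleting it yields the reduction.

```python
import numpy as np

# Player 1's third pure strategy is the 50-50 mixture of his first two: its row in
# each player's payoff matrix is the average of the first two rows.  It is therefore
# redundant, and deleting it yields the reduction of the game.
A = np.array([[3.0, 0.0],
              [0.0, 3.0],
              [1.5, 1.5]])          # player 1's payoffs (rows: player 1's strategies)
B = np.array([[2.0, 1.0],
              [0.0, 4.0],
              [1.0, 2.5]])          # player 2's payoffs

redundant = np.array([0.0, 0.0, 1.0])     # the duplicate pure strategy
mixture = np.array([0.5, 0.5, 0.0])       # the equivalent mixed strategy

# Equivalence: both players get the same expected payoff against every strategy of
# player 2 (checking player 2's pure strategies suffices, by linearity).
for y in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    assert np.isclose(redundant @ A @ y, mixture @ A @ y)
    assert np.isclose(redundant @ B @ y, mixture @ B @ y)
```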
Hyperstability and Uniform Hyperstability. A closed set C of equilibria of game G is hyperstable if for every neighborhood U′ of the equivalent set C′ of equilibria for any equivalent game G′ there exists a neighborhood V′ of G′ such that every game in V′ has an equilibrium in U′. A stronger variant is: A closed set C of equilibria of G is uniformly hyperstable if for every neighborhood U of C there exists δ > 0 such that every δ-perturbation of every equivalent game G′ has an equilibrium equivalent to some strategy profile in U.
Formulation establishes notation, and then Proof of the Theorem proves the following.
Theorem 1. The connected uniformly hyperstable sets are the essential components of any Nash map of the game.
As mentioned above, essential for some Nash map implies essential for every Nash map of the game.¶
Theorem 1 is proved in two parts: an essential component is uniformly hyperstable, Theorem 2; a connected uniformly hyperstable set is an essential component, Theorem 3. An implication of Theorem 1 is that a component is uniformly hyperstable iff its topological index is nonzero. Independently, von Schemde (10) establishes this result for two-player outside-option games. Appendices A and B provide technical tools.
Formulation
We consider games with a finite set N of players, |N| ≥ 2. Each player n ∈ N has a finite set Sn of pure strategies. Interpret a pure strategy sn as a vertex of player n's simplex Σn = Δ(Sn) of mixed strategies. The sets of profiles of pure and mixed strategies are S = Πn Sn and Σ = Πn Σn. For player n, S–n = Πm≠n Sm and Σ–n = Πm≠n Σm denote the sets of profiles of others' pure and mixed strategies. Given N and S, each game G is described by its payoff function Ĝ: S → ℝ^N from profiles of pure strategies to players' payoffs. Thus a game is specified by a point in ℝ^{N·|S|}. Let Gn: Σ → ℝ^{Sn} be the extension of Ĝn from profiles of mixed strategies to player n's expected payoffs from his pure strategies; namely, Gns(σ) = Σs–n∈S–n Ĝn(s, s–n) Πm≠n σm(sm) for each s ∈ Sn, and player n's expected payoff from the profile σ itself is σ′nGn(σ). Note that Gn(σ) does not depend on σn, but σ′nGn(σ) does.
A profile σ ∈ Σ is an equilibrium of G if each player's strategy σn is an optimal reply to others' strategies; that is, [τn – σn]′Gn(σ) ≤ 0 for all τn ∈ Σn. Equilibria are characterized as fixed points of a map as follows [Gül, Pearce, and Stacchetti (11)]. Let rn: ℝ^{Sn} → Σn be the piecewise-affine map that retracts each point in ℝ^{Sn} to the point of Σn nearest in Euclidean distance; i.e., rn(zn) is the unique solution r ∈ Σn to the variational inequality [τn – r]′[zn – r] ≤ 0 for all τn ∈ Σn. Let 𝒵 = Πn ℝ^{Sn} and define r: 𝒵 → Σ via r(z)n = rn(zn) for each player n, and w: Σ → 𝒵 via wn(σ) = σn + Gn(σ). Then σ is an equilibrium iff σ = r(w(σ)). Hence the equilibria are the fixed points of the map Φ ≡ r ∘ w: Σ → Σ. An equilibrium component is a maximal connected set of equilibria and thus compact. Each component of fixed points of the permuted map F ≡ w ∘ r: 𝒵 → 𝒵 is homeomorphic to a corresponding component of the fixed points of Φ, and their indices agree [Dold (12)]. In particular, the index is the local degree of the displacement map f ≡ Id – F used below.
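As a concrete illustration of this fixed-point characterization, the sketch below (assuming numpy, and specializing purely for illustration to a two-player game given by payoff matrices A and B) applies Φ = r∘w once; the retraction rn is computed with the standard sorting algorithm for Euclidean projection onto a simplex. At the mixed equilibrium of matching pennies the profile is reproduced exactly.

```python
import numpy as np

def project_to_simplex(z):
    """Euclidean projection of a vector z onto the probability simplex (the map r_n)."""
    u = np.sort(z)[::-1]
    cumsum = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - cumsum) / np.arange(1, len(z) + 1) > 0)[0][-1]
    theta = (1.0 - cumsum[rho]) / (rho + 1.0)
    return np.maximum(z + theta, 0.0)

def gps_map(A, B, x, y):
    """One application of Phi = r(w(.)) for a two-player game with payoff matrices A, B."""
    gx = A @ y          # player 1's expected payoffs from his pure strategies
    gy = B.T @ x        # player 2's expected payoffs from his pure strategies
    return project_to_simplex(x + gx), project_to_simplex(y + gy)

# Matching pennies: its unique equilibrium, (1/2, 1/2) for both players, is a fixed point.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])
print(gps_map(A, B, x, y))      # (array([0.5, 0.5]), array([0.5, 0.5]))
```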
A restricted class of perturbations perturbs a player's payoffs from his pure strategies independently of others' behaviors. For each g ∈ 𝒵 define the perturbed game G ⊕ g by (G ⊕ g)n(σ) = Gn(σ) + gn, so that player n's expected payoff from σ in G ⊕ g is σ′n[Gn(σ) + gn]. Let ℰ = {(g, σ) ∈ 𝒵 × Σ | σ is an equilibrium of G ⊕ g} be the graph of equilibria over this class of perturbations. Define θ: ℰ → 𝒵 by θn(g, σ) = σn + Gn(σ) + gn, and let p1: ℰ → 𝒵 be the natural projection. Then θ is a homeomorphism; in particular, θ–1(z) = (f(z), r(z)). Consequently, f = p1 ∘ θ–1. Moreover, Appendix A shows that the map f has degree +1. There exists an orientation of ℰ such that the local degree of f is the same as the local degree of the projection map p1. Hence the local degree of f, and thus also the index of a component C of G, is the same as the degree of the projection map p1 on any sufficiently small neighborhood of (0, C) in the graph ℰ. Appendix A presents an alternative definition of the index that depends only on the best-reply correspondence, which is intrinsic to a game independently of the map characterizing equilibria as fixed points.
As described in the Introduction, a profile σ ∈ Σ for game G induces an equivalent profile σ* ∈ Σ* of G's reduction G*. Let An be the matrix whose columns are the pure strategies in Sn represented as mixed strategies in Σ*n. Then σ*n = Anσn and Gn(σ) = A′nG*n(σ*). A profile σ ∈ Σ is an equilibrium of G if and only if the equivalent profile σ* = (Anσn)n∈N is an equilibrium of G*.
Proof of the Theorem
We now prove the two parts of Theorem 1. Theorem 2 extends to the entire class of equivalent games the implication of nonzero index established by Ritzberger (13).
Theorem 2. An equilibrium component is uniformly hyperstable if it is essential.
Proof: Let C be an equilibrium component of game G that is an essential component of a Nash map. Then its index is nonzero [O'Neill (5), McLennan (14)], say d ≠ 0. As shown in Appendix A, the index is invariant to addition of redundant strategies, so we can assume that G is reduced. Let U be an open neighborhood of C in Σ. We show that there exists δ > 0 such that for each equivalent game G* and each game G̃* within δ of G* there exists an equilibrium of G̃* equivalent to some profile in U. If necessary by replacing U with a smaller neighborhood, we can assume that the only equilibria in the closure of U are in C. Because no strategy in its boundary ∂U is an equilibrium, 2ε > 0, where 2ε ≡ minσ∈∂U maxn maxs∈Sn [Gns(σ) – σ′nGn(σ)]. Fix δ ∈ (0, ε). Let G* be a game whose reduction is G, and let C* be the equilibrium component of G* whose reduction is C. Let ℰ* be the graph of the equilibrium correspondence over the space of games with the same set of strategies as in G*. Let Bδ(G*) be the open ball around G* with radius δ. Let U* ⊃ C* be the set of profiles of G* that reduce to profiles in U; note that a profile in ∂U* reduces to a profile in ∂U. Then V* ≡ (Bδ(G*) × U*) ∩ ℰ* is an open neighborhood of (G*, C*) in the graph. Suppose σ* ∈ ∂U* and let σ be the corresponding profile in ∂U. Then there exists a pure strategy s for some player n whose payoff Gns(σ) in G from s against σ is greater than the payoff σ′nGn(σ) from the reduction σn of σ*n by at least 2ε. For a game G̃* ∈ Bδ(G*), the payoff from s against σ* is therefore strictly greater than σ′nGn(σ) + ε, while the payoff from σ*n against σ* is strictly less than σ′nGn(σ) + ε. Thus, σ* cannot be an equilibrium of G̃*. Therefore, G̃* has no equilibrium in ∂U*. Consequently, the projection map P*: V* → Bδ(G*) is proper: the inverse image of every compact subset of Bδ(G*) under P* is compact. Appendix A shows that the indices of C and C* agree. Therefore, by Appendix A, the local degree of G* under P* is d. Because P* is a proper map, this implies that the local degree of each game G̃* ∈ Bδ(G*) is d [Dold (12)]. Therefore the sum of the indices of the equilibrium components of G̃* in U* is d. Since d ≠ 0, G̃* has an equilibrium in U*. Since G* could be any game whose reduction is G and every game G̃* in its neighborhood has an equilibrium in U*, C is uniformly hyperstable.
Thus those components with nonzero indices are uniformly hyperstable, and such components exist because the sum of the indices of all components is +1. Now we prove necessity.
Theorem 3. A connected uniformly hyperstable set is an essential component.
Proof: Let C be a closed connected set of equilibria of G and let K be the component containing it. Suppose that Ind(K) = 0 or C ≠ K; we show that C is not uniformly hyperstable. Fix a neighborhood U as in the corollary of Appendix A and let δ > 0. We construct an equivalent game G̃ and a perturbation G̃δ of G̃ such that ∥ G̃ – G̃δ∥ ≤ δ and the perturbed game G̃δ has no equilibrium equivalent to a strategy profile in U. The construction of G̃ is done in three steps (we are indebted to a reviewer for suggesting a simplification of Step 2).
The best-reply correspondence for game G is BR(σ) = Πn BRn(σ), where BRn(σ) = arg maxτn∈Σn τ′nGn(σ). More generally, for β ≥ 0 say that a strategy τn of player n is a β-reply against σ ∈ Σ if τ′nGn(σ) ≥ Gns(σ) – β, where s ∈ Sn is any optimal reply of player n against σ. A profile τ is a β-reply against σ if for each n the strategy τn is a β-reply for player n against σ.
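In computational terms, checking a β-reply requires only the vector of player n's expected payoffs to his pure strategies against σ. A minimal sketch (assuming numpy, with a hypothetical payoff vector):

```python
import numpy as np

def is_beta_reply(payoffs_to_pure, tau_n, beta):
    """True if the mixed strategy tau_n is a beta-reply, given the vector of player n's
    expected payoffs to each of his pure strategies against the profile sigma."""
    return tau_n @ payoffs_to_pure >= payoffs_to_pure.max() - beta

g_n = np.array([1.0, 0.8, 0.2])            # hypothetical payoff vector G_n(sigma)
print(is_beta_reply(g_n, np.array([0.5, 0.5, 0.0]), beta=0.25))   # True:  0.9 >= 0.75
print(is_beta_reply(g_n, np.array([0.0, 0.0, 1.0]), beta=0.25))   # False: 0.2 <  0.75
```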
Step 1: First, we show that without loss of generality we can assume that G satisfies the following property (*): for every neighborhood W of Graph(BR) there exists a map h: Σ → Σ such that:
(i) Graph(h) ⊂ W.
(ii) For each player n the n-th coordinate map hn of h depends only on Σ–n.
(iii) h has no fixed points in U.
It suffices to show existence of an equivalent game G* satisfying (*).
Define G* as follows. Player n's pure strategy set is S*n = Sn × Sn+1, where n + 1 is taken modulo N. For each n, and m ∈ {n, n + 1}, denote by pnm the natural projection from S*n to Sm. Then the payoff function from pure strategies for player n is given by G*n(s*) = Gn(s), where for each m, sm = pm,m(s*m). In other words, n's choice of a strategy for n + 1 is payoff irrelevant. Clearly G* is equivalent to G. Let Σ*n be player n's set of mixed strategies in the game G*. We continue to use pnm to denote the map from Σ*n to Σm that computes for each mixed strategy σ*n the induced marginal distribution over Sm. Let p: Σ* → Σ be the map p(σ*) = (p1,1(σ*1),..., pN,N(σ*N)); i.e., p computes the payoff-relevant coordinates of σ*. Finally, let P: Σ* × Σ* → Σ × Σ be the map for which P(σ*, τ*) = (p(σ*), p(τ*)). Use BR* to denote the best-reply correspondence for the game G*. Similarly, C* denotes the component of equilibria of G* equivalent to equilibria in C, and U* denotes the neighborhood corresponding to U.
Fix a neighborhood W* of Graph(BR*). For each μ > 0, let W(μ) be the set of those (σ, τ) ∈ Σ × Σ for which τ is a μ-reply to σ in G. Then the collection {W(μ) | μ > 0} is a basis of neighborhoods of the graph of BR. Choose μ > 0 such that P–1(W(μ)) ⊆ W*. By the corollary in Appendix A, there exists a map h: Σ → Σ such that Graph(h) ⊂ W(μ) and h has no fixed points in U. Now define the map h*: Σ* → Σ* as follows: for each n, h*n(σ*) is the product distribution τn(σ*) × pn+1,n+1(σ*n+1), where τn(σ*) = hn(p1,1(σ*1),..., pn–1,n–1(σ*n–1), pn–1,n(σ*n–1), pn+1,n+1(σ*n+1),..., pN,N(σ*N)). By construction, each coordinate map h*n depends only on Σ*–n. We claim that the graph of h* is contained in W*. To see this, observe first that τn(σ*) is player n's component of the image of (p–n(σ*), pn–1,n(σ*n–1)) under h. Since Graph(h) ⊂ W(μ), τn(σ*) is a μ-reply to p–n(σ*). Therefore, (p(σ*), τ(σ*)) belongs to W(μ). Hence (σ*, h*(σ*)) ∈ P–1(W(μ)) ⊆ W*.
We finish the proof by showing that h* has no fixed point in U*. Suppose σ* is a fixed point of h*. Then each σ*n is a product distribution with pn,n+1(σ*n) = pn+1,n+1(σ*n+1) for all n. Therefore pnn(σ*n) = pnn(h*n(σ*)) = hn(p–n(σ*), pn–1,n(σ*n–1)) = hn(p(σ*)) for each player n, which implies that p(σ*) is a fixed point of h. Since h has no fixed point in U, σ* ∉ U*.
Step 2: Let I be the interval [0, δ]. We now show that without loss of generality we can assume that G satisfies the following property (**): there exists a map g: Σ → I^R, where R = Σn |Sn|, such that:
For each player n, gn depends only on Σ–n.
No profile σ ∈ U is an equilibrium of the game G ⊕ g(σ).
As in Step 1 we prove this by constructing an equivalent game with the property (**). Since the payoff functions are multilinear on the compact set Σ, there exists a Lipschitz constant M > 0 such that ∥Gn(σ) – Gn(τ)∥ ≤ M∥σ – τ∥ for all n and σ, τ ∈ Σ. We begin with a preliminary lemma.
Lemma. If τn is a β1-reply against σ, ∥τ′n – τn∥ ≤ β2, and ∥σ′ – σ∥ ≤ β3, then τ′n is a (β1 + Mβ2)-reply to σ and τn is a (2Mβ3 + β1)-reply to σ′.
Proof of the Lemma: The first result follows directly from the Lipschitz inequality. Let s be an optimal reply for player n to σ′. Then the second result follows by using the Lipschitz inequality along with the inequality: Gn(s, σ′–n) – Gn(τn, σ′–n) ≤ |Gn(s, σ′–n) – Gn(s, σ–n)| + Gn(s, σ–n) – Gn(τn, σ–n) + |Gn(τn, σ–n) – Gn(τn, σ′–n)|.
Fix η = δ/(6M). For each σ ∈ Σ there exists an open ball B(σ) around σ of radius < η such that for each σ′ ∈ B(σ) the set of pure best replies against σ′ is a subset of those that are best replies to σ. Since the set of best replies for each player n to a strategy profile is the face of Σn spanned by his pure best replies, BR(σ′) ⊆ BR(σ) for each σ′ ∈ B(σ). The balls B(σ) define an open covering of Σ. Hence there exists a finite set of points σ1,..., σk whose corresponding balls form a subcover. For each σi, let W(σi) be the η-neighborhood of BR(σi). Let W = ∪i (B(σi) × W(σi)). Then W is a neighborhood of the graph of BR. From Step 1 there exists a map h: Σ → Σ such that (i) Graph(h) ⊂ W; (ii) for each n, hn depends only on Σ–n; and (iii) h has no fixed point in U. If τ = h(σ) then there exist σi, τi such that σ ∈ B(σi), τi is a best reply to σi, and τ is within η of τi. Therefore, the Lemma implies that τi is a 2Mη-reply against σ and therefore that τ is a 3Mη-reply against σ.
Fix α > 0 such that if σ ∈ U then ∥σ – h(σ)∥ > α. For each n, let 𝒯n be the simplicial complex obtained by taking a sufficiently fine subdivision of Σn such that the diameter of each simplex is less than both η and α, and let Tn be the set of vertices of this simplicial complex. Define T = Πn Tn. We now define a game Ḡ that is equivalent to G, as follows. For each player n the set of pure strategies is Tn. The pure strategy tn ∈ Tn is a duplicate of the mixed strategy in Σn corresponding to the vertex tn of 𝒯n. Since the vertices of Σn belong to Tn, Ḡ is equivalent to G. Let Σ̄n be the set of mixed strategies of player n in Ḡ and let Σ̄ = Πn Σ̄n. Denote by C̄ and Ū the sets in Σ̄ that are equivalent to C and U, respectively.
For tn ∈ Tn, define X(tn) ⊆ Σ–n as the projection onto Σ–n of the inverse image of the closed star (cf. Appendix B) of tn under the map hn. And let Y(tn) be the set of σ–n ∈ Σ–n such that ∥tn – hn(σ)∥ ≥ 2η [recall that hn(σ) does not depend on σn]. Since the diameter of each simplex of 𝒯n is < η, X(tn) ∩ Y(tn) = ∅. Now use Urysohn's lemma to define a function πtn: Σ–n → [0, 1] such that πtn = 1 on X(tn), πtn = 0 on Y(tn), and πtn < 1 outside X(tn).
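One explicit choice of such a function is the standard metric-space Urysohn construction πtn = d(·, Y(tn)) / [d(·, X(tn)) + d(·, Y(tn))], which equals 1 exactly on X(tn) and 0 on Y(tn). A toy sketch (assuming numpy, with finite point sets standing in for the two disjoint closed sets):

```python
import numpy as np

def urysohn(point, X_points, Y_points):
    """Metric-space Urysohn function: 1 exactly on X, 0 on Y, continuous in between.
    X_points and Y_points are finite samples standing in for the disjoint closed sets."""
    dX = min(np.linalg.norm(point - x, ord=np.inf) for x in X_points)
    dY = min(np.linalg.norm(point - y, ord=np.inf) for y in Y_points)
    return dY / (dX + dY)

X_points = [np.array([0.0, 0.0])]
Y_points = [np.array([1.0, 1.0])]
print(urysohn(np.array([0.0, 0.0]), X_points, Y_points))   # 1.0 on X
print(urysohn(np.array([1.0, 1.0]), X_points, Y_points))   # 0.0 on Y
print(urysohn(np.array([0.5, 0.25]), X_points, Y_points))  # a value strictly between 0 and 1
```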
Let R′ = Σn |Tn|. We now construct a map ḡ: Σ̄ → I^{R′} with the requisite properties by first defining g on Σ and then extending it to the whole of Σ̄ by letting ḡ(σ̄) be g(σ), where σ is the equivalent profile in G. For each n, let fn: Σ–n → ℝ be the map defined by fn(σ–n) = maxs∈Sn Gn(s, σ–n). For each n, tn ∈ Tn, and σ ∈ Σ, let gtn(σ) = πtn(σ–n)[fn(σ–n) – Gn(tn, σ–n) + Mη]. We first show that g is well defined, i.e., g maps each σ to a point in I^{R′}. Fix n, tn, and σ. If σ–n ∈ Y(tn), then gtn(σ) = 0. If σ–n ∉ Y(tn), then ∥tn – hn(σ)∥ ≤ 2η. Since hn(σ) is a 3Mη-reply to σ–n, the Lemma implies that tn is a 5Mη-reply to σ–n, i.e., 0 ≤ fn(σ–n) – Gn(tn, σ–n) ≤ 5Mη. Hence, 0 ≤ gtn(σ) ≤ 6Mη = δ. Thus, g is a well defined map from Σ into I^{R′}. Obviously the extension ḡ of g to the whole of Σ̄ also has norm at most δ. Also, by construction, for each n, ḡn depends only on Σ̄–n.
To finish the proof of this step we show that if σ̄ ∈ Ū then σ̄ is not an equilibrium of Ḡ ⊕ ḡ(σ̄). Suppose to the contrary that σ̄ is such an equilibrium and let σ be the corresponding strategy in Σ. In the game Ḡ ⊕ ḡ(σ̄), consider the payoff that player n gets when he plays a pure strategy tn while the others play according to σ̄–n. If σ–n ∈ X(tn), then his payoff is fn(σ–n) + Mη; if σ–n ∉ X(tn) then his payoff is Gn(tn, σ–n) + πtn(σ–n)[fn(σ–n) – Gn(tn, σ–n) + Mη], which is strictly smaller than fn(σ–n) + Mη since πtn(σ–n) < 1. Obviously there exists at least one tn such that σ–n ∈ X(tn), for instance, any vertex of the simplex of 𝒯n that contains hn(σ) in its interior. Thus, the set of optimal replies to σ̄–n for player n, call it T′n, is the set of tn's such that σ–n ∈ X(tn). For each tn ∈ T′n, there exists a simplex of 𝒯n that contains tn and hn(σ). Hence, the distance between hn(σ) and tn is < α. The support of σ̄n being a subset of T′n, we then have ∥σn – hn(σ)∥ ≤ α. Since we are using the l∞-distance, ∥σ – h(σ)∥ ≤ α, which is impossible. Thus, there does not exist σ̄ in Ū that is an equilibrium of Ḡ ⊕ ḡ(σ̄).
Step 3: Suppose g: Σ → I^R has the property (**) described in Step 2. For each σ ∈ U there exists ζ(σ) > 0 and an open ball B(σ) around σ such that for each σ′ ∈ B(σ) and each g′ such that ∥g′ – g(σ′)∥ ≤ ζ(σ), σ′ is not an equilibrium of G ⊕ g′. The balls B(σ) form an open covering of U. Hence, there exists a finite set of points σ1,..., σk such that their corresponding balls cover U. Let ζ = mini ζ(σi). Construct a simplicial subdivision ℐ of the interval I such that the diameter of each simplex (i.e., a subinterval) is at most ζ. Using the multisimplicial approximation theorem from Appendix B, there exists a simplicial subdivision 𝒯n of each Σn and, for each n and each s ∈ Sn, a multisimplicial approximation g*s of gs that is multilinear on each multisimplex of 𝒯–n ≡ Πm≠n 𝒯m. Let g*: Σ → I^R be the corresponding multisimplicial map defined by the coordinate maps g*s. By construction, no σ ∈ U is an equilibrium of G ⊕ g*(σ).
As in Appendix B let 𝒫n be the polyhedral complex generated by 𝒯n, and let γn: Σn → [0, 1] be the associated convex map. For each n let Pn be the set of vertices of 𝒫n. Given a polyhedron P–n in Πm≠n 𝒫m, there exists a multisimplex T–n of Πm≠n 𝒯m that contains it. Since g* is multilinear on each multisimplex, g* is multilinear on each polyhedron.
Consider now the equivalent game G̃ where the strategy set of each player n is the set Pn of vertices of the polyhedral complex 𝒫n. Let Σ̃n be the set of mixed strategies of player n in the game G̃ and let Σ̃ = Πn Σ̃n. For each player n, let An be the |Sn| × |Pn| matrix where column p is the mixed strategy vector that corresponds to the vertex p of 𝒫n. Then the payoff to player n from a strategy profile σ̃ ∈ Σ̃ is his payoff in G from the profile σ, where σm = Amσ̃m for each m. For each n, let Bn: P–n → I^{Pn} be the map defined by Bn(p–n) = A′ng*n(p–n). Consider now the game G̃′ obtained by modifying the payoff functions to the following: for each player n, his payoff from the pure-strategy profile p is G̃n(p) + Bn,pn(p–n). By construction G̃′ is a δ-perturbation of G̃. Let cn be the vector in ℝ^{Pn} where the coordinate p of cn is γn(p). For each δ′ ≤ δ let G̃δ′ be the game G̃′ ⊕ [–δ′c]. Then G̃δ′ is a δ-perturbation of G̃.
We claim now that for sufficiently small δ′ the game G̃δ′ has no equilibrium in the set Ũ that is the neighborhood equivalent to U of the equilibrium component C̃ for G̃ equivalent to C. Indeed, suppose to the contrary that there is a sequence δk converging to zero and a corresponding sequence σ̃k of equilibria of G̃δk that lie in Ũ. For each k let σk be the equivalent profile in Σ. For each k and each player n, if σ̃n ∈ Σ̃n is a mixed strategy such that Anσ̃n = Anσ̃kn = σkn, then σ̃n yields the same payoff as σ̃kn against σ̃k–n in G̃′. Thus σ̃kn solves the linear programming problem: minimize Σpn σ̃n,pn γn(An,pn) subject to σ̃n ∈ Σ̃n and Anσ̃n = σkn. Let P̄kn be the unique polyhedron of 𝒫n that contains σkn in its interior. Since γn is a convex function, Σpn σ̃n,pn γn(An,pn) ≥ γn(σkn) for all σ̃n such that Anσ̃n = σkn, where An,pn is the pnth column of An and σ̃n,pn is the probability that σ̃n assigns to the pure strategy pn. Moreover, the construction of γn ensures that this inequality is strict unless the support of σ̃n is included in P̄kn. Therefore, the equilibrium strategy σ̃kn assigns positive probability only to points in P̄kn.
Now let σ̃ be a limit of the sequence σ̃k as δk ↓ 0 and let σ be the equivalent mixed strategy. Then σ̃ is an equilibrium of the game G̃′. Therefore, σ is an equilibrium of the game G ⊕ b, where bns = Σp–n Πm≠n σ̃m,pm g*ns(p–n) for each n and s ∈ Sn. By the arguments in the previous paragraph, there exists for each n a polyhedron P̄n of 𝒫n such that σ̃n assigns positive probability only to points in P̄n. Since each g*ns is multilinear on the multisimplex T–n that contains P̄–n ≡ Πm≠n P̄m, bn = g*n(σ–n). Thus σ is an equilibrium of G ⊕ g*(σ), which is a contradiction. Thus, for all sufficiently small δ′ the game G̃δ′ has no equilibrium in Ũ.
Thus the connected uniformly hyperstable sets are precisely the essential components as stated in Theorem 1.
Acknowledgments
We thank Robert Brown of the University of California, Los Angeles for commenting on Appendix B and suggesting the term multisimplicial complex. This work was funded in part by a National Science Foundation Grant.
Appendix A: Index Theory
An Index Derived from the Best-Reply Correspondence. We define an index for equilibrium components by using the best-reply correspondence and show that this index coincides with the standard index constructed from a Nash map.
Let BR denote the best-reply correspondence for the game G; i.e., BR(σ) = {τ ∈ Σ | (∀n) τn ∈ arg maxρn∈Σn ρ′nGn(σ)}. The set E of equilibria of G is the set of fixed points of BR, i.e., those σ for which σ ∈ BR(σ). Let C be a component of the equilibria of G. We follow McLennan (14) in defining an index for C. Let U be an open neighborhood of C such that its closure Ū satisfies Ū ∩ E = C. Let W be a neighborhood of Graph(BR) such that W ∩ {(σ, σ) ∈ Σ × Σ | σ ∈ Ū – U} = ∅. By corollary 2 in ref. 14 there exists a neighborhood V ⊆ W of Graph(BR) such that if f0 and f1 are any two maps from Σ to Σ whose graphs are contained in V, then there is a homotopy F: [0, 1] × Σ → Σ from f0 to f1 such that Graph(F) ⊂ [0, 1] × V. By the proposition in ref. 14 there exists a map f: Σ → Σ for which Graph(f) ⊂ V. Define the index IndBR(C) to be the standard index of the restricted map f: U → Σ. The choice of the neighborhood V and the homotopy axiom for the index ensure that this index does not depend on the particular map f chosen to compute the index.
The index of the component C can also be defined by using the index obtained from the Nash map Φ: Σ → Σ defined in Formulation, which has the equilibria as its fixed points. Define the Gül-Pearce-Stacchetti (11) index IndGPS(C) to be the standard index of the component C computed from the restriction of Φ to the map g: U → Σ.
Theorem 4. IndBR(C) = IndGPS(C).
Proof: For each λ > 0 define the game Gλ as the game where the payoff functions of all players in G are multiplied by λ; i.e., Gλ = λG. Clearly, all games Gλ have the same equilibria. For Gλ let wλ be the map corresponding to w in the game G, and let gλ ≡ r ∘ wλ be the corresponding GPS map. Then for each λ > 0 the homotopy H: [0, 1] × Σ → Σ, H(t, σ) = gμ(σ) with μ = 1 + t(λ – 1), from g ≡ g1 to gλ preserves the set of fixed points. Hence, the index of C under gλ is the same for all λ. To prove Theorem 4 it is sufficient to show that there exists λ > 0 such that the graph of gλ is contained in V, the neighborhood specified in the definition of IndBR(C). For each λ > 0 and σ ∈ Σ, wλ(σ) ≡ zλ is such that zλns = σns + λGns(σ) for all n, s. Choose c(σ) > 0 such that if s is not a best reply to σ–n for player n, then Gn(s′, σ–n) – Gn(s, σ–n) ≥ c(σ), where s′ is a best reply for player n against σ. Then zλns′ – zλns ≥ λc(σ) – 1 if s is not a best reply and s′ is. In particular, if λ ≥ 2/c(σ), then this difference is at least 1. Therefore, for each such λ, zλ is retracted by r to a point in BR(σ). Now choose an open ball B(σ) around σ in Σ such that (i) B(σ) × BR(σ) ⊂ V; and (ii) for each n and each s ∈ Sn that is not a best reply to σ, there is an s′ such that Gn(s′, σ′–n) – Gn(s, σ′–n) ≥ c(σ)/2 for all σ′ ∈ B(σ). Then as before, gλ(σ′) ∈ BR(σ) for each λ ≥ 4/c(σ) and σ′ ∈ B(σ). The balls B(σ) for σ ∈ Σ form an open cover of Σ. Since Σ is compact there exists a finite set σ1,..., σK ∈ Σ such that ∪k B(σk) ⊃ Σ. Let λ* = maxk 4/c(σk). For each λ ≥ λ* the graph of gλ belongs to V, as required.
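The effect of the rescaling can be seen numerically: as λ grows, the GPS map of Gλ sends a fixed σ to a point supported only on best replies. A small sketch (assuming numpy, with a hypothetical payoff vector for one player; the retraction is the same simplex-projection routine as in the earlier sketch):

```python
import numpy as np

def project_to_simplex(z):
    """Euclidean projection onto the probability simplex (the retraction r_n)."""
    u = np.sort(z)[::-1]
    cumsum = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - cumsum) / np.arange(1, len(z) + 1) > 0)[0][-1]
    theta = (1.0 - cumsum[rho]) / (rho + 1.0)
    return np.maximum(z + theta, 0.0)

sigma_n = np.array([0.4, 0.3, 0.3])
payoffs = np.array([1.0, 0.9, 0.2])    # hypothetical G_n(sigma); the first pure strategy is the unique best reply

for lam in (1.0, 5.0, 50.0):
    print(lam, project_to_simplex(sigma_n + lam * payoffs))
# lam = 50 (which exceeds 2/c(sigma) with c(sigma) = 0.1) already puts all mass on the best reply.
```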
A corollary follows from results of A. McLennan (personal communication).
Corollary. If C is a closed subset of a component K of equilibria, with C = K only if Ind(K) = 0, then there exists a closed neighborhood U of C for which, for each neighborhood W of Graph(BR), there exists a map h such that Graph(h) ⊂ W and h has no fixed point in U.
Equivalence of Index and Degree. Let Γ be the space of all finite N-player games with a fixed strategy set Sn for each player, and S = Πn Sn. Let 𝒢 be the graph of the Nash equilibrium correspondence over Γ and let p: 𝒢 → Γ be the natural projection. Each game G can be decomposed uniquely as G = G̃ ⊕ g, where for each player n and each pure strategy s ∈ Sn, Σs–n G̃n(s, s–n) = 0. Thus, Γ is the product space of all pairs (G̃, g). Define Θ: 𝒢 → Γ by Θ(G̃, g, σ) = (G̃, z), where for each player n and each s ∈ Sn, zns = σns + G̃n(s, σ–n) + gns. Theorem 1 of ref. 4 shows that Θ is a homeomorphism. The inverse Θ–1 is defined by Θ–1(G̃, z) = (G̃, g, r(z)), where r(z) = σ is the retraction of z to Σ and gns = zns – σns – G̃n(s, σ–n) for all n and s ∈ Sn. Furthermore, Θ extends to a homeomorphism Θ̄ between the one-point compactifications of 𝒢 and Γ, and p̄ ∘ Θ̄–1 is homotopic to the identity map on the compactification of Γ, where p̄ is the extension of p. Thus, the map p̄ has degree +1. We can therefore orient 𝒢 such that the projection map p: 𝒢 → Γ has degree 1. Given a game G and a component C of the game, choose a neighborhood U of {(G̃, g)} × C in the graph that is disjoint from the other components of equilibria of G (viewed as a subset of 𝒢). The degree of C, denoted deg(C), is the local degree of (G̃, g) under the restriction of p to U. Since Θ is the identity on the first factor, we can also define the degree of C by using 𝒵 as the space of games. Indeed, given the game G = (G̃, g), let ℰ̃ = {(g′, σ) ∈ 𝒵 × Σ | ((G̃, g′), σ) belongs to 𝒢}. Let θ′: ℰ̃ → 𝒵 be the map θ′(g′, σ) = z, where z is such that Θ((G̃, g′), σ) = (G̃, z). Then θ′ is a homeomorphism between ℰ̃ and 𝒵 and as before we can define the degree of C as the local degree of the projection map from a neighborhood of {g} × C in ℰ̃ whose closure does not contain any other equilibria of the game G. Obviously, these two definitions are equivalent. If we use θ′ then the degree of C is just the degree of g under the map f′ ≡ p′ ∘ θ′–1 from a neighborhood V of θ′({g} × C) in 𝒵, where p′ is the natural projection from ℰ̃ to 𝒵. Letting θ and f be the maps defined in Formulation, we have θ′({g} × C) = θ({0} × C) and f = f′ – g. Therefore, the degree of zero under the map f over V is the same as the degree of g under the map f′ over V. As in Formulation, the degree of zero under the map f over V is the index of the component w(C) of the fixed point set of F, which is the same as the index of C under the GPS map Φ.
Invariance of Index and Degree. We provide a simple proof by using the index defined by the best-reply correspondence.
Theorem 5. The index of a component of equilibria is invariant under the addition or deletion of redundant strategies.
Proof: Let C be an equilibrium component of game G. It suffices to show that the index of C is invariant under the addition of redundant strategies. Accordingly, for each player n let Tn be a finite collection of mixed strategies. Let G* be the game obtained by adding the strategies in Tn as pure strategies for n; i.e., n's pure strategy set in G* is Sn ∪ Tn. Let Σ*n be his set of mixed strategies. Let BR* be the best-reply correspondence in Σ*. Let p*: Σ* → Σ be the function that maps each mixed strategy in G* to the equivalent mixed strategy in G. Let ι: Σ → Σ* be the inclusion map that sends a point in Σ to the corresponding point on the face of Σ*; precisely, ι(σ) = σ*, where σ*ns = σns for s ∈ Sn and σnt = 0 for t ∈ Tn. Obviously, ι(σ) ⊂ p–1(σ) for each σ ∈ Σ.
Let C* ≡ p–1(C) be the equilibrium component of G* corresponding to C. Let U be an open neighborhood of C whose closure is disjoint from other equilibrium components of G. Let U* = p–1(U). Choose a neighborhood W* of the graph of BR* such that the index of C* can be computed as the sum of the indices of the fixed points in U* of any map h* whose graph is contained in W*.
Let W be a neighborhood of the graph of BR such that (σ, τ) ∈ W implies p–1(σ) × p–1(τ) ⊂ W*. By ref. 14, every neighborhood of the graph of BR contains the graph of a function. Therefore, by the definition of IndBR(C), there exists a map h: Σ → Σ such that (i) the graph of h is contained in W; (ii) h has no fixed points on the boundary of U; and (iii) IndBR(C) is the index of the map h over U. Define now a map h*: Σ* → Σ* by h* ≡ ι ∘ h ∘ p*. Then, by construction, the graph of h* is contained in W*.
Moreover, h and h* have homeomorphic sets of fixed points. In fact, the fixed points of h* are the image of the fixed points of h under the injective map ι. Letting φ ≡ h ∘ p*: Σ* → Σ, we have that h* = ι ∘ φ and h = φ ∘ ι. Therefore, by the commutativity property of the index [Dold (12)], the index of the map h: U → Σ is the same as that of h*: U* → Σ*. Hence IndBR*(C*) = IndBR(C).
Appendix B: Multisimplicial Complexes
A Multisimplicial Approximation Theorem. We establish a multilinear version of the simplicial approximation theorem. This result may be known but we found no reference. We begin with some definitions; see Spanier (15) for details.
A set of points {v0,..., vn} in ℝ^d is affinely independent if the equations Σi λivi = 0 and Σi λi = 0 imply that λ0 = ··· = λn = 0. An n-simplex K in ℝ^d is the convex hull of an affinely independent set {v0,..., vn}. Each vi is a vertex of K and the collection of vertices is called the vertex set of K. Each σ ∈ K is expressible as a unique convex combination Σi λivi; and for each i, σ(vi) ≡ λi is the vith barycentric coordinate of σ. The interior of K is the set of σ such that σ(vi) > 0 for all i. A face of K is the convex hull of a nonempty subset of the vertex set of K.
A (finite) simplicial complex 𝒦 is a finite collection of simplices such that each face of each simplex in 𝒦 belongs to 𝒦, and the intersection of two simplices is either empty or a face of each. The set V of 0-dimensional simplices is called the vertex set of 𝒦. The union of the simplices in 𝒦 is called the space of the simplicial complex and is denoted |𝒦|. For each σ ∈ |𝒦|, there exists a unique simplex K of 𝒦 containing σ in its interior; define the barycentric coordinate function σ: V → [0, 1] by letting σ(v) = 0 if v is not a vertex of K and otherwise by letting σ(v) be the corresponding barycentric coordinate of σ in the simplex K. For each vertex v ∈ V, the star of v, denoted St(v), is the set of σ ∈ |𝒦| such that σ(v) > 0. The closed star of v, denoted Cl St(v), is the closure of St(v).
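Barycentric coordinates can be computed numerically by solving a small linear system; a sketch (assuming numpy, with vertices assumed affinely independent and the point assumed to lie in the simplex). Since σ ∈ St(v) iff σ(v) > 0, the same computation gives a direct test of star membership.

```python
import numpy as np

def barycentric_coordinates(vertices, x):
    """Barycentric coordinates of x in the simplex spanned by `vertices`.

    The coordinates lambda solve  sum_i lambda_i v_i = x  together with the
    affine constraint  sum_i lambda_i = 1."""
    V = np.array(vertices, dtype=float).T           # d x (n+1) matrix of vertex columns
    A = np.vstack([V, np.ones(V.shape[1])])         # append the affine constraint
    b = np.append(np.array(x, dtype=float), 1.0)
    lam, *_ = np.linalg.lstsq(A, b, rcond=None)
    return lam

# The 2-simplex spanned by e1, e2, e3 in R^3: the barycentric coordinates of a point
# of the simplex are just its entries.
verts = [np.eye(3)[i] for i in range(3)]
x = np.array([0.5, 0.5, 0.0])
lam = barycentric_coordinates(verts, x)
print(np.round(lam, 6))          # -> [0.5 0.5 0. ]
print(bool(lam[2] > 1e-9))       # False: x is not in the star of the third vertex
```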
A subdivision 𝒦′ of a simplicial complex 𝒦 is a simplicial complex such that each simplex of 𝒦′ is contained in a simplex of 𝒦 and each simplex of 𝒦 is the union of simplices in 𝒦′. Obviously |𝒦′| = |𝒦|. We need the following theorem on simplicial subdivisions for our approximation theorem below (15).
Theorem 5. For every simplicial complex 𝒦 and every λ > 0 there exists a simplicial subdivision 𝒦′ of 𝒦 such that the diameter of each simplex of 𝒦′ is at most λ.
A multisimplex is a set of the form K1 × ··· × Km, where for each i, Ki is a simplex. A multisimplicial complex 𝒦 is a product 𝒦 = 𝒦1 × ··· × 𝒦m, where for each i, 𝒦i is a simplicial complex. The vertex set V of a multisimplicial complex 𝒦 is the set of all (v1,..., vm) for which each vi is a vertex of 𝒦i. The space of the multisimplicial complex is |𝒦1| × ··· × |𝒦m| and is denoted |𝒦|. For each vertex v of 𝒦, the star of v, St(v), is the set of all σ ∈ |𝒦| such that for each i, σi ∈ St(vi). The closure of this set is Cl St(v). A subdivision of a multisimplicial complex 𝒦 is a multisimplicial complex 𝒦′ = 𝒦′1 × ··· × 𝒦′m where for each i, 𝒦′i is a subdivision of 𝒦i. In the following, 𝒦 is a fixed multisimplicial complex and ℒ is a fixed simplicial complex.
Definition 1: A map f: |𝒦| → |ℒ| is called multisimplicial if for each multisimplex K of 𝒦 there exists a simplex L in ℒ such that:
(i) f maps each vertex of K to a vertex of L;
(ii) f is multilinear on |K|; i.e., for each σ ∈ |K|, f(σ) = Σv∈V f(v)·Πi σi(vi).
By property (i) vertices of K are mapped to vertices of L. Therefore, for each σ ∈ |K|, f(σ) is an average of the values at the vertices of K. Since the simplex L is a convex set, the image of the multisimplex K is contained in L. If 𝒦 is a simplicial complex then Definition 1 coincides with the usual definition of a simplicial map. In this special case the image of a (multi)simplex K under f is a simplex of ℒ, which is not necessarily true in general.
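As a hypothetical toy illustration of Definition 1 (assuming numpy): take 𝒦 to be the product of two 1-simplices, send its four vertices to vertices of a 2-simplex in ℝ², and extend multilinearly. The image of the multisimplex lies in the target simplex but, as noted above, need not itself be a simplex.

```python
import numpy as np

# Each factor simplex has vertices {0, 1}; a point of the multisimplex is given by
# its barycentric coordinates (1-s, s) and (1-t, t).  Each of the four vertices is
# mapped to a vertex of a target simplex in R^2, and the map is extended multilinearly:
# f(sigma) = sum_v f(v) * prod_i sigma_i(v_i).
f_at_vertex = {
    (0, 0): np.array([0.0, 0.0]),
    (0, 1): np.array([1.0, 0.0]),
    (1, 0): np.array([0.0, 1.0]),
    (1, 1): np.array([1.0, 0.0]),   # two vertices may share an image
}

def multilinear_eval(s, t):
    """Evaluate the multilinear extension at coordinates (1-s, s) and (1-t, t)."""
    lam = {0: 1.0 - s, 1: s}        # barycentric coordinates in the first factor
    mu = {0: 1.0 - t, 1: t}         # barycentric coordinates in the second factor
    return sum(lam[i] * mu[j] * f_at_vertex[(i, j)] for i in (0, 1) for j in (0, 1))

print(multilinear_eval(0.5, 0.5))   # image of the barycenter: [0.5  0.25]
```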
Definition 2: Let g: |𝒦| → |ℒ| be a continuous map. A multisimplicial map f: |𝒦| → |ℒ| is a multisimplicial approximation to g if for each σ ∈ |𝒦|, f(σ) is in the unique simplex of ℒ that contains g(σ) in its interior.
We could equivalently define a multisimplicial approximation by requiring that for each σ and each simplex L of ℒ, if g(σ) ∈ L then also f(σ) ∈ L. We now prove a multisimplicial approximation theorem.
Theorem 6. Suppose that g: |𝒦| → |ℒ| is a continuous map. Then there exists a subdivision 𝒦′ of 𝒦 and a multisimplicial approximation f: |𝒦′| → |ℒ| of g.
Proof: The collection {g–1(St(w)) | w is a vertex of ℒ} is an open covering of |𝒦|. Let λ > 0 be a Lebesgue number of this covering; i.e., every subset of |𝒦| whose diameter is < λ is included in some set of the collection. By Theorem 5, there exists for each i a simplicial subdivision 𝒦′i of 𝒦i such that the diameter of each simplex is < λ/2. Then for each vertex v of 𝒦′ ≡ 𝒦′1 × ··· × 𝒦′m, St(v) has diameter < λ (recall that we use the l∞ norm). We first define a function f0 from the vertex set of 𝒦′ to the vertex set of ℒ as follows. For each vertex v of 𝒦′, since the diameter of St(v) is < λ, there exists a vertex w of ℒ such that g(St(v)) ⊂ St(w). Let f0(v) = w. Suppose v1,..., vk are vertices of a multisimplex K. We claim that their images under f0 span a simplex in ℒ. Indeed, since the vj's are vertices of a multisimplex, we have that ∩j St(vj) is nonempty. Therefore, ∅ ≠ g(∩j St(vj)) ⊆ ∩j g(St(vj)) ⊆ ∩j St(f0(vj)).
Therefore, the vertices f0(vj) span a simplex in ℒ. Since f0 maps vertices of a multisimplex to vertices of a simplex, there exists a well defined unique multilinear extension of f0, call it f. To finish the proof we show that f is a multisimplicial approximation of g. Let σ be an interior point of a multisimplex K and let L be the simplex containing g(σ) in its interior. For every vertex v of K, g(St(v)) ⊆ St(f(v)) by construction. Thus g(σ) ∈ St(f(v)) for each vertex v of K. In particular, the set of vertices {f(v) | v is a vertex of K} spans a subsimplex L′ of L. Since f(σ) ∈ L′, f is a multisimplicial approximation of g.
The proof of Theorem 6 shows a slightly stronger result. Let η = λ/2, where λ is as defined in the proof. If each 𝒦′i is a subdivision of 𝒦i such that the diameter of each simplex is at most η, then g admits a multisimplicial approximation with respect to 𝒦′ = 𝒦′1 × ··· × 𝒦′m. Thus, we obtain:
Corollary 2. There exists η > 0 such that, for each subdivision 𝒦′ of 𝒦 with the property that the diameter of each multisimplex is at most η, there exists a multisimplicial approximation of g with respect to 𝒦′.
Construction of a Convex Map on a Polyhedral Subdivision. We describe the construction of a convex map associated with a polyhedral refinement of a simplicial subdivision.
Let 𝒯 be a simplicial complex obtained from a simplicial subdivision of the d-dimensional unit simplex Σ in ℝ^{d+1}. The polyhedral complex 𝒫 is derived from 𝒯 as follows [Eaves and Lemke (16)]. For each simplex τ ∈ 𝒯 whose dimension is d – 1, let Hτ = {x | a′τx = bτ} be the hyperplane that includes τ and is orthogonal to Σ. Then each closed d-dimensional admissible polyhedron of 𝒫 has the form Σ ∩ (∩τ H^{pτ}τ), where each pτ ∈ {+, –} and H^{+}τ and H^{–}τ are the two closed half-spaces whose intersection is Hτ. Enlarge 𝒫 by applying the rule that each lower-dimensional polyhedral face of an admissible polyhedron is also admissible. By construction, the closure of each simplex in 𝒯 is partitioned by admissible polyhedra of 𝒫, any two nondisjoint admissible polyhedra meet in a common face that is also an admissible polyhedron, and each admissible polyhedron is convex. Associate with 𝒫 the map γ: Σ → [0, 1] for which γ(σ) = α Στ |a′τσ – bτ|, where the scaling factor α > 0 is sufficiently small that γ(Σ) ⊆ [0, 1]. Then γ is convex and piecewise affine. In particular, for any finite collection σ1,..., σk of points in Σ and nonnegative scalars λ1,..., λk such that Σi λi = 1, we have that γ(Σi λiσi) ≤ Σi λiγ(σi), with the inequality being strict if and only if there does not exist an admissible polyhedron of 𝒫 that contains all of the σi's.
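A one-dimensional sketch of this construction (assuming numpy): subdivide the 1-simplex at its midpoint, so 𝒯 has a single interior vertex, one cut hyperplane, and two admissible subintervals; γ is affine on each subinterval and strictly convex across them.

```python
import numpy as np

# The 1-simplex Sigma = {(x, 1-x)} subdivided at its midpoint.  The single interior
# cut corresponds to the hyperplane a'x = b with a = (1, 0) and b = 0.5, and
# gamma(x) = alpha * |a'x - b| is affine on each of the two admissible subintervals.
alpha = 0.5
cuts = [(np.array([1.0, 0.0]), 0.5)]        # one (a, b) pair per (d-1)-simplex of T

def gamma(x):
    x = np.asarray(x, dtype=float)
    return alpha * sum(abs(a @ x - b) for a, b in cuts)

x1 = np.array([0.25, 0.75])                 # interior to the first subinterval
x2 = np.array([0.75, 0.25])                 # interior to the second subinterval
mid = 0.5 * (x1 + x2)
# No admissible polyhedron contains both x1 and x2, so the inequality is strict:
print(gamma(mid), 0.5 * (gamma(x1) + gamma(x2)))   # 0.0 < 0.125
```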
Author contributions: S.G. and R.W. performed research and wrote the paper.
Footnotes
An analog of this theorem (with essentially the same proof) interprets hyperstability as a property of the equivalence class of a set of strategy profiles for the equivalence class of the game; i.e. G* represents the equivalence class of game G and a hyperstable set for G* represents the equivalent hyperstable sets for games equivalent to G*.
References
- 1. Nash, J. (1950) Proc. Natl. Acad. Sci. USA 36, 48–49.
- 2. Nash, J. (1951) Ann. Math. 54, 286–295.
- 3. Hillas, J. & Kohlberg, E. (2002) in Handbook of Game Theory III, eds. Aumann, R. & Hart, S. (Elsevier, Amsterdam), pp. 1597–1663.
- 4. Kohlberg, E. & Mertens, J. (1986) Econometrica 54, 1003–1039.
- 5. O'Neill, B. (1953) Am. J. Math. 75, 497–509.
- 6. Govindan, S. & Wilson, R. (1997) Econ. Theory 10, 541–549.
- 7. DeMichelis, S. & Germano, F. (2000) J. Econ. Theory 94, 192–217.
- 8. Kreps, D. & Wilson, R. (1982) Econometrica 50, 863–894.
- 9. Govindan, S. & Wilson, R. (2001) Econometrica 69, 765–769.
- 10. von Schemde, A. (2004) Ph.D. thesis (London School of Economics, London).
- 11. Gül, F., Pearce, D. & Stacchetti, E. (1993) Math. Ops. Res. 18, 548–552.
- 12. Dold, A. (1972) Lectures on Algebraic Topology (Springer, New York).
- 13. Ritzberger, K. (1994) Int. J. Game Theory 23, 207–236.
- 14. McLennan, A. (1989) Int. J. Game Theory 18, 175–184.
- 15. Spanier, E. (1966) Algebraic Topology (McGraw–Hill, New York).
- 16. Eaves, B. & Lemke, C. (1981) Math. Ops. Res. 6, 475–484.