Abstract
Kohlberg and Mertens [Kohlberg, E. & Mertens, J. (1986) Econometrica 54, 1003–1039] proved that the graph of the Nash equilibrium correspondence is homeomorphic to its domain when the domain is the space of payoffs in normal-form games. A counterexample disproves the analog for the equilibrium outcome correspondence over the space of payoffs in extensive-form games, but we prove an analog when the space of behavior strategies is perturbed so that every path in the game tree has nonzero probability. Without such perturbations, the graph is the closure of the union of a finite collection of its subsets, each diffeomorphic to a corresponding path-connected open subset of the space of payoffs. As an application, we construct an algorithm for computing equilibria of an extensive-form game with a perturbed strategy space, and thus approximate equilibria of the unperturbed game.
Keywords: game theory | extensive form | computing equilibria
1. Introduction
The most useful general result in game theory is the structure theorem of Kohlberg and Mertens (1). It characterizes the topological structure of the graph of the Nash equilibrium correspondence over the space of payoffs from finite normal-form games. Its proof is very simple, yet its corollaries include the existence of equilibria and strategically stable sets of equilibria, and the resulting index theory shows that a component with nonzero index contains a strategically stable set and that an equilibrium with negative index is typically dynamically unstable. Its practical applications include efficient algorithms for computing equilibria of games in normal form. Here we develop analogs for the graph of equilibrium outcomes over a domain that is the space of payoffs at final nodes of a game tree with perfect recall. As for normal-form games, these results yield an algorithm for computing equilibria of games in extensive form.
Kohlberg and Mertens (1) prove that the equilibrium graph is homeomorphic to the space of players' payoffs from pure-strategy profiles of the normal form. For a fixed game tree, however, the equilibrium graph cannot be homeomorphic to the space of payoffs at final nodes, because such games typically have continua of equilibria. Nevertheless, Kreps and Wilson (2) prove that a generic extensive-form game has finitely many equilibrium components, and all equilibria in each component have the same outcome, i.e., the same probability distribution on final nodes. Conceivably the analog of the Kohlberg–Mertens characterization might be true for the graph of equilibrium outcomes, but we show in Section 3 that this analog is false. We establish two weaker characterizations. In Section 4, for a partition of the graph corresponding to the subtrees of equilibrium paths, we construct a homeomorphism from each part into a subset of the space of games; the graph is then the closure of the union of these parts. In Section 5 we prove an exact analog when the tree has no zero-probability events. This result applies to any construction based on perturbing players' simplices of mixed strategies, e.g., the graphs of ɛ-perfect and ɛ-proper equilibria. Section 6 develops an efficient algorithm for computing equilibria of extensive-form games.
2. Formulation
We consider the space 𝒢 of extensive-form games obtained by assigning payoffs to the final nodes of a finite game tree Γ ≡ (T, ≺, U, N, P*) with perfect recall. T is the set of nodes and ≺ is the irreflexive binary relation of precedence in the tree (T, ≺); that is, ≺ is acyclic and totally orders the predecessors {t′ | t′ ≺ t} of each node t. The subset of final nodes (those with no successors) is Z ⊂ T, U is the partition of T∖Z into information sets of the players and nature, N is the set of players, and P*(z) > 0 is the probability that nature's actions do not exclude the final node z. Un ⊂ U is the collection of information sets for player n ∈ N and An(u) is n's set of actions (branches of the tree) available at his information set u ∈ Un. Let An ≡ ∪uAn(u) be the entire set of n's actions, each labeled differently. Write u ≺ z, or equivalently z ≻ u, if t ≺ z ∈ Z for some node t ∈ u, and write (u, i) ≺ z if t ≺ t′ ⪯ z for some node t′ that immediately follows node t ∈ u along action i ∈ An(u). Similarly i ≺ i′ if i, i′ are actions at u, u′ with (u, i) ≺ u′. Perfect recall implies that each (Un, ≺) is a tree. Player n's set of pure strategies is Sn ≡ {s : Un → An|s(u) ∈ An(u)}, and his simplex of mixed strategies is Σn ≡ Δ(Sn). Kuhn (3) shows that in a game tree with perfect recall each player n can implement a mixture of pure strategies by a payoff-equivalent behavior strategy bn = (bn(u))u∈Un in which each bn(u) ∈ Δ(An(u)) is a mixture of actions in An(u); i.e., bn(i|u) is the conditional probability at u that n chooses i.
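Kuhn's construction can be illustrated concretely. The following sketch (our own illustration with hypothetical labels, not code from this article) converts a mixed strategy into its payoff-equivalent behavior strategy for a single player with two information sets, the second reached only after a particular action at the first:

```python
from itertools import product

# One player with perfect recall: information set u2 is reached only
# after action 'a' is taken at u1; all labels here are hypothetical.
info_sets = {"u1": ["a", "b"], "u2": ["c", "d"]}
history = {"u1": [], "u2": [("u1", "a")]}   # n's own moves preceding u

# Pure strategies assign one action to each information set.
pures = [dict(zip(info_sets, combo)) for combo in product(*info_sets.values())]

# An arbitrary mixed strategy over the four pure strategies.
sigma = {k: w for k, w in enumerate([0.4, 0.1, 0.3, 0.2])}

def behavior(u, i):
    """b(i|u): probability of i at u, conditional on n's own play reaching u."""
    consistent = [k for k, s in enumerate(pures)
                  if all(s[u_] == i_ for u_, i_ in history[u])]
    reach = sum(sigma[k] for k in consistent)
    choose = sum(sigma[k] for k in consistent if pures[k][u] == i)
    return choose / reach if reach > 0 else None
```

For instance, with the weights above, b(a|u1) = 0.5 and b(c|u2) = 0.4/0.5 = 0.8, the conditional probability given that u2 is not excluded by the player's own earlier choice.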
For the fixed tree Γ, the space of games is 𝒢 = ℜN×Z, where a game G ∈ 𝒢 assigns payoff Gn(z) to player n at final node z. The space of outcomes is Ω = Δ(Z), where an outcome P ∈ Ω assigns probability P(z) to z. Let ℰ ⊂ 𝒢 × Ω be the graph of pairs (G, P) for which P is the outcome induced by P* and an equilibrium of the game G.
3. A Counterexample
A simple example shows that the analog of Kohlberg-Mertens' structure theorem cannot be true for every game tree. Consider the tree in which player 1 chooses an action in A1 = {T, B} and the game ends if 1 chooses T; otherwise, knowing that 1 chose B, 2 chooses an action in A2 = {L, R}. For this tree, the projection p : ℰ → 𝒢, p(G, P) = G, is a proper map; i.e., the inverse image of a compact set is compact. Therefore, if there exists a homeomorphism ℋ : ℰ → 𝒢, then the composite map p ○ ℋ−1 : 𝒢 → 𝒢 is also proper, so its local degree is the same at every game in 𝒢; see Dold (4) Section VIII.4.4-5. To establish a contradiction, it suffices to present examples at which p ○ ℋ−1 has different local degrees. Consider the games (in normal form)
[The payoff matrices of the four games Gxy referred to below are not reproduced here: in each, player 1 chooses T or B and, after B, player 2 chooses L or R.]
The game G33 has a unique equilibrium path BL that persists in a neighborhood of G33, because B remains a strictly dominant strategy for player 1 and L remains the unique best reply for 2. Therefore, the local degree of p ○ ℋ−1 at G33 must be +1 or −1. The game G31 has the two equilibrium paths BL and T, and again all games in a neighborhood of G31 have these same two outcomes. Therefore the local degree of p ○ ℋ−1 at G31 must be −2, 0, or +2. This contradiction does not occur over the space of normal-form games. For instance, in that larger space, the local degree of p ○ ℋ−1 at G31 is +1: the index of the outcome T is 0 and the index of BL is +1, and the local degree at G31 is the sum of these indices. Similar counterexamples can be constructed for the graphs of subgame-perfect and sequential equilibrium outcomes. Further analysis reveals two ways that the manifold property of the space of games on the tree is not inherited by the graph of equilibrium outcomes. First, (G32, T) has a neighborhood in ℰ that is a manifold with boundary. Second, (G21, T) is a point where the graph bifurcates over the segment 2 − ɛ < x < 2 + ɛ: if x < 2, then T is the unique equilibrium outcome, at x = 2 there is a continuum of equilibrium outcomes, and if x > 2, then both T and BL are equilibrium outcomes.
4. A Partial Structure Theorem for Games in Extensive Form
In this section, we prove a structure theorem for graphs of equilibrium outcomes on subtrees. For this section only, we assume that the space of games is 𝒢+ ≡ ℜ++N×Z, the set of games in which every payoff is strictly positive. This assumption is without loss of generality, because there exists a homeomorphism between 𝒢 and 𝒢+ that preserves the equilibrium correspondence. Indeed, there exists a monotonic function f : ℜ → ℜ++ such that f(x) goes to zero as x goes to −∞, and f(x) goes to +∞ as x goes to +∞. Then the map that sends G ∈ 𝒢 to G′ ∈ 𝒢+ given by G′n(z) = Gn(z) + f(λn) − λn, where λn = minzGn(z), is a homeomorphism that preserves the equilibrium correspondence. It suffices therefore to use the graph Ē of equilibrium outcomes over 𝒢+. Consider first the subset ℰ+ ≡ {(G, P) ∈ Ē|P(z) > 0 ∀z} of the graph over 𝒢+, where all final nodes have positive probability. Define the map Φ : ℰ+ → 𝒢+, Φ(G, P) = H, by Hn(z) = Gn(z)P(z).
Lemma 4.1.
Φ : ℰ+ → 𝒢+ is a homeomorphism.
Proof:
We construct a continuous map Ψ : 𝒢+ → ℰ+ and show that Ψ ○ Φ and Φ ○ Ψ are identities on ℰ+ and 𝒢+, respectively, which immediately implies the result.
Define Ψ : 𝒢+ → ℰ+ as follows. Given any H ∈ 𝒢+, first define Hn(u, i) ≡ ∑z≻(u,i)Hn(z) and Hn(u) ≡ ∑i∈An(u)Hn(u, i); next, define bn(i|u) ≡ Hn(u, i)/Hn(u); then the outcome P is obtained from P* and the behavior profile b; and finally, define Gn(z) ≡ Hn(z)/P(z). Each of these is positive and G ∈ 𝒢+ as required. The outcome P induces probabilities P(u) and P(u, i) of the events that the information set u is reached and that action i is taken there; and because P(u, i) > 0, also the conditional probability Q(z|u, i) = P(z)/P(u, i) if z ≻ (u, i). Thus
Gn(u, i) ≡ ∑z≻(u,i)Gn(z)Q(z|u, i)

= ∑z≻(u,i)[Hn(z)/P(z)][P(z)/P(u, i)]

= Hn(u, i)/P(u, i),

and P(u, i) = P(u)bn(i|u) = P(u)Hn(u, i)/Hn(u).

Therefore

Gn(u, i) = Hn(u, i)/P(u, i) = Hn(u)/P(u),
which implies that Gn(u, i) = Gn(u) for each i ∈ An(u). From its definition above, Gn(u, i) is n's continuation value from action i at u in the game G with behavior profile b, so the fact that it is the same for each action i ∈ An(u) verifies that b is an equilibrium. Therefore, (G, P) ∈ ℰ+ as required for Ψ to be well defined. Obviously, Ψ is a continuous map. Also, by construction, Φ ○ Ψ is the identity on G+. To complete the proof, it remains to be shown that Ψ ○ Φ is the identity on ℰ+. To prove this, suppose (G′, P′) ∈ ℰ+ and let v′n(i|u) be n's equilibrium continuation value from i at u. Because P′ ≫ 0, all actions at u are optimal, so v′n(i|u) = v′n(u) for all i ∈ An(u), and ∑z≻(u,i)G′n(z)P′(z) = v′n(u)P′(u)b′n(i|u), where b′ is the behavior profile for P′. Therefore, if Φ(G′, P′) = H and Ψ(H) = (G, P) as above, then b′n(i|u) = Hn(u, i)/Hn(u) ≡ bn(i|u). Because P′ is uniquely determined by b′, P′ = P; and because G′n(z)P′(z) = Hn(z) = Gn(z)P(z), also G′ = G. Thus Ψ ○ Φ is the identity, as asserted. □
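The maps Φ and Ψ of Lemma 4.1 are easy to compute. The following sketch (our own illustration with arbitrary positive payoffs) carries out the construction of Ψ on the two-stage tree used in Section 3 and checks numerically that Φ ○ Ψ is the identity and that the derived behavior profile is an equilibrium:

```python
# Sketch of Psi on the two-stage tree of Section 3: player 1 picks T or B;
# after B, player 2 picks L or R. Final nodes zT, zL, zR; H >> 0 arbitrary.
H = {1: {"zT": 2.0, "zL": 3.0, "zR": 0.5},
     2: {"zT": 1.0, "zL": 1.5, "zR": 0.25}}

# Behavior strategies b_n(i|u) = H_n(u, i) / H_n(u), per the proof.
h1T, h1B = H[1]["zT"], H[1]["zL"] + H[1]["zR"]
b1T, b1B = h1T / (h1T + h1B), h1B / (h1T + h1B)
h2L, h2R = H[2]["zL"], H[2]["zR"]
b2L, b2R = h2L / (h2L + h2R), h2R / (h2L + h2R)

# Outcome P induced by the behavior profile, then payoffs G = H / P.
P = {"zT": b1T, "zL": b1B * b2L, "zR": b1B * b2R}
G = {n: {z: H[n][z] / P[z] for z in P} for n in H}

# Phi(Psi(H)) = H: the product G_n(z) P(z) recovers H_n(z).
assert all(abs(G[n][z] * P[z] - H[n][z]) < 1e-9 for n in H for z in P)

# b is an equilibrium: player 1 is indifferent between T and B.
v1T = G[1]["zT"]
v1B = b2L * G[1]["zL"] + b2R * G[1]["zR"]
assert abs(v1T - v1B) < 1e-9
```

The indifference at the end is exactly the conclusion Gn(u, i) = Gn(u) of the proof: both continuation values equal Hn(u)/P(u).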
Each equilibrium induces an equilibrium with full support on the pruned tree obtained by eliminating all nodes following a branch with zero probability. Therefore:
Theorem 4.2.
There exist finitely many subsets ℰk ⊂ Ē, such that (a) each ℰk is homeomorphic to an open path-connected subset of 𝒢+ and (b) Ē is the closure of ∪kℰk.
Proof:
Each ℰk is the subset of the graph consisting of all pairs (G, P) ∈ Ē in which the support of the outcome P is Zk ⊂ Z and the outcome is induced by an equilibrium in which any pure strategy is strictly inferior if it does not exclude some information set on the equilibrium path and uses there an action that has zero probability in the equilibrium. It is then clear that these ℰk satisfy part b, so it remains to prove part a. Let 𝒢k be the projection of 𝒢+ onto the coordinates corresponding to Zk. 𝒢k is the space of games with the tree Γk obtained from Γ by retaining only branches leading to nodes in Zk. Let Êk be the graph of the (completely mixed) equilibrium outcome correspondence over 𝒢k. By Lemma 4.1, Êk is homeomorphic to 𝒢k. Express 𝒢+ as the product of 𝒢k with ℱk = ℜ++N×(Z∖Zk). Then Êk × ℱk is homeomorphic to 𝒢k × ℱk, using the identity map on the second factor ℱk. ℰk is an open subset of Êk × ℱk and is therefore mapped onto an open subset of 𝒢+ by the homeomorphism. It remains to show that ℰk is path-connected. Let ((Gk, Fk), P) and ((G′k, F′k), P′) be two points in ℰk. Because Êk is homeomorphic to 𝒢k, connect (Gk, P) and (G′k, P′) by a path in Êk. Let F*k ∈ ℱk be a vector whose coordinates are all strictly less than the payoff received by any player at any equilibrium along this path, and less in each coordinate than Fk and F′k. Then the linear paths connecting ((Gk, Fk), P) to ((Gk, F*k), P) and ((G′k, F′k), P′) to ((G′k, F*k), P′) are in ℰk, because decreasing payoffs to players at nodes in Z∖Zk has no effect on equilibria. Now the points ((Gk, F*k), P) and ((G′k, F*k), P′) can be path-connected using the path in Êk. The choice of F*k ensures that the path belongs to ℰk. □
The counterexample in Section 3 shows that, for some trees, a violation of the manifold property occurs at boundaries among the open subsets ℰk. These boundaries lie within the lower-dimensional set of nongeneric games excluded by theorems showing that a generic extensive-form game has a finite number of equilibrium outcomes; see Kreps and Wilson (2) or Govindan and Wilson (5).
5. A Structure Theorem for Games with Perturbed Strategies
In this section, we assume that each player n's simplex Σn of mixed strategies is perturbed to a compact convex subset Σnɛ ⊂ Σn disjoint from the boundary of Σn. Because P* ≫ 0 by assumption, these perturbations assure that every mixed-strategy profile in Σɛ ≡ ∏nΣnɛ yields a positive probability for every final node.
From a mixed-strategy profile σ, one can derive the corresponding “nonexclusion” or enabling profile p ∈ ∏n[0, 1]Ln as follows. Ln is the set of player n's last actions; that is, i ∈ Ln ⊂ An iff there exists z ∈ Z such that i is the ≺-maximal element of An(z) ≡ {i′ ∈ An|i′ ≺ z}; that is, i = ℓn(z) ≡ Arg max An(z). If Ln = ∅, then n is a dummy player, so pn can be omitted from the profile. For each i ∈ Ln, pn(i) is the probability under σn that n's selected pure strategy does not exclude i or any of n's actions preceding i. One computes pn(i) as follows. The subset of n's pure strategies that do not exclude z is Sn(z) ≡ {s ∈ Sn|(u, i) ≺ z ⇒ s(u) = i}. If n uses the mixed strategy σn, then the probability that n does not exclude z is Pn(z) = ∑s∈Sn(z)σn(s), or Pn(z) = 1 if An(z) = ∅. Let Zn(i) ≡ ℓn−1(i) = {z|ℓn(z) = i} and let sn(i) ≡ Sn(z) for each z ∈ Zn(i) be the event that n's pure strategy enables i ∈ Ln. Then pn(i) ≡ Probσ(sn(i)) = Pn(z) for each z ∈ Zn(i).
The feasible set of enabling profiles is 𝒞 ≡ ∏nCn, where for each nondummy player n, his feasible set of enabling strategies is
Cn ≡ {pn ∈ [0, 1]Ln | ∃ σn ∈ Σnɛ such that pn(i) = Probσn(sn(i)) for every i ∈ Ln}.
Observe that Cn is compact and convex. Because Σnɛ is disjoint from the boundary of Σn, pn ∈ Cn only if each pn(i) > 0.
For a mixed strategy σn ∈ Σnɛ, the induced enabling strategy pn is equivalent to a behavior strategy bn obtained as follows. Each bn(i|u) is proportional to βn(u, i) ≡ ∑σn(s), where the sum is over those pure strategies s ∈ Sn such that if (u′, i′) ⪯ (u, i), then s(u′) = i′. Each pure strategy in this sum selects an action at each of n's next information sets after (u, i), if any. Therefore, if (u′, i′) is the immediate predecessor of u among n's information sets, then βn(u′, i′) = ∑i∈An(u)βn(u, i). This recursion enables calculation of βn by working backward from n's final information sets where βn(u, i) = pn(i). Conversely, from a behavior strategy one can derive the enabling strategy via pn(i) = ∏(u′,i′)⪯(u,i)bn(i′|u′) for each i ∈ Ln, because the normalizing factors cancel along a path. Similarly, an enabling profile p yields an outcome P via P(z) = P*(z) × ∏nPn(z), and an outcome implies the behavior strategy via bn(i|u) ∝ ∑z≻(u,i)P(z).
An enabling strategy is the minimal representation of a behavior strategy that preserves the linearity and convexity of the space of mixed strategies. This representation avoids complications from auxiliary constraints imposed by non-minimal representations. S. Elmes (Columbia D.P. 490, 1990, personal communication) notes these complications in her repair of defects in appendix C of ref. 1. Enabling strategies are closely related and essentially equivalent to strategies in sequence form [von Stengel (6)].
Given an outcome P, the expected payoff of player n can be written
∑zGn(z)P(z) = ∑z|An(z)=∅Gn(z)Pn(z) + ∑i∈Ln∑z∈Zn(i)pn(i)Gn(z)Pn(z)

= vn(∅) + ∑i∈Lnpn(i)vn(i),
where Pn(z) ≡ P*(z) × ∏n′≠nPn′(z), vn(∅) ≡ ∑z|An(z)=∅Gn(z)Pn(z), and vn(i) ≡ ∑z∈Zn(i)Gn(z)Pn(z). As in Gül, Pearce, and Stacchetti (7), let rn : ℜLn → Cn be the retraction that maps x to the point in Cn closest to x in Euclidean distance; namely, pn = rn(x) iff ∑i∈Ln[p′n(i) − pn(i)][x(i) − pn(i)] ≤ 0 for all p′n ∈ Cn.
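The retraction rn can be computed exactly when Cn is a simple polytope. As an illustrative special case (not taken from this article), the following implements the standard sort-based Euclidean projection onto the unit simplex and checks the variational inequality that characterizes the retraction:

```python
# Classic sort-based Euclidean projection onto the standard simplex
# {p >= 0, sum p = 1}; C_n in general is a different polytope, so this
# is only an illustrative special case of the retraction r_n.
def proj_simplex(x):
    u = sorted(x, reverse=True)
    css, rho, lam = 0.0, 0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (1.0 - css) / j
        if uj + t > 0:
            rho, lam = j, t
    return [max(xi + lam, 0.0) for xi in x]

x = [2.0, 0.0]
p = proj_simplex(x)                      # projects to the vertex [1, 0]
# Variational inequality: (p' - p).(x - p) <= 0 for all p' in the simplex;
# by linearity it suffices to check the vertices.
for pp in ([1.0, 0.0], [0.0, 1.0]):
    assert sum((a - b) * (c - b) for a, b, c in zip(pp, p, x)) <= 1e-9
```

Because Cn is a polytope, the exact retraction in applications is likewise a piecewise-linear map, computable face by face.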
Lemma 5.1.
An enabling strategy pn ∈ Cn for a nondummy player n is an optimal reply to σ ∈ Σɛ (or an equivalent profile of behavior or enabling strategies) iff pn = rn(pn + vn).
Proof:
A mixed strategy σn ∈ Σnɛ is optimal for player n iff for all σ′n ∈ Σnɛ

∑i∈Ln[p′n(i) − pn(i)]vn(i) ≤ 0,
where pn(i) ≡ ∑s∈sn(i)σn(s) and p′n(i) ≡ ∑s∈sn(i)σ′n(s) are the corresponding enabling strategies in Cn. Because the possible values of p′n include all of Cn, this is precisely the variational inequality that characterizes the equality pn = rn(pn + vn). □
Thus in terms of enabling strategies, an equilibrium is a fixed point p = r(p + v) in the sense that pn = rn(pn + vn) for each player n, where v is derived from p as above. Hereafter, we consider only equilibria in enabling strategies. As above the set of enabling profiles is 𝒞. Represent a game as a point G ∈ 𝒢 ≡ ℜN×Z, the space of players' payoffs at final nodes. Let ℰ ⊂ 𝒢 × 𝒞 be the graph of the equilibrium correspondence over the space of games for the game tree Γ. Let E[⋅|⋅] be the conditional expectation operator for a fixed strictly positive probability distribution in Δ(Z). Define the map ℋ : ℰ → 𝒢, ℋ(G, p) = H, by
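The fixed-point characterization p = r(p + v) can be checked numerically in the normal-form special case noted in Section 5, taking the retraction onto the full simplex for simplicity (our own sketch, not the article's algorithm): in matching pennies, the mixed equilibrium is a fixed point of the retraction map while a non-equilibrium profile is not.

```python
# Sort-based projection onto the standard simplex (illustrative retraction).
def proj_simplex(x):
    u = sorted(x, reverse=True)
    css, rho, lam = 0.0, 0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (1.0 - css) / j
        if uj + t > 0:
            rho, lam = j, t
    return [max(xi + lam, 0.0) for xi in x]

A = [[1.0, -1.0], [-1.0, 1.0]]    # matching pennies; column player gets -A

def payoff_vector(M, q):           # v(i) = sum_j M[i][j] q[j]
    return [sum(Mi[j] * q[j] for j in range(2)) for Mi in M]

p1 = p2 = [0.5, 0.5]               # the unique equilibrium profile
v1 = payoff_vector(A, p2)
assert proj_simplex([p1[i] + v1[i] for i in range(2)]) == p1  # fixed point

q1 = [0.9, 0.1]                    # not an equilibrium strategy
v1q = payoff_vector(A, [0.9, 0.1])
assert proj_simplex([q1[i] + v1q[i] for i in range(2)]) != q1
```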
Hn(z) = Gn(z) + pn(ℓn(z)) + vn(ℓn(z)) − gn(ℓn(z)),
or Hn(z) = Gn(z) if An(z) = ∅, for each player n and final node z, where gn(i) = E[Gn(z)|Zn(i)] for each i ∈ Ln.
Theorem 5.2.
ℋ is a homeomorphism.
Proof:
Define 𝒦 : 𝒢 → ℰ as follows. Given H ∈ 𝒢, first let hn(i) = E[Hn(z)|Zn(i)] for each n and i ∈ Ln. Also, let pn = rn(hn) and vn = hn − pn for each nondummy player n. Next, let gn(i) = hn(i) + [vn(i) − ∑z∈Zn(i)Hn(z)Pn(z)]/Pn(Zn(i)); and finally, let Gn(z) = Hn(z) + gn(ℓn(z)) − hn(ℓn(z)), or Gn(z) = Hn(z) if An(z) = ∅. The G thus constructed satisfies ∑z∈Zn(i)Gn(z)Pn(z) = vn(i) for i ∈ Ln. Therefore, vn(i) is indeed n's marginal expected payoff from increasing pn(i), which by Lemma 5.1 is sufficient for pn to be an optimal reply by a nondummy player n in the game G. Thus, 𝒦 is a well defined continuous map. It is immediate from our construction that ℋ ○ 𝒦 is the identity on 𝒢. We will now show that 𝒦 ○ ℋ is the identity on ℰ, which then implies that 𝒦 = ℋ−1, i.e., ℋ is a homeomorphism. Suppose H = ℋ(G′, p′) and (G, p) = 𝒦(H). For each n and i ∈ Ln, hn(i) ≡ E[Hn(z)|Zn(i)] = p′n(i) + v′n(i). Because pn ≡ rn(hn), we therefore have that pn(i) = p′n(i) and also that vn(i) = v′n(i) for all n and i ∈ Ln. By the definition of vn and ℋ,
vn(i) = v′n(i) = ∑z∈Zn(i)G′n(z)Pn(z)

= ∑z∈Zn(i)[Hn(z) + g′n(i) − p′n(i) − v′n(i)]Pn(z)

= ∑z∈Zn(i)Hn(z)Pn(z) + [g′n(i) − hn(i)]Pn(Zn(i)).
Hence, g′n(i) = hn(i) + [vn(i) − ∑z∈Zn(i)Hn(z)Pn(z)]/Pn(Zn(i)), which by the definition of 𝒦 is gn(i). Consequently, G = G′, and 𝒦 ○ ℋ is the identity on ℰ. □
A repetition of the proof in ref. 1 shows that ℋ extends to a homeomorphism between the one-point compactifications of ℰ and 𝒢 and that proj𝒢 ○ ℋ−1 is linearly homotopic to the identity map on the one-point compactification of 𝒢; thus, proj𝒢 is a map of degree one. An obvious corollary is that their theorem applies to Nash equilibria of normal-form games with perturbed sets of mixed strategies. Theorem 5.2 applies to stronger definitions of equilibrium in extensive-form games based on perturbed strategy sets. For example, if Σnɛ = {ɛσ̄ + [1 − ɛ]σ|σ ∈ Σn}, where ɛ > 0 and σ̄ is the barycenter of Σn, then ℰ is the graph of ɛ-perfect equilibria over the space of extensive-form games on the tree Γ. Similarly, if Σnɛ is the convex hull of the points generated by all permutations of the coordinates of the vector (1, ɛ, ɛ2, …), rescaled to lie in Σn, then ℰ is the graph of ɛ-proper equilibria; see ref. 1, proposition 5. In other applications where each rn is smooth because Σnɛ has a smooth boundary, ℋ is a diffeomorphism.
6. An Algorithm for Computing Equilibria of Perturbed Extensive-Form Games
In this section, we apply Theorem 5.2 to construct an algorithm for computing equilibria of an extensive-form game defined on a tree Γ with perturbed sets of mixed strategies. This algorithm is a variant of the algorithms in Govindan and Wilson (8, 9) for normal-form games; see those articles for technical background, detailed specifications, computer programs, and numerical results for N-player games. Proofs in those articles apply here almost verbatim: the only difference is that the retraction rn to player n's simplex Σn of mixed strategies is replaced here by the retraction to the polytope 𝒞n of n's enabling strategies.
First, we describe a general parametric method that exploits the key simplifying feature that the retraction r is independent of the payoffs. Represent the game G as the pair (G̃, g), where G̃n(z) = Gn(z) − gn(ℓn(z)), using gn ∈ ℜLn as defined in Section 5. Using the homeomorphism ℋ of Theorem 5.2, the algorithm finds an equilibrium of G by tracing the solutions of the equation ℋ(G̃, g + λγ; p(h)) = H ≡ (G̃, h), where λγ parameterizes a ray whose origin represents the game (G̃, g) whose equilibria are to be computed. As in Theorem 5.2, at each solution h, the equilibrium p(h) of the game (G̃, g + λγ) is the enabling profile p(h) = r(h) obtained by retracting h to 𝒞. The algorithm starts by choosing a ray γ and an initial scalar parameter λ° sufficiently large that the game (G̃, g + λ°γ) has a unique equilibrium p°. One then follows the trajectory in the graph above the line segment through the two points (G̃, g + λ°γ) and (G̃, g) as the parameter λ decreases to zero. The implementation must contend with the usual two complications of homotopy methods: (i) the ray γ must be generic to ensure that the equilibrium outcome p° is unique when λ° is sufficiently large and to exclude bifurcations along the trajectory; and (ii) the trajectory in the graph includes reversals of orientation, so the parameter λ cannot decrease monotonically. Each time the trajectory crosses λ = 0, it yields an additional equilibrium of the game (G̃, g), and for generic games these equilibria have alternating indices +1 and −1. Different choices of the ray γ can yield different equilibria.
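The orientation reversals in (ii) can already be seen in a one-dimensional toy problem (ours, not from this article): tracing the solutions of F(h) = λ for a cubic F, the parameter λ rises and falls along the curve, and each crossing of λ = 0 contributes a solution with alternating orientation.

```python
# Toy illustration of non-monotone lambda along the solution curve of
# F(h) = lambda, for F(h) = h^3 - 3h (three roots: sqrt(3), 0, -sqrt(3)).
def F(h):
    return h ** 3 - 3.0 * h

crossings = []                      # (approximate root, orientation)
h, step = 3.0, 1e-4
prev = F(h)
while h > -3.0:                     # traverse the curve with h monotone
    h -= step
    cur = F(h)
    if prev > 0.0 >= cur or prev < 0.0 <= cur:
        crossings.append((h, 1 if prev > cur else -1))
    prev = cur

# Lambda crosses zero three times, with alternating orientation,
# even though h moves monotonically along the curve.
roots = [r for r, _ in crossings]
assert [s for _, s in crossings] == [1, -1, 1]
assert all(abs(a - b) < 1e-3
           for a, b in zip(roots, (3 ** 0.5, 0.0, -(3 ** 0.5))))
```

The alternating signs mirror the alternating indices +1 and −1 of the equilibria picked up at successive crossings of λ = 0.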
The parametric method can be implemented via the Global Newton Method (GNM) of Smale (10). Using a monotonic time parameter t, the trajectory is (h(t), λ(t))t≥t°. The algorithm starts at λ(t°) = λ° and finds the kth equilibrium on the trajectory at a time tk > tk−1 for which λ(tk) = 0. Actual computations use discrete steps, but here time is assumed to be continuous. In its simplest form, GNM finds a root of a differentiable function F by tracing the trajectory of the differential equation ḣ = −θ(h)[DF(h)]−1⋅F(h) starting from an initial point h°. In this form, θ(h) is a continuous scalar velocity, and DF(h) is the Jacobian matrix of F at h. However, better numerical properties are obtained by starting from a unique solution h° to F(h°) = λ°γ and using the equations ḣ = −(Adj[DF(h)])⋅γ and λ̇ = −Det[DF(h)] corresponding to the velocity θ(h) = −λ̇/λ. In particular, replacing the inverse by the adjoint matrix of the Jacobian enables the trajectory to pass through singularities of codimension 1. Note that the trajectory reverses orientation each time the determinant changes sign.
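A single derivative evaluation of this adjugate form of GNM might look as follows (our sketch, using numpy; the adjugate is computed here as Det times the inverse, which is valid only off singularities, where a cofactor expansion would be needed instead):

```python
import numpy as np

def gnm_velocity(DF, gamma):
    """hdot = -Adj(DF) @ gamma and lamdot = -Det(DF), per the GNM equations."""
    det = np.linalg.det(DF)
    adj = det * np.linalg.inv(DF)      # Adj(M) = Det(M) * M^{-1} when invertible
    return -adj @ gamma, -det

# A hypothetical 2x2 Jacobian and ray direction for illustration.
DF = np.array([[2.0, 1.0], [0.0, 3.0]])
gamma = np.array([1.0, 1.0])
hdot, lamdot = gnm_velocity(DF, gamma)
# Adj(DF) = [[3, -1], [0, 2]] and Det(DF) = 6, so hdot = [-2, -2], lamdot = -6.
assert np.allclose(hdot, [-2.0, -2.0]) and np.isclose(lamdot, -6.0)
```

Since λ̇ = −Det[DF(h)], the trajectory's orientation reverses exactly where the determinant changes sign, as noted above.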
GNM is invoked by translating the equations in the proof of Theorem 5.2. Assuming there are no dummy players, let F : ∏nℜLn → ∏nℜLn be the displacement map of ℋ with G̃ fixed; that is,
Fn(h)(i) = hn(i) − gn(i) − pn(i) − ∑z∈Zn(i)G̃n(z)Pn(z)
for each i ∈ Ln, where from p = r(h) one constructs the outcome P. Then an equilibrium of the game (G̃, g + λγ) is obtained from a solution of the equation F(h) = λγ. In vector form, let F(h) = h − g − p − G̃⋅Q(p), where in the matrix Q(p) a nonzero element Qzi = Pn(z) if z ∈ Zn(i) and i ∈ Ln. The Jacobian DF at h is DF(h) = I − [I − G̃⋅DQ(p)]⋅Dr(h), where I is the identity matrix, DQ is the Jacobian of Q at p = r(h), and Dr is the Jacobian of r at h. The resulting trajectory of GNM is the path of the differential equation
ḣ = −(Adj[DF(h)])⋅γ,  λ̇ = −Det[DF(h)].
Everywhere on this trajectory, ℋ(G̃, g + λγ; p(h)) = (G̃, h) if one starts with F(h°) = λ°γ, where for a generic ray γ, λ(t°) = λ° is sufficiently large that p° = r(h°) is the unique equilibrium of the game (G̃, g + λ°γ). The Jacobian DF is continuous except that, when 𝒞 is polyhedral (as in important applications), Dr changes discontinuously where some pn moves from one face to another of Cn. We show in ref. 9 that the trajectory is continuous across such a boundary even though its direction changes discontinuously. For standard applications such as ɛ-perfect equilibria (used to approximate sequential equilibria), the block-diagonal matrix Dr is constant on each face of the polyhedron 𝒞.
Acknowledgments
This work was funded in part by grants from the Social Sciences and Humanities Research Council of Canada and the National Science Foundation of the United States.
Abbreviation
- GNM
Global Newton Method
References
- 1.Kohlberg E, Mertens J. Econometrica. 1986;54:1003–1039.
- 2.Kreps D, Wilson R. Econometrica. 1982;50:863–894.
- 3.Kuhn H. In: Contributions to the Theory of Games II, Kuhn H, Tucker A, editors. Princeton, NJ: Princeton Univ. Press; 1953. pp. 193–216.
- 4.Dold A. Lectures on Algebraic Topology. New York: Springer; 1972.
- 5.Govindan S, Wilson R. Econometrica. 2001;69:765–769.
- 6.von Stengel B. Games Econ Behav. 1996;14:220–246.
- 7.Gül F, Pearce D, Stacchetti E. Math Oper Res. 1993;18:548–552.
- 8.Govindan S, Wilson R. J Econ Dynam Control. 2002;26: in press.
- 9.Govindan S, Wilson R. J Econ Theory. 2002;104: in press.
- 10.Smale S. J Math Econ. 1976;3:107–120.