On Sustainable Equilibria

Following the ideas laid out in Myerson (1996), Hofbauer (2000) defined a Nash equilibrium of a finite game as sustainable if it can be made the unique Nash equilibrium of a game obtained by deleting/adding a subset of the strategies that are inferior replies to it. This paper proves two results about sustainable equilibria. The first concerns the Hofbauer-Myerson conjecture about the relationship between the sustainability of an equilibrium and its index: for a generic class of games, an equilibrium is sustainable iff its index is $+1$. Von Schemde and von Stengel (2008) proved this conjecture for bimatrix games; we show that the conjecture is true for all finite games. More precisely, we prove that an isolated equilibrium has index +1 if and only if it can be made unique in a larger game obtained by adding finitely many strategies that are inferior replies to that equilibrium. Our second result gives an axiomatic extension of sustainability to all games and shows that only the Nash components with positive index can be sustainable.


Introduction
Myerson [21] proposes a refinement of Nash equilibria of finite games, which he calls sustainable equilibria, based on the hypothesis that most games, even if they are one-shot affairs, should be analyzed not as if they are played in isolation, but rather as particular instances of many plays of such games. Myerson argues that when, say, two members of a society play a Battle-of-the-Sexes game, if the game has a history in this society, it becomes a "culturally familiar" game for these two players, and the past history of plays, by other members of the society, should inform play in this encounter. An equilibrium is then a cultural norm, an institution, in this society, and the game is typically played according to this norm. Any Nash equilibrium of the underlying game that can emerge as a norm in some society is sustainable. From this perspective, Myerson reasons, the two pure-strategy equilibria in the Battle-of-the-Sexes game are sustainable while the mixed equilibrium is not.
In his search for a formal definition of a sustainable equilibrium, Myerson considers, and then dismisses, on axiomatic grounds, existing refinements that yield the same prediction in the Battle-of-the-Sexes game as his heuristic argument does: for example, persistent equilibria [13] fail invariance [1]; and evolutionary stability fails existence. Myerson concludes his paper with a conjecture that the index of an equilibrium is a determinant of its sustainability. 1 Hofbauer [12] distills the ideas in Myerson's paper to provide a definition of sustainable equilibria in regular games. 2 Hofbauer posits that a minimum requirement of sustainability should be that if an equilibrium of a game is sustainable, it should remain sustainable in the game obtained by restricting players' strategies to the set of best replies to the equilibrium. 3 If one also accepts that an equilibrium that is unique is sustainable, then one is led to the following definition. Say that a game-equilibrium pair is equivalent to another such pair if the restrictions of the two games to the set of best replies to their respective equilibria are the same game (modulo a relabelling of the players and their strategies) and the two equilibria coincide (under the same identification). An equilibrium of a game is sustainable if it has an equivalent pair where the equilibrium is unique.
In this paper, we prove two results about sustainable equilibria. The first result is about sustainable equilibria of (generic) games where all equilibria are regular. Following Myerson, Hofbauer conjectured that a regular equilibrium is sustainable iff its index is +1. Von Schemde and von Stengel [33] proved this conjecture for bimatrix games. In this paper, we show that the conjecture holds for all N-person games.
While the definition of sustainable equilibria does not assume that the game is regular, the existence of such equilibria is not guaranteed in nongeneric games. Our second result resolves this problem via an axiomatic approach: we enumerate a set of axioms that, for regular games, selects sustainable equilibria and, for all other games, selects components with positive index.
Our proof of the Hofbauer-Myerson conjecture holds in all games with isolated equilibria. We prove that an isolated equilibrium σ of a finite game G has index +1 if and only if one can add finitely many new strategies together with their payoffs, all inferior replies to σ, and obtain a larger game Ĝ in which σ is the unique equilibrium. To illustrate the statement, consider the Battle-of-the-Sexes game G below. It has three Nash equilibria: two are strict, (t, l) and (b, r) (with index +1), and one is mixed (with index −1).

G =
        l         r
t    (3, 2)    (0, 0)
b    (0, 0)    (2, 3)

By adding one strategy to each player (x for player 1, and y for player 2, see the game Ĝ below), b and r become strictly dominated. Removing them yields a game where x and y are strictly dominated, making the strict equilibrium (t, l) of G the unique equilibrium of Ĝ. This construction easily extends to all finite games: any strict equilibrium (necessarily pure, regular and with index +1, see Ritzberger [27]) can be made the unique equilibrium in a larger game obtained by adding finitely many strategies that are inferior replies to the equilibrium. 4 Our paper not only extends this property to isolated mixed equilibria with index +1 but shows that they are the only equilibria having that property; that is, if a Nash equilibrium (isolated or not) can be made unique by adding finitely many inferior replies to it, that equilibrium must be isolated and must have index +1.

1 Interestingly, Myerson speculates that one could perhaps develop a theory of index of equilibria based on fixed-point theory, seemingly unaware, as Hofbauer [12] observes, of an extant theory in the literature (see Gül, Pearce and Stacchetti [7] and Ritzberger [27]).
2 Call an equilibrium regular if locally the equilibrium is a differentiable function of the payoffs (see Section 2.1).
3 For regular games, this is equivalent to restricting players' strategies to the support of the equilibrium; see Section 4 for a discussion of this point.
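The index computations above can be checked numerically. The sketch below is an illustration only (not the paper's construction): it computes the fixed-point index of the mixed equilibrium (3/5, 2/5) of G as the winding number of the displacement of Nash's map around it; the function names and the winding-number approach are our own choices.

```python
import numpy as np

# Battle-of-the-Sexes payoffs from the text: rows t, b; columns l, r.
A = np.array([[3.0, 0.0], [0.0, 2.0]])   # player 1
B = np.array([[2.0, 0.0], [0.0, 3.0]])   # player 2

def nash_map(p, q):
    """Nash's map in the coordinates p = prob(t), q = prob(l)."""
    u1 = A @ np.array([q, 1.0 - q])          # player 1's payoffs to t, b
    u2 = np.array([p, 1.0 - p]) @ B          # player 2's payoffs to l, r
    v1 = np.array([p, 1.0 - p]) @ u1         # current expected payoffs
    v2 = np.array([q, 1.0 - q]) @ u2
    g1 = np.maximum(0.0, u1 - v1)            # gains from pure deviations
    g2 = np.maximum(0.0, u2 - v2)
    return (p + g1[0]) / (1.0 + g1.sum()), (q + g2[0]) / (1.0 + g2.sum())

def index_by_winding(p0, q0, r=0.05, n=720):
    """Fixed-point index of (p0, q0): winding number of the
    displacement d = id - f along a small circle around (p0, q0)."""
    ts = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx, dy = [], []
    for t in ts:
        p, q = p0 + r * np.cos(t), q0 + r * np.sin(t)
        fp, fq = nash_map(p, q)
        dx.append(p - fp)
        dy.append(q - fq)
    ang = np.arctan2(dy, dx)
    steps = np.diff(np.append(ang, ang[0]))
    steps = (steps + np.pi) % (2.0 * np.pi) - np.pi    # wrap to (-pi, pi]
    return int(round(steps.sum() / (2.0 * np.pi)))

# The mixed equilibrium (p, q) = (3/5, 2/5) is a fixed point of index -1.
assert index_by_winding(0.6, 0.4) == -1
```

The two strict equilibria then have index +1 by the fact, recalled below, that the indices of all equilibria sum to +1.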
What are the key properties of the index of equilibria that drive this equivalence? To answer this question, let us sketch our proof. In one direction, suppose that an equilibrium σ of a game G is sustainable, and that (G, σ) is equivalent to a pair (Ḡ, σ̄) where σ̄ is the unique equilibrium of Ḡ. Let G* be the game obtained from G by deleting strategies that are inferior replies to σ. It follows from a property of the index that the index of σ in G can be computed as the index of σ in G*. The game G* is also the game obtained from Ḡ by deleting inferior replies there. Therefore, the index of σ in G* can also be computed as the index of σ̄ in Ḡ. As σ̄ is the unique equilibrium of Ḡ, its index is +1, which then gives us the result.
Going the other way, if we have an equilibrium σ of a game G with index +1, the sum of the indices of the other equilibria is zero, as the sum of indices over all equilibria is +1. Now, we can take a map whose fixed points are the Nash equilibria of G and alter it outside a neighbourhood of σ so that the new map has no fixed points other than σ. 5 By a careful addition of strategies and specification of payoffs for these strategies, we obtain a game Ḡ where any equilibrium must translate to a fixed point of the modified map of G, making σ the unique equilibrium in Ḡ.
A word about our methodology is in order. In a bimatrix game, a player's payoff function is linear in his opponent's strategy and the index can be computed easily using the Shapley formula [30]. Von Schemde and von Stengel [33] were able to exploit those features and use tools from the theory of polytopes (von Schemde [34]) to prove the conjecture. In the general case, their technique is inapplicable. What we do, instead, is start with a construction involving a fixed-point map and then convert it into a game-theoretically meaningful one. In this respect, our approach is similar in spirit to, but different in details from, that in Govindan and Wilson [6].
Equilibria with index +1 are also distinguished from their counterparts with index −1 in terms of their dynamic stability. It is well-known, both in general equilibrium and in game theory, that equilibria with index −1 are asymptotically unstable under any reasonable learning or adjustment process; cf. McLennan [17]. 6 Even computational dynamics like those generated by homotopy algorithms (Lemke-Howson, linear-tracing procedure, etc.) converge to an equilibrium of index +1 (Herings-Peeters [10]). While these results might be seen as eliminating equilibria of index −1, they do not come down conclusively in favor of all equilibria of index +1. The main reason is that it is still an open question whether every regular game has at least one equilibrium (necessarily of index +1) that is asymptotically stable with respect to some natural dynamical system. Hofbauer observed that some equilibria of index +1 are indeed unstable for all natural dynamics, as the following potential game G1 shows. 7

G1 =
         l           m           r
t    (10, 10)    (0, 0)      (0, 0)
m    (0, 0)      (10, 10)    (0, 0)
b    (0, 0)      (0, 0)      (10, 10)

The profile where players mix uniformly is an isolated equilibrium with index +1 and so it can be made unique in a larger game, for example by adding the three strategies x, y, and z as in Ĝ1 below. 8 The fact is that all natural dynamics increase the potential of G1; since the completely-mixed equilibrium minimizes that potential, it is unstable for all natural dynamics; cf. Hofbauer [12] for more about his second conjecture, which states that any regular game has at least one equilibrium of index +1 that is asymptotically stable w.r.t. some natural dynamics. 9

Ĝ1 =
         l           m           r           x           y           z
t    (10, 10)    (0, 0)      (0, 0)      (0, 11)     (10, 5)     (0, −10)
m    (0, 0)      (10, 10)    (0, 0)      (0, −10)    (0, 11)     (10, 5)
b    (0, 0)      (0, 0)      (10, 10)    (10, 5)     (0, −10)    (0, 11)

In Section 5 we tackle the question of extending sustainability to nongeneric games.
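Returning to the game Ĝ1 displayed above, one can verify directly that the added columns are inferior replies to the uniformly mixed profile. The quick check below covers only the columns displayed (player 2's side of the bimatrix); the function and variable names are ours.

```python
import numpy as np

# Player 2's payoffs in Ĝ1, read off the bimatrix above
# (rows t, m, b; columns l, m, r, x, y, z).
B = np.array([
    [10.0,  0.0,  0.0,  11.0,   5.0, -10.0],
    [ 0.0, 10.0,  0.0, -10.0,  11.0,   5.0],
    [ 0.0,  0.0, 10.0,   5.0, -10.0,  11.0],
])
sigma1 = np.ones(3) / 3.0        # player 1 mixes uniformly over t, m, b

payoffs2 = sigma1 @ B            # player 2's payoff to each pure reply
assert np.allclose(payoffs2[:3], 10.0 / 3.0)   # l, m, r remain best replies
assert np.allclose(payoffs2[3:], 2.0)          # x, y, z earn only 2 < 10/3
```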
When a game has no isolated equilibrium, no equilibrium can be made unique by adding inferior replies to it. Moreover, there are games where no subset of a connected equilibrium component can be made unique by adding inferior replies to it, and there are (nonregular) games where all equilibria are isolated but none of them has index +1, so that none of the finitely many equilibria can be made unique by adding inferior replies to it. Our solution to guarantee existence in all games is to take an axiomatic view.
First, the Hofbauer-Myerson conjecture provides an axiomatic characterization of sustainability for regular games. Second, the solution concept that assigns to each game its components with positive index extends sustainability to the universal domain of finite games and it satisfies three other axioms, beyond those invoked for generic games: connectedness, invariance, and robustness. Finally, if we combine the last two axioms to obtain a strengthening of robustness, then we get the result that any extension of sustainability must select from among the components with a positive index.
The rest of the paper is organized as follows. Section 2 sets up the problem and states our main theorem, which is about the Hofbauer-Myerson conjecture. It also gives an informal summary of the theory of index of equilibria. Section 3 is devoted to proving the theorem. Section 4 provides a discussion of the role of unused best replies in the definition of sustainable equilibria. Section 5 extends sustainability to connected sets of equilibria and axiomatically characterises positive index Nash components in the same spirit as the uniform hyperstability characterisation of non-zero index components by Govindan and Wilson [6]. We have two appendices. The first reviews a construct from the theory of triangulations; and the second provides the proof of a key lemma that is invoked in Section 5.

Definitions and statement of the main theorem
A finite game in normal form is a triple (N, (S_n)_{n∈N}, G) where: N = {1, . . . , N} is the set of players, with N ≥ 2; for each n ∈ N, S_n is a finite set of pure strategies; and, letting S ≡ ∏_{n∈N} S_n be the set of pure-strategy profiles, G : S → R^N is the payoff function. By a slight abuse of notation, we will refer to a game by its payoff function G.
Given a game G, for each n, let Σ_n be the set of n's mixed strategies and let Σ ≡ ∏_{n∈N} Σ_n. Also, for each n, S_{−n} ≡ ∏_{m≠n} S_m and Σ_{−n} ≡ ∏_{m≠n} Σ_m. The payoff function G extends to Σ in the usual way and we will denote this extension by G as well.
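As a concrete two-player illustration of the mixed extension: G_n(σ) is the expectation of G_n under the product distribution σ_1 ⊗ σ_2, i.e., a bilinear form. The payoff matrices below are illustrative choices of ours.

```python
import numpy as np

# The multilinear extension in a two-player game: under a mixed profile
# (sigma1, sigma2), player n's payoff is the expectation of G_n over the
# product distribution.  Payoff entries are illustrative only.
A = np.array([[3.0, 0.0], [0.0, 2.0]])   # G_1 on pure profiles
B = np.array([[2.0, 0.0], [0.0, 3.0]])   # G_2 on pure profiles
sigma1 = np.array([0.5, 0.5])
sigma2 = np.array([0.25, 0.75])

G1 = sigma1 @ A @ sigma2     # sum_{s1,s2} sigma1[s1] sigma2[s2] A[s1,s2]
G2 = sigma1 @ B @ sigma2
assert abs(G1 - 1.125) < 1e-12 and abs(G2 - 1.375) < 1e-12
```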
Define an equivalence relation ∼ on game-equilibrium pairs as follows. For i = 1, 2, let (G_i, σ_i) be a game-equilibrium pair, and say that (G_1, σ_1) ∼ (G_2, σ_2) if, up to a relabelling of players and strategies, the restriction of G_1 to the set of best replies to σ_1 is the same game as the restriction of G_2 to the set of best replies to σ_2, and the equilibria coincide under this identification. 10 It is easily checked that ∼ is an equivalence relation.
Sustainability is a property of equivalence classes and we could say that the canonical representation of a sustainable equilibrium is the game-equilibrium pair where there are no inferior replies to the equilibrium.
2.1. Index and degree of equilibria. Both the index and the degree of equilibria are measures of the robustness of equilibria to perturbations. They differ in the space of perturbations they consider (perturbations of fixed-point maps vs. payoff perturbations) but ultimately agree with one another. 11 We start with the degree of equilibria. For simplicity, we give a definition of degree only for regular equilibria. This approach allows us to bypass the use of algebraic topology but, more importantly, it is germane to our problem, as we are concerned only with regular equilibria in this paper. For the general definition, see, e.g., Govindan and Wilson [6].
Fix both the player set N and the strategy space S. The space of games with strategy space S is then the Euclidean space Γ ≡ R^{N×S} of all payoff functions G. Let E be the graph of the Nash equilibrium correspondence over Γ, i.e., E = { (G, σ) ∈ Γ × Σ | σ is a Nash equilibrium of G }. Let proj : E → Γ be the natural projection: proj(G, σ) = G.
By the Kohlberg-Mertens Structure Theorem [14], there exists a homeomorphism h : E → Γ such that h^{−1} is differentiable almost everywhere. Say that an equilibrium σ of a game G is regular if proj ∘ h^{−1} is differentiable and has a nonsingular Jacobian at h(G, σ); and say that a game G is regular if each equilibrium σ of G is regular. If an equilibrium σ is regular, then it is a quasi-strict equilibrium (that is, all unused strategies are inferior replies 12 ) and, locally, the equilibrium is a smooth, even analytic, function of the game; moreover, it is also a regular equilibrium in the space of games obtained by deleting the unused strategies or, indeed, by adding strategies that are inferior replies to the equilibrium. The set of games that are not regular is a closed subset of lower dimension (actually codimension one) in Γ, and thus regular games are generic.
If an equilibrium σ of a game G is regular, then we can assign a degree to it that is either +1 or −1 depending on whether the Jacobian of proj ∘ h^{−1} at h(G, σ) has a positive or a negative determinant. An inspection of the formula for the Jacobian shows that the degree of a regular equilibrium σ is the same as its degree computed in the space of games obtained by deleting the strategies that are inferior replies to σ. Therefore, if σ is a regular equilibrium of G and if (G, σ) ∼ (Ḡ, σ̄), then σ̄ is a regular equilibrium of Ḡ and it has the same degree as σ, making degree an invariant of an equivalence class.
As the Kohlberg-Mertens homeomorphism extends to the one-point compactification of E and Γ, and proj ∘ h^{−1} is homotopic to the identity on this extension, the sum of the degrees of the equilibria of a regular game is +1. 13

In fixed-point theory, the index of fixed points contains information about their robustness when the map is perturbed. (See McLennan [16] for an account of index theory written primarily for economists.) Since Nash equilibria are obtainable as fixed points, index theory applies directly to them. For simplicity, suppose f : U → Σ is a differentiable map defined on a neighborhood U of Σ in R^{N|S|} and such that the fixed points of f are the Nash equilibria of a game G. Let d be the displacement of f, i.e., d(σ) = σ − f(σ). Then the Nash equilibria of G are the zeros of d. Suppose now that the Jacobian of d at a Nash equilibrium σ of G is nonsingular. Then we can define the index of σ under f as ±1 depending on whether the determinant of the Jacobian of d is positive or negative.

One potential problem with this definition of index is the dependence of the computation on the function f, as intuitively we would think of the index as depending only on the game G. But, under some regularity assumptions on f, we can show that the index is independent of f. Specifically, consider the class of continuous maps F : Γ × Σ → Σ with the property that the fixed points of the restriction of F to { G } × Σ are the Nash equilibria of G. Demichelis and Germano [3] show that the index of equilibria is independent of the particular map in this class that is used to compute it; Govindan and Wilson [5] show that the degree is equivalent to the index computed using one of the maps in this class, the fixed-point map defined by Gül, Pearce and Stacchetti [7]. Thus, the index and degree of equilibria coincide; see Demichelis and Germano [3] for an alternate, more direct, proof of this equivalence. Given these results, for a regular equilibrium, we can talk unambiguously of its index and use the term degree interchangeably with it.

12 See Ritzberger [27] and van Damme [32].
13 If G is nongeneric, we can define the degree of a component of equilibria as the sum of the degrees of equilibria in a neighborhood of the component for a regular game that is in a neighborhood of G; this computation is independent of the neighborhoods chosen, as long as they are sufficiently small. The sum of the degrees of the components of equilibria of a game is +1.

2.2. Games in strategic form. It is convenient for us to work with a somewhat larger class of games than normal-form games, called strategic-form games, and in this subsection we define these games; cf. Pahl [26] for an extensive treatment of these games.
A game in strategic form is a triple (N, (P_n)_{n∈N}, V) where: N is the player set; for each n, P_n is a polytope 14 of strategies; and V : ∏_{n∈N} P_n → R^N is a multilinear payoff function. Clearly any normal-form game is a strategic-form game. Going the other way, given a strategic-form game (N, (P_n)_{n∈N}, V), we can define a normal-form game (N, (S_n)_{n∈N}, G) where, for each n, S_n is the set of vertices of P_n and, for each s ∈ S = ∏_n S_n, G(s) = V(s); the polytope P_n can be viewed as the quotient space of Σ_n obtained by identifying all mixed strategies that are duplicates of one another (i.e., induce the same payoffs for all players for any profile of strategies of n's opponents).
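The identification of duplicate strategies can be made concrete with a small hypothetical example of ours: a third pure strategy whose payoffs replicate, for both players, the 50/50 mixture of the first two is the same point of the quotient polytope P_1.

```python
import numpy as np

# A hypothetical duplicate strategy: player 1's third row replicates, for
# both players, the 50/50 mixture of the first two rows, so the quotient
# polytope P_1 identifies the two and is a segment rather than a triangle.
A = np.array([[4.0, 0.0],
              [0.0, 4.0],
              [2.0, 2.0]])   # player 1: row 3 = average of rows 1 and 2
B = np.array([[1.0, 3.0],
              [3.0, 1.0],
              [2.0, 2.0]])   # player 2: row 3 = average of rows 1 and 2

mix = np.array([0.5, 0.5, 0.0])   # the 50/50 mixture of rows 1 and 2
dup = np.array([0.0, 0.0, 1.0])   # the duplicate pure strategy
# Identical payoff vectors against every column: the same point of P_1.
assert np.allclose(mix @ A, dup @ A) and np.allclose(mix @ B, dup @ B)
```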
2.3. Statement of the main theorem. The following theorem settles the Hofbauer-Myerson conjecture in the affirmative.
Theorem 2.2. A regular equilibrium is sustainable iff its index is +1.
As the sum of the indices of the equilibria of a regular game is +1, there is at least one equilibrium with index +1. Thus, we have the following corollary.
Corollary 2.3. Every regular game has at least one sustainable equilibrium.

Proof of Theorem 2.2
We will present the proof in a sequence of steps, each of which will be carried out in a separate subsection.
3.1. The index of a sustainable equilibrium. We begin with a proof of the necessity of the condition. Let σ* be a regular equilibrium of a game G that is sustainable. Let (G, σ*) ∼ (Ḡ, σ̄), where σ̄ is the unique equilibrium of Ḡ. As we saw in the previous section, the index is constant on an equivalence class. Since σ̄ is the unique equilibrium of Ḡ, its index is +1, and the result follows.

3.2. Preliminaries. The rest of the section is devoted to proving the sufficiency of the condition. In this subsection, we introduce some key ideas that we exploit in the proof.
First, we gather a list of notational conventions to be used. Throughout Section 3 (but not in the Appendix) we use the ∞-norm on Euclidean spaces. For any subset A of a topological space X, we let ∂_X A be its topological boundary and int_X(A) its interior. If C is a convex set in a Euclidean space, then ∂C and int(C) refer to the boundary and the interior of C in the affine space generated by C.
Definition 3.1. Given a payoff function G and a vector h ∈ ∏_{n∈N} R^{S_n}, let G ⊕ h be the game where the payoff to player n from a profile s ∈ S is G_n(s) + h_{n,s_n}.
Let f denote the Nash map of G, whose fixed points are the Nash equilibria of G. Thus f_n(σ) is an average of σ_n and a mixed strategy r_n(σ); r_n(σ) has the following properties: (1) it assigns a positive probability to a pure strategy iff it does strictly better than σ_n; in particular, it assigns zero probability to some strategy in the support of σ_n whenever f_n(σ) ≠ σ_n; (2) it assigns the highest probabilities to the best replies to σ. Again for notational convenience, we will talk of a game Ḡ embedding G. When Ḡ embeds G, we view the set Σ of mixed strategies of G as a subset of the set Σ̄ of mixed strategies in Ḡ. Obviously, if Ḡ embeds G and σ is an equilibrium of Ḡ where, for each n, the strategies that are not in S_n are inferior replies, then (G, σ) ∼ (Ḡ, σ). Our proof technique is to show that for each regular equilibrium σ* of G with index +1, we can embed G in a game Ḡ where σ* is the unique equilibrium and the newly added strategies are inferior replies to σ*.
We say that a strategic-form game V̄ embeds G if the associated normal-form game Ḡ, as defined in Subsection 2.2, embeds G or, equivalently, for each n, each strategy in S_n is a vertex of the polytope P̄_n of n's strategies in V̄, and G(s) = V̄(s) for all s ∈ S. In our proof we construct embeddings of G in strategic-form games V̄ that have a simple structure: for each n, the strategies in S_n span a face of P̄_n.
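Assuming f is Nash's map (its standard formula, which this excerpt does not reproduce), the decomposition of f_n(σ) into an average of σ_n and r_n(σ), and property (1) above, can be checked numerically; the payoffs and the evaluation point below are our own illustrative choices.

```python
import numpy as np

# Nash's map for player 1 in a bimatrix game and its decomposition
# f_1 = (1 - lam) * sigma1 + lam * r_1, where r_1 puts positive weight
# exactly on the pure strategies doing strictly better than sigma1.
A = np.array([[3.0, 0.0], [0.0, 2.0]])   # player 1's payoffs (Battle of the Sexes)
sigma1 = np.array([0.5, 0.5])
sigma2 = np.array([0.9, 0.1])            # opponent leans toward l

u = A @ sigma2                   # payoff of each pure strategy of player 1
v = sigma1 @ u                   # current expected payoff
g = np.maximum(0.0, u - v)       # gains from profitable pure deviations
f1 = (sigma1 + g) / (1.0 + g.sum())

lam = g.sum() / (1.0 + g.sum())
r1 = g / g.sum()                 # the mixed strategy r_1(sigma) of the text
assert np.allclose(f1, (1.0 - lam) * sigma1 + lam * r1)
assert all((r1 > 0) == (u > v))  # positive weight iff strictly better
```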

3.3. A simple consequence of regularity. From now on, fix a game G and let σ* be a regular equilibrium with index +1. For each n, let S*_n be the support of σ*_n. Our objective in this subsection is to record the following simple, and yet consequential, property of σ*. There exists ε̄ > 0 such that: if σ ≠ σ* is an equilibrium of G, then there exist two distinct players n_1, n_2 such that, for i = 1, 2, there exists s_{n_i} ∈ S*_{n_i} with σ_{n_i,s_{n_i}} < σ*_{n_i,s_{n_i}} − ε̄. Indeed, if this property is not true, there exist a sequence k → ∞, a corresponding sequence σ^k of equilibria converging to some σ, and a player n such that: (1) σ^k ≠ σ* for all k; and (2) for all m ≠ n and s_m ∈ S*_m, σ^k_{m,s_m} ≥ σ*_{m,s_m} − k^{−1}. Therefore, σ_m = σ*_m for all m ≠ n. But σ_n ≠ σ*_n, as σ* is regular and, hence, isolated. This implies that λσ + (1 − λ)σ* is an equilibrium for all λ ∈ [0, 1], again contradicting the fact that σ* is isolated. Thus, there exists ε̄ with the stated property. 15

For 0 < ε ≤ ε̄ and each n, let B^ε_n be the set of σ_n ∈ Σ_n such that σ_{n,s_n} ≥ σ*_{n,s_n} − ε for all s_n ∈ S*_n; and let B^ε be the set of σ such that σ_n is not in B^ε_n for at most one n. (N.B. As B^ε is the union of finitely many closed sets, it is closed.)

3.4. Killing all fixed points of f other than σ*. From the viewpoint of fixed-point theory, our problem amounts to embedding Σ as a proper face of a polytope Σ̄, extending f (the Nash map) to a function f̄ on it, and then modifying f̄ so that its only fixed point is σ*. From a game-theoretic viewpoint, there is an additional problem introduced by the caveat that f̄ should, in a sense, be realizable as a fixed-point map of a game Ḡ that embeds G, i.e., a map whose fixed points are the equilibria of the game Ḡ. In this subsection, we solve the first problem partially, by constructing a map f^0 that coincides with f on B^ε for some 0 < ε ≤ ε̄ and that has no fixed points outside it. We will later use this map to construct the embedding Ḡ.

15 Note that this proof only uses the fact that σ* is isolated.
If necessary, by adding a strictly dominated strategy for each player, we can assume that σ*_n belongs to ∂Σ_n for each n. (Recall that sustainability and index are properties of equivalence classes of regular equilibria, so that the addition of such strategies is harmless.) Let V ≡ B^ε̄ and X ≡ Σ \ int_Σ(V). The boundary of X is relative to the affine space generated by Σ, i.e., ∂X ≡ (∂Σ \ V) ∪ ∂_Σ V. We claim now that (X, ∂X) is homeomorphic to a ball with its boundary. Indeed, the desired homeomorphism can be constructed as follows. Pick a completely-mixed strategy profile σ^0 such that σ^0_{n,s_n} < σ*_{n,s_n} − ε̄ for all n and s_n ∈ S*_n. (Such a choice is possible since σ*_n belongs to the boundary of Σ_n and, if necessary, we can decrease the ε̄ defined above.) The set X is star-convex at σ^0: for each σ ∈ X, λσ + (1 − λ)σ^0 ∈ X for all λ ∈ [0, 1]. Therefore, taking a closed ball around σ^0 in Σ \ ∂Σ that is contained in X \ ∂X, radial projection from σ^0 yields a homeomorphism between this ball and X.
Let d be the displacement of f: d(σ) ≡ σ − f(σ). For each n, let A_n be the hyperplane in R^{S_n} through the origin and with normal (1, . . . , 1), and let A = ∏_n A_n. The map d maps Σ into A. As the index of σ* is +1, the sum of the indices of the other components of fixed points of f, which are contained in X \ ∂X, is zero. Therefore, d : (X, ∂X) → (A, A \ {0}) has degree zero. By the Hopf Extension Theorem (cf. Corollary 8.1.18, [31]), there exists a map d^0 from X to A \ {0} whose restriction to ∂X coincides with d. Extend d^0 to the whole of Σ by letting it be d outside X, i.e., on V \ ∂_Σ V.

3.5. Example. We now introduce a running example, in which we can carry out our construction numerically, and which we hope will aid in the understanding of the proof. The example differs from the text in one somewhat irrelevant respect: we focus on symmetric strategies, as this reduces the dimension of the problem and allows us to perform a two-dimensional graphical analysis as well.
The game we study is a two-player coordination game given below.
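The game's payoff matrix is not reproduced in this excerpt; for concreteness, the sketch below assumes the standard symmetric 2×2 coordination game (payoff 1 when both players choose the same action, 0 otherwise), which matches the equilibria described next (x = 0, 1, and 1/2), and computes the symmetric restriction of the Nash map f.

```python
# Symmetric restriction of the Nash map for an assumed coordination game:
# payoff 1 when both play the same action, 0 otherwise.  (The text's
# payoff matrix is not reproduced here; any symmetric 2x2 coordination
# game with mixed equilibrium 1/2 behaves the same way.)
# x is the probability of playing L.
def f(x):
    uL, uR = x, 1.0 - x                  # payoffs of L and R vs. opponent x
    v = x * uL + (1.0 - x) * uR          # current expected payoff
    gL = max(0.0, uL - v)                # gains from deviating to L or R
    gR = max(0.0, uR - v)
    return (x + gL) / (1.0 + gL + gR)

# Fixed points are exactly the three symmetric equilibria ...
for x_star in (0.0, 0.5, 1.0):
    assert abs(f(x_star) - x_star) < 1e-12
# ... and elsewhere f moves play toward the nearest pure equilibrium.
assert f(0.7) > 0.7 and f(0.3) < 0.3
```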
Given our restriction to symmetric strategies, we will dispense with the subscript for players in the notation (here and throughout the paper when we work with this example). Thus, a symmetric mixed-strategy profile is represented by one number, x ∈ [0, 1], where x is the probability of playing L. There are two pure-strategy equilibria, x = 1 and x = 0, both of which have index +1; and there is a mixed equilibrium, x = 1/2, which has index −1. The restriction of the Nash map f to symmetric strategies allows us to represent it as a function from [0, 1] to itself. By computation, we obtain the map f whose graph, with its three fixed points corresponding to the three equilibria of the game, is illustrated below in Figure 1. Let V ≡ [2/3, 1]. We can directly construct a map f^0 as in the previous subsection, whose only fixed point is x = 1 (see the graph of f^0 in green in Figure 1). 16

Figure 1. Graphs of f (black) and f^0 (green)

3.6. A parametrized family of perturbed games. Ideally, we would like a game G^0 such that f^0 is the Nash map of G^0. This seems to be too strong a property to hold. However, f^0 does contain enough information for us to construct a function g : Σ → ∏_{n∈N} R^{S_n} such that: (1) g(·) is zero on B^ε for some sufficiently small ε; (2) σ is an equilibrium of G ⊕ g(σ) iff σ = σ*.

Choose 0 < ε < ε̄ and let U ≡ B^ε; note that U is a closed subset contained in the interior of V. For σ ∈ Z^1_n, let r^0_n(σ) be the unique point in ∂Σ_n on the ray from σ_n through f^0_n(σ), i.e., the unique point of the form (1 − α)σ_n + αf^0_n(σ) with α ≥ 1 that belongs to ∂Σ_n. If σ ∈ Z^1_n, then there exists some t_n that is in the support of σ_n but not of r^0_n(σ). For each n, s_n, let Z^+_{n,s_n} be the closure of the set of σ ∈ Z^1_n for which r^0_{n,s_n}(σ) ≥ r^0_{n,t_n}(σ) for all t_n ∈ S_n. If f^0(σ) = f(σ) (the Nash map) and σ ∈ Z^1_n, then r^0_n(σ) equals r_n(σ) as defined in Subsection 3.2; therefore, σ ∈ Z^+_{n,s_n} iff s_n is a best reply to σ in G.
We are now ready to define the function g(σ). In doing so, we repeatedly invoke Urysohn's lemma to construct functions that are zero on a closed set and positive outside it. First, let v_n(σ) = max_{s_n} G_n(s_n, σ_{−n}). Second, let β^1_n : Σ → [0, 1] be a continuous function that is zero on Z_n and positive everywhere else. Third, for each n, s_n, let β^2_{n,s_n} : Σ → [0, 1] be a continuous function that is one on Z^+_{n,s_n} and strictly smaller than one elsewhere. Finally, let β^3 : Σ → [0, 1] be a continuous function that is one on Σ \ int_Σ(V), zero on U, and strictly positive everywhere else. Combining these ingredients, define g_{n,s_n}(σ) for each n, s_n, and σ. If σ ∈ U, then g(σ) = 0, as β^3(σ) = 0, and σ is an equilibrium of G ⊕ g(σ) iff σ = σ*. Suppose σ ∉ U. Since σ* is the only fixed point of f^0, there exists some n such that f^0_n(σ) ≠ σ_n. For this n, there exists s_n such that σ ∈ Z^+_{n,s_n} (take s_n s.t. r^0_{n,s_n}(σ) ≥ r^0_{n,t_n}(σ) for all t_n ∈ S_n); and there is t_n in the support of σ_n but not in the support of r^0_n(σ). This implies β^2_{n,s_n}(σ) = 1, while β^2_{n,t_n}(σ) < 1. If σ ∉ V, then β^3(σ) = 1 and so σ is not an equilibrium of G ⊕ g(σ). If σ ∈ V \ U, then, as f^0 coincides with f, s_n is a best reply against σ while t_n is not; thus, again, σ is not an equilibrium of G ⊕ g(σ). The function g therefore has the desired properties.

3.7. Example. We continue with the example of Subsection 3.5 and construct the function g of the previous subsection. (Recall our convention of dropping the player subscript for terms like Z_n, Z^1_n.) Because Z is empty, we can set β^1(·) to be a constant function equal to δ > 0. There is no need to introduce the function β^2(·) in this example. Putting these ingredients together with the map β^3(·), we can define g. We show that if the payoffs are now perturbed according to the bonus function g for each player, the only remaining equilibrium is x = 1. Let x be an equilibrium of the perturbed game. If x ∈ [0, 2/3], then β^3(x) = 1, so if player 1 plays x, player 2 gets δ more than the best payoff v_2(x) in the unperturbed game from playing L, whereas by playing R he will not get more than v_2(x). Therefore, x = 1, which is a contradiction. On the other hand, if x ∈ [2/3, 1], then L is already the strict best reply in the original game and, since g is nonnegative, it follows that x = 1 is the unique equilibrium of the perturbed game.
3.8. Isolating σ*. Before we can use the perturbation g, we need first to embed G in a game G̃ where σ* is the only equilibrium in the face Σ of G̃ and, in fact, the only equilibrium in which the strategy of even one of the players is in Σ_n. (The perturbation g is then used on the face opposite to Σ.) Hence, the embedding G̃ will be such that if σ̃ is an equilibrium of G̃ and the support of σ̃_n is in S_n for some n, then σ̃ = σ*. The game G̃ that embeds G will be represented in strategic form.
Choose 0 < ε* < ε such that σ*_{n,s_n} > ε* for each n and s_n ∈ S*_n. Let U*_n ≡ B^{ε*}_n for each n ∈ N, and let U* ≡ B^{ε*}. The set U* is a proper subset of U (and is a closed subset contained in the interior of V). For each n, choose an arbitrary object 0*_n (not in S*_n). Let Θ_n be the set of distributions over ∏_{m≠n}(S*_m ∪ { 0*_m }). For each player n, his strategy set Σ̃_n in the strategic form of G̃ is Σ_n × Θ_n. A typical element σ̃_n ∈ Σ̃_n has coordinates (σ_n, θ_n).
We will now describe the payoff functions. For each θ_n ∈ Θ_n and m ≠ n, we let θ_{n,m} be the marginal distribution of θ_n over S*_m ∪ { 0*_m }. Notice that the payoff function of each player n is affine over each strategy set Σ̃_m, m = 1, . . . , N, so G̃ is indeed a well-defined game in strategic form. For each n, let θ^0_n = (0*_m)_{m≠n}. Then G is embeddable in G̃, as the face Σ_n × { θ^0_n } is a copy of the original face Σ_n, for n = 1, . . . , N.
Suppose σ̃ is an equilibrium of G̃; then σ is an equilibrium of G, as the functions γ_n of each player n do not depend on σ_n. If σ = σ*, then for each n the unique optimal choice for θ_{n,m}, m ≠ n, is 0*_m, and thus the equilibrium uses θ^0_n for each n. On the other hand, if σ ≠ σ*, then by the property of subsection 3.3 there are at least two players m for whom σ_m ∉ U*_m. Therefore, for each n, there is at least one m ≠ n such that 0*_m is not an optimal choice for θ_{n,m}. Thus, for each n, the support of θ_n does not include θ^0_n. To conclude, we have shown that if σ̃ = (σ_n, θ_n)_{n∈N} is an equilibrium of G̃, then σ is an equilibrium of G and either: (1) σ = σ* and θ_n = θ^0_n for each n ∈ N; or (2) σ ≠ σ* and the support of θ_n does not contain θ^0_n for any n ∈ N.
3.9. Example. In the context of the example of subsection 3.5 (see Figure 2), let ε* = 1/8. The strategic-form game is now defined by letting each player's strategy set be the square [0, 1] × [0, 1]. The payoff function (which has to be defined for all profiles) is as follows. For player 1, G̃_1(x, θ_1, y, θ_2) = G_1(x, y) + γ(θ_1, y); the payoffs for player 2 are defined symmetrically. If player 2 plays y < 7/8, then the (strict) best reply of player 1 is to play θ_1 = 1, in order to capture the positive bonus coming from γ(L, y); if y ≥ 7/8, then the bonus γ(L, y) is nonpositive, and θ_1 = 0 is a best reply, which makes the payoffs of the perturbed game equal to the original payoffs. This game has a copy of the original equilibrium x = 1: both players choose θ = 0 and x = 1. Any other equilibrium is such that θ = 1.
3.10. The embedding Ḡ^δ. The embedding that allows us to obtain σ* as the unique equilibrium (and a regular one as well) will be built from G̃ by adding a finite number of mixed strategies as pure strategies and by defining their payoffs to eliminate all other equilibria.18 The set Σ \ int_Σ(U*) is compact, g(·) is continuous, and no σ ∈ Σ \ int_Σ(U*) is an equilibrium of G ⊕ g(σ), as shown in subsection 3.6. Hence, there exists η > 0 such that no σ ∈ Σ \ int_Σ(U*) is an equilibrium of G ⊕ g′ for any g′ with ‖g′ − g(σ)‖ ≤ η. Also, since g is uniformly continuous, there exists ζ > 0 such that ‖g(σ′) − g(σ)‖ ≤ η whenever ‖σ′ − σ‖ ≤ ζ. Reduce ζ to ensure that it is also smaller than the distance between U*_n and ∂_{Σ_n} U_n for each n. For each n, take a triangulation 𝒯_n of Σ_n × Θ_n with the following properties. (See the Appendix for the details.) (1) The only vertices of 𝒯_n in Σ_n × {θ^0_n} are the pure strategies (s_n, θ^0_n), s_n ∈ S_n; (2) letting Θ^1_n be the face of Θ_n where θ^0_n has zero probability, if T_n ∈ 𝒯_n is a simplex that either has a face in Σ_n × Θ^1_n or shares a face with such a simplex, then the diameter of T_n is less than ζ; (3) there exists a convex function ℓ_n : Σ_n × Θ_n → R that is linear precisely on the simplices of 𝒯_n. We fix a pure strategy s^0_n ∈ S_n for each n; these pure strategies will be used below to define the perturbation of payoffs π_n for each player n. Let S̃^0_n be the set of vertices of 𝒯_n and S̃^1_n the set of vertices of 𝒯_{n+1}; we denote a typical element of S̃^1_n by (σ_{n,n+1}, θ_{n,n+1}), which is a vertex of 𝒯_{n+1}. For i = 0, 1, let Σ̃^i_n be the set of mixtures over S̃^i_n. The pure strategy set of player n in the game Ḡ^δ in normal form is S̃_n ≡ S̃^0_n × S̃^1_n. The set of mixed strategies is denoted Σ̃_n. For each mixed strategy σ̃_n and i = 0, 1, we let σ̃^i_n be the marginal over S̃^i_n. Define S̃ ≡ ∏_n S̃_n and Σ̃ ≡ ∏_n Σ̃_n. Also, let S̃^i ≡ ∏_n S̃^i_n and Σ̃^i ≡ ∏_n Σ̃^i_n for i = 0, 1. Fix δ > 0. We will now define the payoff function Ḡ^δ. For each n, let 𝒯^1_n be the collection of simplices of 𝒯_n that have nonempty intersection with Σ_n × Θ^1_n.
Given a pure strategy profile s̃ ∈ S̃ with s̃_n = (σ_n, θ_n, σ_{n,n+1}, θ_{n,n+1}) for each n, the payoff Ḡ^δ_n(s̃) has five distinct components. The first and the third terms have been defined before; the function ℓ_n in the last term is the convex function defined above. We now specify the other two terms.
where ξ_n(s̃^0_{−n}) is one if, for each m ≠ n, s̃^0_m is a vertex of some simplex in 𝒯^1_m; otherwise it is zero. The function π_n is 0 if either: (1) s̃^0_{n+1} is a vertex of some simplex in 𝒯^1_{n+1} and s̃^1_n = s̃^0_{n+1}; or (2) s̃^0_{n+1} is not a vertex of such a simplex, but s̃^1_n = (s^0_{n+1}, θ^0_{n+1}) (here s^0_{n+1} is the fixed pure strategy chosen above, while θ^0_{n+1} is the collection (0*_m)_{m≠n+1}); elsewhere it is −1. The definition of Ḡ^δ clearly implies that it embeds G.

18 The construction depends on the fixed-point map f_0 obtained in subsection 3.4, which in turn relies on an existence result, the Hopf Extension Theorem (cf. Corollary 8.1.18, [31]).
We want to make a couple of remarks about the payoffs. First, the function π_n incentivizes player n to mimic player n+1 whenever the latter chooses a strategy close to Σ_{n+1} × Θ^1_{n+1}: if player n+1 randomizes over the vertices of a simplex T_{n+1} ∈ 𝒯^1_{n+1}, then player n's best replies must be among the vertices of that simplex. This will be a crucial property, since the choices σ_{n−1,n} play a role in the evaluation of the bonus function g^1_n. The idea is that whenever the bonus function g^1_n is active, meaning that all players m ≠ n randomize over the vertices of a simplex T_m in 𝒯^1_m, then each pure best reply for player n must choose a vertex of T_{n+1}. On the other hand, if player n+1 is randomizing over S_{n+1} × {θ^0_{n+1}}, then the unique best reply for player n is to choose the previously fixed strategy s̃^1_n = (s^0_{n+1}, θ^0_{n+1}). Our second remark concerns the nature of the payoffs for mixed strategies. For each n and each i = 0, 1, there is a linear map p^i_n : Σ̃_n → Σ_{n+i} × Θ_{n+i} that sends each pure s̃_n to the corresponding mixed strategy in Σ_{n+i} × Θ_{n+i}. For each n, the first and the third terms of the payoffs depend on σ̃ only through its image under p^0 = ∏_{n∈N} p^0_n; the fourth term depends on all the information in σ̃^1_n and σ̃^0_{n+1}. The second term depends on σ̃_n only through p^0_n, but requires the entire information in σ̃_{−n}, while the last term requires the information in σ̃^0_n.
Say that a strategy σ̃_n is admissible for player n if the support of its marginal σ̃^0_n ∈ Σ̃^0_n is the set of vertices of a simplex T_n in 𝒯_n. Observe that for any δ > 0, every best reply for player n is admissible. Indeed, the first three components of n's payoff function depend on n's strategy only through its projection to Σ_n × Θ_n, and the fourth is independent of these choices. Therefore, any two strategies for n that project under p^0_n to the same point in Σ_n × Θ_n yield the same payoffs in these four terms, leaving the fifth to decide which one is better. But the map ℓ_n is convex, and it is linear precisely on the simplices of 𝒯_n, which then forces each best reply to be a mixture over the vertices of a simplex of 𝒯_n.
We claim that if, for δ = 0, the only admissible equilibrium of Ḡ^0 is σ̃*, then for sufficiently small δ > 0, σ̃* is the only equilibrium of Ḡ^δ. To prove this claim, suppose that we have a sequence σ̃^δ of equilibria of Ḡ^δ converging to some equilibrium σ̃^0 of Ḡ^0. As we saw above, each σ̃^δ must be admissible, and hence so is its limit σ̃^0. As we have assumed that σ̃* is the unique admissible equilibrium of Ḡ^0, σ̃^0 = σ̃*. Observe now that for each n, every pure best reply in Ḡ^0 to σ̃* is of the form (s_n, θ^0_n, s^0_{n+1}, θ^0_{n+1}), where s_n ∈ S_n is a best reply to σ*; and this property holds for best replies to σ̃^δ, for small δ. Thus, for each such δ and each n, σ̃^δ_n is of the form (σ^δ_n, θ^0_n, s^0_{n+1}, θ^0_{n+1}), where σ^δ_n is a best reply to σ^δ in G. In other words, σ^δ is an equilibrium of G. As σ^δ converges to σ* and σ* is an isolated equilibrium of G, σ^δ = σ* for all small δ. Thus the claim follows, and it is sufficient to show that σ̃* is the only admissible equilibrium for δ = 0.
To prove this last point, fix an admissible equilibrium σ̃, with marginals (σ̃^0, σ̃^1) ∈ Σ̃^0 × Σ̃^1, of the game Ḡ^0. For each n, let (σ_n, θ_n) and (σ_{n,n+1}, θ_{n,n+1}) be the images of σ̃_n under p^0_n and p^1_n, respectively. Also, let T_n be the simplex of 𝒯_n generated by the support of σ̃^0_n for each n. Suppose first that T_n belongs to 𝒯^1_n for each n. Then, for each n, θ_n assigns probability less than ζ (which is smaller than one) to θ^0_n. Also, for at least two n, σ_n ∉ int_{Σ_n}(U*_n): indeed, otherwise there is one player n all of whose opponents m are choosing in int_{Σ_m}(U*_m), making θ^0_n the unique optimal choice, which is impossible. Thus, σ_n ∉ int_{Σ_n}(U*_n) for at least two n, i.e., σ ∉ int_Σ(U*), and hence σ is not an equilibrium of G ⊕ g(σ). For each n and each s̃_{−n} in the support of σ̃_{−n}, ξ_n(s̃^0_{−n}) = 1, as s̃^0_m is a vertex of the simplex T_m, which is in 𝒯^1_m, for each m; because of the function π_n, the optimality of σ̃^1_{n−1} implies that each s̃^1_{n−1} in the support of σ̃^1_{n−1} is a vertex of T_n. Therefore, for each s̃_{−n} in the support of σ̃_{−n}, ‖g^1(s̃_{−n}) − g(σ)‖ ≤ η, and then ‖g^1(σ̃_{−n}) − g(σ)‖ ≤ η. As σ is not an equilibrium of G ⊕ g(σ), by the choice of η in subsection 3.10 it is not an equilibrium of G ⊕ g^1(σ̃), which contradicts the fact that σ̃ is an equilibrium of Ḡ^0. Now suppose that for exactly one n, say n = 1, T_n does not belong to 𝒯^1_n. Then θ^0_1 has positive probability under θ_1. Therefore, because of the definition of γ_n, σ_n ∈ U*_n for n > 1, i.e., σ_{−1} ∈ U*_{−1}. For n > 1, the facts that σ_n ∈ U*_n and T_n belongs to 𝒯^1_n imply that for each s̃^0_n = (σ′_n, θ′_n) in the support of σ̃^0_n, σ′_n belongs to U_n (as the diameter of T_n is less than ζ, which is smaller than the distance between U* and ∂_Σ U). Thus, g^1_1(σ̃) = 0. We will now show that g^1_n(σ̃) = 0 for n > 1.
The payoff function π_n for each n ≠ N forces each strategy s̃^1_n = (σ_{n,n+1}, θ_{n,n+1}) in the support of σ̃^1_n to be a vertex of T_{n+1}, and hence σ_{n,n+1} is in U_{n+1}. Therefore, for n > 1, σ_{n−1,n} ∈ U_n. Recall that g_n(·) was constructed to be 0 on U. Consequently, for each n > 1, g^1_n(s̃_{−n}) = 0 for each s̃_{−n} in the support of σ̃_{−n}, i.e., g^1_n(σ̃_{−n}) = 0. The fact that g^1(σ̃) = 0 implies that σ = σ*. Optimality of θ_n for n > 1 now requires that it assign probability one to θ^0_n. This is a contradiction: since T_n ∈ 𝒯^1_n, its diameter is smaller than ζ (and hence smaller than one), putting it at positive distance from Σ_n × {θ^0_n}.
Finally, suppose that for at least two players n, T_n does not belong to 𝒯^1_n. Then, again because of γ_n, σ_n ∈ U*_n for each n. We claim that for each n, g^1_n(s̃_{−n}) = 0 for each s̃_{−n} in the support of σ̃_{−n}. Indeed, if for some m ≠ n, T_m has no vertex in any simplex of 𝒯^1_m, then ξ_n is zero by construction at each s̃_{−n} in the support and we are done. Otherwise, if for each m ≠ n, T_m has a vertex in some simplex of 𝒯^1_m, then, letting s̃^0_m = (σ′_m, θ′_m) be an arbitrary vertex of T_m, it follows from the fact that the diameter of each T_m is less than ζ that σ′_m ∈ U_m, which implies that the support of σ̃_{−n} projects into U. Therefore, g^1_n(·) is again zero on the support of σ̃_{−n}. It follows from the previous paragraph that σ is an equilibrium of G, i.e., σ = σ*, making θ_n = θ^0_n. Finally, optimality of σ̃^1_n implies that it is (s^0_{n+1}, θ^0_{n+1}), as it yields zero while the others yield −1. Thus σ̃ = σ̃*, which concludes the proof. □

In the context of our example, payoffs are defined in the exact same way as in subsection 3.10, with the following modifications: the function π ≡ 0, since there is no need to duplicate the strategy set of each player (by our construction, g_n depends only on σ_{−n}); and the function ξ is equal to 1 at all vertices of the triangulation that lie on the face θ = 1.
Paralleling the proof of subsection 3.10, we show that the only admissible equilibrium of the perturbed game Ḡ^δ with δ = 0 is the (symmetric) equilibrium (x, θ) with x = 1 and θ = 0. To see this, let (x, θ) be an equilibrium. Suppose first that x < 7/8. Then θ = 1 is a strict best reply. The support of the equilibrium is then a subset of one of the 1-dimensional simplices that subdivide the face θ = 1. By our construction of g and the fact that it is linear on each of these simplices, it follows that x = 1, which is a contradiction. Therefore, x ≥ 7/8. Recall that in the original game, x = 1 is a strict best reply if the opponent plays x ≥ 7/8. Since g_L ≥ 0 (whereas g_R ≡ 0), it follows that x = 1 is a strict best reply in Ḡ^δ. Finally, given x = 1, θ = 0 is the optimal choice in Θ.

Deleting Unused Best Replies
In the definition of equivalence between game-equilibrium pairs (G, σ) and (Ḡ, σ̄), we insisted that G and Ḡ be the same game once we delete all strategies that are inferior replies to σ and σ̄, respectively. We could have weakened the requirement by allowing the deletion of unused best replies as well, i.e., by requiring that the games be the same once we restrict them to the supports of their respective equilibria. For generic games, of course, these two notions coincide. But for nongeneric games, as we now show by example, allowing for the deletion of unused best replies leads to an unsatisfactory concept of equivalence.
Consider first the following bimatrix game: The equilibrium (b, r) is in dominated strategies and is clearly an unreasonable equilibrium. Deleting the strategies t and l, which are unused best replies against this equilibrium, produces a trivial game where (b, r) is now the unique equilibrium.
One might conjecture that the misbehavior in the above game stems from the fact that the equilibrium (b, r) has index zero. But that is not the case, as the following 3-player game shows. This game has a unique equilibrium, σ̂, in which player 1 mixes uniformly between (T, t) and (T, b); player 2 mixes uniformly between (L, l) and (L, r); and player 3 plays W. If we delete B, R, E_w, and E_e, all of which are unused best replies, we get the game G_1:

            l            r
  t    (6, 6, 1)    (0, 0, 1)
  b    (0, 0, 1)    (6, 6, 1)

where the equilibrium σ̂ is now regular but its index is reversed to −1.
Our final example shows that there are problems even when an equilibrium is unique and in pure strategies: deleting some unused best replies results in a game where this equilibrium is not even isolated. In the game Ḡ below,19 deleting the unused best replies B and E, but not R, gives us the following game G_2, in which there is now an interval of equilibria:
It is noteworthy that, when we view the last game G_2 as a two-player game, the equilibrium (T, L) (part of a connected component) is made unique by adding players and strategies. Similarly, in the two-player game G_1 above, the regular equilibrium of index −1 in which player 1 mixes uniformly between (T, t) and (T, b) and player 2 mixes uniformly between (L, l) and (L, r) has been made unique by adding players and strategies, leaving open the question of how general such a property is.20

20 Observe that, without adding players, it is impossible to make an equilibrium whose index is not +1 unique in a two-player game by adding only strategies, because a unique equilibrium of a two-player game is necessarily quasi-strict (Norde [24]).

Extending Sustainability to Non-generic Games
It is fairly easy to construct non-generic games where no equilibrium is sustainable. For example, consider the (in effect) two-player game G_2 we obtained at the end of the last section, where the set of equilibria is a connected component. There is no way to make any of these equilibria unique by adding strategies that are inferior replies.21 Therefore, any extension of sustainability to non-generic games needs to allow for solutions to be sets of equilibria if we are to have existence. It is then natural to require that solutions be connected: as argued by Mertens [19], this requirement keeps our selections "minimal" and, in addition, if the game is a generic extensive-form game, each solution has a unique prediction in terms of the equilibrium outcome.
A natural definition of sustainability for connected sets of equilibria calls for such a set to be sustainable if it is the entire set of equilibria of a game obtained by adding or deleting strategies that are inferior replies to every equilibrium in the set. Under such a definition, the component in which the candidate solution lies would survive these alterations and, hence, the connected sets would have to be entire components.22 Moreover, a sustainable component should also have index +1 (for the same reason that sustainability for individual equilibria implies that their index is +1). As there are games23 where no component has index +1, there seems to be no reasonable way to extend sustainability to non-generic games without running afoul of the existence criterion.
Our way around this problem is to extend sustainability by an axiomatic procedure, rather than by a definition using the game-theoretic property that it attempts to capture. To do that, we first axiomatize sustainability for regular equilibria and then add axioms that are to hold on the universal domain of finite games to obtain a characterization of components with positive index. Actually, the axiomatization of sustainability holds for a slightly larger class of games than regular games and is derived from the following observation. In the main theorem of the paper, even though we focus on regular equilibria, what really matters, as a careful reading of the proof shows, is one particular implication of regularity, namely that regular equilibria are isolated. Thus, we have proved a slightly stronger result:

21 A non-isolated equilibrium σ of a game G can never be made unique in a game Ĝ obtained from G by adding/deleting inferior replies. Suppose this could be done. Let (σ^k)_{k∈N} be a sequence of equilibria of G converging to σ. Passing to a subsequence if necessary, we can assume that the support T of σ^k is fixed and that all strategies in the support are best replies to σ. As such, no strategy in T can be deleted. Since σ is the unique equilibrium of Ĝ, there is (up to a subsequence) a fixed player i and a fixed profitable best-reply deviation τ_i to σ^k in Ĝ. By continuity, τ_i is a best reply to σ in Ĝ: a contradiction.
22 A strict subset U of an equilibrium component C can never be made the unique equilibrium set in a game Ĝ obtained from G by adding/deleting inferior replies. The proof is the same as in the last footnote.
23 See [9], Fig. 8, or [28], p. 325.
Theorem 5.1. An isolated equilibrium of a finite game is sustainable iff its index is +1.
Let G be the class of all finite games and let G* be the subset consisting of all games in which each equilibrium is isolated and has either index +1 or index −1. G* includes all regular games, but it does not coincide with the set of all games with finitely many equilibria. The following game G_3 has two isolated equilibria, neither of which has index +1 or −1.
Game G_3 has two isolated equilibria: one mixed, σ = (½C + ½D, ½L + ½M, W), and one pure, τ = (B, R, W). It is easy to check that σ is quasi-strict, and so one can compute that its index is −1 by restricting to its support. Thus the index of τ is +2, and so no equilibrium in G_3 can be made unique in an equivalent pair.24 A solution concept Φ on the domain G* assigns to each game G ∈ G* a collection of equilibria (called solutions of G). Let Φ* be the solution concept that assigns to each game G in G* the equilibria with index +1. Φ* is then the unique solution concept that satisfies the following axioms for a solution concept Φ on the domain G*.25
• A1 Existence: Every game in G* has a solution.
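The index bookkeeping behind the claim about G_3 can be made explicit: the indices of the equilibria of a game with finitely many equilibria sum to +1 (Ritzberger [27]), so

```latex
\[
\operatorname{ind}(\sigma) + \operatorname{ind}(\tau) = +1,
\qquad \operatorname{ind}(\sigma) = -1
\quad\Longrightarrow\quad
\operatorname{ind}(\tau) = +2 ,
\]
```

and since neither index equals +1, Theorem 5.1 rules out making either equilibrium unique in an equivalent pair.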
Consider now the following axioms for an extension Φ of Φ* to G.
• A1+ Existence: Every game G ∈ G has a solution.
• A2+ IIA: A solution of a game G is also a solution of any game Ḡ obtained from G by the addition and/or deletion of strategies that are inferior replies to every equilibrium in the solution.
• A4 Connectedness: Every solution is a connected subset of Nash equilibria.

24 Our game is inspired by the two-player game in [9], Fig. 8, in which the index +2 belonged to a component. By adding a player and strategies we can kill all equilibria of that component except τ.
25 These axioms have been explicitly or implicitly stated in Myerson [21] and Hofbauer [12]. In fact, Hofbauer's conjecture is based on a combination of A1, A2 and A3.
• A5 Invariance: Equivalent games have equivalent solutions.26
• A6 Robustness: Every game that is nearby in the space of payoffs has a nearby solution.
Axioms A1+ and A2+ are the natural statements of the corresponding axioms for the domain G. Observe that we dispense with a counterpart of Axiom A3, since we are not directly extending sustainability but only its prescription for generic games. Axioms A4 and A5 are standard in the strategic-stability literature (Kohlberg and Mertens [14]). We have already discussed the reasonableness of Axiom A4. Axiom A5 requires that the solution depend only on the reduced normal form and thus be invariant to irrelevant changes in the extensive-form description of the game and to the addition/deletion of duplicate strategies (mixtures of pure strategies). Axiom A6 is really a restatement of hyperstability [14]. The next proposition shows that there is a solution concept extending Φ* to G and satisfying all our axioms, namely the one selecting the positive-index Nash components. We then prove that it is the unique concept satisfying A1+, A4, and a combination of A5 and A6.
Proposition 5.2. The solution concept Φ+ that associates to each game its set of positive-index Nash components extends Φ* to G and satisfies A1+, A2+, A4, A5, and A6.
Proof. Obviously, the restriction of Φ+ to G* is Φ*, so Φ+ is indeed an extension to G. A1+ holds because the sum of the indices of the finitely many Nash components is +1 (Ritzberger [27]). Axiom A4 holds because positive-index Nash components are closed and connected sets of Nash equilibria. Axiom A5 is a consequence of the invariance of the index to the addition/deletion of duplicate strategies (Govindan and Wilson [6], Theorem 5); the same argument proves A2+ as well. Axiom A6 is a consequence of the fact that if the index of a component of equilibria is nonzero, then the component is essential in the sense of O'Neill [25].
The solution concept that assigns to games in G* their equilibria of index +1 and to all other games all their components with nonzero index is also an extension. It is not clear whether our axioms rule out such solution concepts. However, we now show that if we strengthen Axioms A5 and A6 (in a sense, conflating them), then we obtain a characterization of Φ+.
We now impose the following property on an extension Φ of Φ*.
• A7 Uniform Robustness: For every game G, every solution C, and every neighbourhood U of C, there exists δ > 0 such that every δ-perturbation of the payoffs of every strategically equivalent game Ḡ yields a game Ḡ^δ that has a solution equivalent to a subset of U.
If the term "solution" is replaced by "Nash equilibrium," we obtain precisely the uniform hyperstability concept defined in Govindan and Wilson [6], who proved that nonzero-index Nash components are the only connected uniformly hyperstable sets. Adapting their tools allows us to show the following axiomatic characterization of positive-index Nash components.27 This theorem suggests that in non-regular games, the sustainable equilibria are the components of equilibria with positive index.

Theorem 5.3. Φ+ satisfies A7. Moreover, any extension of Φ* to G that satisfies A4 and A7 must select from among the solutions of Φ+.
Proof. If a solution C of a game G ∉ G* is not an entire component, then the proof in Govindan and Wilson [6] applies to show that C cannot be uniformly robust. Therefore, a solution must be an entire connected component of equilibria. If C is a component with negative index, then Lemma B.1 in the Appendix implies that there is a neighbourhood U of C such that for every δ > 0 there exist a game Ḡ equivalent to G and a δ-perturbation Ḡ^δ of Ḡ with finitely many equilibria, all of index +1 or −1, such that all the equilibria of Ḡ^δ in Ū (the neighbourhood equivalent to U) have index −1 and so are not "solutions." Thus C must have positive index. Conversely, that positive-index components are uniformly robust follows from Govindan and Wilson [6].

Conclusion
Most existing refinement concepts that are guaranteed to exist, such as perfection [29], properness [22], and stable equilibria [11, 14, 19], only refine equilibria of nongeneric games; i.e., every equilibrium of a regular game survives these refinement criteria.28 On the other hand, the procedure proposed by Harsanyi and Selten [8] selects a unique equilibrium in every game. Sustainability lies somewhere in between refinement and selection: it halves the set of predictions in regular games, as it disregards all equilibria of index −1.
In nongeneric games, one argument against components with negative index is that they are dynamically unstable under all Nash dynamics [3, 16]. Our approach provides an axiomatic argument against these components. Moreover, the selection among components with a positive index can be made compatible with all of the above-mentioned refinements. Indeed,

27 A closed set C of Nash equilibria of a finite game G is hyperstable if for every strategically equivalent game Ḡ and every neighbourhood Ū of the equivalent set C̄ of equilibria, there exists a neighbourhood V of Ḡ such that every game in V has a Nash equilibrium in Ū. We conjecture that nonzero-index Nash components are the only connected hyperstable sets in generic extensive-form games, giving us the analogous conjecture in our context: A7 is implied by A5 and A6 for generic extensive-form games.
28 An exception is persistent equilibrium [13], which eliminates the mixed equilibrium in the Battle of the Sexes; but, as noted by Myerson, it violates invariance [1].
it is known that components with nonzero index always contain stable sets in the sense of Mertens (and hence also sets that are stable in the sense of Kohlberg-Mertens or Hillas), and thus contain proper equilibria, and sequential equilibria of every extensive-form game with that normal form [14, 32]. In addition, there are positive-index components that are persistent.29 We alluded to a number of open problems in the paper. One of the most important of these is whether we can retain Axioms A5 and A6 and eliminate A7, at least on the domain of generic extensive-form games, to select components with a positive index. A second, intriguing question is the effect of adding more players to the game. For example, we can view an N-person game as an (N+1)-person game by treating player N+1 as a dummy player and then considering equivalences of game-equilibrium pairs involving these N+1 players. How would this more expansive notion of equivalence compare with what we know so far? Finally, one wonders whether selecting from among the components with a positive index by invoking some additional criterion, like persistent equilibria [13] or settled equilibria [20], could lead to an equilibrium notion that addresses some of the shortcomings of using only the positivity of the index. Grounds for optimism here come chiefly from examples like game G_1 in the Introduction, where the completely mixed equilibrium has index +1 and is unstable under all natural dynamics, but is neither persistent nor settled.

Appendix A. Delaunay Triangulations
Here we construct, for each n, a triangulation 𝒯_n of Σ_n × Θ_n with the properties stated in subsection 3.10. We start with some definitions.
A simplex T in R^d is the convex hull of affinely independent points x_0, x_1, ..., x_k (k ≤ d); a face of T is the convex hull of a subset of the points x_i. A triangulation 𝒯 of a polytope C ⊂ R^d is a finite collection of simplices in R^d such that: (1) if T ∈ 𝒯, so is every face of T; (2) the intersection of two simplices in 𝒯 is a face of both (possibly empty); (3) the union of the simplices in 𝒯 equals C.
Throughout this Appendix, we use the ℓ_2-norm unless we specify otherwise. Take a finite collection of points {x_0, x_1, ..., x_k} in R^d (where k is now an arbitrary positive integer) such that its convex hull C is d-dimensional. Suppose that the x_i's are in general position for spheres in R^d, i.e., no subcollection of d + 2 of these points lies on any (d−1)-sphere (centered at any point and of any radius) in R^d. We can then construct a triangulation of the convex hull C, called the Delaunay triangulation, as follows (cf. De Loera et al. [15] for details).
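In low dimensions, the general-position-for-spheres condition can be tested with the standard "lifted" determinant: d + 2 points in R^d lie on a common sphere (or a common hyperplane, the degenerate case) iff the determinant below vanishes. The sketch is ours, not the paper's; the helper name `cospherical` and the numerical tolerance are assumptions.

```python
import numpy as np

def cospherical(points, tol=1e-9):
    """True iff the given d+2 points in R^d lie on a common (d-1)-sphere
    (or on a common hyperplane, the degenerate case)."""
    pts = np.asarray(points, dtype=float)
    # Lift each x to (x, |x|^2, 1); cosphericity <=> the matrix is singular.
    m = np.hstack([pts, (pts ** 2).sum(axis=1, keepdims=True),
                   np.ones((len(pts), 1))])
    return abs(np.linalg.det(m)) < tol

# The corners of a square are cocircular; moving one corner breaks it.
print(cospherical([(0, 0), (1, 0), (0, 1), (1, 1)]))       # True
print(cospherical([(0, 0), (1, 0), (0, 1), (0.5, 0.25)]))  # False
```

A collection is in general position for spheres precisely when every (d+2)-point subcollection fails this test.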
Let D be the convex hull of the lifted points (x_i, ‖x_i‖²) ∈ R^{d+1}, i = 0, 1, ..., k. Let D_0 be the lower envelope of D. The image of D_0 under the natural projection (x, y) ↦ x is C, and D_0 is the graph of a piecewise-linear convex function ℓ : C → R with the property that the subsets on which ℓ is linear are simplices, whose projections then yield the simplices of a triangulation of C.
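This lower-envelope construction can be checked numerically. The sketch below (ours, not the paper's, and assuming SciPy is available) lifts generic planar points to the paraboloid, keeps the downward-facing facets of the convex hull of the lifted points, and compares their projections with SciPy's own Delaunay triangulation.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
pts = rng.random((12, 2))  # generic points in the plane (with probability 1)

# Lift each x to (x, |x|^2) on the paraboloid in R^3.
lifted = np.hstack([pts, (pts ** 2).sum(axis=1, keepdims=True)])

hull = ConvexHull(lifted)
# Lower envelope D_0: facets whose outward normal has a negative last coordinate.
lower = {frozenset(f) for f, eq in zip(hull.simplices, hull.equations)
         if eq[-2] < 0}

# Projecting the lower facets recovers the Delaunay triangulation of pts.
delaunay = {frozenset(s) for s in Delaunay(pts).simplices}
assert lower == delaunay
```

Each row of `hull.equations` is an outward facet normal followed by an offset, so `eq[-2]` is the normal's vertical component; the facets with `eq[-2] < 0` are exactly the graph of the convex function ℓ.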
There is a dual representation of the Delaunay triangulation, known as the Voronoi diagram, which works as follows. For each i = 0, 1, ..., k, let P_i be the polyhedron consisting of the points y in R^d such that ‖y − x_i‖ ≤ ‖y − x_j‖ for all j ≠ i. We then have a polyhedral complex (which is like a simplicial complex but with polyhedra rather than simplices) in which the maximal polyhedra are the P_i. There is an edge between two vertices x_i and x_j in the Delaunay triangulation iff the polyhedra P_i and P_j have a nonempty intersection. Also, the intersection of d + 1 of these polyhedra, when nonempty, is a single point (because of genericity), which is then the center of a ball that contains d + 1 points of the collection on its boundary and no other point of the collection in its interior; these d + 1 points span a d-dimensional simplex of the Delaunay triangulation.
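The duality can likewise be verified computationally. In the sketch below (again ours, assuming SciPy), `ridge_points` lists the pairs of sites whose Voronoi cells share a facet; for generic points these pairs coincide with the edges of the Delaunay triangulation.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(1)
pts = rng.random((15, 2))  # generic planar points

tri = Delaunay(pts)
# All vertex pairs of the Delaunay triangles.
delaunay_edges = {frozenset(e) for s in tri.simplices
                  for e in [(s[0], s[1]), (s[0], s[2]), (s[1], s[2])]}

# Pairs of sites whose Voronoi polyhedra intersect in a common facet.
voronoi_pairs = {frozenset(p) for p in Voronoi(pts).ridge_points}
assert delaunay_edges == voronoi_pairs
```

Note that SciPy's `ridge_points` also reports the unbounded ridges between hull-adjacent sites; these correspond exactly to the Delaunay edges along the boundary of the convex hull.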
For our purposes, we need a triangulation in which the diameter of certain simplices is smaller than ζ, as specified in subsection 3.10. To obtain it, we need an auxiliary construction. Let C be a full-dimensional polytope in R^d. Let B_0 be a proper face of C and let H be a hyperplane that strictly separates B_0 from the vertices of C that are not in B_0. Let B be the intersection of C with the halfspace generated by H that contains B_0 in its interior, i.e., B is of the form { x ∈ C : a · x ≤ b }, where H = { x : a · x = b } and a · x < b for all x ∈ B_0. Let X be the set of vertices of C and let Y be the set of vertices of B that are not in X. Suppose the vertices in X ∪ Y satisfy the following property (P): for each 0 < k ≤ d and each collection v_0, ..., v_{k+1} of vertices in X ∪ Y that are not affinely independent, the intersection of the Voronoi polyhedra of these k + 2 vertices is empty. This assumption implies, among other things (by taking k = d), that the vertices in X ∪ Y are in general position for a Delaunay triangulation of C. Beyond that, the stronger assumption allows us to obtain refined Delaunay triangulations of C. Specifically, if we add a collection Z of points in C such that each point z ∈ Z is in generic position in the face of C or H ∩ C that contains it in its interior (i.e., outside a nowhere dense subset of this face), then the collection X ∪ Y ∪ Z is in general position for spheres and we can construct a Delaunay triangulation with this set of vertices. Indeed, suppose the points in Z are chosen generically and suppose v_0, ..., v_{d+1} is a collection of vertices in X ∪ Y ∪ Z. There exists 0 < k ≤ d such that, after permuting the labels of the collection if necessary, v_0 = Σ_{i=1}^{k+1} λ_i v_i with λ_i ≠ 0 for all i > 0 and Σ_i λ_i = 1. By Property (P), the Voronoi polyhedra of the vertices do not intersect if all the vertices belong to X ∪ Y. If some vertex in the collection belongs to Z, then we can assume that v_0 belongs to Z. If v_1, ..., v_{k+1} do not span a face of C or H ∩ C, then clearly we can perturb v_0 so that it does not lie in the affine space of the other v_i's, which contradicts the assumption that v_0 was chosen in generic position. On the other hand, if v_1, ..., v_{k+1} do span a face of C or H ∩ C, then for a generic choice of v_0, and also of the v_i's in the list v_1, ..., v_{k+1} that are not in X ∪ Y, their Voronoi polyhedra do not intersect. Thus, Property (P) allows us to refine the initial Delaunay triangulation of C (involving the vertices X ∪ Y).
Let δ > 0 be such that ‖x − y‖ ≥ δ/2 for all x ∈ B and all vertices y ∈ C \ B. Let X^δ be a finite collection of points in C such that: (1) X ∪ Y ⊂ X^δ and X^δ ∩ (C \ B) ⊂ X; (2) for each x ∈ B, there is a point x^δ ∈ B ∩ X^δ such that ‖x − x^δ‖ < δ/2 and x^δ belongs to the face of B that contains x in its interior; (3) every point in int_C(B) ∩ X^δ is at distance at least δ/2 from ∂_C B; (4) the points in X^δ are in general position for spheres. Call 𝒯^δ the associated Delaunay triangulation of C.
The triangulation T_δ above achieves two properties: (i) every simplex with vertices in B has diameter at most δ; (ii) every simplex of T_δ that has a vertex outside B does not intersect int_C(B). To prove these properties, define r : R^d → B by letting r(x) be the point in B that is closest to x. If r(x) ≠ x, then r(x) belongs to a proper face of B, and we can write r(x) as x − p, where p is a normal for a supporting hyperplane at r(x) with p · r(x) ≥ p · y for all y ∈ B. If in addition r(x) ∈ int_C(B), then r(x) is on the boundary of C and so p · r(x) ≥ p · y for all y ∈ C as well. Suppose r(x) ≠ x and let r(x) = x − p. Let y be a point such that p · y ≤ p · r(x). Let z be the nearest-point projection of y onto the line from x through r(x). Then ‖x − y‖ ≥ ‖x − z‖ ≥ ‖x − r(x)‖, with the second inequality being an equality iff z = r(x), i.e., p · y = p · r(x).
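For intuition about the nearest-point map r, here is a sketch of the projection onto a probability simplex, a special case where the projection has a closed form via the standard sort-and-threshold method (numpy assumed; this is illustrative and not the paper's construction):

```python
import numpy as np

def project_to_simplex(x):
    """Euclidean nearest-point projection of x onto the probability simplex
    {p : p >= 0, sum(p) = 1}, by sorting and thresholding."""
    x = np.asarray(x, dtype=float)
    u = np.sort(x)[::-1]                      # coordinates in decreasing order
    css = np.cumsum(u)
    j = np.arange(1, len(x) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]   # largest feasible index
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(x + theta, 0.0)
```

For x = (2, 0), the projection is (1, 0), and the residual p = x − r(x) = (1, 0) satisfies the supporting-hyperplane inequality p · r(x) ≥ p · y for every y in the simplex, as in the argument above.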
We are now ready to prove that T_δ has the requisite properties. Let x_δ be a point in X_δ ∩ B and let x be a point in R^d that belongs to the Voronoi polyhedron P(x_δ) of x_δ. We claim that ‖r(x) − x_δ‖ < δ/2. If r(x) = x, this follows directly from Property (2) of X_δ. Suppose that r(x) ≠ x. Then r(x) belongs to the interior of a proper face B′ of B and, as we saw in the last paragraph, r(x) can be written as x − p. By definition of r(x), p · x_δ ≤ p · r(x). By Property (2), there exists y_δ in B ∩ X_δ such that ‖r(x) − y_δ‖ < δ/2 and y_δ belongs to B′. Obviously p · y_δ = p · r(x), and since x ∈ P(x_δ), it follows that ‖x − x_δ‖² ≤ ‖x − y_δ‖² < ‖r(x) − x‖² + δ²/4; as ‖x − x_δ‖² ≥ ‖r(x) − x‖² + ‖r(x) − x_δ‖² (the angle at r(x) is obtuse), therefore ‖r(x) − x_δ‖ < δ/2, as claimed.
Observe that we proved that ‖x − x_δ‖² < ‖r(x) − x‖² + δ²/4, a fact we will use below.
From the above paragraph, for each x_δ ∈ X_δ ∩ B and each x ∈ P(x_δ), the distance between r(x) and x_δ is less than δ/2. We claim finally that the diameter of each simplex with vertices in B is less than δ. Indeed, let x_δ and y_δ be two vertices of a simplex in B. Their Voronoi cells intersect, so we can take x in the intersection. Since r(x) is within distance δ/2 of x_δ and within δ/2 of y_δ, the distance between x_δ and y_δ is less than δ. This concludes the proof that T_δ satisfies (i).
We now prove that T_δ satisfies (ii): for this, it is sufficient to show that the intersection of P(x_δ) and P(y_δ) is empty for all x_δ ∈ int_C(B) ∩ X_δ and y_δ ∈ X_δ \ B. Take such a pair x_δ, y_δ. Fix x ∈ P(x_δ). If r(x) = x, then ‖x − x_δ‖ < δ/2, while by the definition of δ, ‖x − y_δ‖ ≥ δ/2, and thus x ∉ P(y_δ). Suppose r(x) ≠ x. Since x_δ ∈ int_C(B), by Property (3) of the set X_δ, r(x) cannot belong to ∂_C B: in that case the distance between x_δ and r(x) would be at least δ/2, contradicting the previous paragraph. Therefore, r(x) belongs to a face of C. Writing r(x) as x − p, we then have that p is a normal to a hyperplane containing one of the faces of C, and thus p · y_δ ≤ p · r(x). Hence ‖x − y_δ‖² ≥ ‖r(x) − x‖² + δ²/4 by the definition of δ, while, as we saw in the previous paragraph, ‖x − x_δ‖² < ‖r(x) − x‖² + δ²/4; thus again x ∉ P(y_δ) and we are done.
For our problem of triangulating Σ_n × Θ_n, we recall the properties that the triangulation should satisfy: (1) the only vertices of T_n in Σ_n × {θ⁰_n} are the pure strategies (s_n, θ⁰_n), s_n ∈ S_n; (2) letting Θ¹_n be the face of Θ_n where θ⁰_n has zero probability, if T_n ∈ T_n is a simplex that either has a face in Σ_n × Θ¹_n or shares a face with such a simplex, then the diameter of T_n is less than ζ; (3) there exists a convex function ℓ_n : Σ_n × Θ_n → R_+ such that: (a) ℓ_n(λx + (1 − λ)y) = λℓ_n(x) + (1 − λ)ℓ_n(y) iff x and y belong to a common simplex T_n of T_n; (b) ….

For each ŝ¹_n ∈ Ŝ¹_n, let x(ŝ¹_n) be the unit vector in R^{Ŝ_n} for the coordinate ŝ¹_n; for each ŝ²_n ∈ Ŝ²_n, let x(ŝ²_n) be a point in R^{Ŝ_n} to be determined later. Let X ≡ {x(ŝ²_n)}_{ŝ²_n ∈ Ŝ²_n}. Define an affine function F^X_n : Σ_n × Θ_n → R^{Ŝ_n} as follows: for each ŝ_n ∈ Ŝ_n, F^X_n(ŝ_n) = x(ŝ_n); for a vertex (s_n, θ_n) of Σ_n × Θ_n that is not in Ŝ¹_n ∪ Ŝ²_n, define F^X_n(s_n, θ_n) = F^X_n(s_n, θ⁰_n) + F^X_n(s⁰_n, θ_n) − F^X_n(s⁰_n, θ⁰_n). The map F^X_n extends to the whole of Σ_n × Θ_n by linear interpolation. If the collection X ∪ {x(ŝ¹_n)}_{ŝ¹_n ∈ Ŝ¹_n} is affinely independent (which holds for an open and dense set of choices for X), then F^X_n is an affine homeomorphism onto its image C(X) ≡ F^X_n(Σ_n × Θ_n), and the dimension of C(X) is |Ŝ_n|.
Let H be a hyperplane in R^{S_n × T_n} that strictly separates Σ_n × Θ¹_n from Σ_n × {θ⁰_n} and such that the distance between H ∩ (Σ_n × Θ_n) and Σ_n × Θ¹_n is less than ζ. Every vertex of H ∩ (Σ_n × Θ_n) is of the form (1 − ε_{s_n,θ_n})(s_n, θ_n) + ε_{s_n,θ_n}(s_n, θ⁰_n) for some s_n ∈ S_n, θ⁰_n ≠ θ_n ∈ Θ_n, and 0 < ε_{s_n,θ_n} < ζ. The hyperplane could be defined equivalently by choosing |S_n| + |T_n| of the coordinates ε_{s_n,θ_n}. Let B_0(X) = F^X_n(Σ_n × Θ¹_n) and let B(X) = F^X_n(H⁻ ∩ (Σ_n × Θ_n)), where H⁻ is the halfspace that contains Σ_n × Θ¹_n. Let X̄ be the set of vertices of C(X) and let Ȳ be the set of vertices of C(X) ∩ F^X_n(H ∩ (Σ_n × Θ_n)). We claim now that if the choice of the vectors in X is generic and if the hyperplane H is in generic position, i.e., if the collection of ε_{s_n,θ_n} is chosen outside a nowhere dense set, then X̄ ∪ Ȳ satisfies Property (P); consequently, there exists a Delaunay triangulation of C(X) that can be refined according to (P). To prove this claim, let v_0, …, v_{k+1} be a collection of vertices, each belonging to either Σ_n × Θ_n or its intersection with H, such that v_0 = Σ_{i=1}^{k+1} λ_i v_i with λ_i ≠ 0 for all i > 0 and Σ_i λ_i = 1. We have to show that the Voronoi polyhedra of the v_i's do not intersect. We divide the proof into three cases. Assume to begin with that all the vertices belong to Σ_n × Θ_n. Then k + 2 = 2J for some integer J > 1 and, after a relabeling of the vertices, Σ_{i=1}^{J} v_i = Σ_{i=J+1}^{2J} v_i; there are as many vertices in Ŝ¹_n among the first J as among the second; and there is at least one vertex on each side of the equality that does not belong to Ŝ¹_n. If the Voronoi polyhedra of the F^X_n(v_i)'s contain a point y in common, then, letting c be the distance between y and each of the F^X_n(v_i)'s, elementary algebra shows that Σ_{i=1}^{J} ‖F^X_n(v_i)‖² = Σ_{i=J+1}^{2J} ‖F^X_n(v_i)‖², an equality that holds only for a nongeneric choice of X. Therefore, the Voronoi polyhedra of the F^X_n(v_i)'s do not intersect. Now suppose that all the v_i's belong to H ∩ C.
Then the nearest-point projections of the v_i's to Σ_n × Θ¹_n span a face. The Voronoi polyhedra of the projections do not have a point in common, as we saw in the previous paragraph. Therefore, if the ε_{s_n,θ_n}'s are small, the Voronoi polyhedra of the v_i's do not intersect either.
Finally, it remains to consider the case where one of them, which we can assume to be v_0, belongs to H. There must be at least two v_i's in Σ_n × Θ_n, as the hyperplane H does not contain any vertex of Σ_n × Θ_n. In particular, there are at most k ≤ d vertices in the collection of v_i's that belong to H. Therefore, we can perturb the ε_{s_n,θ_n} corresponding to v_0, but not the others, to ensure that the Voronoi polyhedra do not intersect. Thus we have verified our claim that, for generic X and H, the vertices in X̄ ∪ Ȳ allow us to construct a Delaunay triangulation of C(X) satisfying (P).
Take now a generic set X with the property that the norm of each x(ŝ²_n) is strictly greater than one for ŝ²_n ∈ Ŝ²_n. Since F^X_n is an affine homeomorphism, ‖x − y‖_∞ ≤ M‖F^X_n(x) − F^X_n(y)‖ for some M > 0 and for all x, y ∈ Σ_n × Θ_n. Let δ > 0 be smaller than ζ/M. Using the construction of a subdivision described above, we now have a triangulation T_δ of C(X) in which each point of int_C(B(X)) belongs to a simplex of diameter less than δ, whose preimage under F^X_n therefore has diameter less than Mδ < ζ, giving us properties (1) and (2) of Subsection 3.10. As for property (3), if ℓ_n is the convex function associated with the Delaunay triangulation T_δ, the composition ℓ_n ∘ F^X_n is convex and linear precisely on each cell of the triangulation of Σ_n × Θ_n induced by the inverse mapping (F^X_n)^{-1} : C(X) → Σ_n × Θ_n. Our convex function takes the value one on Σ_n × {θ⁰_n} and is strictly above one elsewhere. Subtracting 1 from ℓ_n ∘ F^X_n, we obtain a convex function satisfying property (3).

Appendix B. A Lemma for Section 5
This Appendix states and proves a key lemma that was invoked in Section 5. The lemma draws on the following concept of equivalence between normal-form games, which is generated by the addition and deletion of duplicate strategies. Say that two normal-form games are equivalent if they have the same reduced normal form. Given a game G with strategy space Σ, if an equivalent game Ḡ with strategy space Σ̄ is obtained by adding duplicate strategies, then there is a natural (affine) map from Σ̄ to Σ that sends each profile σ̄ in Ḡ to an equivalent strategy σ in G; in this case we say that σ̄ projects to σ.
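The projection map for duplicate strategies can be sketched numerically. In the sketch below, the 2×2 payoff matrix is hypothetical, used only to verify that a mixed profile in the enlarged game and its projection earn the same payoff (numpy assumed):

```python
import numpy as np

# Payoffs for one player in a 2x2 game (illustrative numbers only).
G1 = np.array([[3.0, 0.0],
               [1.0, 2.0]])

# Enlarged game: add a duplicate of the second strategy (row copied verbatim).
G1_bar = np.vstack([G1, G1[1]])

def project(sigma_bar):
    """Affine map sending a mixed strategy over {s1, s2, s2'} in the enlarged
    game to the equivalent strategy over {s1, s2}: duplicate weights add up."""
    return np.array([sigma_bar[0], sigma_bar[1] + sigma_bar[2]])

sigma_bar = np.array([0.5, 0.3, 0.2])   # mixes over the duplicate pair
tau = np.array([0.6, 0.4])              # opponent's mixed strategy

payoff_bar = sigma_bar @ G1_bar @ tau   # payoff in the enlarged game
payoff = project(sigma_bar) @ G1 @ tau  # payoff of the projected profile
```

Because the duplicated row is identical, the map is affine and payoff-preserving, which is exactly the sense in which σ̄ projects to σ.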
Let C_1, …, C_k be the components of Nash equilibria of a finite game G. For each i, let c_i be the index of C_i. Choose ε > 0 such that the closed ε-neighborhoods U_i of the C_i's are pairwise disjoint. (All the norms in this Appendix will be ℓ_∞-norms.) The following lemma is the main result of this Appendix.
Lemma B.1. For each δ > 0, there exist a game Ḡ obtained from G by adding duplicate strategies and a δ-perturbation Ḡ_δ of Ḡ such that: (1) every equilibrium of Ḡ_δ is isolated and projects to a profile in U_i for some i; (2) ….

The proof of the lemma calls upon the concept of multisimplicial complexes and multisimplicial approximations, introduced in [6], which we review briefly now. We assume that the reader is familiar with simplicial complexes, which are used as building blocks for multisimplicial complexes. A multisimplex is a set of the form K_1 × … × K_m, where each K_i is a simplex (in some Euclidean space). A multisimplicial complex K is a product K_1 × … × K_m, where each K_i is a simplicial complex. The vertex set V of a multisimplicial complex K is the set of all (v_1, …, v_m) for which each v_i is a vertex of K_i. For a simplicial complex Λ, denote by |Λ| the space of the simplicial complex. The space of the multisimplicial complex K is then the product space Π_i |K_i| and is denoted |K|. For each vertex v of K, the star of v, denoted St(v), is the set of all σ ∈ |K| such that, for each i, σ_i ∈ St(v_i). A subdivision of a multisimplicial complex K is a multisimplicial complex K* = Π_i K*_i, where each K*_i is a subdivision of K_i.

Definition B.2. Let K be a multisimplicial complex and let Λ be a simplicial complex. A map f : |K| → |Λ| is called multisimplicial if for each multisimplex K of K there exists a simplex L ∈ Λ such that: (1) f maps each vertex of K to a vertex of L; (2) f is multilinear on |K|, i.e., for each σ ∈ |K|, f(σ) = Σ_{v ∈ V} f(v) · Π_i σ_i(v_i).
Remark B.3. We call the restriction of a multisimplicial map to its vertex set its vertex map. From the definition above, one sees that a multisimplicial map is completely determined by its vertex map.
A map from a multisimplicial complex to another is called multisimplicial if each coordinate map is multisimplicial in the sense of Definition B.2.
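The multilinear-extension formula in Definition B.2, by which a multisimplicial map is determined by its vertex map, can be sketched as follows; the vertex values below are hypothetical and numpy is assumed:

```python
import numpy as np
from itertools import product

def multilinear_extension(vertex_map, sigma):
    """Evaluate f(sigma) = sum_v f(v) * prod_i sigma_i(v_i) on a product of
    simplices, where sigma_i is a vector of barycentric coordinates and
    vertex_map assigns a point to each tuple of vertex indices."""
    total = None
    for v in product(*(range(len(s)) for s in sigma)):
        weight = np.prod([sigma[i][vi] for i, vi in enumerate(v)])
        term = weight * np.asarray(vertex_map[v], dtype=float)
        total = term if total is None else total + term
    return total

# Vertex map on the square (product of two 1-simplices) into R^1;
# the values 0/1 are chosen only for illustration.
vm = {(0, 0): [0.0], (0, 1): [1.0], (1, 0): [1.0], (1, 1): [0.0]}
val = multilinear_extension(vm, ([0.5, 0.5], [0.5, 0.5]))  # center of square
```

At a vertex of the product, the sum collapses to the vertex map's value, so the extension agrees with the vertex map, as the definition requires.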
Definition B.4. Given a multisimplicial complex T with |T| = Σ, a multisimplex K of T is called maximal if the dimension of K equals the dimension of Σ. Let T* be a multisimplicial subdivision of the multisimplicial complex T. Let h : |T*| → |T| be a multisimplicial map, where |T*| = |T| = Σ. A multisimplex K* of T* is called fixed (by h) if the lowest-dimensional multisimplex D of T that contains h(K*) also contains K*.
Remark B.5. If a multisimplex K* of T* in the above definition contains a fixed point of h in its interior, then necessarily it is fixed by h, which is the reason for the terminology. Observe, however, that the converse is not necessarily true.
Definition B.6. Let K be a multisimplicial complex and Λ a simplicial complex. Let g : |K| → |Λ| be a continuous map. A multisimplicial map f : |K| → |Λ| is a multisimplicial approximation of g if for each σ ∈ |K|, f (σ) is in the unique simplex of Λ that contains g(σ) in its interior.
The proof of the next claim can be found in Appendix B of Govindan and Wilson [6].
Claim B.7. Suppose that g : |K| → |Λ| is a continuous map. There exists η > 0 such that for each subdivision K * of K with the property that the diameter of each multisimplex is at most η, there exists a multisimplicial approximation f : |K * | → |Λ| of g.
Remark B.8. Let T * be a multisimplicial subdivision of a multisimplicial complex T with |T | = Σ. Let g : |T * | → |T | be a continuous map. We call a multisimplicial map f : |T * | → |T | a multisimplicial approximation of g if for each n ∈ N , f n : |T * | → |T n | is a multisimplicial approximation of g n in the sense of Definition B.6.
Proof of Lemma B.1. The proof of Lemma B.1 is inspired by the idea of the proof of Theorem 1 of Govindan and Wilson [6], which shows that a component of equilibria that is uniformly hyperstable is essential. To facilitate comparison when we refer to the proof of that theorem, we have also subdivided our proof into three steps, each corresponding to one of the steps in that proof. For Step 1, we have to extend the notion of equivalence between games from normal to strategic form. We say that a strategic-form game Ḡ with strategy space P = Π_n P_n is equivalent to the normal-form game G if there exists, for each n ∈ N, an affine surjective map g_n : P_n → Σ_n such that, letting g ≡ ×_n g_n, Ḡ_n(p) = G_n(g(p)) for all p ∈ P.
Step 1. Let BR_G be the best-reply correspondence of G and let W be a neighborhood of the graph of BR_G. If Ḡ is equivalent to G, then there is a corresponding neighborhood W̄ of the graph of the best-reply correspondence BR_Ḡ of Ḡ. Fix η > 0. In this step, we construct a strategic-form game Ḡ that is equivalent to G and has the following properties.
(1) For each n, there is a simplicial complex I_n, with |I_n| = Σ_n, and a simplicial subdivision I*_n of I_n such that: (a) the diameter of each simplex of I_n is less than η; (b) for each i = 1, …, k and j = 1, …, |c_i| there is a distinguished full-dimensional multisimplex K̄^ij of the complex |I*| × |I*| that is contained in U_i × U_i. (2) For each n, there exists a multisimplicial map f̄_n : Π_{m ≠ n}(|I*_m| × |I*_{m+1}|) → |I_n| × |I*_{n+1}| such that, letting f̄ = ×_n f̄_n: (a) the graph of f̄ is contained in W̄; (b) the only multisimplices left fixed by f̄ are the K̄^ij's; (c) the restriction of f̄ to each K̄^ij is a homeomorphism onto its image L̄^ij and has a unique fixed point σ̄^ij, whose index is the sign of c_i.
This step follows once we obtain an appropriate multisimplicial function f from Σ to itself with properties that track those listed above. The following claim constructs such a multisimplicial function.
Claim B.9. There exist a multisimplicial complex T , with |T | = Σ, and a multisimplicial subdivision T * of T with the following properties.
(1a) The diameter of each multisimplex of T is less than η. (1b) For each i = 1, …, k and j = 1, …, |c_i| there is a distinguished full-dimensional multisimplex K^ij of T* that is contained in U_i. (2) There exists a multisimplicial map f : |T*| → |T| such that: (a) the graph of f is contained in W; (b) the only multisimplices left fixed by f are the K^ij's; (c) the restriction of f to each K^ij is a homeomorphism onto its image L^ij and has a unique fixed point σ^ij, whose index is the sign of c_i.

For each i, pick points σ^ij, j = 1, …, |c_i|, in V_i. For each i, j, and n, let X^ij_n ⊆ Y^ij_n be full-dimensional simplices with σ^ij_n as their barycenter and such that: for each n, the Y^ij_n's are pairwise disjoint; and, letting Y^ij ≡ Π_n Y^ij_n, the set Y^ij is contained in V_i with the diameter of each Y^ij_n less than ζ. For each i, j, there exists a simplicial homeomorphism f̄^ij_n : X^ij_n → Y^ij_n such that σ^ij_n is the unique fixed point of f̄^ij_n and the index of σ^ij relative to f̄^ij ≡ ×_n f̄^ij_n : X^ij → Y^ij is the sign of c_i, where X^ij = Π_n X^ij_n. The maps f̄^ij define a map f̄ : ∪X^ij → ∪Y^ij. Let d̄ be the displacement of f̄ and let d be the displacement of h.
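For maps that are affine near an isolated fixed point, the index used in the claim is the sign of det(I − A), where A is the linear part of the map. A minimal numeric sketch, assuming numpy (the matrices are illustrative, not from the paper):

```python
import numpy as np

def fixed_point_index(A):
    """Index of the isolated fixed point 0 of the map x -> A x, assuming
    I - A is nonsingular: the sign of det(I - A)."""
    d = np.linalg.det(np.eye(A.shape[0]) - A)
    if abs(d) < 1e-12:
        raise ValueError("degenerate fixed point")
    return 1 if d > 0 else -1

# A contraction toward the fixed point has index +1; expanding through
# the fixed point in one direction flips the sign.
idx_plus = fixed_point_index(0.5 * np.eye(2))
idx_minus = fixed_point_index(np.diag([2.0, 0.5]))
```

This is the sense in which the homeomorphisms f̄^ij can be chosen with a unique fixed point of prescribed index ±1, matching the sign of c_i.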
By the Hopf Extension Theorem, the displacement d̄ of f̄ can be extended to the whole of Σ in such a way that: …. From (B3), the map f̄ has no fixed points in Σ \ ∪ int(X^ij). Let α > 0 be such that …. Consider now a simplicial complex T_n whose underlying space is Σ_n such that: (i) T_n contains Y^ij_n as a subcomplex for each i, j; (ii) the diameter of each simplex of T_n is less than α/2 and less than η; (iii) for each i, j there exists a maximal simplex L^(0)ij_n ∈ T_n that contains σ^ij_n in its interior; (iv) for each i, j and any multisimplex L of Y^ij that is not an L^(0)ij, L ∩ f̄(L) = ∅. As f̄_n|_{X^ij_n} : X^ij_n → Y^ij_n is an affine simplicial homeomorphism, it follows that S^ij_n = {f̄_n^{-1}(L_n) ∩ X^ij_n}_{L_n ∈ T_n} is a simplicial complex with X^ij_n as its underlying space. Denote by K^(0)ij_n ∈ S^ij_n the simplex containing σ^ij_n in its interior. Take now a simplicial complex T*_n that is a subdivision of T_n and such that: T*_n ∩ X^ij_n is a simplicial subdivision of S^ij_n; and σ^ij_n belongs to the interior of a maximal simplex K^(1)ij_n of T*_n. The multisimplicial complexes T* and T satisfy properties (1a) and (1b). It remains to construct a multisimplicial approximation f of f̄ satisfying properties (2)(a)-(c).
Taking the diameter of T_n sufficiently small above guarantees that the graph of every multisimplicial approximation of f̄ is contained in W. Fix such a diameter for T_n; taking now the diameter of T*_n sufficiently small guarantees that a multisimplicial approximation of f̄ indeed exists. Moreover, by properties (ii) and (iv) above, for each such approximation, a multisimplex that is not contained in K^(0)ij for some i, j is not left fixed. By a careful choice of the vertex map on the multisimplices of K^(0)ij we will satisfy the remaining properties required in (2).
For each i, j, n, map the vertices of K^(1)ij_n one-to-one onto the vertices of L^(0)ij_n such that the resulting multisimplicial map from K^(1)ij to L^(0)ij has σ^ij_n as its unique fixed point and its index is the sign of c_i. For a vertex v*_n of T*_n in K^(0)ij_n that is not a vertex of K^(1)ij_n, define the vertex map as follows: consider the ray from σ^ij_n through the vertex v*_n and let r(v*_n) be the point of intersection of the ray with the boundary of K^(0)ij_n. Let v_n be the closest vertex of L^(0)ij_n to f̄_n(r(v*_n)) in the carrier of f̄_n(r(v*_n)) with respect to T_n. The resulting assignment of vertices defines a simplicial approximation f_n of f̄_n from K^(0)ij_n ….

Let f be the map obtained from Claim B.9. We now define the following strategic-form game Ḡ, which is equivalent to G. For each n, the strategy set is P̄_n ≡ Σ_n × Σ_{n+1 (mod N)}, where the second factor is payoff-irrelevant. Let π_{n,n} and π_{n,n+1} be the projections to the first and second factors, respectively. Define now f̄_n : P̄_{−n} → P̄_n by letting f̄_n(p̄_{−n}) = f_n(π_{−n}(p̄_{−n}), π_{n−1,n}(p̄_{n−1})) × π_{n+1,n+1}(p̄_{n+1}). It is clear that P̄ and f̄ have the properties we set out to establish.
Step 2. Let now I_n and I*_n denote the sets of vertices of I_n and I*_n obtained in Step 1 above. For each n, let P̃_n = ∆(I_n) × ∆(I*_{n+1}). Since the vertices of P̃_n can be viewed as points in P̄_n, there is a natural map from P̃_n to P̄_n that sends p̃_n to the corresponding convex combination of vertices of I_n × I*_{n+1}. Let G̃ be the strategic-form game where the strategy polytope of player n is P̃_n and the payoff of each player n ∈ N from s̃ = ((i_n, i*_{n+1}))_{n∈N} ∈ Π_{n∈N}(I_n × I*_{n+1}) is defined as G̃_n(s̃) = Ḡ_n(s̃). Perturb now G̃'s payoffs such that, if p̃ is a vertex of a simplex K̄^ij (cf. Step 1, (2.c)), all vertices of L̄^ij_n are equally good replies against it. If η is small, this perturbation of G̃ will indeed be small. We replace G̃ with this perturbed game but, for notational convenience, still call it G̃. We show that when W is a small neighborhood of BR_G and η is also small, there exists a map g : P̃ → [0, δ/2]^R, R ≡ Σ_{n∈N} |I_n × I*_{n+1}|, such that: (1) for each n ∈ N, g_n = g⁰_n + g¹_n, where g⁰_n and g¹_n are continuous functions that are independent of p̃_n; (2) if p̃ projects to K̄^ij for some i, j, then for each player n and every pair of vertices (i_n, i*_{n+1}), (ĩ_n, ĩ*_{n+1}) of L̄^ij_n, we have g⁰_{n,(i_n,i*_{n+1})}(p̃) = g⁰_{n,(ĩ_n,ĩ*_{n+1})}(p̃); (3) if K̃ is a face of P̃ whose vertices span a multisimplex of I × I*, then for each (i_n, i*_{n+1}) ∈ I_n × I*_{n+1}, the map g¹_{n,(i_n,i*_{n+1})} is multilinear on K̃; (4) p̃ is an equilibrium of the finite game G̃_{g(p̃)} iff it projects to σ̄^ij for some i, j and the support of p̃ spans a multisimplex of I × I*; moreover, if p̃ is an equilibrium of G̃_{g(p̃)}, then it is an isolated equilibrium of this game and has the same index as its projection σ̄^ij.
We start by defining the map g⁰. The (i_n, i*_{n+1}) coordinate of g⁰_n is independent of i*_{n+1}. First we define the map on P̄ and then extend it to P̃ by the equivalence relation between strategies in Ḡ and G̃. We shall write p̄_n = (p̄_{n,n}, p̄_{n,n+1}) ∈ Σ_n × Σ_{n+1} = P̄_n. Fix n ∈ N and p̄_{−n} ∈ P̄_{−n}, and let σ_{−n} be the strategy profile in G that is equivalent to p̄_{−n}. Let g⁰_{n,i_n}(p̄_{−n}) = π_{i_n}(p̄_{−n})[r_n(p̄_{−n}) − G_n(i_n, σ_{−n})], where r_n(p̄_{−n}) ≡ max_{s_n ∈ S_n} G_n(s_n, σ_{−n}) and π_{i_n} is a Urysohn function defined on P̄_{−n} that is 1 on the inverse image under f̄_{n,n} of the closed star of i_n and strictly less than 1 elsewhere. Following the same reasoning as in Step 2 of GW, for sufficiently small η > 0 and an appropriate choice of the neighborhood W from which to obtain the map f̄ in Step 1, one guarantees that ‖g⁰‖_∞ < δ/4. We now define g¹. For each n, p̄_{−n} ∈ P̄_{−n}, and vertex (i_n, i*_{n+1}) of I_n × I*_{n+1}, let f̄_{n,n}(p̄_{−n})(i_n) and f̄_{n,n+1}(p̄_{−n})(i*_{n+1}) be, respectively, the i_n-th and i*_{n+1}-th barycentric coordinates of f̄_{n,n}(p̄_{−n}) and f̄_{n,n+1}(p̄_{−n}). Also, let σ^ij_{i_n} be the i_n-barycentric coordinate of σ^ij_n in I_n and let σ^ij_{i*_{n+1}} be the i*_{n+1}-barycentric coordinate of σ^ij_{n+1} in I*_{n+1}. For each (i_n, i*_{n+1}) ∈ I_n × I*_{n+1} that is not a vertex of K̄^ij_n (cf. Step 1, (2.c)), let g¹_{n,i_n}(p̄_{−n}) = (δ/8) f̄_{n,n}(p̄_{−n})(i_n) and g¹_{n,i*_{n+1}}(p̄_{−n}) = (δ/8) f̄_{n,n+1}(p̄_{−n})(i*_{n+1}); in case (i_n, i*_{n+1}) is a vertex of K̄^ij_n, let g¹_{n,i_n}(p̄_{−n}) = λ|S_n| σ^ij_{i_n} f̄_{n,n}(p̄_{−n})(i_n), where 0 < λ < …, along with the fact that we have already perturbed the payoffs in G̃ accordingly. We now prove (4).
Take an equilibrium p̂ of Ĝ^α. For p̂_n ∈ P̂_n, denote by p̂_{n,n} (resp. p̂_{n,n+1}) the coordinates of p̂_n in ∆(F̂*_n) (resp. ∆(F̂*_{n+1})). It follows from the construction of γ_n (resp. γ_{n+1}) that the support of p̂_{n,n} (resp. p̂_{n,n+1}) must span a simplex in F̂*_n (resp. F̂*_{n+1}). Note now that each coordinate map of g^{0,*} is multilinear on each face of P̂ that projects to a multisimplex of F̂*; g^{1,*} is also multilinear on each such face of P̂, since each of its coordinate maps is multilinear on a face of P̃ (cf. (3) of Step 2) that projects to a multisimplex of I* × I* (which F̂* × F̂* subdivides). Hence the payoff to player n of a vertex v̂_n ∈ P̂_n against p̂_{−n} in the game Ĝ^α is: Ĝ_n(v̂_n, p̂_{−n}) + g^{0,*}_{n,v̂_{n,n}}(p̂_{−n}) + g^{1,*}_{n,v̂_{n,n}}(p̂_{−n}) − αγ_n(p̂_{n,n}) + g^{1,*}_{n,v̂_{n,n+1}}(p̂_{−n}) − αγ_{n+1}(p̂_{n,n+1}). We claim that, for α > 0 sufficiently small, p̂ must project to ∪K̄^ij. Aiming at a contradiction, let α^k → 0 and let (p̂^k)_{k∈N} be a sequence of equilibria of Ĝ^{α^k} such that p̂^k projects to the complement of ∪ int(K̄^ij). Passing to a convergent subsequence if necessary, assume without loss of generality that the sequence converges to some p̂. Then p̂ is an equilibrium of Ĝ^0. Let p̃ ∈ P̃ be such that p̂ projects to p̃. Then p̃ projects to the complement of ∪ int(K̄^ij) and is an equilibrium of G̃_{g*(p̃)}, which contradicts (3.3). Thus, for all small α, there is no equilibrium of Ĝ^α that projects to a point outside K̄^ij for each i, j.
Let now K̂^ij be the multisimplex of F̂* × F̂* whose projection to P̃ is in K̄^ij and which contains σ̄^ij in its interior. Taking α > 0 sufficiently small, as we saw above, no equilibrium of Ĝ^α projects to the complement of ∪K̄^ij. Now, (3.4) and (3.3) imply that the unique equilibrium p̂ in K̂^ij must project to σ̄^ij. This proves that Ĝ^α satisfies (3.i) and (3.ii).
We now prove (3.iii). Fix i, j and let p̂^ij ∈ P̂ project to σ̄^ij. There is a component Ĉ^ij of equilibria of the game Ĝ_{ĝ(p̂^ij)} that includes p̂^ij. As the index is invariant to the addition and deletion of equivalent strategies, the index of Ĉ^ij equals the index of the equilibrium p̃^ij in the game G̃_{g*(p̃^ij)}, where p̃^ij is the point in P̃ projecting to σ̄^ij, and thus the index of Ĉ^ij is the sign of c_i. For small α, the unique equilibrium close to this component Ĉ^ij is in fact p̂^ij. Therefore p̂^ij retains the index of Ĉ^ij, giving us property (3.iii).
We now proceed to the proof of the second part, claimed at the beginning of Step 3, namely: there exist a normal-form game G* equivalent to Ĝ (obtained by adding duplicates to Ĝ) and a δ-perturbation of this game such that properties (1) and (2) in the statement of Lemma B.1 are satisfied. Consider a simplicial complex H*_n whose space is P̂_n for which there exists a convex piecewise-linear function q_n : |H*_n| → R_+ such that: q_n is linear precisely on the simplices of H*_n; and, if K*_n is the simplex of H*_n containing p̂^ij in its interior, then q_n is constant in each