Implementation via approval mechanisms

We focus on the single-peaked domain and study the class of Generalized Approval Mechanisms (GAMs): First, players simultaneously select subsets of the outcome space and scores are assigned to each alternative; and, then, a given quantile of the induced score distribution is implemented. Our main finding is that essentially for every Nash-implementable welfare optimum – including the Condorcet winner alternative – there exists a GAM that Nash-implements it. Importantly, the GAM that Nash-implements the Condorcet winner alternative is the first simple simultaneous game with this feature in the literature. © 2017 Elsevier Inc. All rights reserved. JEL classification: C9; D71; D78; H41


Introduction
In the single-peaked domain, the Nash-implementable welfare optima practically coincide with the outcomes of Generalized Median Rules (GMRs). 1 In simple terms, the outcome of a GMR is the median of a set of points that consists of: a) the voters' ideal policies and b) some exogenous values, also known as phantoms. As proved by Moulin (1980), GMRs are the unique social choice rules that satisfy efficiency and strategy-proofness, while Berga and Moreno (2009) established that strategy-proof rules which are "not too bizarre" (in the context of Sprumont, 1995) 2 are the only implementable ones.
However, one should note that the direct revelation game of each GMR need not lead to the same outcome as the GMR itself. In this respect, the direct revelation games of GMRs share a common feature with other strategy-proof mechanisms: They admit a large multiplicity of Nash equilibria, some of which produce different outcomes (see Saijo et al., 2007). For instance, the direct revelation game triggered by the pure median rule – whose outcome is the Condorcet winner alternative – exhibits a large set of equilibria: As long as every player announces the same alternative x, this constitutes an equilibrium with outcome x, since no unilateral deviation affects the median choice. 3 This leads to the following conclusion: The direct revelation game of a GMR does not Nash-implement the GMR (see Repullo, 1985 for similar results). 4 So how do we Nash-implement GMRs in a simple manner? Yamamura and Kawasaki (2013) propose the class of averaging mechanisms. Each player announces an alternative and a monotonic transformation of the average alternative is implemented. The equilibrium outcome coincides with the outcome of a GMR with an important restriction: All phantoms must be interior, which prevents, among other things, the implementation of the Condorcet winner alternative. Moreover, Gershkov et al. (2016) have recently shown that sequential quota mechanisms can also implement GMRs. 5 Indeed, being able to implement GMRs by means of simple sequential games is very important, but ideally one would like to be able to do the same using simple simultaneous games as well.
In this paper, we design the class of Generalized Approval Mechanisms (GAMs). These mechanisms are quite easy to describe and belong to the class of simultaneous voting games. First, players select subsets of the outcome space and scores are assigned to each alternative (hence, Approval). Given a subset of alternatives, two different GAMs may assign different positive scores to the same approved alternative (hence, Generalized). Then, a given quantile of the score distribution induced by the players' choices is implemented. Our main finding is that every generic 6 GMR -including the Condorcet winner alternative -can be Nash-implemented by some GAM. We explain how to derive a GAM for each GMR and we explicitly design the one that implements the Condorcet winner alternative, also known as the pure median rule. To our knowledge, this is the first simple simultaneous game that implements the Condorcet winner alternative and arguably this finding is of interest on its own. The equilibrium strategies of most players 7 take an easy "I approve every alternative at most (least) as large as the implemented alternative" form. In fact, every player with a preferred alternative to the left (right) of the implemented one approves the implemented alternative and all the alternatives to its left (right). That is, GAMs not only Nash-implement GMRs, but also promote sincerity and agreement, in the sense that most players include both their ideal policies and the implemented outcome in their approval sets.
Naturally, the present analysis relates to the wider Approval voting literature. Approval voting has been studied since Weber (1995) and Brams and Fishburn (1983), and has been shown to exhibit interesting properties in a variety of contexts: For example, it improves the quality of decisions in common value problems compared to plurality rule (Bouton and Castanheira, 2012) and leads to the sincere revelation of preferences in certain private value settings (see Laslier, 2009; Laslier and Sanver, 2010 and Núñez, 2014). As we show, in the single-peaked domain Approval voting can additionally help a society reach, essentially, every feasible welfare optimum.
In what follows we describe the setting (section 2) and present an example (section 3). Then we provide our formal results and explain how to implement the Condorcet winner through a GAM (section 4).

Basic concepts and definitions
Let A := [0, 1] denote the set of alternatives and N := {1, . . . , n} the set of players with n ≥ 2. Let U be the set of single-peaked preferences. Each player i has a utility function u_i in U, with u_i(x) the utility of player i when x ∈ A is implemented. Each player i has a unique peak, t_i, so that u_i(x') < u_i(x'') when x' < x'' ≤ t_i and u_i(x') > u_i(x'') when t_i ≤ x' < x''. 8 We let t = (t_1, . . . , t_n) stand for a peak profile and u = (u_1, . . . , u_n) ∈ U^n. A social choice function is a function f : U^n → A that associates every u ∈ U^n with a unique alternative f(u) in A. For any finite collection of points x_1, . . . , x_s in [0, 1], we let m(x_1, . . . , x_s) denote their median: m(x_1, . . . , x_s) is the smallest number m in {x_1, . . . , x_s} which satisfies #{l | x_l ≤ m} ≥ s/2 and #{l | x_l ≥ m} ≥ s/2. A social choice function is a generalized median rule (GMR) if there is some collection of points p_1, . . . , p_{n−1} in [0, 1] such that, for each u ∈ U^n, f(u) = m(t, p_1, . . . , p_{n−1}). We refer to p_1, . . . , p_{n−1} as the phantoms of the GMR. A GMR is considered to be generic if its interior phantoms – if any – are non-identical.
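The definitions above are easy to sketch numerically. The following minimal example (our own illustrative code; the function names are not from the paper) computes the outcome of a GMR by pooling the n peaks with the n − 1 phantoms and taking the median of the resulting 2n − 1 points.

```python
# A minimal sketch of a generalized median rule (GMR). The outcome is the
# median of the n reported peaks pooled with the n - 1 exogenous phantoms;
# with 2n - 1 points in total, the median is always the middle one.
# Function names are ours, for illustration only.

def median(values):
    """The smallest admissible median of a finite list of points."""
    ordered = sorted(values)
    return ordered[(len(ordered) - 1) // 2]

def gmr_outcome(peaks, phantoms):
    """Outcome of the GMR with the given phantom vector at a peak profile."""
    assert len(phantoms) == len(peaks) - 1
    return median(list(peaks) + list(phantoms))
```

For instance, for n = 3 the phantom vector (0, 1) returns the median peak (so the rule coincides with the pure median rule), while the phantoms (1/3, 2/3) reproduce the rule discussed in the example of Section 3.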
A mechanism is a function θ : S → A that assigns to every s ∈ S a unique element θ(s) in A, where S := S_1 × · · · × S_n and S_i is the strategy space of player i. Given a mechanism θ : S → A, the strategy profile s ∈ S is a Nash equilibrium of θ at u ∈ U^n if u_i(θ(s_i, s_{−i})) ≥ u_i(θ(s'_i, s_{−i})) for all i ∈ N and any s'_i ∈ S_i. Let N_θ(u) be the set of Nash equilibria of θ at u. The mechanism θ implements the social choice function f in Nash equilibria if for each u ∈ U^n, a) there exists s ∈ N_θ(u) such that θ(s) = f(u) and b) for any s ∈ N_θ(u), θ(s) = f(u).
7 If a player's peak coincides with the equilibrium outcome, then this player may be employing a different kind of strategy.
8 For simplicity, we assume that t_i ≠ t_j for any i, j ∈ N. Our results are not affected when relaxing this constraint.
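The implementation notion just defined can be illustrated with a small brute-force check. The sketch below is our own toy construction, not part of the paper: it discretizes the direct revelation game of the pure median rule for three players and enumerates its pure-strategy Nash equilibria, recovering the multiplicity discussed in the introduction – every unanimous announcement is an equilibrium, so equilibrium outcomes need not coincide with the median peak.

```python
# Brute-force Nash-equilibrium check for the direct revelation game of the
# pure median rule, on a coarse grid of alternatives. This is our own toy
# construction: it illustrates why the direct game does not Nash-implement
# the rule -- every unanimous profile is an equilibrium.

import itertools

GRID = [i / 4 for i in range(5)]      # discretized alternative space A
PEAKS = (0.0, 0.5, 1.0)               # an illustrative peak profile t

def utility(i, x):
    # single-peaked (symmetric) utility: closer to the peak is better
    return -abs(x - PEAKS[i])

def outcome(profile):
    # pure median rule: the median of the three announcements
    return sorted(profile)[1]

def is_nash(profile):
    # no player can gain by a unilateral deviation on the grid
    for i in range(3):
        for dev in GRID:
            deviated = list(profile)
            deviated[i] = dev
            if utility(i, outcome(deviated)) > utility(i, outcome(profile)):
                return False
    return True

equilibria = [p for p in itertools.product(GRID, repeat=3) if is_nash(p)]
eq_outcomes = {outcome(p) for p in equilibria}
```

Truthful reporting (0, 0.5, 1) is an equilibrium with outcome 0.5, the Condorcet winner, but so is, for example, the unanimous profile (0, 0, 0) with outcome 0; hence `eq_outcomes` contains more than the Condorcet winner.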

Generalized approval mechanisms
We let B denote the collection of closed intervals of A. 9 A GAM is a mechanism θ : B^n → A which requires each player to simultaneously play a strategy in B and determines for each strategy profile some alternative in A. For each b_i ∈ B, we write b̲_i = min b_i and b̄_i = max b_i. The set B includes elements of two different dimensions: singletons and positive-length intervals. Since each b_i is a convex set, its dimension is well-defined, and so is the dimension of each approval profile. The sets of zero-dimensional and one-dimensional strategies are respectively labeled B_0 and B_1, with B = B_0 ∪ B_1. Similarly, B^n_0 denotes the set of profiles in which every player plays a singleton and B^n_1 the set of profiles in which at least one player plays a one-dimensional strategy.
In order to state a precise definition of a GAM, we let η : R → R be a differentiable and strictly increasing function with η(0) = 0 and η(1) = 1, and q a non-negative real number. We assume that when player i submits the interval b_i, he is endowed with a weight of q + η(b̄_i) − η(b̲_i), which he distributes as a score s_x(b_i, q, η) over the alternatives x ∈ b_i; every x outside b_i receives a score of zero. If b_i is a singleton and some player announces a one-dimensional interval, we let s_x(b_i, q, η) = 0 for every x ∈ [0, 1] so that his strategy is not taken into account.
Collectively, each profile b in B^n_1 assigns to each alternative x a score of s_x(b) = Σ_{i=1}^n s_x(b_i, q, η), and we let φ(b, ·) denote the cumulative distribution generated by these scores. In other words, a GAM θ^{α,q,η} selects the α-quantile of the distribution endogenously generated by b given q and η when at least some player announces a positive-length interval; otherwise, it selects the median of the announced singletons. In what follows, we write θ rather than θ^{α,q,η}. The initial step is to show that any GAM is well-defined.
Lemma 1. For any admissible q, α and η, the associated GAM is well-defined.

An example: the median approval mechanism
In this section we present an example that illustrates how a specific GAM works for a simple class of preference profiles. We consider a society composed of three individuals with peaks such that 0 = t 1 < t 2 < t 3 = 1. The Approval mechanisms that we consider throughout have the following common structure: a) Every player simultaneously and independently announces an interval b i ∈ B, b) these intervals generate a score distribution, and c) the mechanism implements θ(b) which equals some quantile of the score distribution such as the median. The Approval mechanisms differ in how this distribution is generated and in the quantile of the distribution that is implemented.
While the general structure is discussed in the rest of the paper, we stick here to the simplest interesting score assignment process: That is, we assume that when player i submits the interval b_i, he assigns an individual score s_x(b_i) to each x ∈ [0, 1] with s_x(b_i) = 1 if x ∈ b_i and s_x(b_i) = 0 otherwise. Collectively, each strategy profile b assigns a score of s_x(b) to each alternative x with s_x(b) = Σ_{i=1}^n s_x(b_i). If at least one player submits a positive-length interval, the score distribution is the function φ(b, ·), with φ(b, z) the share of the total score assigned to alternatives weakly below z. The Median Approval mechanism associates any profile b with the median θ(b) of the score distribution (when φ is continuous, φ(b, θ(b)) = 1/2, while when all players announce a singleton, θ(b) corresponds to the median of these singletons).
We first notice that for any profile b with θ(b) ≠ t_i and b_i ∈ B_0, player i can effectively move the median of the score distribution closer to her peak, t_i ∈ (0, 1), by submitting a sufficiently small – but non-degenerate – interval containing t_i. Hence, in equilibrium it must be the case that an individual whose peak does not coincide with the outcome uses a one-dimensional strategy and, in particular, he uses [0, θ(b)] if t_i < θ(b) and [θ(b), 1] if t_i > θ(b). This is so because placing weight on alternatives to the left of the implemented one shifts the implemented alternative to the left, and vice versa.
Note that for the three-player example that we consider, the phantoms can be computed directly. In other words, when n = 3, the phantoms of the Median Approval mechanism are κ_1 = 1/3 and κ_2 = 2/3. The previous arguments suggest that: a) when t_2 < 1/3 the unique equilibrium is ([0, 1/3], [0, 1/3], [1/3, 1]) with outcome 1/3, and b) when t_2 > 2/3 the unique equilibrium is ([0, 2/3], [2/3, 1], [2/3, 1]) with outcome 2/3. But what happens when t_2 ∈ [1/3, 2/3]? Then, in any equilibrium b, player 1 still uses [0, θ(b)] and player 3 still uses [θ(b), 1], but player 2 can use a different kind of strategy and have his peak implemented. Indeed, when, for example, t_2 ∈ [1/3, 1/2], an equilibrium can be such that θ([0, t_2], [0, 4t_2 − 1], [t_2, 1]) = t_2. In these cases the equilibrium need not be unique, as the median player has many best responses, but the equilibrium outcome is unique and coincides with the peak of the median player. In Fig. 1 we present the unique equilibrium outcome of the Median Approval mechanism for all the preference profiles that we considered here. In Fig. 2 we present the scores assigned to each alternative, s_x(b), in an equilibrium of the form θ([0, t_2], [0, 4t_2 − 1], [t_2, 1]) = t_2.
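The workings of this example can be checked numerically. The sketch below is our own code; it hard-wires the score assignment used in this section (s_x(b_i) = 1 for x ∈ b_i, an assumption reconstructed from the example's weights) and locates the median of the resulting piecewise-constant score density by scanning its breakpoints.

```python
# Median of the score distribution of the Median Approval mechanism, for a
# profile in which every player submits a positive-length interval. The score
# of x is the number of intervals containing x, so the density is piecewise
# constant and we scan its breakpoints. Our own sketch, for illustration.

def median_approval(intervals):
    """Return z with phi(b, z) = 1/2 for intervals given as (lo, hi) pairs."""
    pts = sorted({x for interval in intervals for x in interval})
    total = sum(hi - lo for lo, hi in intervals)
    cum = 0.0
    for left, right in zip(pts, pts[1:]):
        if cum >= total / 2:
            return left
        # density on (left, right): number of intervals covering the segment
        dens = sum(1 for lo, hi in intervals if lo <= left and right <= hi)
        if dens > 0 and cum + dens * (right - left) >= total / 2:
            return left + (total / 2 - cum) / dens
        cum += dens * (right - left)
    return pts[-1]
```

The equilibria described above check out: the profile ([0, 1/3], [0, 1/3], [1/3, 1]) yields 1/3 (the phantom κ_1), the symmetric profile yields 2/3, and for t_2 = 0.4 the profile ([0, 0.4], [0, 0.6], [0.4, 1]) yields t_2 = 0.4.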

Formal analysis
We first characterize best replies under a GAM (Lemma 3), then prove that a GAM Nash-implements a GMR (Proposition 1) and, finally, establish that for every generic GMR there exists a GAM that Nash-implements it (Theorem 1).
Next, we assert that if a player whose peak lies to the left (right) of the outcome uses a best response, then he approves of all the alternatives to the left (right) of the implemented outcome.

Lemma 3. Let θ be a GAM and let b ∈ B^n with t_i < θ(b) (resp. t_i > θ(b)). Then b_i is a best response of player i only if b_i = [0, θ(b)] (resp. b_i = [θ(b), 1]).

Proof. We only provide a proof for the case in which t_i < θ(b) since the proof for t_i > θ(b) is symmetric. We first consider a strategy profile b with t_i < θ(b) and b_i ≠ [0, θ(b)] and argue that b_i cannot be a best response of player i; and then we consider a strategy profile b with t_i < θ(b) and b_i = [0, θ(b)] and argue that b_i is a best response of player i. Three cases arise: a) θ(b) > b̄_i, b) θ(b) < b̲_i, and c) b̲_i ≤ θ(b) ≤ b̄_i. In case a), if dim(b_i) = 0, then i can submit a sufficiently small – but non-degenerate – interval centered at t_i 11 and hence bring the implemented outcome closer to her peak. If θ(b) > b̄_i and dim(b_i) = 1, then there exists β ∈ (0, θ(b)) such that θ(b) = θ([β, θ(b)], b_{−i}). This is so because the outcome of a GAM does not depend on the specific interval that one submits when this interval contains outcomes only to the left (right) of the implemented one, but only on the total weight assigned to policies on the left (right) of the implemented outcome. We assume that i deviates to such a strategy, [β, θ(b)], that delivers the same outcome as b_i. 12 After this intermediate step, we simply consider marginal changes in β. Indeed, one can show that ∂φ(([β, θ(b)], b_{−i}), θ(b))/∂β < 0, which means that the implemented outcome θ([β, θ(b)], b_{−i}) continuously decreases when β decreases; and this clearly improves the payoff of player i. That is, b_i cannot be a best response of player i. Case b) admits a completely symmetric proof. Case c) is actually simpler since it is such that b̲_i ≤ θ(b) ≤ b̄_i, so one can consider directly marginal changes of b̲_i and/or b̄_i without the need for the described intermediate step.

Now consider that t_i < θ(b) and b_i = [0, θ(b)], and suppose that there exists some b'_i that strictly improves player i's payoff. The previous arguments rule out every such deviation; hence the supposition that b_i is not a best response is contradicted, and this concludes the argument. □

Next we establish that a GAM implements a GMR in Nash equilibria.

Proposition 1. If the mechanism θ : B^n → A is a Generalized Approval Mechanism (GAM), then: a) there is an equilibrium in pure strategies for every admissible preference profile; and b) in every equilibrium b, θ(b) = m(t_1, t_2, . . . , t_n, κ_1, . . . , κ_{n−1}).
Proof. Take some GAM mechanism θ : B n → A. The proof first establishes the existence of an equilibrium (Step A.) and then fully characterizes the unique equilibrium outcome (Step B.). For short, we write (t, κ) rather than (t 1 , t 2 , . . . , t n , κ 1 , . . . , κ n−1 ).
Step A. There is some equilibrium b of θ with θ(b) = m(t, κ).
Step A. is divided into two cases: There is either no t h with t h = m(t, κ) (Step A.I.), or there is a t h with t h = m(t, κ) (Step A.II.).
Step A.I. There is no t_h with t_h = m(t, κ). Since there is no t_h with t_h = m(t, κ), there must exist j ∈ {1, . . . , n − 1} such that κ_j = m(t, κ). Therefore, the number of elements of (t, κ) located below κ_j and the number located above κ_j both equal n − 1, which is equivalent to:

#{i ∈ N | t_i < κ_j} + (j − 1) = #{i ∈ N | t_i > κ_j} + (n − j − 1) = n − 1,

where j − 1 and n − j − 1 count the phantoms strictly lower and strictly higher than κ_j, respectively.
The previous equalities imply that #{i ∈ N | t_i < κ_j} = n − j and #{i ∈ N | t_i > κ_j} = j. Let b ∈ B(j, κ_j) be such that every player i with t_i < κ_j plays b_i = [0, κ_j] and every player i with t_i > κ_j plays b_i = [κ_j, 1]. By Lemma 2, θ(b) = κ_j and therefore θ(b) = m(t, κ). Since every player is playing a best response as defined in Lemma 3, b is an equilibrium of the game and this concludes Step A.I.
Step A.II. There is some t h with t h = m(t, κ). If there exists j ∈ {1, . . . , n − 1} such that κ j = t h , then either j = n − h or j = n − h + 1. Using the same line of reasoning as in A.I., one can show that: a) when j = n − h, any b ∈ B(n − h, t h ) is an equilibrium with θ(b) = t h and b) when j = n − h + 1, any b ∈ B(n − h + 1, t h ) is an equilibrium with θ(b) = t h .
If t_h = m(t, κ) and t_h ≠ κ_j for every j, there are n − 1 values strictly smaller than t_h in (t, κ). There are essentially two cases here: a) t_h ∈ (κ_1, κ_{n−1}) and b) t_h < κ_1 (the proof for the case t_h > κ_{n−1} is symmetric). Below, we consider both cases in turn. a) Choose j, with 1 ≤ j ≤ n − 2, such that κ_j < t_h = m(t, κ) < κ_{j+1}. Moreover, #{κ_l | κ_l < t_h} = j and #{i ∈ N | t_i < t_h} = h − 1, so that j + (h − 1) = n − 1 and hence j = n − h. For each strategy c* ∈ B, we let b = (c*, b_{−h}) denote a strategy profile in which every player i with t_i < t_h plays b_i = [0, t_h] and every player i with t_i > t_h plays b_i = [t_h, 1]. Our objective is to prove that there is some c* such that θ(b) = t_h and b is an equilibrium. By Lemma 2, it follows that if κ_{n−h} ∈ (0, 1), such a c* exists. This is so because when the rest of the players behave according to b_{−h}, h can smoothly deviate from [0, t_h] to [t_h, 1] – first, continuously increase the right bound of his interval up to 1, and, then, continuously increase the left bound of his interval up to his peak – and induce a continuous change of the implemented policy from θ([0, t_h], b_{−h}) to θ([t_h, 1], b_{−h}). In order to prove that b = (c*, b_{−h}) with θ(b) = t_h is an equilibrium, suppose by contradiction that there exists some i ∈ N with a profitable deviation b'_i. Yet, as proved by Lemma 3, any player with a peak different than t_h is playing a best response in b. Moreover, the player with peak t_h is also playing a best response since θ(b) = t_h. Therefore, b must be an equilibrium, concluding a) in Step A. b) In this case, t_h = m(t, κ) < κ_1, and hence, h = n. Therefore, in any equilibrium b, the n − 1 players with peak strictly lower than t_n play [0, t_n]. Moreover, θ([0, t_n], b_{−n}) < t_n, since for any b ∈ B(0, x), θ(b) < x for every x ∈ (0, 1); and θ([t_n, 1], b_{−n}) > t_n, since for any b ∈ B(1, x), θ(b) > x if κ_1 > x (by Lemma 2). Hence, the existence of an interval A* such that θ(A*, b_{−n}) = t_n is ensured. This, in turn, ensures the existence of an equilibrium similar to the one described in a), which concludes the proof of Step A.
Step B. Every equilibrium b of θ satisfies θ(b) = m(t, κ). Suppose, by contradiction, that there is some GAM, θ, that admits an equilibrium b with 1 > θ(b) > m(t, κ). 13 A symmetric argument applies when 0 < θ(b) < m(t, κ). Let ℓ = #{i ∈ N | t_i < m(t, κ)} denote the number of players with a peak strictly lower than m(t, κ). The rest of the proof inspects the different cases for each value of n − ℓ.

13 An equilibrium with θ(b) = 1 can be trivially ruled out since it requires that all players announce singletons. Obviously, any i with t_i < 1 can gain by deviating to [t_i, t_i + ε] for ε > 0 and small enough.

Step B.I. n − ℓ ∈ {0, n}. Assume first that there is some equilibrium b with n − ℓ = 0. It follows that ℓ = n players have a peak lower than m(t, κ). Since, by assumption, m(t, κ) < θ(b), Lemma 3 implies that each player i plays b_i = [0, θ(b)]. However, by definition θ(b) is the α-quantile of the sample generated by b given q and η. Since α ∈ (0, 1), it follows that θ(b) ∈ (0, θ(b)), which is impossible. If there is some equilibrium b with n − ℓ = n, then all players have a peak higher than m(t, κ). Hence, a contradiction similar to the case with n − ℓ = 0 arises, which concludes Step B.I.

Step B.II. n − ℓ ∉ {0, n}. Assume now that there is some equilibrium b with n − ℓ ∉ {0, n} and let ℓ' = #{i ∈ N | t_i < θ(b)} denote the number of players with a peak strictly lower than the outcome θ(b). Since, by assumption, θ(b) > m(t, κ), it follows that ℓ ≤ ℓ'. If ℓ' = n, there are n players with a peak strictly lower than θ(b), and Lemma 3 implies that each player plays [0, θ(b)], which, as in Step B.I., is impossible. A similar contradiction arises for the remaining values of ℓ'. Thus, there is no equilibrium b with θ(b) ≠ m(t, κ), which ends the proof. □

We now have all the tools that are necessary to state the main result of this paper.

Theorem 1. For every generic GMR there exists a GAM that Nash-implements it.
Proof. Take some generic GMR with phantom vector p = (p 1 , . . . , p n−1 ). We want to prove that there is some GAM with phantom vector κ = (κ 1 , . . . , κ n−1 ) that Nash-implements it. Given the result of Proposition 1, it is sufficient to show that there exists a GAM with κ = p.
Assume now that there is some pair a, b ∈ {1, . . . , n − 1} such that p a = 0 and/or p b = 1 with p i ∈ (0, 1) if i ∈ (a, b). 14 As previously argued, it must be the case that p 1 ≤ p 2 ≤ . . . ≤ p n−1 . Hence, for any s ≤ a, p s = 0 and for any t ≥ b, p t = 1.
Take now some q and α that ensure κ_a = 0 and κ_b = 1; we refer to the corresponding pair of equalities as (1). The first equality pins down q, while α depends on the value of a + b. Moreover, since 0 ≤ κ_1 ≤ κ_2 ≤ . . . ≤ κ_{n−1} ≤ 1, it follows that, for any s ≤ a, κ_s = 0 and, for any t ≥ b, κ_t = 1. If b = a + 1, then we are done, since κ = p. If b > a + 1, then by assumption, any p_j with j ∈ {a, . . . , b} ∩ {1, . . . , n − 1} satisfies p_j ∈ (0, 1). Then, given that q and α are given by (1), it is enough to suitably select η such that for any j ∈ {a, . . . , b} ∩ {1, . . . , n − 1}, p_j = η^{−1}((α(nq + j) − (n − j)q)/((n − j) − α(n − 2j))), which ensures that κ = p as wanted. □

Finally, we discuss some examples that show the usefulness of the analysis above. The first one is concerned with the implementation of the Condorcet winner. The second attempts to illustrate how to implement GMRs with interior phantoms.
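The closed-form expression for the phantoms that appears in the proof above is easy to check numerically. The sketch below (our code) evaluates it for the identity η, in which case η^{−1} is also the identity.

```python
# Phantom kappa_j of a GAM with parameters (q, alpha) and the identity eta,
# as given by the expression in the proof of Theorem 1.

def kappa(j, n, q, alpha):
    return (alpha * (n * q + j) - (n - j) * q) / ((n - j) - alpha * (n - 2 * j))
```

With q = 0 and α = 1/2 this yields κ_j = j/n (so κ_1 = 1/3 and κ_2 = 2/3 for n = 3, matching the Median Approval phantoms of Section 3), while q = 1, α = 1/2 and n = 3 give κ_1 = 0 and κ_2 = 1, consistent with the pure median rule of Example 1 below.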
Example 1: Implementing the Condorcet winner. Let N = {1, 2, 3} be the set of players with t_1 < t_2 < t_3, and set q = 1, α = 1/2 and η(x) = x. Namely, each player is endowed with a weight of 1 + b̄_i − b̲_i and the outcome selected corresponds to the median of the distribution generated by b. For short, we let θ(b) denote the mechanism outcome and φ(b, z) the cumulative distribution associated to any profile b. The unique equilibrium outcome of this game is t_2, the median of the peaks and the Condorcet winner policy.
However, in the precise case in which m(t_1, t_2, t_3, 1/3, 2/3) = 1/3 (the case m(t_1, t_2, t_3, 1/3, 2/3) = 2/3 being symmetric), the logic is different. Indeed, the mechanism admits a unique equilibrium b* with b*_1 = b*_2 = [0, 1/3] and b*_3 = [1/3, 1]. In general, if the equilibrium outcome coincides with a phantom and not with a type, there is a unique equilibrium (all players playing either to the left or to the right of the outcome), whereas this is not the case when a player's peak is the equilibrium outcome (this player can play in several ways, while the rest of the players play either to the left or to the right of the outcome).