Doubts and Dogmatism in Conflict Behavior

This paper studies a game of conflict in which two individuals fight in order to choose a policy. Intuitively, conflicts will be less violent if individuals entertain the possibility that their opponent may be right. Why is this attitude so difficult to observe? To answer this question, this paper considers a model of indoctrination where altruistic advisors (such as preachers or parents), after receiving signals from Nature, send messages to the participants in the conflict. In some cases, as a result of indoctrination, both individuals never doubt the possibility of being wrong, although all available information suggests otherwise. In other cases, one of the two individuals is excessively reasonable: he believes that the opponent may be right even when all the evidence indicates beyond any doubt that the policy preferred by the opponent is suboptimal. The common feature in both cases is that information is distorted, although in different directions. The model has a rich set of predictions concerning the incidence and intensity of conflict and the evolution of indoctrination strategies over time.


Introduction
It is intuitive that individuals will be more (resp. less) willing to exert effort in a conflict when they believe that the conflict has high (resp. low) stakes. As suggested in the quote by Karl Popper (1963), violence in conflicts can be reduced if individuals entertain the possibility that they may be wrong and that their opponent may be right. How do people form these beliefs? Various episodes of escalation in ideological violence across the world suggest that individuals often engage in a great deal of reality distortion, which leads them to negate the possibility of being wrong. Other evidence seems to suggest that in some other cases reality is distorted in the opposite direction: some individuals seem to overestimate the possibility that the opponent may be right. 1 In this paper, we build a model where two individuals play a game of conflict over an ideological dimension. In the context of our model, we intend to understand whether, and in which direction, beliefs in conflict behavior are distorted.
Our first point of departure is to assume that individuals' preference rankings over policy alternatives are state-dependent and that the current state of the world is not observable by the two participants in the conflict. This implies that the two parties cannot be ex-ante certain about the optimality of the policy that they are trying to impose. The second crucial feature of our model is that each opponent relies on the information provided by an altruistic advisor (such as a preacher or a parent), who is assumed to be better (although not necessarily perfectly) informed about the current state.
Since the general setup with two opponents and two advisors is somewhat involved (see Section 3), in Section 2 we first focus on a simpler setup with three individuals: a "student", an "opponent", and a "preacher" who is (more or less) altruistic vis-à-vis the student. 2 To justify the asymmetry between the two parties of the conflict, in Section 2 we suppose that the opponent's preferred policy is constant regardless of the state of the world. This implies that the opponent does not need to be informed about the current state in order to know which policy maximizes his utility. In contrast, we suppose that the student's preferences are state-dependent. More specifically, in one state of the world the optimal policy of the student is different from that of the opponent, while in another state of the world the policy preferred by the opponent is also optimal for the student. Moreover, we assume that the initial prior of the preacher and of the student is that the state where preferences are not aligned is more likely.
The timing of our model is as follows. Nature privately sends the preacher a signal that is (not necessarily fully) informative about the current state; the preacher updates his prior and decides which message to send to his student. Upon receiving a message from the preacher, the student naively forms his beliefs about the current state. Then, both individuals simultaneously decide their effort levels in the conflict. We assume that the individual who exerts the highest effort wins and is able to impose his preferred policy.
In this paper, we solve the information transmission game between the preacher and his student as well as the subsequent game of conflict between the student and the opponent. The first questions that we intend to answer are the following. When facing an opponent that has no doubts, does the preacher have an incentive to send a truthful message? If not, does the preacher have an incentive to remove or instill doubts in his student?
A preview of our results is the following. Whether or not the preacher is truthful depends on a crucial parameter: the prior probability of being in a state where preferences are not aligned. In particular, manipulation of information does not take place when this prior probability is sufficiently low. Since we expect this prior probability to be high in a heterogeneous society, this suggests that manipulation of information is less likely in homogeneous societies.
Second, we show that in societies that are sufficiently heterogeneous, truthful reporting does not generally take place. The type of manipulation that is observed depends on another crucial parameter of the model: the degree of the preacher's altruism, which measures how much the preacher internalizes the effort cost exerted by the student.
When altruism is low, we find that the preacher induces a dogmatic attitude in his student by removing any doubt that the student may have had about the possibility that the opponent's preferred policy is optimal for him. This occurs even when the signal that the preacher receives from Nature indicates that the opponent may be right.
As a result, the dogmatic student strenuously fights because he incorrectly excludes the possibility that the policy that the opponent would choose may be optimal. This leads to conflicts that are more violent than the ones we would have observed if information had not been manipulated.
When altruism is high (or even perfect), we obtain that the preacher induces a skeptical attitude by always instilling in his student the doubt that the opponent may be right, even when the evidence received by the preacher indicates that the policy that the opponent would choose is certainly not optimal. As a result, the skeptical student exerts little effort because he incorrectly entertains the possibility that the policy that the opponent is trying to impose may be optimal also for him. Skepticism leads to more moderate conflicts, but it also leads to asymmetric outcomes: the opponent wins and is able to implement his preferred policy more often than the student.
It should be stressed that the incentive to manipulate beliefs does not arise here because agents derive utility from the anticipation of future payoffs, as in Akerlof and Dickens (1982). 3 In this model, the preacher manipulates information in order to affect his student's behavior in the conflict and, due to the existence of strategic interdependence between agents' effort decisions, to also affect the behavior of the opponent. The incentive to induce a dogmatic attitude can be easily understood: removing doubts has a motivating effect because it induces the student to exert higher effort. Clearly, this effect is more valuable to the preacher the lower his altruism parameter. But why would a preacher ever want to instill doubts in his student after learning that the opponent's preferred policy is not optimal? Instilling doubts decreases the probability that the student enters the conflict and, consequently, increases the probability that the suboptimal policy is implemented. This is clearly welfare reducing for both the preacher and the student. However, instilling doubts has a moderating effect on the conflict: it decreases the average effort exerted by both opponents and reduces the conflict's Pareto inefficiency. Then, conditional upon the student exerting a positive effort, the preacher obtains a larger payoff in the conflict. In contrast to the motivating effect, the moderating effect is more valuable to the preacher the higher his altruism parameter.
One can then show that if the preacher is sufficiently altruistic, the moderating effect dominates and, consequently, skeptical attitudes may be observed. The good news is that truthful reporting is also an absorbing state: societies where information is not manipulated give incentives to conduct research, which improves the effectiveness of future research and provides future generations with even more incentives to report truthfully. On the contrary, we show that skepticism cannot be observed for a long period of time. Luckily, it is not replaced by dogmatism but by truthful reporting.
Finally, in Section 3 we consider a model where the utilities of both opponents are state-dependent and where each individual is associated with a preacher. In this case, we show that besides the motivating effect, removing doubts also has a preempting effect: the preacher induces the opponent to exert lower effort without decreasing the effort level of his own student. As a result of this effect, we obtain that one of the two preachers will always induce a dogmatic attitude. This finding confirms Popper's pessimism about the possibility of observing an attitude of reasonableness in both opponents.
This model yields some predictions about the incidence and the intensity of conflict.
In particular, the results confirm the intuition that more heterogeneous societies (that is, societies that are ex-ante more likely to disagree) face a higher risk of entering into a conflict. More surprisingly, conditional on conflict occurring, the intensity of conflict may not be monotone in the degree of ex-ante heterogeneity. The latter result occurs because, as described above, in more divided societies one of the two individuals may be induced to have a skeptical attitude, which reduces the overall effort exerted in the conflict.
This paper is related to a recent literature that deals with other examples of distorted collective understanding of reality, such as anti- and pro-redistribution ideologies (Bénabou, 2008; Bénabou and Tirole, 2006), over-optimism (or over-pessimism) about the value of existing cultural norms (Dessi, 2008), contagious exuberance in organizations (Bénabou, 2009), and no-trust-no-trade equilibria due to pessimistic beliefs about the trustworthiness of others (Guiso et al., 2008). A common trait of these phenomena is that individuals rely on distorted evidence about the current state of the world. In Bénabou (2008, 2009), the individuals themselves distort their own processing of information. Here, instead, we consider a model of indoctrination where the opponents in the conflict receive (possibly manipulated) information from their preachers (or parents). 4 Contrary to Guiso et al. (2008), where parents can perfectly choose the beliefs of their children, indoctrination possibilities are more limited here because preachers can affect students' beliefs only by misreporting the private signals that they have received. In contrast to Bénabou (2009), where censorship and denial occur because individuals have anticipatory feelings, in our model a preacher may decide to misreport the truth for a different set of reasons: to motivate his own student (a similar motive is also present in, for instance, Tirole, 2002, 2006) and also, because of the existence of strategic interdependence between agents' effort decisions, to affect the strategy of the opponent. 5 Notice that the latter, but not the former, motive is also present if preachers are perfectly altruistic. This implies that in our model misreporting may occur even when the utility of the preacher coincides with that of the student. 6 This paper is also related to the literature on social conflict.
Starting from the classic contributions by Grossman (1991) and Skaperdas (1992), the literature has developed theoretical models to study the determinants of social conflict. 7 Recently, Caselli and Coleman (2006) and Ray (2008, 2009) have focused on the role of ethnic divisions; Persson (2008a, 2008b) have investigated the economic determinants of social conflict, while Weingast (1997) and Bates (2008) have studied the importance of institutional constraints. It should be noticed that in virtually all papers on the subject, the parties in the conflict fight over a given amount of resources. In contrast, we consider here a conflict over an ideological dimension, which we expect to be more susceptible to beliefs' manipulation. Two recent papers have also studied how the outcome of a conflict (or of bargaining under the threat of war) can be manipulated. In Jackson and Morelli (2007)

The remainder of the paper is as follows. In Section 2, we analyze the basic setup with one preacher, one student and one opponent. Section 3 considers a slightly different setup with two preachers and two students. Section 4 concludes.

4 However, as discussed in Bénabou and Tirole (2006), a model of indoctrination is formally identical to a model where individuals with imperfect willpower distort the information they have received to affect their effort decisions in the future. See also the discussion at the end of Section 2.1. 5 In Bénabou (2009) there is no strategic interdependence between agents' effort decisions. 6 In Carillo and Mariotti (2000) and Tirole (2002, 2006), a necessary condition to have strategic ignorance or beliefs manipulation is disagreement between the multiple selves (that is, time-inconsistent preferences). See also the classic model of strategic information transmission of Crawford and Sobel (1982), where the sender has no incentive to misreport if he has the same utility as the receiver.

The Model
Consider a model with three players: A, B, and Â. Individuals A and B are assumed to play a game of conflict. Individual A is associated with Â, whose role is to provide information to A. Individual Â is assumed to be altruistic towards A. Throughout this paper, we shall refer to Â as the "preacher" and to A as the "student". Alternatively, one could think of Â and A as, respectively, a parent and a son, or as two multiple selves that exist at different times within the same individual. 8 It is important to emphasize that the game of conflict analyzed here does not concern the division of a given amount of resources. 9 Instead, the conflict is over the choice of a policy x ∈ X. We will assume that X includes only two alternatives, a and b. The model is sufficiently general to admit various interpretations. For example, it could describe a conflict between two political factions in order to decide the type of economic policy (government intervention vs. laissez faire) or the type of constitution (theocracy vs. secular democracy) to adopt in the country. Also, the model could offer insights into more innocuous conflicts: for instance, a couple choosing between two dining options. Players' utilities are assumed to depend on x but also on the current state of the world θ ∈ Θ. In most of this paper, with the exception of Section 3, we assume that there are only two possible states of the world: Θ = {θ_1, θ_D}. The state is randomly drawn by Nature. In state θ_1 we assume that the preferences of A and B are aligned: the policy that maximizes the utility of both individuals is the same. In state θ_D we assume instead that individuals disagree on the correct policy to implement: the policies that maximize the utility of the two individuals are different. Throughout the paper we will denote θ_1 as the state of agreement and θ_D as the state of disagreement.
The assumption that individuals with different views may sometimes agree seems quite natural. Individual i's utility takes the form u_i(x, θ) − c_i, where c_i is the cost of effort exerted in the conflict and u_i(x, θ) is a term that depends on the current state θ and on policy x. 10 More specifically, we will assume that in the state of agreement θ_1 policy b is optimal for both individuals. Conversely, in the state of disagreement θ_D individual A's preferred policy is a, while B's preferred policy is b. The following matrix summarizes the preferred policy of each individual in each state:

creating stories against poor (or rich) minorities in order to block (or pass) redistribution policies.
State | A's optimal policy | B's optimal policy
θ_1   | b                  | b
θ_D   | a                  | b

For simplicity, it is assumed that the term u_i(x, θ) is either zero or one: it is equal to one if the appropriate policy for individual i in state θ is selected, and zero otherwise.
More formally, u_i(x, θ) = 1 if x is the optimal policy for i in state θ, and u_i(x, θ) = 0 otherwise. As mentioned above, Â is assumed to be (more or less) altruistic towards A. His utility is u_A(x, θ) − λ c_A, with 0 ≤ λ ≤ 1. When λ = 1, the utility of Â coincides with that of A. When λ < 1, the preacher is not fully altruistic vis-à-vis his student: Â does not fully internalize the cost of effort exerted by A. This seems quite natural. After all, c_A is the effort exerted by A, not by Â. However, notice that the preacher does not disagree with his student on the right policy to adopt in each state θ.
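As a minimal numerical sketch of this payoff structure (the state and policy labels, and the name lam for the altruism weight, are ours, chosen for illustration):

```python
# Optimal policy for each individual in each state (from the matrix above).
OPTIMAL = {
    ("theta_1", "A"): "b", ("theta_1", "B"): "b",
    ("theta_D", "A"): "a", ("theta_D", "B"): "b",
}

def u(i, x, theta):
    """u_i(x, theta): 1 if policy x is optimal for i in state theta, else 0."""
    return 1.0 if OPTIMAL[(theta, i)] == x else 0.0

def student_utility(x, theta, c_A):
    """A's realized utility: policy payoff minus own effort cost."""
    return u("A", x, theta) - c_A

def preacher_utility(x, theta, c_A, lam):
    """The preacher weighs the student's effort cost by lam in [0, 1]."""
    return u("A", x, theta) - lam * c_A
```

With lam = 1 the two utilities coincide; with lam < 1 the preacher under-weights the student's effort cost while still agreeing with him on the right policy in each state.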
We assume incomplete information about the current state of the world. Individuals have a common prior on Θ. The prior probability that all agents assign to the state of disagreement is denoted by P(θ_D). We will assume that P(θ_D) ∈ (1/2, 1): that is, the two individuals are (ex-ante) more likely to be in a state of disagreement than in a state of agreement. To some extent, P(θ_D) can be viewed as a measure of societal heterogeneity. In fact, it seems intuitive that two randomly selected individuals from a heterogeneous society are likely to disagree on various issues; consequently, we expect them to have a high prior P(θ_D).
Finally, it is important to notice the asymmetry between A and B. Individual B, unlike A, does not need to know the current state in order to decide which policy to adopt in case of victory: he has no doubt that b is the appropriate policy. On the contrary, A needs to know the current state of nature in order to know which is the appropriate policy to adopt. 11

Timing and Information Structure
The period is divided into three sub-periods: t = 0, 1, 2. At t = 0, the message game between Â and A takes place. At t = 1, A and B play a game of conflict. At t = 2, the winner decides the policy. See Figure 1 for the timing. We now discuss each stage in detail.
At t = 0, Nature sends Â a signal s ∈ {s_CAU, s_IDY} which is (not necessarily fully) informative about the current state θ. It is crucial to assume that this signal is privately observed by Â. We now describe each signal. Signal s_CAU is perfectly informative and reveals that the state is θ_D. The subscript CAU stands for "conflict as usual" because, unconditional on the signal, θ_D is the most likely state. Conversely, signal s_IDY is not perfectly informative. It indicates that the state may not be a state of conflict. The subscript IDY stands for "idiosyncratic" because this signal suggests that the state may not be θ_D.
The conditional probabilities of receiving signals s_CAU and s_IDY in state θ_1 are P(s_CAU | θ_1) = 0 and P(s_IDY | θ_1) = 1. In state θ_D, they are P(s_CAU | θ_D) = γ and P(s_IDY | θ_D) = 1 − γ. The parameter γ can be viewed as a measure of the precision of Nature's signals. Consequently, upon receiving message m_Â, A's posterior, which is denoted by μ_A, is equal to (5) when m_Â = s_IDY and is equal to (6) when m_Â = s_CAU. The naivete assumption is somewhat justified by the particular relationship between preacher and student and by the assumption that Â is altruistic vis-à-vis A. 12 Finally, it is important to notice that the preacher cannot fabricate new evidence that would allow him to perfectly choose the posterior of his student. Instead, we assume here that Â can affect A's beliefs only by misreporting the signal received from Nature. 13 At t = 1, we posit the following game of conflict. Individuals A and B simultaneously choose effort levels c_A and c_B, where c_A, c_B ≥ 0. The probability that i wins the contest, given his own effort decision and that of his opponent, is equal to one if his effort exceeds the opponent's and zero if it falls short. In words, the individual who exerts the highest effort wins with probability one. This technology of conflict, which is extremely sensitive to effort differences, turns out to be analytically tractable for our purposes. 14 An effort strategy for i specifies an effort level for any message m_Â. 15 12 We briefly discuss what happens when A is not naive at the end of Section 2.2.3 and in footnote 20. 13 A similar assumption is also made in Bénabou and Tirole (2006), Bénabou (2008, 2009), and Dessi (2008). 14 In the social conflict literature, this technology of war is considered, for instance, by Jackson and Morelli (2007, ex. 3). This type of contest, known in the literature as an all-pay auction, has also been considered in the lobbying and rent-seeking literature: e.g., Ellingsen (1991), Baye et al. (1993), and Che and Gale (1998).
For a survey of other technologies of conflict, see Garfinkel and Skaperdas (2007). 15 Notice that, due to strategic interdependence in the game of conflict, m_Â affects the effort decision of B indirectly, through its effect on A's beliefs.
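The student's naive belief after each message can be sketched with Bayes' rule (an illustrative computation; gamma denotes the signal precision and prior_d the prior probability of the disagreement state, and the function name is ours):

```python
def posterior_disagreement(prior_d, gamma, message):
    """Naive posterior probability of theta_D after taking the message at face value.

    Signal structure: P(s_CAU | theta_1) = 0, P(s_IDY | theta_1) = 1,
                      P(s_CAU | theta_D) = gamma, P(s_IDY | theta_D) = 1 - gamma.
    """
    if message == "s_CAU":
        return 1.0  # s_CAU can only arise in the disagreement state
    # message == "s_IDY": Bayes' rule over the two states
    like_d = (1.0 - gamma) * prior_d   # theta_D sends s_IDY with prob 1 - gamma
    like_1 = 1.0 * (1.0 - prior_d)     # theta_1 always sends s_IDY
    return like_d / (like_d + like_1)

# Example: a fairly heterogeneous society, moderately precise signals.
mu = posterior_disagreement(0.8, 0.5, "s_IDY")  # 0.4 / 0.6 = 2/3
```

Note that the posterior after s_IDY increases with the prior P(θ_D), a monotonicity used in the comparative statics discussed later.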
At t = 2, the winner is selected and picks the policy. The decision strategy D_i specifies the policy decision of i in case of victory.
The equilibrium of the game we have just described is quite standard. At each stage, players maximize their expected utility given their beliefs at that stage and given the strategies of the other players. The only non-standard assumption is that A naively believes the message sent by Â. Before moving to the characterization of the equilibrium, we stress that our model of indoctrination could be interpreted as an intrapersonal game between two multiple selves within the same individual. 16 In lieu of assuming that information is provided by Â, we could assume that A is able to observe s. If A discounts future payoffs using quasi-hyperbolic discounting, at t = 0 he may have an incentive to engage in denial in order to influence the game of conflict between his other self at t = 1 and B. More in particular, suppose that from the perspective of time t = 0, the discount factor between t = 1 and t = 2 is 1/λ > 1, while the discount factor between t = 0 and t = 1 is 1. However, due to time-inconsistent preferences, at t = 1 (when the effort decision is made) A discounts the payoff at t = 2 (when the policy is implemented) with a discount factor equal to 1. It is easy to verify that the intertemporal utility of A as of t = 0 is equal to 1/λ times the utility in (2), while the intertemporal utility of A at t = 1 would look like (1). As a result, the equilibrium message strategy of the preacher would coincide with the censoring strategy of an individual with imperfect willpower.

Equilibrium Characterization
We solve the model by backward induction.

Policy Decisions
At t = 2, the decision rule of individual B in case of victory in the conflict is trivial: he picks b.
The decision by A is also straightforward: A picks a only if his posterior probability of being in a state of conflict is greater than 1/2. That is, A picks a if μ_A(m_Â) > 1/2, and picks b otherwise.

The Game of Con ‡ict
We now determine the effort decisions at t = 1. At the beginning of t = 1, both A and B observe the message m_Â sent by Â. Individual B knows that A is naive and, consequently, he is able to figure out μ_A(m_Â), the probability assessment of player A of being in state θ_D. To find the equilibrium in the game of conflict, two cases must be considered. First, suppose μ_A(m_Â) ≤ 1/2. In this case, A agrees with B that b is the correct policy to adopt. Then, c_A = c_B = 0. Second, suppose μ_A(m_Â) > 1/2. In this case, a conflict is inevitable. As we will see in Proposition 1, the equilibrium is in continuous mixed strategies. Let G_i(·) denote the equilibrium cumulative distribution of individual i's effort. The expected payoff to A from exerting effort c_A is

G_B(c_A) μ_A(m_Â) + (1 − G_B(c_A)) (1 − μ_A(m_Â)) − c_A.    (7)

That is, with probability G_B(c_A) individual A wins (this occurs because B exerts effort c_A or less) and implements policy a, which gives A an expected payoff equal to μ_A(m_Â). With complementary probability, B wins and implements b, which gives A an expected payoff equal to 1 − μ_A(m_Â). It should be transparent from (7) that the stakes in the conflict (i.e., the payoff difference between winning and losing) for A are increasing in μ_A(m_Â). The reason is twofold. First, the higher μ_A(m_Â), the stronger A's confidence in the optimality of policy a; this implies that the expected payoff of winning is increasing in μ_A(m_Â). Second, higher values of μ_A(m_Â) imply fewer doubts about the possibility that policy b could be optimal; the expected payoff of losing is then decreasing in μ_A(m_Â). We can rewrite (7) as

(1 − μ_A(m_Â)) + G_B(c_A) (2 μ_A(m_Â) − 1) − c_A.    (8)

From expression (8), notice that A will never bid more than 2 μ_A(m_Â) − 1. The reason is easily understood: an effort level strictly greater than 2 μ_A(m_Â) − 1 would at most allow A to win with probability one. One can see that by exerting an effort level equal to zero, A would obtain a greater payoff.
Note that A's valuation goes to zero when μ_A(m_Â) goes to 1/2. In fact, when the two states become equally likely, A has weak reasons to enter into a conflict: eventually, when μ_A(m_Â) = 1/2, player A is willing to let B decide and pick b. The expected payoff to B from exerting effort c_B is instead

G_A(c_B) − c_B.

Note that B's valuation is 1, which is (weakly) greater than A's valuation. This is intuitive: B has no doubts that b is the right policy. Then, the stakes in the conflict for B are higher than for A, since policy a is for sure not the optimal policy for B.
The game of conflict that we have just described has the following unique equilibrium.
PROPOSITION 1: (Hillman and Riley, 1988) The equilibrium cumulative distribution functions of effort levels by A and B are, respectively, G_A(c) = 2(1 − μ_A(m_Â)) + c and G_B(c) = c / (2 μ_A(m_Â) − 1), for c ∈ [0, 2 μ_A(m_Â) − 1]. In particular, A has a mass point of probability 2(1 − μ_A(m_Â)) of exerting zero effort. Thereafter, A's mixed strategy is also a uniform distribution over the interval [0, 2 μ_A(m_Â) − 1].

The proof of Proposition 1 is contained in the appendix. A few features of the equilibrium described in Proposition 1 are worth noting. First, the maximum effort level of both individuals is given by 2 μ_A(m_Â) − 1, the valuation of the lower-valuing individual. This suggests, as we will see in the next section, that by instilling doubts in his student, preacher Â is able to reduce the escalation of violence in the conflict. Second, the lower-valuing individual exerts zero effort with positive probability, which is decreasing in μ_A(m_Â); in contrast, individual B always enters the conflict. Finally, conditional upon exerting a positive effort, A adopts the same uniform distribution as B. Using the characterization of Proposition 1, for any given s, and knowing the preacher's equilibrium message strategy, we can compute the expected total effort in the conflict, (9). Not surprisingly, (9) is increasing in μ_A(m_Â). We now introduce a definition that will often be used in the paper.
Definition: A total conflict is defined as a conflict where the valuation for winning is 1 for both individuals.
That is, a total conflict arises when μ_A(m_Â) = 1. In this case, individual A (possibly incorrectly) expects to receive zero in case of loss and one in case of victory.
From the results of Proposition 1 we know that when μ_A(m_Â) = 1, both players enter with probability one and effort is distributed uniformly on the interval [0, 1]. Notice that total conflicts are particularly inefficient. In fact, both A and B expect to receive zero from a total conflict. The two individuals would then be better off if they could commit to exert zero effort and toss a coin to decide the winner.
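The equilibrium objects of Proposition 1, and the full dissipation in a total conflict, can be sketched numerically. This is an illustrative computation under the uniform mixed strategies stated in Proposition 1 (the expected-effort expression is derived from them, and the Monte Carlo check at the end is a rough numerical illustration, not part of the model):

```python
import random

def equilibrium(mu):
    """All-pay auction equilibrium when A's belief in theta_D is mu > 1/2.

    Valuations: v_A = 2*mu - 1 for A and 1 for B (B has no doubts).
    """
    v = 2.0 * mu - 1.0                       # common upper bound on bids
    G_B = lambda c: min(c / v, 1.0)          # B: uniform on [0, v]
    G_A = lambda c: min((1.0 - v) + c, 1.0)  # A: atom of size 1 - v at zero
    total_effort = v / 2.0 + v ** 2 / 2.0    # E[c_B] + E[c_A]
    return G_A, G_B, v, total_effort

# In a total conflict (mu = 1) both bid uniformly on [0, 1] and the stakes
# are fully dissipated: each side's average payoff is approximately zero.
random.seed(0)
n = 200_000
payoff_a = payoff_b = 0.0
for _ in range(n):
    c_a, c_b = random.random(), random.random()
    if c_a > c_b:
        payoff_a += 1.0   # A wins and implements a
    else:
        payoff_b += 1.0   # B wins and implements b
    payoff_a -= c_a       # effort is sunk for both players
    payoff_b -= c_b
```

At mu = 0.75, for instance, the bid cap is 0.5, A stays out with probability 0.5, and expected total effort is 0.375; both quantities increase with mu, consistent with the moderating effect of instilling doubts.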

Message Strategies
Depending on the underlying parameters (namely, γ, λ, and the initial prior of being in state θ_D), we will show (see Propositions 2 and 3) that three message strategies may occur. First, there exists a region of parameter values where the preacher reports Nature's signals in a truthful manner. Second, for other parameter values we obtain that Â always sends message s_CAU regardless of the actual signal received from Nature. In this case, we say that Â induces a dogmatic attitude in his student. 17 That is, Â removes any doubt from A even when the actual signal sent by Nature is noisy. Finally, there exists a third region of parameter values where Â always sends message s_IDY regardless of the actual signal. In this other case, we say that Â induces a skeptical attitude in his student. 18 That is, A is induced to doubt the optimality of policy a even when s is perfectly informative and indicates that a is the optimal policy to adopt.
To begin with, in Lemmas 1 and 2 we compute the payoffs to Â for each message and for each of Nature's signals. 17 According to Popper (1963, p. 50), "dogmatic attitude is [...] related to the tendency to verify our laws and schemata by seeking to apply them and to confirm them, even to the point of neglecting refutations." Following the pioneering work of Rokeach (1960), the psychological literature has investigated dogmatism as a personality trait and developed various measures (such as the Rokeach Dogmatism Scale) to assess the extent to which individuals' belief systems are open or closed. 18 Throughout the paper we use the word skepticism to indicate an attitude of (systematic) doubt.
LEMMA 1: Suppose Â receives signal s_CAU. If he sends the truthful message, his expected payoff is given by (10). If instead Â sends the false message s_IDY, his expected payoff is given by (11).

LEMMA 2: Suppose Â receives signal s_IDY. If he sends the truthful message, his expected payoff is given by (12). If instead Â sends the false message s_CAU, his expected payoff is given by (13).

The proofs of Lemmas 1-2 are contained in the appendix. To understand (11) and (12), notice that from Proposition 1 we know that with probability 2 μ_A(s_IDY) − 1 individual A enters the conflict upon receiving message s_IDY. Conditional on A exerting a positive effort, both individuals have the same probabilities of winning. Notice that expression (12) contains an extra term compared to (11). This occurs because, whenever A exits the conflict, Â expects to obtain a positive payoff when s = s_IDY but not when s = s_CAU. Finally, expressions (10) and (13) are the preacher's utilities from inducing A to play a total conflict. Note that the lower λ, the higher the preacher's utility from a total conflict.

To understand Â's choice between sending a truthful message and sending a false message, we need to compare expression (10) with expression (11) and expression (12) with expression (13). It turns out that λ is a crucial parameter in these comparisons. Proposition 2 shows that when λ is above 1/2, Â may have an incentive to always send message s_IDY regardless of the actual s. When instead λ is below 1/2, Proposition 3 shows that Â may have an incentive to always send message s_CAU. The intuition behind these results is the following. On the one hand, Â has an incentive to remove A's doubts about the possibility that B may be right in order to increase A's effort in the conflict. This motivating effect is present in our model because the preacher does not fully internalize the cost of effort of A. On the other hand, Â may want to instill doubts in A to reduce the inefficiency of the game of conflict. To understand this moderating effect, recall from Proposition 1 that if A has more doubts, conflicts are less violent (hence, less inefficient) because the equilibrium effort levels of both players decrease. The cost of instilling doubts when doubts are not justified by evidence (that is, when s = s_CAU) is that A exits the conflict with higher probability, and b, which is suboptimal for A in state θ_D, is more often implemented. The preacher then chooses the message strategy that optimally solves the trade-off between, on the one hand, inducing A to enter the conflict more often but obtaining a smaller return whenever A enters the conflict and, on the other hand, making A enter less often but obtaining a larger return conditional on A entering the conflict. The importance of the two effects depends, among other things, on λ. Consider, for instance, a preacher with high λ: the motivating effect is not very valuable for him because his expected payoff from a total conflict is close to zero (see Lemmas 1 and 2).
Therefore, a sufficiently altruistic preacher would rather increase the expected payoff of a conflict than maximize the probability that A exerts positive effort. The converse holds for a preacher with low λ: his expected payoff from a total conflict is so large that he always prefers to maximize the probability that A enters the conflict, even at the cost of inducing a total conflict. This is why we may observe skeptical (resp. dogmatic) attitudes when λ is high (resp. low).
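The trade-off just described can be illustrated numerically. The exact expressions (10)-(13) are given in the paper; the functional forms below, (1 − λ)/2 for the total-conflict payoff and (2π − 1)(1/2 − λ(2π − 1)/2) for the payoff from instilling doubts when s = s_CAU, are our reconstruction from the verbal description in Lemmas 1-2 and should be read as a hedged sketch, not as the paper's own formulas.

```python
def payoff_total_conflict(lam):
    """Preacher's payoff from message s_CAU when s = s_CAU (cf. expression (10)):
    each side wins with probability 1/2, A's expected effort is 1/2, and the
    preacher internalizes that effort with weight lam."""
    return 0.5 - lam * 0.5

def payoff_instill_doubts(lam, pi):
    """Preacher's payoff from the false message s_IDY when s = s_CAU
    (cf. expression (11)): A enters with probability 2*pi - 1; conditional on
    entering, his expected effort is (2*pi - 1)/2 and each side wins w.p. 1/2."""
    v = 2 * pi - 1                       # A's valuation after message s_IDY
    return v * (0.5 - lam * v / 2)

pi = 0.75                                # student's posterior after s_IDY
for lam in (0.2, 0.9):
    dogmatic = payoff_total_conflict(lam)
    skeptical = payoff_instill_doubts(lam, pi)
    print(lam, "s_CAU" if dogmatic > skeptical else "s_IDY")
```

Under this sketch, a weakly altruistic preacher (λ = 0.2) prefers the total conflict, while a strongly altruistic one (λ = 0.9) prefers to instill doubts, matching the skepticism/dogmatism split at λ = 1/2.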

Proposition 2 considers the case where Â is sufficiently altruistic (1/2 ≤ λ ≤ 1); notice that the case of perfect altruism (λ = 1) is included.

PROPOSITION 2: (Skepticism) Fix γ and suppose that 1/2 ≤ λ ≤ 1. For all P(θ_D) ≤ P̄, information transmission is truthful. When instead P(θ_D) > P̄, preacher Â reports s_IDY regardless of Nature's signal.
The message of Proposition 2 is twofold. First, it states that when λ is sufficiently high, Â may have an incentive to send message s_IDY after all signals s. As we discussed before, instilling doubts is a defence mechanism which moderates the escalation of violence in the conflict. Second, Proposition 2 says that skeptical attitudes are observed when P(θ_D) is sufficiently large (i.e., above the cutoff P̄). The reason behind this result is the following. Suppose that Â receives signal s_CAU. When P(θ_D) is low, sending the false message s_IDY would instill a great amount of doubt in A, since π_A(s_IDY) is increasing in P(θ_D). In this case, the difference between the preacher's posterior after the true signal and the student's posterior after the false message would be large, and this would cause an excessive reduction of A's effort level. As a result, conditional on A exerting a positive effort, the preacher's payoff would be high, but the probability of A exerting positive effort would be so low that policy b is almost always implemented. This explains why sending the false message s_IDY when s = s_CAU and P(θ_D) is low is not a profitable strategy for Â. Proposition 3 discusses the case where 0 ≤ λ < 1/2.

PROPOSITION 3: (Dogmatism) Fix γ and suppose that 0 ≤ λ < 1/2. For all P(θ_D) ≤ P̂, information transmission is truthful. When instead P(θ_D) > P̂, preacher Â always reports s_CAU regardless of Nature's signal.
The previous proposition establishes that when λ is sufficiently low, Â may have an incentive to send message s_CAU after all signals s. The preacher's message confirms the student's prior even when the actual signal goes against it. As a result, individuals always engage in a total conflict. As in Proposition 2, manipulation of information occurs when P(θ_D) is sufficiently large (i.e., above the cutoff P̂). To see this, suppose that P(θ_D) is just above 1/2. After receiving signal s_IDY, preacher Â would change his view about the optimality of a and start to believe that b is the correct decision. He then has no incentive to send message s_CAU, which would induce A to enter a total conflict with the goal of imposing the "wrong" policy.
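The posterior updating behind these cutoffs can be checked directly from Bayes' rule. The sketch below assumes the information structure stated later for the two-preacher model (P(s_CAU | θ_D) = γ, P(s_IDY | θ_D) = 1 − γ, P(s_IDY | θ_1) = 1); the symbol names are ours.

```python
def posterior_after_s_idy(p_d, gamma):
    """Naive student's posterior on the disagreement state after message s_IDY,
    assuming P(s_IDY | theta_D) = 1 - gamma and P(s_IDY | theta_1) = 1."""
    return (1 - gamma) * p_d / ((1 - gamma) * p_d + (1 - p_d))

def enters_after_s_idy(p_d, gamma):
    """A exerts positive effort after s_IDY iff his posterior exceeds 1/2,
    which rearranges to p_d > 1 / (2 - gamma)."""
    return posterior_after_s_idy(p_d, gamma) > 0.5

gamma = 0.5                    # precision of Nature's signal
threshold = 1 / (2 - gamma)    # = 2/3 here
print(enters_after_s_idy(0.6, gamma), enters_after_s_idy(0.8, gamma))
```

With γ = 0.5 the entry threshold is 2/3: a prior of 0.6 leaves A with too many doubts to enter, while 0.8 does not.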
In Figure 2, for a given γ, we draw the parameter regions in the (P(θ_D), λ) space where beliefs' manipulation occurs. As stated in Propositions 2 and 3, Â sends truthful reports when P(θ_D) is sufficiently low. When instead P(θ_D) is large, we observe either dogmatism (in the lower-right region) or skepticism (in the upper-right region). Also notice that truthful reporting is more likely to occur when λ is around 1/2. From Figure 2, it is easy to observe that, ceteris paribus, an increase of λ may move us from the dogmatic to the truthful region. However, a large increase of λ may move us from the dogmatic to the skeptical region. As a result, it is unclear whether or not an increase of λ provides stronger incentives to report truthfully. Finally, if Nature's signals become more precise (i.e., γ increases), it is easy to verify that both cutoffs P̂ and P̄ increase, thereby reducing the incentives to manipulate beliefs. We state without proof the following corollary.
Corollary 1: The higher P(θ_D) and the lower γ, the stronger the incentives to manipulate signals. An increase of λ has instead an ambiguous effect on the incentives to report truthfully: an increase of λ reduces the incentives to induce a dogmatic attitude, but it may favor the occurrence of skeptical attitudes.
Before concluding, we briefly discuss what would happen if A were not naive. Take the region of parameter values where truthful reports occur according to Propositions 2 and 3. It is easy to see that informative communication would also occur if A were not naive. 19 Instead, if we are in the parameter region where the preacher has an incentive to misrepresent the facts, A would ignore the message of his preacher: A's probability assessment of being in state θ_D would then coincide with his prior.

Incidence and Intensity of Conflict
In this section, we study how the likelihood that a conflict occurs and the total effort levels exerted in the conflict depend on the underlying parameters of our model.
First, we compute the probability that a conflict occurs (the incidence of conflict), given in expression (14). The incidence of conflict has a discontinuous jump at 1/(2 − γ). To understand (14), notice that the threshold 1/(2 − γ), at which the incidence of conflict jumps up, is increasing in γ: when γ is large, A enters the conflict after message s_IDY only for high values of P(θ_D). It is then easy to see that a higher precision of Nature's signals increases (resp. decreases) the likelihood that a conflict occurs when P(θ_D) is low (resp. high). The next proposition summarizes the discussion above.

PROPOSITION 4:
(i) The incidence of conflict is increasing in P(θ_D).
(ii) The incidence of conflict does not depend on the altruism parameter λ.
Besides the incidence of conflict, it is worth studying its intensity. The two variables do not necessarily move together. We will see, for instance, that in homogeneous societies (that is, in societies where P(θ_D) is low) peace is more likely, but that, conditional on conflict occurring, conflicts are violent.
As a measure of the intensity of conflict, we compute expected total effort by taking expectations over the space of possible signals, whose probabilities can be derived from (3) and (4). Expected total effort is then given by expression (15). To help intuition, we derive E(c_A + c_B) for two simple cases: λ = 1 and λ = 0. (For the general case, see the proof of Proposition 5 in the appendix.) When λ = 1, expected total effort is given by expression (16). Knowing that π_A(s_IDY) coincides with (5), in the right panel of Figure 3 we draw expected total effort as a function of P(θ_D). To understand (16), recall from the previous discussion that when P(θ_D) ≤ 1/(2 − γ) a conflict occurs only when s_CAU is observed. With probability γP(θ_D), expected total effort is then equal to one. When instead P(θ_D) > 1/(2 − γ), from Proposition 2 we know that the preacher has an incentive to send message s_IDY regardless of the signal he has received; expected effort when the message is s_IDY can be derived from (9). Figure 3 shows that the intensity of conflict is not monotone in P(θ_D). 20 Suppose instead that λ = 0. Then, expected total effort is given by expression (17). To understand (17), recall from Proposition 3 that when P(θ_D) > 1/(2 − γ) the preacher has an incentive to send message s_CAU regardless of the signal he has received. In Proposition 5, we study how expected total effort varies with the parameters of the model. 20 Notice that the non-monotonicity result would also hold if A were not naive. In this case, when P(θ_D) > 1/(2 − γ) the student ignores the message of his preacher.
By replacing π_A(s_IDY) with the prior P(θ_D) in expression (16), one can verify that the intensity of conflict drops at 1/(2 − γ).

PROPOSITION 5:
(i) The intensity of conflict is weakly increasing in P(θ_D) when λ < 1/2 and non-monotone in P(θ_D) when λ ≥ 1/2.
(ii) The intensity of conflict is decreasing in λ.
(iii) Suppose that the precision of Nature's signals increases from γ to γ′, with 0 < γ < γ′ ≤ 1. Then, the intensity of conflict strictly increases when P(θ_D) ≤ 1/(2 − γ) and weakly decreases when P(θ_D) is close to one.

The proof of Proposition 5 is contained in the appendix. The second part of result (i) is in line with recent empirical evidence on the consequences of ethnic heterogeneity for the duration of civil wars, which can be viewed as a proxy for the effort levels exerted by the two parties in the conflict.
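The non-monotonicity for λ = 1 can be sketched numerically. The closed form π(2π − 1) for expected total effort in a conflict with posterior π is our own derivation from the mixed strategies of Proposition 1 (A bids with density 1 on [0, 2π − 1], B uniformly on the same interval), not an expression quoted from the paper.

```python
def posterior_after_s_idy(p_d, gamma):
    """Student's posterior on the disagreement state after message s_IDY."""
    return (1 - gamma) * p_d / ((1 - gamma) * p_d + (1 - p_d))

def expected_total_effort(p_d, gamma):
    """Expected total effort when lambda = 1 (skeptical preacher above the
    cutoff).  Below the cutoff 1/(2 - gamma), a conflict occurs only after
    s_CAU, which arrives with probability gamma * p_d and costs total effort 1.
    Above it, the message is always s_IDY and total effort in the mixed
    equilibrium is pi * (2*pi - 1), with pi the student's posterior."""
    if p_d <= 1 / (2 - gamma):
        return gamma * p_d
    pi = posterior_after_s_idy(p_d, gamma)
    return pi * (2 * pi - 1)

gamma = 0.5
# intensity rises below the cutoff 2/3, drops discontinuously at it,
# then rises again toward 1: non-monotone in p_d, as in Figure 3
for p_d in (0.55, 0.66, 0.70, 0.95):
    print(round(expected_total_effort(p_d, gamma), 3))
```

The drop just above the cutoff captures the moderating effect: the always-sent message s_IDY leaves A with barely enough confidence to enter, so little effort is exerted.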

Dogmatism and Mistakes
Proposition 3 shows that dogmatism leads to violent conflicts by distorting A's effort decision. However, it is important to stress that in the previous sections dogmatism did not distort the final policy decision. In other words, the decision that A makes at t = 2 on the basis of m_Â is the same that he would make if he knew the true signal. This result occurs because Â does not disagree with his student on the correct policy to implement in each state and, as a result, he does not manipulate information to the point of inducing the wrong policy decision in the final stage. However, besides causing violent conflicts, one would expect dogmatism to also lead to distorted policy decisions. A simple extension of the previous setup captures this cost as well.
In this section, we suppose that A is able to conduct autonomous research in order to find out the current state of the world. This possibility will be used only when A receives message s_IDY. This assumption seems quite natural: after receiving message s_CAU, A has no doubts that the state is θ_D and, from A's perspective, autonomous research is not needed.
More precisely, the timing is now as follows. As before, at t = 0 preacher Â observes evidence s ∈ {s_CAU, s_IDY} and sends a message to A. If Â sends message s_CAU, the game unfolds exactly as before. If instead Â sends message s_IDY, we now assume that individual A is able, if he so decides, to conduct costless research in order to discover the current state. For the sake of tractability, we also assume that research is not manipulable by A himself. With probability µ ∈ [0, 1] research is successful and A perfectly observes the actual state of the world. With complementary probability 1 − µ, research is not successful (NS denotes unsuccessful research). Let π_A(s_IDY, NS) denote the posterior probability of individual A after receiving message s_IDY and after unsuccessful research. The probability of success µ is independent of the state. To simplify the analysis, we assume that B observes m_Â as well as the research outcome. As before, at t = 1 individuals simultaneously choose their effort levels in the conflict and the final decision is made at t = 2. We now study whether the possibility of autonomous research affects the message strategy of the preacher. First, it is easy to see that, regardless of the true signal, the expected utility of the preacher after sending message s_CAU is, as in Lemmas 1 and 2, given by expression (18). (Recall that after message s_CAU the student does not do research.) In Lemma 3, we compute Â's payoffs from sending message s_IDY. Does A decide to conduct autonomous research after receiving s_IDY? It turns out (see the proof of Proposition 6) that A is indifferent between conducting and not conducting research; in what follows, we assume that A, upon receiving message s_IDY, does indeed conduct autonomous research.

LEMMA 3: Let s = s_CAU. If Â sends the false message s_IDY, his expected payoff is given by expression (19). Suppose instead that s = s_IDY. If Â is truthful, his expected payoff is given by expression (20).

The proof of Lemma 3 is contained in the appendix. To find the equilibrium message strategy of the preacher, it is instructive to consider the extreme cases of µ = 0 and µ = 1. It is straightforward to see that when µ = 0 the setup analyzed here is identical to the one analyzed in the previous sections: the message strategies are then exactly the same as in Propositions 2 and 3. Consider instead the other extreme: µ = 1. Does Â have an incentive to send message s_CAU when s = s_IDY? It is easy to see, by comparing (20) to (18), that when µ = 1 the answer is negative: inducing A to conduct research when s = s_IDY is strictly preferable to sending message s_CAU. To understand this result, notice that if the student discovers that the state is θ_1, the preacher obtains a payoff equal to 1, which is strictly greater than the payoff of sending message s_CAU. If instead A discovers that the current state is θ_D, the preacher obtains the same payoff that he would have obtained by sending the false message s_CAU. Therefore, dogmatism never arises when µ = 1. Figure 4 draws the message strategies in the (µ, P(θ_D)) space for a given λ. One can see that dogmatism is still observed when λ is sufficiently low and P(θ_D) sufficiently large, but the region of parameter values where dogmatism occurs has shrunk compared to Figure 2. Overall, these results suggest that in societies where research is likely to be successful, it is more difficult to observe dogmatic attitudes.
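The µ = 1 comparison can be sketched as follows. The payoff expressions are our reconstruction from the verbal argument (expressions (18)-(20) themselves are in the appendix): sending s_CAU yields the total-conflict payoff (1 − λ)/2, while with fully successful research the student learns the state, ending the game with payoff 1 in θ_1 and with a total conflict in θ_D.

```python
def preacher_posterior(p_d, gamma):
    """Preacher's posterior on theta_D after observing signal s_IDY."""
    return (1 - gamma) * p_d / ((1 - gamma) * p_d + (1 - p_d))

def payoff_s_cau(lam):
    """Total conflict: each side wins w.p. 1/2, A's expected effort is 1/2."""
    return (1 - lam) / 2

def payoff_s_idy_full_research(lam, p_d, gamma):
    """mu = 1: research reveals the state.  theta_1 (prob 1 - pi_hat) ends the
    game with the right policy and payoff 1; theta_D (prob pi_hat) triggers a
    total conflict, worth the same as sending s_CAU."""
    pi_hat = preacher_posterior(p_d, gamma)
    return pi_hat * payoff_s_cau(lam) + (1 - pi_hat) * 1.0

# with mu = 1, inducing research strictly dominates the false message s_CAU,
# so dogmatism never arises, regardless of the altruism parameter
lam, p_d, gamma = 0.2, 0.8, 0.5
print(payoff_s_idy_full_research(lam, p_d, gamma) > payoff_s_cau(lam))
```

The dominance holds because the θ_1 branch pays strictly more than a total conflict and the θ_D branch pays exactly the same.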
It is interesting to note that the incentives to induce skeptical attitudes are not affected by µ. To see this, it is enough to observe that (19) is greater than (18) if and only if (11) is greater than (10). This implies that the region of parameter values where skepticism occurs is identical to the one characterized in Proposition 2.
Proposition 6 describes the message strategies when A is allowed to conduct autonomous research. Since dogmatism sometimes prevents A from conducting potentially successful research, Proposition 6 establishes that when λ is sufficiently low, dogmatism, besides leading to violent conflicts, may induce A to make wrong policy decisions. Clearly, these mistakes could have been avoided if information had been truthfully transmitted.

Dynamics
We now consider a simple dynamic extension of the model with autonomous research analyzed in the previous section. To understand what follows, it helps to remind the reader that in our static model the incentives to misrepresent the facts (in either direction) are decreasing in γ, and that the incentives to induce dogmatic attitudes are decreasing in µ (see Corollary 1 and Proposition 6, respectively).
Let τ denote time, where τ = 1, 2, …, ∞. 21 Consider an OLG model where A-type and B-type individuals live for two periods. When they are young, individuals exert effort in a conflict; when they are old, they become preachers (or parents). We denote by A_τ (resp. B_τ) the A-type (resp. B-type) individual born at time τ. At each τ, the active players in the model are A_τ, A_{τ−1} and B_τ. 22 Individual A_τ is associated with A_{τ−1}, a preacher born at time τ − 1. At each τ, Nature draws the current state of the world θ_τ. We suppose that draws are i.i.d. across time. Preacher A_{τ−1} observes evidence s_τ ∈ {s_CAU, s_IDY} (as before, its precision is summarized by γ_τ) and sends a message to A_τ. Nature's signals are also assumed to be i.i.d. across time.
As in Section 2.3, after message s_IDY individual A_τ conducts autonomous research, which is successful with probability µ_τ.
After receiving a message from A_{τ−1} and after observing the research outcome, individuals A_τ and B_τ play a game of conflict in order to choose the policy to implement at time τ, which is denoted by x_τ. We assume that individual A_τ is naive when he is young; when he is old, he becomes aware that his own student is naive towards him. The two-period utility of a young individual of type i (where i = A, B) born at time τ combines his payoff from the conflict when young with his payoff as a preacher when old. We use 1_τ to denote an indicator function that takes the value 1 if A_τ conducts autonomous research at time τ, and 0 otherwise. Let γ_{τ+1}(j) and µ_{τ+1}(j) denote the values of, respectively, γ_{τ+1} and µ_{τ+1} if at time τ we had 1_τ = j, with j = 0, 1. For instance, γ_{τ+1}(1) denotes the precision of Nature's signal at time τ + 1 if at time τ individual A_τ conducted autonomous research. We suppose that γ and µ, the state variables of our model, evolve over time according to Assumption 1: both (weakly) increase when research is conducted and stay constant otherwise. Moreover, for tractability, we also assume that individuals at time τ do not take into account the consequences of their decisions on γ_{τ+1} and µ_{τ+1}. This implies that the problem of a young individual at time τ is essentially a static problem. Consequently, in each period, the message strategies and the effort decisions are exactly the ones described in Proposition 6. However, since by Assumption 1 the state variables evolve, the preacher's incentives change over time.
Given our modeling assumptions, solving for the dynamics is straightforward. First, it is easy to show that dogmatic attitudes are persistent. Suppose in fact that at time τ = 1 the parameters of the model (i.e., λ, P(θ_D), γ_1 and µ_1) are such that A_0 induces dogmatic attitudes in A_1. Then, no autonomous research is conducted at time τ = 1 and, by Assumption 1, all parameters stay constant. Then, A_1 will also induce dogmatic attitudes in A_2, and so on for all τ. This result suggests that societies may be trapped in a dogmatic equilibrium.
Suppose instead that the initial parameters are such that A_0 is truthful. It is equally easy to show that truthful reporting is also an absorbing state. Two cases must be considered. First, suppose that s_1 = s_CAU. In this case, A_0 truthfully reports s_CAU and no research is conducted. Since parameters do not change (see Assumption 1), in the next period A_1 will still be truthful. Second, suppose that s_1 = s_IDY. In this case, research will be conducted. 23 By Assumption 1, this implies that γ_2 > γ_1 and µ_2 ≥ µ_1. From Corollary 1 and Proposition 6 we know that an increase of γ and µ 23 Note that this is independent of whether the research was successful.
will reinforce A_1's incentives to be truthful at τ = 2. Following a similar argument, we obtain that for all τ > 2 preachers will also be truthful. Finally, suppose that at τ = 1 the parameters are such that A_0 always sends message s_IDY. In particular, from Proposition 2 we know that this occurs if λ ≥ 1/2 and P(θ_D) lies above the time-1 cutoff. In what follows, we show that at τ = 2 we may observe either truthful reporting or, as at τ = 1, skeptical attitudes. To see this, notice that since A_1 conducts research at τ = 1, by Condition (i) we have that γ_2 > γ_1. Two cases are possible. First, at τ = 2 it could be that P(θ_D) lies below the time-2 cutoff (Condition (23)). In this case, from Proposition 2 we know that A_1 sends truthful reports. From the discussion above we also know that truthful reports will then be observed for all τ > 2. The other possibility is that (23) is not satisfied. In this case, A_1 induces skeptical attitudes, A_2 conducts research, and γ_3 > γ_2. This increases the cutoff at time 3, thereby making the transition to truthful reporting more likely. If Condition (iii) is also satisfied, γ_τ eventually goes to 1. Then, at some future date the prior P(θ_D) of our economy will necessarily lie below the corresponding cutoff, implying that, after skeptical attitudes have been observed for several periods, preachers will start being truthful. The above discussion is summarized in the following proposition.
PROPOSITION 7: Suppose that γ and µ evolve according to Conditions (i) and (ii) of Assumption 1. Then, truthful reporting and dogmatism are two absorbing states. That is, if the parameters are such that at time τ dogmatism (resp. truthful reporting) is observed, at time τ + 1 dogmatism (resp. truthful reporting) will also be observed. If instead at time τ preacher A_{τ−1} induces skeptical attitudes in A_τ, at time τ + 1 preacher A_τ may either keep inducing skeptical attitudes in A_{τ+1} or be truthful. If Condition (iii) is also satisfied, we obtain that as τ → ∞ skepticism will eventually be replaced by truthful reporting.
An implication of Proposition 7 is that societies will be able to escape from a dogmatic trap only if a large shock occurs, such as an increase of µ due, for example, to the opening of the society, which would provide access to more research instruments.
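These dynamics can be sketched in a toy simulation. The update rule (γ rises by a fixed step whenever research is conducted) and the stand-in cutoff 1/(2 − γ) are illustrative assumptions, chosen only because, like the paper's cutoffs, the stand-in is increasing in γ.

```python
def cutoff(gamma):
    """Stand-in for the manipulation cutoff; like the cutoffs in the paper,
    it is increasing in the signal precision gamma."""
    return 1 / (2 - gamma)

def regime(lam, p_d, gamma):
    """Classify the preacher's message strategy, following Propositions 2-3."""
    if p_d <= cutoff(gamma):
        return "truthful"
    return "skeptical" if lam >= 0.5 else "dogmatic"

def simulate(lam, p_d, gamma, periods=30, step=0.05):
    """Toy dynamics: research (conducted after message s_IDY, hence in the
    skeptical and truthful regimes) raises gamma by `step`; under dogmatism
    nothing changes, so the dogmatic state is absorbing."""
    history = []
    for _ in range(periods):
        r = regime(lam, p_d, gamma)
        history.append(r)
        if r != "dogmatic":
            gamma = min(1.0, gamma + step)
    return history

print(simulate(0.8, 0.8, 0.3)[-1])   # skepticism eventually turns truthful
print(simulate(0.2, 0.8, 0.3)[-1])   # dogmatism is absorbing
```

With an altruistic preacher the society starts skeptical, research keeps raising γ, and once the cutoff passes P(θ_D) reporting becomes truthful forever; with a weakly altruistic preacher no research is ever conducted and dogmatism persists.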

Interdependent Beliefs' Manipulation
This section considers a slightly different setup from the one analyzed in Section 2.
Take the basic (static) model analyzed in Section 2.1 without autonomous research and assume that Θ = {θ_1, θ_2, θ_D}, where θ_2 is another state of agreement. The following matrix summarizes each individual's preferred policy in each state:

        A's optimal policy    B's optimal policy
θ_1            b                     b
θ_2            a                     a
θ_D            a                     b

The payoffs to A and B in states θ_1 and θ_D are as before. In state θ_2, policy a is optimal for both individuals. Assume that ex ante states θ_2 and θ_1 are equally likely: that is, P(θ_1) = P(θ_2). As before, P(θ_D) ∈ (1/2, 1). Notice that, in contrast to before, individual B is not ex ante certain that policy b is the right policy for him. In this section, we assume that B is also associated with a preacher, B̂, whose role is to advise B. Nature sends a common signal to both preachers, and each preacher sends a message to his respective student. This implies that, besides the strategic interdependence in the game of conflict, there is strategic interdependence between the advisors' message strategies. The timeline is as follows. At t = 0, Nature sends to both preachers a common signal s ∈ {s_CAU, s_IDY} which is (not necessarily fully) informative about the current state θ ∈ {θ_1, θ_2, θ_D}. To keep the model as tractable as possible, we maintain the assumption that there are only two possible signals. The information structure is as follows: P(s_CAU | θ_1) = 0 and P(s_IDY | θ_1) = 1; P(s_CAU | θ_2) = 0 and P(s_IDY | θ_2) = 1; P(s_CAU | θ_D) = γ and P(s_IDY | θ_D) = 1 − γ.

[Figure 5: Timeline in the model with two "preachers". At T = 0, Nature sends a common signal to the preachers of A and B, and each preacher sends a public message to his student; at T = 1, A and B choose effort in the conflict; at T = 2, the winner picks the policy.]

Notice that upon receiving signal s_IDY the posterior probability of being in the state of conflict decreases. However, signal s_IDY does not favor one state of agreement relative to the other. As a result, the posterior probabilities of being in states θ_1 and θ_2 are equal. After receiving a signal from Nature, each preacher î sends a message m_î ∈ {s_CAU, s_IDY}. As before, we assume that messages are public and that each student is naive vis-à-vis his own preacher. At the same time, we assume that each individual knows that his opponent is naive and that the other preacher may not tell the truth. In other words, each individual believes that the relationship with his own preacher is unique: in a student's mind only the other preacher, not his own, may misreport the facts. This perception bias implies that the two individuals will, in general, end up with different posteriors and "agree to disagree" about the probability of the current state. After receiving the message of his preacher, each individual forms his posterior belief. The message of the other preacher is also observed. The posterior probability of individual i of being in state θ_D after receiving m_î is denoted by π_i(m_î). At this point, it is practical to introduce some further notation. Let α_A(m_Â) denote the posterior probability of individual A that policy a is the correct policy to adopt when Â's message is m_Â. The first term is the posterior probability that the state is θ_D, while the second term is the posterior probability that the state is θ_2. Further, let α_B(m_B̂) denote the posterior probability of individual B that policy b is the correct policy.
Similarly, one obtains α_B(m_B̂): the first term is the posterior probability that the state is θ_D, while the second is the posterior probability that the state is θ_1. It is obvious that upon receiving message s_CAU, for individual i = A, B we have α_i(s_CAU) = 1. If instead i receives message s_IDY, α_i(s_IDY) = π_i(s_IDY) + [1 − π_i(s_IDY)]/2. It is important to notice that since P(θ_D) > 1/2, both α_A(m_Â) and α_B(m_B̂) are greater than 1/2. This implies that the conditionally optimal policy (i.e., the policy that an individual picks in the final period) does not depend on the message he has received. In other words, receiving message s_CAU or s_IDY only affects the confidence that A (resp. B) has in the optimality of policy a (resp. b). However, no message can ever induce A to choose policy b or B to choose policy a.
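The three-state posterior just described can be sketched directly; the symbol names below are ours, and the formula follows from the stated information structure (both agreement states send s_IDY with probability one, so the residual mass stays split equally between them).

```python
def posterior_theta_d(p_d, gamma):
    """Posterior on the disagreement state after s_IDY; both agreement states
    send s_IDY with probability one, so they pool in the denominator."""
    return (1 - gamma) * p_d / ((1 - gamma) * p_d + (1 - p_d))

def alpha_after_s_idy(p_d, gamma):
    """Posterior that one's own policy is correct: the theta_D mass plus half
    of the remaining mass (the two agreement states stay equally likely)."""
    pi = posterior_theta_d(p_d, gamma)
    return pi + (1 - pi) / 2

# alpha always exceeds 1/2, so no message ever flips the final policy choice
print(alpha_after_s_idy(0.7, 0.5) > 0.5)
```

Since α_i(s_IDY) = (1 + π_i(s_IDY))/2, it exceeds 1/2 for any positive posterior on θ_D, which is why messages affect confidence but never the conditionally optimal policy.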

Policy Decisions
We solve the game backwards, starting with the policy decisions at t = 2. As discussed above, since α_A(m_Â) and α_B(m_B̂) are both greater than 1/2, A picks policy a and B picks policy b regardless of the messages. Notice that since each individual expects the opponent to choose a policy different from the one he prefers, a conflict always arises.

The Game of Conflict
We now move to the game of conflict. The characterization of the effort decisions is provided by the following proposition. Without any loss of generality, suppose that 2α_A(m_Â) − 1 ≤ 2α_B(m_B̂) − 1; that is, A is the lower-valuing individual.

PROPOSITION 8: Individual B's mixed strategy is a uniform distribution over the interval [0, 2α_A(m_Â) − 1]. Individual A exerts zero effort with positive probability; conditional on entering, A's mixed strategy is a uniform distribution over the interval [0, 2α_A(m_Â) − 1].

The proof of Proposition 8 is identical to that of Proposition 1 and is therefore omitted. Similarly to Proposition 1, notice that for both individuals the maximum effort level is identical and equal to the valuation of the individual with the lower stakes in the conflict. Moreover, the lower-valuing individual enters the conflict with probability less than one; this probability equals the ratio between his valuation and that of the higher-valuing individual. This implies that an increase in the valuation of the higher-valuing individual would not affect that individual's effort; it would only induce the opponent to exert less effort. This suggests that inducing dogmatic attitudes, besides having a motivating effect, here also has a preempting effect. This effect was not present before because in Section 2 the valuation of the higher-valuing individual (i.e., B) was not manipulable and because A's valuation could not exceed B's. In this section, instead, the two players are ex ante symmetric. This implies, as we will see, that one preacher may induce a dogmatic attitude to make his student the individual with the highest stakes in the conflict, in order to preempt the other player.
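The mixed equilibrium just described can be checked by Monte Carlo simulation. This is a sketch: the payoff normalization (valuation if one wins, minus one's effort) is ours, chosen so that the theoretical benchmarks are 0 for the low-valuing player and the difference in valuations for the high-valuing player.

```python
import random

def simulate_conflict(v_a, v_b, n=200_000, seed=0):
    """Monte Carlo check of the mixed equilibrium in Proposition 8 (all-pay
    auction with valuations v_a <= v_b).  B bids uniformly on [0, v_a]; A
    exerts zero effort with probability 1 - v_a / v_b and otherwise bids
    uniformly on [0, v_a].  Returns the players' average payoffs, measured
    as (valuation if win) - effort."""
    rng = random.Random(seed)
    u_a = u_b = 0.0
    for _ in range(n):
        c_b = rng.uniform(0, v_a)
        c_a = 0.0 if rng.random() < 1 - v_a / v_b else rng.uniform(0, v_a)
        if c_a > c_b:
            u_a += v_a
        else:
            u_b += v_b
        u_a -= c_a
        u_b -= c_b
    return u_a / n, u_b / n

# theory: the low-valuing player nets 0, the high-valuing player v_b - v_a
print(simulate_conflict(0.6, 1.0))
```

The simulation also makes the preempting effect visible: raising v_b leaves B's bids unchanged (they still live on [0, v_a]) and only increases A's probability of staying out.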

Message Game
It is quite easy to show that there is no equilibrium in which both preachers send truthful reports. By way of contradiction, suppose for instance that Nature sends signal s_IDY. Do preachers report truthfully? If this were the case, by the assumed symmetry of our setting, both individuals would enter the conflict with the same amount of doubts. Notice, however, that one of the two preachers would have an incentive to remove his student's doubts so as to make him the high-valuation individual in the conflict. In fact, from Proposition 8 we know that inducing dogmatism when the opponent has doubts induces the opponent to further decrease his effort level, without affecting the effort exerted by the dogmatic student. Contrary to the motivating effect discussed in Section 2, this effect is present regardless of the preacher's altruism parameter. After arguing that one of the two preachers always induces a dogmatic attitude, the incentives of the other preacher to manipulate information are practically identical to the ones discussed in Section 2. We now state Proposition 9. The proof of Proposition 9 is contained in the appendix. See Figure 6. Notice that dogmatic attitudes in both opponents can only be observed if the preachers' altruism is low. When altruism is high, one of the two preachers will choose to always instill doubts. As a result, even if the starting game is symmetric, the distribution of payoffs is asymmetric: the dogmatic individual obtains, in expected terms, a higher payoff than the skeptical one.

Conclusions
Karl Popper (1963), who is cited at the beginning of the paper, argues that conflicts will be less violent if individuals entertain the possibility that their opponent may be right. Why is it so difficult to observe this attitude? To answer this question, this paper considered a model of indoctrination where altruistic advisors (such as preachers or parents), after receiving signals from Nature, send (possibly distorted) messages to the participants in the conflict. In the context of our model, we have shown that there exist two possible deviations from an attitude of reasonableness. In some cases, as a result of indoctrination, individuals never doubt the possibility of being wrong, although all available information suggests otherwise. In other cases, some individuals are excessively reasonable: they believe that their opponent may be right even when all the evidence indicates beyond any doubt that the policy preferred by the opponent is suboptimal. The common feature in both cases is that information is distorted, although in different directions.
A brief summary of our results is the following: (i) Manipulation of information (in both directions) is more likely to occur in heterogeneous societies, when Nature's signals are less informative, and when research is less likely to be successful.
(ii) Dogmatic attitudes in both opponents are observed if (and only if) the advisors' altruism is low. In this case, advisors remove doubts in their students to motivate them and also to preempt the opponent. When instead altruism is perfect, one of the two opponents is induced by his advisor to always doubt. Instilling doubts is a defence mechanism which moderates the escalation of violence in the conflict.
(iii) Conflicts are more likely in heterogeneous societies. However, the intensity of conflict is not necessarily at its maximum in very heterogeneous societies.
(iv) Dogmatism and truthful reporting are persistent over time. Skeptical attitudes, on the contrary, are less likely to persist in the long run.
An extension of this model seems particularly worthwhile: studying the role of institutions in affecting beliefs' manipulation. Virtually all the extant literature on optimal institutions takes as given the degree of ideological polarization in society. It would be interesting to study optimal constitutional design taking into account that institutions, by changing the way conflicts are resolved in a legislature or in society, may also affect the degree of ideological polarization.

PROOF OF PROPOSITION 1
Suppose π_A(m_Â) < 1. We first show that the equilibrium expected payoff of B is strictly positive. To see this, notice that A will never exert an effort level higher than his valuation, 2π_A(m_Â) − 1. This implies that B can guarantee himself a strictly positive payoff by exerting an effort level just above 2π_A(m_Â) − 1.
We now show that the effort strategies of both players are mixed, with no mass points at any c_i > 0. By way of contradiction, suppose that player j has a mass point at a particular bid c_j. Then, the payoff of the other player would increase discontinuously at c_j, so there exists an ε > 0 such that the other player exerts effort in the interval [c_j − ε, c_j] with zero probability. But then j would increase his payoff by bidding c_j − ε instead of c_j. We now show that the maximum effort levels of the two players coincide. To see this, notice that since the effort strategies are mixed, if one individual's maximum effort level were higher than the other's, he could win with probability one by exerting an effort level just above his opponent's maximum; any effort above that level would be wasted.
We now show that the minimum effort level is zero. By way of contradiction, suppose that an individual has a minimum effort level c ∈ (0, 2π_A(m_Â) − 1]. Then the other player does not exert effort in the interval [0, c) because by doing so he would lose with probability one. But this implies that the first individual would be better off exerting an effort level lower than c.
Individual B's expected payoff from exerting effort c_B is EU_B(c_B) = G_A(c_B) − c_B, while A's expected payoff from exerting effort c_A is EU_A(c_A) = (1 − π_A(m_Â)) + (2π_A(m_Â) − 1)G_B(c_A) − c_A, where G_i denotes the distribution of individual i's effort. Noticing that B must be indifferent among all effort levels in the support of his strategy, and recalling that B's equilibrium expected payoff is strictly positive, we evaluate EU_B at c_B = 0. It follows that G_A(0) > 0. We now show that B cannot put positive mass at zero. If this were the case, there would be a tie with some positive probability, but B would then be better off increasing his effort to just above zero. This implies G_B(0) = 0, and A's expected payoff is 1 − π_A(m_Â). Then, B's indifference condition gives G_A(c) = 2(1 − π_A(m_Â)) + c, and A's indifference condition gives G_B(c) = c/(2π_A(m_Â) − 1), for all c ∈ [0, 2π_A(m_Â) − 1]. This concludes the proof of Proposition 1.
PROOF OF LEMMA 1: Recall that if $m_{\hat A} = s_{CAU}$ a total conflict occurs. In this case, from Proposition 1 we know that the expected effort exerted by A is equal to $1/2$. To explain the first term of (10), recall that in $\hat A$'s utility the effort exerted by A enters with the corresponding weight. To explain the second term of (10), note that in a total conflict both players win with equal probabilities. Since $s = s_{CAU}$, $\hat A$ obtains a payoff equal to one if A wins and zero if B wins.
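The Lemma 1 benchmarks for a total conflict (expected effort $1/2$ for A and equal win probabilities) can be checked numerically. The sketch below is not from the paper: it assumes the standard symmetric all-pay-auction equilibrium in which, with both valuations equal to one, each player mixes uniformly on $[0, 1]$.

```python
import random

def total_conflict_sample(rng):
    # In a total conflict both valuations equal one; under the standard
    # symmetric all-pay-auction equilibrium each player bids uniformly
    # on [0, 1] (an assumption consistent with Proposition 1).
    c_a = rng.random()
    c_b = rng.random()
    return c_a, c_a > c_b

def simulate(n=200_000, seed=0):
    rng = random.Random(seed)
    total_effort_a = 0.0
    wins_a = 0
    for _ in range(n):
        c_a, a_wins = total_conflict_sample(rng)
        total_effort_a += c_a
        wins_a += a_wins
    return total_effort_a / n, wins_a / n

mean_effort_a, share_a_wins = simulate()
# Both quantities should be close to 1/2, as Lemma 1 asserts.
print(round(mean_effort_a, 2), round(share_a_wins, 2))
```

The same simulation, run with B's effort recorded instead, gives the symmetric benchmark for B.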
To understand (11), recall from Proposition 1 that after receiving message $s_{IDY}$, A enters the conflict with probability $2\alpha_A(s_{IDY}) - 1$. Conditional on A exerting positive effort, his expected effort cost is half his valuation, $\alpha_A(s_{IDY}) - 1/2$; moreover, conditional on A exerting positive effort, both individuals have equal probabilities of victory and $\hat A$'s expected gain from the conflict is $1/2$. With complementary probability $2(1 - \alpha_A(s_{IDY}))$, individual A exerts no effort, B picks policy $b$, and, consequently, the payoff to $\hat A$ is zero.
PROOF OF LEMMA 2: Compared to (11), expression (12) includes a second term. To understand this term, note that when A exits the conflict, B chooses policy $b$, which is optimal for A with probability $1 - \alpha_{\hat A}(s_{IDY})$. We now explain (13). Suppose that $\hat A$ induces A to start a total conflict by sending the false message $s_{CAU}$ when $s = s_{IDY}$. The first term of (13) coincides with the first term of (10). To explain why the second terms of (10) and (13) also coincide, note that $\hat A$'s expected gain from a total conflict when $s = s_{IDY}$ is $\tfrac{1}{2}\alpha_{\hat A}(s_{IDY}) + \tfrac{1}{2}\bigl(1 - \alpha_{\hat A}(s_{IDY})\bigr)$, which is equal to $1/2$.
PROOF OF PROPOSITION 2: To understand (27), we proceed by steps.
Step 1: Under the corresponding condition of Proposition 2, $\hat A$ is truthful.
Proof of Step 1: Two cases must be considered. First, suppose that $s = s_{IDY}$. If the condition in the statement of Step 1 is satisfied, this implies that $\alpha_{\hat A}(s_{IDY}) \le 1/2$. Suppose that $\hat A$ is truthful and sends message $s_{IDY}$. Then it is also the case that $\alpha_A(s_{IDY}) \le 1/2$. Since $\alpha_A(s_{IDY}) \le 1/2$, A exerts no effort and B picks policy $b$. The expected payoff to the preacher is $1 - \alpha_{\hat A}(s_{IDY}) \ge 1/2$. Suppose instead the preacher sends message $s_{CAU}$. In this case, A starts a total conflict.
Using (13), one can verify that the preacher's expected payoff after this deviation would be lower than $1/2$. This implies that a deviation from a truthful report is not profitable when the actual signal is $s_{IDY}$. Second, suppose that $s = s_{CAU}$. If the preacher sends message $s_{CAU}$, his expected payoff is given by (10), which is greater than zero, the payoff obtained by sending message $s_{IDY}$, which induces A to exert no effort. This implies that a deviation from a truthful report is also not profitable when the actual signal is $s_{CAU}$.
Step 2: Under the corresponding condition of Proposition 2, $\hat A$ is also truthful.

Proof of Step 2: First, suppose that $s = s_{IDY}$ and that the preacher is truthful. If the condition in the statement of Step 2 is met, $\alpha_A(s_{IDY}) > 1/2$. Then a conflict arises. By Lemma 2, the preacher's expected utility from sending a truthful message is given by (12). Since $\alpha_{\hat A}(s_{IDY}) = \alpha_A(s_{IDY})$ when reporting is truthful, we can rewrite (12) as (28). To see whether $\hat A$ has an incentive to deviate and send message $m_{\hat A} = s_{CAU}$ when the actual signal is $s_{IDY}$, we compare (28) to (13), the expected utility after the deviation.
To show that (13) is lower than (28) when the condition in the statement of Step 2 is met, take the derivative of (28) with respect to $\alpha_A(s_{IDY})$. Knowing that $\alpha_A(s_{IDY}) > 1/2$ and that the relevant parameter is at least $1/2$, one can verify that this derivative is always negative. Since (13) is equal to (28) when $\alpha_A(s_{IDY}) = 1$, we have proved that (13) is lower than (28). Therefore, $\hat A$ has no incentive to send message $s_{CAU}$ when $s = s_{IDY}$. To conclude the proof of Step 2, we have to show that the preacher does not want to deviate even when $s = s_{CAU}$. The preacher's utility from truthful reporting is (10), while the utility from sending message $s_{IDY}$ is (11). Viewed as functions of $\alpha_A(s_{IDY})$, expressions (10) and (11) coincide at two roots, the larger of which is $\alpha_A(s_{IDY}) = 1$; between the two roots, (10) is greater than (11), while below the lower root (10) is lower than (11). Hence $\hat A$ has no incentive to misreport when $s = s_{CAU}$ provided $\alpha_A(s_{IDY})$ lies weakly above the lower root. Knowing that $\alpha_A(s_{IDY})$ is given by (5), it is easy to show that this holds if and only if $P(\theta_D)$ lies weakly above the corresponding cutoff.
Step 3: Under the remaining parameter configuration, $\hat A$ sends message $s_{IDY}$ regardless of Nature's signal.

Proof of Step 3: Following the algebra of Step 2, we obtain that when the condition in the statement of Step 3 is satisfied, $\hat A$ has an incentive to send message $s_{IDY}$ when the actual signal is $s_{CAU}$. When instead $s = s_{IDY}$, the report is truthful. It then follows that, regardless of $s$, $\hat A$ always sends message $s_{IDY}$. This concludes the proof of Proposition 2.

PROOF OF PROPOSITION 3
We proceed by steps.
Step 1: Under the corresponding condition of Proposition 3, $\hat A$ is truthful. Proof of Step 1: The proof is identical to the proof of Step 1 of Proposition 2, since that proof did not use the fact that the relevant parameter was greater than or equal to $1/2$.
Step 2: Under the corresponding condition of Proposition 3, $\hat A$ is also truthful. Proof of Step 2: First, suppose that $s = s_{IDY}$. Under this condition we have $\alpha_A(s_{IDY}) > 1/2$, so a conflict arises. The preacher's expected utility from sending a truthful message is (28). To see whether $\hat A$ has an incentive to deviate and send message $m_{\hat A} = s_{CAU}$ when the actual signal is $s_{IDY}$, we compute his utility after this deviation, which is given by (13). In comparing (28) to (13), one can show that when the relevant parameter is below $1/2$ it may be the case that (13) is greater than (28). However, when (32) holds, (13) is lower than (28). Then $\hat A$ has no incentive to send message $s_{CAU}$ when he receives signal $s_{IDY}$. Knowing that $\alpha_A(s_{IDY})$ is given by (5), it is easy to verify that (32) is satisfied if and only if $P(\theta_D)$ lies above the corresponding cutoff. Finally, suppose that the actual signal is $s = s_{CAU}$. The preacher's utility from truthful reporting is (10), while the utility from sending message $s_{IDY}$ is (11). One can show that when the relevant parameter is below $1/2$ the preacher has no incentive to misreport.
Step 3: Under the remaining parameter configuration, $\hat A$ sends message $s_{CAU}$ regardless of Nature's signal.

Proof of Step 3: This follows from the algebra in the previous step.
This concludes the proof of Proposition 3.

PROOF OF LEMMA 3:
To understand (19), notice that research fails with probability equal to one minus the probability of success. Since the probability of success is independent of $\theta$, A does not update his beliefs in case of failure: the expected payoff to the preacher is then given by (11). With the probability of success, research succeeds and A perfectly observes the state. Since $s = s_{CAU}$, A can only discover that $\theta = \theta_D$. In this case, a total conflict arises and the preacher's payoff is (18). To understand (20), note that when research is not successful individual A does not change his beliefs: the expected payoff to the preacher is then given by (12). The preacher's expected payoff in case research is successful is as follows. With probability $\alpha_{\hat A}(s_{IDY})$, $\hat A$ expects A to discover that the true state is $\theta_D$; in this case, the payoff would be the one from a total conflict. With complementary probability $1 - \alpha_{\hat A}(s_{IDY})$, $\hat A$ expects A to discover that the true state is the other one; in this case, $\hat A$'s payoff would be equal to one.

PROOF OF PROPOSITION 5
First, knowing the conditional probabilities (3) and (4), we derive the probabilities of the two signals. Step 1: We show that $E(c_A + c_B)$ is weakly increasing in $P(\theta_D)$ when the relevant parameter is below $1/2$. Proof of Step 1: We first show that (33) is increasing in $P(\theta_D)$. Knowing (5), we take the derivative of (33) with respect to $P(\theta_D)$; it is positive since $2\alpha_A(s_{IDY}) - 1$ is positive, $P(\theta_D) \in (1/2, 1)$, and the remaining factor lies between zero and one. Moreover, note that (33) is equal to $P(\theta_D)$ at the cutoff, and that (34) is greater than the slope of $E(c_A + c_B)$ below the cutoff. Finally, note that (33) is lower than one: that is, right after $P(\theta_D) = \hat P$, total effort jumps.
Step 2: We show that $E(c_A + c_B)$ is not monotone in $P(\theta_D)$ when the relevant parameter is above $1/2$. Proof of Step 2: It is enough to show that right after $P(\theta_D) = \bar P$, total effort drops. This is obvious since $(2\alpha_A(s_{IDY}) - 1)\,\alpha_A(s_{IDY}) < 1$. Step 3: We show that $E(c_A + c_B)$ is decreasing in the relevant parameter. Proof of Step 3: Note that this parameter only affects the location of the point of discontinuity where total effort jumps (if it is below $1/2$) or drops (if it is at least $1/2$). When it is below $1/2$, the claim is straightforward: $\hat P$ is increasing in the parameter, thereby implying that the cutoff where total effort jumps to one shifts further to the right. Suppose instead that the parameter is above $1/2$. In this case, it can be shown that $\bar P$ is decreasing in it, so that an increase shifts to the left the cutoff where the drop occurs.
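The level to which total effort jumps can be illustrated with a short computation. The snippet below is a sketch under an assumption not stated verbatim in this excerpt: expected efforts follow the asymmetric all-pay auction with A's valuation $v = 2\alpha_A(s_{IDY}) - 1$ and B's valuation normalized to one, so that $E(c_A) = v^2/2$ and $E(c_B) = v/2$.

```python
def total_expected_effort(alpha):
    """Total expected effort E(c_A + c_B) as a function of A's belief alpha,
    assuming the all-pay-auction expected efforts E(c_A) = v**2/2 and
    E(c_B) = v/2, with v = 2*alpha - 1 (a reconstruction, not verbatim)."""
    v = 2.0 * alpha - 1.0
    return (v * v + v) / 2.0

# Total effort rises monotonically with alpha on (1/2, 1] ...
grid = [0.5 + 0.05 * k for k in range(11)]
efforts = [total_expected_effort(a) for a in grid]
assert all(x < y for x, y in zip(efforts, efforts[1:]))

# ... and reaches one in a total conflict (alpha = 1), the level to which
# total effort jumps once the preacher switches to message s_CAU.
print(total_expected_effort(1.0))  # 1.0
```

The discontinuity in $E(c_A + c_B)$ then comes entirely from the message strategy switching at the cutoff, not from this function, which is continuous in $\alpha$.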
Step 3: We show that if $\hat A$ always sends message $s_{CAU}$ when the probability of research success is zero, there exists a cutoff on that probability, strictly below one, such that for all success probabilities above the cutoff preacher $\hat A$ is truthful. Proof of Step 3: Suppose that $\hat A$ always sends message $s_{CAU}$ when the success probability is zero. This implies that (10) $\ge$ (11) and (13) $\ge$ (12). Suppose now that the success probability is positive. It is easy to see that if (10) $\ge$ (11), we also have that (10) $\ge$ (19). The comparison between (13) and (12), however, now turns on inequality (38). Notice that when the relevant parameter is at least $1/2$, inequality (38) is never satisfied. Suppose instead that it is below $1/2$. When the success probability equals one, the RHS of inequality (38) goes to infinity, thereby implying that inequality (38) is never satisfied. When the success probability is zero, inequality (38) is sometimes satisfied. This implies that there exists a cutoff, which depends on the parameters of the economy, such that inequality (38) is satisfied for all success probabilities below the cutoff. When instead the success probability is above the cutoff, the preacher reports truthfully.
Step 4: We show that if $\hat A$ is truthful when the probability of research success is zero, he will follow the same strategy when that probability is positive.

PROOF OF PROPOSITION 9
Step 1: We show that if the opponent has doubts ($\alpha_{-i}(m_{\widehat{-i}}) < 1$), preacher $\hat i$ sends message $s_{CAU}$ to individual $i$ regardless of $s$. First, suppose $s = s_{CAU}$. By sending message $s_{CAU}$, the payoff to $\hat i$ is given by (39); if instead $\hat i$ sends message $s_{IDY}$, the payoff to $\hat i$ is given by (40). Clearly, by comparing (40) and (39), one can see that sending message $s_{CAU}$ is preferable for $\hat i$. Second, suppose $s = s_{IDY}$. By sending message $s_{IDY}$, the payoff to $\hat i$ is given by (41); by sending message $s_{CAU}$, the payoff to $\hat i$ is given by (42). Again, by comparing (42) and (41), and recalling that $\alpha_{\hat i}(s_{IDY}) \ge 1/2$, one can verify that sending message $s_{CAU}$ is preferable for $\hat i$. Step 2: Suppose $s = s_{CAU}$. In this case, since the relevant parameter is below $1/2$, the corresponding deviation inequality never holds. Finally, it is easy to check that if $\hat i$ is truthful, preacher $\widehat{-i}$ always sends message $s_{CAU}$; if instead $P(\theta_D) > \hat C$ and $\hat i$ always sends message $s_{CAU}$, it is also easy to show by symmetry that $\widehat{-i}$ always sends message $s_{CAU}$. Step 3: Now suppose $s = s_{IDY}$. In this case, since the relevant parameter is at least $1/2$, the corresponding inequality never holds. Finally, it is easy to check that if $\hat i$ is truthful, $\widehat{-i}$ always sends message $s_{CAU}$; if instead $P(\theta_D) > \bar C$ and $\hat i$ always sends message $s_{IDY}$, it is also easy to show that $\widehat{-i}$ always sends message $s_{CAU}$. This concludes the proof of Proposition 9.