Detecting the Maximum of a Scalar Diffusion with Negative Drift

Let $X$ be a scalar diffusion process with drift coefficient pointing towards the origin, i.e. $X$ is mean-reverting. We denote by $X^*$ the corresponding running maximum and by $T_0$ the first time $X$ hits the level zero. Given an increasing and convex loss function $\ell$, we consider the following optimal stopping problem: $\inf_{0\leq\theta\leq T_0}\mathbb{E}[\ell(X^*_{T_0}-X_\theta)],$ over all stopping times $\theta$ with values in $[0,T_0]$. For the quadratic loss function and under mild conditions, we prove that an optimal stopping time exists and is defined by: $\theta^*=T_0\wedge\inf\{t\geq 0;~X^*_t\geq \gamma(X_t)\},$ where the boundary $\gamma$ is explicitly characterized as the concatenation of the solutions of two equations. We investigate some examples such as the Ornstein-Uhlenbeck process, the CIR--Feller process, as well as the standard and drifted Brownian motions.

1. Introduction. Motivated by applications in portfolio management, Graversen, Peskir, and Shiryaev [6] considered the problem of detecting the maximum of a Brownian motion W on a fixed time period. More precisely, [6] considers the optimal stopping problem inf_{0≤θ≤1} E(W*_1 − W_θ)^p, where W*_t := max_{s≤t} W_s is the running maximum of W, p > 0, and the infimum is taken over all stopping times θ taking values in [0, 1]. Using properties of the Brownian motion and a relevant time change, [6] reduces the above problem to a one-dimensional infinite horizon optimal stopping problem and proves that the optimal stopping rule is given by θ := inf{t ≤ 1; W*_t − W_t ≥ b(t)}, where the free boundary b is an explicit decreasing function.
A first extension of [6] was achieved by Pedersen [10], and later by Du Toit and Peskir [3], in the case of a Brownian motion with constant drift. A similar problem was solved by Shiryaev, Xu, and Zhou [13] in the context of exponential Brownian motion. See also Du Toit and Peskir [5], Dai, Yang, and Zhong [2] and Dai et al. [1].
A related problem is that of predicting the time τ at which the Brownian motion attains its maximum on [0, 1], i.e., minimizing E|θ − τ| over all stopping times θ with values in [0, 1]. This problem can indeed be related to the previous one by the observation of Urusov [14] that E(W_τ − W_θ)² = E|τ − θ| + 1/2 for any stopping time θ. A similar problem formulated in the context of a drifted Brownian motion was solved by Du Toit and Peskir [4], although the latter identity stated by Urusov is no longer valid.
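Urusov's identity above lends itself to a quick numerical sanity check. The following Monte Carlo sketch (the grid size, sample count, and the deterministic choice θ = 1/2 are our own, not from the paper) discretizes W on [0, 1] and compares the two sides of E(W_τ − W_θ)² = E|τ − θ| + 1/2, where τ is the time of the maximum:

```python
import math
import random

def urusov_check(n_paths=5000, n_steps=1000, theta=0.5, seed=0):
    """Monte Carlo check of E(W_tau - W_theta)^2 = E|tau - theta| + 1/2,
    where tau is the time at which W attains its maximum on [0, 1] and
    theta is a deterministic stopping time.  Uses a Gaussian random-walk
    discretization of W on a grid of n_steps points."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    sq = math.sqrt(dt)
    k_theta = int(round(theta * n_steps))
    lhs = rhs = 0.0
    for _ in range(n_paths):
        w = 0.0
        w_max, t_max, w_theta = 0.0, 0.0, 0.0
        for k in range(1, n_steps + 1):
            w += sq * rng.gauss(0.0, 1.0)
            if w > w_max:
                w_max, t_max = w, k * dt
            if k == k_theta:
                w_theta = w
        lhs += (w_max - w_theta) ** 2
        rhs += abs(t_max - theta) + 0.5
    return lhs / n_paths, rhs / n_paths
```

Both sides agree up to Monte Carlo and discretization error; for θ = 1/2 the common value is E|τ − 1/2| + 1/2 = 1/π + 1/2 by the arcsine law.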
In the present paper, we consider a scalar Markov diffusion X, which "mean-reverts" toward the origin starting from a positive initial datum, and we consider the problem of optimal detection of the absolute maximum up to the first hitting time of the origin T_0 := inf{t ≥ 0 : X_t = 0}, namely inf_θ E[ℓ(X*_{T_0} − X_θ)]. Here, the infimum is taken over all stopping times with values in [0, T_0], and ℓ is a nondecreasing and convex function satisfying some additional technical conditions. We solve explicitly this problem as a free boundary problem and exhibit an optimal stopping time of the form θ* = T_0 ∧ inf{t ≥ 0; X*_t ≥ γ(X_t)} for some stopping boundary γ. Our analysis has some similarities with that of Peskir [11]; see also Obloj [9] and Hobson [7].
Notice that the formulation of the above optimal stopping problem involves the hitting time of the origin as the maturity for the problem. From the mathematical viewpoint, this is a crucial simplification, as the value function does not depend on the time variable. From the financial viewpoint, this formulation is also relevant, as it captures the practice of asset managers of trading at the extrema of excursions of some underlying asset. Namely, a popular strategy among portfolio managers is the following:
- Managers identify some mean-reverting asset or portfolio of assets; the portfolio composition may be estimated from historical data by minimizing empirical autocorrelations.
- Managers would then want to buy at the lowest price, along an excursion below the mean, and sell at the highest price, along an excursion above the mean; since trading decisions can occur only at stopping times, the only hope is to better approximate the extrema of the price process.
The above formulation corresponds exactly to a single-excursion version of the asset managers' problem. Clearly, a similar problem with a fixed deterministic time horizon is not suitable for the present practical problem.
Using the dynamic programming approach, our problem leads to a two-dimensional elliptic variational inequality, in contrast with the finite horizon case, where the problem can be reduced to a one-dimensional parabolic variational inequality. A major difficulty in the present context is that, in general, our solution exhibits a nonmonotonic free boundary γ made of two different parts and driven by two different equations. Except for [4], the latter feature does not appear in the literature mentioned above and has the following a posteriori interpretation. Because of the mean-reversion, we expect that stopping is optimal whenever the running maximum X* is sufficiently larger than the level X, which corresponds to the intuitive increasing part of the boundary. On the other hand, for some specific dynamics, we may expect that when the process approaches the origin, the martingale part dominates the mean-reversion, implying that the process has equal chances to be pushed away from the origin, so that the investor may defer the stopping decision. This indeed turns out to be the case for the Ornstein-Uhlenbeck process and induces a decreasing part of the boundary near the origin.
The paper is organized as follows. Section 2 presents the general framework and provides some necessary and sufficient conditions for the problem to be well defined. In section 3, we derive the formulation as a free boundary problem, and we prove a verification result together with some preliminary properties. Sections 4-6 focus on the case of a quadratic loss function. In section 4, we study a certain set Γ+ which plays an essential role in the construction of the solution. The candidate boundary is exhibited in section 5, and in section 6 the corresponding candidate value function is shown to satisfy the assumptions of the verification result of section 3. Section 7 is dedicated to some examples. In section 8, we provide sufficient conditions which guarantee that a similar solution is obtained for a general loss function.

2. Problem formulation.
Let W be a scalar Brownian motion on the complete probability space (Ω, F, P), and denote by F = {F_t, t ≥ 0} the corresponding augmented canonical filtration. Given two Lipschitz functions μ, σ : R → R, we consider the scalar diffusion defined by the stochastic differential equation dX_t = μ(X_t) dt + σ(X_t) dW_t, together with some initial datum X_0 > 0. We assume throughout that μ < 0 and σ > 0 on (0, ∞) (2.1), as well as the following stronger restriction: the coefficient α := −2μ/σ² is positive, nondecreasing, and concave on (0, ∞) (2.2). Conditions (2.2) are needed only for technical reasons. See, in particular, Remark 2.2 for some crucial implications of the concavity condition. In the context of our problem defined below, we shall consider only the process X up to the first hitting time of 0. Therefore the negative drift in condition (2.1) models the mean-reversion of X. Notice that we could formulate a symmetric problem on the negative real line under the condition of a positive drift on (−∞, 0).
The scale function S is defined by (see [8]) S(x) := ∫_0^x exp(∫_0^y α(u) du) dy, where α is defined in (2.2). We denote by T_y := inf{t > 0 : X_t = y} the first hitting time of the barrier y. We recall that, for the above homogeneous scalar diffusion with positive diffusion coefficient, we have P_x(T_y < T_0) = S(x)/S(y) for all 0 < x < y. Our main objective is to solve the optimization problem inf_{θ∈T_0} E[ℓ(X*_{T_0} − X_θ)], where X*_t := max_{s≤t} X_s, t ≥ 0, is the running maximum process of X; ℓ : R_+ → R_+ is a nondecreasing, strictly convex function; and T_0 is the collection of all F stopping times θ with θ ≤ T_0 almost surely.
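The scale function and the hitting identity above can be checked by simulation. The sketch below computes S by quadrature and compares S(x)/S(y) with a Monte Carlo estimate of P_x(T_y < T_0); the Ornstein-Uhlenbeck test case μ(x) = −x, σ = √2 (so that α(u) = u), as well as all step sizes and sample counts, are our own choices:

```python
import math
import random

def scale_function(alpha, x, n=2000):
    """S(x) = int_0^x exp(int_0^y alpha(u) du) dy, normalized so that
    S(0) = 0 and S'(0) = 1, by trapezoidal quadrature; alpha = -2*mu/sigma^2."""
    h = x / n
    inner = 0.0   # running value of int_0^y alpha(u) du
    total = 0.0
    prev = 1.0    # exp(inner) at y = 0
    for k in range(1, n + 1):
        y = k * h
        inner += 0.5 * h * (alpha(y - h) + alpha(y))
        cur = math.exp(inner)
        total += 0.5 * h * (prev + cur)
        prev = cur
    return total

def hit_prob_mc(mu, sigma, x, y, n_paths=4000, dt=2.5e-4, seed=1):
    """Monte Carlo estimate of P_x(T_y < T_0) by an Euler scheme."""
    rng = random.Random(seed)
    sq = math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        X = x
        while 0.0 < X < y:
            X += mu(X) * dt + sigma(X) * sq * rng.gauss(0.0, 1.0)
        if X >= y:
            hits += 1
    return hits / n_paths
```

For x = 0.5 and y = 1, the quadrature ratio S(0.5)/S(1) ≈ 0.437 and the Monte Carlo frequency agree up to sampling and discretization error.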
Remark 2.3. Our main results (sections 4-6) concern the quadratic loss function ℓ(x) = x²/2. However, a large part of the analysis is valid for general loss functions. In particular, we provide a natural extension of the quadratic case in section 8, but we have not succeeded in obtaining satisfactory conditions which guarantee that the extension holds true.
We shall approach this problem by the dynamic programming technique. We then introduce the dynamic version V(x, z) := inf_{θ∈T_0} E_{x,z}[ℓ(Z_{T_0} − X_θ)], where Z denotes the running maximum process and E_{x,z} denotes the expectation operator conditional on (X_0, Z_0) = (x, z). Clearly, the process (X, Z) takes values in the state space Δ := {(x, z) : 0 ≤ x ≤ z}, and we may rewrite this problem in the standard form of an optimal stopping problem (2.9); we deduce from (2.5) that the reward function g is given by g(x, z) = E_{x,z}[ℓ(Z_{T_0} − x)], where ℓ' is the generalized derivative of ℓ, and the last expression in (2.10) is obtained by integration by parts together with the observation (2.11), valid for all x ≥ 0. Proof of (2.11). Denote R := S^{-1}, and assume x = 0, without loss of generality; the claim then follows by direct computation (2.12). Remark 2.4. For the linear loss function ℓ(x) = x, we have V = g. We now provide, in Proposition 2.1, necessary and sufficient conditions on the loss function ℓ which ensure that V is finite on Δ. If, in addition, condition (2.13) holds for every (x, z) ∈ Δ, then all of the items of that proposition are equivalent. The proof of this proposition, together with a discussion of the conditions, is reported in section 9.1.

3. A verification result.
From now on, we assume that E_{x,z}[ℓ(Z_{T_0})] < ∞ for all (x, z) ∈ Δ (3.1), so that, by Proposition 2.1, g and V are finite everywhere. Our general approach to solving the optimal detection problem is to exhibit a candidate solution for the corresponding dynamic programming equation max{−Lv, v − g} = 0 on Int(Δ) (3.2), complemented with appropriate boundary conditions, where L is the second order differential operator Lv := v_xx − α v_x and α is defined as in (2.2). Notice that LS = 0. We do not intend to prove directly that V satisfies this differential equation. Instead, we shall guess a candidate solution v of (3.2) and show that v indeed coincides with the value function V by a verification argument.
In order to exhibit a solution of (3.2), we guess that there should exist a free boundary γ(x) so that stopping is optimal in the region {z ≥ γ(x)}, while continuation is optimal in the remaining region {z < γ(x)}. If such a stopping boundary exists, then the above dynamic programming equation reduces to a system on the continuation region. The verification step requires that the value function be C¹ and piecewise C² in order to allow for the application of Itô's formula. We then complement the above system by the continuity and the smooth-fit conditions. Our objective is to find a candidate v which satisfies (3.4)-(3.8) and an optimal stopping boundary γ so as to apply the following verification result.
Theorem 3.1. Let γ be a continuous function, and let v satisfy (3.4)-(3.8) together with the growth and regularity conditions below. Then v = V, and θ* = T_0 ∧ inf{t ≥ 0; Z_t ≥ γ(X_t)} is an optimal stopping time. Moreover, if τ is another optimal stopping time, then θ* ≤ τ a.s. Proof. (i) We first prove that V ≥ v. Let θ ∈ T_0, and for n ∈ N, define θ_n as a suitable localizing sequence. Then from the assumed regularity of v, we may apply Itô's formula. Using the facts that v_z(X_t, Z_t) dZ_t = v_z(Z_t, Z_t) dZ_t = 0, Lv ≥ 0, and v ≤ g, this implies that v(x, z) ≤ E_{x,z}[g(X_{θ_n}, Z_{θ_n})]. Clearly, as n → ∞, θ_n → θ a.s. Notice that 0 ≤ ℓ(Z_{T_0} − X_{θ_n}) ≤ ℓ(Z_{T_0}) ∈ L¹(P) by (3.1). Then the inequality V ≥ v follows from the dominated convergence theorem. (ii) By the assumed regularity of v, we have Lv(X_t, Z_t) = 0 for t ∈ [0, θ*), and by the same calculation as in (i), we obtain (3.10). Since v is bounded from below and v ≤ g, we have |v| ≤ c + g for some constant c. Since 0 ≤ ℓ(Z_{T_0} − X_{θ_n}) ≤ ℓ(Z_{T_0}) ∈ L¹(P) by (3.1), the sequence (E[ℓ(Z_{T_0})|X_{θ_n}, Z_{θ_n}])_n is uniformly integrable. This property is then inherited by the sequences (g(X_{θ_n}, Z_{θ_n}))_n and (v(X_{θ_n}, Z_{θ_n}))_n. Then, sending n → ∞ in (3.10), the optimality of θ* follows from the continuity of γ. (iii) Finally, we show the minimality of θ*. Assume to the contrary that there exists an optimal stopping time τ with P(τ < θ*) > 0. On {τ < θ*}, we have by assumption V(X_τ, Z_τ) < g(X_τ, Z_τ), while we always have V(X_τ, Z_τ) ≤ g(X_τ, Z_τ). This leads to a contradiction, where the last inequality follows immediately from the definition of V.
In the rest of this paper, our objective is to exhibit functions γ and v satisfying the assumptions of the previous theorem. For the quadratic loss function, this is the content of our main theorem, Theorem 6.1. In view of (3.5), the stopping region must be contained in the set Γ+ := {(x, z) ∈ Δ : Lg(x, z) ≥ 0} defined by (3.11). We therefore need to study the structure of the set Γ+.
In the subsequent sections we shall first focus on quadratic loss functions.For general loss functions, we shall provide some conditions which guarantee that the structure of the solution agrees with that of the quadratic case; see section 8.

4. The set Γ+ for a quadratic loss function. Throughout this section as well as sections 5 and 6, we consider the quadratic loss function ℓ(x) = x²/2, and we assume that the coefficient α satisfies the additional condition (4.1). Since α is positive on (0, ∞) by (2.2), we immediately check that (3.1) holds true, so that g and V are finite on Δ. In order to study the set Γ+ defined by (3.11), we compute Lg explicitly. Since S(z) > 0, it follows that, for every fixed x ≥ 0, the sign of the function z ↦ Lg(x, z) changes at a threshold Γ(x). By direct computation, using the concavity, the nondecrease, and the positivity of α on (0, ∞), we see that, for x > 0, the function Γ is U-shaped in the sense of Proposition 4.2(i). We first isolate some asymptotic results that will be needed.
Remark 4.2. The fact that Γ_0 < Γ_∞ implies, in the quadratic case, that the increasing part of Γ will never be reduced to a subset of the diagonal, or, in other words, that Γ(ζ) > ζ.
Figures 1(a) and 1(b) exhibit the two possible shapes of the function Γ and the location of Γ+. Notice that in both cases, Γ_∞ can be finite or infinite. We refer the reader to section 7 for examples of both cases.
We now give a result, stronger than Proposition 4.2(ii) above, concerning the behavior of Γ at infinity. Recall that Γ_∞ was defined by (4.4).
By a Taylor expansion, together with the boundedness of α/S' = S''/(S')², the claim follows.
By the definition of the function Γ, this implies that Γ(x) = x whenever A_∞ ≥ 0, and Γ(x) > x whenever A_∞ < 0.
(ii) We now assume that lim_{x→∞} α(x) = ∞, and we intend to prove that A_∞ = ∞, which would imply that Γ_∞ < ∞ by Case 2 above. Let x ≥ 1. Since α is nondecreasing, we obtain a lower bound on the corresponding integrand. On the other hand, since α is not bounded, the left-hand side is not integrable at infinity, so the right-hand side is also not integrable. In other words, A_∞ = ∞.

5. The stopping boundary in the quadratic case. We now turn to the characterization of the stopping boundary γ. Following Proposition 4.2(i), we define Γ↓ and Γ↑ as the restrictions of Γ to the intervals [0, ζ] and [ζ, ∞).

5.1. The increasing part of the stopping boundary.
We first guess that the free boundary γ is continuous and increasing near the diagonal. Then, denoting its inverse by γ^{-1}, the continuity and smooth-fit conditions (3.8) determine v along the boundary. Finally, the Neumann condition (3.7), together with (2.10) and the specific form of the loss function ℓ, implies that the boundary γ satisfies a first order ODE (5.1). In what follows, we take this ODE (with no initial condition!) as a starting point to construct the boundary γ. Notice that this ODE has infinitely many solutions, as the Cauchy-Lipschitz condition is locally satisfied whenever (5.1) is complemented with the condition γ(x_0) = z_0 for any 0 < x_0 < z_0. This feature is similar to that in Peskir [11]. The following result selects an appropriate solution of (5.1).
Proposition 5.1. Let the coefficient α satisfy conditions (2.2) and (4.1). Then there exists a continuous function γ defined on R_+ whose graph {(x, γ(x)), x ≥ 0} enjoys the properties stated below. The remaining part of this section is dedicated to the proof of this result. We first introduce some notation. We recall from Remark 4.2 that the graph of Γ↑ is not reduced to the diagonal, and we denote by b the corresponding level, which may take an infinite value. We also introduce a distance function d, with the convention that d(∅) = ∞. Let x_0 ∈ D− be an arbitrary point. For all z_0 > x_0, we denote by γ^{z_0}_{x_0} the maximal solution of the Cauchy problem (5.1) complemented with the condition γ(x_0) = z_0, and we denote by (l^{z_0}_{x_0}, r^{z_0}_{x_0}) the associated (open) interval. Notice that since the right-hand side of ODE (5.1) is locally Lipschitz on the set {(x, γ), 0 < x < γ}, the maximal solution is defined as long as 0 < x < γ.
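Since ODE (5.1) depends on the data of the problem, we illustrate only the selection mechanism: bisection ("shooting") on the initial condition of a Cauchy problem to locate the critical initial value separating maximal solutions that stay above the constraint from those that reach it. The toy equation d' = d − 1/x below is our own choice and is not ODE (5.1); its critical initial value at x_0 = 1 is e·E_1(1) ≈ 0.596, where E_1 is the exponential integral:

```python
import math

def survives(d0, x0=1.0, x_max=15.0, h=1e-3):
    """Euler-integrate d' = d - 1/x from (x0, d0); return True if the
    solution stays positive (stays above the constraint) up to x_max."""
    d, x = d0, x0
    while x < x_max:
        d += h * (d - 1.0 / x)
        x += h
        if d <= 0.0:
            return False
    return True

def critical_d0(lo=0.0, hi=2.0, tol=1e-6):
    """Bisection for the critical initial condition separating the two
    regimes -- the numerical analogue of z*(x_0) in the construction."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if survives(mid):
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Solutions started above the critical value grow without bound, while those started below reach zero in finite time, mirroring the dichotomy exploited in the proof.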
The following result provides more properties of the maximal solutions. Lemma 5.2. Assume that α satisfies conditions (2.2), and let x_0 ∈ D− be fixed.
(ii) a strict negativity condition holds for any x ∈ (x_0, r^z_{x_0}); (iii) for z sufficiently large, we have r^z_{x_0} = +∞. Before proceeding to the proof of this result, we turn to the main construction of the stopping boundary γ. Let Z(x_0) and z*(x_0) be defined by (5.5). Moreover, whenever z*(x_0) < ∞, we denote by γ*_{x_0} and r*_{x_0} the corresponding maximal solution and its right endpoint. Lemma 5.3. Assume that α satisfies conditions (2.2), and let x_0 be arbitrary in D−. If Lg(x_1, γ^z_{x_0}(x_1)) < 0 for some x_1 ≥ x_0, then by (5.1), γ^z_{x_0} is decreasing in a neighborhood of x_1 and as long as (x, γ^z_{x_0}(x)) ∈ Int(Γ−); this implies that r^z_{x_0} < ∞. Therefore Z(x_0) is bounded by a, and z*(x_0) < ∞. Since x_0 ∈ D−, we have Γ(x_0) ∈ Z(x_0), and therefore z*(x_0) ≥ Γ(x_0). We next assume to the contrary that z*(x_0) = Γ(x_0) and work toward a contradiction. Notice that D− is an open set as a consequence of the continuity of the function Lg. Then there exists ε > 0 such that (x_0, x_0 + 2ε) ⊂ D−, and we may pick x_1 ∈ (x_0, x_0 + ε) and z ∈ Z(x_1). For the same reasons as before, we obtain a contradiction. We are now ready for the following proof. Proof of Proposition 5.1. We first define γ and then prove the announced properties.
1. We first define γ by concatenating the maximal solutions above; steps 2 and 3 establish (i) and (ii). 4. We next prove (iii). Assume Γ_∞ < ∞, and let x_0 ∈ D− be arbitrary. Then by the continuity of Lg, Lg(Γ_∞, Γ_∞) = 0, and therefore x_0 < Γ_∞. Assume that r*_{x_0} > Γ_∞, and let us work toward a contradiction. By continuity of the flow with respect to the initial data, there exists ε > 0 such that, for any z ∈ (z*(x_0) − ε, z*(x_0)), the solution γ^z_{x_0} is defined on [x_0, Γ_∞]. By Lemma 5.2(ii), we deduce that (x, γ^z_{x_0}(x)) ∈ Γ+ on the same interval. By the definition of Γ_∞ and recalling that ∂Lg/∂z > 0, we get that z ∈ Z(x_0). By the arbitrariness of z in (z*(x_0) − ε, z*(x_0)), this contradicts the definition of z*(x_0). 5. We finally prove (iv). First, the claim is obvious when D is bounded, as γ(x) = x for x ≥ sup D. We then concentrate on the case where D is not bounded. From Proposition 4.3, either D− is bounded or Lg(x, x) < 0 for any x ∈ [Γ_max, ∞), and by Lemma 5.3, r*_{x_0} ≥ u(x_0). In both cases, there exists x_0 ∈ D− such that r*_{x_0} = ∞. To complete the proof, we now intend to show that, for a > 0 and x > x_0 large enough, γ(x) ≤ x + a.
The function γ↓ defined in the previous proposition will be the second part of our boundary. We denote by γ↑ the boundary constructed in the previous paragraph. We now check that the two boundaries γ↑ and γ↓ do intersect. This is provided in the following proposition.
Notice also that if γ↓ is degenerate, then γ = γ↑. Remark 5.1. Notice that, if x̄ > 0, γ is not differentiable at the point x̄. Indeed, assume to the contrary that x̄ > 0 and γ is differentiable at x̄; it then follows from the increase of γ↑ and the decrease of γ↓ that γ'(x̄) = 0. By ODE (5.1) satisfied by γ↑, we see that Lg(x̄, z̄) = 0, so that z̄ = Γ(x̄). Following the proof of Proposition 5.5, this also implies that x̄ = ζ, the point where the minimum of Γ is attained. By differentiating (5.1) and using γ'(x̄) = 0, we can then compute the second derivative of γ at the right of x̄ and derive a contradiction.

6. Definition of v and verification result.
We now have all the ingredients to define our candidate function v and to prove that it coincides with the value function V defined by (2.7).
We first decompose Δ into four disjoint sets A_1, A_2, A_3, A_4 forming a partition of Δ. Notice that if (x, z) ∈ A_2, then by Proposition 5.1(iii), x ≤ Γ_∞; recall that x̄ < z̄ were defined in Proposition 5.5, while φ↓ and φ↑ were defined by (5.15). Notice also that A_2 is not necessarily connected. We refer to Figure 3 for a better understanding of the different areas. Let K be given by (6.1); we then define v by (6.2)-(6.5). The main result of this section is the following. Theorem 6.1. Let the coefficient α satisfy conditions (2.2) and (4.1). Let γ be given by Proposition 5.1 and v be defined by (6.2)-(6.5). Then v = V, and θ* = inf{t ≥ 0; Z_t ≥ γ(X_t)} is an optimal stopping time.
Moreover, if τ is another optimal stopping time, then θ* ≤ τ a.s. Proof. From Proposition 5.1, Lemmas 6.2 and 6.3, and Propositions 6.4 and 6.5, v and γ satisfy the assumptions of Theorem 3.1.

Lemma 6.2. The function v is C¹ and piecewise C² on Δ; more precisely, v is C² except on the boundaries separating the sets A_i.
Proof. From the definition of v, φ↓, and φ↑, it is immediate that v can be extended as a C^{2,1} function on any Cl(A_i).
Let us denote by v_i the expression of v on Cl(A_i). Since φ↓ satisfies (5.12), it is immediate that v is C⁰ w.r.t. (x, z) and C¹ w.r.t. x on the boundary (v_1 with v_4, and v_2 with v_4). On z = z̄, it is easy to check that the expressions of v_2 and v_3 coincide. It is also true for v_1 and v_3 since φ↓ satisfies (5.12) and x̄ = φ↓(z̄). It is straightforward that it is also C¹, and even C², w.r.t. x.
We now show that v satisfies the boundary conditions. Lemma 6.3. For all z ≥ 0, v(0, z) = z²/2 and v_z(z, z) = 0. Proof. Since S(0) = 0, v(0, z) = z²/2 is immediate. For (z, z) ∈ Int(A_4), since g_z(z, z) = 0, we have v_z(z, z) = 0. For (z, z) ∈ Int(A_3), it is immediate that v_z(z, z) = 0. For (z, z) ∈ Int(A_2), the property follows since γ↑ satisfies ODE (5.1). To complete the proof, we need to show that v_z(z, z) = 0 at the remaining junction points. The previous computations and the definition of v on A_3 and A_4 show that, at those points, v_z(z, z) has right and left limits that are both equal to 0, so we have the result. Proposition 6.4. Let the coefficient α satisfy conditions (2.2) and (4.1). Then the function v is bounded from below and lim_{z→∞} v(z, z) − g(z, z) = 0.
Proof. If Γ_∞ < ∞, this is immediate since, in this case, by Proposition 5.1(iii), v = g outside a compact set, v is continuous, and g is nonnegative. So let us focus on the case Γ_∞ = ∞. If (4.1) is satisfied, by Proposition 4.3, we know that α is bounded. We write α ≤ M.
We first prove that v is bounded from below and that v(z, z) − g(φ↑(z), z) → 0 as z → ∞. A_1 is bounded because of the definition of γ↓, and A_3 is bounded by definition. Since v = g on A_4 and g ≥ 0, we need only check that v is bounded from below on A_2.
Finally, we show that g(z, z) − g(φ↑(z), z) → 0. Indeed, using Proposition 4.1(ii) and (6.7), and then Proposition 4.1 again, we obtain the corresponding asymptotic expansions, and as a consequence, lim_{z→∞} v(z, z) − g(z, z) = 0. The final property of v required by the verification Theorem 3.1 is the following. Proposition 6.5. Let the coefficient α satisfy conditions (2.2) and (4.1). Then v ≤ g on Δ and v < g on the continuation region {(x, z) ∈ Δ; x > 0 and z < γ(x)}.
Proof. We analyze separately the different subsets A_1, A_2, A_3.
On A_3: Recall the definition of K given by (6.1). For x ≤ z ≤ z̄, we compute the difference g − v explicitly; the latter expression is nonnegative, and positive if x ≠ 0. Since v and g are continuous, the result for A_1 and A_2 tells us that v(·, z) ≤ g(·, z), so that v ≤ g on A_3, and v < g if x ≠ 0.

7. Examples.
7.1. Brownian motion with negative drift. We first observe that the problem is degenerate for a standard Brownian motion. Indeed, in this case, α(x) = 0 and S(x) = x. Since (2.11) will never be satisfied for a nondecreasing and convex function ℓ, Proposition 2.1 tells us that V and g will be infinite if ℓ satisfies (2.13). Moreover, for any 0 < x ≤ z and any convex and nondecreasing function ℓ, we have the following: (i) E_{x,z}[T_0] = +∞; (ii) E_{x,z}[Z_{T_0}] = E_{x,z}[(Z_{T_0})²] = +∞; (iii) V and g are infinite everywhere except for x = 0. Point (i) is a classical result, (ii) comes directly from (2.10), and (iii) comes from (ii) and arguments similar to the proof of Proposition 2.1.
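Since S(x) = x, the hitting identity gives P_x(T_y < T_0) = x/y, so that P_{x,z}(Z_{T_0} ≥ y) decays only like 1/y and E_{x,z}[Z_{T_0}] = +∞, consistent with point (ii). A minimal sketch of the identity x/y (a simple symmetric random walk as a discrete stand-in for standard Brownian motion; the grid and sample sizes are our own choices):

```python
import random

def ruin_prob(k, n, n_paths=4000, seed=5):
    """Probability that a simple symmetric random walk started at k
    reaches n before 0 -- the discrete analogue of
    P_x(T_y < T_0) = S(x)/S(y) = x/y for standard Brownian motion."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        pos = k
        while 0 < pos < n:
            pos += 1 if rng.random() < 0.5 else -1
        if pos == n:
            hits += 1
    return hits / n_paths
```

For k = 15 and n = 50, the estimated probability is close to 15/50 = 0.3, matching the classical gambler's ruin formula.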
We now consider the following diffusion, for constant μ < 0 and σ > 0: dX_t = μ dt + σ dW_t, so that α = −2μ/σ² and S'(x) = e^{αx}. We have an interesting homogeneity result for this process, which allows us to assume that α = 1. In the following statement, we denote by γ_α the corresponding boundary, given by Theorem 6.1.
Proposition 7.1. Let α > 0 be given, and consider the quadratic loss function ℓ(x) = x²/2. Then the corresponding boundary satisfies γ_α(x) = α^{-1} γ_1(αx) for all x ≥ 0. Proof. Let X be a drifted Brownian motion with parameter α_X = α, and define X̃ = αX. The dynamics of X̃ is dX̃_t = αμ dt + ασ dW_t, so that α_{X̃} = −2μα/(σ²α²) = 1. Let Z̃ be the corresponding running maximum, started from αz. Then Z̃ = αZ, T_0(X) = T_0(X̃) = T_0, and for any stopping time θ, E[(Z̃_{T_0} − X̃_θ)²] = α² E[(Z_{T_0} − X_θ)²]. This equality implies that if τ is optimal for one problem, it is also optimal for the other one. Together with the minimality of θ*, this yields the announced relation between γ_α and γ_1, which completes the proof.
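For the quadratic loss, the law of Z_{T_0} is determined by the scale function alone, hence by α = −2μ/σ²: two parameter sets with the same α must give the same stop-now cost g(x, x) = E_x[(Z_{T_0} − x)²/2]. The following Monte Carlo sketch is consistent with this invariance; the Euler step, the sample count, and the test pairs (μ, σ) = (−1, √2) and (−0.5, 1), both with α = 1, are our own choices:

```python
import math
import random

def stop_now_cost(mu, sigma, x, n_paths=1500, dt=1e-3, seed=2):
    """Monte Carlo estimate of g(x, x) = E_x[(Z_T0 - x)^2 / 2] for
    dX = mu dt + sigma dW (Euler scheme), where Z is the running
    maximum and T0 the first hitting time of 0."""
    rng = random.Random(seed)
    sq = sigma * math.sqrt(dt)
    acc = 0.0
    for _ in range(n_paths):
        X = Z = x
        while X > 0.0:
            X += mu * dt + sq * rng.gauss(0.0, 1.0)
            if X > Z:
                Z = X
        acc += 0.5 * (Z - x) ** 2
    return acc / n_paths
```

Both runs return the same value up to Monte Carlo and discretization error, illustrating that the problem depends on (μ, σ) only through α.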

7.2. The CIR--Feller process.
Let b ≥ 0, μ < 0, and σ > 0; then the dynamics of X is dX_t = μX_t dt + σ√(X_t + b) dW_t. Here, α(x) = αx/(x + b) with α := −2μ/σ² > 0. In the degenerate case b = 0, we are reduced to the context of the Brownian motion with negative drift. We then focus on the case b > 0 with a quadratic loss function ℓ(x) = x²/2. Proceeding as in the proof of Proposition 4.3, we can see that Γ_∞ < ∞, unlike in the case b = 0.
7.3. Ornstein-Uhlenbeck process. The dynamics of X is now given by dX_t = μX_t dt + σ dW_t, so that α(x) = αx with α := −2μ/σ², and S'(x) = e^{αx²/2}. This case and the Brownian motion with negative drift can be seen as the extreme cases of our framework. Indeed, here α(x) = αx is the "most increasing" concave function, while a constant α is the "least nondecreasing" function.
As for the Brownian motion with negative drift, we have a homogeneity result for this process, for ℓ(x) = x²/2, which allows us to assume that α(x) = x. Proposition 7.2. Let α(x) = αx with α > 0 and ℓ(x) = x²/2. Then the corresponding boundary satisfies γ_α(x) = α^{-1/2} γ_1(√α x). Proof. We follow the proof in the case of a Brownian motion with negative drift. Let X be a process with α_X(x) = αx. Then the process X̃ = √α X is such that α_{X̃}(x) = x. Denote by Z̃ the corresponding running maximum process. Then Z̃ = √α Z, T_0(X) = T_0(X̃) = T_0, and for any stopping time θ, E[(Z̃_{T_0} − X̃_θ)²] = α E[(Z_{T_0} − X_θ)²]. Then, by the minimality of θ*, we obtain the required result. Still in the case ℓ(x) = x²/2, we next show that Γ is decreasing in a neighborhood of 0, so that ζ > 0, and that Γ_∞ < +∞.
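The role of the stopping boundary can be illustrated numerically for the normalized Ornstein-Uhlenbeck process α(x) = x, i.e. dX = −X dt + √2 dW. The true γ is not available in closed form, so the linear rule γ(x) = x + b below is a crude hypothetical stand-in (b chosen by hand); the sketch compares its Monte Carlo cost with stopping immediately and with waiting until T_0:

```python
import math
import random

def ou_cost(b=None, x0=1.0, n_paths=1500, dt=1e-3, seed=4):
    """Monte Carlo cost E[(Z_T0 - X_theta)^2 / 2] for dX = -X dt + sqrt(2) dW
    started at X_0 = Z_0 = x0 (so that alpha(x) = x).  b is None: stop at
    theta = 0; b = inf: wait until T0; otherwise stop at
    theta = T0 ^ inf{t : Z_t >= X_t + b} (a linear stand-in boundary)."""
    rng = random.Random(seed)
    sq = math.sqrt(2.0 * dt)
    acc = 0.0
    for _ in range(n_paths):
        X = Z = x0
        stopped_at = x0 if b is None else None
        while True:
            X += -X * dt + sq * rng.gauss(0.0, 1.0)
            if X > Z:
                Z = X
            if X <= 0.0:
                break
            if stopped_at is None and Z >= X + b:
                stopped_at = X          # stop here, but keep tracking Z to T0
        if stopped_at is None:          # T0 came first: X_theta = 0
            stopped_at = 0.0
        acc += 0.5 * (Z - stopped_at) ** 2
    return acc / n_paths
```

Waiting until T_0 is by far the worst of the three rules, since the full maximum Z_{T_0} is then paid against X = 0; both early rules improve on it substantially.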
Proposition 7.3. For an Ornstein-Uhlenbeck process:
• Lg(x, Γ_0) > 0 for x > 0 in a neighborhood of 0, and therefore Γ↓ is not degenerate;
• Lg(z, z) > 0 in a neighborhood of +∞, and therefore Γ_∞ < +∞.
Proof. As x → 0, we can expand Lg(x, Γ_0). Since α > 0 and Γ_0 > 0 by Proposition 4.2, Lg(x, Γ_0) > 0 for x > 0 and sufficiently small. Finally, Figure 5 is a numerical computation of the boundary γ for ℓ(x) = x²/2. While we do not prove it, we can see that γ is, in this case, decreasing first and then increasing. The computation was carried out for α = 1; by Proposition 7.2, this choice does not affect the shape.

8. Extension to general loss functions. Except for sections 2 and 3, the previous analysis considered only the case of the quadratic loss function ℓ(x) = x²/2. In fact, as the reader has probably noticed, the quadratic loss function plays a special role here, since ℓ^{(3)} = 0, inducing a substantial simplification of the analysis of the set Γ+ and of the asymptotic behavior of Lg.
Unfortunately, we were not able to extend some crucial properties established in the quadratic case. Therefore, this section can be seen as a first attempt at the present more general framework. In particular, the case of a general loss function introduces the possibility that the free boundary γ is decreasing until it reaches the diagonal, a case which was not possible for a quadratic loss function.

8.1. Additional assumptions and shape of Γ.
Recall from section 3 that we assume (3.1) holds true. Moreover, if ℓ is not the quadratic loss function, we require the following technical assumptions: ℓ is C³, ℓ' > 0, ℓ'' > 0, ℓ^{(3)} ≥ 0, and ℓ, ℓ', ℓ'' satisfy (3.1) (8.1). Notice that (8.1)-(8.3) are satisfied for exponential loss functions ℓ(x) = λe^x with λ > 0, or for power loss functions of the form λ(x + ε)^p with ε > 0 and p ≥ 2. They are mainly needed in order to derive asymptotic expansions similar to Proposition 4.1.
The main problem is that Lg is no longer concave w.r.t. x, and it is not clear how to show that Γ is U-shaped. In fact, Propositions 4.2(i) and 4.3 are crucial, but we are unable to prove them in general. Therefore we assume the following conditions: there exists ζ ≥ 0 such that Γ is decreasing on [0, ζ] and increasing on [ζ, +∞) (8.4), together with a condition on the limiting behavior of Γ at infinity (8.5). Unfortunately, we failed to derive conditions directly on ℓ and α that guarantee that these conditions hold true.
In the present context, notice that, in contrast to Proposition 4.2(iii), Γ_0 may be larger than Γ_∞. This means that we have a new possibility for the shape of γ: γ↑(x) = x for every x ≥ x*.

Since ℓ'' > 0, the Cauchy problem is well defined for any x_0 > 0 and γ(x_0) > x_0, and the maximal solution is defined as long as γ(x) > x.
In order to extend Proposition 5.1, the asymptotic results of Proposition 4.1 must be adapted; see section 9.3. Using Proposition 9.1, we can easily adapt the proofs of Lemmas 5.2 and 5.3 and show that they still hold true. However, in order to adapt the proof of Proposition 5.1, we make the following assumption: either α(x) → ∞ as x → ∞ (8.7), or, in Proposition 9.1(ii), for any a > 0 and ϕ(z) = z − a, δ ≡ 1 (8.8). This additional assumption is made in order to prove that, for sufficiently large x, Lg(x, x + a) < 0. Then Proposition 5.5 still holds, except that a new case can occur; that is, γ↓(x*) = x* and x* ≥ Γ_∞, which implies Γ_0 > Γ_∞.
Remark 8.1. In the new case stated above, the condition x* ≥ Γ_∞ is not a priori a consequence of γ↓(x*) = x*, since there is no reason in general for the set Int(Γ−) to be connected.
Finally, Theorem 6.1 can be proved in the same way for a general loss function, using the asymptotic expansions of Proposition 9.1, where v is defined by formulas generalizing (6.2) to (6.5).
Assume now that condition (2.13) holds true. The implications (i) =⇒ (ii) =⇒ (iii) follow immediately from the definition of g in (2.10), together with condition (2.13) and the nondecrease of ℓ.
Notice that if (2.11) holds for x = 0, then (2.10) is also valid for x = 0. Remark 9.1. Without assuming (2.13), (i) and (ii) can hold true while (iii) does not. Indeed, consider for example a process with scale function S(x) = e^{x²} together with a suitably chosen loss function ℓ. Proof. (i) The proof is close to the proof of Proposition 4.1(ii). First, as ϕ is measurable and satisfies 0 ≤ ϕ(z) ≤ z, the expressions make sense and the integrals exist. Then, using Proposition 4.1(i) and integrating by parts, we obtain the announced expansion. As ϕ(z) < z if z > 0, ℓ'(z − ϕ(z)) > 0, and this implies the claim.

Received by the editors June 23, 2011; accepted for publication (in revised form) May 7, 2012; published electronically September 4, 2012. This research was supported by the Chair Financial Risks of the Risk Foundation sponsored by Société Générale, the Chair Derivatives of the Future sponsored by the Fédération Bancaire Française, and the Chair Finance and Sustainable Development sponsored by EDF and Calyon.

Fig. 2. On the left part, the graph of γ is inside Int(Γ−) and γ is decreasing.

8.2. The increasing part of the boundary. In order to determine the increasing part of the free boundary, ODE (5.1) is replaced by γ' = Lg(x, γ) / [ℓ''(γ − x)(1 − S(x)/S(γ))].

(a) S(x + a)/S(x) > 1 + ε, while the other arguments of the proof remain exactly the same.