Completeness in standard and differential approximation classes: Poly-(D)APX- and (D)PTAS-completeness

Several problems are known to be APX-, DAPX-, PTAS-, or Poly-APX-PB-complete under suitably defined approximation-preserving reductions. But, to our knowledge, no natural problem is known to be PTAS-complete and no problem at all is known to be Poly-APX-complete. On the other hand, DPTAS- and Poly-DAPX-completeness have not been studied until now. We first prove in this paper the existence of natural Poly-APX- and Poly-DAPX-complete problems under the well-known PTAS-reduction and under the DPTAS-reduction (defined in "G. Ausiello, C. Bazgan, M. Demange, and V. Th. Paschos, Completeness in differential approximation classes, MFCS'03"), respectively. Next, we deal with PTAS- and DPTAS-completeness. We introduce approximation-preserving reductions, called FT and DFT, respectively, and prove that, under these new reductions, natural problems are PTAS-complete, or DPTAS-complete. Then, we deal with the existence of intermediate problems under our reductions and we partially answer this question by showing that the existence of NPO-intermediate problems under the Turing-reduction is a sufficient condition for the existence of intermediate problems under both FT- and DFT-reductions. Finally, we show that min coloring is DAPX-complete under the DPTAS-reduction. This is the first DAPX-complete problem that is not simultaneously APX-complete.


Introduction
Many NP-complete problems are decision versions of natural optimization problems. Since, unless P = NP, such problems cannot be solved in polynomial time, a major question is to find polynomial algorithms producing solutions "close to the optimum" (in some prespecified sense). Here, we deal with polynomial approximation of NPO problems, i.e., of optimization problems the decision versions of which are in NP. A polynomial approximation algorithm A for an optimization problem Π is a polynomial time algorithm that produces, for any instance x of Π, a feasible solution y = A(x). The quality of y is estimated by computing the so-called approximation ratio. Two approximation ratios are commonly used in order to evaluate the approximation capacity of an algorithm: the standard ratio and the differential ratio.
By means of these ratios, NPO problems are classified with respect to their approximability properties. Particularly interesting approximation classes are, for the standard approximation paradigm, the classes Poly-APX (the class of problems approximable within a ratio that is a polynomial, or the inverse of a polynomial when dealing with maximization problems, in the size of the instance), APX (the class of constant-approximable problems), PTAS (the class of problems admitting polynomial time approximation schemata) and FPTAS (the class of problems admitting fully polynomial time approximation schemata). Analogous classes can be defined under the differential approximation paradigm: Poly-DAPX, DAPX, DPTAS and DFPTAS. Among our contributions, we prove completeness results for PTAS and DPTAS under two new reductions, called FT and DFT, which preserve membership in FPTAS and DFPTAS, respectively.
Finally, for reductions FT and DFT, we try to apprehend if they allow existence of intermediate problems and we partially answer this question by proving that such problems do exist provided that there exist intermediate problems in NPO under the seminal Turing-reduction.
Let us note that no problem was known to be Poly-APX-complete until now, since the results in [12] only prove the existence of Poly-APX-PB-complete problems. On the other hand, the question of the existence of Poly-DAPX-complete problems has not, to our knowledge, been handled until now. The existence of PTAS-complete problems is proved here by means of an FPTAS-preserving reduction (called FT-reduction). It is somewhat weaker than the F-reduction of [6], but it has the merit that natural problems are shown to be PTAS-complete under it (while this seems not to be true for the F-reduction). Indeed, we show that, under FT-reduction, any polynomially bounded NP-hard problem of PTAS is PTAS-complete. Next, we propose a reduction preserving membership in DFPTAS and show that, under it, natural problems such as max independent set or min vertex cover, both in planar graphs, are DPTAS-complete. Here also, we use another notion of polynomial boundedness, called diameter polynomial boundedness, and show that any diameter polynomially bounded NP-hard problem of DPTAS is DPTAS-complete.
The paper is organized as follows: in Section 2, we recall some basic definitions and present the two new reductions. In Sections 3 and 4, we show Poly-APX- and Poly-DAPX-completeness, respectively. In Sections 5 and 6, we present our completeness results for PTAS and DPTAS. The results on intermediate problems are given in Section 7. Finally, in Section 8, it is proved that min coloring is DAPX-complete under the DPTAS-reduction. This is the first problem that is DAPX-complete but not APX-complete. Definitions of the problems used and/or discussed in the paper, together with specifications of their worst solutions, are given in Appendix A.

Polynomial approximation
We first recall some useful definitions about basic concepts of polynomial approximation that will be used in the sequel.
Definition 1. An NPO problem Π is a four-tuple (I, Sol, m, opt) such that:
• I is the set of instances (and can be recognized in polynomial time);
• given x ∈ I, Sol(x) is the set of feasible solutions of x; the size of a feasible solution of x is polynomial in the size |x| of the instance; moreover, one can determine in polynomial time whether a solution is feasible or not;
• given x ∈ I and y ∈ Sol(x), m(x, y) denotes the value of the solution y of the instance x; m is called the objective function, and is computable in polynomial time; we suppose here that m(x, y) ∈ N;
• opt ∈ {min, max}; in what follows, we will use the notation opt(Π) = max, or min, to denote that Π is a maximization, or a minimization problem, respectively.
Given a problem Π in NPO, we distinguish the following three versions of it:
• the constructive version, denoted also by Π, where the goal is to determine a solution y* ∈ Sol(x) satisfying m(x, y*) = opt{m(x, y) : y ∈ Sol(x)};
• the evaluation problem Πe, where we are only interested in determining the value of an optimal solution;
• the decision version Πd of Π where, given an instance x of Π and an integer k, we wish to answer the following question: "does there exist a feasible solution y of x such that m(x, y) ≥ k, if opt = max, or m(x, y) ≤ k, if opt = min?".
Given an instance x of an optimization problem Π, let opt(x) be the value of an optimal solution, and ω(x) the value of a worst feasible solution. The latter is the optimal value of the optimization problem having the same set of instances and the same sets of feasible solutions as Π, but the opposite objective (minimize instead of maximize, and vice-versa). We now define the two most commonly used ratios for the analysis of approximation algorithms, called standard and differential in the sequel. For y ∈ Sol(x), the standard approximation ratio of y is defined as r(x, y) = m(x, y)/opt(x). The differential approximation ratio of y is defined as δ(x, y) = |m(x, y) − ω(x)|/|opt(x) − ω(x)|. Following the above, standard approximation ratios for minimization problems are greater than, or equal to, 1, while for maximization problems these ratios are smaller than, or equal to, 1. On the other hand, the differential approximation ratio is always at most 1, for any problem.
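The two ratios can be computed side by side on a small numeric example; the instance values below (a minimization problem with optimum 4, worst solution 10 and an approximate solution of value 6) are hypothetical, chosen only to illustrate the definitions.

```python
def standard_ratio(value, opt):
    # r(x, y) = m(x, y) / opt(x); at least 1 for minimization problems
    return value / opt

def differential_ratio(value, opt, worst):
    # delta(x, y) = |m(x, y) - omega(x)| / |opt(x) - omega(x)|; always at most 1
    return abs(value - worst) / abs(opt - worst)

# Hypothetical minimization instance: opt(x) = 4, omega(x) = 10,
# and an algorithm returning a solution of value 6.
opt_val, worst_val, algo_val = 4, 10, 6

print(standard_ratio(algo_val, opt_val))                 # 1.5
print(differential_ratio(algo_val, opt_val, worst_val))  # 0.666...
```

Note that the two ratios can disagree sharply: here the solution is only a 1.5-standard approximation, yet it lies two thirds of the way from the worst solution to the optimum.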
Let λ be a function mapping the instances of a problem Π to [0, 1], or to [1, +∞). An algorithm A guarantees standard (resp., differential) ratio λ if and only if, for any instance x of Π, r(x, A(x)) ≥ λ(x), or r(x, A(x)) ≤ λ(x), depending on whether Π is a maximization or a minimization problem (resp., δ(x, A(x)) ≥ λ(x)). A problem Π is standard (resp., differential) λ-approximable if and only if there exists a polynomial algorithm that guarantees standard (resp., differential) ratio λ.
We now formally define the approximation classes Poly-APX, APX, PTAS and FPTAS with which we deal in this paper.
• Poly-APX is the class of NPO problems approximable within a ratio that is a polynomial (or the inverse of a polynomial, when dealing with maximization problems) in the size of the instance.
• APX is the class of constant-approximable NPO problems, i.e., of problems for which there exist polynomial algorithms guaranteeing ratio λ for a λ that does not depend on any parameter of the instance.
• PTAS is the class of NPO problems admitting polynomial time approximation schemata; such a schema is a sequence of algorithms (Aε)ε∈]0,1] such that, for any fixed ε, Aε runs in time polynomial in the size of the instance and guarantees ratio 1 − ε (maximization) or 1 + ε (minimization).
• FPTAS is the class of NPO problems admitting fully polynomial time approximation schemata; such schemata are polynomial time approximation schemata (Aε)ε∈]0,1] where the complexity of any Aε is polynomial in both the size of the instance and 1/ε.
Classes Poly-DAPX, DAPX, DPTAS and DFPTAS for the differential approximation paradigm can be defined analogously (recall that differential approximation ratio is always less than, or equal to, 1; so, differential approximation classes are defined analogously to the standard ones for maximization problems).
We now recall what is called a polynomially bounded problem and introduce a notion of diameter boundedness, very useful and intuitive when dealing with the differential approximation paradigm.

Definition 2.
An NPO problem Π is polynomially bounded if and only if there exists a polynomial q such that, for any instance x and any feasible solution y ∈ Sol(x), m(x, y) ≤ q(|x|). It is diameter polynomially bounded if and only if there exists a polynomial q such that, for any instance x, |opt(x) − ω(x)| ≤ q(|x|).
The class of polynomially bounded NPO problems will be denoted by NPO-PB, while the class of diameter polynomially bounded NPO problems will be denoted by NPO-DPB. Analogously, for any (standard or differential) approximation class C, we will denote by C-PB (resp., C-DPB) the subclass of polynomially bounded (resp., diameter polynomially bounded) problems of C.
We also need the following definitions, introduced in [12], that will be used later.
• A problem Π ∈ NPO is said to be additive if and only if there exist an operator ⊕ and a function f, both computable in polynomial time, such that:
– given any two instances x1, x2 ∈ IΠ, x1 ⊕ x2 ∈ IΠ;
– with any solution y ∈ solΠ(x1 ⊕ x2), f associates two solutions y1 ∈ solΠ(x1) and y2 ∈ solΠ(x2) such that m(x1 ⊕ x2, y) = m(x1, y1) + m(x2, y2).
• Let Poly be the set of functions from N to N bounded above by a polynomial. A function F : N → N is hard for Poly if and only if, for any f ∈ Poly, there exist three constants k, c and n0 such that, for any n ≥ n0, f(n) ≤ kF(n^c).
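For intuition, F(n) = 2^n is hard for Poly: any polynomial is eventually dominated for suitable constants. The quick check below (with the hypothetical choice f(n) = n^5, k = 1, c = 1, n0 = 23, not taken from the paper) merely instantiates the definition.

```python
# F(n) = 2**n dominates f(n) = n**5 with k = 1, c = 1 and n_0 = 23:
# f(n) <= k * F(n**c) holds for every n >= n_0.
assert all(n ** 5 <= 2 ** n for n in range(23, 200))

# The threshold n_0 matters: the inequality fails just below it.
assert 22 ** 5 > 2 ** 22
```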
• A maximization problem Π ∈ NPO is canonically hard for Poly-APX if and only if there exist a transformation T from 3sat to Π, two constants n0 and c, and a function F, hard for Poly, such that, given an instance x of 3sat on n ≥ n0 variables and a number N ≥ n^c, the instance x′ = T(x, N) belongs to IΠ and verifies the following properties:
1. if x is satisfiable, then opt(x′) = N;
2. if x is not satisfiable, then opt(x′) ≤ N/F(N);
3. given a solution y ∈ solΠ(x′) such that m(x′, y) > N/F(N), one can polynomially determine a truth assignment satisfying x.
Note that, since 3 is NP-complete, a problem Π is canonically hard for Poly-APX if and only if any decision problem Π ′ ∈ NP reduces to Π along Items 1 and 2 just above.

Reductions
First, let us recall that, given a reduction R and a set C of problems, a problem Π ∈ C is C-complete under R if and only if any problem in C R-reduces to Π.
Five basic and two new reductions will be used in this paper. Among the former, the first one is the seminal Turing-reduction between optimization problems as it appears in [10]. It preserves optimality of solutions and hence membership in PO (the optimization problems solvable in polynomial time; obviously, PO ⊆ NPO).
Let Π and Π′ be two problems in NPO. Then, Π reduces to Π′ under the Turing-reduction (denoted by Π ≤T Π′) if and only if, given an oracle optimally solving Π′, we can devise an algorithm optimally solving Π, running in polynomial time if the oracle runs in polynomial time.
The other four basic reductions, PTAS, E, DPTAS and F, that will be discussed or used in what follows, are defined in [7,12,1,6], respectively, and recalled here for reasons of readability.
As we have already mentioned, the E-reduction has been defined in [12] in an attempt to be applied uniformly at all levels of approximability. It is slightly weaker than the L-reduction of [15] and preserves membership in FPTAS. We say that a problem Π E-reduces to Π′ (Π ≤E Π′) if and only if there exist two polynomially computable functions f and g and a constant c such that:
• for any x ∈ IΠ, f(x) ∈ IΠ′; moreover, there exists a polynomial p such that opt(f(x)) ≤ p(|x|) opt(x);
• for any x ∈ IΠ and any y ∈ solΠ′(f(x)), g(x, y) ∈ solΠ(x); furthermore, ε(x, g(x, y)) ≤ cε(f(x), y), where, for x ∈ IΠ and z ∈ solΠ(x), ε(x, z) = r(x, z) − 1 if opt(Π) = min and ε(x, z) = (1/r(x, z)) − 1 if opt(Π) = max.
As proved in [12], if a problem Π is additive and canonically hard for Poly-APX, then any problem in Poly-APX-PB E-reduces to Π. As max independent set is additive and canonically hard for Poly-APX, it is Poly-APX-PB-complete under the E-reduction.
The DPTAS-reduction has been introduced in [1] in order to provide DAPX-completeness results. It preserves membership in DPTAS. For two NPO problems Π and Π′, Π ≤DPTAS Π′ if and only if there exist three functions f, g and c, computable in polynomial time, such that:
• for any x ∈ IΠ, f(x) ∈ IΠ′;
• for any x ∈ IΠ, any y ∈ solΠ′(f(x)) and any ε ∈]0, 1[, g(x, y, ε) ∈ solΠ(x);
• for any x ∈ IΠ, any y ∈ solΠ′(f(x)) and any ε ∈]0, 1[, δ(f(x), y) ≥ 1 − c(ε) implies δ(x, g(x, y, ε)) ≥ 1 − ε.
The function f is allowed to be multi-valued, i.e., f = (f1, . . . , fi), for some i polynomial in |x|; in this case, the former implication becomes: if δ(fj(x), yj) ≥ 1 − c(ε) for some j ≤ i, then δ(x, g(x, y, ε)) ≥ 1 − ε.
One of the basic features of the differential approximation ratio is that it is stable under affine transformations of the objective functions of the problems dealt with. In this sense, problems whose objective functions are affine transformations of one another are approximation-equivalent for the differential approximation paradigm (this is absolutely not the case for the standard paradigm). The most notorious case of such problems is the pair max independent set and min vertex cover. An affine transformation is nothing else than a very simple kind of differential-approximation preserving reduction, denoted by AF in what follows. Two problems Π and Π′ are affine equivalent if Π ≤AF Π′ and Π′ ≤AF Π. Obviously, an affine transformation is a DPTAS-reduction.
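The stability under affine transformations can be checked numerically. The sketch below uses the classical complement map between the two problems (a vertex set C is a cover if and only if V \ C is an independent set, so m_VC = n − m_IS); the solution values on a hypothetical graph of order 10 are made up for illustration.

```python
def differential_ratio(value, opt, worst):
    return abs(value - worst) / abs(opt - worst)

n = 10                                # order of the (hypothetical) graph
opt_is, worst_is, sol_is = 6, 0, 4    # max independent set: worst solution is empty

# Affine image under m_VC = n - m_IS (complement of each solution):
opt_vc, worst_vc, sol_vc = n - opt_is, n - worst_is, n - sol_is

d_is = differential_ratio(sol_is, opt_is, worst_is)   # |4 - 0| / |6 - 0|
d_vc = differential_ratio(sol_vc, opt_vc, worst_vc)   # |6 - 10| / |4 - 10|
assert d_is == d_vc                   # the differential ratio is preserved
```

The standard ratios of the two solutions (4/6 for the independent set, 6/4 for the cover) do not coincide, which is exactly why this equivalence fails in the standard paradigm.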
Finally, the F-reduction has been introduced in [6] and, like the E-reduction, it preserves membership in FPTAS. For two NPO problems Π and Π′, Π F-reduces to Π′ if and only if there exist three polynomially computable functions f, g and c such that: there exists a polynomial p such that, for all ε > 0 and all x ∈ IΠ, c(x, ε) = 1/p(|x|, 1/ε); moreover, for all x ∈ IΠ, all ε ∈]0, 1] and all y ∈ solΠ′(f(x)), if ε(f(x), y) ≤ c(x, ε), then ε(x, g(x, y, ε)) ≤ ε. Under F-reduction, a bounded variant of weighted sat has been proved PTAS-complete in [6]. We now introduce two new reductions, denoted by FT and DFT, preserving membership in FPTAS and DFPTAS, respectively.
Let Π and Π′ be two NPO maximization problems, and let Π′α be an oracle for Π′ producing, for any α ∈]0, 1] and for any instance x′ of Π′, a feasible solution y′ of x′ that is a (1 − α)-standard approximation. Then Π FT-reduces to Π′ (Π ≤FT Π′) if and only if, for any ε > 0, there exists an algorithm Aε(x, Π′α) such that:
• for any instance x of Π, Aε returns a feasible solution which is a (1 − ε)-standard approximation;
• if Π′α(x′) runs in time polynomial in both |x′| and 1/α, then Aε is polynomial in both |x| and 1/ε.
For the case where at least one of Π and Π′ is a minimization problem, it suffices to replace 1 − ε and/or 1 − α by 1 + ε and/or 1 + α, respectively. The reduction DFT, dealing with differential approximation, is defined analogously. Clearly, an FT- (resp., DFT-) reduction transforms a fully polynomial time approximation schema for Π′ into a fully polynomial time approximation schema for Π, i.e., it preserves membership in FPTAS (resp., DFPTAS). Observe also that the AF-reduction, mentioned above, is a DFT-reduction.
The F-reduction is a special case of the FT-reduction, since the latter explicitly allows multiple calls to the oracle (this is not explicit in the F-reduction; in other words, it is not clearly stated whether f and g are allowed to be multi-valued). Also, the FT-reduction seems to allow more freedom in the way Π is transformed into Π′; for instance, in the F-reduction, function g transforms an optimal solution for Π′ into an optimal solution for Π, i.e., the F-reduction preserves optimality; this is not the case for the FT-reduction. This freedom will allow us to reduce non-polynomially bounded NPO problems to NPO-PB ones. In fact, it seems that the FT-reduction is strictly more general than F, but such a proof is not trivial and is not tackled here.
In what follows, given a class C ⊆ NPO and a reduction R, we denote by C R the closure of C under R, i.e., the set of problems in NPO that R-reduce to some problem in C.

Poly-APX-completeness
As mentioned in [12], the nature of the E-reduction does not allow transformation of a non-polynomially bounded problem into a polynomially bounded one. In order to extend completeness to the whole of Poly-APX, we have to use a larger (less restrictive) reduction than E. In what follows, we show that the PTAS-reduction can do so. The basic result of this section is the following theorem.

Theorem 1. If Π ∈ NPO is additive and canonically hard for Poly-APX, then any problem in Poly-APX
PTAS-reduces to Π.
Proof. Let Π′ be a maximization problem of Poly-APX and let A be an approximation algorithm for Π′ achieving approximation ratio 1/c(·), where c ∈ Poly (the case of minimization will be dealt with later, in Remark 1). Let Π be an additive problem, canonically hard for Poly-APX, let F be a function hard for Poly, and let k and c′ be such that (for n ≥ n0, for a certain value n0) nc(n) ≤ k(F(n^c′) − 1). Let, finally, x ∈ IΠ′, ε ∈]0, 1[ and n = |x|.

Construction of f (x, ε)
Set m = m(x, A(x)); then m ≥ optΠ′(x)/c(n). If we tried to reproduce identically the analogous proof of [12], we would face the problem that the quantity mc(n) is not always polynomially bounded; in other words, the transformation f might be non-polynomial. In order to remedy this, we uniformly partition the interval [0, mc(n)] of possible values for optΠ′(x) into q(n) = ⌈2c(n)/ε⌉ sub-intervals (remark that q is a polynomial). Consider, for i ∈ {1, . . . , q(n)}, the corresponding set of instances, and set N = n^c′. We construct, for any i, an instance χi of Π, define f(x, ε) = χ = ⊕1≤i≤q(n) χi, and observe that c(n)/q(n) ≤ ε/2. Construction of g(x, y, ε). Let y be a solution of χ and let j be the largest i for which m(χi, yi) > N/F(N), where yi is the track of y on χi. Then, one can compute a solution ψ′ of x and, by the definition of j, we define ψ = g(x, y, ε) = argmax{m(x, ψ′), m(x, A(x))}. Note that m(x, ψ) ≥ max{m, jmε/2}.

Remark 1.
For the case where the problem Π′ (in the proof of Theorem 1) is a minimization problem, one can reduce it to a maximization problem (for instance, using the E-reduction of [12], p. 12) and then use the reduction of Theorem 1. Since the composition of an E- and a PTAS-reduction is a PTAS-reduction, the result of Theorem 1 also applies to minimization problems.
Combination of Theorem 1, Remark 1 and the fact that max independent set is additive and canonically hard for Poly-APX ([12]) produces the following concluding theorem.

Theorem 2. max independent set is Poly-APX-complete under the PTAS-reduction.

Poly-APX-completeness under the differential paradigm
We now deal with the existence of Poly-DAPX-complete problems. This section consists of two parts. The former is about Poly-DAPX-DPB-completeness, while the latter deals with Poly-DAPX-completeness. Let us note that the former, studied first, will not be used for proving the existence of Poly-DAPX-complete problems. We include it just to show that the analogue of Poly-APX-PB-completeness is natural also for the differential paradigm.

Poly-DAPX-DPB-completeness
The main result of this section is the following theorem, providing a sufficient condition for a problem to be Poly-DAPX-DPB-hard.

Theorem 3. If a problem Π ∈ NPO is canonically hard for Poly-APX, then any problem in Poly-DAPX-DPB DPTAS-reduces to Π.
Proof. Let Π be a problem canonically hard for Poly-APX, for some function F hard for Poly. Let Π′ ∈ Poly-DAPX-DPB be a maximization problem (the minimization case is analogous), and let A be an approximation algorithm for Π′ achieving differential approximation ratio 1/c(·), where c ∈ Poly. Let, finally, x be an instance of Π′ of size n, and p be a polynomial such that p(|x|) ≥ opt(x) − ω(x).

Consider the sets of NP-instances Ii = {x ∈ IΠ′ : there exist y1, y2 ∈ Sol(x) with m(x, y1) − m(x, y2) ≥ i}, for i = 1, . . . , p(n). Let k and c′ be such that (for n ≥ n0, for some n0) nc(n) ≤ kF(n^c′). In the sequel, we consider, without loss of generality, that n ≥ k (and hence c(n) ≤ F(n^c′)).

Construction of f (x, ε)
Set N = n^c′. Using the canonical hardness of Π (applied to the NP-problems Ii), one can build, for any i ∈ {1, . . . , p(n)}, an instance χi of Π, and we set f(x, ε) = (χ1, . . . , χp(n)). In other words, f is multi-valued (and does not depend on ε).

Construction of g(x, y, ε)
Let y = (y1, . . . , yp(n)) be a solution of f(x, ε). Set Ly = {i : m(χi, yi) > N/F(N)}. For any i ∈ Ly, one can determine a witness of the fact that x ∈ Ii, i.e., two solutions ψi1 and ψi2 of x such that m(x, ψi1) − m(x, ψi2) ≥ i. Define ψ = g(x, y, ε) = argmax{m(x, A(x)), maxi∈Ly m(x, ψi1)}.

Transfer of differential ratios
Set q = |opt(x) − ω(x)|. Then, x ∈ Iq; hence, opt(χq) = N. Consider the two following cases:
• if q ∈ Ly then, using (8), ψq1 (and hence ψ) is necessarily an optimal solution for x;
• if m(χq, yq) ≤ N/F(N), then, since opt(χq) = N (and ω(χq) ≥ 0), we get (10).
From (9) and (10), the reduction just described is a DPTAS-reduction with c(ε) = ε, and the proof of the theorem is complete.

Poly-DAPX-completeness
We now generalize Theorem 3 to the whole of Poly-DAPX by proving the following theorem.

Theorem 4. If a problem Π ∈ NPO is canonically hard for Poly-APX, then any problem in Poly-DAPX DPTAS-reduces to Π.
Proof. Let Π be canonically hard for Poly-APX, for some function F hard for Poly, let Π′ ∈ Poly-DAPX be a maximization problem and let A be an approximation algorithm for Π′ achieving differential approximation ratio 1/c(·), where c ∈ Poly. Finally, let x be an instance of Π′ of size n. As in the case of the standard approximation paradigm, we cannot directly use the proof of Theorem 3, because the quantity opt(x) − ω(x) may be non-polynomially bounded. We will use the central idea of [1] (see also [2] for more details). We will define a set of problems Π′i,l derived from Π′. For any pair (i, l), Π′i,l has the same set of instances and the same solution-set as Π′; for any instance x and any solution y of x, the value of y in Π′i,l is obtained from m(x, y) by a truncation depending on i and l. Note that, for some pairs (i, l), Π′i,l may not be in Poly-DAPX (hence, use of an algorithm for Π′, supposed to be in Poly-DAPX, may be impossible for Π′i,l). Next, considering x as an instance of any of the problems Π′i,l, we will build an instance χi,l of Π, thereby obtaining a multi-valued function f. Our central objective is, informally, to determine a set of pairs (i, l) such that we are able to build a "good" solution for Π′ using "good" solutions of the χi,l.
Let ε ∈]0, 1[; set Mε = 1 + ⌊2/ε⌋ and let c′ and k be such that (for n ≥ n0, for some n0) nc(n) ≤ kF(n^c′) (both c′ and k may depend on ε). Assume finally, without loss of generality, that n ≥ k, and set N = n^c′. Then, 1/F(N) ≤ 1/c(n). Set m = m(x, A(x)). In [1], a set F of pairs (i, l) is built such that:
• |F| is polynomial in n;
• there exists a pair (i0, l0) in F from which a solution achieving the claimed ratio can be recovered.

Construction of g(x, y, ε)
Let y = (y q i,l , (i, l) ∈ F , q ∈ {0, . . . , M ε }) be a solution of f (x, ε). Set L y = {(i, l, q) : m(χ q i,l , y q i,l ) > N/F (N )}. For each (i, l, q) ∈ L y , one can determine a solution ψ q i,l of x (seen as instance of Π ′ i,l ) with value at least q.
Using the fact that max independent set is canonically hard for Poly-APX, Theorem 4 directly exhibits the existence of a Poly-DAPX-complete problem.

Theorem 5. max independent set is Poly-DAPX-complete under the DPTAS-reduction.
Note that we could obtain the Poly-DAPX-completeness of problems canonically hard for Poly-APX even if we forbade the DPTAS-reduction from being multi-valued. However, in this case, we should assume (as in Section 3) that Π is additive (and the proof of Theorem 4 would be much longer).

PTAS-completeness
We now study PTAS-completeness under the FT-reduction. The basic result of this section (Theorem 6) follows immediately from Lemmata 1 and 2. Lemma 1 establishes a property of the Turing-reduction for NP-hard problems. In Lemma 2, we transform (under certain conditions) a Turing-reduction into an FT-reduction. Proofs of the two lemmata are given for maximization problems; the case of minimization is completely analogous.

Lemma 1. If an NPO problem Π ′ is NP-hard, then any NPO problem Turing-reduces to Π ′ .
Proof. Let Π be an NPO problem and q be a polynomial such that |y| ≤ q(|x|), for any instance x of Π and any feasible solution y of x. Assume that the encoding n(y) of y is binary; then 0 ≤ n(y) ≤ 2^q(|x|) − 1. We consider the following problem Π̄ (see also [4]), which is the same as Π up to its objective function, defined by mΠ̄(x, y) = 2^(q(|x|)+1) mΠ(x, y) + n(y).
Clearly, if mΠ̄(x, y1) ≥ mΠ̄(x, y2), then mΠ(x, y1) ≥ mΠ(x, y2). So, if y is an optimal solution for x (seen as an instance of Π̄), then it is also an optimal solution for x (seen, this time, as an instance of Π).
Remark now that, for Π̄, the evaluation problem Π̄e and the constructive problem Π̄ are equivalent. Indeed, given the value of an optimal solution y, one can determine n(y) (hence y) by computing the remainder of the division of this value by 2^(q(|x|)+1).
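The packed objective can be exercised directly; q below stands for q(|x|) on some fixed instance size and is a hypothetical constant chosen for illustration.

```python
q = 8                                  # assume 0 <= n(y) <= 2**q - 1

def m_bar(m_value, n_y):
    # m_bar(x, y) = 2**(q + 1) * m(x, y) + n(y)
    return 2 ** (q + 1) * m_value + n_y

# Comparing m_bar values refines the comparison of m values: a strictly
# better m always wins, whatever the solution encodings are.
assert m_bar(5, 0) > m_bar(4, 2 ** q - 1)

# Evaluation and constructive versions are equivalent: the encoding of an
# optimal solution is the remainder modulo 2**(q + 1).
value = m_bar(7, 123)
assert value % 2 ** (q + 1) == 123     # recovers n(y)
assert value // 2 ** (q + 1) == 7      # recovers m(x, y)
```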
Since Π′ is NP-hard, we can solve the evaluation problem Π̄e if we can solve the (constructive) problem Π′. Indeed:
• we can solve Π̄e using an oracle solving, by dichotomy, the decision version Π̄d of Π̄;
• Π̄d reduces to the decision version Π′d of Π′ by a Karp-reduction (see [3,10] for a formal definition of this reduction);
• finally, one can solve Π′d using an oracle for the constructive problem Π′.
So, with a polynomial number of queries to an oracle for Π′, one can solve both Π̄e and Π̄, and the proof of the lemma is complete.
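The first bullet, solving the evaluation version by dichotomy over the decision version, can be sketched as follows; the decision oracle is hypothetical and stands for an algorithm for Π̄d.

```python
def evaluate(decide, upper_bound):
    # For a maximization problem, decide(k) answers "is there a feasible
    # solution of value >= k?". Binary search locates the optimum with
    # O(log upper_bound) oracle calls, assuming decide(0) holds.
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if decide(mid):
            lo = mid          # a solution of value >= mid exists
        else:
            hi = mid - 1      # the optimum is below mid
    return lo

# Toy check with a hidden optimum of 37 and value bound 2**10:
assert evaluate(lambda k: k <= 37, 2 ** 10) == 37
```

Since values of Π̄ are bounded by 2^(q(|x|)+1) · 2^q(|x|), the dichotomy uses only polynomially many oracle calls.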
We now show how, starting from a Turing-reduction (that only preserves optimality) between two NPO problems Π and Π′, where Π′ is polynomially bounded, one can devise an FT-reduction transforming a fully polynomial time approximation schema for Π′ into a fully polynomial time approximation schema for Π.

Lemma 2. Let Π′ ∈ NPO-PB. Then, any NPO problem Turing-reducible to Π′ is also FT-reducible to Π′.
Proof. Let Π be an NPO problem and suppose that there exists a Turing-reduction from Π to Π′. Let Π′α be an oracle computing, for any instance x′ of Π′ and any α > 0, a feasible solution y′ of x′ such that r(x′, y′) ≥ 1 − α. Moreover, let p be a polynomial such that, for any instance x′ of Π′ and any feasible solution y′ of x′, m(x′, y′) ≤ p(|x′|).
Let x be an instance of Π. The Turing-reduction claimed gives an algorithm solving Π using an oracle for Π′. Consider now this algorithm where we use, for any query to the oracle with an instance x′ of Π′, the approximate oracle Π′α(x′), with α = 1/(p(|x′|) + 1). This algorithm produces an optimal solution, since a solution y′ which is a (1 − 1/(p(|x′|) + 1))-approximation for x′ is an optimal one (recall that we deal with problems having integer-valued objective functions, cf. Definition 1). Indeed, opt(x′) − m(x′, y′) ≤ opt(x′)/(p(|x′|) + 1) ≤ p(|x′|)/(p(|x′|) + 1) < 1. It is easy to see that this algorithm is polynomial when Π′α(x′) is polynomial in |x′| and in 1/α. Furthermore, since any optimal algorithm for Π can a posteriori be seen as a fully polynomial time approximation schema, we immediately conclude Π ≤FT Π′, and the proof of the lemma is complete.
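The key step, that the approximate oracle is forced to optimality, can be sanity-checked numerically: for an integer-valued maximization problem with optimum at most p, the smallest integer value meeting ratio 1 − 1/(p + 1) is the optimum itself (the values below are toy choices, not from the paper).

```python
import math

def min_value_at_ratio(opt, p):
    # Smallest integer m with m / opt >= 1 - 1/(p + 1).
    return math.ceil(opt * (1 - 1 / (p + 1)))

# Whatever the optimum value (up to the bound p), only the optimum itself
# satisfies the required ratio, so the oracle's answer must be optimal.
for p in (10, 100, 1000):
    for opt in range(1, p + 1):
        assert min_value_at_ratio(opt, p) == opt
```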
Combining Lemmata 1 and 2 immediately yields the basic result of the section, expressed by the following theorem.

Theorem 6. Any NP-hard NPO-PB problem in PTAS is PTAS-complete under FT-reductions.

DPTAS-completeness

Corollary 4. Any NP-hard NPO-DPB problem in DPTAS is DPTAS-complete under DFT-reductions.
The following concluding result deals with the existence of DPTAS-complete problems: max independent set and min vertex cover, both in planar graphs, are DPTAS-complete under the DFT-reduction. Proof. For the DPTAS-completeness of max planar independent set, just observe that, for any instance G, ω(G) = 0. So, standard and differential approximation ratios coincide for this problem; moreover, it is in both NPO-PB and NPO-DPB. Then, the inclusion of max planar independent set in PTAS suffices to conclude its inclusion in DPTAS and, by Corollary 4, its DPTAS-completeness. min planar vertex cover and max planar independent set are affine equivalent; hence, min planar vertex cover ≤AF max planar independent set. Since the AF-reduction is a particular kind of DFT-reduction, the DPTAS-completeness of min planar vertex cover is immediately concluded.

About intermediate problems under FT-and DFT-reductions
The FT-reduction is weaker than the F-reduction of [6]. Furthermore, as mentioned before, the latter reduction allows the existence of PTAS-intermediate problems. The question of the existence of such problems can be posed for the FT-reduction too. In this section, we partially answer this question via the following theorem.

Theorem 10. If there exists an NPO-intermediate problem under the Turing-reduction, then there exists a PTAS-intermediate problem under the FT-reduction.
Proof. Let Π be an NPO problem, intermediate for the Turing-reduction. Suppose that Π is a maximization problem (the minimization case is completely similar). Let q be a polynomial such that, for any instance x and any feasible solution y of x, m(x, y) ≤ 2^q(|x|). Consider the following maximization problem Π̄ where:
• instances are the pairs (x, k) with x an instance of Π and k an integer in {0, . . . , 2^q(|x|)};
• for an instance (x, k) of Π̄, its feasible solutions are the feasible solutions of the instance x of Π;
• the objective function of Π̄ is mΠ̄((x, k), y) = |(x, k)| if m(x, y) ≥ k, and |(x, k)| − 1 otherwise.
We will now show the three following properties:
1. Π̄ ∈ PTAS;
2. Π̄ ∉ FPTAS;
3. Π̄ is not PTAS-complete under the FT-reduction.

Proof of Property 1
Remark that Π̄ is clearly in NPO-PB. Consider ε ∈]0, 1] and the algorithm Aε which, on an instance (x, k) of Π̄, solves (x, k) exactly if |(x, k)| ≤ 1/ε and, otherwise, produces an arbitrary solution. Algorithm Aε is polynomial and guarantees standard approximation ratio 1 − ε (any feasible solution has value at least |(x, k)| − 1, while the optimal value is at most |(x, k)|). Therefore, Π̄ is in PTAS.

Proof of Property 2
Remark that Π ≤T Π̄. Indeed, let x be an instance of Π. We can find an optimal solution of x by solving log(2^q(|x|)) = q(|x|) instances (x, k) of Π̄ (by dichotomy on k). Note that, if Π̄ were in FPTAS, it would be polynomially solvable, since the fully polynomial time approximation schema Aε applied on an instance (x, k) with ε = 1/(|(x, k)| + 1) is an optimal and polynomial algorithm. The fact that Π ≤T Π̄ would imply, in this case, that Π is polynomial.

Proof of Property 3
Assume that Π̄ is PTAS-complete (under the FT-reduction). Then, max planar independent set FT-reduces to Π̄. Given an oracle solving Π̄ exactly, we immediately obtain an optimal algorithm for Π̄, polynomial if the oracle is so; clearly, this algorithm can be considered as a fully polynomial time approximation schema for Π̄. The reduction max planar independent set ≤FT Π̄ then provides a fully polynomial time approximation schema for max planar independent set and, since the latter is in NPO-PB, we get an optimal (and polynomial, if the oracle is so) algorithm for it. In other words, if Π̄ is PTAS-complete, then max planar independent set ≤T Π̄. To conclude, max planar independent set is NPO-complete under the Turing-reduction, since it is NP-hard (cf. Lemma 1). Therefore, if Π̄ were PTAS-complete, Π would be NPO-complete under the Turing-reduction. The proofs of Property 3 and of the theorem are now complete.
We now state an analogous result about the existence of DPTAS-intermediate problems under the DFT-reduction.

Theorem 11. If there exists an NPO-intermediate problem under the Turing-reduction, then there exists a DPTAS-intermediate problem under the DFT-reduction.
Proof. The proof is analogous to that of Theorem 10, up to a modification of the definition of Π̄ (otherwise, Π̄ ∉ DPTAS, because the value of the worst solution of an instance (x, k) is |(x, k)| − 1; we have to change it in order to get ω((x, k)) = 0 for any instance (x, k)). We define Π̄ as follows:
• instances of Π̄ are, as previously, the pairs (x, k) where x is an instance of Π and k is an integer between 0 and 2^q(|x|);
• for an instance (x, k) of Π̄, its feasible solutions are the feasible solutions of the instance x of Π, plus an extra solution y0x;
• the objective function of Π̄ is as previously on the feasible solutions of x, while mΠ̄((x, k), y0x) = 0.
Then, the claimed result is obtained in exactly the same way as in the proof of Theorem 10.

A new DAPX-complete problem not APX-complete
All DAPX-complete problems given in [1] are also APX-complete under the E-reduction ([12]). An interesting question is whether there exist DAPX-complete problems that are not also APX-complete under some standard-approximation preserving reduction. In this section, we positively answer this question, by proving that min coloring is DAPX-complete under the DPTAS-reduction. Proof. Consider the problem max unused colors and remark that its standard ratio coincides with the differential ratio of min coloring. In fact, these problems are affine equivalent; so, max unused colors ≤AF min coloring (13). max unused colors is MAX-SNP-hard under the L-reduction ([11]), which is, as mentioned already, a particular kind of E-reduction. On the other hand, the closure of MAX-SNP under the E-reduction is APX-PB ([12]). Since max independent set-B ∈ APX-PB, max independent set-B ≤E max unused colors. Furthermore, the E-reduction is a particular kind of PTAS-reduction; hence, max independent set-B ≤PTAS max unused colors. Standard and differential approximation ratios for max independent set-B coincide, and so do the standard ratio of max unused colors and the differential ratio of min coloring. So, max independent set-B ≤DPTAS max unused colors (14). Reductions (13) and (14), together with the fact that the composition DPTAS ∘ AF is obviously a DPTAS-reduction, establish immediately the DAPX-completeness of min coloring, and the proof of the theorem is complete.
As we have already mentioned, min coloring is, until now, the only problem known to be DAPX-complete but not APX-complete. In fact, in the standard approximation paradigm, it belongs to the class Poly-APX and is inapproximable, in a graph of order n, within n^(1−ε), for any ε > 0, unless NP coincides with the class of problems that can be optimally solved by slightly super-polynomial algorithms ([9]).
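The coincidence of ratios used in the proof above can be made concrete. For min coloring, a worst solution assigns its own color to each vertex, so ω(G) = n; the numbers below (a hypothetical graph of order 20, chromatic number 3, colored with 8 colors) are chosen only for illustration.

```python
n, chi, m = 20, 3, 8          # order, chromatic number, colors used by y
omega = n                     # worst coloring: one color per vertex

# Differential ratio of min coloring on the solution y:
delta = abs(m - omega) / abs(chi - omega)

# Standard ratio of max unused colors on the same solution y:
unused, opt_unused = n - m, n - chi
r = unused / opt_unused

assert delta == r             # the two ratios coincide, as used in the proof
```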