An improved general procedure for lexicographic bottleneck problems

In combinatorial optimization, the bottleneck (or minmax) problems are those problems where the objective is to find a feasible solution such that its largest cost coefficient elements have minimum cost. Here we consider a generalization of these problems, where under a lexicographic rule we want to minimize the cost also of the second largest cost coefficient elements, then of the third largest cost coefficients, and so on. We propose a general rule which leads, given the considered problem, to a vectorial version of the solution procedure for the underlying sum optimization (minsum) problem. This vectorial procedure increases by a factor of k (where k is the number of different cost coefficients) the complexity of the corresponding sum optimization problem solution procedure.


Introduction
In most of the classical combinatorial optimization problems, the objective function is an additive function of the single variables. These problems are often denoted as minsum or sum optimization problems (SOP). Minmax or bottleneck optimization problems (BOP) are the easiest way to deal with scenarios where the variables can be related to each other under an ordinal scale rather than a cardinal one. In a BOP, the objective is to find a feasible solution where the largest cost coefficient elements have minimum cost. In many combinatorial optimization problems (e.g., the bottleneck assignment problem [10]), the minmax version is easier than the minsum version, as it can be solved by tackling log k searches for feasible solutions of minsum problems, where k is the number of distinct cost coefficients, and the search for feasibility is often much easier than the search for optimality. Consider now a generalization of a minmax problem obtained by requiring also the second largest cost coefficient elements in the solution to be minimum, then the third largest cost coefficient, and so on. This kind of problem arises, for instance (see, for example, [11]), in the evaluation of fragmented alternatives in multicriteria decision aid. We will denote these problems in the remainder of the paper as lexicographic bottleneck optimization problems (LBOP). In this case, a straightforward solution procedure does not exist, except for those problems (e.g., the shortest spanning tree) whose SOP, BOP and LBOP versions are all optimally solved at the same time by the greedy algorithm, as they are optimization problems over the set of bases of a matroid [12]. The relevant literature on this topic is, to our knowledge, quite limited. Burkard and Rendl [1] proposed two different solution procedures, the first based on coefficient scaling and the second based on an iterative approach.
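As a concrete (and purely illustrative, not taken from the cited works) sketch of the log k feasibility-search idea: sort the k distinct cost coefficients and binary-search the smallest threshold for which a feasible solution exists, delegating the feasibility test to a problem-specific oracle. The function and oracle names below are our own hypothetical choices:

```python
def bottleneck_value(costs, feasible_with_max):
    """Smallest threshold t among the distinct cost coefficients such that
    a feasible solution using only elements of cost <= t exists.
    `feasible_with_max` is a problem-specific feasibility oracle
    (a hypothetical callback, assumed monotone in the threshold)."""
    thresholds = sorted(set(costs))        # the k distinct coefficients
    lo, hi = 0, len(thresholds) - 1
    while lo < hi:                         # O(log k) oracle calls
        mid = (lo + hi) // 2
        if feasible_with_max(thresholds[mid]):
            hi = mid                       # feasible: try a smaller threshold
        else:
            lo = mid + 1                   # infeasible: threshold too small
    return thresholds[lo]
```

Each oracle call is itself a feasibility search on a restricted instance of the underlying problem, which is where the running time of the minsum machinery enters.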
The two procedures have comparable computational complexities, and the authors report that it is preferable to apply the first procedure for small k and the second procedure for large k. Both procedures create numbers growing very fast with k, in such a way that they cannot practically handle medium-size problem instances. Recently, Calvete and Mateo [4] proposed a primal-dual algorithm for a generalized lexicographic multiobjective network flow problem. The interested reader may also consider the survey paper by Burkard and Zimmermann [3] on the algebraic versions of various optimization problems. The purpose of this work is to present a solution procedure which is a rearrangement of the first solution procedure of [1], based on a vectorial representation of the cost coefficients. We show that this procedure outperforms both their procedures in terms of computational complexity for all realistic problems where k ≤ n² log k, where n is the number of non-zero variables involved in the optimal solution, and, moreover, that the vectorial representation prevents the number-explosion phenomenon for large k. The paper proceeds as follows. In section 2 we introduce the relevant definitions and notation. In section 3 we present the procedure and prove its optimality. In section 4 we illustrate the procedure on the LBOP versions of two well known combinatorial optimization problems, the shortest path problem and the assignment problem. Section 5 concludes the paper with final remarks.

Notation and definitions
Consider a combinatorial optimization problem involving m variables x_1, ..., x_m with cost coefficients (or weights) w_1, ..., w_m. Denote by X the set of feasible solutions to the problem.
Let W_1 = max_{i: x_i > 0} {w_i} be the largest active weight of a feasible solution x = {x_1, ..., x_m} (we call active the weights corresponding to non-zero variables; hence, there exist n active weights), and correspondingly let W_2 be the second largest active weight, and so on.
We say that:

1. the solution x = {x_1, ..., x_m} is optimal for a SOP if, for every x' = {x'_1, ..., x'_m} ∈ X, Σ_{i=1}^{m} w_i x_i ≤ Σ_{i=1}^{m} w_i x'_i;

2. the solution x = {x_1, ..., x_m} is optimal for a BOP if, for every x' = {x'_1, ..., x'_m} ∈ X, W_1 ≤ W'_1 (where W'_1 denotes the largest active weight of x');

3. the solution x = {x_1, ..., x_m} is optimal for a LBOP if, for every x' = {x'_1, ..., x'_m} ∈ X, there does not exist any active weight W_l such that:

   W'_l < W_l and W'_j = W_j for every j < l.   (3)

Finally, as in [1], we denote by T_{m,n}(SOP) the worst-case running time needed to solve an m-variable SOP with n non-zero variables in the optimal solution, where each elementary operation (e.g., addition) requires constant time.

The solution procedure: a vectorial approach

Given a LBOP, substitute each weight w_j with a vector of cost coefficients {c_1, c_2, ..., c_k}. The entries refer to the k different weights of the problem and are indexed in decreasing order of the weight values. All entries are set to 0 except the one that refers to the original weight w_j, which is set to 1; for instance, the vector corresponding to the largest weight is {1, 0, 0, ..., 0, 0}.
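This substitution can be sketched in a few lines of Python (the helper name `to_vector` is ours, purely for illustration):

```python
def to_vector(w, weights):
    """Indicator vector of weight w: one entry per distinct weight value
    of the problem, indexed in decreasing order of those values."""
    order = sorted(set(weights), reverse=True)   # c_1 > c_2 > ... > c_k
    return tuple(1 if w == c else 0 for c in order)

weights = [5, 2, 9, 2]            # k = 3 distinct values: 9 > 5 > 2
largest = to_vector(9, weights)   # the largest weight maps to (1, 0, 0)
```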
Consider now this problem as a SOP, where, as each weight is a vector of k components, each algebraic sum involving the weights becomes a vectorial sum of the corresponding k entries and each solution value is a vector of these k entries. We denote this problem with vectorial representation as VSOP (vectorial sum-optimization problem).
In order to apply a SOP solution procedure to a VSOP, we define how to compare two different solutions. Recall that any solution value f(X) = Σ_{i=1}^{m} w_i x_i is now written as f(X) = {Σ_{j: w_j = c_1} x_j, Σ_{j: w_j = c_2} x_j, ..., Σ_{j: w_j = c_k} x_j}.
We say that, given two solutions X' and X'',

f(X') < f(X'') ⟺ there exists l such that Σ_{j: w_j = c_l} x'_j < Σ_{j: w_j = c_l} x''_j, while Σ_{j: w_j = c_i} x'_j = Σ_{j: w_j = c_i} x''_j for every i < l.   (4)

Theorem: A solution X* which is optimal for the VSOP under the comparison rule (4) is also optimal for the LBOP.

Proof: Suppose by contradiction that X* is not optimal for the LBOP. Then there exists a solution X' which dominates X* according to (3). But then f(X') < f(X*) according to (4), contradicting the optimality of X* for the VSOP.

Notice that any lexicographic problem is a VSOP which results in a complete and transitive binary relation on the vector space [7] and, as such, always admits an order-preserving numerical representation as a SOP [8], through a not always trivial process like the one in [1]. Indeed, our gain in efficiency is obtained by directly solving the VSOP instead of its numerical representation.
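The vectorial solution value and comparison rule (4) can be sketched as follows (our naming; note that Python tuples compare lexicographically out of the box, so `lex_less(f1, f2)` agrees with the built-in `f1 < f2`):

```python
def vec_sum(vectors):
    """Solution value of a VSOP: componentwise sum of the k-entry vectors."""
    return tuple(map(sum, zip(*vectors)))

def lex_less(f1, f2):
    """Rule (4): f1 < f2 iff, at the first index l where the counts differ,
    f1 uses fewer elements of the larger weight c_l."""
    for a, b in zip(f1, f2):
        if a != b:
            return a < b
    return False    # equal vectors: neither is smaller

# three elements: one of the largest weight, two of the second largest
value = vec_sum([(1, 0, 0), (0, 1, 0), (0, 1, 0)])   # -> (1, 2, 0)
```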

Two illustrative applications

Lexicographic bottleneck shortest path problem
Consider the well-known Dijkstra's algorithm [6] for the sum-optimization shortest path problem.
Given a graph G(N, A), let l_ij be the cost of the directed arc a_ij connecting node i to node j. Denote by S ⊆ N (resp., S̄ ⊆ N) the set of labeled (resp., unlabeled) nodes. Let Γ_i be the set of successors of node i (i.e., Γ_i = {j : l_ij ≠ 0}). Let π(i) be the current shortest path value from the source node to node i ∈ N and let P(i) be the current predecessor of i in that path. A pseudocode of Dijkstra's algorithm, assuming node 1 to be the source node, is the following:

Step 1: S = {1}, S̄ = {2, ..., N};
        π(j) = ∞, for all j ∉ Γ_1;
        π(j) = l_1j, P(j) = 1, for all j ∈ Γ_1.
Step 2: find j ∈ S̄ such that π(j) = min_{i ∈ S̄} {π(i)};
        set S = S ∪ {j}, S̄ = S̄ \ {j};
        IF |S̄| = 0 RETURN the labels π.
Step 3: for all i ∈ Γ_j ∩ S̄:
        π(i) = min{π(i), π(j) + l_ji};
        P(i) = j IF π(i) = π(j) + l_ji;
        GO TO Step 2.

[Figure 1: the example graph]

Consider the instance of the lexicographic bottleneck shortest path problem shown in figure 1, where we want to find the shortest path from node A to node G. For this, we will apply Dijkstra's algorithm.
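A minimal sketch of the vectorial variant of the algorithm above: the shortest-path labels become k-tuples, arc costs add componentwise, and the minimum in Step 2 is taken lexicographically. The heap-based structure and the graph encoding are our illustrative choices, not the paper's:

```python
import heapq

def lex_dijkstra(graph, source, k):
    """Dijkstra's algorithm where each arc weight is a k-entry vector:
    labels are compared lexicographically, addition is componentwise.
    `graph[u]` maps each successor v to its k-tuple arc weight."""
    INF = (float("inf"),) * k
    dist = {u: INF for u in graph}
    pred = {}
    dist[source] = (0,) * k
    heap = [((0,) * k, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # stale heap entry
        for v, w in graph[u].items():
            nd = tuple(a + b for a, b in zip(d, w))
            if nd < dist[v]:               # lexicographic tuple comparison
                dist[v], pred[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, pred

# toy instance with k = 2 distinct cost coefficients
graph = {"A": {"B": (1, 0), "C": (0, 1)}, "B": {"C": (0, 1)}, "C": {}}
```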
As one can see in figure 1, there are four different cost coefficients (the absence of an arc can be considered as an arc of cost coefficient ∞); hence the vectorial representation of the cost matrix is as follows (by ∞ we denote the vector ⟨1, 1, 1, 1⟩):

The solution procedure works as follows:
Lexicographic bottleneck assignment problem

A perfect matching (no exposed vertices) would correspond to the optimal assignment. As the matching is not perfect, the solution is not optimal and, hence, a Hungarian tree has been detected: the sets of labelled (unlabelled) rows I+ (I−) and columns K+ (K−) are the following: I+ = {A, D}, I− = {B, C}, K+ = {2}, K− = {1, 3, 4}. Hence, δ = min_{i ∈ I+, k ∈ K−} {c'_ik} = c'_D3 = [0, 0, 1, 0, −1, 0]. The reduced cost matrix is then updated by setting c'_ik = c'_ik − δ, for i ∈ I+ and k ∈ K−; c'_ik = c'_ik + δ, for i ∈ I− and k ∈ K+; c'_ik remains unchanged otherwise.
As the matching is now perfect, the assignment A–2, B–4, C–1 and D–3 is optimal.
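Such runs can be validated independently by exhaustive enumeration: an assignment is LBOP-optimal exactly when its active weights, sorted in decreasing order, form the lexicographically smallest sequence. A brute-force sketch (exponential in n, for checking small instances only; naming is ours):

```python
from itertools import permutations

def lbop_assignment(cost):
    """Exhaustively find the lexicographic-bottleneck-optimal assignment
    of an n x n cost matrix: among all assignments, pick the one whose
    weights, sorted in decreasing order, are lexicographically smallest."""
    n = len(cost)
    best, best_key = None, None
    for perm in permutations(range(n)):
        key = sorted((cost[i][perm[i]] for i in range(n)), reverse=True)
        if best_key is None or key < best_key:
            best, best_key = perm, key
    return best, best_key

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
```

On this toy matrix, the assignment row 0 → column 1, row 1 → column 0, row 2 → column 2 yields the weight sequence [2, 2, 1], which no other assignment improves lexicographically.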

Final remarks
The proposed procedure was designed to handle LBOP. However, it can be applied to any problem for which the following conditions hold: ordinality of the scale associated to the cost vector, and existence of at least a weak order on any subset of feasible solutions (which is always the case in a lexicographic order). The following steps may be considered in future research: verify whether it is possible to weaken the above necessary conditions without increasing the time complexity of the proposed procedure; devise exact or approximation algorithms for problems where there exists either a partial order or a simple reflexive binary relation on the solution set.