
Journal of Global Optimization (2005) 33: 527–540 © Springer 2005

DOI 10.1007/s10898-004-8318-4

Optimality Conditions for D.C. Vector Optimization Problems Under Reverse Convex Constraints
N. GADHI¹, M. LAGHDIR² and A. METRANE³
¹Département de Mathématiques et Informatique, Faculté des Sciences Dhar-Mahraz, B.P. 1796 Atlas, Fès, Morocco (e-mail: [email protected]); ²Département de Mathématiques, Faculté des Sciences, B.P. 20, El Jadida, Morocco (e-mail: [email protected]); ³GERAD and École Polytechnique de Montréal, C.P. 6079, Succ. Centre-Ville, Montréal, Québec, Canada H3C 3A7 (e-mail: [email protected])

(Received 1 April 2003; accepted 3 December 2004)

Abstract. In this paper, we establish global necessary and sufficient optimality conditions
for D.C. vector optimization problems under reverse convex constraints. An application to
vector fractional mathematical programming is also given.

Mathematics Subject Classifications (1991). Primary 90C29, Secondary 49K30.

Key words: convex mapping, D.C. mapping, global weak minimal solution, optimality
condition, reverse convex, subdifferential

1. Introduction
In recent years, the analysis and applications of D.C. mappings (differences of convex mappings) have been of considerable interest [11,18,27,31]. Many nonconvex mappings that arise in nonsmooth optimization are of this type. Extensive work on the analysis and optimization of D.C. mappings has been carried out [7,8,21]; however, much work remains to be done.
Reverse convex optimization, that is, minimizing an extended real-valued convex function subject to a reverse convex constraint, constitutes a general framework for a large class of nonconvex optimization problems, including D.C. optimization (minimizing or maximizing a difference of two extended real-valued convex functions), maximizing a convex function over a convex set, and minimizing a convex function over a reverse convex set, i.e., the complement of a convex subset of a convex set. This subject has received increased attention in recent years, mainly for numerical purposes [13,28,30], for duality theory in D.C. optimization [15,16,23], and from the point of view of necessary and sufficient optimality conditions obtained via the subdifferential calculus of convex analysis and regularizing techniques [6,10,11,19,20,29].
In this paper, we are concerned with the multiobjective optimization
problem


(P)  Y⁺-Minimize f(x) − g(x)  subject to x ∈ X∖S,

where X and Y are Banach spaces, f, g: X → Y are Y⁺-convex, proper and lower semicontinuous mappings, S is a nonempty open convex subset of X, and Y⁺ ⊂ Y is a pointed, convex and closed cone with nonempty interior. Our approach consists of using a special scalarization function, introduced in optimization by Hiriart-Urruty [10], to derive necessary and sufficient optimality conditions for (P). Here, convex analysis theory plays a crucial role in our investigation.
Applying Theorems 3.3 and 3.4, we deduce optimality conditions for the special multiobjective optimization problem
 
ℝ^p_+-Minimize (f₁(x)/g₁(x), ..., f_p(x)/g_p(x))  subject to h(x) ∉ −int(Z⁺),

where f₁, ..., f_p, g₁, ..., g_p: X → ℝ are convex and lower semicontinuous functions such that

f_i(x) ≥ 0 and g_i(x) > 0 for all i = 1, ..., p,

Z⁺ ⊂ Z is a nonempty closed convex cone and h is a Z⁺-convex mapping defined from X into another Banach space Z.
The rest of the paper is organized as follows: Section 2 contains basic definitions and preliminary material. Section 3 is devoted to the main results (optimality conditions). Section 4 discusses an application to vector fractional mathematical programming with reverse convex constraints.

2. Preliminaries
Throughout this paper, X, Y, Z and W are Banach spaces whose topological dual spaces are X*, Y*, Z* and W*, respectively. Let Y⁺ ⊂ Y (respectively, Z⁺ ⊂ Z) be a pointed (Y⁺ ∩ −Y⁺ = {0}), convex and closed cone (λY⁺ ⊂ Y⁺ for all λ ≥ 0) with nonempty interior, introducing a partial order in Y (respectively, in Z) defined by

y₁ ≤_Y y₂ ⇔ y₂ ∈ y₁ + Y⁺.

We adjoin to Y two artificial elements +∞ and −∞ such that

−∞ = −(+∞),  y₁ − ∞ ≤_Y y₂ for all y₁, y₂ ∈ Y.



Moreover,

y₂ ≤_Y y₁ + ∞ = +∞ for all y₁, y₂ ∈ Y ∪ {+∞}.

The negative polar cone Y° of Y⁺ is defined as

Y° = {y* ∈ Y*: ⟨y*, y⟩ ≤ 0 for all y ∈ Y⁺},

where ⟨·, ·⟩ denotes the dual pairing.


Since convexity plays an important role in the following investigations, we recall the concept of cone-convex mappings. The mapping f: X → Y ∪ {+∞} is said to be Y⁺-convex if, for every α ∈ [0, 1] and x₁, x₂ ∈ X,

αf(x₁) + (1 − α)f(x₂) ∈ f(αx₁ + (1 − α)x₂) + Y⁺.

DEFINITION 2.1. A mapping h: X → Y ∪ {+∞} is said to be Y⁺-D.C. if there exist two Y⁺-convex mappings f and g such that

h(x) = f(x) − g(x) for all x ∈ X.
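To make Definition 2.1 concrete, the following minimal numerical sketch (our illustration, not taken from the paper) works with X = ℝ, Y = ℝ² and Y⁺ = ℝ²_+, where Y⁺-convexity amounts to componentwise convexity; the mappings f, g and the decomposition h = f − g are invented for the purpose of the illustration.

```python
import numpy as np

# Hypothetical data: X = R, Y = R^2 ordered by Y+ = R^2_+.
# A mapping is Y+-convex iff each component is convex, so
# f(x) = (x^2, e^x) and g(x) = (2|x|, x^2) are Y+-convex and
# h = f - g is Y+-D.C. in the sense of Definition 2.1.
def f(x): return np.array([x**2, np.exp(x)])
def g(x): return np.array([2.0 * abs(x), x**2])
def h(x): return f(x) - g(x)

def is_cone_convex(F, trials=1000, seed=0):
    """Sample the defining inclusion
    a*F(x1) + (1-a)*F(x2) - F(a*x1 + (1-a)*x2) in R^2_+."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x1, x2 = rng.uniform(-3.0, 3.0, size=2)
        a = rng.uniform()
        gap = a * F(x1) + (1 - a) * F(x2) - F(a * x1 + (1 - a) * x2)
        if np.any(gap < -1e-12):   # the gap left the cone: not Y+-convex
            return False
    return True

print(is_cone_convex(f), is_cone_convex(g))  # True True
print(is_cone_convex(h))  # False: h itself is only D.C., not Y+-convex
```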

Let us recall the definition of the lower semicontinuity of a mapping. For


more details on this concept, we refer the interested reader to [4,22].

DEFINITION 2.2. [22] A mapping f: X → Y ∪ {+∞} is said to be lower semicontinuous at x̄ ∈ X if, for any neighborhood V of zero and any b ∈ Y satisfying b ≤_Y f(x̄), there exists a neighborhood U of x̄ in X such that

f(U) ⊂ b + V + (Y⁺ ∪ {+∞}).

DEFINITION 2.3. [24,32] Let f: X → Y ∪ {+∞} be a Y⁺-convex mapping. The vectorial subdifferential of f at x ∈ dom f is given by

∂^v f(x) = {T ∈ L(X, Y): T(h) ≤_Y f(x + h) − f(x) for all h ∈ X}.

REMARK 2.1. When f is a convex function, ∂^v f(x̄) reduces to the well-known subdifferential of convex analysis

∂f(x̄) = ∂_{A.C.} f(x̄) = {x* ∈ X*: f(x) − f(x̄) ≥ ⟨x*, x − x̄⟩ for all x ∈ X}.

REMARK 2.2. Let f: X → Y ∪ {+∞} be a Y⁺-convex mapping. If f is also continuous at x, then

∂^v f(x) ≠ ∅.

The next concept was introduced in [5] in finite dimensions. We state it in the infinite-dimensional case.

DEFINITION 2.4. Let U be a nonempty subset of Y. A functional g: U → ℝ ∪ {+∞} is called Y⁺-increasing on U if, for each y₀ ∈ U,

y ∈ (y₀ + Y⁺) ∩ U implies g(y) ≥ g(y₀).

In [14], using the Hahn–Banach separation theorem, B. Lemaire established the following proposition, which generalizes both Gol'shtein's result [9] and Levin's result [17]. For a mapping h: X → Y ∪ {+∞} and a Y⁺-increasing function g: Y → ℝ ∪ {+∞}, he used the convention

g ∘ h(x) = g(h(x)) if h(x) ∈ dom(g), and g(+∞) = +∞.

Consequently, g ∘ h is a mapping from X into ℝ ∪ {+∞} whose effective domain is given by

dom(g ∘ h) = dom(h) ∩ h⁻¹(dom(g)).

PROPOSITION 2.1. [14] Let X and Y be two real Banach spaces. Consider a mapping h from X into Y ∪ {+∞} and a function g from Y into ℝ ∪ {+∞}. If
(i) h is Y⁺-convex,
(ii) g is convex, Y⁺-increasing and continuous at some point of h(X),
then

∂(g ∘ h)(x) = ⋃_{y* ∈ ∂g(h(x))} ∂(y* ∘ h)(x).
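The chain rule of Proposition 2.1 can be checked by hand in finite dimensions. The sketch below (our illustration, with invented data) takes X = ℝ, Y = ℝ² ordered by ℝ²_+, an affine (hence Y⁺-convex) h and g(y) = max(y₁, y₂), which is convex, Y⁺-increasing and continuous, and compares the predicted subdifferential of g ∘ h with a direct test of the subgradient inequality.

```python
import numpy as np

# Hypothetical data: h(x) = (2x, -x) is componentwise affine, hence R^2_+-convex;
# g(y) = max(y1, y2) is convex, R^2_+-increasing and continuous everywhere.
a = np.array([2.0, -1.0])
def h(x): return a * x
def g(y): return max(y[0], y[1])

# At x = 0 both components of h tie, so dg(h(0)) = {(t, 1-t): t in [0, 1]},
# and d(y* o h)(0) = {2t - (1-t)} for y* = (t, 1-t).  Proposition 2.1 then
# predicts d(g o h)(0) = [min(a1, a2), max(a1, a2)] = [-1, 2].
def is_subgradient(s, grid=np.linspace(-5.0, 5.0, 2001)):
    # s is a subgradient of the (convex) scalar composite g o h at 0 iff
    # g(h(x)) >= g(h(0)) + s*x for all x.
    return all(g(h(x)) >= g(h(0.0)) + s * x - 1e-9 for x in grid)

print([s for s in (-1.5, -1.0, 0.5, 2.0, 2.5) if is_subgradient(s)])
# -> [-1.0, 0.5, 2.0], consistent with the predicted interval [-1, 2]
```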

In the sequel, we shall need the following result of [4]. Under the nonemptiness of the set {x ∈ X: h(x) ∈ −int(Y⁺)}, one has

∂(δ_{−Y⁺} ∘ h)(x̄) = ⋃_{y* ∈ (−Y⁺)°, ⟨y*, h(x̄)⟩ = 0} ∂(y* ∘ h)(x̄),   (2.1)

where the symbol ⟨·, ·⟩ denotes the bilinear pairing between Y and Y*, and δ_S denotes the indicator function of a set S.

REMARK 2.3. Notice that the function y ↦ δ_{−Y⁺}(y) is Y⁺-increasing. Moreover, for any Y⁺-convex mapping h: X → Y ∪ {+∞}, the composite function δ_{−Y⁺} ∘ h is also convex.

For a subset A of Y, we consider the function

Δ_A(y) = d(y, A) if y ∈ Y∖A, and Δ_A(y) = −d(y, Y∖A) if y ∈ A,

where d(y, A) = inf{‖u − y‖: u ∈ A}. This function was introduced by Hiriart-Urruty [10] (see also [12]), and was used later by Ciligot-Travain [2], and by Amahroq and Taa [1].
The next proposition has been established by Hiriart-Urruty [10].

PROPOSITION 2.2. [10] Let A ⊂ Y be a pointed closed convex cone with nonempty interior and A ≠ Y. The function Δ_A is convex, positively homogeneous, 1-Lipschitzian and decreasing on Y with respect to the order introduced by A. Moreover,

Y∖A = {y ∈ Y: Δ_A(y) > 0}, int(A) = {y ∈ Y: Δ_A(y) < 0} and bd(A) = {y ∈ Y: Δ_A(y) = 0}.

It is easy to verify the following lemma.

LEMMA 2.3. The function Δ: Y → ℝ defined by

Δ(y) = Δ_{−int(Y⁺)}(y)

is Y⁺-increasing on Y.
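For A = −int(ℝ²_+) in Y = ℝ² with the Euclidean norm, the function Δ_A admits the closed form used below (a computation we supply for illustration; it is not stated in the paper): Δ_A(y) = ‖max(y, 0)‖₂ outside A, and Δ_A(y) = maxᵢ yᵢ < 0 inside. The sampling loop spot-checks the convexity and 1-Lipschitz properties of Proposition 2.2 and the Y⁺-monotonicity of Lemma 2.3.

```python
import numpy as np

def delta(y):
    """Oriented distance Delta_A for A = -int(R^2_+), Euclidean norm:
    inside A the nearest point of the complement flattens one coordinate
    to 0, giving max_i y_i; outside, the nearest point of A is -y_-."""
    y = np.asarray(y, dtype=float)
    return np.max(y) if np.all(y < 0) else np.linalg.norm(np.maximum(y, 0.0))

rng = np.random.default_rng(1)
for _ in range(10000):
    y1, y2 = rng.uniform(-2.0, 2.0, size=(2, 2))
    lam = rng.uniform()
    # convexity and 1-Lipschitz continuity (Proposition 2.2)
    assert delta(lam*y1 + (1-lam)*y2) <= lam*delta(y1) + (1-lam)*delta(y2) + 1e-9
    assert abs(delta(y1) - delta(y2)) <= np.linalg.norm(y1 - y2) + 1e-9
    # Y+-monotonicity (Lemma 2.3): moving up the cone cannot decrease Delta
    assert delta(y1 + np.abs(y2)) >= delta(y1) - 1e-9

print("sampled properties from Proposition 2.2 and Lemma 2.3 hold")
```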

Let K be a closed convex subset of X. The normal cone N(K, x̄) to K at x̄ ∈ K is defined by

N(K, x̄) = {x* ∈ X*: ⟨x*, x − x̄⟩ ≤ 0 for all x ∈ K}.

This cone can also be written as

N(K, x̄) = ∂δ_K(x̄),

where δ_K is the indicator function of K. Properties of the subdifferential and the normal cone can be found in Rockafellar [25].
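A toy illustration of this definition (not from the paper): take X = ℝ and K = [0, 1]; testing the defining inequality on a grid recovers N(K, 0) = (−∞, 0], N(K, 1/2) = {0} and N(K, 1) = [0, +∞).

```python
import numpy as np

K = np.linspace(0.0, 1.0, 1001)   # hypothetical closed convex set K = [0, 1]

def in_normal_cone(x_star, x_bar):
    """Definition of N(K, x_bar): <x*, x - x_bar> <= 0 for all x in K."""
    return bool(np.all(x_star * (K - x_bar) <= 1e-12))

for x_bar in (0.0, 0.5, 1.0):
    sample = [s for s in (-2.0, -0.5, 0.0, 0.5, 2.0) if in_normal_cone(s, x_bar)]
    print(f"normals at {x_bar}:", sample)
# normals at 0.0: [-2.0, -0.5, 0.0]   i.e. (-inf, 0]
# normals at 0.5: [0.0]               i.e. {0}
# normals at 1.0: [0.0, 0.5, 2.0]     i.e. [0, +inf)
```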
As a direct consequence of Proposition 2.2, one has the following result.

PROPOSITION 2.4. [2] Let A ⊂ Y be a closed convex cone with nonempty interior. For all y ∈ Y, one has

0 ∉ ∂Δ_A(y).

3. Optimality Conditions
We begin by giving a necessary optimality condition for the optimization problem

(P1)  Y⁺-Minimize f(x) − g(x)  subject to x ∈ C,

where f, g: X → Y ∪ {+∞} are convex and lower semicontinuous mappings and C is a closed subset of X.
The point x̄ ∈ C is an efficient (respectively, weak efficient) solution of (P1) if (f − g)(x̄) is a Pareto (respectively, weak Pareto) minimal vector of (f − g)(C).
Throughout the sequel, we assume that x̄ ∈ dom(f) ∩ dom(g).

LEMMA 3.1. If x̄ ∈ C is a weak minimal solution of (P1) with respect to Y⁺, then for all T ∈ ∂^v g(x̄), x̄ solves the following scalar convex minimization problem:

(P2)  Minimize Δ_{−int(Y⁺)}(f(x) − f(x̄) − T(x − x̄))  subject to x ∈ C.

Proof. Suppose the contrary: there exists x₀ ∈ C such that

Δ_{−int(Y⁺)}(f(x₀) − f(x̄) − T(x₀ − x̄)) < Δ_{−int(Y⁺)}(0) = 0.

By Proposition 2.2, this implies that

f(x₀) − f(x̄) − T(x₀ − x̄) ∈ −int(Y⁺).   (3.1)

Since T ∈ ∂^v g(x̄), one has

T(x₀ − x̄) − (g(x₀) − g(x̄)) ∈ −Y⁺.   (3.2)

From (3.1), (3.2) and the fact that int(Y⁺) + Y⁺ ⊂ int(Y⁺), we obtain

f(x₀) − g(x₀) − (f(x̄) − g(x̄)) ∈ −int(Y⁺),

which contradicts the fact that x̄ is a weak minimal solution of (P1). □
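Lemma 3.1 is easy to exercise numerically. In the sketch below (hypothetical data of our own choosing), X = ℝ, Y = ℝ², Y⁺ = ℝ²_+ and C = ℝ; g is smooth, so ∂^v g(0) reduces to the singleton {g′(0)}, and x̄ = 0 is a weak minimal solution of Y⁺-Minimize f − g because the first component of f − g cannot drop below its value at 0. Using the closed form for Δ_{−int(ℝ²_+)} from the earlier sketch, we check that the scalarized objective of (P2) is minimized at x̄.

```python
import numpy as np

# Hypothetical data: f(x) = (x^2, e^x) and g(x) = (0, x) are R^2_+-convex;
# x_bar = 0 is a weak minimal solution of R^2_+-Minimize f(x) - g(x).
def f(x): return np.array([x**2, np.exp(x)])
T = np.array([0.0, 1.0])   # T(h) = h * g'(0), the only element of d^v g(0)

def delta(y):               # oriented distance to -int(R^2_+), as before
    y = np.asarray(y, dtype=float)
    return np.max(y) if np.all(y < 0) else np.linalg.norm(np.maximum(y, 0.0))

x_bar = 0.0
vals = [delta(f(x) - f(x_bar) - T * (x - x_bar)) for x in np.linspace(-3, 3, 601)]
print(min(vals) >= 0.0, delta(f(x_bar) - f(x_bar)) == 0.0)   # True True:
# x_bar solves the scalar convex problem (P2), as Lemma 3.1 asserts
```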



THEOREM 3.2. Assume that f is finite and continuous at x̄. If x̄ is a weak minimal solution of (P1), then for all T ∈ ∂^v g(x̄) there exists y* ∈ (−Y⁺)° \ {0} such that

y* ∘ T ∈ ∂(y* ∘ f)(x̄) + N^c(C, x̄).

Here, N^c(C, x̄) denotes the Clarke normal cone to C at x̄.

Proof. Set H(·) = f(·) − f(x̄) − T(· − x̄).
• On the one hand, since Δ_{−int(Y⁺)} is Y⁺-increasing and H is Y⁺-convex, the composite Δ_{−int(Y⁺)} ∘ H is convex.
• On the other hand, since Δ_{−int(Y⁺)} and H are continuous, Δ_{−int(Y⁺)} ∘ H is continuous.
Consequently, Δ_{−int(Y⁺)} ∘ H is locally Lipschitzian. Denoting by k₀ > 0 the Lipschitz constant of Δ_{−int(Y⁺)} ∘ H, due to the Clarke penalization [3], there exists k ≥ k₀ such that

0 ∈ ∂^c(Δ_{−int(Y⁺)} ∘ H + k d(·, C))(x̄).

Applying the sum rule [3], we obtain

0 ∈ ∂(Δ_{−int(Y⁺)} ∘ H)(x̄) + k ∂^c d(·, C)(x̄).

Since H is Y⁺-convex and Δ_{−int(Y⁺)}(·) is convex, Y⁺-increasing and continuous at 0 = H(x̄), due to Proposition 2.1 there exists y* ∈ ∂Δ_{−int(Y⁺)}(0) such that

0 ∈ ∂(y* ∘ H)(x̄) + N^c(C, x̄).

Since Δ_{−int(Y⁺)}(·) is a convex function and Δ_{−int(Y⁺)}(0) = 0, we have, for all y ∈ Y,

⟨y*, y⟩ ≤ Δ_{−int(Y⁺)}(y),

and hence, for all y ∈ −Y⁺,

⟨y*, y⟩ ≤ Δ_{−int(Y⁺)}(y) = −d(y, Y∖(−int(Y⁺))) ≤ 0.

That is, y* ∈ (−Y⁺)°. We conclude from Proposition 2.4 that y* ≠ 0. Consequently, there exists y* ∈ (−Y⁺)° \ {0} satisfying

0 ∈ ∂(y* ∘ f + ⟨−y* ∘ T, · − x̄⟩)(x̄) + N^c(C, x̄).

Finally, for all T ∈ ∂^v g(x̄), there exists y* ∈ (−Y⁺)° \ {0} such that

y* ∘ T ∈ ∂(y* ∘ f)(x̄) + N^c(C, x̄). □

Let S be a nonempty open convex subset of X. Setting C := X∖S, one obtains Theorem 3.3, which gives necessary optimality conditions for the reverse convex optimization problem (P).

THEOREM 3.3. Assume that f is finite and continuous at x̄ and that x̄ is a weak minimal solution of (P). Then, for all T ∈ ∂^v g(x̄) there exists y* ∈ (−Y⁺)° \ {0} such that

y* ∘ T ∈ ∂(y* ∘ f)(x̄) − N(S, x̄).

Proof. Let T ∈ ∂^v g(x̄). Applying Theorem 3.2, there exists y* ∈ (−Y⁺)° \ {0} such that

y* ∘ T ∈ ∂(y* ∘ f)(x̄) + N^c(C, x̄).   (3.3)

Since S is an open convex subset, it is epi-Lipschitzian at x̄ [26]. By a result of Rockafellar [26], we conclude that

N^c(X∖S, x̄) = −N(S, x̄).   (3.4)

Combining (3.3) and (3.4), we get the result. □

REMARK 3.1. In Theorem 3.3, if x̄ is an interior point of X∖S, then for all T ∈ ∂^v g(x̄) there exists y* ∈ (−Y⁺)° \ {0} such that

y* ∘ T ∈ ∂(y* ∘ f)(x̄).

THEOREM 3.4. Suppose that f, g: X → Y ∪ {+∞} are convex, proper and lower semicontinuous, S is a nonempty open convex subset of X and x̄ ∈ dom f ∩ dom g is a boundary point of S. If there exists y* ∈ (−Y⁺)° \ {0} such that

∂_ε(y* ∘ g)(x̄) + N(S, x̄) ⊂ ∂_ε(y* ∘ f)(x̄) for all ε > 0,   (3.5)

then x̄ is a weak minimal solution of (P).



Proof. As in Theorem 3.3,

N^c(X∖S, x̄) = −N(S, x̄).

Since ∂^c d(·, X∖S)(x̄) ⊂ N^c(X∖S, x̄), inclusion (3.5) implies

∂_ε(y* ∘ g)(x̄) − ∂^c d(·, X∖S)(x̄) ⊂ ∂_ε(y* ∘ f)(x̄) for all ε > 0.

Consequently, for all ε > 0,

∂_ε(y* ∘ g)(x̄) + ∂d(·, S)(x̄) − ∂^c d(·, X∖S)(x̄) ⊂ ∂_ε(y* ∘ f)(x̄) + ∂d(·, S)(x̄).

As ∂Δ_S(x̄) ⊂ ∂d(·, S)(x̄) − ∂^c d(·, X∖S)(x̄), we get

∂_ε(y* ∘ g)(x̄) + ∂Δ_S(x̄) ⊂ ∂_ε(y* ∘ f)(x̄) + ∂d(·, S)(x̄) for all ε > 0,

which yields

∂_ε(y* ∘ g)(x̄) + ∂Δ_S(x̄) ⊂ ∂_ε(y* ∘ f + d(·, S))(x̄) for all ε > 0.   (3.6)

Since Δ_S is convex and continuous, one has

∂_ε(y* ∘ g + Δ_S)(x̄) = ∂_ε(y* ∘ g)(x̄) + ∂Δ_S(x̄) for all ε > 0.   (3.7)

From (3.6) and (3.7), we obtain

∂_ε(y* ∘ g + Δ_S)(x̄) ⊂ ∂_ε(y* ∘ f + d(·, S))(x̄) for all ε > 0.

By the classical sufficient condition of Hiriart-Urruty [11], x̄ minimizes the function

y* ∘ f(x) − y* ∘ g(x) + d(x, S) − Δ_S(x) = y* ∘ f(x) − y* ∘ g(x) + d(x, X∖S),

where the last equality holds because S is open: for x ∈ S one has d(x, S) = 0 and Δ_S(x) = −d(x, X∖S), while for x ∈ X∖S one has Δ_S(x) = d(x, S) and d(x, X∖S) = 0. We conclude that x̄ (a boundary point of S) is a minimum of the problem

Minimize y* ∘ (f(x) − g(x)) subject to x ∈ X∖S.

Finally, since y* ∈ (−Y⁺)° \ {0}, x̄ is a weak minimal solution of (P). □

4. Application
In this section, we give an application to vector fractional mathematical programming under reverse convex constraints. Let f₁, ..., f_p, g₁, ..., g_p: X → ℝ be convex and lower semicontinuous functions such that

f_i(x) ≥ 0 and g_i(x) > 0 for all i = 1, ..., p.

We denote by φ the mapping defined as follows:

φ(x) := (f₁(x)/g₁(x), ..., f_p(x)/g_p(x)).

We investigate the vector optimization problem

(P*)  ℝ^p_+-Minimize φ(x)  subject to h(x) ∉ −int(Z⁺),

where Z⁺ is a nonempty closed convex cone and h is a Z⁺-convex mapping defined from X into Z. Setting S := {x ∈ X: h(x) ∈ −int(Z⁺)}, we assume that S ≠ ∅ and X∖S ≠ ∅.
Then we have the following results.

LEMMA 4.1. Let x̄ be a feasible point of problem (P*). Then x̄ is a weak minimal solution of (P*) if and only if x̄ is a weak minimal solution of the problem

(P̃)  ℝ^p_+-Minimize (f₁(x) − φ₁(x̄)g₁(x), ..., f_p(x) − φ_p(x̄)g_p(x))  subject to x ∈ X∖S,

where φ_i(x̄) = f_i(x̄)/g_i(x̄).

Proof. Let x̄ be a weak minimal solution of (P*), and suppose there exists x₁ ∈ X∖S such that

(f_i(x₁) − φ_i(x̄)g_i(x₁))_{i=1,...,p} − (f_i(x̄) − φ_i(x̄)g_i(x̄))_{i=1,...,p} ∈ −int(ℝ^p_+).

Since f_i(x̄) − φ_i(x̄)g_i(x̄) = 0 and g_i(x₁) > 0 for each i, one has

(f_i(x₁)/g_i(x₁) − f_i(x̄)/g_i(x̄))_{i=1,...,p} ∈ −int(ℝ^p_+),

which contradicts the fact that x̄ is a weak minimal solution of (P*). So x̄ is a weak minimal solution of (P̃). The converse implication can be proved in a similar way. The proof is thus completed. □
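The translation between (P*) and (P̃) is easy to exercise numerically; below is a sketch with invented one-dimensional data (X = ℝ, p = 2, and a grid of feasible points standing in for X∖S, whose reverse convex structure plays no role in Lemma 4.1), showing that the fractional objective and the parametrized D.C. objective f_i − φ_i(x̄)g_i detect the same weak minimizers.

```python
import numpy as np

# Invented data: p = 2, X = R; the grid stands in for the feasible set X \ S.
fs = [lambda x: (x - 1.0)**2, lambda x: x**2]            # convex, >= 0
gs = [lambda x: x**2 + 1.0,   lambda x: 0.5*x**2 + 2.0]  # convex, > 0
grid = np.linspace(-2.0, 2.0, 401)

def phi(x): return np.array([f(x) / g(x) for f, g in zip(fs, gs)])

def weakly_minimal(obj, x_bar):
    """No feasible x improves *every* component of obj strictly."""
    v = obj(x_bar)
    return not any(np.all(obj(x) < v - 1e-12) for x in grid)

x_bar = grid[np.argmin([phi(x).sum() for x in grid])]  # candidate point
c = phi(x_bar)                                         # c_i = phi_i(x_bar)
def dc(x): return np.array([f(x) - ci * g(x) for (f, g), ci in zip(zip(fs, gs), c)])

print(weakly_minimal(phi, x_bar), weakly_minimal(dc, x_bar))
# -> True True: x_bar is weakly minimal for (P*) iff it is for the
#    parametrized D.C. objective, as Lemma 4.1 asserts
```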

LEMMA 4.2. Denoting by S̄ the closure in X of the subset S, we have

S̄ = {x ∈ X: h(x) ∈ −Z⁺}.

Proof. From the continuity of h and the closedness of the cone Z⁺,

S̄ ⊂ {x ∈ X: h(x) ∈ −Z⁺}.

Conversely, let x ∈ X be such that h(x) ∈ −Z⁺. From the nonemptiness of S, there exists a ∈ X such that

h(a) ∈ −int(Z⁺).

Setting x_n := (1/n)a + (1 − 1/n)x for any n ≥ 1, the sequence (x_n)_{n≥1} converges to x. Since h is Z⁺-convex, one has

h(x_n) ∈ (1/n)h(a) + (1 − 1/n)h(x) − Z⁺ ⊂ −int(Z⁺) − Z⁺ ⊂ −int(Z⁺),

which means that x_n ∈ S. Then

{x ∈ X: h(x) ∈ −Z⁺} ⊂ S̄,

and the desired equality holds. □

THEOREM 4.3. Let x̄ be a boundary point of S and assume that each f_i is finite and continuous at x̄. If x̄ is a weak minimal solution of (P*), then for all (T₁, ..., T_p) ∈ ∂g₁(x̄) × ··· × ∂g_p(x̄) there exist (α₁*, ..., α_p*) ∈ ℝ^p_+ \ {0} and z* ∈ (−Z⁺)° such that ⟨z*, h(x̄)⟩ = 0 and

∑_{i=1}^{p} φ_i(x̄) α_i* T_i ∈ ∂(∑_{i=1}^{p} α_i* f_i)(x̄) − ∂(z* ∘ h)(x̄).

 
Proof. Let (T₁, ..., T_p) ∈ ∂g₁(x̄) × ··· × ∂g_p(x̄). Applying Lemma 4.1 and Theorem 3.3, there exist (α₁*, ..., α_p*) ∈ ℝ^p_+ \ {0} and z* ∈ (−Z⁺)° such that ⟨z*, h(x̄)⟩ = 0 and

∑_{i=1}^{p} φ_i(x̄) α_i* T_i ∈ ∂(∑_{i=1}^{p} α_i* f_i)(x̄) − N(S, x̄).   (4.1)

Using Lemma 4.2,

δ_S̄ = δ_{−Z⁺} ∘ h.

Since N(S, x̄) = N(S̄, x̄), one obtains

N(S, x̄) = ∂δ_S̄(x̄) = ∂(δ_{−Z⁺} ∘ h)(x̄).   (4.2)

Combining (2.1), (4.1) and (4.2), we get the result. □

THEOREM 4.4. Suppose that f₁, ..., f_p, g₁, ..., g_p: X → ℝ ∪ {+∞} are convex, proper and lower semicontinuous, S is a nonempty open convex subset of X and x̄ ∈ ⋂_{i=1}^{p} (dom f_i ∩ dom g_i) is a boundary point of S. Suppose also that there exists (α₁*, ..., α_p*) ∈ ℝ^p_+ \ {0} such that, for any z* ∈ (−Z⁺)° with ⟨z*, h(x̄)⟩ = 0, one has

∂_ε(∑_{i=1}^{p} φ_i(x̄) α_i* g_i)(x̄) + ∂(z* ∘ h)(x̄) ⊂ ∂_ε(∑_{i=1}^{p} α_i* f_i)(x̄) for all ε > 0.   (4.3)

Then x̄ is a weak minimal solution of (P*).

Proof. As previously,

N(S, x̄) = ∂δ_S̄(x̄) = ∂(δ_{−Z⁺} ∘ h)(x̄) = ⋃_{z* ∈ (−Z⁺)°, ⟨z*, h(x̄)⟩ = 0} ∂(z* ∘ h)(x̄).

Consequently, from inclusion (4.3), one has

∂_ε(∑_{i=1}^{p} φ_i(x̄) α_i* g_i)(x̄) + N(S, x̄) ⊂ ∂_ε(∑_{i=1}^{p} α_i* f_i)(x̄) for all ε > 0.

Finally, applying Theorem 3.4, we finish the proof. □

EXAMPLE 4.1. Let f, g: ℝ → ℝ be the functions given by

f(x) = |x| and g(x) = (1/2)x².

We consider h: ℝ → ℝ defined by

h(x) = x if x > 0, and h(x) = 0 if x ≤ 0.

In this case, ∂_ε g(0) = {0}, ∂_ε f(0) = [−1 − ε, 1 + ε] and ∂h(0) = [0, 1]. Under these assumptions, we remark that condition (4.3) is satisfied.

Acknowledgment
The authors express their sincere thanks to the referees for their insightful remarks, and to Pr. T. Amahroq for his suggestions, which improved the original version of this work.

References
1. Amahroq, T. and Taa, A. (1997), On Lagrange–Kuhn–Tucker multipliers for multiobjective optimization problems, Optimization, 41, 159–172.
2. Ciligot-Travain, M. (1994), On Lagrange–Kuhn–Tucker multipliers for Pareto optimization problems, Numerical Functional Analysis and Optimization, 15, 689–693.
3. Clarke, F.H. (1983), Optimization and Nonsmooth Analysis, Wiley-Interscience.
4. Combari, C., Laghdir, M. and Thibault, L. (1994), Sous-différentiels de fonctions convexes composées, Annales des Sciences Mathématiques du Québec, 18, 119–148.
5. Dauer, J.P. and Saleh, O.A. (1993), A characterization of proper minimal points as solutions of sublinear optimization problems, Journal of Mathematical Analysis and Applications, 178, 227–246.
6. Elhilali Alaoui, A. (1996), Caractérisation des fonctions D.C., Annales des Sciences Mathématiques du Québec, 20, 1–13.
7. Flores-Bazán, F. and Oettli, W. (2001), Simplified optimality conditions for minimizing the difference of vector-valued functions, Journal of Optimization Theory and Applications, 108, 571–586.
8. Gadhi, N. and Metrane, A., Sufficient optimality condition for vector optimization problems under D.C. data, to appear in Journal of Global Optimization.
9. Gol'shtein, E.G. (1971), Duality Theory in Mathematical Programming and its Application, Nauka, Moscow.
10. Hiriart-Urruty, J.-B. (1979), Tangent cones, generalized gradients and mathematical programming in Banach spaces, Mathematics of Operations Research, 4, 79–97.
11. Hiriart-Urruty, J.-B. (1989), From convex optimization to nonconvex optimization. In: Clarke, F.H., Demyanov, V.F. and Giannessi, F. (eds.), Nonsmooth Optimization and Related Topics, Plenum Press, pp. 219–239.
12. Hiriart-Urruty, J.-B. and Lemaréchal, C. (1993), Convex Analysis and Minimization Algorithms I, Springer, Berlin.
13. Horst, R. and Tuy, H. (1996), Global Optimization (Deterministic Approaches), 3rd edn., Springer, New York.
14. Lemaire, B. (1995), Subdifferential of a convex composite functional. Application to optimal control in variational inequalities. In: Nondifferentiable Optimization (Proceedings, Sopron, September 1984), Lecture Notes in Economics and Mathematical Systems, Springer, pp. 103–117.
15. Lemaire, B. and Volle, M. (1998), Duality in D.C. programming. In: Nonconvex Optimization and Its Applications, 27, 331–345.
16. Lemaire, B. (1998), Duality in reverse convex optimization, SIAM Journal on Optimization, 8, 1029–1037.
17. Levin, V.L., On the subdifferential of a composite functional, Soviet Mathematics Doklady, 11, 1194–1195.
18. Martínez-Legaz, J.-E. and Seeger, A. (1992), A formula on the approximate subdifferential of the difference of convex functions, Bulletin of the Australian Mathematical Society, 45, 37–41.
19. Martínez-Legaz, J.-E. and Volle, M. (1999), Duality in D.C. programming: the case of several D.C. constraints, Journal of Mathematical Analysis and Applications, 237, 657–671.
20. Michelot, C. (1987), Caractérisation des minima locaux des fonctions de la classe D.C., Université de Dijon.
21. Oettli, W. (1995), Kolmogorov conditions for minimizing vectorial optimization problems, OR Spektrum, 17, 227–229.
22. Penot, J.-P. and Théra, M. (1982), Semi-continuous mappings in general topology, Archiv der Mathematik, 38, 158–166.
23. Penot, J.-P. (2001), Duality for anticonvex programs, Journal of Global Optimization, 19, 163–182.
24. Raffin, C. (1969), Contribution à l'étude des programmes convexes définis dans des espaces vectoriels topologiques, Thèse, Paris.
25. Rockafellar, R.T. (1970), Convex Analysis, Princeton University Press, Princeton, NJ.
26. Rockafellar, R.T. (1980), Generalized directional derivatives and subgradients of nonconvex functions, Canadian Journal of Mathematics, 32, 257–280.
27. Tao, P.D. and Hoai An, L.T. (1997), Convex analysis approach to D.C. programming: theory, algorithms and applications, Acta Mathematica Vietnamica, 22, 289–355.
28. Tao, P.D. and Souad, E.B. (1988), Duality in D.C. optimization. Subgradient methods. In: Trends in Mathematical Optimization, International Series of Numerical Mathematics, 84, Birkhäuser Verlag, Basel, pp. 277–293.
29. Thach, P.T. (1993), Global optimality criterions and duality with zero gap in nonconvex optimization problems, SIAM Journal on Mathematical Analysis, 24, 1537–1556.
30. Tuy, H. (1995), D.C. optimization: theory, methods and algorithms. In: Handbook of Global Optimization, Kluwer Academic Publishers, Norwell, MA, pp. 149–216.
31. Tuy, H. and Oettli, W. (1994), On necessary and sufficient conditions for global optimality, Revista de Matemáticas Aplicadas, 15, 39–41.
32. Valadier, M. (1972), Sous-différentiabilité de fonctions convexes à valeurs dans un espace vectoriel ordonné, Mathematica Scandinavica, 30, 65–74.
