Electrical Engineering
Michel Sakarovitch
Linear Programming
With 12 Illustrations
All rights reserved. No part of this book may be translated or reproduced in any form without written permission from Springer Science+Business Media, LLC
9 8 7 6 5 4 3 2
The second reason is the development of computers. Computers are not only necessary to solve operations research models, but they have influenced directly our ways of thinking about complex systems. In many instances one can observe a parallel development of operations research and computer science.
It is convenient to consider operations research as consisting of two intimately related parts:
Since operations research models are most often mathematical models solved with the help of computer algorithms, one might be led to believe that one can consider the field totally as an abstraction. However, this is not the case. What makes operations research a fascinating subject for study and research is that there is a dialectic relationship between the two parts. The mathematical techniques should be considered in the context of the concrete problems that they are intended to solve. On the other hand, operations research is much more than a common-sense approach to real-life problems.
In this book we are concerned with the "model-solving" aspects of operations research. In fact, the examples given make no pretense to be realistic models. They are proposed as overly simplified versions in order to enable the reader, as easily as possible, to acquaint himself with the type of problems that can be tackled and to exemplify the mathematical techniques used. However, we would like to emphasize that model building is both important and difficult. Being able to define the system, choose the variables, and decide what can reasonably be neglected requires as much intelligence and creativity as solving the models.
The material presented here has been developed for an undergraduate course, first at the University of California (Berkeley) and then at the University of Grenoble (France). The book is largely self-contained, the only prerequisite being a year of calculus. Notions of linear algebra that are useful are presented in some detail. In particular, I have discussed the solution of linear systems of equations to the extent which I think necessary for the understanding of linear programming theory. Depending on how much of the final two chapters is included and on the initial level of the students in linear algebra, the topics in this book can be covered in a quarter or a semester.
This course is "computer oriented" in the sense that the algorithms (which are first depicted on simple examples) are given in a very intuitive algorithmic language, which makes their coding on a computer rather simple.
We are most grateful to Betty Kaminski for her efforts and perseverance in preparing the camera copy from which this book was produced.
Chapter 1. Introduction to Linear Programming
The goal of this chapter is to introduce those optimization problems which, just after World War II, G. B. Dantzig named "linear programs." The great success of linear programming (i.e., the study of linear programs) led authors who became interested in various optimization problems to link the term "programming" with that of a more or less fitted adjective, thus calling these problems convex programming, dynamic programming, integer programming, and so on. The result is that in operations research the term "program" has acquired the very precise meaning "optimization problem." It is not possible, however, to use the word "programming" for the study of general problems of optimization (hence, we say "mathematical programming"), because more or less simultaneously the term "program" was taking on another meaning much more in harmony with the original one -- that of a sequence of instructions in the context of computer science. This nice example of the development of scientific language does not make things clear for the beginner. To avoid confusion, in this book we therefore use the term "program" as equivalent to an optimization problem and "code" or "computer code" for what is called a program in computer science.
The notion of duality, which is central to the understanding of linear programming, is introduced in Chapter II. Necessary notions of linear algebra are reviewed in Chapter III, and the concept of basic solutions is defined in Chapter IV. Chapter V is devoted to the presentation of the simplex algorithm. The two phases of the simplex method are presented in Chapter VI together with some theoretical results. In Chapter VII we present computational aspects of the simplex and the revised simplex algorithm. The geometrical interpretation of the simplex algorithm is given in Chapter VIII. Chapter IX contains some complements on duality, and parametric linear programming is presented in Chapter X. Finally, Chapter XI is devoted to the presentation of a very important special linear program: the transportation problem.
                      Product
                      1     2
    Raw Material I    2     1
                 II   1     2
                 III  0     1

z = 4x1 + 5x2 be maximum subject to

         x1, x2 >= 0
 (P1)    2x1 +  x2 <= 8    (I)
         x1  + 2x2 <= 7    (II)
                x2 <= 3    (III)

This problem has the immediate geometric solution shown in Figure I-1 (next page).
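Because (P1) has only two variables, the geometric solution can be checked mechanically: intersect every pair of constraint lines, keep the feasible intersection points (the vertices of the feasible region), and evaluate the objective at each. The sketch below is ours, not the book's algorithmic language; it assumes the canonical data of (P1).

```python
from itertools import combinations

# Constraints of (P1), each written as a.x <= b, including x1 >= 0, x2 >= 0.
A = [(2, 1), (1, 2), (0, 1), (-1, 0), (0, -1)]
b = [8, 7, 3, 0, 0]

def vertices(A, b):
    """Intersect every pair of constraint lines; keep the feasible points."""
    pts = []
    for i, j in combinations(range(len(A)), 2):
        (a1, a2), (a3, a4) = A[i], A[j]
        det = a1 * a4 - a2 * a3
        if det == 0:
            continue  # parallel lines: no intersection point
        x1 = (b[i] * a4 - a2 * b[j]) / det   # Cramer's rule
        x2 = (a1 * b[j] - b[i] * a3) / det
        if all(ak[0] * x1 + ak[1] * x2 <= bk + 1e-9 for ak, bk in zip(A, b)):
            pts.append((x1, x2))
    return pts

best = max(vertices(A, b), key=lambda p: 4 * p[0] + 5 * p[1])
print(best)   # (3.0, 2.0), where z = 4*3 + 5*2 = 22
```

This brute-force enumeration is only workable for tiny examples, but it makes the geometric picture concrete: the optimum is attained at a vertex.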
Section I. Examples and Definitions of Linear Programs
[Figure I.1: graphical solution of (P1).]

Transportation cost ($ per ton):

                Denver   Phoenix   Chicago
    New York       5        6         3
    Seattle        3        5         4
This table shows that to convey x tons from New York to Phoenix, for instance, the cost is $6x. And the problem consists in determining an optimal "transportation plan," i.e., in finding what are the quantities of product S to send from each harbor to each factory in such a way that:
(α) Demands are satisfied (each factory receives at least what is needed).
(β) Quantities sent from each harbor do not exceed availability.
(γ) Quantities sent are non-negative.
(δ) The total transportation cost is minimum subject to the preceding constraints.
Let us assign to New York harbor index 1, to Seattle harbor index 2, and indices 1, 2, and 3, respectively, to the factories of Denver, Phoenix, and Chicago. x_ij will denote the quantity of product S sent from harbor i (i = 1 or 2) to factory j (j = 1, 2, or 3) each week. The linear program is then
    x11 + x21 >= 400
    x12 + x22 >= 300        (α)
    x13 + x23 >= 200

    x11 + x12 + x13 <= 550
    x21 + x22 + x23 <= 350        (β)

    z = 5x11 + 6x12 + 3x13 + 3x21 + 5x22 + 4x23 be minimum        (δ)
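Before solving (P2) formally, it can help to test candidate shipping plans against conditions (α), (β), (γ) and compare their costs. A small Python sketch; the plan below is our own illustrative choice, not taken from the text.

```python
# Data of (P2): rows = harbors (0: New York, 1: Seattle),
# columns = factories (0: Denver, 1: Phoenix, 2: Chicago).
cost = [[5, 6, 3], [3, 5, 4]]
demand = [400, 300, 200]
supply = [550, 350]

def feasible(x):
    """Check conditions (alpha), (beta), (gamma) of the transportation problem."""
    meets_demand = all(x[0][j] + x[1][j] >= demand[j] for j in range(3))
    within_supply = all(sum(x[i]) <= supply[i] for i in range(2))
    nonnegative = all(v >= 0 for row in x for v in row)
    return meets_demand and within_supply and nonnegative

def total_cost(x):
    return sum(cost[i][j] * x[i][j] for i in range(2) for j in range(3))

plan = [[50, 300, 200],   # New York ships all of its 550 tons
        [350, 0, 0]]      # Seattle sends its 350 tons to Denver
print(feasible(plan), total_cost(plan))   # True 3700
```

A checker like this does not find the optimal plan, but it lets the reader experiment with the trade-offs before the simplex machinery is available.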
Definition 1: A linear program is an optimization problem in which:
(a) The variables of the problem are constrained by a set of linear equations and/or inequalities.
(b) Subject to these constraints, a function (called the objective function) is to be maximized (or minimized). This function depends linearly on the variables.
Remark 2: Very often, as is true in the examples depicted above, the variables may be interpreted as the level of activity at which some processes will operate. The objective function may be a reward one tries to maximize or a cost one tries to minimize.
Section 2. Definitions and Notations
    cx = Σ_{j=1}^{n} c^j x_j

and similarly for sums over index sets, Σ_{j∈J} and Σ_{i∈I}.

(e) A_I^J denotes the submatrix of A, the elements of which are A_i^j for i ∈ I, j ∈ J. Note that if I = {i} and J = {j}, A_I^J is written A_i^j.
Depending on the circumstances, we will thus write matrix A in one of the following forms:

    A = | A_1^1  A_1^2  ...  A_1^n |  =  | A_1 |  =  [A^1, A^2, ..., A^n]
        | A_2^1  A_2^2  ...  A_2^n |     | A_2 |
        |  ...                     |     | ... |
        | A_m^1  A_m^2  ...  A_m^n |     | A_m |

or more generally:

        | A_{J1}^{I1}  A_{J1}^{I2}  ...  A_{J1}^{Ip} |
    A = | A_{J2}^{I1}  A_{J2}^{I2}  ...  A_{J2}^{Ip} |
        |     ...                                    |
        | A_{Jq}^{I1}  A_{Jq}^{I2}  ...  A_{Jq}^{Ip} |

where I1 + I2 + ... + Ip = {1, 2, ..., n} and J1 + J2 + ... + Jq = {1, 2, ..., m}.
    A = B means A_i^j = B_i^j for all i = 1, 2, ..., m; j = 1, 2, ..., n

The transpose A^T of A is defined by (A^T)_j^i = A_i^j for i = 1, 2, ..., m; j = 1, 2, ..., n.
Note that:
(a) The transpose of a column vector is a row vector
(b) (A^T)^T = A
(c) (Ax)^T = x^T A^T
Definition 13: U_m (or U when no ambiguity exists) denotes the m x m unit matrix, i.e.,

    (U_m)_i^j = 1 if j = i, 0 otherwise
Examples: 1. Let

    A = | 2  1  1  0  0 |    b = | 8 |    x = | 3 |
        | 1  2  0  1  0 |        | 7 |        | 2 |
        | 0  1  0  0  1 |        | 3 |        | 0 |
                                              | 0 |
                                              | 1 |

    c = [4, 5, 0, 0, 0],    y = [1, 2, 0],    J = {1, 2, 5}

Then we have

    A^T = | 2  1  0 |    b^T = [8, 7, 3],    c_J = [4, 5]
          | 1  2  1 |
          | 1  0  0 |
          | 0  1  0 |
          | 0  0  1 |

    x_J = | 3 |,    Ax = b,    yA = [4, 5, 1, 2, 0] >= c,    cx = 22 = yb
          | 2 |
          | 1 |
    maximize z = cx subject to
    Ax <= b,  x >= 0
Section 3. Linear Programs in Canonical Form
(2) cx = z is maximized
subject to the constraints

    Ax <= b,  x >= 0

where

    A is a given m x n-matrix
    b is a given m-column vector
    c is a given n-row vector
    x is an (unknown) n-column vector

Traditionally, (2) is usually written in the form

          Ax <= b,  x >= 0
    (P)
          cx = z (Max)

where the notation

    cx = z (Max)

must be read "z denotes the value of the scalar product cx and we will try and make z maximum." z is the "objective function."
In detached coefficient form, (P) is written

    A_1^1 x1 + A_1^2 x2 + ... + A_1^n xn <= b_1
    ...
    A_m^1 x1 + A_m^2 x2 + ... + A_m^n xn <= b_m
    x_j >= 0,    j = 1, ..., n
    c^1 x1 + c^2 x2 + ... + c^n xn = z (Max)
Remark 4: Constraints x >= 0 (which mean, see Definition 11, x_j >= 0 for j = 1, 2, ..., n) could have been included in the set Ax <= b. To do this, it would suffice to define the (m+n) x n-matrix A' and the (m+n)-column vector b' by

    A' = |  A |        b' = | b |
         | -U |             | 0 |
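Remark 4 is mechanical enough to code directly: stack -U under A and n zeros under b. A Python sketch (the function name is ours), shown on the constraint matrix of (P1):

```python
def fold_nonnegativity(A, b):
    """Build A' = [A; -U] and b' = [b; 0] so that A'x <= b' expresses
    both Ax <= b and x >= 0 (as in Remark 4)."""
    m, n = len(A), len(A[0])
    A_prime = [row[:] for row in A]
    # append the rows of -U: one row -e_i per variable x_i
    A_prime += [[-1 if j == i else 0 for j in range(n)] for i in range(n)]
    b_prime = list(b) + [0] * n
    return A_prime, b_prime

A_prime, b_prime = fold_nonnegativity([[2, 1], [1, 2], [0, 1]], [8, 7, 3])
print(A_prime[3:], b_prime)   # [[-1, 0], [0, -1]] [8, 7, 3, 0, 0]
```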
Definition 15: Let us come back to the linear program in canonical form:

          Ax <= b,  x >= 0
    (P)
          cx = z (Max)

Any x satisfying

    Ax <= b,  x >= 0

is called a "feasible solution." The set of such x's is the set of feasible solutions. For the sake of compactness, we pose

    D = {x | Ax <= b, x >= 0}
(4)    x1 <= 1,   x1 >= 0;    x1 = z (Max)

(5)    x1 <= -1,  x1 >= 0;    x1 = z (Max)

(6)    x1 >= 0;    x1 = z (Max)

(7)    x1 < 1,   x1 >= 0;    x1 = z (Max)
The set of feasible solutions is the segment [0, 1) (open on the right side) but there is no optimal solution.
It is to prevent situations of that type (and also because strict inequalities never have physical significance in a model) that strict inequalities are forbidden in linear programming.
           x >= 0
    (PS)   Ax = b
           cx = z (Max)

is said to be in "standard form." If some constraints are equations and others are inequalities, we say that we have a "mixed form."
Remark 7: Note that linear program (PS) can easily be written in canonical form. The i-th constraint of (PS) is

    A_i x = b_i

which is equivalent to the pair of inequalities A_i x <= b_i and -A_i x <= -b_i.
Definition 17: Although optimization problems (PS) and (PS') are formally different, we see that they are, in fact, "the same."
In general, we will say that two optimization problems (O) and (O') are "equivalent" if to each feasible solution of one of these problems, one can find a corresponding solution to the other in such a way that for a pair of homologous solutions, the values of the objective functions are equal. (Note that in the example above the correspondence was simply identity.)
Section 4. Equivalent Formulations of Linear Programs
(9)    A_i x + ξ_i = b_i,    ξ_i >= 0    (equivalent to A_i x <= b_i)
    Min [cx] = -Max [-cx]
    x∈D        x∈D

Assume that x̄ is a solution of Max_{x∈D}[-cx] and let x ∈ D with

    cx < cx̄

Multiplying by -1, we get

    -cx > -cx̄

which contradicts the fact that x̄ is an optimal solution of the maximization problem. So if one of these problems has an optimal solution, so does the other, and optimal solutions of one of these problems are solutions of the other.
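The identity Min[cx] = -Max[-cx] is easy to sanity-check numerically over any finite set of candidate points; a small Python sketch with illustrative data of our own:

```python
# Illustrative candidate points playing the role of the domain D,
# with the objective row c of (P1); neither is claimed to come from the text.
D = [(0, 0), (4, 0), (3, 2), (1, 3), (0, 3)]
c = (4, 5)

values = [c[0] * x1 + c[1] * x2 for (x1, x2) in D]
# Min[cx] over D equals -Max[-cx] over D:
assert min(values) == -max(-v for v in values)
print(min(values), max(values))   # 0 22
```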
(11)    c^1 x1 + c^J x_J = z (Max), with variable x1 not restricted in sign.

Given a feasible solution of (11), pose x1'' = -Min[0, x1] (and x1' = x1 + x1''); the resulting point

is a feasible solution of (12) and the values of the objective functions are equal.
          Ax <= b,  x >= 0
    (P)
          cx = z (Max)

The set of points of R^n satisfying

(13)    cx = z    (z a given value)

constitutes a "hyperplane." This hyperplane separates R^n in two regions, two "half spaces," such that two points are in the same half space if and only if the segment joining them does not intersect the hyperplane (13).
The set of points satisfying

    Ax <= b

is the intersection of half spaces

    A_i x <= b_i    for i = 1, 2, ..., m

This intersection (itself intersected with the "non-negative orthant" x >= 0) is a "convex polyhedron." A tetrahedron, a cube, and a diamond are examples of convex polyhedra in R^3. A convex polygon is a convex polyhedron in R^2 (see Figure 1.1).
The points of the polyhedron of feasible solutions to (P) which satisfy

    A_i x = b_i    for an i ∈ {1, 2, ..., m}    or    x_j = 0    for a j ∈ {1, 2, ..., n}

lie on the boundary of the polyhedron. The hyperplane

    cx = z

cuts the domain of feasible solutions "as far as possible" in the direction of increasing z (this is the procedure we used in solving example (P1) in Section 1). In general, the intersection is reduced to a point x which is a vertex of the polyhedron.
The simplex algorithm proposes a journey from a starting (nonoptimal) vertex to an optimal vertex through a series of visits of a chain of adjacent vertices of the polyhedron.
EXERCISES
        x1 + 2x2 - 3x3 <= 1
        2x1 + x2 - 5x3 >= 2
 (P)    x1 + 3x2 - x3 >= 1
        x1 >= 0,  x2 >= 0
        x1 + 2x2 + 3x3 = z (Max)
and the manufacturer would like to find a systematic way of getting a result.
Write this problem as a linear program (the cost of blending is assumed to be
0) .
3. A farm has two lots, A and B (200 and 400 acres, respectively). Six kinds of cereals, I, II, III, IV, V, and VI, can be grown on lots A and B.
The profit for each 100 lbs. of cereal is:

                      I    II   III   IV    V    VI
    Profit/100 lbs.  24    31    14   18   61    47

To grow 100 lbs. of cereal we need some area (in acres) and quantities of water (in cubic meters):

                      I      II     III     IV      V      VI
    Area on lot A   0.01   0.015   0.01   0.008   0.025   0.02

The total volume of water that is available is 400,000 m³. We try to make a maximum profit while respecting various constraints. Write this problem as a linear program.
    a sin t_i + b tan t_i + c = Q_i,    i = 1, 2, ..., n  (n > 3)

with three unknowns a, b, c has no solution. The physicist has two different ideas of what a good adjustment may mean:

(a) Find the values of a, b, c that minimize

    Σ_{i=1}^{n} | a sin t_i + b tan t_i + c - Q_i |

In either case, and for physical reasons, coefficients a, b, and c must be non-negative. Show that each of these adjustments can be written as a linear program.
          M1   M2   M3   M4
    P1     4    4    9    3
    P2     3    5    8    8
    P3     2    6    5    7

          M1   M2   M3   M4
    P1     0    0    0    3
    P2     0    5    0    0
    P3     2    0    4    1
7. Show that the problem

          Ax = b,    α_j <= x_j <= β_j
    (P)
          cx = z (Max)

(where α_j and β_j are given reals) is a linear program. Write (P) in standard form.
8. Show that the problem

          Ax + v = b,    x >= 0    (v not restricted in sign)
    (P)
          cx - Σ_{i=1}^{m} |v_i| = z (Max)
    x = Cz    where C = AB

          Ax <= b,    x >= 0
    (P)
          cx = z (Max)
Chapter II. Dual Linear Programs
          Ax <= b,    x >= 0
    (P)
          cx = z (Max)

          yA >= c,    y >= 0
    (D)
          yb = w (Min)

In detached coefficient form, (P) is written

    A_1^1 x1 + A_1^2 x2 + ... + A_1^n xn <= b_1
    ...
    A_m^1 x1 + A_m^2 x2 + ... + A_m^n xn <= b_m
    c^1 x1 + c^2 x2 + ... + c^n xn = z (Max)
and ( D) is written
    A_1^1 y1 + A_2^1 y2 + ... + A_m^1 ym >= c^1
    ...
(D)    A_1^n y1 + A_2^n y2 + ... + A_m^n ym >= c^n

    y^i >= 0,    i = 1, 2, ..., m

    b_1 y1 + b_2 y2 + ... + b_m ym = w (Min)
Remark 2: We note that the variables of the dual are in a one-to-one correspondence with the constraints of the linear program we started with (for convenience this linear program is called the "primal"), while the constraints of the dual are in one-to-one correspondence with the variables of the primal. This is shown in the following diagram:
             x1      x2      x3     ...     xn
    y1     A_1^1   A_1^2   A_1^3   ...    A_1^n    <= b_1
    y2     A_2^1   A_2^2   A_2^3   ...    A_2^n    <= b_2
    ...
    ym     A_m^1   A_m^2   A_m^3   ...    A_m^n    <= b_m
             >=      >=      >=    ...     >=
            c^1     c^2     c^3    ...    c^n     <- objective function to maximize

with x_j >= 0 (j = 1, 2, ..., n) and y^i >= 0 (i = 1, 2, ..., m); the column of right-hand sides b is the objective function to minimize.

Reading this table "horizontally" gives problem (P); a "vertical" reading gives its dual.
    2y1 + y2 >= 4
Section 1. Formal Definition of the Dual Linear Program
          -A^T y^T <= -c^T,    y^T >= 0
    (D)
          -b^T y^T = w' (Max)

(D) being in canonical form, we can take its dual in following the process of Definition 1. The dual of (D) is (DD), whose row vector of dual variables we denote u = x^T.
(DD) can be written equivalently (by taking the transpose and multiplying constraints by -1) as (P): the dual of the dual is the primal.
The dual of the linear program in standard form

          Ax = b,    x >= 0
    (PS)
          cx = z (Max)

is

          yA >= c,    y unrestricted in sign†
    (DS)
          yb = w (Min)
To prove this we write (PS) in canonical form (as we learned to do in Section I.4):

          A_i x <= b_i,    -A_i x <= -b_i,    i = 1, 2, ..., m,    x >= 0
    (PS')
          cx = z (Max)
And we can apply to (PS') the process of Definition 1 to find its dual. The constraints of (PS') (other than x >= 0) are in one of two groups. Let us associate dual variable y'_i to the i-th constraint of the first group and y''_i to the i-th constraint of the second group. Let y' (resp. y'') be the m-row vector the i-th component of which is y'_i (resp. y''_i). Then the dual of (PS') is

          y'A - y''A >= c
    (DS')    y'b - y''b = w (Min)
          y', y'' >= 0

By posing y = y' - y'', we see that (DS') is equivalent to (DS) (Section 1.4).
(a) A_i x <= b_i

In this case, when we go to the equivalent canonical form, the constraint will not be changed and the dual variable y^i must be >= 0.

(b) A_i x >= b_i

In this case, when we go to the equivalent canonical form, the constraint will be multiplied by -1 and the corresponding dual variable y^i must be <= 0 (-y^i >= 0).

† This is also denoted y ≷ 0.
(c) A_i x = b_i

In this case, when we go to the equivalent canonical form, we have two constraints

    A_i x <= b_i,    -A_i x <= -b_i

If we call y'_i and y''_i the corresponding dual variables, we note that each time one of these variables is written in an expression with a coefficient, the other one is there with the opposite coefficient. So that we can pose

    y^i = y'_i - y''_i

this variable of the dual (we say "dual variable") being not constrained.
Go on assuming that the objective function is to be maximized and that the constraint on primal variable x_j is

(a) x_j >= 0

When we go to the equivalent canonical form we do not change variables and the corresponding constraint of the dual is an inequality (>=).

(b) x_j <= 0

(c) x_j not restricted in sign; we then pose

    x_j = x'_j - x''_j,    x'_j, x''_j >= 0
Primal-Minimization Dual-Maximization
The proof of the lower part of the table (which is equivalent to the first part read from right to left) is left to the reader as an exercise.
( 1)
Section 2. The Objective Function Values of Dual Linear Programs
    x_j >= 0,    j = 1, ..., n

Since x >= 0, this gives

    yAx >= cx

which together with (2') gives the desired result.    q.e.d.
Proof: Suppose that there exists x', a feasible solution of (P), such that cx' > cx. Since cx = yb we have cx' > yb, which contradicts Theorem 2.
Remark 7: The importance of the corollary of Theorem 2 lies in the fact that it provides a "certificate of optimality." Assume that you have found an optimal solution of a linear program (either by solving it with the simplex algorithm or by a good guess or by any other way) and that you want to convince a "supervisor" that your solution is in fact optimal. It will then suffice to exhibit a dual feasible solution giving the same value to the objective function. We will see later that if the solution has been obtained through the simplex algorithm, you will have at hand, together with your optimal solution of the primal, an optimal solution of the dual.
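The "supervisor check" of Remark 7 is only a few lines of code. Using problem (P1) of Chapter I together with the vectors x = (3, 2) and y = (1, 2, 0) that appeared in the example of Section I.2, a Python sketch:

```python
# (P1) in canonical form: maximize cx subject to Ax <= b, x >= 0.
A = [[2, 1], [1, 2], [0, 1]]
b = [8, 7, 3]
c = [4, 5]

x = [3, 2]      # claimed optimal primal solution
y = [1, 2, 0]   # dual feasible solution exhibited as a certificate

# Primal feasibility: Ax <= b and x >= 0.
assert all(sum(A[i][j] * x[j] for j in range(2)) <= b[i] for i in range(3))
assert all(v >= 0 for v in x)
# Dual feasibility: yA >= c and y >= 0.
assert all(sum(y[i] * A[i][j] for i in range(3)) >= c[j] for j in range(2))
assert all(v >= 0 for v in y)
# Equal objective values: by the corollary of Theorem 2, both are optimal.
cx = sum(cj * xj for cj, xj in zip(c, x))
yb = sum(yi * bi for yi, bi in zip(y, b))
print(cx, yb)   # 22 22
```

Note that the check involves only feasibility tests and two scalar products; no optimization is rerun, which is exactly the point of a certificate.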
[Figure II.2: Possible ranges of cx and yb -- every value of cx lies to the left of every value of yb, with a possible gap between z_max and w_min.]
We will see in Chapter VI that there is, in fact, no gap. Moreover, if one of the intervals is empty (i.e., if one of the problems does not have a feasible solution), while the other has a feasible solution, the latter problem does not have an optimal solution.
         x11 + x12 + x13 <= 550
         x21 + x22 + x23 <= 350
         -x11 - x21 <= -400
 (P2)    -x12 - x22 <= -300        x_ij >= 0,  i = 1, 2;  j = 1, 2, 3
         -x13 - x23 <= -200
         -5x11 - 6x12 - 3x13 - 3x21 - 5x22 - 4x23 = z (Max)
         y1 - y3 >= -5
         y1 - y4 >= -6
         y1 - y5 >= -3
 (D2)    y2 - y3 >= -3        y^i >= 0,  i = 1, 2, ..., 5
         y2 - y4 >= -5
         y2 - y5 >= -4
 (3)    π1 - σ1 <= 5        π1 - σ2 <= 3
        π2 - σ1 <= 6        π2 - σ2 <= 5        σ_i, π_j >= 0
        π3 - σ1 <= 3        π3 - σ2 <= 4
The manager of the firm that needs product S is then forced to agree that given the transportation costs he will be better off if he lets the transportation specialist take care of transportation. The deal is therefore closed to the satisfaction of the transportation specialist. But the latter has some freedom in choosing his prices (he promised only that constraints (3) will be satisfied). Since he wants to maximize his return, he wants
EXERCISES
1. Write the dual of the linear program
         3x1 + x2 - 2x3 = 4
         x1 - 2x2 + 3x3 <= 1        x1, x2 >= 0
 (P)     2x1 + x2 - x3 >= 2         x3 <= 0
         3x1 + 4x2 + 2x3 = z (Min)
          Ax <= b,    x >= 0
    (P)
          cx = z (Max)

(a) Show that if (P) is primal feasible, (P) has an obvious feasible solution.
(b) Show that if (P) is dual feasible, (D) the dual of (P) has an obvious feasible solution.
(c) Show that if (P) is both primal and dual feasible, (P) has an obvious optimal solution.
(d) Show that if for an index i, we have
and
(P) has no feasible solution.
(e) Show that if for an index we have

and

(P) has no optimal solution.
         2x1 + x2 <= 6
         x1 - x2 <= 1
 (P)     x1 + x2 <= 3        x1, x2 >= 0
         3x1 + 2x2 = z (Max)
    A_i x <= 0,    i = 1, 2, ..., m
! Y AJ >
m
L
i =l
.
yi =
A
1
j=l, Z, • • • , n
7. A function f: E -> R is said to be "superadditive" (resp. "subadditive") if
          Ax <= b,    x >= 0
    (P)
          cx = z (Max)

is a superadditive function of b and a subadditive function of c.
          Ax <= b,    x >= 0
    (P)
          (b^T) x = z (Max)

Show that if the linear system

    Ax = b

has a solution with x >= 0, this solution is an optimal solution to (P).
          Ax = b,    x >= 0
    (P)
          cx = z (Max)

we can see it is made of:
1. A system of linear equations Ax = b (we also say a "linear system").
2. A set of nonnegativity constraints on the variables x >= 0.
3. An objective function cx = z, which we try to make maximum.
Thus one must not be surprised in finding that the theory of linear systems plays a central role in linear programming. The goal of this chapter is to recall elements of this theory that will be needed in the sequel. In the first section we define the concept of "solution" of a (general) linear system (which may contain a number of equations smaller than, greater than, or equal to the number of unknowns) and of redundancy. In Sections 2 and 3 we show how manipulations on the equations of a linear system can be viewed as matrix multiplication of the matrix of coefficients [A,b] of the system. Finally, in Section 4 we introduce the very important "pivot operation" and describe (without proof) how a linear system can be solved through a sequence of pivot operations.
In this chapter (as in the preceding ones) A is an m x n-matrix, b is an m-column vector, and c is an n-row vector. Consider the linear system

(1)    Ax = b

In detached coefficient form, (1) is written

        A_1^1 x1 + A_1^2 x2 + ... + A_1^n xn = b_1
(1')    A_2^1 x1 + A_2^2 x2 + ... + A_2^n xn = b_2
        ...
        A_m^1 x1 + A_m^2 x2 + ... + A_m^n xn = b_m

(1) can also be written

(1'')    A_i x = b_i,    i = 1, 2, ..., m
Chapter III. Elements of the Theory of Linear Systems
 (S)     x1 + x2 - 1 = 0
 (S')    2x1 + 2x2 - 2 = 0

are, in fact, identical. We now define more rigorously the equivalence of linear systems.
    Āx = b̄

if and only if these two systems have the same solution set, i.e.,

    {x | Ax = b} = {x | Āx = b̄}

    A_i x = b_i    for i = 1, 2, ..., m

Multiplying the i-th equality by y_i and adding, we get

    yAx = yb

Thus x is a solution of (1̄). Conversely, if x is a solution of (1̄), it is a solution of (1).
Definition 3: A linear system (1) is said to be "inconsistent" if there exists an m-row vector y such that

(2*)    yA = 0,    yb ≠ 0

Remark 3: From Theorem 1 it is apparent that if (2*) holds, then (1) has no solution. The (m+1)st equation of (1) would read

    0x = a ≠ 0

which is clearly infeasible. In fact, we will see in Section 4 that if (1) has no solution, then one can find y such that (2*) holds, so that a system is inconsistent if and only if it has no solution.
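A certificate (2*) of inconsistency is easy to verify by machine. A Python sketch on a small system of our own (not one of the examples below):

```python
# A certificate of inconsistency in the sense of (2*): a row vector y
# with yA = 0 and yb != 0 proves that Ax = b has no solution.
# Small illustrative system:  x1 + x2 = 1,  x1 + x2 = 2.
A = [[1, 1], [1, 1]]
b = [1, 2]
y = [1, -1]

yA = [sum(y[i] * A[i][j] for i in range(2)) for j in range(2)]
yb = sum(y[i] * b[i] for i in range(2))
print(yA, yb)   # [0, 0] -1  -- so the system is inconsistent
```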
A linear system that is neither inconsistent nor redundant is said to be of "full rank." In this case, from (2) and (2*) we have

(2')    yA = 0  =>  y = 0

         x1 + 2x2 + x3 = 5
 (E1)    2x1 + 3x2 + x3 = 7
         x1 + 3x2 + 2x3 = 8

 (E2)    x1 - x3 = -1
         x2 + x3 = 3

 (E3)    x1 + x2 = 2
         x2 + x3 = 3

 (E4)    x2 + x3 = 3
         x1 + x2 = 2
    2x1 + 3x2 + x3 = 7
    x1 + 3x2 + 2x3 = 8

Taking y = (1, -1), we get the first equation of (E2). Taking y = (-1/3, 2/3), we get the second equation of (E2). Similarly, it can be shown that equations of (E1) can be generated by linear combination of those of (E2). Thus, from Theorem 1, (E1) and (E2) are equivalent. We leave it to the reader to prove that (E2) and (E3) are equivalent.
By looking at (E1), we cannot say much about its solution set, whereas (E2), (E3), and (E4) are written in a way that gives directly an explicit formulation of the solution set. In (E2), we can consider that x3 is a parameter (or a "free variable"; we will also say a "nonbasic variable") that can be given arbitrary values, corresponding values of x1 and x2 being

    V(E2) = {x1, x2, x3 | x1 = -1 + x3,  x2 = 3 - x3}

    V(E3) = {x1, x2, x3 | x1 = 2 - x2,  x3 = 3 - x2}
          x1 + 2x2 + x3 = 5
 (E1')    2x1 + 3x2 + x3 = 7
          x1 + 3x2 + 2x3 = 7
Section 1. Solution of Linear Systems (Definition); Redundancy
Taking y = (3, -1, -1), we get (2*), which shows that (E1') is inconsistent.
These examples should help the reader to understand what we mean by "solving linear system (1)."
(3)    Āx = b̄

such that there exists J ⊂ {1, 2, ..., n} for which Ā^J is, up to a permutation of rows or columns, the unit matrix.
In this case, we say that (1) has been solved with respect to the set J of indices (or variables). Set J is called "basic." The complementary set J̄ is called "nonbasic."

(3')    Ā^J x_J + Ā^J̄ x_J̄ = b̄

(3'')    x_J = b̄ - Ā^J̄ x_J̄

which makes it appear clearly that the system has been solved with respect to the basic set J of variables (give arbitrary values to the nonbasic variables and deduce, by a few multiplications and additions, the corresponding values of basic variables).
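Form (3'') can be exercised on (E2): give the nonbasic variable x3 an arbitrary value and deduce the basic variables x1 and x2, then check the resulting point against the original system (E1). A Python sketch:

```python
def solution_E2(x3):
    """Form (3'') for (E2): basic variables x1, x2 expressed in terms
    of the nonbasic (free) variable x3."""
    return (-1 + x3, 3 - x3, x3)

# Every such point solves the original system (E1).
for t in (0, 1, 2.5):
    x1, x2, x3 = solution_E2(t)
    assert x1 + 2 * x2 + x3 == 5
    assert 2 * x1 + 3 * x2 + x3 == 7
    assert x1 + 3 * x2 + 2 * x3 == 8
print(solution_E2(0))   # (-1, 3, 0)
```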
But this formalism assumes that we know which basic variable appears in which equation, i.e., that there is a (perfect) order on the indices in J. So when one reads (3''), one must take J as an ordered set or a "list." For instance, (E3) is the solution of (E1) with respect to (1,3), whereas (E4) is the solution with respect to (3,1).
Form (3'') is very convenient and will be used in the sequel. The reader is warned that especially when we actually compute solutions of systems, this formalism contains a slight ambiguity that necessitates some care (see, for example, Remark V.1).
Remark 6: Recall that we defined "solving" system (1) with respect to the (ordered) set of indices J as finding an equivalent system

(3)    Āx = b̄

for which Ā^J is the unit matrix. (3) cannot contain a redundant equation. This can be seen either by going back to Remark 1 or by noting that no equation of (3) can be obtained as a linear combination of the other equations of the system, since each equation of (3) contains a variable that is not contained in the other equations.
Remark 7: Recall that we defined (Definition I.3) the product of m x n-matrix A by n x q-matrix B (or the product of B by A on its left) as being the m x q-matrix C defined by

(4)    C_i^j = Σ_k A_i^k B_k^j

(4')    C^j = A B^j

which means that to obtain the j-th column of the product, we just have to multiply matrix A by the n-column vector B^j. In other words, the j-th column of C just depends on A and on the j-th column of B.
Section 2. Solving Linear Systems Using Matrix Multiplication
    A × (B × C) = (A × B) × C

if the product exists.

    C × B = U

then
(a) B is said to be "regular"
(b) C is said to be the inverse of B and is denoted B⁻¹.

(i) B × B⁻¹ = B⁻¹ × B = U; (B⁻¹)⁻¹ = B; for a given B, B⁻¹, if it exists, is unique.

    C_i × B = U_i,    i = 1, 2, ..., m

multiplying the i-th equality by B_k and adding, we get
1. Consider the system of two equations (the last two equations of (E1)):

    A = | 2  3  1 |,    b = | 7 |
        | 1  3  2 |         | 8 |

Take J = {1, 2}:

    (A^J)⁻¹ = |  1    -1  |
              | -1/3  2/3 |

    (A^J)⁻¹ A = | 1  0  -1 |,    (A^J)⁻¹ b = | -1 |
                | 0  1   1 |                 |  3 |

and we have the coefficients of (E2).
2. Consider the system

(6)    2x1 - 3x2 + 7x3 - 2x4 = 5
       3x1 + 4x2 + 2x3 - 3x4 = 33
Remark 11: To keep this text as compact as possible, the following properties will be stated without proof:

(3)    Āx = b̄

equivalent to the full-rank system (1), there exists a nonsingular matrix B such that

    Ā = BA,    b̄ = Bb
Definition 6: An "eta-matrix" is a unit matrix except for one of its columns. An eta-matrix of order m is thus completely specified when we are given the index r of its special column and the m-column vector d standing in that column. The eta-matrix with vector d being the r-th column will be denoted

    D(r; d)
Examples:

    D1 = D(3; (4, 3, 2)) = | 1  0  4 |
                           | 0  1  3 |
                           | 0  0  2 |

and, of order 4, D2 = D(1; d) and D3 = D(1; d') are unit matrices except for their first columns d and d'.
(i) The product of two eta-matrices D(r; d) and D'(r; d') (same r) is an eta-matrix D(r; d'') = D(r; d) × D'(r; d') with

    d''_i = d'_i + d_i d'_r    for i ≠ r
    d''_r = d_r d'_r           for i = r

(ii) D(r; d) is regular if and only if d_r ≠ 0; when this is the case, D'(r; d') = (D(r; d))⁻¹ is given by

    d'_i = -d_i / d_r    for i ≠ r
    d'_r = 1 / d_r       for i = r
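The inverse of an eta-matrix D(r; d) (when d_r ≠ 0) is again an eta-matrix, with column d'_i = -d_i/d_r for i ≠ r and d'_r = 1/d_r. Both facts are short enough to code and test; the sketch below (function names are ours) builds D(r; d), inverts it by that rule, and checks that the product is the unit matrix:

```python
def eta(m, r, d):
    """The m x m eta-matrix D(r; d): a unit matrix whose r-th column
    (0-indexed here) is replaced by the vector d."""
    E = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    for i in range(m):
        E[i][r] = float(d[i])
    return E

def eta_inverse(m, r, d):
    """Inverse of D(r; d), defined when d[r] != 0; again an eta-matrix."""
    assert d[r] != 0
    dp = [-di / d[r] for di in d]   # d'_i = -d_i / d_r  for i != r
    dp[r] = 1.0 / d[r]              # d'_r = 1 / d_r
    return eta(m, r, dp)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

d = [2.0, 4.0, -1.0]
I = matmul(eta(3, 1, d), eta_inverse(3, 1, d))
print(I)   # the 3 x 3 unit matrix
```

Because only the special column differs from the unit matrix, eta-matrices can be stored with m numbers instead of m², which is what makes them attractive for the revised simplex method later on.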
For instance, the inverses D2⁻¹ and D3⁻¹ of the eta-matrices above are again eta-matrices; their special columns are obtained from those of D2 and D3 by the formula in (ii).
3. Finding Equivalent Systems of Linear Equations: Elementary Row Operations

As announced in Remark 10, we will see how to solve (1) without computing (A^J)⁻¹.
(1)    Ax = b

by

    αA_r x = αb_r

(6)    2x1 - 3x2 + 7x3 - 2x4 = 5
       3x1 + 4x2 + 2x3 - 3x4 = 33
Apply ERO1(1, 1/2):

    x1 - (3/2)x2 + (7/2)x3 - x4 = 5/2
    3x1 + 4x2 + 2x3 - 3x4 = 33

Apply ERO2(1, 2, -3):

    x1 - (3/2)x2 + (7/2)x3 - x4 = 5/2
         (17/2)x2 - (17/2)x3 = 51/2

ERO1(2, 2/17) gives

    x1 - (3/2)x2 + (7/2)x3 - x4 = 5/2
         x2 - x3 = 3

Apply ERO2(2, 1, 3/2):

    x1 + 2x3 - x4 = 7
         x2 - x3 = 3
(1'')    A_i x = b_i,    i = 1, ..., m

After ERO2(r, k, β), the system reads

(1*)    A_i x = b_i    for i = 1, ..., m, i ≠ k
        A_k x + βA_r x = b_k + βb_r

If x is a solution of (1''), then in particular

    A_k x = b_k    and    A_r x = b_r

so that

    βA_r x = βb_r

and thus

    A_k x + βA_r x = b_k + βb_r

i.e., x is a solution of (1*). Conversely, if x is a solution of (1*), then A_i x = b_i for i = 1, ..., m, i ≠ k; in particular A_r x = b_r, hence

(*)    βA_r x = βb_r

and subtracting (*) from the last equation of (1*) gives A_k x = b_k, so that x is a solution of (1'').
   T(ERO2(r, k, β)) = D(r; d)

with

   d_i = β   if i = k
   d_i = 0   if i ≠ k, i ≠ r
   d_i = 1   if i = r
Example: For system (6), with A as above and b = (5, 33),

   T(ERO1(1, 1/2)) = [ 1/2  0 ]      T(ERO2(1, 2, -3)) = [  1  0 ]
                     [  0   1 ]                          [ -3  1 ]

   T(ERO1(2, 2/17)) = [ 1   0   ]    T(ERO2(2, 1, 3/2)) = [ 1  3/2 ]
                      [ 0  2/17 ]                         [ 0   1  ]
and the product

   B = T(ERO2(2, 1, 3/2)) x T(ERO1(2, 2/17)) x T(ERO2(1, 2, -3)) x T(ERO1(1, 1/2))

is equal to

   B = [  4/17  3/17 ]
       [ -3/17  2/17 ]

The reader is invited to check that

   B x A = [ 1  0   2  -1 ]      B x b = [ 7 ]
           [ 0  1  -1   0 ]              [ 3 ]

or, in other words, that

   B = (A^J)^{-1},   J = {1, 2}
4. Pivot Operation

The pivot operation on the element A_r^s ≠ 0 consists of the following sequence of elementary row operations:

   ERO1(r, 1/A_r^s)
   for all i = 1, 2, ..., m except i = r do
      ERO2(r, i, -A_i^s)
   end for all

Remark 14: In the pivot operation, ERO1(r, 1/A_r^s) is performed before entering the loop. Thus when performing elementary row operations of the second kind (in the loop), matrix A has been changed in such a way that A_r^s = 1.
Example: Linear system (6) has been solved, in the example following Definition 6, by the succession of two pivot operations:

(a) The first pivot operation on A_1^1 = 2 gives the new linear system (characterized by matrix A' and vector b')

   A' = [ 1  -3/2   7/2  -1 ]      b' = [  5/2 ]
        [ 0  17/2 -17/2   0 ]           [ 51/2 ]

(b) The second pivot operation on A'_2^2 = 17/2 gives

   A'' = [ 1  0   2  -1 ]      b'' = [ 7 ]
         [ 0  1  -1   0 ]            [ 3 ]

the "solution" of (6).
We note that

   A' = D'A,   b' = D'b

with

   D' = [ 1/2  0 ]  = T(ERO2(1, 2, -3)) x T(ERO1(1, 1/2))
        [-3/2  1 ]

and

   A'' = D''A',   b'' = D''b'

with

   D'' = [ 1  3/17 ]  = T(ERO2(2, 1, 3/2)) x T(ERO1(2, 2/17))
         [ 0  2/17 ]
Since a pivot operation is completely determined by its pivot element, we can speak of the pivot operation on this element without speaking of the linear system. Let [Ā, b̄] be the augmented matrix of (1) after a pivot operation on A_r^s has been performed. We have

   (7)   Ā_i^j = A_i^j - A_r^j A_i^s / A_r^s   if i ≠ r       b̄_i = b_i - b_r A_i^s / A_r^s   if i ≠ r
         Ā_r^j = A_r^j / A_r^s                 if i = r       b̄_r = b_r / A_r^s               if i = r

In particular,

   Ā_i^s = 0   if i ≠ r
   Ā_r^s = 1   if i = r

Thus the pivot operation "creates" or "makes appear" a unit column in the s-th column of Ā. Moreover, if A_r^j = 0 (j ≠ s) and A_r^s = 1, then Ā^j = A^j.
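Formulas (7) translate directly into a short routine. The following is a sketch using exact rational arithmetic; the name `pivot` and the row-list representation of [A, b] are our own choices.

```python
from fractions import Fraction

def pivot(T, r, s):
    """Pivot operation on element T[r][s] != 0 of the augmented matrix
    T = [A, b] (0-indexed): ERO1(r, 1/T[r][s]) followed by
    ERO2(r, i, -T[i][s]) for every row i != r, i.e. formulas (7)."""
    m, n = len(T), len(T[0])
    piv = Fraction(T[r][s])
    out = [row[:] for row in T]
    out[r] = [Fraction(v) / piv for v in T[r]]        # row r divided by the pivot
    for i in range(m):
        if i != r:
            out[i] = [Fraction(T[i][j]) - Fraction(T[r][j]) * Fraction(T[i][s]) / piv
                      for j in range(n)]
    return out
```

Applied to system (6), two successive pivots (first on the element 2 in row 1, then on the element 17/2 in row 2) reproduce the augmented matrix [1 0 2 -1 | 7; 0 1 -1 0 | 3] obtained above.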
Remark 16: The pivot operation is a sequence of elementary row operations, and each of these elementary row operations can be expressed as the multiplication of the augmented matrix of the system [A,b] by a nonsingular matrix on its left (see Remark 13). Thus the pivot operation itself can be expressed as the multiplication of matrix [A,b] on its left by a nonsingular matrix Pi(r,s), which is the product of the matrices corresponding to the elementary row operations composing the pivot operation. Pi(r,s), called the "pivot matrix," is the product of m eta-matrices, all with the nontrivial column in the r-th position. Thus, from Remark 12, Pi(r,s) is an eta-matrix D(r; d) with

   d_i = -A_i^s / A_r^s   for i ≠ r
   d_i =  1 / A_r^s       for i = r
   [Ā, b̄] = Pi(r, s) [A, b]

and Pi(r, s) is the unit matrix of order m except for its r-th column:

                 [ 1  ...  -A_1^s/A_r^s      ...  0 ]
                 [ 0  ...  -A_2^s/A_r^s      ...  0 ]
                 [              .                   ]
   Pi(r, s)  =   [ 0  ...   1/A_r^s          ...  0 ]   <- r-th row
                 [              .                   ]
                 [ 0  ...  -A_m^s/A_r^s      ...  0 ]
                               ^
                            r-th column
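The pivot matrix can be built and checked numerically. A sketch (the names `pivot_matrix` and `matmul` are ours): left-multiplying [A, b] by Pi(r, s) must give the same result as the pivot operation itself.

```python
from fractions import Fraction

def matmul(A, B):
    """Plain matrix product over Fractions."""
    return [[sum(Fraction(A[i][k]) * Fraction(B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def pivot_matrix(T, r, s):
    """Pi(r, s) for the m-row matrix T (0-indexed): the unit matrix
    except for its r-th column, whose entries are -T[i][s]/T[r][s]
    for i != r and 1/T[r][s] for i = r."""
    m = len(T)
    piv = Fraction(T[r][s])
    P = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    for i in range(m):
        P[i][r] = 1 / piv if i == r else -Fraction(T[i][s]) / piv
    return P
```

On system (6), Pi for the first pivot is [1/2 0; -3/2 1], and multiplying it into [A, b] reproduces the first tableau of the example above.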
1. Consider

   (E1)    x1 + 2x2 +  x3 = 5
          2x1 + 3x2 +  x3 = 7
           x1 + 3x2 + 2x3 = 8

After the pivot operations, (E1) becomes

   (Ē1)    x1       -  x3 = -1
                 x2 +  x3 =  3
                0x2 + 0x3 =  0

The third equation is redundant.

2. Let us now try to solve system (E1*). The same sequence of pivot operations as in the preceding example leads to

   (Ē1*)   x1       -  x3 = -1
                 x2 +  x3 =  3

and a third equation of the form 0x2 + 0x3 = b̄3 with b̄3 ≠ 0: system (E1*) has no solution.
3. Solve

   [A, b] = [ 1   1   1  -1 |  1 ]
            [ 1  -1   1  -3 |  3 ]
            [ 1   2  -1   1 |  4 ]

The system is solved through the sequence of pivot operations shown in the following tableau:

   x1  x2   x3    x4  |  b
    1   1    1    -1  |  1      pivot operation r=1, s=1    D(1) = D(1; (1, -1, -1))
    1  -1    1    -3  |  3
    1   2   -1     1  |  4

    1   1    1    -1  |  1      pivot operation r=2, s=2    D(2) = D(2; (1/2, -1/2, 1/2))
    0  -2    0    -2  |  2
    0   1   -2     2  |  3

    1   0    1    -2  |  2      pivot operation r=3, s=3    D(3) = D(3; (1/2, 0, -1/2))
    0   1    0     1  | -1
    0   0   -2     1  |  4

    1   0    0   -3/2 |  4
    0   1    0     1  | -1
    0   0    1   -1/2 | -2

and

   D(3) x D(2) x D(1) = [ -1/4   3/4   1/2 ]  =  (A^J)^{-1},   J = {1, 2, 3}
                        [  1/2  -1/2    0  ]
                        [  3/4  -1/4  -1/2 ]
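The claim D(3) x D(2) x D(1) = (A^J)^{-1} in example 3 can be verified mechanically. A sketch, with the three eta-matrices of the example written out in full (exact arithmetic via `fractions`; the helper name `matmul` is ours):

```python
from fractions import Fraction

def matmul(A, B):
    """Plain matrix product over Fractions."""
    return [[sum(Fraction(A[i][k]) * Fraction(B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Eta-matrices of the three pivot operations of example 3
D1 = [[1, 0, 0], [-1, 1, 0], [-1, 0, 1]]
D2 = [[1, Fraction(1, 2), 0], [0, Fraction(-1, 2), 0], [0, Fraction(1, 2), 1]]
D3 = [[1, 0, Fraction(1, 2)], [0, 1, 0], [0, 0, Fraction(-1, 2)]]

AJ = [[1, 1, 1], [1, -1, 1], [1, 2, -1]]   # first three columns of A, J = {1, 2, 3}
P = matmul(D3, matmul(D2, D1))             # should equal (A^J)^{-1}
```

Multiplying P by A^J gives the unit matrix, confirming that the accumulated eta-matrices of the pivot sequence are exactly the inverse of the basis matrix.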
EXERCISES
1. Prove that the following two linear systems are equivalent:

   2x1 + 5x2 + 3x3 = 12          2x1 + 3x2 +  x3 = 2
    x1 + 2x2 -  x3 =  3           x1 -  x2 - 7x3 = …
    x1 +  x2 -  x3 =  5

3. Solve the system

    x1 +  x2 +  x3 + 10x4 +  2x5 = 10
    x1 +  x2 -  x3        +  4x5 =  6
   2x1 + 3x2 - 2x3 +  3x4 + 10x5 = 17
4. Consider the system

   (1)   2x1 + 3x2 + 4x3 =  1
          x1 -  x2 +  x3 =  2
         4x1 + 3x2 + 2x3 = -1
         3x1 -  x2 -  x3 =  0

(b) Show that any equation of this system is redundant. Discard the fourth equation.

(c) Suppose that we want to know the values of x1, x2, x3 for any right-hand side. System (1) can be written as (2) …

(e) Compute the product D3 x D2 x D1 and explain your result.
6. Prove Remark 9.

9. Let

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

be a linear program in standard form and assume that any equation of the system Ax = b …

Chapter IV. Bases and Basic Solutions of Linear Programs

Consider a linear program in standard form

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

and assume that the linear system

   (1)   Ax = b

is full rank (i.e., not redundant and not inconsistent). This implies that m ≤ n (see Remarks III.3 and III.11).

Definition 1: Given a linear program (P) in standard form such that (1) is full rank, we call a "basis" of (P) a set J ⊂ {1, 2, ..., n} of indices such that A^J is square nonsingular. In other words, J is a basis if and only if linear system (1) can be solved with respect to J (see Sections III.1 and III.2).

If J is a basis of (P), A^J is the "basis matrix" corresponding to J.
The basic solution corresponding to basis J is defined by

   x_J = (A^J)^{-1} b,    x_j = 0   for j ∉ J

This solution is called the "basic solution" corresponding to basis J. In other words, the basic solution corresponding to basis J is the solution of linear system (1) which we obtain in making all nonbasic variables equal to zero.
Example: By addition of slack variables x3, x4, x5, linear program (P1) has been written in standard form in Section 1.4:

   (P1)   2x1 +  x2 + x3           = 8
           x1 + 2x2      + x4      = 7       x_i ≥ 0   for i = 1, 2, ..., 5
                 x2           + x5 = 3
          4x1 + 5x2 = z (Max)

{3, 4, 5} is obviously a basis of (P1). The corresponding basic solution is

   x1 = x2 = 0,   x3 = 8,   x4 = 7,   x5 = 3

Another basis is {1, 2, 5}, with basic solution

   x1 = 3,   x2 = 2,   x3 = x4 = 0,   x5 = 1
   (Pζ)   Ax = b    x ≥ 0
          cx = z (Max) - ζ     (ζ given scalar)

Remark 3: Let M be the matrix of coefficients of linear program (Pζ) and suppose that M_r^s = A_r^s ≠ 0. A pivot operation on M_r^s transforms the matrix M into M̄ (see Remark III.15):

   (2)    Ā_i^j = A_i^j - A_r^j A_i^s / A_r^s   if i ≠ r
          Ā_r^j = A_r^j / A_r^s                 if i = r

   (2')   b̄_i = b_i - b_r A_i^s / A_r^s        if i ≠ r
          b̄_r = b_r / A_r^s                    if i = r
The last row of M (the cost row) transforms in the same way:

   (3)    c̄^j = c^j - c^s A_r^j / A_r^s
   (3')   ζ̄ = ζ + c^s b_r / A_r^s

Then it is easy to check that after a pivot operation on the element M_r^s = A_r^s ≠ 0 of the matrix M, we obtain, with η = c^s / A_r^s,

   (3*)   c̄ = c - η A_r,    ζ̄ = ζ + η b_r
Example: For (P1),

   M = [ 2  1  1  0  0 |   8 ]
       [ 1  2  0  1  0 |   7 ]
       [ 0  1  0  0  1 |   3 ]
       [ 4  5  0  0  0 |   0 ]

Perform a pivot operation on M_3^2 = 1. We obtain

   M̄ = [ 2  0  1  0  -1 |   5 ]
        [ 1  0  0  1  -2 |   1 ]
        [ 0  1  0  0   1 |   3 ]
        [ 4  0  0  0  -5 | -15 ]

the pivot matrix being Pi(3, 2).
2. Writing a Linear Program in Canonical Form with Respect to a Basis
Theorem 1: Let

   (Pζ)   Ax = b    x ≥ 0
          cx = z (Max) - ζ

and assume that linear system Ax = b is full rank. Let B be a nonsingular m x m matrix and γ an m-row vector of coefficients. Then (Pζ) is equivalent (according to Definition 1.15) to the linear program

   (P'ζ)   (BA)x = Bb    x ≥ 0
           (c - γA)x = z (Max) - ζ - γb

Proof: (i) We have seen in Section III.2 that linear system (1) is equivalent to

   (BA)x = Bb

provided that B is nonsingular. Thus (Pζ) and (P'ζ) have the same feasible solution sets.

(ii) The value of the objective function of (P'ζ) can be written

   cx + ζ - γ(Ax - b)

Thus for all x solutions of (1) (feasible solutions of (Pζ) and (P'ζ)) we have

   Ax - b = 0

and the two objective values coincide.
Definition 5: Consider a linear program (P) where (1) is full rank and let J be a basis of (P). Apply the transformation described in Theorem 1 with

   (a)  B = (A^J)^{-1}
   (b)  γ = π, solution of

        (5)   π A^J = c_J

(P) then becomes the linear program written in "canonical form with respect to basis J"; i.e.,

   (6')   (c(J))^j = 0             if j ∈ J
          (c(J))^j = c^j - π A^j  if j ∉ J

In canonical form with respect to J:

   (a)  Ā^J is, up to a permutation of rows or columns, the unit matrix.
   (b)  c̄_J = 0.
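Definition 5 can be sketched as a routine that, given a basis J, computes B = (A^J)^{-1}, the multipliers π from (5), and the canonical-form data. The names `inverse` and `canonical_form` are ours, not the book's.

```python
from fractions import Fraction

def inverse(M):
    """Gauss-Jordan inverse of a square matrix, over Fractions."""
    m = len(M)
    T = [[Fraction(M[i][j]) for j in range(m)] +
         [Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    for c in range(m):
        r = next(i for i in range(c, m) if T[i][c] != 0)
        T[c], T[r] = T[r], T[c]
        piv = T[c][c]
        T[c] = [v / piv for v in T[c]]
        for i in range(m):
            if i != c and T[i][c] != 0:
                f = T[i][c]
                T[i] = [T[i][j] - T[c][j] * f for j in range(2 * m)]
    return [row[m:] for row in T]

def canonical_form(A, b, c, J):
    """Rewrite (P) in canonical form with respect to basis J:
    B = (A^J)^{-1}; pi solves pi A^J = c_J; the program becomes
    (BA)x = Bb,  (c - pi A)x = z(Max) - pi b   (Definition 5)."""
    m, n = len(A), len(A[0])
    B = inverse([[A[i][j] for j in J] for i in range(m)])
    Abar = [[sum(B[i][k] * Fraction(A[k][j]) for k in range(m))
             for j in range(n)] for i in range(m)]
    bbar = [sum(B[i][k] * Fraction(b[k]) for k in range(m)) for i in range(m)]
    pi = [sum(Fraction(c[J[k]]) * B[k][i] for k in range(m)) for i in range(m)]
    cbar = [Fraction(c[j]) - sum(pi[i] * Fraction(A[i][j]) for i in range(m))
            for j in range(n)]
    zeta = sum(pi[i] * Fraction(b[i]) for i in range(m))
    return Abar, bbar, cbar, zeta
```

For (P1) with the basis {5, 1, 2}, this produces the reduced costs (0, 0, -1, -2, 0), the basic values (1, 3, 2), and ζ = 22, matching the canonical form computed by hand below.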
   (P)   Ax = b    x ≥ 0
         cx = z (Max)

Definition 6: To make references to pivot operations in the sequel light and precise, we will give the name

   PIVOT(p, q, r, s; M)

to the pivot operation on the element M_r^s of the p x q-matrix M, which transforms M into M̄, i.e.,

   M̄_i^j = M_i^j - M_r^j M_i^s / M_r^s   if i ≠ r
   M̄_r^j = M_r^j / M_r^s                 if i = r
Example:

   M' = M̄ = [ 2  0  1  0  -1 |   5 ]
             [ 1  0  0  1  -2 |   1 ]
             [ 0  1  0  0   1 |   3 ]
             [ 4  0  0  0  -5 | -15 ]

is the matrix of coefficients of

   (P'1)   2x1      + x3      -  x5 = 5
            x1           + x4 - 2x5 = 1       x_j ≥ 0
                 x2           +  x5 = 3       j = 1, 2, ..., 5
           4x1 - 5x5 = z (Max) - 15

(We see in this example how the fact that J' is the ordered set (3, 4, 2) must be taken into account (see Remark III.5).)
Let us now perform PIVOT(4, 6, 2, 1; M'). We get

   M'' = [ 0  0  1  -2   3 |   3 ]
         [ 1  0  0   1  -2 |   1 ]
         [ 0  1  0   0   1 |   3 ]
         [ 0  0  0  -4   3 | -19 ]

the matrix of coefficients of linear program (P1) written in canonical form with respect to {3, 1, 2} = {3, 4, 2} ∪ {1} \ {4}. Let us now perform PIVOT(4, 6, 1, 5; M''). We get

   M''' = [ 0  0   1/3  -2/3  1 |   1 ]
          [ 1  0   2/3  -1/3  0 |   3 ]
          [ 0  1  -1/3   2/3  0 |   2 ]
          [ 0  0  -1    -2    0 | -22 ]

the matrix of coefficients of

   (P'''1)        (1/3)x3 - (2/3)x4 + x5 = 1
             x1 + (2/3)x3 - (1/3)x4      = 3       x_j ≥ 0
                  x2 - (1/3)x3 + (2/3)x4 = 2       j = 1, 2, ..., 5
            -x3 - 2x4 = z (Max) - 22
3. Feasible Bases. Optimal Bases
Proof: Consider linear program (Pζ) as introduced in Definition 5 and assume that

   (7)   c(J) = c - πA ≤ 0

For the basic solution we have x_J = (A^J)^{-1} b ≥ 0 and x_j = 0 for j ∉ J. For any feasible solution x of (Pζ),

   z = cx = Σ_{j∉J} (c^j - πA^j) x_j + πb ≤ πb      since c^j - πA^j ≤ 0 and x_j ≥ 0

while for the basic solution itself

   z = Σ_{j∉J} (c^j - πA^j) x_j + πb = πb           since x_j = 0 for j ∉ J

Thus the basic solution corresponding to J is optimal. Moreover, w_min = z_max.
EXERCISES

1. Consider the linear program

   (P)    x1 + 5x2 + 4x3 ≥ 23
         2x1 + 6x2 + 5x3 ≥ 28
         4x1 + 7x2 + 6x3 = z (Min)

and the linear program

   (P)   Ax ≤ b    x ≥ 0
         cx = z (Max)

4. Prove Remark 7.
5. Consider the linear program

   (P)   4x1 + 4x2 + 4x3 +  x4 ≤ 24
         8x1 + 6x2 + 4x3 + 3x4 ≤ 36       x_i ≥ 0   i = 1, 2, 3, 4
         5x1 +  x2 + 6x3 + 2x4 = z (Max)

(a) Write (P) in standard form (the slack variables are named x5 and x6).
(b) Write (P) in canonical form with respect to the basis {3, 4}.
(c) Is {3, 4} a feasible basis? Is it an optimal basis?
(d) Assume that the right-hand side of the first constraint is changed from 24 into 26. What is the new value of the solution? By how much has the objective function been increased?
(e) What can you say if the right-hand side of the first constraint is changed from 24 into 44?
Let

   I ⊂ {1, 2, ..., m}     Ī = {1, 2, ..., m} \ I
   J ⊂ {1, 2, ..., n}     J̄ = {1, 2, ..., n} \ J

(a) Prove that

   (i)  |I| + |J| = m
   (ii) [A^J (U_m)^I] is nonsingular

is equivalent to

   (i)  |Ī| + |J̄| = n
   (ii) the corresponding submatrix is nonsingular.

(b) Prove that a necessary and sufficient condition for (I, J) to be a basis of (P) is that A_Ī^J be square nonsingular.

(c) Prove that a necessary and sufficient condition for (I, J) to be a basis of (P) is that (Ī, J̄) be a basis of (D).
Chapter V. The Simplex Algorithm

The simplex algorithm was discovered in 1947 by G. Dantzig as a tool to solve linear programs. The simplex algorithm is central to this course on linear programming because it exemplifies the process of operations research described in the preface: it is not only a very efficient way (its efficiency is not yet completely understood) of solving practical problems, used innumerable times by engineers, industrialists, and military people, but it is also the basis of a mathematical theory that can be used to prove various results.

We will try to keep these two points of view in this text, i.e., insist on the computational aspects of the simplex algorithm (see, in particular, Chapter VII) and show how it can be used as a mathematical tool.

The simplex algorithm can only be applied to linear programs that are written in canonical form with respect to a feasible basis. We show in Chapter VI how to accomplish this.
Here we assume that the linear program

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

is in fact written in canonical form with respect to the feasible basis J. Thus the three following conditions are satisfied:

   (1)  A^J is, up to a permutation of rows, the m x m unit matrix.
   (2)  b ≥ 0, since J is a "feasible" basis.
   (3)  c_J = 0 (see Section IV.2).

Remark 1: We need to know the structure of A^J. We define the mapping

   col: {1, 2, ..., m} → J

such that

   (4)   A_i^j = 1   if j = col(i)
         A_i^j = 0   if j ∈ J and j ≠ col(i)

1. A Particular Case
Example:

   2x1 +  x2 + x3           = 8
    x1 + 2x2      + x4      = 7       x_i ≥ 0   i = 1, 2, ..., 5
          x2           + x5 = 3
   4x1 + 5x2 = z (Max)

In this section we consider the particular case of a linear program (PP) in which a single variable, x1, is nonbasic:

   (PP)   A_1^1 x1 + x2          = b_1
          A_2^1 x1      + x3     = b_2       x_j ≥ 0   j = 1, 2, ..., m, m+1
          ...
          A_m^1 x1 + x_{m+1}     = b_m
          c^1 x1 = z (Max)

We can also consider that x2, ..., x_{m+1} are slack variables (cf. Remark IV.4):
   (PPC)   A_1^1 x1 ≤ b_1
           A_2^1 x1 ≤ b_2
           ...                   x1 ≥ 0
           A_m^1 x1 ≤ b_m
           c^1 x1 = z (Max)

The domain of feasible solutions of (PPC) is a part of the half line x1 ≥ 0. Recall that, from (2), b_i ≥ 0 for i = 1, 2, ..., m. Thus if A_i^1 ≤ 0, the i-th inequality does not play any role in the definition of the set of feasible x1's (if A_i^1 = 0, the i-th constraint is always satisfied; if A_i^1 < 0, the i-th constraint is always satisfied for x1 ≥ 0).

It is thus sufficient, to determine V, the domain of feasible solutions of (PPC), to restrict one's attention to the constraints of

   (5)   I = {i : A_i^1 > 0}

Two cases arise:

   (a)  If I = ∅,  V = {x1 | 0 ≤ x1}
   (b)  If I ≠ ∅,  V = {x1 | 0 ≤ x1 ≤ Min_{i∈I} [b_i / A_i^1]}

Let the minimum in (b) be attained for index r (see Theorem IV.2). The reader will check that after the pivot operation (PP) writes
   (PP')   x1, expressed from the r-th equation, and the remaining slack variables written in terms of x_{r+1}; the right-hand sides become b_i - b_r A_i^1 / A_r^1 (i ≠ r) and b_r / A_r^1.

Note that a segment or a half line are examples of convex polyhedra. We check on this example that the optimal solution corresponds to a vertex of the polyhedron.
   (PP)   Ax = b    x ≥ 0
          cx = z (Max)

and the mapping "col" is updated by setting col(r) := s.

Proof: This theorem is just a synthesis of what has been proved in this section.
2. Solving an Example

Let us again consider

   (P1)   2x1 +  x2 + x3           = 8
           x1 + 2x2      + x4      = 7       x_j ≥ 0
                 x2           + x5 = 3       j = 1, 2, ..., 5
          4x1 + 5x2 = z (Max)

Setting x1 = 0, we obtain a program in x2 alone:

   (PP(J, 2))    x2 + x3 = 8
                2x2 + x4 = 7       x2, x3, x4, x5 ≥ 0
                 x2 + x5 = 3
                5x2 = z (Max)
Increasing x2 up to Min[8/1, 7/2, 3/1] = 3 and pivoting gives

   (P'1)   2x1      + x3      -  x5 = 5
            x1           + x4 - 2x5 = 1       x_j ≥ 0
                 x2           +  x5 = 3       j = 1, 2, ..., 5
           4x1 - 5x5 = z (Max) - 15

Setting x5 = 0, we obtain

   (PP'(J', 1))   2x1 + x3 = 5
                   x1 + x4 = 1       x1, x2, x3, x4 ≥ 0
                        x2 = 3
                  4x1 = z (Max) - 15

Increasing x1 up to Min[5/2, 1/1] = 1 and pivoting, the objective becomes z = 15 + 4 = 4x1 + 5x2 = 19, and

   (P''1)        x3 - 2x4 + 3x5 = 3
            x1      +  x4 - 2x5 = 1       x_j ≥ 0
                 x2       +  x5 = 3       j = 1, 2, ..., 5
           -4x4 + 3x5 = z (Max) - 19

Setting x4 = 0, we obtain

   (PP''(J'', 5))        x3 + 3x5 = 3
                    x1      - 2x5 = 1       x_j ≥ 0
                         x2 +  x5 = 3       j = 1, 2, 3, 5
                   3x5 = z (Max) - 19

Increasing x5 up to Min[3/3, 3/1] = 1 and pivoting on the first row gives

   (P'''1)        (1/3)x3 - (2/3)x4 + x5 = 1
             x1 + (2/3)x3 - (1/3)x4      = 3       x_j ≥ 0
                  x2 - (1/3)x3 + (2/3)x4 = 2       j = 1, 2, ..., 5
            -x3 - 2x4 = z (Max) - 22
Geometric Interpretation: Let us draw in plane (x1, x2) the domain of feasible solutions of (P1). It is the convex polygon ABCDE of Figure V.1.

Figure V.1: The Domain of Feasible Solutions of (P1)

To start with, the problem was written in canonical form with respect to the basis J = {3, 4, 5}. The corresponding basic solution is x1 = x2 = 0: we are at A. Then x2 increases while x1 remains equal to 0; the maximum value for x2 is 3 (if we want to stay in V): we have traveled along segment AB. At B, x5 = 0, since we are limited by constraint III and since x5 measures how far we are from the bound on this constraint. Point B corresponds to the basic solution associated with J'. We then let x1 increase, leaving x5 = 0. The maximum possible value for x1 is 1: we describe segment BC. At C, x4 = 0 (since we are limited by constraint II) and the solution is the basic solution associated with J''. We decide to increase x5, moving away from constraint III, but leaving x4 = 0: we describe segment CD. D corresponds to the basic solution associated with J''', the optimal solution.

The reader is invited to solve (P1) again with the other choice (letting index 1 first enter the basis). He will then check that two basis changes are sufficient instead of three.

Remark 4: Let us describe the main steps of the process that ended in the solution of (P1): an index s with c^s > 0 is chosen; the nonbasic variables other than x_s stay equal to 0 (x_k = 0 for k ∉ J, k ≠ s); and x_s is increased up to

   (6)   Min_{i∈I} [b_i / A_i^s]       where I = {i : A_i^s > 0}
(This minimum may be reached for more than one index; r is one of them.) The new basic solution is

   x'_s = b_r / A_r^s
   x'_k = b_i - A_i^s x'_s     for k = col(i)
   x'_j = 0                    for the other indices j

(k) In performing the operation PIVOT(m+1, n+1, r, s; M) on the matrix M of coefficients of linear program (P), we obtain the matrix of coefficients of this linear program written in canonical form with respect to basis J̄ = J ∪ {s} \ {col(r)}. Mapping "col" is updated by setting

   col(r) := s

REPEAT the following procedure until either an optimal basis is obtained, or a set of feasible solutions for which z is unbounded is shown to exist.

   STEP 4: Perform the pivot operation (defined by row r and column s) on the matrix of coefficients of linear program (P). After this pivot operation, (P) is written in canonical form with respect to J := J ∪ {s} \ {col(r)}. Let col(r) := s.

END REPEAT

Remark 7: The validity of the operations of the simplex algorithm has been proved in preceding developments. It remains to prove that the algorithm is finite. This will be done in Section 4.
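The complete loop (Steps 1-4) can be sketched compactly. The following is a minimal tableau implementation using the Max-c entering rule of Remark 8; the function name `simplex` and the tableau layout (cost row last, right-hand side in the last column, `col[i]` the basic column of row i) are our own conventions.

```python
from fractions import Fraction

def simplex(T, col):
    """Simplex algorithm on a tableau T already in canonical form with
    respect to a feasible basis: T has m+1 rows (the last is the cost
    row c | -zeta) and n+1 columns.  Returns ('optimal', x, z) or
    ('unbounded', None, None)."""
    m, n = len(T) - 1, len(T[0]) - 1
    while True:
        # Step 1: entering column (largest reduced cost)
        s = max(range(n), key=lambda j: T[m][j])
        if T[m][s] <= 0:
            x = [Fraction(0)] * n
            for i in range(m):
                x[col[i]] = T[i][n]
            return 'optimal', x, -T[m][n]
        # Step 2: unboundedness test
        I = [i for i in range(m) if T[i][s] > 0]
        if not I:
            return 'unbounded', None, None
        # Step 3: leaving row by the minimum-ratio rule (6)
        r = min(I, key=lambda i: Fraction(T[i][n]) / T[i][s])
        # Step 4: pivot on T[r][s]
        piv = Fraction(T[r][s])
        T[r] = [Fraction(v) / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][s] != 0:
                f = Fraction(T[i][s])
                T[i] = [Fraction(T[i][j]) - T[r][j] * f for j in range(n + 1)]
        col[r] = s
```

Run on (P1), this reproduces the sequence of tableaus shown below and ends with the optimal solution x = (3, 2, 0, 0, 1), z = 22.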
   K = {i ∈ I : b_i / A_i^s = Min_{i∈I} [b_i / A_i^s]}

A classical criterion for Step 1 is to choose

   (7)   s such that c^s = Max_j c^j

This choice is not necessarily good (as program (P1) exemplifies; see Remark 3); its justification lies in the fact that it gives the greatest variation of the objective function by unit increase of the variable.

Another argument consists of associating with each s such that c^s > 0 a line index r(s), the index on which we would perform the pivot operation in case index s would be chosen. The corresponding increase of the objective function would then be

   (8)   c^s b_{r(s)} / A_{r(s)}^s

Remark 9: Simplex algorithms (as any other algorithm) are composed of conditional instructions (Steps 1 and 2) and of operations (Steps 3 and 4). To perform any of those, we just need matrix M of coefficients of the linear program, which is transformed at each iteration by pivoting on element A_r^s. Thus the solution of a linear program by the simplex algorithm can be presented by giving the sequence of tableaus of M. We present now, under this compact form, the solution of (P1) as it has been obtained in Section 2:
   x1  x2   x3    x4   x5 |   b       (P1) is in canonical form with respect to:
    2   1    1     0    0 |   8
    1   2    0     1    0 |   7
    0   1    0     0    1 |   3  <-          J
   ---------------------------------
    4   5*   0     0    0 |   0

    2   0    1     0   -1 |   5
    1   0    0     1   -2 |   1  <-
    0   1    0     0    1 |   3              J'
   ---------------------------------
    4*  0    0     0   -5 | -15

    0   0    1    -2    3 |   3  <-
    1   0    0     1   -2 |   1
    0   1    0     0    1 |   3              J''
   ---------------------------------
    0   0    0    -4    3*| -19

    0   0   1/3  -2/3   1 |   1
    1   0   2/3  -1/3   0 |   3
    0   1  -1/3   2/3   0 |   2              J'''
   ---------------------------------
    0   0   -1    -2    0 | -22
Remark 10: The reader is invited to verify that to solve the linear program

   (P')   Ax = b    x ≥ 0
          cx = z (Min)

the only point to change in the simplex algorithm is Step 1, which becomes:

   Step 1: Choose an s such that c^s < 0. If such an s does not exist, basis J is optimal. STOP.
Definition 1: The basic solution associated with the feasible basis J of the linear program

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

is called "degenerate" if at least one of the basic variables is equal to 0.

In Step 3, let

   K = {i ∈ I : b_i / A_i^s = Min_{i∈I} [b_i / A_i^s]}

and choose r ∈ K. If |K| > 1, consider k ∈ K, k ≠ r. We will have, after pivoting,

   b̄_k = 0

i.e., the new basic solution is degenerate. Conversely, if the starting solution is nondegenerate and if |K| = 1 at each iteration, it is easy to see that the solution will remain nondegenerate. Thus degeneracy is closely related to the fact that |K| > 1.

Proof: From one iteration to the next, the value of ζ (which is the value of the objective function associated with the current basic solution) increases by

   c^s b_r / A_r^s
Remark 12: Until 1951 (the simplex algorithm was found in 1947), it was not known whether it was possible that the algorithm (because of cycling among degenerate solutions) was nonterminating. In 1951, Hoffman proposed an example where the systematic choice of the first row in case of degeneracy would lead to cycling. Beale (1955) provided a simpler example. The following one is an adaptation by V. Chvátal of a case proposed by K. T. Marshall and J. W. Suurballe:

   0.5x1 - 5.5x2 - 2.5x3 + 9x4 + x5      = 0
   0.5x1 - 1.5x2 - 0.5x3 +  x4      + x6 = 0       x_j ≥ 0
    x1                              + x7 = 1       j = 1, 2, ..., 7
   10x1 - 57x2 - 9x3 - 24x4 = z (Max)

Most practical problems are degenerate. However, the occurrence of cycling in real problems is most exceptional (Ph. Wolfe reported in 1963 having come across such a case). This is why most computer codes presently used to solve linear programs do not include sophisticated routines, such as the one that we are about to present, to avoid cycling.

But from a theoretical standpoint, and especially because we use the simplex algorithm as a mathematical tool to prove theorems (see Chapter VI), we must make sure that some rule of choice for the pivot row in case of degeneracy will lead to a solution in a finite number of steps. The "perturbation technique" that we are about to present is due to Charnes [1952]. The "smallest subscript" rule, proposed by R. G. Bland (1977), also prevents cycling: in Step 1, choose the smallest index s such that c^s > 0, and in Step 3, among the rows attaining the minimum (6), choose the one whose basic variable col(i) has the smallest index.
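The smallest-subscript rule just mentioned can be sketched as a pivot-selection routine (the name `bland_pivot` and the tableau conventions — cost row last, `col[i]` the basic column of row i — are our own; the routine assumes the chosen column admits a pivot, i.e., that z is not unbounded in that direction):

```python
from fractions import Fraction

def bland_pivot(T, col):
    """Bland's smallest-subscript rule on a canonical tableau T:
    entering column s is the *smallest* index with positive reduced
    cost; among rows attaining the minimum ratio, the one whose basic
    variable col(i) is smallest leaves.  Returns (r, s), or None when
    the current basis is optimal."""
    m, n = len(T) - 1, len(T[0]) - 1
    s = next((j for j in range(n) if T[m][j] > 0), None)
    if s is None:
        return None                          # optimal basis
    I = [i for i in range(m) if T[i][s] > 0]
    ratio = min(Fraction(T[i][n]) / T[i][s] for i in I)
    r = min((i for i in I if Fraction(T[i][n]) / T[i][s] == ratio),
            key=lambda i: col[i])
    return r, s
```

On the initial tableau of (P1), Bland's rule picks column 1 (the first positive reduced cost) rather than column 2 (the largest), illustrating how it differs from the Max-c criterion (7).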
Consider the linear program

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

written in canonical form with respect to some feasible basis J. We associate with (P) the linear program

   (P(ε))   A_i x = b_i + ε^i    i = 1, 2, ..., m,   x ≥ 0
            cx = z (Max)

A polynomial in ε now appears in each right-hand side. If not all coefficients of such a polynomial Λ(ε) are zero, Λ(ε) keeps a constant sign in an open interval (0, h) for h > 0 small enough. This sign is that of the nonzero coefficient of lowest index k. If this sign is +, we will say that polynomial Λ(ε) is "positive" and we write

   Λ(ε) ≻ 0

Given polynomials Λ'(ε) and Λ''(ε), we will say that Λ'(ε) is "greater than" Λ''(ε) and we note that

   Λ'(ε) ≻ Λ''(ε)

if

   Λ'(ε) - Λ''(ε) ≻ 0
4. Finiteness of the Simplex Algorithm
Remark 13: The relation ≻ is a total order on polynomials (see Exercise 11). Note that if Λ(ε) is a positive polynomial, then Λ(0) ≥ 0.
Remark 14: Consider an m-column vector b(ε), each component of which is a polynomial b_i(ε) of degree m in ε:

   (9)   b_i(ε) = b_i^0 + b_i^1 ε + ... + b_i^m ε^m

The coefficients b_i^k can be arranged in the matrix

   Q = [ b_1^0  b_1^1  ...  b_1^m ]
       [ b_2^0  b_2^1  ...  b_2^m ]
       [   .      .           .   ]
       [ b_m^0  b_m^1  ...  b_m^m ]

If the vector b(ε) is multiplied on the left by a matrix B, the matrix of coefficients of the resulting vector of polynomials is

   Q̄ = BQ

Similarly, ζ̄ is the (m+1)-row vector of coefficients of the objective value, all coefficients of which are equal to 0 at the beginning.

Remark 18: At each iteration of the simplex algorithm applied to the perturbed problem (P(ε)), the value of the objective function corresponding to the basic solution associated with the current feasible basis is

   ζ̄^0 + ζ̄^1 ε + ζ̄^2 ε^2 + ... + ζ̄^m ε^m

Remark 19: We now have a procedure that solves any linear program (written in canonical form with respect to a feasible basis) in a finite number of iterations.
Remark 20: What the artifact of "perturbed program (P(ε))" and the definition of "positive polynomials" in fact brings is a more sensitive way of comparing rows in case of a tie in the initial linear program (since the constant term of polynomials b_i(ε) plays a dominant role in the comparison rule).

This method can be presented differently in terms of lexicographic ordering. See Exercise 13.
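The comparison of perturbation polynomials is exactly a lexicographic comparison of their coefficient vectors. A minimal sketch (function names are ours):

```python
def is_positive(coeffs):
    """A polynomial in epsilon, given by its coefficient list (lowest
    degree first), is 'positive' when its nonzero coefficient of
    lowest index is positive."""
    for a in coeffs:
        if a != 0:
            return a > 0
    return False                 # identically zero: not positive

def poly_greater(p, q):
    """p(eps) is 'greater than' q(eps) when their difference is
    positive -- the total order of Remark 13."""
    n = max(len(p), len(q))
    diff = [(p[k] if k < len(p) else 0) - (q[k] if k < len(q) else 0)
            for k in range(n)]
    return is_positive(diff)
```

Comparing two rows of the perturbed problem with `poly_greater` is what Exercise 13 calls comparing the corresponding coefficient vectors lexicographically.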
EXERCISES

1. Solve the linear program

   4x1 + 4x2 + 4x3 +  x4 ≤ 44
   8x1 + 6x2 + 4x3 + 3x4 ≤ 36       x_i ≥ 0   i = 1, 2, 3, 4
   5x1 +  x2 + 6x3 + 2x4 = z (Max)

Compare your result with what was obtained in solving Exercise IV.5.

2. Consider the linear program

   (P)   2x1 +  x2 ≤ 5
          x1 -  x2 ≤ 2       x1, x2 ≥ 0
          x1 +  x2 ≤ 3
         3x1 + 2x2 = z (Max)

(a) Write the dual (D) of (P) and give a graphic solution of (D).
(b) Solve (P) by the simplex algorithm and check the solution of (D).
(c) Call E the matrix of coefficients of the slack variables in the canonical form relative to the optimal basis. Check that E = (A^J)^{-1} and explain why.

3. Solve

   -x1 + 6x2 ≤ 54
    x1 + 2x2 ≤ 14       x1, x2 ≥ 0
   3x1 -  x2 ≤  9
   3x1 +  x2 = z (Max)

using the simplex algorithm.
Solve also the linear programs

    x1 +  x2 ≤ …
   4x1 - 3x2 ≤ …
  -3x1 + 2x2 ≤ …       x1, x2 ≥ 0
    x1 +  x2 ≤ 5

and

   2x + y ≤ 3
  -2x + y ≤ …          x, y ≥ 0

and check that in the second case the maximum of x + y equals 5/2.
9. Let

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

10. Solve

    x1 +  4x2 +  2x3 + 3x4 ≤ 20
   2x1 +   x2 +  3x3 +  x4 ≤  6       x_i ≥ 0
   7x1 + 11x2 + 12x3 + 9x4 = z (Max)

11. Show that if Λ(ε) is not identically 0 and is not positive, then -Λ(ε) is positive. Show that the sum of two positive polynomials is a positive polynomial. Show that the relation ≻ is a total order on polynomials.

12. Let (P) be a linear program written in canonical form with respect to a feasible basis J_0. Let G = (X, U) be the directed graph defined in the following way:
13. Indices of components of vectors of R^k are supposed to be ordered (take, for instance, "natural order" 1 < 2 < ... < k). We define the "lexicographic" order on vectors of R^k in the following way: a ∈ R^k is ℓ-positive if a ≠ 0 and if the first nonzero component of a is positive. We note that

   a ≻ 0

and a ≻ b if a - b is ℓ-positive.

(a) Show that the order in which the words are written in a dictionary is the lexicographic order.
(b) Given a linear program written in canonical form with respect to a feasible basis J:

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

consider

   ĀX = B̄       X_j ≥ 0    j = 1, 2, ..., n
   cX = Z (ℓ-Max)
Chapter VI. The Two Phases of the Simplex Method

Consider the linear program

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

However, if conditions (1) and (2) are satisfied but not condition (3), we pose

   γ_i = c^{col(i)}     i = 1, 2, ..., m

and replace (P) by the equivalent linear program

   (P')   Ax = b    x ≥ 0
          (c - γA)x = z (Max) - γb
Phase I applies to the linear program

   (P)   Ax = b    x ≥ 0
         cx = z (Max)

where

   (2)   b ≥ 0

(if b_i < 0, multiply the i-th equation by -1). We define the auxiliary linear program

   (PA)   Ax + Uv = b      x, v ≥ 0
          Σ_{i=1}^{m} v_i = ψ (Min)
Remark 2: The i-th constraint of (PA) is written

   A_i x + v_i = b_i

i.e., v_i measures the difference between the right-hand side b_i and A_i x. When all artificial variables are equal to 0, x, the feasible solution of (PA), is thus a feasible solution of (P).
Since v = b - Ax, the auxiliary program can also be written

   (PA')   Ax + Uv = b      x, v ≥ 0
           -eAx = ψ (Min) - eb

where e = (1, 1, ..., 1). Since v_i ≥ 0 for i = 1, 2, ..., m, we have ψ ≥ 0 and

   (4)   Σ_{i=1}^{m} v_i = 0   ⟺   v_i = 0   for i = 1, 2, ..., m
Theorem 3: Let

   (PA)   Āx + B̄v = b̄      x, v ≥ 0
          c̄x + d̄v = ψ (Min) - ψ̄

be the linear program (PA) written in canonical form with respect to an optimal basis (which we know to exist from Theorem 1). Then if

   Ā_i = 0,   b̄_i = 0

and v_r is the basic variable in the i-th row, the r-th constraint of (P) (i.e., A_r x = b_r) was redundant.
Proof: Canonical form (PA) has been obtained through a sequence of pivot operations. The i-th equation of (PA) can thus be considered as a linear combination of the equations of the linear system

   Ax + Uv = b

the coefficient of the r-th equation in this linear combination being different from 0. The very same linear combination, applied to the linear system

   Ax = b

shows that if b̄_i = 0, the r-th equation is a linear combination of the other equations: the r-th constraint was redundant. We can thus suppress the r-th equation of (PA).
Definition 2: The solution of linear program (PA), followed by the expulsion of artificial basic variables according to the process described in the proof of Theorem 4, is called "phase I of the simplex method." After phase I has been accomplished, the linear program (P) itself is written in canonical form with respect to a feasible basis (if a feasible solution exists) and the simplex algorithm proper (phase II) can be applied.

Remark 4: We can also take

   Ax + Uv = b
   cx = z (Max)
   Σ v_i = ψ (Min)

as the auxiliary problem, in which z is a nonconstrained basic variable (and thus will never be a candidate to leave the basis). If this program is solved instead of (PA), then at the end of phase I the linear program is written in canonical form with respect to a feasible basis ((1), (2), and (3) are satisfied) and the operations described in Remark 1 need not be performed. This is what is generally done.
Remark 5: If (P) contains a variable, say x1, which is not constrained to be positive or zero, we can go back to the general case using the method of Chapter I (see Remark 1.10). This is correct but not very clever. We will prefer expressing x1 as a function of the other variables in an equation where x1 has a nonzero coefficient, replacing x1 by this value in the other equations and in the objective function, and solving the reduced linear program thus obtained. Finally, the value of x1 for the optimal solution is computed from the expression giving x1 (see Exercise 3). If more than one variable is unconstrained, this procedure is extended naturally.

To maintain the sparsity of matrix A, it might be advantageous not to eliminate x1 but to consider it as a basic variable which, once entered in the basis, will never be a candidate to leave (the same argument as in Remark 4; see Chapter VII).

Remark 6: In linear program (P), it may happen that a variable, say x_s, is contained in only one equation, say the r-th equation. Then if A_r^s b_r > 0, it is not necessary (and in fact, it is rather clumsy) to introduce an artificial variable in that equation: x_s can serve as the basic variable for row r.
Example: Consider

   (P_a)   x1 + 2x2 +  x3      =  2
           x1 +  x2 + 5x3      = 12       x1, x2, x3, x4 ≥ 0
           x1 + 2x2 + 6x3 + x4 = 13
           x1 +  x2 +  x3 + x4 = z (Min)

We need only add artificial variables for the first and second equations; x4 will be the basic variable associated with the third equation (see Remark 6). The auxiliary problem is written

   (PA_a)   x1 + 2x2 +  x3      + v1      =  2
            x1 +  x2 + 5x3           + v2 = 12       x_i ≥ 0   i = 1, 2, 3, 4
            x1 + 2x2 + 6x3 + x4           = 13       v1, v2 ≥ 0
            v1 + v2 = ψ (Min)

Eliminating the basic variables v1 and v2 from the objective row gives

   -2x1 - 3x2 - 6x3 = ψ (Min) - 14
Phase I then proceeds by a sequence of pivot operations (pivot elements starred in the tableaus), at the end of which ψ = 0 and (P_a) is written in canonical form with respect to a feasible basis, the objective row reading 9x4 = z(Min) - 12.
2. Results That Can Be Proved by the Simplex Method
Here the initial basis found at the end of phase I is also optimal; i.e., phase II is finished at the same time as phase I. This is just due to good luck.
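The phase-I process described above can be sketched end to end: artificial variables form the starting basis, their sum is minimized, and feasibility of (P) is decided by whether ψ reaches 0. A self-contained sketch (the name `phase_one` and the tableau conventions are ours; entering and leaving choices follow Bland's rule so the loop terminates):

```python
from fractions import Fraction

def phase_one(A, b):
    """Phase I sketch: solve (PA)  Ax + Uv = b, sum(v) = psi(Min) with
    the artificial variables v as the starting basis.  Rows with
    b_i < 0 are first multiplied by -1 so that (2) b >= 0 holds.
    Returns a feasible x for Ax = b, x >= 0, or None if psi_min > 0."""
    m, n = len(A), len(A[0])
    T = []
    for i in range(m):
        row = [Fraction(v) for v in A[i]] + [Fraction(b[i])]
        if row[-1] < 0:
            row = [-v for v in row]
        T.append(row)
    # Maximizing -psi: reduced costs over the x-columns start as the
    # column sums of the constraint rows (see (PA')).
    cost = [sum(T[i][j] for i in range(m)) for j in range(n + 1)]
    col = [n + i for i in range(m)]          # v_i basic in row i
    while True:
        s = next((j for j in range(n) if cost[j] > 0), None)   # Bland: smallest s
        if s is None:
            break
        I = [i for i in range(m) if T[i][s] > 0]
        r = min(I, key=lambda i: (T[i][-1] / T[i][s], col[i]))  # ties: smallest basic index
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [T[i][j] - T[r][j] * f for j in range(n + 1)]
        f = cost[s]
        cost = [cost[j] - T[r][j] * f for j in range(n + 1)]
        col[r] = s
    if any(col[i] >= n and T[i][-1] != 0 for i in range(m)):
        return None                          # psi_min > 0: (P) infeasible
    x = [Fraction(0)] * n
    for i in range(m):
        if col[i] < n:
            x[col[i]] = T[i][-1]
    return x
```

When `phase_one` succeeds, the final tableau is (P) written in canonical form with respect to a feasible basis, ready for phase II.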
Theorem 6: If two dual programs (P) and (D) both have a feasible solution, they both have an optimal solution and the values of the objective function for the optimums are equal.
   (P)   Ax ≤ b    x ≥ 0        (D)   yA ≥ c    y ≥ 0
         cx = z (Max)                 yb = w (Min)

Consider the linear program

   (P̄)   yA ≤ 0
         yb = z (Max)

(P̄) has a feasible solution y = 0. By assumption, the objective function is bounded by 0. From Theorem 5, (P̄) has an optimal basic solution. From the corollary of Theorem IV.3, its dual has an optimal (and thus a feasible) solution. The dual of (P̄) is

   Ax = b    x ≥ 0
   0x = w (Min)
Theorem 8 (Theorem of the Alternatives): One and one only of the two following systems of constraints has a solution:

   (I)   Ax = b        (II)   yA ≤ 0
         x ≥ 0                yb > 0

If system (II) has no solution, the hypotheses of Theorem 7 are fulfilled and thus (I) has a solution. Conversely, if both had solutions, we would have 0 < yb = y(Ax) = (yA)x ≤ 0, a contradiction.

Remark 8: Other theorems are presented as exercises. Their proofs follow the same line.
EXERCISES
1. Solve the linear program

x1 + x2 >= 2
(P)   -x1 + x2 >= 3        x1, x2 >= 0
x1 >= 4
3x1 + 2x2 = z(Min)
x1 + x2 + 2x3 <= 6
(P)   2x1 - x2 + 2x3 <= 2
2x2 + x3 = z(Max)
106 Chapter VI. The Two Phases of the Simplex Method
- xl + x + x x
2 3 4
>
2x + 7x + x + 8x 7 xl < 0
l 2 3 4
(P)
2x l + 10x + 2x + 10x4 10 x ,x ,x > 0
2 3 2 3 4
2x l + l 8x + SX + l 4x z(~l ax)
2 3 4
2x + x - x > 4
l 2 3
- 3x > 2
2
( P) \ " I x x x .:: 0
l, 2, 3
- 3xl + 2x + x3 > 3
2
\ - xl + 3x
2
3x
3
z( Max)
x1 - x2 >= 1
(P)   2x1 - x2 >= 6
x1 + x2 = z(Min)

x1 - x2 - x3 + x5 = 1
(PA)  2x1 - x2 - x4 + x6 = 6        x_i >= 0, i = 1, ..., 6
x1 + x2 + Mx5 + Mx6 = z(Min)
x1 + 3x2 - 5x3 = 10
x1 - 4x2 - 7x3 = -1
x1 >= -3,  x2 >= 2,  x3 >= -1
aZ + bZ + ZcZ 8
Z Z Z
18a + Il b + 10c 31
+ ZX ZX + x4 0
1 Xl Z 3
- xl + Zx + x + ZX4 Zl
Z 3
(I) { Ax = 0, x > 0 }        (II) { uA >= 0, uA != 0 }

(I) { Ax + By + Cz = 0; x, y >= 0, x != 0 }  (A nonempty)        (II) { uA > 0; uB >= 0; uC = 0 }

(I) { Ax + By + Cz = 0; x > 0, y >= 0 }  (A nonempty)        (II) { uA >= 0, uA != 0; uB >= 0; uC = 0 }
(a)  x1 + 3x2 - 5x3 = 2
     x1 - 4x2 - 7x3 = 3

(b)  6x1 - 5x2 >= 7
     -2x1 - 7x2 >= 2
     -x1 + 3x2 <= -1

(c)  -3x1 - 2x2 + x3 + 2x4 = 5
     -2x1 - x2 + 3x3 + 5x4 = 27
11. Prove that if B is a square antisymmetric matrix, the following system has a solution:

Bx >= 0,  x >= 0,  Bx + x > 0

Prove that

-Bx <= 0,  x >= 0,  -Bx - Ux <= -e

has a solution.
If x^T A x > 0 for all x, then the system

Ax >= 0,  x >= 0,  x != 0

has a solution.
11 0 Chapter VII. Computational Aspects of the Simplex Method
2 SUM_{j=1}^{i-1} 10^{i-j} x_j + x_i <= 100^{i-1},    i = 1, 2, ..., n
x_j >= 0,    j = 1, 2, ..., n
SUM_{j=1}^{n} 10^{n-j} x_j = z(Max)
which requires 2^n - 1 iterations of the simplex algorithm when the criterion to choose the entering variable is c^s = Max_j [c^j]. Actually, examples can be constructed that beat any rule. But these examples are of an academic nature. On "current" (note the lack of rigorous meaning of this term) problems, the algorithm works very well.
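The construction above can be written down concretely. The sketch below (function name is mine, not the book's) builds the constraint data for the n-dimensional instance and checks the known optimal point, x = (0, ..., 0, 100^(n-1)):

```python
def klee_minty(n):
    """Data for the Klee-Minty cube (maximize c.x subject to A x <= b, x >= 0):
    constraint i (0-indexed) reads 2*sum_{j<i} 10^(i-j) x_j + x_i <= 100^i,
    and the objective is sum_j 10^(n-1-j) x_j."""
    A = [[2 * 10 ** (i - j) if j < i else (1 if j == i else 0) for j in range(n)]
         for i in range(n)]
    b = [100 ** i for i in range(n)]
    c = [10 ** (n - 1 - j) for j in range(n)]
    return A, b, c

# The optimal value is 100^(n-1); a poor entering-variable rule makes the
# simplex visit all 2^n vertices of this slightly squashed cube first.
A, b, c = klee_minty(4)
x = [0, 0, 0, 100 ** 3]
feasible = all(sum(A[i][j] * x[j] for j in range(4)) <= b[i] for i in range(4))
```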
Recently [1979], Khachian proposed an algorithm that solves linear programs using a number of elementary operations (such as additions, comparisons, and multiplications), which is bounded by a polynomial in a quantity that expresses the size of the program. This beautiful mathematical result is very likely to be of moderate practical use. Although it is polynomial, Khachian's algorithm seems to be far less efficient than the simplex to solve most -- if not all -- linear programs.
2. Numerical Pitfalls
Using the first equation to eliminate x1 and rounding off intermediate results to three significant digits, we obtain
0,
will have no precise meaning. Thus in computer codes, zero must be defined through two small positive numbers, t1 and t2, called "zero tolerances." A scalar a is considered to be equal to zero if
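In code, such a tolerance test might look as follows (the tolerance value here is an illustrative choice, not one prescribed by the text):

```python
T1 = 1e-9  # "zero tolerance": an illustrative value, machine- and problem-dependent

def is_zero(a, tol=T1):
    """Treat a scalar as zero when its magnitude is below the tolerance."""
    return abs(a) <= tol

# Floating-point residue that is mathematically zero but not exactly 0.0:
residue = 0.1 + 0.2 - 0.3
```

Comparing `residue == 0` fails, while the tolerance test gives the intended answer.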
(P) { Ax = b; x >= 0; cx = z(Max) }
We let
(1) A(J) = (A^J)^-1 A    (we have (A(J))^J = U, the unit matrix)
(2) b(J) = (A^J)^-1 b
(3) pi(J) = c^J (A^J)^-1
(4) c(J) = c - pi(J) A
(5) z(J) = pi(J) b
Section 3. Revised Simplex Algorithm 113
(1') (A(J))^s = (A^J)^-1 A^s
J := J U {s} \ {col(r)}
Remark 2: The main advantage of the revised simplex algorithm is not so much a reduction in the number of operations (see Exercise 1) as the fact that round-off errors cannot propagate since we always work with the initial matrix of data A.
Linear programs in which n (number of variables) is much larger than m (number of equations) are frequently met. It may then happen that matrix A (or A(J)) cannot be held in the central memory of the computer, whereas A^J (or (A^J)^-1) is of sufficiently limited size to be contained in the fast memory. It suffices then to store on a peripheral device matrix
(6) (A^Jbar)^-1 = P(r,s) (A^J)^-1
where P(r,s) is the pivot matrix (cf. Section III.4), which has a single nontrivial column. It is easy to check that (6) can be performed with m^2 additions and m^2 multiplications.
If we call D1, D2, ..., Dq the q first pivot matrices and Jq the basis after the qth iteration, we have
(6') (A^{Jq})^-1 = Dq D_{q-1} ... D1
(b) The va l ue of q ,
J
Remark 4: There is a very simple way to avoid inversion of matrix A^J at each iteration without using (6) and (6'). It suffices to note that we do not really need (A^J)^-1, but that we can obtain what we are looking for (b(J), pi(J), (A(J))^s) by solving linear systems:
(2') A^J b(J) = b
(3') pi(J) A^J = c^J
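With a numerical library the solves of Remark 4 take a few lines. This is only a sketch of the idea (numpy-based, function name mine):

```python
import numpy as np

def revised_quantities(A, b, c, J):
    """Compute b(J), pi(J) and the reduced costs by solving linear systems
    with the basis matrix A^J, as in (2') and (3'), instead of inverting it."""
    B = A[:, J]                       # A^J: the columns of A indexed by the basis
    bJ = np.linalg.solve(B, b)        # (2')  A^J b(J) = b
    pi = np.linalg.solve(B.T, c[J])   # (3')  pi(J) A^J = c^J
    cbar = c - pi @ A                 # c(J) = c - pi(J) A : reduced costs
    return bJ, pi, cbar
```

For a well-conditioned basis this is both cheaper and numerically safer than maintaining an explicit inverse.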
(1'') A^J (A(J))^s = A^s
We now have three linear systems to solve. The loss† in number of operations is substantial since when we had (A^J)^-1, we needed only to perform matrix multiplications (2), (3), (1'). But some advantages can compensate for this loss:
Linear program (P) is not written in canonical form with respect to a feasible basis, but a feasible basis J is known.
J
(3' ) c
(1 II)
(2') b
and let I = { i | (A(J))_i^s > 0 }  (I != EMPTY because of Step 2).
Choose an r such that
†Note that when (A^J)^-1 is used, there is a gain with respect to the simplex algorithm.
STEP 4: Let
J := J U {s} \ {col(r)}
col(r) := s
END REPEAT
550
x
2
+ x
5
300
- 5x l - 6x2 - 3x - 3x 5x - 4x z (Max)
3 4 5 6
r ~~J
l
o
~l
1 234 °0 0° ( - 5 , - 6 , - 3, - 4 )
(n .n . ' . " i,
o
We get
pi^1 = -6,  pi^2 = -4,  pi^3 = 1,  pi^4 = 0
c^3 = -3 + 6 = 3;  c^5 = -5 + 4 = -1;  c^1 = c^2 = c^4 = c^6 = 0
3 is the index that "enters" the basis. We solve
1
[1
~
0
°
1
OJ
0
* b = ~3~1°
400
o ° ° 300
Section 4 . Linear Programs with Bounded variables 117
a nd
We get
A
b 250 b 300 b 150
l 2 3
A3 A3 A3
Al A2 0 A -1
3
(- 5 , -6, -3, -3 )
We get
pi^1 = -3,  pi^2 = -1,  pi^3 = -2,  pi^4 = -3
c^5 = -5 + 4 = -1;  c^6 = -4 + 1 = -3;  c^1 = c^2 = c^3 = c^4 = 0
(PB) { Ax = b;  alpha_j <= x_j <= beta_j, j = 1, 2, ..., n;  cx = z(Max) }
with alpha_j in R U {-oo},  beta_j in R U {+oo}.
Remark 5: The "usual" linear program under standard form (say (P) of Remark 1) is a special case of (PB) with
alpha_j = 0,    j = 1, 2, ..., n
beta_j = +oo,   j = 1, 2, ..., n

-oo < alpha_j, beta_j < +oo,    j = 1, 2, ..., n
We pose
x'_j = x_j - alpha_j
b' = b - A alpha

(PB') { Ax' = b - A alpha;  0 <= x' <= beta - alpha;  cx' = z(Max) - c alpha }

z(Max) - pi b
with Jbar = {1, 2, ..., n} \ J, pi solution of pi A^J = c^J, is the linear program (PB) "written in canonical form with respect to the basis J."
is called a "basic solution associated with basis J." Note that to a given basis, there correspond 2^(n-m) basic solutions in this context.
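These 2^(n-m) solutions can be enumerated directly: each nonbasic variable is pinned at its lower or upper bound and the basic variables solve the remaining system. A small sketch (names mine):

```python
import numpy as np
from itertools import product

def basic_solutions(A, b, J, alpha, beta):
    """Yield the 2^(n-m) basic solutions associated with basis J of (PB):
    every nonbasic variable sits at alpha_j or beta_j, and the basic
    variables are then determined by A x = b."""
    n = A.shape[1]
    N = [j for j in range(n) if j not in J]
    for bounds in product(*[(alpha[j], beta[j]) for j in N]):
        x = np.zeros(n)
        x[N] = bounds
        x[J] = np.linalg.solve(A[:, J], b - A[:, N] @ x[N])
        yield x
```

With one constraint and basis {x1}, the two nonbasic variables give 2^(3-1) = 4 basic solutions, as the count predicts.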
Theorem 1: A basic feasible solution associated with basis J is an optimal solution of (PB) if the following conditions are satisfied:
(7')  c_j - pi A^j > 0  ==>  x_j = beta_j
(7'') c_j - pi A^j < 0  ==>  x_j = alpha_j
z +
7Tb + +
z - z < 0
s s
If |c^s - pi A^s| = 0, relations (7') and (7'') are verified and, from Theorem 1, the present basic solution is optimal. Assume that we have c^s - pi A^s < 0 and x_s = beta_s (the case c^s - pi A^s > 0 and x_s = alpha_s is similar). The idea consists -- as in the simplex algorithm -- of having variable x_s decrease from its present value beta_s without changing the value of the other nonbasic variables and having the basic variables adjusted so that linear system Ax = b remains verified. The diminution of x_s will be limited at the first occurrence of one of the following events:
1. Express what happens when c^s - pi A^s > 0 (cf. Remark 7) and how the algorithm proceeds.
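The bounded decrease of x_s is computed by a two-sided ratio test. A sketch (my notation: abar_s = (A^J)^-1 A^s, so decreasing x_s by t moves each basic value from xB[i] to xB[i] + t*abar_s[i]):

```python
def max_decrease(xB, abar_s, loB, hiB, beta_s, alpha_s):
    """Largest amount t by which the nonbasic x_s may drop below beta_s:
    the move stops when a basic variable reaches one of its bounds, or
    when x_s itself reaches alpha_s."""
    t = beta_s - alpha_s                     # x_s can at most reach alpha_s
    for i in range(len(xB)):
        if abar_s[i] > 0:                    # basic value increases toward hiB[i]
            t = min(t, (hiB[i] - xB[i]) / abar_s[i])
        elif abar_s[i] < 0:                  # basic value decreases toward loB[i]
            t = min(t, (loB[i] - xB[i]) / abar_s[i])
    return t
```

Whichever limit is attained first decides whether the basis changes or x_s merely jumps to its other bound.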
Example: We will solve
6x1 + 5x2 + 3x3 + x4 + 2x5 = 16
0 <= x_j <= j,    j = 1, 2, ..., 5
13x1 + 2x2 + 9x3 + x4 + 5x5 = z(Max)
First Iteration: pi = 5/2

j               1        2       3       4      5
c^j - pi A^j   -2     -21/2     3/2    -3/2     0

Second Iteration: pi = 3

j               1        2       3       4      5
c^j - pi A^j   -5      -13       0      -2     -1
EXERCISES
1. Count the number of operations (additions, multiplications, comparisons) needed for one iteration of the simplex algorithm and for one iteration of the revised simplex algorithm.
2. Solve the linear program
x1 + x2 + x3 = 6
x4 + x5 + x6 = 4
x1 + x4 = 5        x_j >= 0, j = 1, 2, ..., 6
x2 + x5 = 3
x3 + x6 = 2
3x1 + x2 + 2x3 + 6x4 + 2x5 + 4x6 = z(min)
3. Write linear program (PB) of Section 4 in standard form without making the assumption that the bounds alpha_j, beta_j are finite.
4. Solve
8x1 + ... + 4x3 + x4 = 72
1 <= x1 <= 10
2 <= x2 <= 5
3 <= x3 <= 6
4 <= x4 <= 8
5x1 + x2 + x3 + 2x4 = z(Max)
using the method of Section 3.
(K) { a1x1 + a2x2 + ... + an xn <= b;  c1x1 + c2x2 + ... + cn xn = z(Max) }
where
(1) c1/a1 >= c2/a2 >= ... >= cn/an
(a) Show assumption (1) can be done "without loss of generality."
(c) Propose a very simple algorithm giving directly (without pivoting or iteration) an optimal solution of linear program (KB) obtained from (K) by adding constraints
(d) Check that the algorithm you found gives directly the solution of the problem solved as an example in Section 3.
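The algorithm asked for in (c) is presumably the greedy one: thanks to ordering (1), filling the variables by decreasing ratio c_j/a_j solves the bounded LP relaxation. A sketch (my naming, upper bounds ub_j assumed finite and a_j > 0):

```python
def greedy_bounded_knapsack(a, c, b, ub):
    """Greedy optimum of the LP: max c.x s.t. a.x <= b, 0 <= x_j <= ub_j.
    Fill variables in order of decreasing c_j / a_j until b is used up."""
    x = [0.0] * len(a)
    for j in sorted(range(len(a)), key=lambda j: c[j] / a[j], reverse=True):
        x[j] = min(ub[j], b / a[j])   # as much as the bound or the budget allows
        b -= a[j] * x[j]
    return x
```

On the bounded example solved earlier (a = (6,5,3,1,2), c = (13,2,9,1,5), b = 16, bounds x_j <= j) it fills x3 to its bound 3 and then x5 to 3.5, exhausting the right-hand side.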
The foll owin g sets are not convex :
Definition 2: Let p and q be two points of R^n; recall that x belongs to segment [pq] if and only if
(2) l(xq) = lambda l(pq),   l(px) = (1 - lambda) l(pq)    for some lambda, 0 <= lambda <= 1
For p = (1, 2)^T and q = (3, -1)^T this gives
x1 = lambda + 3(1 - lambda)
x2 = 2 lambda - (1 - lambda)        0 <= lambda <= 1
lambda = 1 gives x = p;  lambda = 1/4 gives x = (5/2, -1/4)^T;  lambda = 1/2 gives x = (2, 1/2)^T;  lambda = 0 gives x = q.
A relation
(3) c^1 x1 + c^2 x2 + ... + c^n xn = alpha    (c != 0)
126 Chapter VIII. Geometric Interpretation of the Simplex Method
defines a hyperplane. Note that in R^2 a hyperplane reduces to a straight line and in R^3 a hyperplane is a plane.
The inequality
(4) cx <= alpha
Proof: Let p and q both belong to the half space (4), i.e.,
(4')  cp <= alpha
(4'') cq <= alpha
Section 1. Convex Programming 127
x = lambda p + (1 - lambda) q,    0 <= lambda <= 1
c(lambda p + (1 - lambda) q) = lambda cp + (1 - lambda) cq <= lambda alpha + (1 - lambda) alpha = alpha
since 0 <= lambda and 0 <= 1 - lambda.
A_i x <= b_i,    i = 1, 2, ..., m
x_j >= 0,        j = 1, 2, ..., n
defines a half space.
Definition 6: The optimization problem (mathematical program) "Find the minimum of a convex function f over a convex set C" is a "convex program":
Min_{x in C} [f(x)]
It is not a convex function since the curve does not lie below the segment [pq]. In this example, x bar is a local minimum for f(x) but it is not a global minimum (x hat is one).
Remark 2: From Remark 1 we deduce that a linear program (which can always be written under the form of a minimization problem) is a convex program.
Let x bar be a local minimum and x hat a global minimum, i.e., f(x hat) < f(x bar). The point x_lambda = lambda x hat + (1 - lambda) x bar is a point in C for 0 < lambda < 1, since C is convex (see Figure VIII.3). Moreover,
f(x_lambda) <= lambda f(x hat) + (1 - lambda) f(x bar)
since f is a convex function. Now, for 0 < lambda < 1, and since f(x hat) < f(x bar), we have
f(x_lambda) < f(x bar)
which, for lambda small (x_lambda arbitrarily close to x bar), contradicts the fact that x bar is a local minimum.
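The chord criterion of this remark is easy to test numerically on an interval; a sampling sketch (not a proof of convexity, and the sample count is an arbitrary choice of mine):

```python
def lies_below_chord(f, p, q, samples=100):
    """Chord test on [p, q]: f can be convex there only if the curve lies on
    or below the segment joining (p, f(p)) and (q, f(q))."""
    for k in range(1, samples):
        lam = k / samples
        x = lam * p + (1 - lam) * q
        if f(x) > lam * f(p) + (1 - lam) * f(q) + 1e-12:
            return False
    return True
```

For f(x) = x^2 the test passes on any interval; for f(x) = -x^2 (the shape of the figure) it fails, exactly as the figure's caption says.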
-2x1 + x2 <= 1
x1 - 2x2 <= 5/2
(P)   x1 - x2 <= 3        x1, x2 >= 0
(1/2)x1 + x2 <= 3
x1 + 2x2 = z(Max)
x1     x2     x3    x4    x5    x6  |  b
-2      1      1     0     0     0  |  1     <-
 1     -2      0     1     0     0  |  5/2
 1     -1      0     0     1     0  |  3
1/2     1      0     0     0     1  |  3
---------------------------------------------
 1      2*     0     0     0     0  |  z

-2      1      1     0     0     0  |  1
-3      0      2     1     0     0  |  9/2
-1      0      1     0     1     0  |  4
5/2     0     -1     0     0     1  |  2     <-
---------------------------------------------
 5*     0     -2     0     0     0  |  z - 2

 0      1     1/5    0     0    4/5 |  13/5
 0      0     4/5    1     0    6/5 |  69/10
 0      0     3/5    0     1    2/5 |  24/5
 1      0    -2/5    0     0    2/5 |  4/5
---------------------------------------------
 0      0      0     0     0    -2  |  z - 6
      x1 = 0            x1 = 0            x1 = 4/5
      x2 = 0            x2 = 1            x2 = 13/5
      x3 = 1            x3 = 0            x3 = 0
(5)   x4 = 5/2   (5')   x4 = 9/2   (5'')  x4 = 69/10
      x5 = 3            x5 = 4            x5 = 24/5
      x6 = 3            x6 = 2            x6 = 0
      z  = 0            z  = 2            z  = 6
We can represent in the x1x2-plane the feasibility domain for Problem (P). It is the convex polygon OABCDE shown in Figure VIII.4. In this figure, we notice the following:
1. The points that correspond to basic solutions (5), (5'), (5'') are vertices O, A, B of the polygon.
If, in the last tableau, we let x3 enter the basis (its reduced cost is zero), x5 leaves and we obtain the canonical form
x2 - (1/3)x5 + (2/3)x6 = 1
x3 + (5/3)x5 + (2/3)x6 = 8
x4 - (4/3)x5 + (2/3)x6 = 1/2
x1 + (2/3)x5 + (2/3)x6 = 4
0 x5 - 2x6 = z - 6
with associated basic solution
         x1 = 4    x4 = 1/2
(5''')   x2 = 1    x5 = 0       z = 6
         x3 = 8    x6 = 0
(6)  x1 = 4/5 + (2/5)x3,   x2 = 13/5 - (1/5)x3,   0 <= x3 <= 8
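For a polygon in two variables the vertices can be enumerated mechanically: intersect each pair of constraint lines and keep the feasible intersection points. A brute-force sketch (fine at this size; data as in (P), names mine):

```python
import numpy as np
from itertools import combinations

# Constraints of (P) in the form G x <= h, including -x1 <= 0 and -x2 <= 0.
G = np.array([[-2.0, 1], [1, -2], [1, -1], [0.5, 1], [-1, 0], [0, -1]])
h = np.array([1.0, 5 / 2, 3, 3, 0, 0])

vertices = []
for i, j in combinations(range(6), 2):
    M = G[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                              # parallel lines: no intersection
    v = np.linalg.solve(M, h[[i, j]])
    if np.all(G @ v <= h + 1e-9):             # keep only feasible intersections
        vertices.append(v)
```

Maximizing x1 + 2x2 over these vertices gives 6, attained at B = (4/5, 13/5) and at the adjacent vertex (4, 1) -- the two endpoints of the optimal edge.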
These properties are general ones, as will be seen now.
Definition 8: A point a of a convex polyhedral set C is a "vertex" if
p, q in C,  a = lambda p + (1 - lambda) q,  0 < lambda < 1   ==>   p = q = a
Definition 9: A segment E of C is an "edge" if
e in E,  p, q in C,  e = lambda p + (1 - lambda) q,  0 < lambda < 1   ==>   p, q in E
Remark 4: Definitions 8 and 9 have the same form. They can be restated in the following way: "A point a (resp. a segment E) of a convex polyhedral set C is a vertex (resp. an edge) if every time this point a (resp. any point of this segment E) is the middle point of a segment contained in C, this segment is reduced to a (resp. is contained in E)."
Section 1. Geome tric Interpretation of the Simplex Algorithm 135
Figure VIII.5: An Illustration of Definitions 8 and 9
Example: Consider the triangle ABC shown in Figure VIII.5. It is clear that the definitions just given are consistent with everyday language: the three vertices of this triangle are A, B, and C, and the three edges are [AB], [BC], and [CA].
Definition 10: Two vertices of a convex polyhedral set are said to be "adjacent" if there is an edge joining them.
(7) x = lambda p + (1 - lambda) q,    0 < lambda < 1
lambda p_j + (1 - lambda) q_j = 0,    j not in J
Thus p_j = q_j = 0 = x_j for j not in J, and hence p = q = x.
x_j = 0,    j not in J, j != s
(8) 0 < x_s
x = lambda p + (1 - lambda) q,    0 < lambda < 1
0 <= p_s <= x_s
0 <= q_s <= x_s
Section 2. Geometric Interpretation of the Simplex Al gorithm 137
(P) { Ax <= b; x >= 0; cx = z(Max) }
C = { x | Ax <= b, x >= 0 }
(Pbar) { Ax = b; x >= 0; cx = z(Max) }
(8') x_j = 0,  j not in J, j != s
x^J = b bar - A bar^s x_s >= 0
c^s = Max_j [c^j]
j
(e) When we are at a vertex such that no edge corresponds to an increase in the value of the objective function, we stop: the present solution is optimal.
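The vertex-to-vertex walk just described can be sketched as a dense tableau implementation (an illustrative sketch of the standard primal simplex, not the book's revised form; names mine):

```python
import numpy as np

def simplex_max(A, b, c):
    """Tableau simplex for max c.x s.t. A x <= b, x >= 0, with b >= 0.
    Each pivot moves the current basic solution to an adjacent vertex."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[m, :n] = -np.asarray(c, float)             # bottom row: negated reduced costs
    basis = list(range(n, n + m))                # start at the vertex x = 0
    while True:
        s = int(np.argmin(T[m, :-1]))
        if T[m, s] >= -1e-9:                     # no improving edge: optimal vertex
            break
        ratios = [T[i, -1] / T[i, s] if T[i, s] > 1e-9 else np.inf for i in range(m)]
        r = int(np.argmin(ratios))               # first bound met along the edge
        T[r] /= T[r, s]
        for i in range(m + 1):
            if i != r:
                T[i] -= T[i, s] * T[r]
        basis[r] = s
    x = np.zeros(n)
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i, -1]
    return x, T[m, -1]
```

On the polygon example of this chapter it follows exactly the path O, A, B of solutions (5), (5'), (5'').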
EXERCISES
1. In general, if C is a convex polyhedral set, a subset F of C will be called a face if
x in F,  p, q in C,  x = lambda p + (1 - lambda) q,  0 < lambda < 1   ==>   p, q in F
(a) Show that vertices and edges are faces of a convex polyhedral set. Namely, a vertex is a zero-dimensional face, an edge is a one-dimensional face.
(i)
-2x + x < 1
l 2
xl - 2x2 < 2
(i i)
140 Cha pter VIII. Geome tric Interp retation of the Simplex Method
(i ii )
xl - x 2 < 3
xl + x < 7
2
( P) xl - x 2 > x ,x > 0
l 2
xl < 5
2x
l
- x2 = z (Max)
(e) Write and solve the dual of (P). Give all the optimal solutions of the dual. How many basic optimal solutions are there?
5. Consider the set of points in R^3 that satisfy
-x1 + 3x2 + x3 <= 9
4x1 - 2x2 + x3 <= 4
Exercises 141
{ Ax <= b; x >= 0; cx = z(Max) }
show that
(c) It is possible that (P) has several optimal bases but just one optimal solution. What can be said then of this optimal solution?
7. Given two nonempty, closed convex polyhedra C and C' with C INTERSECT C' = EMPTY, show that there exists a hyperplane that strictly separates them.
yA + y'A' = 0
yb + y'b' < 0
and we refer the reader to Chapter II for the definitions and preliminary results about duality. Let us first recall here the most important theorems obtained so far.
cx <= yb
Section I. Theorems on Duality: Complementary Slackness Theore m 143
Theorem VI.6: If two dual linear programs (P) and (D) both have a feasible solution, they both have an optimal solution and the values of the objective functions for the optimums are equal.
(ii) If (P) (resp. (D)) has a feasible solution but not (D) (resp. not (P)), then (P) (resp. (D)) has a class of unbounded solutions.
Remark 2: It may happen that neither (P) nor (D) has a feasible solution.
Example:
      x1 - x2 =  1                     y1 + y2 >=  1
(P)   x1 - x2 = -1    x1, x2 >= 0  (D) y1 + y2 <= -1
      x1 + x2 = z(Max)                 y1 - y2 = w(Min)
                            |------ (P) has a feasible solution ------|
                            (P) has an          (P) has no         (P) has no
                            optimal             optimal            feasible
                            solution            solution           solution
(D) has an optimal          w_min = z_max       impossible         impossible
solution                    (Theorem VI.6)
(D) has a feasible but      impossible          impossible         w -> -oo
no optimal solution
(D) has no feasible         impossible          z -> +oo           possible
solution
The ith constraint is said to be "tight" if A_i x bar = b_i, and "slack" if A_i x bar < b_i.
(Pbar) { Ax + U xi = b;  x, xi >= 0;  cx = z(Max) }
(Dbar) { yA - eta U = c;  y, eta >= 0;  yb = w(Min) }
Let x bar, xi bar and y bar, eta bar be feasible solutions to (Pbar) and (Dbar), respectively. Let us multiply the ith constraint of (Pbar) by the corresponding dual variable y bar^i and add up for i = 1, 2, ..., m. We get
(1) y bar A x bar + y bar xi bar = y bar b
Similarly, let us multiply the jth constraint of (Dbar) by x bar_j and add up for j = 1, 2, ..., n. We get
(1') y bar A x bar - eta bar x bar = c x bar
Subtracting:
(2) y bar xi bar + eta bar x bar = y bar b - c x bar
Necessary condition: Let x bar, y bar be optimal solutions of (P) and (D), respectively. Then from Theorem VI.6, we have
y bar b - c x bar = 0
and from (2)
y bar xi bar + eta bar x bar = 0
But
y bar xi bar + eta bar x bar = SUM_{i=1}^{m} y bar^i xi bar_i + SUM_{j=1}^{n} eta bar_j x bar_j
Each term of this sum is nonnegative, so that the sum can be zero only if each term is 0. Thus we have
y bar^i > 0   ==>  xi bar_i = 0
xi bar_i > 0  ==>  y bar^i = 0
(3)
eta bar_j > 0  ==>  x bar_j = 0
x bar_j > 0    ==>  eta bar_j = 0
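Relations (3) can be verified mechanically for a candidate pair of solutions. A sketch for the inequality-form pair, with the slacks computed on the fly (function name mine):

```python
import numpy as np

def complementary_slackness_holds(A, b, c, x, y, tol=1e-9):
    """Check (3) for max{cx : Ax <= b, x >= 0} and its dual:
    y_i * (primal slack)_i = 0 and (dual slack)_j * x_j = 0 for all i, j."""
    xi = b - A @ x           # primal slacks
    eta = y @ A - c          # dual slacks
    return bool(np.all(np.abs(y * xi) <= tol) and np.all(np.abs(eta * x) <= tol))
```

With the production problem (P1) of the next section (A rows (2,1), (1,2), (0,1), b = (8,7,3), c = (4,5)), the pair x = (3,2), y = (1,2,0) satisfies (3), while a non-optimal x does not.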
146 Chapter IX. Complements on Duality: Economic Interpretation of Dual Variables
(since x bar_1 > 0)
(since x bar_2 > 0)
y bar_1 = 9/11,  y bar_2 = 2/11
Remark 4: Theorem 1 is sometimes called the "weak" complementary slackness theorem. It may happen that for a couple x bar, y bar of optimal solutions to (P) and (D), we have simultaneously a tight constraint and the corresponding dual variable equal to zero.
Section 2. Economic Interpretation of Dual variables 147
Beware that this theorem does not assure that the optimal solutions in question are basic ones. It may happen that no couple of basic optimal solutions satisfies the strong complementary slackness theorem.
In Section II.3, we gave an economic interpretation of the dual of the transportation problem (P2). We now give other illustrations.
(a) The Pill Manufacturer's Problem: Suppose that a housewife has to find a minimum-cost diet by buying a combination of five foods, subject to the constraints that the diet will provide at least 21 units of vitamin A and 12 units of vitamin B, the properties of the five foods under consideration being given by
Food               1    2    3    4    5
Vit. A content     1    0    1    1    2
Vit. B content     0    1    2    1    1
Cost              20   20   31   11   12
The housewife will have to solve the following linear program:

      x1 + x3 + x4 + 2x5 >= 21
(4)   x2 + 2x3 + x4 + x5 >= 12        x_j >= 0
      20x1 + 20x2 + 31x3 + 11x4 + 12x5 = z(Min)
Now assume that a merchant or pill manufacturer possesses pills of vitamin A and pills of vitamin B. He wants to know at which prices he must sell the pills in order to:
      y1 <= 20
      y2 <= 20
      y1 + 2y2 <= 31
(5)   y1 + y2 <= 11        y1, y2 >= 0
      2y1 + y2 <= 12
      21y1 + 12y2 = w(Max)
The housewife problem (4) is solved in the succession of simplex tableaus shown below.

x1     x2     x3     x4     x5     s1     s2  |  b
 1      0      1      1      2     -1      0  | 21     <-
 0      1      2      1      1      0     -1  | 12
----------------------------------------------------
 *      *    -29    -29    -48*   +20    +20  | z - 660

1/2     0     1/2    1/2     1    -1/2     0  | 21/2
-1/2    1     3/2    1/2     0     1/2    -1  | 3/2    <-
----------------------------------------------------
 24     *     -5     -5*     *     -4     20  | z - 156

 1     -1     -1      0      1     -1      1  | 9
-1      2      3      1      0      1     -2  | 3
----------------------------------------------------
 19     10     10     *      *      1     10  | z - 141

The optimal solution is x bar_4 = 3, x bar_5 = 9, all other variables zero, and z bar = 141.
Since x bar_4 > 0 and x bar_5 > 0, the complementary slackness theorem gives
y bar_1 + y bar_2 = 11
2 y bar_1 + y bar_2 = 12
Thus, without doing any computation, we know that an optimal solution of the pill manufacturer's problem (5) is
y bar_1 = 1,  y bar_2 = 10,  w bar = 1 x 21 + 10 x 12 = 141
(b) Marginal Prices for Production Problem (P1): Assume that the manager of the firm that was depicted in Section I.1.a might buy some extra quantities of the different raw materials. The question we will try and answer now is: "What prices is the manager ready to pay for these extra quantities?" Recall that problem (P1) was written in the following form:

       2x1 + x2 <= 8      (I)
(P1)   x1 + 2x2 <= 7      (II)        x1, x2 >= 0
       x2 <= 3            (III)
       4x1 + 5x2 = z(Max)

and that the optimal solution was x bar_1 = 3, x bar_2 = 2, z bar = 22.
It takes the manager very little thinking to discover that, given that for the optimal solution he has got 1 unit of raw material III in excess, he is not ready to buy any extra quantity of raw material III, whatever its price might be. In economic terms, we should say that a commodity that is in excess has a value equal to zero.
Now suppose that the supply of commodities II and III are held fixed and that we want to know whether it pays to buy extra quantities of raw material I. Not knowing the subtleties of sensitivity analysis for linear programs, the manager of the firm decides to try to solve his linear program with a supply of 9 (instead of 8) for raw material I. He actually finds that the same basis is optimal and that the solution gives
z bar = 22 + 1
        2x1 + x2 <= 8 + theta
(P'1)   x1 + 2x2 <= 7                 x1, x2 >= 0
        x2 <= 3
        4x1 + 5x2 = z(Max)
he finds that the optimal basis is J = {1, 3, 5} and the optimal solution is
x bar_1 = 7,  x bar_2 = 0,  z bar = 28
x bar_1 = 13/3,  x bar_2 = 4/3,  z bar = 22 + 2
(A^J)^-1 (b + delta b) >= 0
i.e., J is also an optimal basis for
(16) { Ax = b + delta b;  x >= 0;  cx = z(Max) }
x bar^J = (A^J)^-1 b + (A^J)^-1 delta b,    x bar_j = 0, j not in J
z bar = pi b + pi delta b
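The marginal prices are just the dual values pi = c^J (A^J)^-1 at the optimal basis. For (P1) this is a three-line computation (numpy, variable names mine):

```python
import numpy as np

# Optimal basis of (P1): columns x1, x2 and s3 (the slack of constraint III).
B = np.array([[2.0, 1, 0],
              [1.0, 2, 0],
              [0.0, 1, 1]])
cJ = np.array([4.0, 5, 0])

pi = np.linalg.solve(B.T, cJ)   # pi A^J = c^J  <=>  (A^J)^T pi^T = (c^J)^T
```

pi comes out as (1, 2, 0): an extra unit of raw material I is worth 1 (the "22 + 1" found above), an extra unit of II is worth 2, and the excess raw material III is worth 0, exactly as the economic argument predicted.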
4. -c bar_j represents the cost of operating activity j at level 1 if c bar_j < 0.
(D): Given a unit profit (cost) for each of the n activities j, and a supply (demand) for each of the m commodities i, what must be the unit price of each commodity i such that the total value of commodities consumed minus the total value of commodities produced be minimum, subject to the constraints that for each activity j, the total value of consumed commodities minus the total value of produced commodities -- for a level of activity equal to 1 -- will be greater than or equal to the unit profit of activity j?
EX ERCISES
1. What is t he dua l of
Xl + x3
x2 + x3 2
(P) xl + x4 2
x2 + x4 4
xl + x + x3 + x z (Max)
2 4
Solve the dual of (P). From this solution, what can be said about (P)? Check directly on (P) that what can be said about (P) by studying its dual is in fact true.
3. Prove the following "strong complementary slackness theorem": If both dual linear programs (P) and (D) have feasible solutions, it is possible to find a pair of optimal solutions such that:
x1 = 1, x2 = 2, x3 = 0, x4 = 4, x5 = 0
is an optimal solution of
- 2x x 3x 4
S <
+ +
3 4
3x + 4x + X < 3 x. > 0
l 3 s 1
x
2 - x
3
+ 2x
S < 2
<
z (Max)
x1 + 2x2 <= 14
2x1 - x2 <= 10        x_i >= 0, i = 1, 2
x1 - x2 <= 3
2x1 + x2 = z(Max)
(a) Feasib l e?
(b) Basi c?
(c) Opt imal ?
Exercises 155
7. Use the complementary slackness theorem to prove that the feasible solution of the transportation problem of Exercise I.6 is in fact optimal.
8. What can be said of the marginal prices when the optimal solution of the linear program
{ Ax <= b; x >= 0; cx = z(Max) }
is degenerate?
9. Let V SUBSET R^n, V' SUBSET R^m and F: V x V' -> R. x bar in V, y bar in V' is a saddle point for F if:
(*) F(x, y bar) <= F(x bar, y bar) <= F(x bar, y)    for all x in V, y in V'

(P) { Ax <= b; x >= 0; cx = z(Max) }        (D) { yA >= c; y >= 0; yb = w(Min) }

HINT: To prove sufficiency, use the fact that (*) is true for any x and any y.
Chapter X. The Dual Simplex Algorithm: Parametric
Linear Programming
studying
{ Ax = b;  x >= 0;  (c + mu f)x = z(Max) }
(where c and f are n-row vectors and mu is a parameter) for various values of mu -- i.e., for different relative weights of these objective functions -- gives some insight into the way the optimal solution depends on each objective function.
(1) A^J is, up to a permutation of columns, the unit matrix (this permutation is given by the function "col").
(2) c bar <= 0
(3) c bar^J = 0
Jbar = J U {s} \ {col(r)}
How will we determine on which indices r and s to perform the pivot operation?
We begin by choosing a row index r such that b bar_r < 0. If such an index does not exist, J is an optimal basis. We stop. Thus the index leaving the basis will be col(r). Let us call s the index entering the basis; we examine now how s is chosen.
The cost vector c barbar relative to the basis Jbar will be equal to
158 Chapter X. The Dual Simplex Algorithm: Parametric Linear Programming
c barbar = c bar - pi A
where pi is an m-row vector all of whose components are zero except the rth. In order that condition (3) be satisfied after pivoting, we must have
c barbar^s = c bar^s - pi A^s = 0
and thus
pi_r = c bar^s / A bar_r^s
Note that the variable leaving the basis takes the value x_col(r) = b bar_r < 0. For every index k,
c barbar^k = c bar^k - (c bar^s / A bar_r^s) A bar_r^k
By assumption we have c bar^k <= 0; thus, for all k such that A bar_r^k >= 0, we have c barbar^k <= 0 (recall that A bar_r^s < 0, so that c bar^s / A bar_r^s >= 0). In order that condition (2) be satisfied after pivoting, we then take s defined by
(4) c bar^s / A bar_r^s = Min { c bar^k / A bar_r^k : A bar_r^k < 0 }
Section I. Dual Simplex Algorithm 159
Linear program (P) is written in canonical form with respect to basis J and c bar <= 0. The mapping "col" is defined as in Remark V.1.
J := J U {s} \ {col(r)}
end repeat
Remark 1: The reader will note how closely the dual simplex algorithm parallels (or mirrors) the (primal) simplex (see Section V.3). The fact that the dual simplex algorithm roughly reduces to the primal simplex performed on the dual will be apparent in the following example. The proof of this property is left as an exercise.
In Step 3, the choice of s in case of a tie can be made using a perturbation technique or lexicographic rule. The principles of the revised and the dual simplex algorithm can be combined.
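A single dual simplex pivot, as specified by the ratio test (4), might be sketched as follows (tableau layout and names are mine: last column is b, last row is the cost row with c bar <= 0):

```python
import numpy as np

def dual_simplex_pivot(T, basis):
    """One pivot of the dual simplex on tableau T. Returns False when
    b >= 0, i.e. the current basis is (primal feasible and hence) optimal."""
    b = T[:-1, -1]
    if np.all(b >= -1e-9):
        return False                          # optimality reached: stop
    r = int(np.argmin(b))                     # leaving row: most negative b_r
    row = T[r, :-1]
    cand = [k for k in range(len(row)) if row[k] < -1e-9]
    if not cand:
        raise ValueError("row r is all nonnegative: primal infeasible")
    s = min(cand, key=lambda k: T[-1, k] / row[k])   # ratio test (4)
    T[r] /= T[r, s]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, s] * T[r]
    basis[r] = s
    return True
```

Iterating the function until it returns False runs the whole algorithm; the cost row stays nonpositive throughout, which is exactly what (4) guarantees.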
      -2y1 - y2 + y4 = -4
(S)   -y1 - 2y2 - y3 + y5 = -5        y_i >= 0, i = 1, 2, ..., 5
      5y1 + 7y2 + 3y3 = w(Min)
We now give the solution of this linear program using the dual simplex algorithm in tabular form:
y1    y2    y3    y4    y5  |  b
-2    -1     0     1     0  | -4
-1    -2    -1     0     1  | -5    <-
-------------------------------------
-5    -7    -3*    0     0  | w

-2    -1     0     1     0  | -4    <-
 1     2     1     0    -1  |  5
-------------------------------------
-2    -1*    0     0    -3  | w - 15

 2     1     0    -1     0  |  4
-3*    0     1     2    -1  | -3    <-
-------------------------------------
 0     0     0    -1    -3  | w - 19

 0     1    2/3   1/3  -2/3 |  2
 1     0   -1/3  -2/3   1/3 |  1
-------------------------------------
 0     0     0    -1    -3  | w - 19

The optimal solution is y bar_1 = 1, y bar_2 = 2, y bar_3 = y bar_4 = y bar_5 = 0, with w bar = 19.
Section 2. Parametric Linear Programming \6 \
Remark 2: The dual simplex algorithm is frequently used in "post optimization." After having solved a linear program, it might happen that we want to add constraints that are not satisfied for the present "optimal" solution. These new constraints are a source of "infeasibility." It is much more efficient to apply a few steps of the dual simplex algorithm than to begin again the solution from scratch. This situation often occurs in combinatorial optimization when one solves integer linear programs (i.e., linear programs with integrity constraints on the variables -- see Reference [4]).
(P) { Ax = b;  x >= 0;  cx = z(Max) }

       2x1 + x2 + x3 = 8
       x1 + 2x2 + x4 = 7
(Qmu)  x2 + x5 = 3                    x_i >= 0, i = 1, 2, ..., 5
       (4 + mu)x1 + 5x2 = z(Max)
2
1 2
"3 x 3 3 x4 + X
s
2 1
xl + 3 x3 3 x4 3
(6) x. > 0 i= 1 ,2 ,o •• , S
1
1 2
2 -
X 3 x3 + 3 x4 2
)lx x - 2x z (Max) - 22
l 3 4
1 2
3 x3 3 x4 + X
s
2 1
xl + 3 x3 3 x4 3 x, > 0 i =1, 2, • • • , S
1
(7)
1 2
x2 3" x 3 + 3 x4 2
2 1
(3 )l + 1) x
3
+ (3 u -z) x 4 z(Max) - 22 - 3 )l
an d
4 1
c "3 j.! - 2
We get
      x2 + x5 = 3
      x1 + (1/2)x2 + (1/2)x3 = 4
(8)   (3/2)x2 - (1/2)x3 + x4 = 3          x_i >= 0, i = 1, 2, ..., 5
      -((1/2)mu - 3)x2 - ((1/2)mu + 2)x3 = z(Max) - 16 - 4mu
      x3 - 2x4 + 3x5 = 3
      x1 + x4 - 2x5 = 1
(9)   x2 + x5 = 3                         x_i >= 0, i = 1, 2, ..., 5
      (-mu - 4)x4 + (2mu + 3)x5 = z(Max) - 19 - mu
       2x1 + x3 - x5 = 5
       x1 + x4 - 2x5 = 1
(9')   x2 + x5 = 3                        x_i >= 0, i = 1, 2, ..., 5
       (mu + 4)x1 - 5x5 = z(Max) - 15
mu      -oo ... -4     -4 ... -3/2     -3/2 ... 6     6 ... +oo
x1           0              1               3             4
x2           3              3               2             0
x3           5              3               0             0
x4           1              0               0             3
x5           0              0               1             3
z           15           19 + mu        22 + 3mu      16 + 4mu

Figure X.1: z_max vs. mu for (Qmu)
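The four regimes of the table can be confirmed numerically by brute force over bases (a check of the breakpoints, not the parametric algorithm itself; names mine):

```python
import numpy as np
from itertools import combinations

# (Q_mu): maximize (4 + mu) x1 + 5 x2 over the fixed polyhedron of (P1)
# written with slacks x3, x4, x5.
A = np.array([[2.0, 1, 1, 0, 0], [1, 2, 0, 1, 0], [0, 1, 0, 0, 1]])
b = np.array([8.0, 7, 3])

def zmax(mu):
    """Optimal value as a function of mu: enumerate basic feasible solutions
    (fine at this size) and take the best objective value."""
    c = np.array([4 + mu, 5, 0, 0, 0])
    best = -np.inf
    for J in combinations(range(5), 3):
        B = A[:, list(J)]
        if abs(np.linalg.det(B)) < 1e-9:
            continue
        xJ = np.linalg.solve(B, b)
        if np.all(xJ >= -1e-9):               # basic AND feasible
            x = np.zeros(5)
            x[list(J)] = xJ
            best = max(best, c @ x)
    return best
```

zmax(0) = 22, zmax(10) = 56 = 16 + 4*10, and zmax(-10) = 15: the piecewise-linear, convex shape of Figure X.1.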
Section 2. Parametric Linear Programming 165
Consider now problem (P1) in which the right-hand side b is replaced by b + mu d, d = (2, 7, 2):
2x1 + x2 + x3 = 8 + 2mu
x1 + 2x2 + x4 = 7 + 7mu
x2 + x5 = 3 + 2mu
x_i >= 0, i = 1, 2, ..., 5;    4x1 + 5x2 = z(Max)
In canonical form with respect to basis {5, 1, 2}, this program is written
       (1/3)x3 - (2/3)x4 + x5 = 1 - 2mu
       x1 + (2/3)x3 - (1/3)x4 = 3 - mu
(10)   x2 - (1/3)x3 + (2/3)x4 = 2 + 4mu       x_i >= 0, i = 1, 2, ..., 5
       -x3 - 2x4 = z(Max) - 22 - 16mu
1 - 2mu >= 0,    3 - mu >= 0,    2 + 4mu >= 0
i.e.,
-1/2 <= mu <= 1/2
       -(1/2)x3 + x4 - (3/2)x5 = -3/2 + 3mu
       x1 + (1/2)x3 - (1/2)x5 = 5/2
(11)   x2 + x5 = 3 + 2mu                      x_i >= 0, i = 1, 2, ..., 5
       -2x3 - 3x5 = z - 25 - 10mu
       x2 + x5 = 3 + 2mu
       x1 + 2x2 + x4 = 7 + 7mu
(12)   -3x2 + x3 - 2x4 = -6 - 12mu
       -3x2 - 4x4 = z - 28 - 28mu
The basis {5, 1, 3} remains feasible, and thus optimal, for -1 <= mu <= -1/2. For mu < -1, the second equation is infeasible.
These results can be summarized in the following table:
mu       -1    -1 <= mu <= -1/2    -1/2    -1/2 <= mu <= 1/2    1/2    1/2 <= mu
x1        0         7 + 7mu         7/2         3 - mu          5/2       5/2
x2        0            0             0          2 + 4mu          4      3 + 2mu
x3        6        -6 - 12mu         0             0             0         0
x4        0            0             0             0             0    -3/2 + 3mu
x5        1         3 + 2mu          2          1 - 2mu          0         0
z         0        28 + 28mu        14         22 + 16mu        30     25 + 10mu
(c) General results in parametric programming: Let us consider the following linear programs:
(Pmu) { Ax = b + mu d;  x >= 0;  cx = z(Max) }
(Qmu) { Ax = b;  x >= 0;  (c + mu f)x = w(Max) }
Definition 1: Recall (cf. Definition VIII.5) that a real-valued function g defined on a convex set C is convex if g(lambda x + (1 - lambda)y) <= lambda g(x) + (1 - lambda) g(y) for all x, y in C and 0 <= lambda <= 1.
{ (x, mu) | Ax = b + mu d, x >= 0 }
x = lambda x' + (1 - lambda) x'',    mu = lambda mu' + (1 - lambda) mu'',    0 < lambda < 1
And thus
(*) z(mu) >= cx
EXERCISES
1. Use the dual simplex algorithm to solve
(1')
xl + X
z = z(Max)
2. Solve, using the dual simplex algorithm, the linear program
xl - X
z <
xl >
> 3
x l ,x z> ()
xl + X
z
x l + 2x Z Z (Min)
Exercises 171
3.   x1 + x2 >= 2
     -x1 + x2 >= 3        x1, x2 >= 0
     x1 >= 4
     3x1 + 2x2 = z(Min)
4. Describe the lexicographic method applied to the dual simplex algorithm.
5. Consider
(P) { Ax <= b;  x >= 0;  cx = z(Max) }
which is neither primal feasible (some b_i are negative) nor dual feasible. Let e be the m-column vector each component of which is equal to 1 and mu bar = -Min_i [b_i].
Prove that applying the technique of Section 2(b) to solve
x1 + 2x2 - 3x3 - x4 <= -1
2x1 + x2 + 2x3 + 3x4 <= 3        x_i >= 0, i = 1, 2, ..., 4
x1 + 3x2 - x3 + x4 = z(Max)
I aX
2xx
l
•
+
x2
X2
2x
+
+
2x
2x
3
x3
3
<
>
6
z (Max)
x. > 0
1
i =1, 2, .'
l 2
(a) For a = -1 .
       x1 + x2 >= 2
(Pa)   x1 + 2x2 >= 3        x1, x2 >= 0
       (2 + a)x1 + 4x2 = z(Min)
when a varies.
        x1 + x2 >= 3 - a
(Pbar_a) x1 + 2x2 >= 2 + a
        (2 + a)x1 + 4x2 = z(Max)
-2 xl + x2 < + ]J
xl + 2x z (Max)
2
Chapter XI . The Transportation Problem
      SUM_{l=1}^{q} t_{kl} = a_k,    k = 1, 2, ..., p
(T)   SUM_{k=1}^{p} t_{kl} = b_l,    l = 1, 2, ..., q        t_{kl} >= 0
      SUM_{l=1}^{q} SUM_{k=1}^{p} d_{kl} t_{kl} = z(Min)
where the a_k (k = 1, 2, ..., p) and the b_l (l = 1, 2, ..., q) are given nonnegative quantities.
174 Chapter XI. Th e Transportation Problem
(1) SUM_{k=1}^{p} SUM_{l=1}^{q} t_{kl} = SUM_{k=1}^{p} a_k = SUM_{l=1}^{q} b_l
Remark 2: Let us consider the problem met by an industrialist who wants to transport at minimal cost a certain commodity from p factories (in the kth factory a quantity a_k of this commodity is available) to q warehouses (the demand in the lth warehouse is b_l); the unit cost of shipping (i.e., the cost of shipping one unit) from factory k to warehouse l is d_{kl}. Formulation of this problem is as follows (t_{kl} denotes the amount of commodity shipped from factory k to warehouse l):
Section 1. The Problem
        Σ_{ℓ=1}^{q} t_kℓ ≤ a_k        k = 1,2,…,p

(2)     Σ_{k=1}^{p} t_kℓ ≥ b_ℓ        ℓ = 1,2,…,q        t_kℓ ≥ 0

        Σ_{ℓ=1}^{q} Σ_{k=1}^{p} d_kℓ t_kℓ = z(Min)
(1")    Σ_{ℓ=1}^{q} b_ℓ  ≤  Σ_{ℓ=1}^{q} Σ_{k=1}^{p} t_kℓ  ≤  Σ_{k=1}^{p} a_k

        Σ_{k=1}^{p} a_k - Σ_{ℓ=1}^{q} b_ℓ
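Formulation (2) is mechanical to check numerically. The following Python sketch is an editorial illustration (the function names `feasible_for_2` and `cost` are ours, not the book's): it tests a shipping plan t against the supply, demand, and nonnegativity constraints of (2) and evaluates the objective.

```python
def feasible_for_2(t, a, b):
    """Check a shipping plan t (p x q list of lists) against formulation (2):
    row sums at most a_k, column sums at least b_l, all entries nonnegative."""
    p, q = len(a), len(b)
    return (all(sum(t[k]) <= a[k] for k in range(p)) and
            all(sum(t[k][l] for k in range(p)) >= b[l] for l in range(q)) and
            all(t[k][l] >= 0 for k in range(p) for l in range(q)))

def cost(t, d):
    """Objective of (2): total shipping cost sum of d_kl * t_kl."""
    return sum(d[k][l] * t[k][l]
               for k in range(len(t)) for l in range(len(t[0])))

# A small feasible plan for a = (5, 4), b = (3, 6):
plan = [[3, 2], [0, 4]]
assert feasible_for_2(plan, [5, 4], [3, 6])
```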
Remark 3: A problem of this type has been investigated by Monge. The present formalism was first studied by Hitchcock, Kantorovitch, and Koopmans. Problem (P_2) of Chapter I is a transportation problem.
The mathematical properties of linear programs that are transportation problems come from the fact that the matrix A has a very special structure. This special structure is due to the fact that there is a graph on which the problem is defined. Solution methods other than the revised simplex algorithm that we are about to present here exist to solve transportation problems. We can, in particular, cite the "Hungarian method" of H.W. Kuhn, a method that is also named "primal-dual" (see [6]).
        Ax = f        x ≥ 0
(T)
        cx = z(Min)

where

  A is a (p+q) × pq-matrix,
  c is a pq-row vector with c_j = d_kℓ,
  x is a pq-column vector with x_j = t_kℓ,
  f is the (p+q)-column vector (a_1,…,a_p, b_1,…,b_q),

for

(3)     j = q(k - 1) + ℓ.
Remark 5: Each variable t_kℓ appears once and only once (with coefficient 1) in the group of the p first equations of (T) (actually in the kth equation of this group). Each variable t_kℓ appears once and only once (with coefficient 1) in the group of the q last equations of (T) (actually in the ℓth equation of this group). Thus matrix A has the following properties:

   (i)   Each column of A contains exactly two nonzero elements, both equal to 1.
   (ii)  One of these elements lies in the group of the p first rows, the other in the group of the q last rows.
   (iii) Any (p+q)-column vector with properties (i) and (ii) is a column of A.
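The structure described in Remark 5 is easy to verify computationally. The Python sketch below is an editorial illustration (the name `transportation_matrix` is ours): it builds A using the column numbering (3), j = q(k-1) + ℓ, and checks that every column carries exactly two 1's, one in each group of rows.

```python
def transportation_matrix(p, q):
    """Constraint matrix A of (T): rows 1..p are the supply equations,
    rows p+1..p+q the demand equations; column j = q*(k-1) + l
    corresponds to the variable t_kl (formula (3)), here 0-based."""
    A = [[0] * (p * q) for _ in range(p + q)]
    for k in range(p):
        for l in range(q):
            j = q * k + l        # 0-based version of (3)
            A[k][j] = 1          # once in the first p rows ...
            A[p + l][j] = 1      # ... and once in the last q rows
    return A

A = transportation_matrix(3, 4)
# Properties (i)-(ii) of Remark 5:
for j in range(12):
    col = [A[i][j] for i in range(7)]
    assert sum(col) == 2 and sum(col[:3]) == 1 and sum(col[3:]) == 1
```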
Definition 2: We will give the name "nonsingular triangular" matrix (in brief, "triangular" matrix) to a square nonsingular matrix B satisfying the following (recursive) definition: some row i of B contains a single nonzero element b_ij; thus, in the system

(4)     Bx = f

the ith equation directly gives

(4')    x_j = f_i / b_ij.

Substituting x_j by its value (4') in the other equations of (4), we get a linear system of dimension n-1 (if B was of dimension n) the matrix of which is triangular.
        y_i =  +1   if the ith row of B belongs to the group of the p first rows of A
               -1   if the ith row of B belongs to the group of the q last rows of A
Corollary 1: Matrix A is totally unimodular; i.e., every square submatrix of A has a determinant that is equal to +1, -1, or 0.

Proof: After a permutation on rows and columns (which does not change the absolute value of the determinant), a triangular matrix can be written under the classical form with

        b_ij = 0    if j > i.

The value of the determinant of such a matrix is equal to the product of its diagonal elements, which are all equal to 1 if B is a submatrix of A.
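The corollary can be checked by brute force on a small instance. The sketch below is an editorial illustration (a naive cofactor determinant, adequate for tiny matrices): it enumerates every square submatrix of A up to order 3 for p = 2, q = 3 and verifies that each determinant is +1, -1, or 0.

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion along the first row
    (fine for the small submatrices considered here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        if M[0][j]:
            minor = [row[:j] + row[j+1:] for row in M[1:]]
            total += (-1) ** j * M[0][j] * det(minor)
    return total

# Constraint matrix A for p = 2, q = 3 (column j = q*k + l, 0-based).
p, q = 2, 3
A = [[0] * (p * q) for _ in range(p + q)]
for k in range(p):
    for l in range(q):
        A[k][q * k + l] = 1
        A[p + l][q * k + l] = 1

# Total unimodularity, checked up to order 3:
for n in range(1, 4):
    for rows in combinations(range(p + q), n):
        for cols in combinations(range(p * q), n):
            B = [[A[i][j] for j in cols] for i in rows]
            assert det(B) in (-1, 0, 1)
```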
This matrix has exactly two 1's in each column. It is not triangular since property (ii) of Remark 5 is not satisfied.
Remark 10: Every basic solution of linear program (T) will be integer valued if the components of vectors a and b are integers. Let I be a subset of the set of rows of A such that

        |I| = rank(A),

and let J be the set of basic indices; the basic solution is given by

(4")    x_j = 0    for j ∉ J,        A_I^J x_J = f_I.

But A_I^J is a square nonsingular submatrix of A, i.e., A_I^J is triangular, and all nonzero elements of A_I^J are equal to 1. Thus (4") is a system of type (4) that will be solved by additions and subtractions. If f has integer components, x_J will also have integer components.
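The "additions and subtractions" solve of Remark 10 can be sketched as follows (an editorial illustration; the function name is ours): repeatedly find a row with a single remaining unknown, read the variable off, and substitute, exactly as in the recursive Definition 2. With a 0/1 matrix and integer f, every x_j stays integral.

```python
def solve_triangular_unit(B, f):
    """Solve Bx = f for a 'triangular' 0/1 matrix in the sense of
    Definition 2, using only additions and subtractions (Remark 10)."""
    B = [row[:] for row in B]      # work on copies
    rhs = f[:]
    n = len(B)
    x = [None] * n
    remaining = set(range(n))
    while remaining:
        for i in range(n):
            unknowns = [j for j in remaining if B[i][j] == 1]
            if len(unknowns) == 1:
                j = unknowns[0]
                x[j] = rhs[i]            # row i has a single unknown left
                for r in range(n):       # substitute in every equation
                    if B[r][j] == 1:
                        rhs[r] -= x[j]
                        B[r][j] = 0
                remaining.remove(j)
                break
        else:
            raise ValueError("matrix is not triangular")
    return x

B = [[1, 0, 0], [1, 1, 0], [0, 1, 1]]
assert solve_triangular_unit(B, [2, 5, 4]) == [2, 3, 1]
```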
Theorem 2: A possesses p+q-1 linearly independent columns. Any constraint of (T) is redundant. After deletion of a single constraint, the linear system thus obtained is of full rank.

Proof: The (p+q)-row vector y defined by

        y_i =   1    if 1 ≤ i ≤ p
               -1    if p+1 ≤ i ≤ p+q

satisfies yA = 0.
(Figure: a (p+q-1) × (p+q-1) triangular submatrix of A, exhibiting p+q-1 linearly independent columns.)
Section 2. Properties of the Transportation Problem
u_1, u_2,…,u_p and v_1, v_2,…,v_q such that

(5)     u_k + v_ℓ ≤ d_kℓ        k = 1,2,…,p;  ℓ = 1,2,…,q

(6)     t_kℓ (d_kℓ - u_k - v_ℓ) = 0        k = 1,2,…,p;  ℓ = 1,2,…,q

The dual of (T) is

        u_k + v_ℓ ≤ d_kℓ        k = 1,2,…,p;  ℓ = 1,2,…,q
(T*)
        Σ_{k=1}^{p} a_k u_k + Σ_{ℓ=1}^{q} b_ℓ v_ℓ = w(Max)

Condition (5) guarantees that the u_k and v_ℓ are a feasible solution of (T*). If (5) and (6) are satisfied, we have a couple of feasible solutions to (T) and (T*) that satisfy the complementary slackness theorem.
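Conditions (5) and (6) are mechanical to verify. The Python sketch below is an editorial illustration (the function name is ours; it assumes the balanced equality form of (T)): it checks primal feasibility of t, dual feasibility of (u, v), and complementary slackness, which together certify optimality.

```python
def is_optimal_pair(t, u, v, d, a, b):
    """Check feasibility of t for (T), feasibility of (u, v) for (T*),
    and the complementary slackness conditions (5)-(6)."""
    p, q = len(a), len(b)
    primal_ok = (all(sum(t[k]) == a[k] for k in range(p)) and
                 all(sum(t[k][l] for k in range(p)) == b[l] for l in range(q)) and
                 all(t[k][l] >= 0 for k in range(p) for l in range(q)))
    dual_ok = all(u[k] + v[l] <= d[k][l]
                  for k in range(p) for l in range(q))
    slack_ok = all(t[k][l] * (d[k][l] - u[k] - v[l]) == 0
                   for k in range(p) for l in range(q))
    return primal_ok and dual_ok and slack_ok

# A 2x2 instance: shipping on the cheap diagonal is optimal.
d = [[1, 2], [3, 1]]
assert is_optimal_pair([[5, 0], [0, 5]], [0, 0], [1, 1], d, [5, 5], [5, 5])
```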
(i)  The sum of the t_kℓ on the kth row is equal to a_k.

(ii) The sum of the t_kℓ on the ℓth column is equal to b_ℓ.

We note that this presentation of the transportation problem is much more compact than the usual formalism, since matrix A has p+q rows and p × q columns, as opposed to the p+1 rows and q+1 columns of the preceding table. In fact, each entry of this table corresponds to a column of A.
(Table of the numerical example — supplies, demands, and unit costs d_kℓ — not reproduced.)
Remark 14: To initialize, we look for a feasible basis J and the corresponding basic solution. J will denote the set of couples (k,ℓ) in the basis. From Theorem 2, we have |J| = p+q-1. The procedure is the following:
(i) If p = 1, q ≥ 1:

        J = {(1,1), (1,2),…,(1,q)}

(7)     d_rs = Min_{k=1,2,…,p; ℓ=1,2,…,q} [d_kℓ]

We let J := J ∪ {(r,s)} and

(8)     t_rs = Min(a_r, b_s)
initialization routine
   J := ∅;  P := {1,2,…,p};  Q := {1,2,…,q}
   while |P| > 1 and |Q| > 1 do
      d_rs := Min_{k∈P; ℓ∈Q} [d_kℓ];  J := J ∪ {(r,s)}
      /* if more than one couple (r,s) is candidate, choose any one */
      if a_r ≤ b_s then
         t_rs := a_r;  b_s := b_s - a_r;  P := P \ {r}
      else
         t_rs := b_s;  a_r := a_r - b_s;  Q := Q \ {s}
      end if
   end while
   if |P| = 1 then
      for all s ∈ Q do
         J := J ∪ {(r,s)};  t_rs := b_s
      end for all
   else
      for all r ∈ P do
         J := J ∪ {(r,s)};  t_rs := a_r
      end for all
   end if
end initialization routine
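In modern terms this is the "least-cost" starting rule. The Python transcription below is an editorial sketch (names and 0-based indices are ours; it assumes a balanced instance, Σ a_k = Σ b_ℓ): pick the cheapest remaining cell, ship as much as possible, and drop the exhausted row or column.

```python
def initialize(a, b, d):
    """Least-cost starting basis for a balanced transportation problem.
    Returns the basis J (a list of p+q-1 couples) and the basic values t."""
    a, b = a[:], b[:]                      # work on copies
    p, q = len(a), len(b)
    P, Q = set(range(p)), set(range(q))
    J, t = [], {}
    while len(P) > 1 and len(Q) > 1:
        # cheapest remaining cell (any tie may be broken arbitrarily)
        r, s = min(((k, l) for k in P for l in Q),
                   key=lambda kl: d[kl[0]][kl[1]])
        J.append((r, s))
        if a[r] <= b[s]:
            t[(r, s)] = a[r]; b[s] -= a[r]; P.remove(r)
        else:
            t[(r, s)] = b[s]; a[r] -= b[s]; Q.remove(s)
    if len(P) == 1:                        # one row left: fill its columns
        r = next(iter(P))
        for s in Q:
            J.append((r, s)); t[(r, s)] = b[s]
    else:                                  # one column left: fill its rows
        s = next(iter(Q))
        for r in P:
            J.append((r, s)); t[(r, s)] = a[r]
    return J, t
```

As in the proof below, the basis produced always has exactly p+q-1 couples.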
Section 3. Solution of the Transportation Problem 187
Proof: Let n be the number of times we go through the "while" loop. At each step |P| + |Q| is decreased by one unit. At the exit of the loop we have, say, |P| = 1. Thus we will then have

        |P| + |Q| = p + q - n,        |Q| = p + q - n - 1,        n + |Q| = p + q - 1.

(*)     (system not reproduced)

and thus x_rs = 0 is the unique possibility for the rth equation of (*). The argument follows along this line on the successively reduced transportation problems.
Corollary: Transportation problem (T), as defined in Definition 1, always has a feasible solution.
(9')    u_k + v_ℓ = d_kℓ    for all (k,ℓ) ∈ J

A_I^J being triangular, (9) (or (9')) can be solved very easily, as we see in the next example.
We get v_2 = 3. Then, from v_2 = 3, we deduce that u_2 = -3, and from v_4 = 7 that u_3 = 2. Finally, from u_3 = 2 we get v_3 = 13. And we get the table T(2) shown below.
(Table T(2) not reproduced.)
dual variables computation routine
   u_1 := 0;  P := {1};  P' := {1};  Q := ∅
   while P ≠ {1,2,…,p} or Q ≠ {1,2,…,q} do
      Q' := ∅
      for all (k,ℓ) ∈ J with k ∈ P' and ℓ ∈ {1,2,…,q} \ Q do
         v_ℓ := d_kℓ - u_k;  Q := Q ∪ {ℓ};  Q' := Q' ∪ {ℓ}
      end for all
      P' := ∅
      for all (k,ℓ) ∈ J with ℓ ∈ Q' and k ∉ P do
         u_k := d_kℓ - v_ℓ;  P' := P' ∪ {k};  P := P ∪ {k}
      end for all
   end while
end routine
Remark 16: We will not give a formal proof of this routine. P' and Q' are the sets of indices for which values of u_k and v_ℓ have been computed in the preceding step. The reader will understand the mechanism of the algorithm by applying it to the preceding example.
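The routine amounts to propagating u_k + v_ℓ = d_kℓ (system (9')) through the basis, starting from u_1 = 0. The editorial Python sketch below (name ours) is less incremental than the book's P'/Q' bookkeeping but computes the same values.

```python
def dual_variables(J, d, p, q):
    """Solve u_k + v_l = d_kl for all (k,l) in the basis J (system (9')),
    normalizing u_0 = 0 (0-based indices)."""
    u, v = {0: 0}, {}
    basis = set(J)
    while len(u) < p or len(v) < q:
        progress = False
        for (k, l) in basis:
            if k in u and l not in v:
                v[l] = d[k][l] - u[k]; progress = True
            elif l in v and k not in u:
                u[k] = d[k][l] - v[l]; progress = True
        if not progress:
            raise ValueError("basis graph is not connected")
    return [u[k] for k in range(p)], [v[l] for l in range(q)]
```

Because a basis of (T) is a spanning tree of the bipartite factory/warehouse graph, the propagation always reaches every u_k and v_ℓ.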
(10)    d_rs - u_r - v_s = Min_{k,ℓ} [d_kℓ - u_k - v_ℓ]

for the column index to enter the basis. We now have to decide which index will leave the basis. Before giving a general answer, we will go back to our example.
(Table T(3) not reproduced.)
This implies, in considering the second column and the third row, that the corresponding basic variables become 4 + θ, … + θ, and … - θ.
(Tables T(4) and T(5) not reproduced.)

        J' = {(1,1), (1,2), (2,2), (2,3), (3,3), (3,4)}

Index (1,3) is now a candidate to enter the basis. The chain of adjustments on the basic variables is given by the following table:
(Tables T(7) and T(8) not reproduced.)
(11)    (equation not reproduced)

and we look for the largest value θ̂ of θ that keeps all basic variables nonnegative.

(i) J ∪ {s} can be partitioned into three subsets J', J+, J- with:

        x_j, j ∈ J':   x_j does not depend on θ;
        x_j, j ∈ J+:   x_j := x_j + θ;
        x_j, j ∈ J-:   x_j := x_j - θ.

Remark 19: Once we have determined J', J+, and J-, we have

        θ̂ = Min_{j ∈ J-} [x_j],

and the index of the variable to leave is r̄ ∈ J- with x_r̄ = θ̂.
change of basis routine
   J' := ∅;  P := {1,2,…,p};  Q := {1,2,…,q}
   repeat
      EXIST := FALSE
      for all k ∈ P do
         if |{ℓ' | (k,ℓ') ∈ J \ J'}| = 1 then
            ℓ := unique element({ℓ' | (k,ℓ') ∈ J \ J'})
            J' := J' ∪ {(k,ℓ)};  P := P \ {k};  EXIST := TRUE
         end if
      end for all
      for all ℓ ∈ Q do
         if |{k' | (k',ℓ) ∈ J \ J'}| = 1 then
            k := unique element({k' | (k',ℓ) ∈ J \ J'})
            J' := J' ∪ {(k,ℓ)};  Q := Q \ {ℓ};  EXIST := TRUE
         end if
      end for all
   until EXIST = FALSE
   k := r;  j := s
   iterate
      ℓ := unique element({ℓ' | (k,ℓ') ∈ J \ J'; ℓ' ≠ j});  J- := J- ∪ {(k,ℓ)};  j := ℓ
      exit if ℓ = s
      k := unique element({k' | (k',ℓ) ∈ J \ J'; k' ≠ j});  J+ := J+ ∪ {(k,ℓ)};  j := k
   end iterate
   J := J ∪ {(r,s)} \ {r̄}
end routine
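In graph terms, the first phase of the routine strips couples that are alone in their row or column of J ∪ {(r,s)}; what survives is the unique chain, which the second phase labels alternately as J- and J+. The Python sketch below is an editorial illustration of both phases (0-based indices, names ours).

```python
def find_cycle(J, entering):
    """Strip couples that are alone in their row or column of J u {entering};
    the surviving couples form the unique chain of adjustments."""
    cells = set(J) | {entering}
    changed = True
    while changed:
        changed = False
        for c in list(cells):
            in_row = [x for x in cells if x[0] == c[0]]
            in_col = [x for x in cells if x[1] == c[1]]
            if len(in_row) == 1 or len(in_col) == 1:
                cells.remove(c)
                changed = True
    return cells

def partition(cycle, entering):
    """Label the chain alternately: the entering cell gets +theta, then we
    alternate -theta / +theta moving along a row, then a column, and so on."""
    plus, minus = [entering], []
    cur, along_row, seen = entering, True, {entering}
    while len(seen) < len(cycle):
        if along_row:
            nxt = next(c for c in cycle if c[0] == cur[0] and c not in seen)
            minus.append(nxt)
        else:
            nxt = next(c for c in cycle if c[1] == cur[1] and c not in seen)
            plus.append(nxt)
        seen.add(nxt); cur = nxt; along_row = not along_row
    return plus, minus
```

On a 2×2 basis {(0,0), (0,1), (1,1)} with (1,0) entering, the whole square is the chain: (1,0) and (0,1) receive +θ, while (1,1) and (0,0) receive -θ.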
We let the reader check that this routine performs the change of basis and the change of basic solution as indicated in Remarks 18 and 19. The solution algorithm for the transportation problem is given next.
Call initialization routine
iterate
   Call dual variables computation routine
   Find (r,s) such that d_rs - u_r - v_s = Min_{k,ℓ} [d_kℓ - u_k - v_ℓ]
   exit if d_rs - u_r - v_s ≥ 0
   Call change of basis routine
end iterate
        Σ_{k=1}^{n} t_kℓ = 1        ℓ = 1,2,…,n

(A)     Σ_{ℓ=1}^{n} t_kℓ = 1        k = 1,2,…,n        t_kℓ ∈ {0,1}

        Σ_{k=1}^{n} Σ_{ℓ=1}^{n} d_kℓ t_kℓ = z(Min)
Section 4. The Assignment Problem
Theorem 5: Problem (A) can be solved ignoring the integrity constraints t_kℓ ∈ {0,1}. More precisely, every basic optimal solution of

        Σ_{k=1}^{n} t_kℓ = 1        ℓ = 1,2,…,n

(A')    Σ_{ℓ=1}^{n} t_kℓ = 1        k = 1,2,…,n        t_kℓ ≥ 0

        Σ_{k=1}^{n} Σ_{ℓ=1}^{n} d_kℓ t_kℓ = z(Min)

is an optimal solution of (A).
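For small n, Theorem 5 can be illustrated by brute force: every permutation of {1,…,n} is an integer feasible point of (A'), and an optimal one attains the optimum of (A). The editorial sketch below (name ours) enumerates all permutations.

```python
from itertools import permutations

def solve_assignment(d):
    """Brute-force the n x n assignment problem: return the permutation
    perm (worker k gets job perm[k]) of minimal total cost, with that cost."""
    n = len(d)
    best = min(permutations(range(n)),
               key=lambda perm: sum(d[k][perm[k]] for k in range(n)))
    return best, sum(d[k][best[k]] for k in range(n))

best, total = solve_assignment([[1, 2], [2, 4]])
assert best == (1, 0) and total == 4
```

This is only practical for tiny n (n! permutations); the point is that the minimum over permutations coincides with the LP optimum of (A').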
EXERCISES
is optimal. Is it unique?
(e) Give all the solutions of this transportation problem for all values of (α, β).
8. We consider an assignment problem for which the objective function is to be maximized (think of d_kℓ as returns instead of costs). Solve the example defined by the return matrix.
(Return matrix not reproduced.)
[3] Gale, D., The Theory of Linear Economic Models, McGraw-Hill, New York, 1960.
[6] Ford, L.R. and D.R. Fulkerson, Flows in Networks, Princeton University Press, 1962.
[7] Simonnard, M., Linear Programming, Prentice-Hall, Englewood Cliffs, New Jersey, 1965.
Aide Memoire and Index of Algorithms
AIDE MEMOIRE
        Ax = b        x ≥ 0
(P)
        cx = z(Max)

Basis: A set J of m indices such that A_J is nonsingular.

Basic solution (associated with basis J): x_j = 0 for j ∉ J; x_J = A_J^{-1} b.

Feasible basis: A basis such that the corresponding basic solution is feasible, i.e., a basis J such that A_J^{-1} b ≥ 0.
        π = c_J A_J^{-1};        c̄ = c - πA.

Optimal basis: A basis such that the corresponding basic solution is optimal; by virtue of Theorem IV.3, a basis is optimal if and only if

        c̄ = c - πA ≤ 0.

Basic optimal solution: A basic feasible solution which is optimal, or in other words, a basic solution which corresponds to an optimal basis.