Scheduling Problems and Solutions
Uwe Schwiegelshohn

[Figure: scheduling assigns tasks (jobs) to resources (machines) over time, subject to constraints and one or more objectives.]
Areas:
n Manufacturing and production
n Transportation and distribution
n Information processing
Example 1: Paper Bag Factory
n Different applications
l unknown processing times
l known distributions (average, variance)
l priority levels
n Multitasking environment
l preemption
n Minimization of the sum of expected weighted completion times
Information Flow Diagram in a Manufacturing System
[Diagram: orders and demand forecasts feed production planning and master scheduling; quantities and due dates flow to material requirements planning and capacity planning; material requirements flow to scheduling and rescheduling; the detailed schedule goes to dispatching and shopfloor management, which return schedule performance and shop status.]
Information Flow Diagram in a Service System
[Diagram: a database of status and history feeds forecasting; forecasts, prices and rules feed yield management and scheduling; the customer places orders or makes reservations and receives accept/reject decisions (with conditions).]
Abstract Models
Machine environments:
1 : single machine
Pm : m identical machines in parallel
Qm : m machines in parallel with different speeds
Rm : m unrelated machines in parallel
Fm : flow shop with m machines in series
l each job must be processed on each machine using the same route
l queues between the machines
l FIFO queues => permutation flow shop
FFc : flexible flow shop with c stages in series and several identical machines at each stage; a job needs processing on only one (arbitrary) machine at each stage
Machine Environment
n no-wait (nwt)
è A job is not allowed to wait between two successive executions on different machines (Fm, FFc)
n recirculation (recirc)
Objective Functions
n Uj = 1 if Cj > dj, and Uj = 0 otherwise

Objective Functions (2)
[Figure: Lj, Tj and Uj plotted as functions of Cj; Lj grows linearly through dj, Tj = max(0, Cj - dj) stays 0 until dj, and Uj jumps from 0 to 1 at Cj = dj.]
α | β | γ

Precedence Constraints: Original Schedule

jobs 1  2  3  4  5  6  7  8  9  10
pj   8  7  7  2  3  2  2  8  8  15

[Figure: precedence graph of the ten jobs and the Gantt chart of the resulting two-machine schedule, time axis 0 to 30.]
Precedence Constraints (2): Processing Time 1 Unit Less

jobs 1  2  3  4  5  6  7  8  9  10
pj   7  6  6  1  2  1  1  7  7  14

[Figure: same precedence graph; the Gantt chart shows that reducing every processing time by one unit changes the structure of the schedule.]
Precedence Constraints (3): Original Processing Times and 3 Machines

jobs 1  2  3  4  5  6  7  8  9  10
pj   8  7  7  2  3  2  2  8  8  15

[Figure: same precedence graph; the Gantt chart on three machines shows that adding a machine also changes the structure of the schedule.]
Active Schedule
[Gantt chart of the active schedule for the two jobs on three machines, time axis 0 to 8.]
Example:
Consider again a schedule with three machines and two jobs. The
routing of the two jobs is the same as in the previous example.
n The processing times of job 1 on machines 1 and 2 are both equal
to 1.
n The processing times of job 2 on machines 2 and 3 are both equal
to 2.
Semi-active Schedule (Example)
[Gantt chart of the semi-active schedule for the two jobs on three machines, time axis 0 to 8.]

All Schedules
[Venn diagram of the three classes of nonpreemptive schedules: the nonpreemptive nondelay schedules (marked X), the active schedules, and the semi-active schedules.]
Complexity Hierarchy
n Reductions among problems:
1 || Σ Cj ∝ 1 || Σ wj Cj ∝ Pm || Σ wj Cj ∝ Qm | prec | Σ wj Cj
n Complex cases:
α | β | Lmax ∝ α | β | Σ Uj
α | β | Lmax ∝ α | β | Σ Tj

Complexity Hierarchies of Deterministic Scheduling Problems
[Diagrams: machine environments ordered by generality (1, Pm, Qm, Rm; Fm, FFc, Jm, FJc; Om) and objective functions ordered by generality (Cmax, Lmax, Σ Cj, Σ wj Uj, Σ wj Tj).]
Deterministic Scheduling Problems
[Diagram: deterministic scheduling problems split into problems with polynomial-time solutions and NP-hard problems; the NP-hard problems split into those NP-hard in the ordinary sense (pseudopolynomial solutions exist) and the strongly NP-hard ones.]
Complexity of Makespan Problems
1. 1 || Cmax
2. P2 || Cmax
3. F2 || Cmax
4. Jm || Cmax
5. FFc || Cmax
[Diagram: 1 || Cmax and F2 || Cmax are easy; P2 || Cmax, FFc || Cmax and Jm || Cmax are hard.]
Complexity of Maximum Lateness Problems
1. 1 || Lmax
2. 1 | prmp | Lmax
3. 1 | rj | Lmax
4. 1 | rj, prmp | Lmax
5. Pm || Lmax
[Diagram: 1 || Lmax, 1 | prmp | Lmax and 1 | rj, prmp | Lmax are easy; 1 | rj | Lmax and Pm || Lmax are hard.]
Total Weighted Completion Time
n 1 || Σ wj Cj
n Weighted Shortest Processing Time first (WSPT): schedule the jobs in non-increasing order of the Smith ratio wj / pj
n Proof by contradiction: if a schedule violates the WSPT rule, it violates it for a pair of neighboring jobs h and k; interchanging h and k does not increase the objective
[Figure: the schedule with h directly before k at time t, and the schedule with k directly before h.]
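The WSPT rule itself is a one-line sort. A minimal sketch in Python (the job data here is illustrative, not from the slides):

```python
# WSPT for 1 || sum w_j C_j: sort by non-increasing Smith ratio w_j / p_j.
def wspt_schedule(jobs):
    """jobs: list of (p_j, w_j). Returns (sequence of indices, sum of w_j * C_j)."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][1] / jobs[j][0], reverse=True)
    t = total = 0
    for j in order:
        t += jobs[j][0]          # completion time C_j of job j
        total += jobs[j][1] * t  # add w_j * C_j
    return order, total

seq, obj = wspt_schedule([(3, 6), (6, 18), (5, 8)])  # Smith ratios 2, 3, 1.6
```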
n The δ-factor of a chain 1, ..., k is
δ = max over 1 ≤ l ≤ k of ( Σ_{j=1}^{l} wj ) / ( Σ_{j=1}^{l} pj )
n l* is the prefix length that attains this maximum, i.e. job l* determines the δ-factor of the chain 1, ..., k
Algorithm: Total Weighted
Completion Time with Chains
Whenever the machine is available, select among the remaining chains
the one with the highest δ-factor. Schedule all jobs from this chain
(without interruption) until the job that determines the δ-factor.
Proof concept:
There is an optimal schedule that processes all jobs 1, ..., l* in succession, plus pairwise interchange of chains
Example: Total weighted
Completion Time with Chains
n Consider the following two chains:
1 → 2 → 3 → 4
and
5 → 6 → 7
The weights and processing times of the jobs are given in the following table
jobs 1 2 3 4 5 6 7
wj 6 18 12 8 8 17 18
pj 3 6 6 5 4 8 10
Example: Total Weighted
Completion Time with Chains (2)
n δ-factor of the first chain: (6 + 18) / (3 + 6) = 24/9, determined by job 2
n δ-factor of the second chain: (8 + 17) / (4 + 8) = 25/12 < 24/9, determined by job 6
è Jobs 1 and 2 are processed first
n δ-factor of the remaining part of the first chain: 12/6 = 2 < 25/12, determined by job 3
è Jobs 5 and 6 are scheduled next
n w7 / p7 = 18/10 < 12/6 è Job 3 is scheduled next
n w4 / p4 = 8/5 < 18/10 è then Job 7 and finally Job 4
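The chain procedure above can be traced in code. A small sketch using exact fractions for the δ-factors, run on the example's two chains:

```python
from fractions import Fraction

def delta_factor(chain):
    """chain: list of (job, p, w). Returns (rho, l_star): the maximum prefix
    ratio sum(w)/sum(p) and the prefix length attaining it."""
    best, lstar, sp, sw = Fraction(-1), 0, 0, 0
    for l, (_, p, w) in enumerate(chain, 1):
        sp += p
        sw += w
        if Fraction(sw, sp) > best:
            best, lstar = Fraction(sw, sp), l
    return best, lstar

def schedule_chains(chains):
    """Repeatedly pick the chain with the highest delta-factor and schedule its
    jobs up to and including the job that determines the factor."""
    chains = [list(c) for c in chains]
    seq = []
    while any(chains):
        rhos = [delta_factor(c) if c else (Fraction(-1), 0) for c in chains]
        i = max(range(len(chains)), key=lambda k: rhos[k][0])
        seq += chains[i][:rhos[i][1]]
        chains[i] = chains[i][rhos[i][1]:]
    return [job for job, _, _ in seq]

chain1 = [(1, 3, 6), (2, 6, 18), (3, 6, 12), (4, 5, 8)]  # (job, p_j, w_j)
chain2 = [(5, 4, 8), (6, 8, 17), (7, 10, 18)]
seq = schedule_chains([chain1, chain2])
```

This reproduces the sequence 1, 2, 5, 6, 3, 7, 4 derived above.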
Algorithm: Total Weighted
Completion Time with Chains (2)
n 1 | prec | Σ wj Cj is strongly NP-hard for arbitrary precedence constraints
n 1 | rj, prmp | Σ wj Cj is strongly NP-hard
è preemptive WSPT (by remaining processing time) is not optimal
Example: it can pay to select another job that can be completed before the next release
n 1 | rj, prmp | Σ Cj is easy
n 1 | rj | Σ Cj is strongly NP-hard
n 1 || Σ wj (1 - e^(-r Cj)) can be solved optimally with the Weighted Discounted Shortest Processing Time first (WDSPT) rule: schedule the jobs in non-increasing order of
( wj e^(-r pj) ) / ( 1 - e^(-r pj) )
Maximum Lateness
n 1 | prec | hmax
[Figure: the cost functions hj*(Cj) and hj**(Cj) of two candidate jobs j* and j** at the common completion times Cj*, Cj**; scheduling the job with the smaller cost at that time last does not increase the maximum cost.]
Example: Minimizing Maximum Cost

jobs     1       2       3
pj       2       3       5
hj(Cj)   1 + Cj  1.2 Cj  10

n Cmax = 2 + 3 + 5 = 10
Proof:
n Reduction of 3-Partition to 1 | rj | Lmax: given integers a1, ..., a3t, b with
b/4 < aj < b/2 and Σ_{j=1}^{3t} aj = t·b
è n = 4t - 1 jobs:
rj = jb + (j - 1), pj = 1, dj = jb + j, for j = 1, ..., t - 1
rj = 0, pj = a_{j-t+1}, dj = tb + (t - 1), for j = t, ..., 4t - 1
1 | rj | Lmax Complexity Proof
n Finding bounds:
If there is a better schedule than the one generated by a branch, the branch can be ignored.
n 1 | rj, prmp | Lmax can be solved by the preemptive EDD rule (nondelay schedule); its value is a lower bound.
If this rule creates a nonpreemptive schedule, that schedule is optimal.
Branch and Bound Applied to
Minimizing Maximum Lateness
jobs 1 2 3 4
pj 4 2 6 5
rj 0 1 3 5
dj 8 12 11 10
[Gantt chart of one branch: schedule 1, 3, 4, 3, 2 with job boundaries at times 4, 5, 10, 15, 17 and Lmax = 5.]
Add j* to J.
Delete j* from Jc.
Go to Step 3.
n Step 3 If Σ_{j∈J} pj ≤ dj*, go to Step 4;
otherwise let k* denote the job which satisfies pk* = max_{j∈J} pj.
Delete k* from J.
Add k* to Jd.
n Step 4 If Jc = ∅ STOP, otherwise go to Step 2.
Algorithm: Minimizing Number of
Tardy Jobs (Proof of Optimality)
n The worst-case computation time is that of simple sorting: O(n·log n)
n Optimality proof
d1 ≤ d2 ≤ ... ≤ dn (appropriate ordering)
Jk : subset of the jobs {1, ..., k} such that
(I) it contains the maximum number Nk = |Jk| of jobs among {1, ..., k} completed by their due dates,
(II) of all subsets of {1, ..., k} with Nk jobs completed by their due dates it has the smallest total processing time
è Jn corresponds to an optimal schedule
Algorithm: Minimizing Number of
Tardy Jobs (Proof of Optimality) (2)
n Proof concept: induction
è correct for k = 1; assume the claim is correct for k
1. Job k+1 is added to set Jk and is completed by its due date
è Nk+1 = Nk + 1 and Jk+1 = Jk ∪ {k + 1}
√ minimum total processing time
2. Job k+1 is added to set Jk and is not completed in time ⇒ the job with the longest processing time is deleted
è Nk+1 = Nk
è the total processing time of Jk is not increased
è no other subset of {1, ..., k+1} can have Nk on-time completions and a smaller total processing time
Example: Minimizing Number of
Tardy Jobs
jobs 1 2 3 4 5
pj 7 8 4 6 6
dj 9 17 18 19 21
n Job 1 fits: J1 = {1}
n Job 2 fits: J2 = {1, 2}
n Job 3 does not fit: J3 = {1, 3}
n Job 4 fits: J4 = {1, 3, 4}
n Job 5 does not fit: J5 = {3, 4, 5}
è schedule order 3, 4, 5, (1, 2) with Σ Uj = 2
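The steps above follow the algorithm sketched earlier (EDD order; on a due-date violation, drop the scheduled job with the largest processing time), usually credited to Moore and Hodgson. A compact heap-based sketch, run on the example's data:

```python
import heapq

def min_tardy_jobs(jobs):
    """jobs: list of (p_j, d_j). Returns the set of on-time job indices."""
    on_time, t, heap = set(), 0, []
    for j in sorted(range(len(jobs)), key=lambda j: jobs[j][1]):  # EDD order
        p, d = jobs[j]
        on_time.add(j)
        heapq.heappush(heap, (-p, j))
        t += p
        if t > d:                          # due-date violation
            neg_p, k = heapq.heappop(heap)  # scheduled job with largest p
            on_time.discard(k)
            t += neg_p                     # i.e. t -= largest p
    return on_time

jobs = [(7, 9), (8, 17), (4, 18), (6, 19), (6, 21)]  # the example's data
on_time = min_tardy_jobs(jobs)  # 0-indexed: jobs 3, 4, 5 of the slides
```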
n 1 || Σ wj Uj is NP-hard
è already with all due dates being the same it is equivalent to the knapsack problem:
è size of the knapsack: d = dj
è size of item j: pj
è benefit of item j: wj
Example: Minimizing Number of
Tardy Jobs (2)
n Heuristic: WSPT rule (wj / pj ordering)
è the ratio Σ wj Uj (WSPT) / Σ wj Uj (OPT) may be arbitrarily large

jobs 1   2   3
pj   11  9   90
wj   12  9   89
dj   100 100 100
Total Tardiness
2. Any sequence that is optimal for the second instance is optimal for the first instance as well.
Assumption: d1 ≤ ... ≤ dn and pk = max (p1, ..., pn)
è the job with the kth smallest due date has the largest processing time
3. There is an integer δ, 0 ≤ δ ≤ n - k, such that there is an optimal sequence S in which job k is preceded by all other jobs j with j ≤ k + δ and followed by all jobs j with j > k + δ.
è An optimal sequence consists of
1. jobs 1, ..., k-1, k+1, ..., k+δ in some order
2. job k
3. jobs k+δ+1, ..., n in some order
with Ck(δ) = Σ_{j ≤ k+δ} pj.
Algorithm: Minimizing Total
Tardiness
è Use of a special class of subsets of {1, ..., n}:
n J(j, l, k): all jobs in the set {j, ..., l} with processing time ≤ pk, excluding job k itself
n V(J(j, l, k), t): total tardiness of this subset in an optimal sequence that starts at time t
n Algorithm: Minimizing Total Tardiness
Initial conditions:
V(∅, t) = 0
V({j}, t) = max (0, t + pj - dj)
Recursive relation:
V(J(j, l, k), t) = min over δ of [ V(J(j, k'+δ, k'), t) + max(0, Ck'(δ) - dk') + V(J(k'+δ+1, l, k'), Ck'(δ)) ]
where k' is the job with the largest processing time in J(j, l, k)
n The number of subsets is polynomial in n; together with the possible start times t the running time is pseudopolynomial.
Example: Minimizing Total
Tardiness
jobs 1 2 3 4 5
pj 121 79 147 83 130
dj 260 266 266 336 337
è V({1, ..., 5}, 0) = min( 0 + 81 + 317, 0 + 164 + 223, 76 + 294 + 0 ) = 370
An optimal schedule has total tardiness 370.
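For an instance this small the DP value can be checked by brute force over all 5! sequences; a sanity-check sketch on the example's data:

```python
from itertools import permutations

p = [121, 79, 147, 83, 130]    # the example's processing times
d = [260, 266, 266, 336, 337]  # and due dates

def total_tardiness(seq):
    t = tardiness = 0
    for j in seq:
        t += p[j]
        tardiness += max(0, t - d[j])
    return tardiness

best = min(permutations(range(5)), key=total_tardiness)  # exhaustive search
```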
n Running time is bounded by a polynomial (of fixed degree) in n and 1/ε

Total Tardiness: An Approximation Scheme (2)
a) n jobs can be scheduled with 0 total tardiness iff the EDD schedule has 0 total tardiness
è Tmax(EDD) ≤ Σ Tj (OPT) ≤ Σ Tj (EDD) ≤ n · Tmax(EDD)
n Step 3 Apply the algorithm "Minimizing Total Tardiness" (slides 60/61) to the data rescaled by K.
Example: PTAS for Minimizing
Total Tardiness
jobs 1 2 3 4 5
pj 1210 790 1470 830 1300
dj 1996 2000 2660 3360 3370
n Optimal total tardiness: 3700
n Tmax(EDD) = 2230; ε = 0.02 → K = 2.973
è optimal sequences for the rescaled problem: 1, 2, 4, 5, 3 and 2, 1, 4, 5, 3
n Objective Σ Ej + Σ Tj
è more difficult than total tardiness
n Special case: dj = d for all jobs j
n Properties:
• No idleness between any two jobs in the optimal schedule (the first job need not start at time 0)
• A schedule S partitions the jobs into 2 disjoint sets: early jobs (set J1) and late jobs (set J2)
• Optimal schedule: the early jobs (J1) use Longest Processing Time first (LPT); the late jobs (J2) use Shortest Processing Time first (SPT)
Total Earliness and Tardiness (2)
Assume that the first job can start its processing after t = 0 and p1 ≥ p2 ≥ ... ≥ pn

Algorithm: Minimizing Total Earliness and Tardiness with a Loose Due Date

α | β | γ1 (opt), γ2 : γ1 is the primary objective, γ2 the secondary objective
Example: 1 || Σ Cj (opt), Lmax
d̄j = dj + z : new deadline derived from the old due date
The Optimal Schedule
n Proof: If the first condition is not met, the schedule will miss a deadline
è pairwise exchange of job l and job k (not necessarily adjacent)
è decreases Σ Cj if the second condition does not hold for l and k
Algorithm: Minimizing Total
Completion Time with Deadlines
n Step 1 Set k = n, τ = Σ_{j=1}^{n} pj, Jc = {1, ..., n}.
n Step 2 Find k* in Jc such that dk* ≥ τ and pk* ≥ pl for all jobs l in Jc with dl ≥ τ.
Place job k* in position k.
n Step 3 Decrease k by 1.
Decrease τ by pk*.
Delete job k* from Jc.
n Step 4 If k ≥ 1 go to Step 2, otherwise STOP.
Example: Minimizing Total
Completion Time with Deadlines
jobs 1 2 3 4 5
pj 4 6 2 4 2
dj 10 12 14 18 18
n τ = 18 ⇒ d4 = d5 = 18 ≥ τ; p4 = 4 > 2 = p5
è last job: 4
è τ = 18 - p4 = 14 ⇒ d3 = 14 ≥ 14, d5 = 18 ≥ 14; p5 = 2 = p3
è either job can go in the now-last position: 3
è τ = 14 - p3 = 12 ⇒ d5 = 18 ≥ 12, d2 = 12 ≥ 12; p2 = 6 > 2 = p5
è next-to-last job: 2
è τ = 12 - p2 = 6 ⇒ d5 = 18 ≥ 6, d1 = 10 ≥ 6; p1 = 4 > 2 = p5
è sequence 5, 1, 2, 3, 4 (or, exchanging the tied jobs, 3, 1, 2, 5, 4)
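The backward rule traced above can be sketched directly; ties are broken here by the lowest job index, so the code returns the 5, 1, 2, 3, 4 variant (0-indexed):

```python
def min_sum_cj_with_deadlines(p, dbar):
    """Fill positions n, n-1, ..., 1: among unscheduled jobs whose deadline
    covers the current total load tau, place one with the largest p_j last."""
    tau, remaining = sum(p), sorted(range(len(p)))
    seq = [None] * len(p)
    for k in range(len(p) - 1, -1, -1):
        feasible = [j for j in remaining if dbar[j] >= tau]
        jstar = max(feasible, key=lambda j: p[j])  # empty list => infeasible instance
        seq[k] = jstar
        remaining.remove(jstar)
        tau -= p[jstar]
    return seq

p = [4, 6, 2, 4, 2]
dbar = [10, 12, 14, 18, 18]
seq = min_sum_cj_with_deadlines(p, dbar)
```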
Multiple Objectives
n 1 | β | Θ1 γ1 + Θ2 γ2 with normalization Θ1 + Θ2 = 1

Pareto-Optimal Schedule
è Θ1 → 0 and Θ2 → 1: 1 | β | Θ1 γ1 + Θ2 γ2 → 1 | β | γ2 (opt), γ1
è Θ1 → 1 and Θ2 → 0: 1 | β | Θ1 γ1 + Θ2 γ2 → 1 | β | γ1 (opt), γ2
n Example: γ1 = Σ Cj and γ2 = Lmax
[Figure: trade-off curve between total completion time Σ Cj and maximum lateness.]
n Step 1 Set r = 1.
Set Lmax = Lmax(EDD) and d̄j = dj + Lmax.
n Step 2 Set k = n and Jc = {1, ..., n}.
Set τ = Σ_{j=1}^{n} pj and δ = τ.
n Step 3 Find j* in Jc such that d̄j* ≥ τ and pj* ≥ pl for all jobs l in Jc such that d̄l ≥ τ.
Put job j* in position k of the sequence.
n Step 4 If there is no job l such that d̄l < τ and pl > pj*, go to Step 5.
Otherwise find j** such that τ - d̄j** = min over such jobs l of (τ - d̄l);
set δ = min(δ, τ - d̄j**).
n Step 5 Decrease k by 1.
Decrease τ by pj*.
Delete job j* from Jc.
If k ≥ 1 go to Step 3, otherwise go to Step 6.
n Step 6 Set Lmax = Lmax + δ.
If Lmax > Lmax(SPT/EDD), then STOP.
Otherwise set r = r + 1, d̄j = d̄j + δ, and go to Step 2.
jobs 1  2  3  4  5
pj   1  3  6  7  9
dj   30 27 20 15 12

r  (Σ Cj, Lmax)  sequence       d̄1 d̄2 d̄3 d̄4 d̄5  δ
1  (96, 2)       5, 4, 3, 1, 2  32  29  22  17  14   1
2  (77, 3)       1, 5, 4, 3, 2  33  30  23  18  15   2
3  (75, 5)       1, 4, 5, 3, 2  35  32  25  20  17   1
4  (64, 6)       1, 2, 5, 4, 3  36  33  26  21  18   2
5  (62, 8)       1, 2, 4, 5, 3  38  35  28  23  20   3
6  (60, 11)      1, 2, 3, 5, 4  41  38  31  26  23   3
7  (58, 14)      1, 2, 3, 4, 5  44  41  34  29  26   Stop
n 1 || Θ1 Σ wj Cj + Θ2 Lmax
n The extreme points can be determined in polynomial time (WSPT/EDD and EDD)
è but the problem with arbitrary weights Θ1 and Θ2 is NP-hard.
Parallel Machine Models
n 2-step process
è allocation of jobs to machines
è sequencing of the jobs on each machine
n Assumption: p1 ≥ p2 ≥ ... ≥ pn
n Simplest problem: Pm || Cmax
n Special case: P2 || Cmax
è equivalent to Partition (NP-hard in the ordinary sense)
Heuristic algorithm
n Longest Processing Time first (LPT)
n The starting time of the last job n satisfies
Cmax(LPT) - pn ≤ ( Σ_{j=1}^{n-1} pj ) / m
è Cmax(LPT) ≤ pn + ( Σ_{j=1}^{n-1} pj ) / m = pn (1 - 1/m) + ( Σ_{j=1}^{n} pj ) / m
n Lower bound: ( Σ_{j=1}^{n} pj ) / m ≤ Cmax(OPT)
Heuristic algorithm (3)
n Assume, for contradiction, that
4/3 - 1/(3m) < Cmax(LPT) / Cmax(OPT) ≤ [ pn (1 - 1/m) + ( Σ_{j=1}^{n} pj ) / m ] / Cmax(OPT)
= pn (1 - 1/m) / Cmax(OPT) + ( Σ_{j=1}^{n} pj ) / ( m · Cmax(OPT) )
≤ pn (1 - 1/m) / Cmax(OPT) + 1
è 4/3 - 1/(3m) - 1 < pn (1 - 1/m) / Cmax(OPT)
è Cmax(OPT) < 3 pn
è on each machine at most 2 jobs are processed
è LPT is optimal for this case, a contradiction
Example: A Worst Case Example for LPT

jobs 1 2 3 4 5 6 7 8 9
pj   7 7 6 6 5 5 4 4 4

n 4 parallel machines
n Cmax(OPT) = 12: machine loads 7+5, 7+5, 6+6, 4+4+4
n Cmax(LPT) = 15
[Gantt chart of the LPT schedule: machine loads 7+4+4, 7+4, 6+5, 6+5.]
n Cmax(LPT) / Cmax(OPT) = 15/12 = 5/4 = 4/3 - 1/12
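LPT list scheduling is easy to state in code. A sketch using a min-heap of machine loads, tested on the worst-case instance above:

```python
import heapq

def lpt_makespan(p, m):
    """List scheduling in LPT order: always give the next-longest job to the
    currently least-loaded machine; returns the makespan."""
    loads = [0] * m
    heapq.heapify(loads)
    for pj in sorted(p, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + pj)
    return max(loads)
```

On the worst-case instance this yields 15 against the optimal 12, matching the 5/4 ratio.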
Critical Path (CP) Rule
n Precedence constraints: intrees and outtrees
[Figure: an intree with its 5 starting jobs at the highest level; levels are numbered 1, 2, 3, ...; Imax denotes the highest level and N(l) the number of jobs at level l.]
n H(Imax + 1 - r) = Σ_{k=1}^{r} N(Imax + 1 - k) : the number of jobs in the r highest levels
n Cmax(CP) / Cmax(OPT) ≤ 4/3 for 2 machines
Example: A Worst Case Example of CP
n 6 jobs, 2 machines
[Figure: precedence graph of the 6 jobs; the CP schedule versus the optimal schedule on the 2 machines.]
Example: Application of LNS Rule
n Alternative approach:
è LNS: Largest Number of Successors first
è also optimal for intrees and outtrees
[Figure: 6 jobs on 2 machines; jobs 1, 4 and 6 each have 2 successors; an LNS schedule and an alternative optimal schedule.]

Example: Application of LNS Rule (2)
n Generalization to arbitrary processing times:
n CP, LNS: use the largest total amount of remaining processing instead
Pm | pj = 1, Mj | Cmax
n Least Flexible Job first (LFJ)
[Gantt charts: an LFJ schedule with machine rows 1 4 5 6 / 2 7 8 / 3 versus an optimal schedule with rows 2 1 5 7 / 3 4 6 8.]
§ Pm | prmp | Cmax
n Preemptions ⇒ often simpler analysis
n LP formulation with xij = amount of processing of job j on machine i:
Σ_{i=1}^{m} xij = pj (job j is fully processed)
Σ_{i=1}^{m} xij ≤ Cmax (the total processing of a job is less than the makespan)
n Lower bound:
Cmax ≥ max( p1, ( Σ_{j=1}^{n} pj ) / m ) = C*max

Algorithm
n Step 1: Process all jobs on a single machine ⇒ makespan Σ pj ≤ m · C*max
n Step 2: Cut this single-machine schedule into m parts of length C*max
n Step 3: Execute each part on a different machine
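Steps 1 to 3 are the wrap-around rule usually credited to McNaughton. A sketch returning C*max and the per-machine pieces (floating-point times; the small tolerance only absorbs rounding):

```python
def mcnaughton(p, m):
    """C*max = max(max p_j, sum p_j / m); lay the jobs out in one line and cut
    it into m sections of length C*max. Returns (C*max, pieces per machine),
    each piece a (job, start, end) triple."""
    cstar = max(max(p), sum(p) / m)
    machines = [[] for _ in range(m)]
    i, t = 0, 0.0
    for j, pj in enumerate(p):
        remaining = float(pj)
        while remaining > 1e-9:
            piece = min(remaining, cstar - t)
            machines[i].append((j, t, t + piece))
            t += piece
            remaining -= piece
            if cstar - t < 1e-9:      # machine i is full, move to the next one
                i, t = i + 1, 0.0
    return cstar, machines

cstar, machines = mcnaughton([8, 7, 6], 2)  # same data as the LRPT example
```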
Proof: Application of LRPT Rule
n Vector majorization: p(t) ≥m q(t) iff Σ_{j=1}^{k} p(j)(t) ≥ Σ_{j=1}^{k} q(j)(t) for all k = 1, ..., n
n Σ_{j=1}^{n} pj(t+1) ≤ Σ_{j=1}^{n} pj(t) - 1 and Σ_{j=1}^{n} qj(t+1) ≤ Σ_{j=1}^{n} qj(t) - 1
n If p(t) ≥m q(t), then LRPT guarantees p(t+1) ≥m q(t+1)

Proof: Application of LRPT Rule (2)
n LRPT yields an optimal schedule for Pm | prmp | Cmax
n Compare an arbitrary rule*, p(t) → p'(t+1), with LRPT, p(t) → p(t+1)
è p'(t+1) ≥m p(t+1)

Example: Application of LRPT Rule
n p1 = 8, p2 = 7, p3 = 6 on 2 machines
[Gantt chart of the discrete-time LRPT schedule; the makespan is 11.]
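A discrete-time simulation of LRPT (preemption only at integer times) reproduces the makespan of 11 for p = (8, 7, 6) on 2 machines:

```python
def lrpt_makespan(p, m):
    """In each unit interval, run the (up to) m jobs with the longest remaining
    processing times; returns the resulting makespan."""
    rem, t = list(p), 0
    while any(r > 0 for r in rem):
        for j in sorted(range(len(rem)), key=lambda j: -rem[j])[:m]:
            if rem[j] > 0:
                rem[j] -= 1
        t += 1
    return t
```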
Example: Application of LRPT in Continuous Time
n Consider the same jobs as in the previous example. As preemptions may be done at any point in time, processor sharing takes place. The makespan is now 10.5.
[Gantt chart: machine 1 first processes job 1 and then shares jobs 1, 2, 3; machine 2 first shares jobs 2, 3 and then jobs 1, 2, 3; makespan 10.5.]

Example: Application of LRPT in Continuous Time (2)
n Qm | prmp | Cmax
n Lower bound for the optimal schedule, with v1 ≥ v2 ≥ ... ≥ vm:
Cmax ≥ max( p1 / v1, (p1 + p2) / (v1 + v2), ..., ( Σ_{j=1}^{m-1} pj ) / ( Σ_{j=1}^{m-1} vj ), ( Σ_{j=1}^{n} pj ) / ( Σ_{j=1}^{m} vj ) )

Example: Application of LRPT in Continuous Time (3)
§ Longest Remaining Processing Time on the Fastest Machine first (LRPT-FM) yields an optimal schedule for Qm | prmp | Cmax
n Schedule:
è Job 1 → machine with two speed units
è Job 2 → machine with three speed units
è At time 1 all jobs have a remaining processing time equal to 6
è From time 1 on, each job occupies one speed-unit machine
è The makespan is equal to 7
Pm || Σ Cj (nondelay schedule)
è SPT order: p(1) ≤ p(2) ≤ ... ≤ p(n-1) ≤ p(n)
n Assume n/m is an integer (otherwise add jobs with processing time 0)
è n coefficients in the objective: m times n/m, m times n/m - 1, ..., m times 2, m times 1
è SPT pairs the smallest processing times with the largest coefficients ⇒ SPT is optimal
n Pm || Σ wj Cj ⇒ NP-hard

Example: Application of WSPT Rule

jobs 1 2 3
pj   1 1 3
wj   1 1 3

n 2 machines, 3 jobs
n Any schedule is WSPT
è with w1 = w2 = 1 - ε, WSPT is not necessarily optimal
n Approximation factor:
Σ wj Cj (WSPT) / Σ wj Cj (OPT) < (1/2)(1 + √2) (tight)
n Minimize Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=1}^{n} k · pij · xikj
where k · pij is the contribution of job j to Σ Cj if it is processed as kth-to-last job on machine i
n The LFJ rule is optimal for Pm | pj = 1, Mj | Σ Cj
n Constraints:
Σ_{i=1}^{m} Σ_{k=1}^{n} xikj = 1, j = 1, ..., n (each job is assigned exactly once)
Σ_{j=1}^{n} xikj ≤ 1, i = 1, ..., m; k = 1, ..., n (each position is taken not more than once)
n xikj ∈ {0, 1} : no preemption of jobs
è Weighted bipartite matching: n jobs ⇒ n · m positions

jobs 1 2 3
p1j  4 5 3
p2j  8 9 3

[Gantt chart: machine 1 processes jobs 1 and 2, machine 2 processes job 3; Σ Cj = 16.]
Total Completion Time with Preemptions
n Problem Qm | prmp | Σ Cj
è There exists an optimal schedule with Cj ≤ Ck if pj ≤ pk for all j and k.
n Proof: pairwise exchange
n The SRPT-FM rule is optimal for Qm | prmp | Σ Cj:
Shortest Remaining Processing Time on the Fastest Machine first
n Assume v1 ≥ v2 ≥ ... ≥ vn and Cn ≤ Cn-1 ≤ ... ≤ C1
n There are n machines:
è more jobs than machines ⇒ add machines with speed 0
è more machines than jobs ⇒ the slowest machines are not used

Proof: SRPT-FM rule is optimal for Qm | prmp | Σ Cj
n Under SRPT-FM:
v1 Cn = pn
v2 Cn + v1 (Cn-1 - Cn) = pn-1
v3 Cn + v2 (Cn-1 - Cn) + v1 (Cn-2 - Cn-1) = pn-2
:
vn Cn + vn-1 (Cn-1 - Cn) + ... + v1 (C1 - C2) = p1
n For any other schedule with completion times C'j:
v1 C'n ≥ v1 Cn
v2 C'n + v1 C'n-1 ≥ v2 Cn + v1 Cn-1
:
vn C'n + vn-1 C'n-1 + ... + v1 C'1 ≥ vn Cn + vn-1 Cn-1 + ... + v1 C1
è adding suitable multiples of these inequalities yields Σ C'j ≥ Σ Cj

Proof: SRPT-FM rule is optimal for Qm | prmp | Σ Cj (3)

machines 1 2 3 4
vi       4 2 2 1

jobs 1 2  3  4  5  6  7
pj   8 16 34 40 45 46 61

n Completion times: C1 = 2, C2 = 5, C3 = 11, C4 = 16, C5 = 21, C6 = 26, C7 = 35; Σ Cj = 116
[Gantt chart of the SRPT-FM schedule on the four machines, time axis 0 to 35.]
Due Date Related Objectives
[Figure: a due-date problem on the interval from -dj to Cmax transformed by shifting the time axis by a constant K.]

n P2 | prmp | Lmax with 4 jobs

jobs 1 2 3 4
dj   4 5 8 9
pj   3 3 3 8

n Is there a feasible schedule with Lmax = 0? (treat dj as a deadline d̄j = dj)
è equivalent problem with release dates (time-reversed schedule):

jobs 1 2 3 4
rj   5 4 1 0
pj   3 3 3 8
n Permutation schedule: j1, j2, ..., jn
Ci,j1 = Σ_{l=1}^{i} pl,j1, i = 1, ..., m
C1,jk = Σ_{l=1}^{k} p1,jl, k = 1, ..., n
Ci,jk = max( Ci-1,jk, Ci,jk-1 ) + pi,jk, i = 2, ..., m; k = 2, ..., n
Directed Graph for the Computation of the Makespan in Fm | prmu | Cmax

jobs  j1 j2 j3 j4 j5
p1,jk 5  5  3  6  3
p2,jk 4  4  2  4  4
p3,jk 4  4  3  4  1
p4,jk 3  6  3  2  5

[Figure: grid-shaped directed graph with node weights pi,jk; an edge leads from node (i, jk) to nodes (i+1, jk) and (i, jk+1); the makespan equals the length of the longest path from (1, j1) to (m, jn).]

Directed Graph, Critical Paths and Gantt Chart (2)
[Figure: the same graph with its critical path highlighted and the corresponding Gantt chart, time axis 0 to 30.]
Example: Graph Representation and Reversibility

jobs  j1 j2 j3 j4 j5
p1,jk 5  2  3  6  3
p2,jk 1  4  3  4  4
p3,jk 4  4  2  4  4
p4,jk 3  6  3  5  5

[Figures: directed graph and critical path of the reversed problem, and its Gantt chart (time axis 0 to 30), yielding the same makespan as the original problem.]
SPT(1)-LPT(2) Schedule is Optimal for F2 || Cmax
n Problem F2 || Cmax with unlimited storage (an optimal solution always exists as a permutation schedule)
n Johnson's rule produces an optimal sequence:
- Partition the jobs into 2 sets:
Set I: all jobs with p1j ≤ p2j
Set II: all jobs with p2j < p1j
- Set I: these jobs are scheduled first, in increasing order of p1j (SPT)
- Set II: these jobs are scheduled afterwards, in decreasing order of p2j (LPT)
è SPT(1)-LPT(2) schedule
n Proof by pairwise adjacent interchange and contradiction
n Original schedule: job l ⇒ job j ⇒ job k ⇒ job h
Cij : completion time of job j on machine i (original schedule)
C'ij : completion time (new schedule)
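Johnson's SPT(1)-LPT(2) rule and the two-machine makespan recursion can be sketched as follows (the 4-job instance below is made up for illustration):

```python
def johnson(p1, p2):
    """Set I (p1j <= p2j) in increasing order of p1j, then Set II in
    decreasing order of p2j."""
    n = range(len(p1))
    set1 = sorted((j for j in n if p1[j] <= p2[j]), key=lambda j: p1[j])
    set2 = sorted((j for j in n if p1[j] > p2[j]), key=lambda j: -p2[j])
    return set1 + set2

def f2_makespan(seq, p1, p2):
    """Makespan of a permutation schedule on two machines in series."""
    c1 = c2 = 0
    for j in seq:
        c1 += p1[j]                 # completion on machine 1
        c2 = max(c2, c1) + p2[j]    # machine 2 waits for machine 1
    return c2

p1 = [5, 2, 4, 3]  # hypothetical instance
p2 = [2, 4, 5, 3]
seq = johnson(p1, p2)
```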
SPT(1)-LPT(2) Schedule is Optimal for F2 || Cmax (2)
n Interchange of j and k:
è The starting time of job h on machine 1 is not affected (C1l + p1j + p1k)
è Starting time of job h on machine 2:
C2k = max( max(C2l, C1l + p1j) + p2j, C1l + p1j + p1k ) + p2k
= max( C2l + p2j + p2k, C1l + p1j + p2j + p2k, C1l + p1j + p1k + p2k )
C'2j = max( C2l + p2k + p2j, C1l + p1k + p2k + p2j, C1l + p1k + p1j + p2j )
n Assume another schedule is optimal:
n Case 1: j ∈ Set II and k ∈ Set I
è p1j > p2j and p1k ≤ p2k
è C1l + p1k + p2k + p2j < C1l + p1j + p1k + p2k
C1l + p1k + p1j + p2j ≤ C1l + p1j + p2j + p2k
è C'2j ≤ C2k
Multiple Schedules That Are Optimal
n Case 2: j, k ∈ Set I and p1j > p1k
è p1j ≤ p2j, p1k ≤ p2k
è C1l + p1k + p2k + p2j ≤ C1l + p1j + p2j + p2k and
C1l + p1k + p1j + p2j ≤ C1l + p1j + p2j + p2k
n Case 3: j, k ∈ Set II and p2j < p2k
è similar to Case 2
n There are many other optimal schedules besides the SPT(1)-LPT(2) schedules
n Fm | prmu | Cmax : Formulation as a Mixed Integer Program (MIP)
n Decision variable xjk = 1 if job j is the kth job in the sequence
n Auxiliary variables:
Iik : idle time on machine i between the kth and (k+1)st job in the sequence
Wik : waiting time of the kth job in the sequence between machines i and i+1
[Figure: relation between Iik, Wik and Wi,k+1 around the processing of the jobs in positions k-1, k, k+1 on machines i and i+1.]
è Minimizing the makespan amounts to minimizing Σ_{i=1}^{m-1} pi(1) + Σ_{j=1}^{n-1} Imj (the total processing time on machine m is a constant)
n MIP:
min Σ_{i=1}^{m-1} Σ_{j=1}^{n} xj1 pij + Σ_{j=1}^{n-1} Imj
n subject to
Σ_{j=1}^{n} xjk = 1, k = 1, ..., n
Σ_{k=1}^{n} xjk = 1, j = 1, ..., n
Iik + Σ_{j=1}^{n} xj,k+1 pij + Wi,k+1 - Wik - Σ_{j=1}^{n} xjk pi+1,j - Ii+1,k = 0, k = 1, ..., n-1; i = 1, ..., m-1
Wi1 = 0, i = 1, ..., m-1
I1k = 0, k = 1, ..., n-1
xjk ∈ {0, 1}, j = 1, ..., n; k = 1, ..., n
Wik ≥ 0, i = 1, ..., m-1; k = 1, ..., n
Iik ≥ 0, i = 1, ..., m; k = 1, ..., n-1
F3 || Cmax is strongly NP-hard

n Slope index Aj for job j: Aj = - Σ_{i=1}^{m} (m - (2i - 1)) pij
n Sequence the jobs in decreasing order of the slope index
A1 = -(3 × 5) - (1 × 4) + (1 × 4) + (3 × 3) = -6
A2 = -(3 × 5) - (1 × 4) + (1 × 4) + (3 × 6) = +3
A3 = -(3 × 3) - (1 × 2) + (1 × 3) + (3 × 3) = +1
A4 = -(3 × 6) - (1 × 4) + (1 × 4) + (3 × 2) = -12
A5 = -(3 × 3) - (1 × 4) + (1 × 1) + (3 × 5) = +3
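The slope index and the resulting sequence can be computed directly; the matrix below is the 5-job, 4-machine example used earlier in the slides, and jobs are sorted by decreasing Aj:

```python
def slope_index(p, j):
    """A_j = -sum_{i=1}^m (m - (2i - 1)) * p_ij for matrix p[i][j] (0-indexed)."""
    m = len(p)
    return -sum((m - (2 * i + 1)) * p[i][j] for i in range(m))

def slope_sequence(p):
    """Jobs in decreasing order of the slope index (0-indexed)."""
    return sorted(range(len(p[0])), key=lambda j: -slope_index(p, j))

p = [[5, 5, 3, 6, 3],   # processing times of the slides' example
     [4, 4, 2, 4, 4],
     [4, 4, 3, 4, 1],
     [3, 6, 3, 2, 5]]
indices = [slope_index(p, j) for j in range(5)]
```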
Example: Application of Slope Heuristic
(2)
n F2 || Σ Cj is strongly NP-hard
è Fm | prmu | Σ Cj is strongly NP-hard,
as sequence changes are not required in the optimal schedule for 2 machines
Flow Shops with Limited Intermediate Storage
n Di,jk : departure time of job jk from machine i
Di,jk = max( Di-1,jk + pi,jk, Di+1,jk-1 )
Dm,jk = Dm-1,jk + pm,jk
è Critical path in a directed graph:
the weight of node (i, jk) specifies the departure time of job jk from machine i; edges have weights 0 or a processing time

Graph Representation of a Flow Shop with Blocking

jobs  j1 j2 j3 j4 j5
p1,jk 5  5  3  6  3
p2,jk 4  4  2  4  4
p3,jk 4  4  3  4  1
p4,jk 3  6  3  2  5

[Figure: the directed graph with 0-weight blocking edges and the resulting Gantt chart, time axis 0 to 30.]
Flow Shops with Limited Intermediate Storage (2)
n Equivalent Traveling Salesman Problem instance:

cities 0 1 2 3 4
bj     0 2 3 3 9
aj     0 8 4 6 2
n Proportionate case: FFc | pij = pj | Cmax
l nonpreemptive: LPT heuristic (the problem is NP-hard)
l preemptive: LRPT heuristic (optimal for a single stage)

Example: Minimizing Makespan with LPT
n p1 = p2 = 100, p3 = p4 = ... = p102 = 1
n 2 stages: 2 machines at the first stage, 1 machine at the second stage
n Optimal schedule: makespan 301
[Gantt chart: jobs 3-102 run on one first-stage machine while jobs 1 and 2 run on the other; the second stage finishes at 301.]
n LPT heuristic: makespan 400
[Gantt chart: LPT schedules jobs 1 and 2 first; the second stage finishes at 400.]
Flexible Flow Shop with Unlimited Intermediate Storage (2)
n FFc | pij = pj | Σ Cj
n SPT is optimal for a single stage and for any number of stages with a single machine at each stage
n The SPT rule is optimal for FFc | pij = pj | Σ Cj if each stage has at least as many machines as the preceding stage
n Proof:
For a single stage, SPT minimizes Σ Cj and also the sum of the starting times Σ (Cj - pj).
With c stages, Cj occurs no earlier than c·pj time units after the job's starting time at the first stage.
If no stage has fewer machines than the preceding one, then under SPT no job needs to wait for processing at the next stage.
n The route of every job is fixed, but not all jobs follow the same route
n J2 || Cmax
n J1,2 : set of all jobs that have to be processed first on machine 1
n J2,1 : set of all jobs that have to be processed first on machine 2
n Observation: if a job from J1,2 has completed its processing on machine 1, postponing its processing on machine 2 does not matter as long as machine 2 is not idle.
n A similar observation holds for J2,1.
è a job from J1,2 has a higher priority on machine 1 than any job from J2,1, and vice versa
n Determining the sequence of jobs from J1,2:
è F2 || Cmax : SPT(1)-LPT(2) sequence
è machine 1 will always be busy
n J2 || Cmax can be reduced to two F2 || Cmax problems
Representation as a disjunctive graph G
[Figure: conjunctive edge from operation (h, k) to operation (i, k) along the route of job k; disjunctive edge pairs between operations on the same machine.]
n 4 machines, 3 jobs

jobs  machine sequence  processing times
1     1, 2, 3           p11 = 10, p21 = 8, p31 = 4
2     2, 1, 4, 3        p22 = 8, p12 = 3, p42 = 5, p32 = 6
3     1, 2, 4           p13 = 4, p23 = 7, p43 = 3

n Ω : set of all schedulable operations (i, j), with Ω' ⊆ Ω
n t(Ω) : smallest possible completion time of an operation in Ω
Generation of all Active Schedules
n v' : successor of node v in the branching tree
è set D' of the selected disjunctive edges at v' è graph G(D')

Lower Bound for Makespan at v'
[Figure: disjunctive graph of the example between source U and sink V with node weights 10, 8, 4 (job 1), 8, 3, 5, 6 (job 2) and 4, 7, 3 (job 3).]
Application of Branch and Bound
Level 1:
Ω = {(1,1), (2,2), (1,3)}
t(Ω) = min(0 + 10, 0 + 8, 0 + 4) = 4
i* = 1
Ω' = {(1,1), (1,3)}

Schedule Operation (1,1) First
[Figure: the disjunctive graph with the arcs from (1,1) to (1,2) and to (1,3) fixed, each of weight 10.]
n Resulting 1 | rj | Lmax instance on machine 1:

jobs 1  2  3
pij  10 3  4
rij  0  10 10
dij  12 13 14

n Resulting 1 | rj | Lmax instance on machine 2:

jobs 1  2  3
pij  8  8  7
rij  10 0  14
dij  20 10 21
n At level 0 no disjunctive arc has been selected.
[Figure: branching tree; the two level-1 nodes fix operation (1,1) or operation (1,3) as scheduled first on machine 1, each with lower bound LB = 28.]

Gantt Chart for J4 || Cmax
n Machine 1: 1, 3, 2 (or 1, 2, 3); machine 2: 2, 1, 3; machine 3: 1, 2; machine 4: 2, 3
n Makespan: 28
[Gantt chart of this schedule, time axis 0 to 30.]
Shifting Bottleneck Heuristic
[Figure: disjunctive graph of the example between source S and sink T, with the same node weights as before.]
Example: Shifting Bottleneck Heuristic (3)
n Iteration 2
n 1 | rj | Lmax problem for machine 2:
optimal sequence 2, 1, 3 → Lmax(2) = 1
n 1 | rj | Lmax problem for machine 3:
optimal sequences 1, 2 and 2, 1 → Lmax(3) = 1
Similarly, Lmax(4) = 0
n Machine 2 is selected: M0 = {1, 2}
Cmax({1, 2}) = Cmax({1}) + Lmax(2) = 27 + 1 = 28
Disjunctive edges are added to include machine 2.
Resequencing for machine 1 does not yield any improvement.
n Iteration 3
No further bottleneck is encountered: Lmax(3) = 0, Lmax(4) = 0
è Overall makespan 28

machines  1       2       3     4
sequences 1, 2, 3 2, 1, 3 2, 1  2, 3
Open Shops
n O2 || Cmax
n Cmax ≥ max( Σ_{j=1}^{n} p1j, Σ_{j=1}^{n} p2j )
n In which cases is Cmax strictly greater than the right-hand side of the inequality?
n Nondelay schedules:
è an idle period occurs only if one job remains to be processed and this job is being executed on the other machine; hence idleness occurs on at most one of the two machines
n Longest Alternate Processing Time first (LAPT):
whenever a machine is free, start processing the job that has the longest processing time on the other machine
n The LAPT rule yields an optimal schedule for O2 || Cmax with makespan
Cmax = max( max_{j ∈ {1,...,n}} (p1j + p2j), Σ_{j=1}^{n} p1j, Σ_{j=1}^{n} p2j )
Open Shops (2)
n Assumption: p1j ≤ p1k and p2j ≤ p1k for all jobs j
è the longest processing time belongs to operation (1, k)
n LAPT: job k is started on machine 2 at time 0
è job k has lowest priority on machine 1:
it is only executed on machine 1 if no other job is available for processing on machine 1, so either
a) k is the last job to be processed on machine 1, or
b) k is the second-to-last job to be processed on machine 1 and the last job is not available due to processing on machine 2
n Generalization: the 2(n - 1) remaining operations can be processed in any order without unforced idleness.
n No idle period on either machine → optimal schedule
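The optimal O2 || Cmax value stated above (attained by LAPT) reduces to a simple maximum; a sketch with made-up data:

```python
def o2_cmax(p1, p2):
    """Optimal makespan for O2 || Cmax, achieved by LAPT: the maximum of the
    longest single job (p1j + p2j) and the two machine workloads."""
    return max(max(a + b for a, b in zip(p1, p2)), sum(p1), sum(p2))

# hypothetical 2-job instance: job 1 dominates with p11 + p21 = 3 + 5 = 8
makespan = o2_cmax([3, 4], [5, 2])
```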
Open Shops (3)
[Gantt chart: machine 1 processes jobs 1, 4, 2, 3; machine 2 processes jobs 2, 1, 3, 4; no unnecessary increase in the makespan.]