
University Dortmund

Faculty of Electrical Engineering


Computer Engineering Institute

Scheduling Problems and Solutions

Uwe Schwiegelshohn

CEI University Dortmund


Summer Term 2003
Scheduling Problem

(Diagram.) Scheduling assigns tasks (jobs) to resources (machines) over time, subject to constraints, in order to optimize one or more objectives.

Areas:
n Manufacturing and production
n Transportation and distribution
n Information processing
Example 1 Paper Bag Factory

n Different types of paper bags


n 3 production stages
l printing of the logo
l gluing of the side
l sewing of one or both ends
n several machines for each stage
l differences in speed and function
l processing speed and quantity
l setup time for change of bag type
n due time and late penalty
n minimization of late penalties, setup times
Example 2
Gate Assignments at Airport

n Different types of planes (size)


n Different types of gates (size, location)
n Flight schedule
l randomness (weather, take off policy)
n Service time at gate
l deplaning of passengers
l service of airplane
l Boarding of passengers
n Minimization of work for airline personnel,
airplane delay
Example 3 Tasks in a CPU

n Different applications
l unknown processing time
l known distributions (average, variance)
l priority level
n Multitasking environment
l preemption
n Minimization of sum of expected weighted completion
times
Information Flow Diagram in a
Manufacturing System
(Flow diagram.) Production planning / master scheduling receives orders and demand forecasts; it exchanges capacity status, quantities, and due dates with material requirements planning and capacity planning. Scheduling and rescheduling receives material requirements, shop orders, release dates, and scheduling constraints and produces the detailed schedule. Dispatching turns the schedule into job loading on the shopfloor; shopfloor management feeds back shop status, schedule performance, and collected data.
Information Flow Diagram in a
Service System

(Flow diagram.) A database of status and history feeds forecasting; forecasts, prices, and rules drive yield management and scheduling, which exchange data with the database. The customer places orders or makes reservations and receives accept/reject decisions (with conditions).
Abstract Models

Job properties:

pij : processing time of job j on machine i
(pj : identical processing time of job j on all machines)
rj : release date of job j (earliest starting time)
dj : due date of job j (completion of job j after dj => late penalty)
d̄j : deadline (d̄j must be met)
Machine Environment

1 : single machine
Pm : identical machines in parallel
Q m : machines in parallel with different speeds
Rm : unrelated machines in parallel
Fm : flow shop with m machines in series
l each job must be processed on each machine using the same route
l queues between the machines
FIFO queues => permutation flow shop
FFc : flexible flow shop with c stages in series and several
identical machines at each stage, one job needs processing on
only one (arbitrary) machine at each stage
Machine Environment

Jm : job shop with m machines with a separate predetermined
route for each job. A machine may be visited more than once
by a job => recirculation
FJc : flexible job shop with c stages and several identical
machines at each stage, see FFc
Om : Open shop with m machines
l each job must be processed on each machine
Restrictions and Constraints

n release dates (see job properties)


n sequence dependent setup times
è Sijk : setup time between job j and job k on machine i
è (Sjk : identical for all machines)
è (S0j : startup for job j)
è (Sj0 : cleanup for job j)
n preemption (prmp)
è the processing of a job can be interrupted and later resumed (on
the same or another machine)
n precedence constraints (prec)
è one (or more) jobs must be completed before another job can be
started
l representation as a directed acyclic graph (DAG)
Restrictions and Constraints (2)

n machine breakdowns (brkdwn)


è machines are not continuously available; e.g., m(t) identical parallel
machines are available at time t
n machine eligibility restrictions (Mj )
è Mj denotes the set of parallel machines that can process job j (for
Pm and Q m )
n permutation (prmu) see F m
n blocking (block)
è a completed job cannot move from one machine to the next due
to limited buffer space in the queue; it then blocks the previous
machine (Fm , FFc )
Restrictions and Constraints (3)

n no – wait (nwt)
è A job is not allowed to wait between two successive executions
on different machines (Fm , FFc )
n recirculation (recirc)
Objective Functions

n Cj : Completion time of job j


n Lj = Cj – dj : lateness of job j
è may be positive or negative
n Tj = max (Lj , 0) tardiness

1 if Cj > dj
n Uj =
0 otherwise
Objective Functions (2)

(Figure: graphs of Lj , Tj and Uj as functions of Cj , each with a breakpoint at dj ; Uj jumps from 0 to 1 at dj .)

n Makespan Cmax : max (C 1 ,...,Cn )


è completion time of the last job in the system
n Maximum Lateness L max : max (L1,..., Ln )
Objective Functions (3)

n Total weighted completion time Σ wj Cj


n Total weighted flow time Σ wj (Cj – rj ) = Σ wj Cj – Σ wj rj (the second term is constant)
n Discounted total weighted completion time
è (Σ wj (1 – e -rCj )) 0<r<1

n Total weighted tardiness (Σ wj Tj )


n Weighted number of tardy jobs (Σ wj Uj )
n Regular objective functions:
è non decreasing in C1 ,...,Cn
è Ej = max (-Lj , 0) earliness
l non increasing in Cj

n Σ Ej + Σ Tj , Σ wj‘ Ej + Σ wj‘‘ Tj not regular obj. functions


Description of a Scheduling Problem

α | β | γ

α : machine environment
β : processing characteristics and constraints
γ : objective (to be minimized)
Examples:
n Paper bag factory FF3 | rj | Σ wj Tj
n Gate assignment Pm | rj , Mj | Σ wj Tj
n Tasks in a CPU 1 | rj , prmp | Σ wj Cj
n Traveling Salesman 1 | sjk | Cmax
Classes of Schedules

n Nondelay (greedy) schedule


è No machine is kept idle while a task is waiting for processing
Nondelay schedule need not be optimal!

Example: P2 | prec | Cmax

jobs 1 2 3 4 5 6 7 8 9 10
pj 8 7 7 2 3 2 2 8 8 15
Precedence Constraints Original
Schedule
jobs 1 2 3 4 5 6 7 8 9 10
pj 8 7 7 2 3 2 2 8 8 15

(Precedence graph over the ten jobs and Gantt chart of the two-machine nondelay schedule: machine 1 processes jobs 1, 2, 8, 9; machine 2 processes jobs 4, 6, 5, 7, 3, 10; time axis 0 to 30.)
Precedence Constraints (2)
Processing Time 1 Unit Less
jobs 1 2 3 4 5 6 7 8 9 10
pj 7 6 6 1 2 1 1 7 7 14

(Same precedence graph with every processing time reduced by one unit: machine 1 processes jobs 1, 2, 9; machine 2 processes jobs 4, 6, 5, 7, 8, 3, 10; time axis 0 to 30.)
Precedence Constraints (3)
Original Processing Times and 3 Machines

jobs 1 2 3 4 5 6 7 8 9 10
pj 8 7 7 2 3 2 2 8 8 15

(Same instance with the original processing times on three machines: machine 1 processes jobs 1, 2; machine 2 processes jobs 4, 5, 8, 3, 10; machine 3 processes jobs 6, 7, 9; time axis 0 to 30.)
Active Schedule

A schedule is active if it is not possible to construct another schedule, by changing the order of processing on the machines, with at least one task finishing earlier and no task finishing later.

Example:
Consider a job shop with three machines and two jobs.
n Job 1 needs one time unit on machine 1 and three time units on machine 2.
n Job 2 needs two time units on machine 3 and three time units on machine 2.
n Both jobs have to be processed last on machine 2.
Active Schedule (Example)

Machine 1 1

Machine 2 2 1

Machine 3 2

0 2 4 6 8 t

An active schedule that is not nondelay.

It is clear that this schedule is active; reversing the sequence of the two jobs on machine 2 postpones the processing of job 2. However, the schedule is not nondelay: machine 2 remains idle until time 2 although a job is already available for processing at time 1.
It can be shown that for Jm || γ with a regular objective function γ there exists an optimal schedule that is active.
Semi – active Schedule

A schedule is semi-active if no operation can be completed earlier without changing the order of processing on any one of the machines.

Example:
Consider again a schedule with three machines and two jobs. The
routing of the two jobs is the same as in the previous example.
n The processing times of job 1 on machines 1 and 2 are both equal
to 1.
n The processing times of job 2 on machines 2 and 3 are both equal
to 2.
Semi – active Schedule (Example)

Machine 1 1

Machine 2 2 1

Machine 3 2

0 2 4 6 8 t

A semi - active schedule that is not active.


Consider the schedule under which job 2 is processed on machine 2 before job 1. This
implies that job 2 starts its processing on machine 2 at time 2 and job 1 starts its
processing on machine 2 at time 4. This schedule is semi-active. However, it is not
active, as job 1 can be processed on machine 2 without delaying the processing of job
2 on machine 2.
Venn Diagram of Classes of
Schedules for Job Shops
Optimal Schedule

Semi-active

X Nondelay Active

All Schedules
A Venn diagram of the three classes of nonpreemptive schedules:
the nondelay schedules, the active schedules, and the semi-active schedules
Complexity Hierarchy

Some problems are special cases of other problems


α1 | β1 | γ1 ∝ (reduces to) α2 | β2 | γ2

1 || Σ Cj ∝ 1 || Σ wj Cj ∝ Pm || Σ wj Cj ∝ Q m | prec | Σ wj Cj

More complex reductions:
α | β | Lmax ∝ α | β | Σ Uj
α | β | Lmax ∝ α | β | Σ Tj
(via variation of the due dates dj plus logarithmic search)


Complexity Hierarchies of
Deterministic Scheduling Problems

Rm FJc

Qm FFc Jm

Pm Fm Om

• machine environment
Complexity Hierarchies of
Deterministic Scheduling Problems

Each entry of the β field — rj , sjk , prmp, prec, brkdwn, Mj , block, nwt, recrc — generalizes its absence (denoted 0).

• processing restrictions and constraints


Complexity Hierarchies of
Deterministic Scheduling Problems

Σwj Tj Σwj Uj

Σwj Cj ΣTj ΣUj

ΣCj Lmax

Cmax

• objective functions
Deterministic Scheduling Problems

Deterministic scheduling problems split into problems with a polynomial-time solution and NP-hard problems; NP-hard problems split into those NP-hard in the ordinary sense (admitting a pseudopolynomial solution) and those strongly NP-hard.
Complexity of Makespan Problems

1. 1 || Cmax
2. P2 || Cmax
3. F2 || Cmax
4. Jm || Cmax
5. FFc || Cmax

Hierarchy: 1 || Cmax and F2 || Cmax are easy; P2 || Cmax , FFc || Cmax and Jm || Cmax are hard.
Complexity of Maximum Lateness
Problems
1. 1 || Lmax
2. 1 | prmp | Lmax
3. 1 | rj | Lmax
4. 1 | rj , prmp | Lmax
5. Pm || Lmax

Hierarchy: 1 || Lmax , 1 | prmp | Lmax and 1 | rj , prmp | Lmax are easy; 1 | rj | Lmax and Pm || Lmax are hard.
Total Weighted Completion Time

1 || Σ wj Cj
Weighted Shortest Processing Time first (WSPT): schedule the jobs in non-increasing order of the Smith ratio wj / pj .

The WSPT rule is optimal for 1 || Σ wj Cj .

Proof by contradiction:
If the WSPT rule is violated, it is violated by a pair of adjacent jobs h and k, with h starting at some time t:

S1: Σ wj Cj = ... + wh (t + ph ) + wk (t + ph + pk )


Total Weighted Completion Time

S2 (jobs h and k interchanged, both again starting after time t):

S2: Σ wj Cj = ... + wk (t + pk ) + wh (t + pk + ph )

Difference between the two schedules S1 and S2:
S1 – S2 = wk ph – wh pk > 0 (the interchange improves S1), i.e.
wk / pk > wh / ph — contradicting the WSPT order.

Complexity is dominated by sorting: O(n log(n))
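The WSPT rule above can be sketched in a few lines; a minimal sketch assuming each job is given as a (weight, processing time) pair (the instance below is hypothetical):

```python
def wspt_schedule(jobs):
    """Order jobs by non-increasing Smith ratio w_j / p_j and return the
    sequence together with the total weighted completion time."""
    order = sorted(range(len(jobs)),
                   key=lambda j: jobs[j][0] / jobs[j][1], reverse=True)
    t = total = 0
    for j in order:
        w, p = jobs[j]
        t += p           # completion time C_j of job j
        total += w * t   # accumulate w_j * C_j
    return order, total

# hypothetical instance: (w_j, p_j) pairs
jobs = [(1, 3), (2, 1), (3, 2)]
seq, obj = wspt_schedule(jobs)   # seq == [1, 2, 0], obj == 17
```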


Total Weighted Completion Time

Use of precedence constraints: 1 | prec | Σ wj Cj , restricted to precedence constraints that form independent chains.

For a chain of jobs 1, ... , k the δ-factor is determined by the prefix length l* satisfying

δ-factor of this chain = ( Σ_{j=1..l*} wj ) / ( Σ_{j=1..l*} pj ) = max_{1 ≤ l ≤ k} ( Σ_{j=1..l} wj ) / ( Σ_{j=1..l} pj )

l* determines the δ-factor of the chain 1, ... , k
Algorithm: Total Weighted
Completion Time with Chains
Whenever the machine is available, select among the remaining chains
the one with the highest δ-factor. Schedule all jobs from this chain
(without interruption) until the job that determines the δ-factor.

Proof concept:
There is an optimal schedule that processes the jobs 1, ... , l* in
succession; pairwise interchange of chains.
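The chain rule above can be sketched as follows; a minimal sketch with function names of my own choosing, jobs indexed from 0:

```python
def delta_factor(chain, w, p):
    """Return (rho, l_star): the maximum prefix ratio sum(w)/sum(p) of the
    chain and the prefix length l* attaining it."""
    best, l_star, sw, sp = -1.0, 0, 0, 0
    for l, j in enumerate(chain, 1):
        sw += w[j]
        sp += p[j]
        if sw / sp > best:
            best, l_star = sw / sp, l
    return best, l_star

def schedule_chains(chains, w, p):
    """Single machine, precedence constraints forming independent chains:
    repeatedly pick the remaining chain with the highest delta-factor and
    schedule its jobs up to (and including) the job defining that factor."""
    chains = [list(c) for c in chains]
    sequence = []
    while any(chains):
        factors = [delta_factor(c, w, p) if c else (-1.0, 0) for c in chains]
        i = max(range(len(chains)), key=lambda k: factors[k][0])
        l_star = factors[i][1]
        sequence += chains[i][:l_star]
        chains[i] = chains[i][l_star:]
    return sequence
```

On the two-chain example of the following slides (jobs 0..3 and 4..6) this reproduces the order 1, 2, 5, 6, 3, 7, 4.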
Example: Total weighted
Completion Time with Chains
n Consider the following two chains:

1 2 3 4
and
5 6 7

The weights and processing times of the jobs are given in the following table

jobs 1 2 3 4 5 6 7
wj 6 18 12 8 8 17 18
pj 3 6 6 5 4 8 10
Example: Total Weighted
Completion Time with Chains (2)
n δ-factor of the first chain: (6 + 18) / (3 + 6) = 24/9, determined by job 2
n δ-factor of the second chain: (8 + 17) / (4 + 8) = 25/12 < 24/9, determined by job 6
è Jobs 1 and 2 are processed first
n δ-factor of the remaining part of the first chain: 12/6 < 25/12, determined by job 3
è Jobs 5 and 6 are scheduled next
n w7 / p7 = 18/10 < 12/6 è Job 3 is scheduled next
n w4 / p4 = 8/5 < 18/10 è then Job 7 and finally Job 4
Algorithm: Total Weighted
Completion Time with Chains (2)
n 1 | prec | Σ wj Cj is strongly NP-hard for arbitrary precedence
constraints
n 1 | rj , prmp | Σ wj Cj is strongly NP-hard
è WSPT (by remaining processing time) is not optimal
Example: it can pay to select another job that can be completed before the next release date
n 1 | rj , prmp | Σ Cj is easy
n 1 | rj | Σ Cj is strongly NP-hard
n 1 || Σ wj (1 – e^(–r·Cj) ) can be solved optimally with the Weighted Discounted
Shortest Processing Time first (WDSPT) rule: schedule the jobs in non-increasing order of

wj · e^(–r·pj) / ( 1 – e^(–r·pj) )
Maximum Lateness

1 | prec | hmax

n hj (t): nondecreasing cost function
n hmax = max ( h1 (C1 ), ... , hn (Cn ) )
n backward dynamic programming algorithm
n Cmax = Σ pj : makespan
n J: set of all jobs already scheduled (backwards)
è they occupy the interval [ Cmax – Σ_{j∈J} pj , Cmax ]
è Jc = {1, ... , n} \ J: jobs still to be scheduled
è J' ⊆ Jc : jobs that can be scheduled next, i.e. all of whose successors are already in J (precedence constraints)
Algorithm: Minimizing Maximum
Cost
n Step 1 Set J = ∅, Jc = {1, ... , n} and J‘ the set of all jobs with no
successors.

n Step 2 Let j* ∈ J' be such that

hj* ( Σ_{j∈Jc} pj ) = min_{j∈J'} hj ( Σ_{k∈Jc} pk )

Add j* to J.
Delete j* from Jc .
Modify J' to represent the new set of schedulable jobs.

n Step 3 If Jc = ∅ STOP, otherwise go to Step 2.

This algorithm yields an optimal schedule for 1 | prec | hmax
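The backward rule can be sketched as follows; a minimal sketch assuming the cost functions are passed as Python callables (function and variable names are mine):

```python
def minimize_hmax(p, h, preds):
    """Backward rule for 1 | prec | hmax: repeatedly put, in the last open
    position, the schedulable job whose cost at the current makespan is
    smallest.

    p: processing times; h: per-job cost functions h_j(t);
    preds[j]: set of predecessors of job j."""
    n = len(p)
    remaining = set(range(n))
    succ_count = [0] * n              # number of unscheduled successors
    for j in range(n):
        for q in preds[j]:
            succ_count[q] += 1
    t = sum(p)                        # total processing time = makespan
    backwards = []
    while remaining:
        # jobs with no unscheduled successors may be placed last
        candidates = sorted(j for j in remaining if succ_count[j] == 0)
        j_star = min(candidates, key=lambda j: h[j](t))
        backwards.append(j_star)
        remaining.remove(j_star)
        for q in preds[j_star]:
            succ_count[q] -= 1
        t -= p[j_star]
    return backwards[::-1]
```

On the three-job example of the following slides (p = 2, 3, 5; costs 1 + C, 1.2C, 10) it returns one of the two optimal sequences 1, 2, 3 and 2, 1, 3.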


Algorithm: Minimizing Maximum
Cost (Proof of Optimality)
n Assumption: The optimal schedule and the schedule of the previous
algorithm are identical at positions k+1, ... , n
n At position k (time t)
è Optimal schedule job j**
è Schedule of the algorithm: job j*, with hj** (t) > hj* (t)
n Job j* is scheduled at position k' < k in the optimal schedule
n Move j* from position k' to position k
è hj (Cj ) does not increase for any job in {1, ... , n} \ {j*}
è hj* (t) ≤ hj** (t) (the algorithm selected j*)
n The exchange cannot increase hmax
è an optimal schedule and the schedule of the algorithm are identical
at positions k, k+1, ..., n
Algorithm: Minimizing Maximum
Cost (Proof of Optimality)(2)
(Two sketches of the cost functions hj* and hj** over the completion times Cj* , Cj** , one for the order j*, j** and one for the order j**, j*.)
Example: Minimizing Maximum
Cost
jobs 1 2 3
pj 2 3 5
hj (Cj ) 1 + Cj 1.2 Cj 10

n Cmax = 2+3+5 = 10

n h3(10) = 10 < h1(10) = 11 < h2(10) = 12


è Job 3 is scheduled last

n h2(10 – p3) = h2(5) = 6 = h1(5)


è Optimal schedules 1,2,3 and 2,1,3
1 || Lmax and 1 | prec | hmax

n 1 || Lmax is a special case of 1 | prec | hmax
n hj = Cj – dj è Earliest Due Date first (EDD), a nondelay schedule
n 1 | rj | Lmax is strongly NP-complete

Proof:
n reduction of 3-Partition to 1 | rj | Lmax : integers a1 , ... , a3t , b with

b/4 < aj < b/2 and Σ_{j=1..3t} aj = t·b

è n = 4t – 1 jobs

rj = jb + (j – 1), pj = 1, dj = jb + j, for j = 1, ... , t – 1
rj = 0, pj = a_{j–t+1} , dj = tb + (t – 1), for j = t, ... , 4t – 1
1 || Lmax Complexity Proof

n Lmax ≤ 0 iff every job j (j = 1, ... , t – 1) is processed from rj to
rj + pj = dj and all other jobs can be partitioned over the t intervals of
length b ⇔ 3-Partition has a solution

(Timeline: the unit jobs occupy [b, b+1], [2b+1, 2b+2], ... , leaving t intervals of length b between 0 and tb + t – 1.)

è 1 | rj | Lmax is strongly NP-hard


Optimal Solution for 1 | rj | Lmax

Optimal solution for 1 | rj | Lmax with branch and bound


è Tree with n+1 levels
n Level 0: 1 node
n Level 1: n nodes ( a specific job scheduled at the first position)
n Level 2: n*(n-1) nodes (from each node of Level 1 n – 1 edges
to nodes of Level 2)
(a second specific job scheduled at the second position)
è Nodes at Level k specify the first k positions

Assumption: let J be the set of jobs not yet scheduled at the father node
of Level k – 1 and t the makespan at the father node. If

r_{jk} ≥ min_{l∈J} ( max(t, rl ) + pl )

then job jk need not be considered at a node of Level k with this specific
father at Level k – 1.
Optimal Solution for 1 | rj | Lmax (2)

n Finding bounds:
If a branch's lower bound is no better than the best schedule already found,
the branch can be ignored.
A lower bound is obtained from 1 | rj , prmp | Lmax , which can be solved by
the preemptive EDD rule (nondelay schedule).
If this rule creates a nonpreemptive schedule ⇒ optimality for that node
Branch and Bound Applied to
Minimizing Maximum Lateness
jobs 1 2 3 4
pj 4 2 6 5
rj 0 1 3 5
dj 8 12 11 10

n Level 1 (1, ?, ?, ?) (2, ?, ?, ?) (3, ?, ?, ?) (4, ?, ?, ?)


è disregard (3, ?, ?, ?) and (4, ?, ?, ?)
(job 2 can be completed no later than r3 and r4 )
n Lower bound for node (1, ?, ?, ?)

1 3 4 3 2 Lmax = 5
0 4 5 10 15 17

n Lower bound for node (2, ?, ?, ?)


2 1 4 3 Lmax = 7
0 1 3 7 12 18
Branch and Bound Applied to
Minimizing Maximum Lateness (2)
n Lower bound for node (1, 2, ?, ?)
1, 2, 3, 4 (non preemptive, Lmax = 6)
è Disregard (2, ?, ?, ?)
n Lower bound for node (1, 3, ?, ?)
1, 3, 4, 2 (non preemptive, Lmax = 5) optimal
è Disregard (1, 2, ?, ?)
n Lower bound for node (1, 4, ?, ?)
1, 4, 3, 2 (non preemptive, Lmax = 6)

1 | rj , prec | Lmax similar approach


è more constraints (precedence) ⇒ less nodes
Tardy Jobs 1 || Σ Uj

n set A: all jobs that meet their due dates


è Jobs are scheduled according to EDD
n set B: all jobs that do not meet their due dates
è these jobs are processed at the end, after all jobs of set A (their order is immaterial)

Solution with forward algorithm


J: jobs that are already scheduled (set A)
Jd: jobs that have been considered and are assigned to set B
Jc: jobs that are not yet considered
Algorithm: Minimizing Number of
Tardy Jobs
n Step 1 Set J = ∅, Jd = ∅, Jc = {1, ... , n}.
n Step 2 Let j* denote the job satisfying dj* = min_{j∈Jc} ( dj ).
Add j* to J.
Delete j* from Jc .
Go to Step 3.
n Step 3 If Σ_{j∈J} pj ≤ dj* , go to Step 4;
otherwise let k* denote the job satisfying pk* = max_{j∈J} ( pj ).
Delete k* from J.
Add k* to Jd .
n Step 4 If Jc = ∅ STOP, otherwise go to Step 2.
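The four steps above can be sketched compactly (a minimal sketch; jobs are indexed from 0):

```python
def min_tardy_jobs(p, d):
    """Rule for 1 || sum U_j: build the schedule in EDD order; whenever
    the newest job is late, drop the longest job scheduled so far.
    Returns (on-time sequence, tardy jobs)."""
    on_time, tardy, t = [], [], 0
    for j in sorted(range(len(p)), key=lambda j: d[j]):   # EDD order
        on_time.append(j)
        t += p[j]
        if t > d[j]:                                      # job j is late
            k = max(on_time, key=lambda i: p[i])          # longest job so far
            on_time.remove(k)
            tardy.append(k)
            t -= p[k]
    return on_time, tardy
```

On the five-job example of the following slides this keeps jobs 3, 4, 5 on time and makes jobs 1 and 2 tardy (Σ Uj = 2).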
Algorithm: Minimizing Number of
Tardy Jobs (Proof of Optimality)
n The worst case computation time is that of simple sorting
O(n•log(n))
n Optimality proof
d1 ≤ d2 ≤ ... ≤ dn (appropriate ordering)
Jk subset of jobs [1, ... , k]
(I) maximum number of Jobs |Jk | = Nk among [1, ... ,k] completed
by their due dates
(II) Of all subsets of [1, ... ,k] with Nk jobs completed by their due
dates the subset with the smallest total processing time
è Jn corresponds to optimal schedule
Algorithm: Minimizing Number of
Tardy Jobs (Proof of Optimality) (2)
n Proof concept : induction
è correct for k=1 ⇒ assumption correct for k

1. Job k+1 is added to set Jk and is completed by its due date
è Jk+1 = Jk ∪ {k+1}: |Jk+1 | = Nk + 1, k+1 ∈ Jk+1 ,
and Jk+1 has minimum total processing time

2. Job k+1 is added to set Jk and is not completed in time ⇒ one job
(longest processing time) is deleted
è Nk+1 = Nk
è total processing time of Jk is not increased
è no other subset of [1, ... ,k+1] can have Nk on-time completions and a
smaller processing time
Example: Minimizing Number of
Tardy Jobs
jobs 1 2 3 4 5
pj 7 8 4 6 6
dj 9 17 18 19 21
n Job 1 fits: J1 = {1}
n Job 2 fits: J2 = {1, 2}
n Job 3 does not fit: J3 = {1, 3}
n Job 4 fits: J4 = {1, 3, 4}
n Job 5 does not fit: J5 = {3, 4, 5}
è schedule order 3, 4, 5, (1, 2) with Σ Uj = 2

1 || Σ wj Uj is NP-hard
è with all due dates being the same it is equivalent to the knapsack problem
è size of the knapsack: d = dj
è size of item j: pj
è benefit of item j: wj
Example: Minimizing Number of
Tardy Jobs (2)
n Heuristic WSPT rule (wj / pj ordering)

è the ratio Σ wj Uj (WSPT) / Σ wj Uj (OPT) may be arbitrarily large

n Example: WSPT: 1, 2, 3 è Σ wj Uj (WSPT) = 89
OPT: 2, 3, 1 è Σ wj Uj (OPT) = 12

jobs 1 2 3
pj 11 9 90
wj 12 9 89
dj 100 100 100
Total Tardiness

n 1 || Σ Tj : NP-hard in the ordinary sense
è pseudopolynomial-time algorithm
l polynomial in the size of the values
n Elimination criterion (dominance result)
è a large number of sequences can be disregarded ⇒ new precedence
constraints ⇒ easier problem
1. If pj ≤ pk and dj ≤ dk then there exists an optimal sequence in which
job j is scheduled before job k
Proof by simple exchange

Consider two problem instances, both with processing times p1 , ... , pn :
n first instance: due dates d1 , ... , dn ;
C'k : latest possible completion time of job k in an optimal sequence S'
n second instance: due dates d1 , ... , dk–1 , max(dk , C'k ), dk+1 , ... , dn ;
S'': optimal sequence, C''j : completion time of job j in sequence S''
Total Tardiness (2)

2. Any sequence that is optimal for the second instance is optimal for
the first instance as well
Assumption: d1 ≤ ... ≤ dn
pk = max (p1, ... , pn)
è kth smallest due date has largest processing time
3. There is an integer δ, 0 ≤ δ ≤ n – k such that there is an optimal
sequence S in which job k is preceded by all other jobs j with
j ≤ k+δ and followed by all jobs j with j > k+δ.
è An optimal sequence consists of
1. jobs 1, ..., k-1, k+1, ..., k+δ in some order
2. job k
3. jobs k+ δ+1, ... , n in some order

Ck (δ) = Σ_{j ≤ k+δ} pj
Algorithm: Minimizing Total
Tardiness
è use of specific subsets of {1, ..., n}
n J(j, l, k): all jobs in the set {j, ..., l} with a processing time ≤ pk , excluding job k itself
n V(J(j, l, k), t): total tardiness of this subset in an optimal sequence that
starts at time t
n Algorithm: Minimizing Total Tardiness
Initial conditions:
V(∅, t) = 0
V({j}, t) = max (0, t + pj – dj )
Recursive relation:

V( J(j, l, k), t ) = min_δ ( V( J(j, k'+δ, k'), t ) + max( 0, Ck' (δ) – dk' ) + V( J(k'+δ+1, l, k'), Ck' (δ) ) )

where k' is such that pk' = max { pj' : j' ∈ J(j, l, k) }
Optimal value function: V({1, ..., n}, 0)
Algorithm: Minimizing Total
Tardiness (2)
n O(n³) subsets J(j, l, k), Σ pj points in t
è O(n³ Σ pj ) recursive equations
n each recursion takes O(n) time
è running time O(n⁴ Σ pj ) — polynomial in n, pseudopolynomial overall
Example: Minimizing Total
Tardiness
jobs 1 2 3 4 5
pj 121 79 147 83 130
dj 260 266 266 336 337

n k=3 (largest processing time) ⇒ 0 ≤ δ ≤ 2 = 5 – 3

n V({1, 2, ..., 5}, 0) = min of
V(J(1, 3, 3), 0) + 81 + V(J(4, 5, 3), 347)
V(J(1, 4, 3), 0) + 164 + V(J(5, 5, 3), 430)
V(J(1, 5, 3), 0) + 294 + V(∅, 560)
n V(J(1, 3, 3), 0) = 0 — sequences 1, 2 and 2, 1
n V(J(4, 5, 3), 347) = (347 + 83 – 336) + (347 + 83 + 130 – 337) = 94 + 223 = 317 — sequence 4, 5
n V(J(1, 4, 3), 0) = 0 — sequences 1, 2, 4 and 2, 1, 4
n V(J(5, 5, 3), 430) = 430 + 130 – 337 = 223
Example: Minimizing Total
Tardiness (2)
n V(J(1, 5, 3), 0) = 76 — sequences 1, 2, 4, 5 and 2, 1, 4, 5

è V({1, ..., 5}, 0) = min ( 0 + 81 + 317, 0 + 164 + 223, 76 + 294 + 0 ) = min (398, 387, 370) = 370

Optimal sequences: 1, 2, 4, 5, 3 and 2, 1, 4, 5, 3


Total Weighted Tardiness

n 1 || Σ wj Tj is NP-complete in the strong sense


n Proof by reduction of 3 – Partition
n Dominance result
If there are two jobs j and k with dj ≤ dk , pj ≤ pk and wj ≥ wk then there
is an optimal sequence in which job j appears before job k.
Total Tardiness: An Approximation
Scheme

n NP-hard problems ⇒ find, in polynomial time, an (approximate)
solution that is close to optimal
n Fully Polynomial Time Approximation Scheme A for 1 || Σ Tj :

Σ Tj (A) ≤ (1 + ε) Σ Tj (OPT)

where OPT denotes an optimal schedule
n running time is bounded by a polynomial (of fixed degree) in n and 1/ε
Total Tardiness: An Approximation
Scheme (2)
a) n jobs can be scheduled with 0 total tardiness iff the EDD schedule
has 0 total tardiness
è Tmax (EDD) ≤ Σ Tj (OPT) ≤ Σ Tj (EDD) ≤ n · Tmax (EDD)

where Tmax (EDD) is the maximum tardiness of any job in the EDD schedule

b) V(J,t): Minimum total tardiness of job subset J assuming processing


starts at t ≥ 0
è There is a time t* ≥ 0 such that
V(J, t)=0 for t ≤ t* and
V(J, t)>0 for t > t*
⇒ V(J, t* + δ) ≥ δ for δ ≥ 0
è use the pseudopolynomial algorithm to compute V(J, t) for
t* < t < n · Tmax (EDD)
è running time bound O(n⁵ · Tmax (EDD))


Total Tardiness: An Approximation
Scheme (3)
c) Rescaling: p'j = ⌊ pj / K ⌋ and d'j = dj / K for some factor K
S: optimal sequence for the rescaled problem
Σ T*j (S): total tardiness of sequence S for processing times K·p'j ≤ pj and due dates dj
Σ Tj (S): total tardiness of sequence S for processing times pj < K·(p'j + 1) and due dates dj

è Σ T*j (S) ≤ Σ Tj (OPT) ≤ Σ Tj (S) < Σ T*j (S) + K · n(n + 1)/2
Σ Tj (S) – Σ T*j (S) < K · n(n + 1)/2
Σ Tj (S) – Σ Tj (OPT) < K · n(n + 1)/2

Select K = ( 2ε / (n(n + 1)) ) · Tmax (EDD)
Algorithm: PTAS for Minimizing
Total Tardiness
è Σ Tj (S) – Σ Tj (OPT) ≤ ε · Tmax (EDD) ≤ ε · Σ Tj (OPT)
è running time bound O( n⁵ · Tmax (EDD) / K ) = O( n⁷ / ε )

n Algorithm: PTAS for Minimizing Total Tardiness
n Step 1 Apply EDD and determine Tmax .
If Tmax = 0, then Σ Tj = 0 and EDD is optimal; STOP.
Otherwise set K = ( 2ε / (n(n + 1)) ) · Tmax (EDD).
n Step 2 Rescale processing times and due dates: p'j = ⌊ pj / K ⌋, d'j = dj / K.
n Step 3 Apply the algorithm „Minimizing Total Tardiness“ (slides 60/61) to the
rescaled data.
Example: PTAS for Minimizing
Total Tardiness
jobs 1 2 3 4 5
pj 1210 790 1470 830 1300
dj 1996 2000 2660 3360 3370
n optimal total tardiness: 3700
n Tmax (EDD) = 2230; ε = 0.02 → K = 2.973
è optimal sequences for the rescaled problem: 1, 2, 4, 5, 3 and 2, 1, 4, 5, 3
è total tardiness of these sequences for the original data: 3704
3704 ≤ 1.02 · 3700
Total Earliness and Tardiness

n Objective Σ Ej + Σ T j
è more difficult than total tardiness
Special case dj = d for all jobs j
Properties
• No idleness between any two jobs in the optimal schedule (but the first
job need not start at time 0)
• A schedule S partitions the jobs into two disjoint sets:
early completion: Cj ≤ d, job set J1
late completion: Cj > d, job set J2

• Optimal Schedule:
Early jobs (J1) use Longest Processing Time first (LPT)
Late jobs (J2) use Shortest Processing Time first (SPT)
Total Earliness and Tardiness (2)

• There is an optimal schedule such that one job completes exactly


at time d
Proof: j* starts before and completes after d
|J1| < |J2| shift schedule to the left until j* completes at d
|J1| > |J2| shift schedule to the right until j* completes at d
|J1| = |J2| j* can be shifted either way

Assume that the first job may start its processing later than t = 0 and that
p1 ≥ p2 ≥ ... ≥ pn .
Algorithm: Minimizing Total Earliness
and Tardiness with a Loose Due Date

n Step 1 Assign job 1 to set J1


Set k = 2.
n Step 2 Assign job k to set J1 and job k + 1 to set J2 or vice
versa.
n Step 3 If k+2 ≤ n – 1 , set k = k+2 and go to Step 2
If k+2 = n, assign job n to either set J1 or set J2 and
STOP
If k+2 = n+1, all jobs have been assigned; STOP.

If job processing must start at 0 ⇒ problem is NP – hard


Heuristic algorithm (effective in practice)
p1 ≥ p2 ≥ ... ≥ pn
Algorithm: Minimizing Total Earliness
and Tardiness with a Tight Due Date

n Step 1 Set τ1 = d and τ2 = Σ pj - d.


Set k = 1.
n Step 2 If τ1 > τ2, assign job k to the first unfilled position in the
sequence and set τ1 = τ1 – pk.
If τ1 < τ2, assign job k to the last unfilled position in the
sequence and set τ2 = τ2 – pk.
n Step 3 If k < n, set k = k + 1 and go to Step 2.
If k = n, STOP
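The tight-due-date heuristic above can be sketched as follows (a minimal sketch; the slides leave the tie case τ1 = τ2 open, here ties are placed last):

```python
def earliness_tardiness_heuristic(p, d):
    """Heuristic for sum E_j + sum T_j with a common (tight) due date d,
    assuming p is sorted non-increasingly: keep two budgets tau1 = d and
    tau2 = sum(p) - d and fill the sequence from both ends."""
    n = len(p)
    seq = [None] * n
    first, last = 0, n - 1
    tau1, tau2 = d, sum(p) - d
    for k in range(n):
        if tau1 > tau2:
            seq[first] = k      # early side: fills positions left to right
            first += 1
            tau1 -= p[k]
        else:
            seq[last] = k       # late side: fills positions right to left
            last -= 1
            tau2 -= p[k]
    return seq
```

On the six-job example of the following slides it reproduces the sequence 1, 3, 6, 5, 4, 2.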
Example: Minimizing Total Earliness
and Tardiness with a Tight Due Date

n 6 jobs with d = 180


jobs 1 2 3 4 5 6
pj 106 100 96 22 20 2

n Applying the heuristic yields the following results.


τ1 τ2 Assignment Sequence

180 166 Job 1 Placed First 1xxxxx

74 166 Job 2 Placed Last 1xxxx2

74 66 Job 3 Placed First 13xxx2

-22 66 Job 4 Placed Last 13xx42

-22 44 Job 5 Placed Last 13x542

-22 24 Job 6 Placed Last 136542


Example: Minimizing Total Earliness
and Tardiness with a Tight Due Date
(2)
n Objective Σ w‘Ej + Σ w‘‘Tj , dj = d
è generalization of Σ Ej + Σ Tj (the case w‘ = w‘‘ = 1)
n Objective Σ wj‘Ej + Σ wj‘‘Tj , dj = d
è use the ratios wj / pj with WLPT and WSPT instead of LPT and SPT
n Objective Σ w‘Ej + Σ w‘‘Tj , different due dates
è NP – hard
a) Sequence of the jobs
b) Idle times between the jobs
è dependent optimization problems
n Objective Σ wj‘Ej + Σ wj‘‘Tj , different due dates
è NP – hard in the strong sense
(more difficult than total weighted tardiness)
Predetermined sequence ⇒ timing can be determined in polynomial time
Primary and Secondary Objectives

A scheduling problem is solved with respect to the primary objective. If


there are several optimal solutions, the best of those solutions is
selected according to the secondary objective

α | β | γ1 (opt), γ2

γ1 : primary objective; γ2 : secondary objective

1 || Σ Cj (opt), Lmax

è schedule all jobs according to SPT


If several jobs have the same processing time use EDD to order these jobs
è SPT/EDD rule
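The SPT/EDD rule is a one-line sort (a minimal sketch, jobs indexed from 0):

```python
def spt_edd(p, d):
    """SPT with EDD tie-breaking, for 1 || sum C_j (opt), Lmax:
    sort by processing time, break ties by due date."""
    return sorted(range(len(p)), key=lambda j: (p[j], d[j]))
```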
Reversal of Priorities

n Reversal of priorities: 1 || Lmax (opt), Σ Cj
è Lmax is determined with EDD
è z := Lmax

Transformation of this problem:
d̄j = dj + z
(new deadlines d̄j from the old due dates dj )
The Optimal Schedule

Both problems are equivalent


è The optimal schedule minimizes Σ Cj
and guarantees that each job completes
by its deadline
è In such a schedule job k can be scheduled last iff
(1) d̄k ≥ Σ_{j=1..n} pj and
(2) pk ≥ pl for all l such that d̄l ≥ Σ_{j=1..n} pj

n Proof: if the first condition is not met, the schedule misses a deadline
è pairwise exchange of job l and job k (not necessarily adjacent)
è decreases Σ Cj if the second condition does not hold for l and k
Algorithm: Minimizing Total
Completion Time with Deadlines


n Step 1 Set k = n, τ = Σ_{j=1..n} pj , Jc = {1, ... , n}.
n Step 2 Find k* in Jc such that d̄k* ≥ τ and
pk* ≥ pl for all jobs l in Jc with d̄l ≥ τ.
Put job k* in position k of the sequence.
n Step 3 Decrease k by 1.
Decrease τ by pk* .
Delete job k* from Jc .
n Step 4 If k ≥ 1 go to Step 2, otherwise STOP.
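The backward procedure can be sketched as follows (a minimal sketch; ties between equally long feasible jobs are broken by the higher index here, either choice is optimal):

```python
def min_sum_cj_with_deadlines(p, dbar):
    """Backward rule for minimizing sum C_j subject to deadlines dbar_j:
    fill positions n, n-1, ..., 1 with the longest still-unscheduled job
    whose deadline covers the current makespan tau."""
    n = len(p)
    remaining = set(range(n))
    tau = sum(p)
    seq = [None] * n
    for k in range(n - 1, -1, -1):
        feasible = sorted(j for j in remaining if dbar[j] >= tau)
        if not feasible:
            raise ValueError("no schedule meets all deadlines")
        j_star = max(feasible, key=lambda j: p[j])
        seq[k] = j_star
        remaining.remove(j_star)
        tau -= p[j_star]
    return seq
```

On the five-job example of the following slides it yields the sequence 5, 1, 2, 3, 4.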
Example: Minimizing Total
Completion Time with Deadlines
jobs 1 2 3 4 5
pj 4 6 2 4 2
dj 10 12 14 18 18
n τ = 18 ⇒ d̄4 = d̄5 = 18 ≥ 18; p4 = 4 > 2 = p5
è last job: 4
è τ = 18 – p4 = 14 ⇒ d̄3 = 14 ≥ 14, d̄5 = 18 ≥ 14; p3 = 2 = p5
è either job can go in the now-last position: 3
è τ = 14 – p3 = 12 ⇒ d̄5 = 18 ≥ 12, d̄2 = 12 ≥ 12; p2 = 6 > 2 = p5
è next-last job: 2
è τ = 12 – p2 = 6 ⇒ d̄5 = 18 ≥ 6, d̄1 = 10 ≥ 6; p1 = 4 > 2 = p5
è next-last job: 1
è sequence: 5, 1, 2, 3, 4 (alternatively 3, 1, 2, 5, 4)
Multiple Objectives

n The optimal schedule is always nonpreemptive, even if preemptions are allowed.

Multiple objectives:

1 | β | Θ1 γ1 + Θ2 γ2

a weighted sum of two (or more) objectives γ1 , γ2 ,
normalized so that Θ1 + Θ2 = 1
Pareto-Optimal Schedule

n A schedule is called pareto-optimal if it is


not possible to decrease the value of one
objective without increasing the value of the
other
Θ1 → 0 and Θ2 → 1: 1 | β | Θ1 γ1 + Θ2 γ2 → 1 | β | γ2 (opt), γ1
Θ1 → 1 and Θ2 → 0: 1 | β | Θ1 γ1 + Θ2 γ2 → 1 | β | γ1 (opt), γ2

Here γ1 = Σ Cj and γ2 = Lmax .
Trade-offs between total completion
time and maximum lateness

(Plot of the pareto-optimal trade-off curve: Σ Cj decreases as Lmax increases from Lmax (EDD) to Lmax (SPT/EDD).)


Pareto-Optimal Solutions

n Generation of all pareto-optimal solutions:
To find a new pareto-optimal solution, determine the minimum increment of
Lmax that allows a decrease of the minimum Σ Cj .
This is similar to the minimization of total completion time with deadlines.
Start with the EDD schedule and end with the SPT/EDD schedule.
Algorithm: Determining Trade-Offs between
Total Completion Time and Maximum
Lateness

n Step 1 Set r = 1.
Set Lmax = Lmax (EDD) and d̄j = dj + Lmax .
n Step 2 Set k = n and Jc = {1, ... , n}.
Set τ = Σ_{j=1..n} pj and δ = τ.
n Step 3 Find j* in Jc such that d̄j* ≥ τ and
pj* ≥ pl for all jobs l in Jc with d̄l ≥ τ.
Put job j* in position k of the sequence.
n Step 4 If there is no job l such that d̄l < τ and pl > pj* , go to Step 5.
Otherwise find j** such that

τ – d̄j** = min_l ( τ – d̄l )

over all l such that d̄l < τ and pl > pj* .
Set δ** = τ – d̄j** .
If δ** < δ, then δ = δ**.
Algorithm: Determining Trade-Offs between
Total Completion Time and Maximum
Lateness (2)

n Step 5 Decrease k by 1.
Decrease τ by pj* .
Delete job j* from Jc .
If k ≥ 1 go to Step 3,
otherwise go to Step 6.
n Step 6 Set Lmax = Lmax + δ.
If Lmax > Lmax (SPT/EDD), then STOP.
Otherwise set r = r + 1, d̄j = d̄j + δ, and go to Step 2.

Maximum number of pareto-optimal points: n(n – 1)/2 = O(n²)
Computing each pareto-optimal schedule takes O(n · log(n))
è total complexity O(n³ · log(n))
Example: Determining Trade-Offs between
Total Completion Time and Maximum
Lateness

jobs 1 2 3 4 5
pj 1 3 6 7 9
dj 30 27 20 15 12

n EDD sequence 5, 4, 3, 2, 1 ⇒ Lmax (EDD) = 2 (C3 = 22, d3 = 20)

n SPT/EDD sequence 1, 2, 3, 4, 5 ⇒ Lmax (SPT/EDD) = 14 (C5 = 26, d5 = 12)
Example: Determing Trade-Offs between
Total Completion Time and Maximum
Lateness (2)
Iteration r | (Σ Cj , Lmax ) | pareto-optimal schedule | current d̄j + δ | δ
1 | 96, 2 | 5, 4, 3, 1, 2 | 32 29 22 17 14 | 1
2 | 77, 3 | 1, 5, 4, 3, 2 | 33 30 23 18 15 | 2
3 | 75, 5 | 1, 4, 5, 3, 2 | 35 32 25 20 17 | 1
4 | 64, 6 | 1, 2, 5, 4, 3 | 36 33 26 21 18 | 2
5 | 62, 8 | 1, 2, 4, 5, 3 | 38 35 28 23 20 | 3
6 | 60, 11 | 1, 2, 3, 5, 4 | 41 38 31 26 23 | 3
7 | 58, 14 | 1, 2, 3, 4, 5 | 44 41 34 29 26 | stop
n 1 || Θ1 ∑wj Cj + Θ2 Lmax
Extreme points can be determined in polynomial time (WSPT/EDD and EDD)
è But the problem with arbitrary weights Θ1 and Θ2 is NP – hard.
Parallel Machine Models

n 2 Step process
èAllocation of jobs to machines
èSequence of the jobs on a machine
n Assumption: p1 ≥ p2 ≥ ... ≥ pn
n Simple problem: Pm || C max
n Special case: P2 || C max
èEquivalent to Partition (NP-hard in the ordinary
sense)
Heuristic algorithm

n Assign the remaining job with the longest
processing time to the next free machine (LPT)

n Approximation factor: Cmax(LPT)/Cmax(OPT) ≤ 4/3 − 1/(3m)

n Contradiction: Counter example with smallest n


è Property: The shortest job n is the
l last job to start processing (LPT).

l last job to finish processing.

è If n is not the last job to finish processing then


l deletion of n does not change C max (LPT)

l but it may reduce Cmax (OPT)

èA counter example with n – 1 jobs


Heuristic algorithm (2)

è All machines are busy in time interval [0, Cmax(LPT) – pn]


è Cmax(LPT) − pn ≤ (1/m) ∑_{j=1}^{n−1} pj (starting time of job n)

è Cmax(LPT) ≤ pn + (1/m) ∑_{j=1}^{n−1} pj = pn (1 − 1/m) + (1/m) ∑_{j=1}^{n} pj

è (1/m) ∑_{j=1}^{n} pj ≤ Cmax(OPT)
Heuristic algorithm (3)
4/3 − 1/(3m) < Cmax(LPT)/Cmax(OPT) ≤ [pn (1 − 1/m) + (1/m) ∑_{j=1}^{n} pj] / Cmax(OPT)

= pn (1 − 1/m)/Cmax(OPT) + [(1/m) ∑_{j=1}^{n} pj] / Cmax(OPT) ≤ pn (1 − 1/m)/Cmax(OPT) + 1

è 4/3 − 1/(3m) < pn (1 − 1/m)/Cmax(OPT) + 1

è Cmax(OPT) < 3pn
è On each machine at most 2 jobs
è LPT is optimal for this case ⇒ contradiction
Example: A Worst Case Example
for LPT

jobs 1 2 3 4 5 6 7 8 9
pj 7 7 6 6 5 5 4 4 4
n 4 parallel machines
n Cmax(OPT) = 12 (machine loads 7+5, 7+5, 6+6, 4+4+4)
n Cmax(LPT) = 15 (machine loads 7+4+4, 7+4, 6+5, 6+5)

Cmax(LPT)/Cmax(OPT) = 15/12 = 5/4 = 4/3 − 1/12 = (16 − 1)/12

è Tight factor; time complexity O(n•log(n))
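A minimal sketch of the LPT rule reproduces this worst-case instance (`lpt_makespan` is our own name):

```python
import heapq

def lpt_makespan(p, m):
    """List-schedule jobs in decreasing processing-time order,
    always on the machine that becomes free first."""
    loads = [0] * m
    heapq.heapify(loads)
    for pj in sorted(p, reverse=True):
        load = heapq.heappop(loads)      # least-loaded machine
        heapq.heappush(loads, load + pj)
    return max(loads)

print(lpt_makespan([7, 7, 6, 6, 5, 5, 4, 4, 4], 4))  # -> 15, vs. Cmax(OPT) = 12
```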


Example: A Worst Case Example
for LPT (2)
n Arbitrary ordering of jobs (any nondelay schedule):
Cmax(LIST)/Cmax(OPT) ≤ 2 − 1/m
n There are better bounds for other algorithms.
n Pm | prec | Cmax ⇒ is at least as hard as Pm | | Cmax
(strongly NP hard
for 2 ≤ m < ∞)

n Special case m ≥ n ⇒ P∞ | prec | Cmax


è Pm | pj = 1, prec | Cmax ⇒ NP hard
è Pm | pj = 1, tree | Cmax ⇒ easily solvable with Critical Path Method (CPM)

intree
tree
outtree
Intree

[Figure: intree with levels 1, 2, 3, ...; 5 starting jobs at the highest level]

n lmax: highest level
n N(l): number of jobs at level l
Intree (2)

H(lmax + 1 − r) = ∑_{k=1}^{r} N(lmax + 1 − k)

è number of jobs in the highest r levels

n Performance of the CP algorithm for arbitrary precedence constraints:

Cmax(CP)/Cmax(OPT) ≤ 4/3 for 2 machines
Example: A Worst Case Example
of CP
n 6 jobs, 2 machines

[Precedence graph: almost fully connected bipartite graph on jobs 1, 2, 3 → 4, 5, 6]

[Gantt charts: the CP schedule reaches makespan 4, while an optimal schedule achieves makespan 3]
Example: Application of LNS rule

n Alternative approach:
è LNS: Largest Number of Successors first
è optimal for in- and outtree

n 2 machines
n Jobs 1, 4, 6 have 2 successors each

[Precedence graph on jobs 1–6 and Gantt charts comparing the LNS schedule with an optimal schedule]
Example: Application of LNS rule
(2)
n Generalization to arbitrary processing times
n CP, LNS: largest total amount of processing
Pm | pj = 1, Mj | Cmax

n Mj are nested: 1 of 4 conditions is valid for jobs j and k.


è Mj = Mk
è Mj ⊂ Mk
è Mk ⊂ Mj
è Mk ∩ Mj = ∅
Theorem: The optimal LFJ rule

n Least Flexible Job first (LFJ)


n When a machine is freed: pick the job that can be
scheduled on the least number of machines
è LFJ is optimal for Pm | p j = 1 , M j | C max

if the M j are nested.


§ Proof by contradiction
j is the first job that violates LFJ rule.
j* could be placed at the position of j
by use of LFJ rules.
è Mj ∩ Mj* ≠ ∅ and |Mj*| ≤ |Mj|
è Mj* ⊆ Mj
è Exchange of j* and j still results in an optimal schedule.
è LFJ is optimal for P2 | p j = 1 , M j | Cmax
Example: Application of LFJ rule

n Consider P 4 | p j = 1 , M j | Cmax with eight jobs. The eight M j sets are:


è M1 = {1,2}
è M2 = M3 = {1,3,4}
è M4 = {2}
è M5 = M6 = M7 = M8 = {3,4}

machines 1 2 3 4
LFJ 1, 2, 3 4 5, 7 6, 8
optimal 2, 3 1, 4 5, 6 7, 8

§ LFM (Least Flexible Machine first) does not guarantee
optimality for this example either.
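A sketch of the LFJ rule on this instance; ties are broken by the lowest job index and machines are scanned in order 1..4 in every unit time slot (these tie-breaks are our own assumptions):

```python
def lfj_makespan(machine_sets, m):
    remaining = set(machine_sets)                 # unscheduled unit jobs
    t = 0
    while remaining:
        t += 1
        for i in range(1, m + 1):                 # each machine takes one job per slot
            eligible = [j for j in remaining if i in machine_sets[j]]
            if eligible:
                # least flexible job = smallest machine set, ties by index
                j = min(eligible, key=lambda j: (len(machine_sets[j]), j))
                remaining.remove(j)
    return t

M = {1: {1, 2}, 2: {1, 3, 4}, 3: {1, 3, 4}, 4: {2},
     5: {3, 4}, 6: {3, 4}, 7: {3, 4}, 8: {3, 4}}
print(lfj_makespan(M, 4))  # -> 3, while the optimal makespan is 2
```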
Linear Program

§ Pm | prmp | Cmax
n Preemptions ⇒ often simpler analysis
n xij: total processing time of job j on machine i

∑_{i=1}^{m} xij = pj (job j is completely processed)

∑_{i=1}^{m} xij ≤ Cmax (processing of job j not larger than the makespan)

∑_{j=1}^{n} xij ≤ Cmax (processing time on each machine not larger than the makespan)

xij ≥ 0
Makespan with Preemptions

n Solution of LP: Processing of each job on each machine ⇒


Generation of a schedule

n Lower bound:
Cmax ≥ max( p1, ∑_{j=1}^{n} pj / m ) = C*max
Algorithm
n Step 1: Schedule all jobs on a single machine (total length ∑ pj ≤ m • C*max)
n Step 2: Cut this schedule into m parts of length C*max
n Step 3: Execute each part on a different machine
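The three steps are McNaughton's wrap-around rule; a sketch (names our own):

```python
def wrap_around(p, m):
    """Cut the concatenated single-machine schedule into m parts of length C*max."""
    cmax = max(max(p), sum(p) / m)            # lower bound C*max, attained exactly
    schedule = [[] for _ in range(m)]         # pieces (job, start, end) per machine
    machine, t = 0, 0.0
    for j, pj in enumerate(p):
        left = pj
        while left > 1e-12:
            piece = min(left, cmax - t)       # fill the current machine up to C*max
            schedule[machine].append((j, t, t + piece))
            t, left = t + piece, left - piece
            if t >= cmax - 1e-12:             # machine full: wrap to the next one
                machine, t = machine + 1, 0.0
    return cmax, schedule

cmax, sched = wrap_around([8, 7, 6], 2)
print(cmax)  # -> 10.5
```

A job split at a cut runs at the end of one machine and the start of the next; its two pieces cannot overlap in time because pj ≤ C*max.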
Example: Application of LRPT Rule

n Alternative:Longest Remaining Processing Time first (LRPT)


è Infinite number of preemptions
n Example: 2 jobs with p1 = p2 = 1, 1 machine
n Switching after every time period ε:
è Makespan: 2; total completion time: 4 – ε
(optimal total completion time: 3)
§ Vector of remaining processing times at time t:
(p1(t), p2(t), ..., pn(t)) = p̄(t)

§ p̄(t) majorizes q̄(t), written p̄(t) ≥m q̄(t), if

∑_{j=1}^{k} p(j)(t) ≥ ∑_{j=1}^{k} q(j)(t) for all k = 1, ..., n

p(j)(t): j-th largest element of p̄(t)
q(j)(t): j-th largest element of q̄(t)
Example: Application of LRPT
Rule (2)
n Example: (Vector Majorization)
Consider the two vectors p(t ) = (4, 8, 2, 4) and q(t ) = (3, 0, 6, 6).
Rearranging the elements within each vector and putting these
in decreasing order results in vectors (8, 4, 4, 2) and (6, 6, 3, 0).
It can be easily verified that p( t) ≥ m q( t ) .

§ If p( t) ≥ m q( t ) then LRPT applied to p(t ) results in a larger or equal


makespan than obtained by applying LRPT to q(t ) . (Discrete Times)
Proof: Application of LRPT Rule

n Proof: Induction on the total amount of remaining


processing
n Induction base: vectors (1, 0, ..., 0) and (1, 0, ..., 0)

∑_{j=1}^{n} pj(t+1) ≤ ∑_{j=1}^{n} pj(t) − 1

∑_{j=1}^{n} qj(t+1) ≤ ∑_{j=1}^{n} qj(t) − 1

if p̄(t) ≥m q̄(t) ⇒ p̄(t+1) ≥m q̄(t+1)
Proof: Application of LRPT Rule
(2)
n LPRT yields an optimal schedule for Pm | prmp | Cmax

n Proof by induction on total amount of remaining processing.


§ Induction base: the lemma holds for fewer than m jobs
with 1 unit of processing time left each
è Assume LRPT is optimal for ∑_{j=1}^{n} pj(t) ≤ N − 1
è Consider a vector p̄(t) with ∑_{j=1}^{n} pj(t) = N
Proof: Application of LRPT Rule
(3)
n Contradiction: another rule* is optimal at time t
(but LRPT is applied from time t+1 onward)

rule*: p̄(t) → p̄'(t+1)
LRPT: p̄(t) → p̄(t+1)

è p̄'(t+1) ≥m p̄(t+1)
è Makespan of rule* is not smaller than the makespan of LRPT
Example: Application of LRPT in
Discrete Time
n Consider two machines and three jobs 1, 2 and 3, with processing
times 8, 7, and 6.
The makespan is 11.

n p1 = 8, p2 = 7, p3 = 6

[Gantt chart of the LRPT schedule on 2 machines, makespan 11]
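A discrete-time simulation of LRPT (ties broken by job index, our own assumption) confirms the makespan:

```python
def lrpt_makespan(p, m):
    rem = list(p)
    t = 0
    while any(r > 0 for r in rem):
        # the m jobs with the longest remaining processing get the machines
        busy = sorted((j for j in range(len(rem)) if rem[j] > 0),
                      key=lambda j: (-rem[j], j))[:m]
        for j in busy:
            rem[j] -= 1
        t += 1
    return t

print(lrpt_makespan([8, 7, 6], 2))  # -> 11
```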
Example: Application of LRPT in
Continuous Time
n Consider the same jobs as in the previous example.
As preemptions may be done at any point in time,
processor sharing takes place.
The makespan is now 10.5.

[Gantt chart: job 1 on machine 1, jobs 2 and 3 sharing machine 2; once all remaining processing times are equal, all three jobs share both machines. Makespan 10.5]
Example: Application of LPRT in
Continuous Time (2)
n Qm | prmp | Cmax
n Lower bound for the optimal schedule of Qm | prmp | Cmax:

Cmax ≥ max( p1/v1, (p1 + p2)/(v1 + v2), ..., ∑_{j=1}^{m−1} pj / ∑_{j=1}^{m−1} vj, ∑_{j=1}^{n} pj / ∑_{j=1}^{m} vj )

for v1 ≥ v2 ≥ ... ≥ vm
Example: Application of LPRT in
Continuous Time (3)
§ Longest Remaining Processing Time on the Fastest
Machine first (LRPT – FM) yields an optimal schedule for
Qm | prmp | Cmax

§ Discrete framework (speed and time)


Replace machine i by vi machines of unit speed.
A job can be processed on more than one
machine in parallel if the machines are
derived from the same machine.
Example: Application of the
LRPT-FM Rule
n 2 machines and 3 jobs
n processing times 8, 7, 6
n machine speed v1 = 2, v2 = 1

n Schedule:
è Job 1 → machine with two speed units
è Job 2 → machine with one speed unit
è time 1 all jobs have a remaining processing time equal to 6
è from time 1 each job occupies one speed unit machine
è Makespan is equal to 7

§ Continuous time: Multiplication of the processing times by a large


number K and the speed by a large number V
§ LRPT-FM rules yields optimal schedules if applied to
Qm | rj, prmp | Cmax
The Total Completion Time
Without Preemptions
n Total completion time without preemptions
P(j): processing time of job in position j on
a single machine

∑C j = n • p(1) + (n − 1) • p( 2 ) + ...... + 2 • p( n−1) + p( n )

(nondelay schedule)
è p(1) ≤ p(2) ≤ p(3) ≤ ..... ≤ p(n-1) ≤ p(n)

for an optimal schedule


The Total Completion Time
Without Preemptions (2)
n SPT rule is optimal for Pm || ∑ Cj

n Assume n/m is integer (otherwise add jobs with processing time 0)

è n • m coefficients: m times n
m times n – 1
:
m times 2
m times 1

n Pm || ∑ wj Cj ⇒ NP hard
Example: Application of WSPT
Rule

jobs 1 2 3
pj 1 1 3
wj 1 1 3
n 2 machines
n 3 jobs
n Any schedule is WSPT
è w1 = w2 = 1 - ε ⇒ WSPT is not necessarily optimal

n Approximation factor:
∑ wjCj(WSPT) / ∑ wjCj(OPT) < (1 + √2)/2 (tight)

n Pm | prec | ∑ Cj : strongly NP-hard


Pm | pj = 1, outtree | ∑ Cj

n CP rule is optimal for Pm | pj = 1, outtree | ∑ Cj


n Valid if at most m jobs are schedulable.
n t1: last time instance CP is not applied ( instead CP‘)
n string 1: longest string not assigned at t1
n string 2: shortest string assigned at t 1
n C1‘: completion time of last job of string 1 under CP‘
n C2‘: completion time of last job of string 2 under CP‘

n If C1' ≥ C2' + 1 and machines are idle before C1' – 1 ⇒ CP is better;
otherwise CP is not worse than CP'.
n CP rule is not optimal for intrees!
LFJ rule is optimal for Pm | pj =1, Mj | ∑ Cj

n The LFJ rule is optimal for Pm | pj =1, Mj | ∑ Cj if the Mj sets are


nested.
n Proof: see Makespan proof
n Pm | pj = 1, Mj | ∑ Cj ∝ Rm || ∑ Cj
Pm | pj = 1, Mj | ∑ Cj ∝ Qm || ∑ Cj
Rm || ∑ Cj : special integer program (solvable in polynomial time)

n xikj = 1 if job j is scheduled as the kth to last job on machine i.


n xikj = 0 otherwise
n Minimize ∑_{i=1}^{m} ∑_{j=1}^{n} ∑_{k=1}^{n} k • pij • xikj

k • pij: contribution of job j to ∑Cj if scheduled as k-th to last on machine i
LFJ rule is optimal for Pm | pj =1, Mj | ∑ Cj

n Constraints:

∑_{i=1}^{m} ∑_{k=1}^{n} xikj = 1, j = 1, ... , n
(each job is scheduled exactly once)

∑_{j=1}^{n} xikj ≤ 1, i = 1, ... , m; k = 1, ... , n
(each position is taken not more than once)

n xikj ∈ {0, 1} (no preemption of jobs)
è Weighted bipartite matching: n jobs ⇒ n • m positions

n The optimal schedule may have unforced idleness.


Example: Minimizing Total completion
Time with Unrelated Machines

jobs 1 2 3
p1j 4 5 3
p2j 8 9 3

Machine 1: job 1, then job 2 (C1 = 4, C2 = 9)
Machine 2: job 3 (C3 = 3)
∑Cj = 16
Total Completion Time with
Preemptions

n Problem Qm | prmp | ∑ Cj
è There exists an optimal schedule with
Cj ≤ Ck if pj ≤ pk for all j and k.
n Proof: Pair wise exchange
n The SRPT-FM rule is optimal for Qm | prmp | ∑ Cj .
n Shortest Remaining Processing Time on the Fastest Machine
v1 ≥ v2 ≥ ... ≥ vn Cn ≤ Cn-1 ≤ ... ≤ C1
n There are n machines.
è more jobs than machines ⇒ add machines with speed 0
è more machines than jobs ⇒ the slowest machines are not used
Proof: SRPT-FM rule is optimal for
Qm | prmp | ∑ Cj

v1Cn = pn
v2Cn + v1(Cn-1 – Cn ) = pn-1
v3Cn + v2(Cn-1 – Cn) + v1(Cn-2 – Cn-1) = pn-2
:
vnCn + vn-1(Cn-1 – Cn) + ... + v1(C1 – C2) = p1

Adding these equations yields


v1Cn = pn
v2Cn + v1Cn-1 = pn + pn-1
v3Cn + v2Cn-1 + v1Cn-2 = pn + pn-1 + pn-2
:
vnCn+vn-1Cn-1 + ... + v1C1 = pn + pn-1 + ... + p1
Proof: SRPT-FM rule is optimal for
Qm | prmp | ∑ Cj (2)

n S‘ is optimal ⇒ C‘n ≤ C‘n-1 ≤ ... ≤ C‘1


n C‘n ≥ pn/v1 ⇒ v1C‘n ≥ pn
n In [0, C‘n-1] processing is done on jobs n and n – 1:
pn + pn-1 ≤ (v1 + v2)C‘n + v1(C‘n-1 – C‘n)
è v2C‘n + v1C‘n-1 ≥ pn + pn-1
è vkC‘n + vk-1C‘n-1 + ... + v1C‘n-k+1 ≥ pn + pn-1 + ... + pn-k+1
è

v1C‘n ≥ v1Cn
v2C‘n + v1C‘n-1 ≥ v2Cn + v1Cn-1
:
vnC‘n + vn-1C‘n-1 + ... + v1C‘1 ≥ vnCn + vn-1Cn-1 + ... + v1C1
Proof: SRPT-FM rule is optimal for
Qm | prmp | ∑ Cj (3)

n Multiply inequality i by αi ≥ 0 and obtain ∑ C‘j ≥ ∑ Cj ⇒ The proof is


complete if those αi exists.
è αi must satisfy
v1α1 + v2α2 + ... + vnα n = 1
v1α2 + v2α3 + ... + vn-1α n = 1
:
v1αn= 1
n Those αi exists as v1 ≥ v2 ≥ ... ≥ vn
Example: Application of SRPT-FM
Rule

machines 1 2 3 4
vi 4 2 2 1

jobs 1 2 3 4 5 6 7
pj 8 16 34 40 45 46 61
C1=2 C2=5 C3=11 C4=16 C5=21 C6=26 C7=35

[Gantt chart of the SRPT-FM schedule on the 4 machines]

∑Cj = 116
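An event-driven sketch of SRPT-FM (exact rational arithmetic; names our own) reproduces the completion times:

```python
from fractions import Fraction

def srpt_fm(p, v):
    """At every completion, reassign the jobs with the shortest remaining
    processing time to the fastest machines."""
    rem = {j: Fraction(pj) for j, pj in enumerate(p, start=1)}
    speeds = sorted(v, reverse=True)
    t, completion = Fraction(0), {}
    while rem:
        jobs = sorted(rem, key=lambda j: rem[j])       # SRPT order
        pairs = list(zip(jobs, speeds))                # shortest job -> fastest machine
        delta = min(rem[j] / s for j, s in pairs if s > 0)
        t += delta                                     # advance to the next completion
        for j, s in pairs:
            rem[j] -= delta * s
            if rem[j] == 0:
                completion[j] = t
                del rem[j]
    return completion

C = srpt_fm([8, 16, 34, 40, 45, 46, 61], [4, 2, 2, 1])
print(sum(C.values()))  # -> 116, with completion times 2, 5, 11, 16, 21, 26, 35
```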
Due – Date Related Objectives

n Pm || Cmax ∝ Pm || Lmax ⇒ NP – hard (all due dates 0)


n Qm | prmp | Lmax
n Assume Lmax = z
è Cj ≤ dj + z ⇒ set d j = dj + z (hard deadline)
n Finding a schedule for this problem is equivalent to solving
Qm | rj, prmp | Cmax
n Reverse direction of time. Release each job j at K – dj (for a
sufficiently big K)
[Figure: reversing the time axis turns due date dj into release date K – dj]

è Solve the problem with LRPT- FM for Lmax ≤ z and do a logarithmic


search over z
Example: Minimizing Maximum
Lateness with Preemptions

jobs 1 2 3 4
dj 4 5 8 9
pj 3 3 3 8
n P2 | prmp | Lmax
n 4 jobs
n Is there a feasible schedule with Lmax = 0? (d̄j = dj)

jobs 1 2 3 4
rj 5 4 1 0
pj 3 3 3 8

n Is there a feasible schedule with Cmax ≤ 9 for the reversed problem?


Flow Shops and Flexible Flow
Shops

n Each job must follow the same route è sequence of


machines
n Limited buffer space between neighboring machines
èBlocking can occur
n Main objective: makespan (related to utilization)
n First-come-first-served principle is in effect è jobs cannot
pass each other: permutation flow shop
Permutation Schedule

n Permutation schedule: j1, j2, ..., jn

Ci,j1 = ∑_{l=1}^{i} pl,j1, i = 1, ..., m

C1,jk = ∑_{l=1}^{k} p1,jl, k = 1, ..., n

Ci,jk = max(Ci−1,jk, Ci,jk−1) + pi,jk, i = 2, ..., m; k = 2, ..., n
Directed Graph for the Computation
of the Makespan in Fm|prmu|Cmax

n Consider 5 jobs on 4 machines with the following


processing times
jobs j1 j2 j3 j4 j5
p1,jk 5 5 3 6 3
p2,jk 4 4 2 4 4
p3,jk 4 4 3 4 1
p4,jk 3 6 3 2 5
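The recursion above can be evaluated directly for this instance (a sketch; names our own, jobs indexed from 0):

```python
def flow_shop_cmax(p, seq):
    """p[i][j]: processing time of job j on machine i; seq: job permutation."""
    m, n = len(p), len(seq)
    C = [[0] * n for _ in range(m)]
    for k, j in enumerate(seq):
        for i in range(m):
            prev_machine = C[i - 1][k] if i > 0 else 0   # C(i-1, jk)
            prev_job = C[i][k - 1] if k > 0 else 0       # C(i, jk-1)
            C[i][k] = max(prev_machine, prev_job) + p[i][j]
    return C[m - 1][n - 1]

p = [[5, 5, 3, 6, 3],
     [4, 4, 2, 4, 4],
     [4, 4, 3, 4, 1],
     [3, 6, 3, 2, 5]]
print(flow_shop_cmax(p, [0, 1, 2, 3, 4]))  # -> 34
```

Reversing the machine order and the job sequence gives the same value, an instance of the reversibility property.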
Directed Graph for the Computation
of the Makespan in Fm|prmu|Cmax

[Directed grid graph: node (i, jk) carries weight pi,jk; edges run from (i, jk) to (i+1, jk) and to (i, jk+1)]
Directed Graph, Critical Paths and
Gantt Chart (1)

[Directed graph with the processing times as node weights; the critical path determines the makespan]
Directed Graph, Critical Paths and
Gantt Chart (2)

[Critical path and Gantt chart for the example; makespan 34]
Critical Path

n The makespan is determined by a Critical path in the directed graph:


n Comparing 2 permutation flow shops with processing times p(1)ij and p(2)ij,
where p(1)ij = p(2)m+1−i,j:

è Sequencing the jobs according to permutation j 1, ... , jn in the first flow


shop produces the same makespan as permutation jn, ... , j1 in the
second flow shop (Reversibility)
Example: Graph Representation
and Reversibility

n Consider 5 jobs on 4 machines with the following


processing times
jobs j1 j2 j3 j4 j5
p1,jk 5 2 3 6 3
p2,jk 1 4 3 4 4
p3,jk 4 4 2 4 4
p4,jk 3 6 3 5 5
Example: Graph Representation
and Reversibility (2)

[Directed graph for the reversed instance]
Example: Graph Representation
and Reversibility (3)

[Critical path and Gantt chart for the reversed instance; the makespan is again 34]
SPT(1)-LPT(2) Schedule is
optimal for F2||Cmax
n Problem F2||C max with unlimited storage (optimal solution is always
permutation)
n Johnson‘s rule produces an optimal sequence
- Job partitioning into 2 sets
Set I : all jobs with p1j ≤ p2j
Set II : all jobs with p2j < p1j
- Set I : Those jobs are scheduled first in increasing order of p1j (SPT)
- Set II : Those jobs are scheduled afterwards in decreasing order of
p2j (LPT)
è SPT (1) – LPT(2) schedule
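A sketch of Johnson's rule (the 5-job instance is our own illustration, not from the slides):

```python
def johnson_sequence(p1, p2):
    jobs = range(len(p1))
    set1 = sorted((j for j in jobs if p1[j] <= p2[j]), key=lambda j: p1[j])   # SPT on p1
    set2 = sorted((j for j in jobs if p1[j] > p2[j]), key=lambda j: -p2[j])   # LPT on p2
    return set1 + set2

def two_machine_cmax(p1, p2, seq):
    c1 = c2 = 0
    for j in seq:
        c1 += p1[j]                  # completion on machine 1
        c2 = max(c1, c2) + p2[j]     # completion on machine 2
    return c2

p1, p2 = [3, 5, 1, 6, 7], [6, 2, 2, 6, 5]
seq = johnson_sequence(p1, p2)
print(seq, two_machine_cmax(p1, p2, seq))  # -> [2, 0, 3, 4, 1] 24
```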
n Proof by pairwise adjacent interchange and contradiction
n Original schedule job l ⇒ job j ⇒ job k ⇒ job h
Cij : completion time (original schedule)
C‘ij : completion time (new schedule)
SPT(1)-LPT(2) Schedule is
optimal for F2||Cmax (2)
n Interchange of j and k
è Starting time of job h on machine 1 is not affected (C1l + p1j + p1k)
Starting time of job h on machine 2:
C2k = max ( max ( C2l, C1l + p1j) + p2j, C1l + p 1j + p 1k) + p2k
= max ( C2l + p2j + p2k,
C1l + p1j + p2j + p2k,
C1l + p1j + p1k + p 2k)
C‘2j = max (C2l + p2k + p 2j, C1l + p 1k + p2k + p 2j, C1l + p 1k + p1j + p2j)
n Assume another schedule is optimal
n Case 1 : j ∈ Set II and k ∈ Set I
è p1j > p 2j and p 1k ≤ p2k
è C1l + p1k + p2k + p 2j < C1l + p 1j + p1k + p2k
C1l + p1k + p1j + p2j ≤ C1l + p1j + p2j + p2k
è C‘2j ≤ C2k
Multiple Schedules That Are
Optimal
n Case 2 : j , k ∈ Set I and p1j > p1k
è p1j ≤ p2j , p1k ≤ p2k
è C1l + p1k + p2k + p2j ≤ C1l + p1j + p2j + p2k and
C1l + p1k + p1j + p2j ≤ C1l + p1j + p2j + p2k
è C‘2j ≤ C2k
n Case 3 : j , k ∈ Set II and p2j < p2k
è similar to Case 2
n There are many other optimal schedules besides SPT(1) – LPT(2)
schedules
n Fm | prmu | Cmax : Formulation as a Mixed Integer Program (MIP)
n Decision variable xjk = 1 if job j is the kth job in the sequence
Auxiliary variables

n Iik : amount of idle time on machine i between the processing of jobs


in position k and k+1
n W ik: amount of waiting time of job in position k between machines i
and i+1
Relationship between Iik and W ik
n ∆ik: difference between start time of job in position k+1 on machine
i+1 and completion time of job in position k on machine i
n pi(k) : processing time of job in position k on machine I

è ∆ik= Iik + pi(k+1) + Wi,k+1 = Wik + p i+1(k) + Ii+1,k


Constraints in the Integer Program
Formulation
[Diagram: on machine i, Δik = Iik + pi(k+1) + Wi,k+1; on machine i+1, Δik = Wik + pi+1(k) + Ii+1,k]

è If Wik > 0 then Ii+1,k = 0

Multiple Schedules That Are
Optimal (2)
n Minimizing the makespan ≡ minimizing the total idle time on machine m:

∑_{i=1}^{m−1} pi(1) + ∑_{j=1}^{n−1} Imj

(earliest start time of the job in position 1 on machine m, plus the intermediate idle time on machine m)

n Remember: pi(k) = ∑_{j=1}^{n} xjk pij

è there is only one job at position k!

MIP

n MIP: min ∑_{i=1}^{m−1} ∑_{j=1}^{n} xj1 pij + ∑_{j=1}^{n−1} Imj
n subject to

∑_{j=1}^{n} xjk = 1, k = 1, ..., n

∑_{k=1}^{n} xjk = 1, j = 1, ..., n

Iik + ∑_{j=1}^{n} xj,k+1 pij + Wi,k+1 − Wik − ∑_{j=1}^{n} xjk pi+1,j − Ii+1,k = 0,
k = 1, ..., n-1; i = 1, ..., m-1

Wi1 = 0, i = 1, ..., m-1
I1k = 0, k = 1, ..., n-1
Wik ≥ 0, i = 1, ..., m-1; k = 1, ..., n
Iik ≥ 0, i = 1, ..., m; k = 1, ..., n-1
xjk ∈ {0,1}, j = 1, ..., n; k = 1, ..., n
F3||Cmax is strongly NP-hard

n F3 || Cmax is strongly NP-hard


n Proof by reduction from 3 – Partition
n An optimal solution for F3 || Cmax does not require sequence changes
⇒ Fm | prmu | Cmax is strongly NP – hard
n Fm | prmu, pij = pj | Cmax : proportionate permutation flow shop
n The processing of job j is the same on each machine
n Cmax = ∑_{j=1}^{n} pj + (m − 1) • max(p1, ..., pn) for
Fm | prmu, pij = pj | Cmax (independent of the sequence)

è also true for Fm | pij = pj | Cmax
Similarities between single machine
and proportionate flow shop

n Similarities between single machine and proportionate (permutation)


flow shop
- 1 || ∑ Cj Fm | prmu, pij = pj | ∑ Cj
- 1 || ∑ Uj Fm | prmu, pij = pj | ∑ Uj
- 1 || hmax Fm | prmu, pij = pj | hmax
- 1 || ∑ Tj Fm | prmu, pij = pj | ∑ Tj (pseudo polynomial algorithm)
- 1 || ∑ wjTj Fm | prmu, pij = pj | ∑ wjTj (elimination criteria)

n Slope index Aj for job j:

Aj = − ∑_{i=1}^{m} (m − (2i − 1)) pij

n Sequencing of jobs in decreasing order of the slope index

Example: Application of Slope Heuristic

n Consider 5 jobs on 4 machines with the following processing times:

jobs j1 j2 j3 j4 j5
p1,jk 5 5 3 6 3
p2,jk 4 4 2 4 4
p3,jk 4 4 3 4 1
p4,jk 3 6 3 2 5

A1 = -(3 x 5) – (1 x 4) + (1 x 4) + (3 x 3) = -6
A2 = -(3 x 5) – (1 x 4) + (1 x 4) + (3 x 6) = +3
A3 = -(3 x 3) – (1 x 2) + (1 x 3) + (3 x 3) = +1
A4 = -(3 x 6) – (1 x 4) + (1 x 4) + (3 x 2) = -12
A5 = -(3 x 3) – (1 x 4) + (1 x 1) + (3 x 5) = +3
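The slope indices above can be recomputed with a few lines (matrix rows are machines; names our own):

```python
def slope_index(p, j):
    m = len(p)
    return -sum((m - (2 * i - 1)) * p[i - 1][j] for i in range(1, m + 1))

p = [[5, 5, 3, 6, 3],
     [4, 4, 2, 4, 4],
     [4, 4, 3, 4, 1],
     [3, 6, 3, 2, 5]]
A = [slope_index(p, j) for j in range(5)]
print(A)                                        # -> [-6, 3, 1, -12, 3]
order = sorted(range(5), key=lambda j: -A[j])   # decreasing slope index
print([j + 1 for j in order])                   # -> [2, 5, 3, 1, 4] (tie broken by index)
```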
Example: Application of Slope Heuristic
(2)

n F2 || ∑ Cj is strongly NP – hard
è Fm | prmu | ∑ Cj is strongly NP – hard
as sequence changes are not required in the optimal schedule for 2
machines
Flow Shops with Limited Intermediate
Storage

n Assumption: No intermediate storage, otherwise one storage place is


modeled as machine on which all jobs have 0 processing time
n Fm | block | Cmax
n Dij : time when job j leaves machine i Dij = Cij
n For sequence j1, …, jn the following equations hold:

Di,j1 = ∑_{l=1}^{i} pl,j1, i = 1, ..., m

D1,jk = max(D1,jk−1 + p1,jk, D2,jk−1)

Di,jk = max(Di−1,jk + pi,jk, Di+1,jk−1), i = 2, ..., m − 1

Dm,jk = Dm−1,jk + pm,jk
è Critical path in a directed graph
Weight of node (i, jk) specifies the departure time of job jk from machine i
Edges have weights 0 or a processing time
Directed Graph for the Computation of
the makespan

[Directed graph with nodes (i, jk), i = 0, ..., m; vertical edges carry the processing times, the remaining edges carry weight 0]
Graph Representation of a Flow Shop
with Blocking
jobs j1 j2 j3 j4 j5
p1,jk 5 5 3 6 3
p2,jk 4 4 2 4 4
p3,jk 4 4 3 4 1
p4,jk 3 6 3 2 5

[Directed graph and Gantt chart for the flow shop with blocking]
Flow Shops with Limited Intermediate
Storage (2)

n The reversibility result holds as well:


n If pij(1) = p(2)m+1-I,j then sequence j1, …, jn in the first flow shop has the
same makespan as sequence jn, …., j1 in the second flow shop
n F2 | block | Cmax is equivalent to Traveling Salesman problem with
n+1 cities
n When a job starts its processing on machine 1, the preceding
job starts its processing on machine 2
è time for job jk on machine 1
max( p 1, jk , p 2, jk − 1 )
Exception: The first job j* in the sequence spends time p1,j* on machine 1
Distance from city j to city k
d0k = p1k
dj0 = p2j
djk = max (p2j, p1k)
Example: A Two Machine Flow Shop
with Blocking and the TSP

n Consider 4 job instance with processing times


jobs 1 2 3 4
P1,j 2 3 3 9
P2,j 8 4 6 2

n Translates into a TSP with 5 cities

cities 0 1 2 3 4
b,j 0 2 3 3 9
a,j 0 8 4 6 2

n There are two optimal schedules


n 1, 4, 2, 3 ⇒ tour 0 → 1 → 4 → 2 → 3 → 0 and
n 1, 4, 3, 2
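The departure-time recursion gives the blocking makespan directly; a sketch verifying both optimal sequences (names our own, jobs indexed from 0):

```python
from itertools import permutations

def blocking_cmax(p, seq):
    """Makespan of a flow shop with zero intermediate buffers (blocking)."""
    m = len(p)
    D = [0] * (m + 1)        # D[i]: departure time of the previous job from machine i
    for j in seq:
        new = [0] * (m + 1)
        new[0] = D[1]        # the job may enter machine 1 once it is free
        for i in range(1, m):
            # finish on machine i, but stay (blocked) until machine i+1 is free
            new[i] = max(new[i - 1] + p[i - 1][j], D[i + 1])
        new[m] = new[m - 1] + p[m - 1][j]
        D = new
    return D[m]

p = [[2, 3, 3, 9],
     [8, 4, 6, 2]]
print(blocking_cmax(p, [0, 3, 1, 2]))   # sequence 1,4,2,3 -> 24
print(blocking_cmax(p, [0, 3, 2, 1]))   # sequence 1,4,3,2 -> 24
best = min(permutations(range(4)), key=lambda s: blocking_cmax(p, s))
print(blocking_cmax(p, best))           # -> 24 (optimal by enumeration)
```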
Example: A Two Machine Flow Shop
with Blocking and the TSP (2)

n Comparison: the SPT(1) – LPT(2) schedules for unlimited
buffers are
1, 2, 3, 4 and 1, 3, 2, 4
n F3 | block | Cmax is strongly NP – hard and cannot be
described as a travelling salesman problem
Special cases of Fm | block | Cmax

n Special cases of Fm | block | Cmax


è Proportionate case: Fm | block, pij = pj | Cmax
n A schedule is optimal for Fm | block, pij = pj | Cmax if and
only if it is an SPT- LPT schedule
n Proof: Cmax ≥ ∑_{j=1}^{n} pj + (m − 1) • max(p1, ..., pn)

(optimal makespan with unlimited buffers)

n Proof – concept:
l Any SPT-LPT schedule matches the lower bound
l Any other schedule is strictly greater than the lower bound
A schedule is optimal for Fm | block, pij = p j | C max
if and only if it is an SPT- LPT schedule

n SPT – part: A job is never blocked


n LPT – part: No machine must ever wait for a job
è The makespan of an SPT – LPT schedule is identical to an SPT – LPT
schedule for unlimited buffers.
n Second part of the proof by contradiction:
the job with the longest processing time contributes m times its
processing time to the makespan
n If the schedule is not an SPT-LPT schedule
è a job jh is positioned between two jobs with longer processing times
è this job is either blocked in the SPT part, or the following jobs cannot be
processed on machine m without idle time in between
Heuristic for Fm | block | Cmax
Profile Fitting (PF)
n Local optimization
- Selection of a first job (e.g. smallest sum of processing times)
- Pick as next job the one that wastes the minimal time on all m
machines
- Weights can be used for the idle times on the machines, depending on
the degree of congestion
Example: Application of PF Heuristic
(unweighted PF)

jobs j1 j2 j3 j4 j5
p1,jk 5 5 3 6 3
p2,jk 4 4 2 4 4
p3,jk 4 4 3 4 1
p4,jk 3 6 3 2 5

n First job: job 3 (shortest total processing time)
n Second job: candidates 1, 2, 4, 5 with idle times 11, 11, 15, 3
è job 5
è Sequence: 3, 5, 1, 2, 4 with makespan 32
= makespan for unlimited storage
è optimal makespan
n First job: job 2 (largest total processing time)
è Sequence: 2, 1, 3, 5, 4 with makespan 35

F2 | block | Cmax = F2 | nwt | Cmax
but Fm | block | Cmax ≠ Fm | nwt | Cmax
Flexible Flow Shop with Unlimited
Intermediate Storage

n Proportionate case FFc | pij = pj | Cmax:
è non preemptive: LPT heuristic (the problem is NP-hard)
è preemptive: LRPT heuristic (optimal for a single stage)
Example: Minimizing Makespan with
LPT

n p1 = p2 = 100 p3 = p4 = … = p102 = 1
n 2 stages: 2 machines at first stage
1 machine at second stage

Optimal schedule:
1st stage: job 1 on machine 1 (0–100), job 2 on machine 1 (100–200), jobs 3–102 on machine 2 (0–100)
2nd stage: jobs 3–102 (1–101), job 1 (101–201), job 2 (201–301) ⇒ makespan 301

LPT heuristic:
1st stage: jobs 1 and 2 first (0–100), jobs 3–102 afterwards
2nd stage: job 1 (100–200), job 2 (200–300), jobs 3–102 (300–400) ⇒ makespan 400
Flexible Flow Shop with Unlimited
Intermediate Storage (2)

n FFc | pij = pj | ∑ Cj
n SPT is optimal for a single stage and for any numbers of stage with a
single machine at each stage
n SPT rule is optimal for FFc | pij = pj | ∑ Cj if each stage has at least as
many machines as the preceeding stage
n Proof:
Single stage: SPT minimizes ∑Cj and the sum of the starting times
∑(Cj – pj)
c stages: Cj occurs not earlier than c•pj time units after the starting time
of job j at the first stage
Same number of machines at each stage:
under SPT each job need not wait for processing at the next stage

è ∑_{j=1}^{n} Cj = sum of the starting times + ∑_{j=1}^{n} c•pj
Job Shops

n The route of every job is fixed but not all jobs follow the same route
n J2 || Cmax
n J1,2 : set of all jobs that have to be processed first on machine 1
n J2,1 : set of all jobs that have to be processed first on machine 2
n Observation:
If a job from J1,2 has completed its processing on machine 1 the postponing
of its processing on machine 2 does not matter as long as machine 2 is not
idle.
n A similar observation holds for J2,1
è a job from J1,2 has a higher priority on machine 1 than any job from J2,1 and vice
versa
n Determining the sequence of jobs from J1,2
è F2 || Cmax : SPT(1) – LPT(2) sequence
è machine 1 will always be busy
n J2 || Cmax can be reduced to two F2 || Cmax problems
Representation as a disjunctive graph G

n Jm || Cmax is strongly NP hard


n Representation as a disjunctive graph G
Set of nodes N : Each node corresponds to an operation (i, j) of job j
on machine i
Set of conjunctive edges A: An edge from (i, j) to (k, j) denotes that
job j must be processed on machine k
immediately after it is processed on
machine i
Set of disjunctive edges B: There is a disjunctive edge from any
operation (i, j) to any operation (i, h), that
is, between any two operations that are
executed on the same machine
è All disjunctive edges of a machine form a clique of double arcs
Each edge has the processing time of its origin node as weight
Directed Graph for Job Shop with Makespan as
Objective

n There is a dummy source node U connected to the first operation of


each job. The edges leaving U have the weight 0.
n There is a dummy sink node V, that is the target of the last operation
of each job.

[Disjunctive graph: U → (1,1) → (2,1) → (3,1) → V; U → (2,2) → (1,2) → (4,2) → (3,2) → V; U → (1,3) → (2,3) → (4,3) → V; plus disjunctive edge pairs between operations on the same machine]
Feasible schedule

n Feasible schedule: Selection of one disjunctive edge from each pair


of disjunctive edges between two nodes such that the resulting graph
is acyclic
n Example
h,j i,j

h,k i,k

n D: set of selective disjunctive edges


n G(D): Graph including D and all conjunctive edges
n Makespan of a feasible schedule: Longest path from U to V in G(D)
è 1. Selection of the disjunctive edges D
è 2. Determination of the critical path
Disjunctive Programming Formulation

n yij: starting time of operation (i,j)
n Minimize Cmax subject to
ykj ≥ yij + pij if (i,j) → (k,j) is a conjunctive edge
Cmax ≥ yij + pij for all operations (i,j)

yij ≥ yil + pil or
yil ≥ yij + pij for all (i,l) and (i,j) with i = 1, …, m
yij ≥ 0 for all operations (i,j)
Example: Disjunctive Programming
Formulation

n 4 machines , 3 jobs
jobs machine sequence processing times
1 1, 2, 3 p11 = 10, p21 = 8, p31 = 4
2 2, 1, 4, 3 p22 = 8, p12 = 3, p42 = 5, p32 = 6
3 1, 2, 4 p13 = 4, p 23 = 7, p 43 = 3

n y21 ≥ y11 + p11 = y11 + 10
n Cmax ≥ y11 + p11 = y11 + 10
n y11 ≥ y12 + p12 = y12 + 3 or y12 ≥ y11 + p11 = y11 + 10
Branch and Bound Method to
Determine all Active Schedules

n Ω :set of all schedulable operations (predecessors of


these operations are already scheduled),
n ri, j :earliest possible starting time of operation

(i, j) ∈ Ω

n Ω’ ⊆ Ω
n t(Ω): earliest possible completion time of an operation in Ω
Generation of all Active Schedules

n Step 1: (Initial Conditions) Let Ω contain the first operation of each


job; Let rij = 0 , for all (i, j ) ∈ Ω
n Step 2: (machine selection) compute for the current partial schedule
t(Ω) = min_{(i,j)∈Ω} {rij + pij}

and let i* denote the machine on which the minimum is achieved.


n Step 3: (Branching) Let Ω' denote the set of all operations (i*, j) on
machine i* such that ri*,j ≤ t(Ω).

For each operation in Ω ' , consider an (extended) partial schedule


with that operation as the next one on machine i*.
For each such (extended) partial schedule, delete the operation
from Ω , include its immediate follower in Ω , and return to Step 2.
Generation of all Active Schedules

n Result: Tree with each active schedule being a leaf


n A node v in this tree: partial schedule
èSelection of disjunctive edges to describe the order of
all operations that are predecessors of Ω
n An outgoing edge of v: Selection of an operation
(i*, j) ∈ Ω' as the next job on machine i*
èThe number of edges leaving node v = number of
operations in Ω '

n v’: successor of v
èSet D’ of the selected disjunctive edges at v’è G(D’)
Lower Bound for Makespan at v’

n simple lower bound: critical path in graph G(D’)


n complex lower bound:
è critical path from the source to any unscheduled operation:
release date of this operation
è critical path form any unscheduled operation to the sink: due date
of this operation
è Sequencing of all unscheduled operations on the appropriate
machine for each machine separately
è1 | rj | Lmax for each machine (strongly NP-hard)
è Reasonable performance in practice
Application of Branch and Bound

[Disjunctive graph for the instance above with the processing times as edge weights]
Application of Branch and Bound
Level 1

n Initial graph: only conjunctive edges


èMakespan: 22
n Level 1:

Ω = {(1,1), (2,2),(1,3 )}
t( Ω ) = min( 0 + 10,0 + 8,0 + 4) = 4
i* = 1
Ω' = {(1,1),(1,3)}
Schedule Operation (1,1) first

[Graph with the disjunctive edges (1,1) → (1,2) and (1,1) → (1,3) added]
Schedule Operation (1,1) first

n 2 disjunctive edges are added:


n (1,1) à (1,2)
n (1,1) à (1,3)
èMakespan: 24
Schedule Operation (1,1) first

n Improvements of lower bound by generating an


instance of 1 | rj | Lmax for machine 1

jobs 1 2 3

pij 10 3 4

rij 0 10 10

dij 12 13 14

n L max =3 with sequence 1,2,3


n Makespan: 24+3=27
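Since the instances are tiny, the 1 | rj | Lmax bounds can be checked by enumeration (a sketch; names our own):

```python
from itertools import permutations

def lmax(seq, p, r, d):
    t, worst = 0, float("-inf")
    for j in seq:
        t = max(t, r[j]) + p[j]        # respect the release date
        worst = max(worst, t - d[j])
    return worst

# 1 | rj | Lmax instance for machine 1
p = {1: 10, 2: 3, 3: 4}
r = {1: 0, 2: 10, 3: 10}
d = {1: 12, 2: 13, 3: 14}
best = min(permutations(p), key=lambda s: lmax(s, p, r, d))
print(best, lmax(best, p, r, d))  # -> (1, 2, 3) 3
```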
Schedule Operation (1,1) first

n Instance of 1 | rj | Lmax for machine 2

jobs 1 2 3

pij 8 8 7

rij 10 0 14

dij 20 10 21

n Lmax = 4 with sequence 2,1,3


n Makespan: 24+4 = 28
Schedule operation (1,3) first

n 2 disjunctive edges are added è Makespan: 26


n 1 | rj | L max for machine 1
n Lmax = 2 with sequence 3, 1, 2
èMakespan: 26+2=28
Application of Branch and Bound
Level 2

n Level 2: Branch from node (1,1)

Ω = {(2,2), (2,1), (1,3 )}


t( Ω) = min(0 + 8,10 + 8,10 + 4 ) = 8
i* = 2
Ω' = {(2,2)}
n There is only one choice
n (2,2) à (2,1); (2,2) à (2,3)
n Two disjunctive edges are added
Branching Tree

Level 0: no disjunctive edges selected

Level 1: node "(1,1) scheduled first on machine 1" (LB = 28);
node "(1,3) scheduled first on machine 1" (LB = 28)

Level 2: node "(2,2) scheduled first on machine 2" (LB = 28),
branched from the (1,1) node
Continuation of the Procedure yields

machine job sequence
1 1, 3, 2 (or 1, 2, 3)
2 2, 1, 3
3 1, 2
4 2, 3

Makespan: 28
Gantt chart for J4 || Cmax

[Gantt chart: machine 1: jobs 1, 3, 2; machine 2: jobs 2, 1, 3; machine 3: jobs 1, 2; machine 4: jobs 2, 3; makespan 28]
Shifting Bottleneck Heuristic

n A sequence of operations has been determined for a subset M0 of all


m machines.
è disjunctive edges are fixed for those machines
n Another machine must be selected to be included in M0: Cause of
severest disruption (bottleneck)
§ All disjunctive edges for machines not in M0 are deleted → Graph G’
Makespan of G’ : Cmax (M0)
è for each operation (i, j) with i ∉ M0 determine release date and due date
è allowed time window
n Each machine not in M0 produces a separate 1 | rj | Lmax problem
è Lmax(i): minimum Lmax of machine i
n Machine k with the largest Lmax(i) value is the bottleneck
è Determination of the optimal sequence for this machine → Introduction of
disjunctive edges
è Makespan increase from M0 to M0 ∪ {k} by at least Lmax(k)
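The subproblem-and-bottleneck step can be sketched as follows. The release and due dates below are made-up illustrative values (in the heuristic they are derived from longest paths in G’), and the brute-force solver is my own stand-in for an exact 1 | rj | Lmax algorithm, which is fine for such tiny instances.

```python
from itertools import permutations

def one_machine_lmax(jobs, r, p, d):
    # Optimal 1 | rj | Lmax by full enumeration (subproblems here are tiny)
    best_seq, best = None, float("inf")
    for cand in permutations(jobs):
        t, worst = 0, float("-inf")
        for j in cand:
            t = max(t, r[j]) + p[j]        # respect release date, then process
            worst = max(worst, t - d[j])
        if worst < best:
            best_seq, best = cand, worst
    return best_seq, best

def pick_bottleneck(subproblems):
    # Bottleneck: the machine whose optimal Lmax value is largest
    results = {m: one_machine_lmax(*data) for m, data in subproblems.items()}
    k = max(results, key=lambda m: results[m][1])
    return k, results[k]

# Hypothetical (jobs, r, p, d) per machine not yet in M0
subs = {1: ([1, 2], {1: 0, 2: 2}, {1: 3, 2: 2}, {1: 5, 2: 4}),
        2: ([1, 2], {1: 0, 2: 0}, {1: 4, 2: 1}, {1: 4, 2: 6})}
k, (best_seq, best_lmax) = pick_bottleneck(subs)
print(k, best_seq, best_lmax)  # 1 (1, 2) 1
```

Machine k is then added to M0 with its optimal sequence fixed as disjunctive edges.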
Example: Shifting Bottleneck
Heuristic
n Resequencing of the operations of all machines in M0
jobs    machine sequence    processing times
1       1, 2, 3             p11 = 10, p21 = 8, p31 = 4
2       2, 1, 4, 3          p22 = 8, p12 = 3, p42 = 5, p32 = 6
3       1, 2, 4             p13 = 4, p23 = 7, p43 = 3
n Iteration 1: M0 = ∅ → G’ contains only conjunctive edges
è Makespan (longest total processing time of any job): 22
n 1 | rj | Lmax problem for machine 1:
optimal sequence 1, 2, 3 → Lmax(1)=5
n 1 | rj | Lmax problem for machine 2:
optimal sequence 2, 3, 1 → Lmax(2)=5
n Similarly Lmax(3) = 4, Lmax(4) = 0
è Machine 1 or machine 2 is the bottleneck
Example: Shifting Bottleneck
Heuristic (2)
è Machine 1 is selected → disjunctive edges are added: graph G’’
Cmax({1}) = Cmax(∅) + Lmax(1) = 22 + 5 = 27

[Figure: disjunctive graph G’’ with source S and sink T; conjunctive chains (1,1) → (2,1) → (3,1), (2,2) → (1,2) → (4,2) → (3,2), (1,3) → (2,3) → (4,3), edge weights given by the processing times; disjunctive arcs fixed for machine 1 according to the sequence 1, 2, 3]
Example: Shifting Bottleneck
Heuristic (3)
n Iteration 2
n 1 | rj | Lmax problem for machine 2
optimal sequence 2, 1, 3 → Lmax(2)=1
n 1 | rj | Lmax problem for machine 3
optimal sequences 1, 2 and 2, 1 → Lmax(3) = 1
Similarly Lmax(4) = 0
n Machine 2 is selected: M0 = {1, 2}
Cmax({1,2}) = Cmax({1}) + Lmax(2) = 27 + 1 = 28
Disjunctive edges are added to include machine 2
Resequencing for machine 1 does not yield any improvement
n Iteration 3
No further bottleneck is encountered: Lmax(3) = 0, Lmax(4) = 0
è Overall makespan: 28

machines     1        2        3     4
sequences    1, 2, 3  2, 1, 3  2, 1  2, 3
Open Shops

n O2 || Cmax
Cmax ≥ max( ∑j=1..n p1j , ∑j=1..n p2j )

n In which cases is Cmax strictly greater than the right hand side of the
inequality?
n Non-delay schedules
è An idle period can occur only if exactly one job remains to be processed and this job is currently executed on the other machine: this happens on at most one of the two machines
n Longest Alternate Processing Time first (LAPT)
Whenever a machine is free start processing the job that has the
longest processing time on the other machine
n The LAPT rule yields an optimal schedule for O2 || Cmax with
makespan
Cmax = max( maxj∈{1,...,n} (p1j + p2j) , ∑j=1..n p1j , ∑j=1..n p2j )
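This closed form can be evaluated directly. The data below is illustrative, reusing the machine-1 and machine-2 processing times of the earlier job shop example; the function name is my own.

```python
def o2_optimal_makespan(p1, p2):
    # Optimal O2 || Cmax value (achieved by the LAPT rule):
    # the largest of the biggest job load and the two machine loads
    return max(max(a + b for a, b in zip(p1, p2)), sum(p1), sum(p2))

print(o2_optimal_makespan([10, 3, 4], [8, 8, 7]))  # max(18, 17, 23) = 23
```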
Open Shops (2)

n Assumption: p1j ≤ p1k and p2j ≤ p1k for all jobs j
è longest processing time belongs to operation (1, k)
LAPT: Job k is started on machine 2 at time 0
è Job k has lowest priority on machine 1
n It is only executed on machine 1 if no other job is available for
processing on machine 1
a) k is the last job to be processed on machine 1
b) k is the second to last job to be processed on machine 1 and the last job is not available due to processing on machine 2
n Generalization: The 2(n-1) remaining operations can be processed in
any order without unforced idleness.
n No idle period in any machine → optimal schedule
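A discrete-event sketch of the LAPT rule follows. This is a simplified illustration, not the lecture's own implementation: ties and simultaneous assignments are resolved arbitrarily, which may schedule operations in a different order but still reaches the optimal makespan on the instances tried.

```python
def lapt_makespan(p1, p2):
    # Simulate LAPT for O2 || Cmax: whenever a machine becomes free it starts
    # the waiting job with the longest processing time on the *other* machine.
    n = len(p1)
    p = (p1, p2)
    done = [set(), set()]            # jobs finished on each machine
    running = [None, None]           # (job, finish time) per machine, or None
    t = 0
    while len(done[0]) + len(done[1]) < 2 * n:
        for m in (0, 1):             # release operations finishing at time t
            if running[m] is not None and running[m][1] <= t:
                done[m].add(running[m][0])
                running[m] = None
        for m in (0, 1):             # LAPT assignment for idle machines
            if running[m] is None:
                other = 1 - m
                on_other = running[other][0] if running[other] else None
                avail = [j for j in range(n)
                         if j not in done[m] and j != on_other]
                if avail:
                    j = max(avail, key=lambda x: p[other][x])
                    running[m] = (j, t + p[m][j])
        active = [r[1] for r in running if r is not None]
        if not active:
            break                    # everything scheduled
        t = min(active)              # advance to the next completion
    return t

print(lapt_makespan([10, 3, 4], [8, 8, 7]))  # 23
```

On this instance the result equals max( maxj (p1j + p2j), ∑ p1j, ∑ p2j ) = 23, as the optimality result states.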
Open Shops (3)

n Case 1: Idle period on machine 2
è job l needs processing on machine 2 (it is the last job on machine 2) while it is still being processed on machine 1
è job l starts on machine 2 at the same time as job k starts on machine 1; since p1k ≥ p2l, machine 1 determines the makespan and there is no idle time on machine 1 → optimal schedule
n Case 2: Idle period on machine 1
è all operations are executed on machine 1 except (1, k), and job k is still processed on machine 2
è if the makespan is determined by machine 2 → optimal schedule without idle periods
è if the makespan is determined by machine 1 → makespan p2k + p1k, optimal schedule
General Heuristic Rule

n Longest Total Remaining Processing on Other Machines first rule,
but Om || Cmax is NP-hard for m ≥ 3
(LAPT is also optimal for O2 | prmp | Cmax)
n Lower bound
Cmax ≥ max( maxj∈{1,...,n} ∑i=1..m pij , maxi∈{1,...,m} ∑j=1..n pij )

è For m = 2 the optimal schedule matches this lower bound (likewise the optimal preemptive schedule for general m)
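The bound is just the larger of the biggest job load and the biggest machine load; a sketch (my own encoding, with p[i][j] the processing time of job j on machine i and illustrative values):

```python
def om_lower_bound(p):
    # p[i][j]: processing time of job j on machine i
    m, n = len(p), len(p[0])
    job_loads = [sum(p[i][j] for i in range(m)) for j in range(n)]
    machine_loads = [sum(p[i]) for i in range(m)]
    return max(max(job_loads), max(machine_loads))

# 3 machines, 2 jobs: job loads 9 and 7, machine loads 5, 5, 6
print(om_lower_bound([[2, 3], [4, 1], [3, 3]]))  # 9
```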


The problem O2 || Lmax is strongly NP-hard (reduction from 3-Partition)
[Figure: two Gantt charts for a four-job open shop instance, both with machine sequences M1: 1, 4, 2, 3 and M2: 2, 1, 3, 4; the first schedule contains an unnecessary increase in makespan, the second avoids it]
