The document discusses allocation and scheduling problems for tasks with precedence constraints, resource requirements, deadlines, and characteristics. It defines key terms like feasible schedule, offline vs online scheduling, preemptive vs non-preemptive, and provides examples of rate monotonic and earliest deadline first scheduling algorithms. It outlines assumptions of the algorithms and describes schedulability tests to determine if a set of periodic and sporadic tasks can be feasibly scheduled.

1 Allocation/Scheduling problem statement

Given: a set of tasks, task precedence constraints, resource requirements, task characteristics and deadlines.

To find: a feasible allocation/schedule on a given computer.

<(T) indicates the precedent task set of task T.

Precedence graph: i < j indicates that task Ti must precede task Tj.

The precedence operator is transitive: i < j and j < k implies that i < k.

Resource requirements: processor time, memory, bus access.

Release time of a task: the time when all data needed to begin execution of the task is available.

Deadline: the time by which a task must complete execution.

Relative deadline = absolute deadline − release time.

For task Ti with relative deadline di and release time t, the absolute deadline is Di = t + di.

A deadline can be hard or soft, depending on the task.

Nature of a task: periodic, sporadic, or aperiodic.

Periodic task: period = deadline, i.e. it runs once in each period.

Sporadic tasks: invoked at irregular intervals, characterized by an upper bound on the rate at which they are invoked.

When there is no such upper bound, the task is aperiodic.

Feasible schedule: all tasks in the set start after their release times and finish before their deadlines.

A-feasible: algorithm A run on the set of tasks results in a feasible schedule.

Define a schedule as a functional mapping S: set of processors × time → set of tasks.

S(i, t) is the task scheduled to run on processor i at time t.

Offline scheduling: precomputed, specifying the times of running periodic tasks, with slots for sporadic ones.

Online scheduling: dynamically obtained as tasks arrive; it must be fast so as not to disturb the rhythm of the actual tasks.

Static priority algorithms: task priority is constant. Rate Monotonic (RM) is the best-known algorithm.

Dynamic priority algorithms: task priority changes with time. The best known is Earliest Deadline First (EDF).

Preemptive schedule: tasks may be interrupted by others.

Non-preemptive schedule: tasks run to completion or are blocked for want of resources; this is inflexible and can be anomalous.

A preemptive schedule is flexible: critical tasks meet their deadlines by interrupting less critical ones.

Preemption requires storing all relevant register data for resuming the task later, which is not always possible.

The vast majority of scheduling problems with more than two processors are NP-complete.

Uniprocessor scheduling is usually tractable. Hence, tasks are first allocated to processors, and then each processor is considered separately for a feasible schedule.

The allocation of a task is changed in case the schedule becomes infeasible.

2 Notations of RM and EDF algorithms

n is the number of tasks in task set S.

ei is the execution time of task Ti.

Pi is the period of Ti.

Ii is the phasing of Ti; the k-th period begins at time Ii + (k − 1)Pi.

di is the deadline of Ti relative to its release time.

Di is the absolute deadline of Ti.

ri is the release time of Ti.

h(t) is the sum of the execution times of task iterations in the task set that have absolute deadlines not later than t.

3 Assumptions of RM and EDF algorithms

• A1. No task has any nonpreemptable section; the cost of preemption is negligible.

• A2. Only processing requirements are significant; memory and I/O requirements are negligible.

• A3. All tasks are independent; hence no precedence constraints are involved.

• A4. All tasks in the task set are periodic (for RM only).

• A5. The relative deadline of a task is its period (for RM only).

4 More assumptions for RM algorithm

• A6. The priority of a task is inversely related to its period.

• A7. Higher priority tasks can preempt lower priority tasks.

5 Example: Consider three periodic tasks

• Periods 2, 6 and 10.

• Execution times 0.5, 2.0 and 1.75.

• Task phasings 0, 1 and 3.

Since P1 < P2 < P3, T1 has the highest priority. Every time it is released, T2 and T3 are preempted.

Similarly, T3 cannot execute while instances of T1 and T2 are unfinished.

start end task scheduled instance
0.00 0.50 T1 1st
0.50 1.00 (idle)
1.00 2.00 T2 1st
2.00 2.50 T1 2nd
2.50 3.50 T2 1st
3.50 4.00 T3 1st
4.00 4.50 T1 3rd
4.50 5.75 T3 1st
5.75 6.00 (idle)
6.00 6.50 T1 4th
6.50 7.00 (idle)
7.00 8.00 T2 2nd
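The schedule above can be reproduced with a small fixed-step RM simulator. This is only a sketch: the function `rm_schedule`, the 0.25 time step, and the representation of tasks as (period, execution time, phasing) triples are choices of this illustration, not from the notes.

```python
def rm_schedule(tasks, horizon, dt=0.25):
    """tasks: list of (period, exec_time, phasing), ordered so that
    index 0 has the smallest period (highest RM priority)."""
    remaining = [0.0] * len(tasks)
    timeline = []                       # (time, index of running task or None)
    for k in range(int(horizon / dt)):
        t = k * dt
        for i, (P, e, I) in enumerate(tasks):
            # a new instance of task i is released at I, I+P, I+2P, ...
            if t >= I and abs((t - I) % P) < 1e-9:
                remaining[i] = e
        ready = [i for i in range(len(tasks)) if remaining[i] > 1e-9]
        running = min(ready) if ready else None   # smallest period wins
        if running is not None:
            remaining[running] -= dt
        timeline.append((t, running))
    return timeline

# periods 2, 6, 10; execution times 0.5, 2.0, 1.75; phasings 0, 1, 3
sched = dict(rm_schedule([(2, 0.5, 0), (6, 2.0, 1), (10, 1.75, 3)], 8.0))
```

Looking up `sched[t]` for t = 0.00, 1.00, 3.50, ... reproduces the table, with `None` in the idle slots.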

6 Schedulability test

Sufficient condition: the total utilization of the tasks satisfies U ≤ n(2^(1/n) − 1).
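As a sketch, this sufficient utilization test can be written as follows (the function name is mine):

```python
def rm_utilization_test(tasks):
    """Sufficient (not necessary) RM test. tasks is a list of
    (execution_time, period) pairs; True means guaranteed schedulable."""
    n = len(tasks)
    U = sum(e / P for e, P in tasks)
    return U <= n * (2 ** (1 / n) - 1)

# the three-task example above: U ≈ 0.758 <= 3(2^(1/3) − 1) ≈ 0.780
print(rm_utilization_test([(0.5, 2), (2.0, 6), (1.75, 10)]))
```

Because the test is only sufficient, a set that fails it may still be schedulable, as a later example in these notes shows.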

Necessary and sufficient conditions for RM schedulability (assuming zero phasing):

Sort all tasks in the task set by priority. For the highest priority task T1, we require e1 ≤ P1.

This is both necessary and sufficient, since all other tasks are preempted whenever T1 is invoked.

For T2, the first iteration has to find enough time in [0, P2]. Suppose T2 finishes at time t.

The number of iterations of T1 released in the interval [0, t] is ⌈t/P1⌉.

All these invocations of T1 must complete, along with the invocation of T2, within the time t.

So the condition is that ⌈t/P1⌉e1 + e2 ≤ t for some t within [0, P2].

One has to check whether t ≥ ⌈t/P1⌉e1 + e2 at some t that is a multiple of P1 (or t = P2) with t ≤ P2.

Note that there are only finitely many multiples of P1 within P2.

Similarly, t ≥ ⌈t/P1⌉e1 + ⌈t/P2⌉e2 + e3 for some t that is a multiple of P1 and/or P2 with t ≤ P3.

Let Wi(t) = Σ_{j=1..i} ej⌈t/Pj⌉ and Li(t) = Wi(t)/t.

Define Li = min{Li(t) : 0 < t ≤ Pi} and L = max{Li}.

Wi(t) is the total amount of work demanded by T1, T2, ..., Ti initiated in the interval [0, t].

Task Ti completes at some t′ iff Wi(t′) = t′, if such a t′ exists.

Given a set of n periodic tasks, task Ti can be feasibly RM-scheduled iff Li ≤ 1.

It suffices to compute Wi(t) only at the times ti = {lPj | j = 1, ..., i; l = 1, ..., ⌊Pi/Pj⌋}, i.e. when tasks are released.
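A sketch of this exact test, assuming zero phasing and integer periods (the function name and structure are my own):

```python
import math

def rm_exact_test(tasks):
    """Necessary-and-sufficient RM test. tasks: (exec_time, period) pairs
    with integer periods; checks W_i(t) <= t at release times only."""
    tasks = sorted(tasks, key=lambda ep: ep[1])        # RM priority order
    for i in range(len(tasks)):
        P_i = tasks[i][1]
        # checkpoints t = l * P_j: the release instants in (0, P_i]
        points = {l * P_j for _, P_j in tasks[:i + 1]
                  for l in range(1, P_i // P_j + 1)}
        demand = lambda t: sum(e * math.ceil(t / P) for e, P in tasks[:i + 1])
        if not any(demand(t) <= t for t in points):
            return False              # task i+1 misses its deadline
    return True
```

For example, `rm_exact_test([(2, 5), (3, 7)])` is True (W2(5) = 2 + 3 = 5 ≤ 5), while inflating e2 to 3.5 makes the demand exceed t at both checkpoints 5 and 7.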

The RM scheduling conditions are:

RM1. If Wi(t) ≤ t at some checkpoint ti (equivalently, Li ≤ 1), task Ti is RM-schedulable.

RM2. If L = max_i min_t (Wi(t)/t) ≤ 1 for i = 1, ..., n, then the entire set is RM-schedulable.

Consider 4 tasks with (execution time, period) pairs as follows: (20, 100); (30, 150); (80, 210); (100, 400).

For the first task, check at t = 100.
For T1 and T2, check at 100 and 150.
For T1, T2 and T3, check at 100, 150, 200 and 210.
For T1, T2, T3 and T4, check at 100, 150, 200, 210, 300 and 400.

Draw the line Wi(t) = t on a graph. If any part of the Wi(t) plot falls on or below this line, Ti is RM-schedulable.

For the first task, check whether e1 ≤ 100, which is true.

To incorporate T2, check e1 + e2 ≤ 100 or 2e1 + e2 ≤ 150, which are true.

To incorporate T3, check e1 + e2 + e3 ≤ 100 or 2e1 + e2 + e3 ≤ 150 or 2e1 + 2e2 + e3 ≤ 200 or 3e1 + 2e2 + e3 ≤ 210.

To incorporate T4, check e1 + e2 + e3 + e4 ≤ 100 or 2e1 + e2 + e3 + e4 ≤ 150 or 2e1 + 2e2 + e3 + e4 ≤ 200 or 3e1 + 2e2 + e3 + e4 ≤ 210 or 3e1 + 2e2 + 2e3 + e4 ≤ 300 or 4e1 + 3e2 + 2e3 + e4 ≤ 400.

Here, T1, T2 and T3 can be incorporated in the RM schedule; T4 cannot.
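The inequalities above can be checked numerically; this is a quick sketch assuming zero phasing, with `W(i, t)` standing for the Wi(t) of the notes.

```python
import math

e = [20, 30, 80, 100]
P = [100, 150, 210, 400]

def W(i, t):
    """Work demanded by tasks T1..T(i+1) with releases in [0, t]."""
    return sum(e[j] * math.ceil(t / P[j]) for j in range(i + 1))

# T3 is schedulable: W3(t) <= t at some checkpoint (here t = 150)
print(any(W(2, t) <= t for t in (100, 150, 200, 210)))          # True
# T4 is not: W4(t) > t at every checkpoint up to P4 = 400
print(all(W(3, t) > t for t in (100, 150, 200, 210, 300, 400))) # True
```

At t = 400, for instance, W4(400) = 80 + 90 + 160 + 100 = 430 > 400.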

7 Incorporation of sporadic tasks

1. The minimum interarrival time of a sporadic task may be treated as its period in RM.

2. Define a fictitious periodic task of highest priority with some chosen fictitious execution period. During the scheduled run of this task, sporadic tasks are allocated.

3. The deferred server approach tackles the idle time introduced in the above approach. When no sporadic tasks are pending, other tasks are executed in order of priority. However, on arrival of sporadic tasks, the other tasks are preempted.

Let the total utilization be U, out of which the processor utilization allotted to sporadic tasks is Us.

Schedulability is guaranteed if
U ≤ 1 − Us when Us ≤ 0.5,
U ≤ Us when Us > 0.5.

A task set of arbitrarily low utilization may not be schedulable if Us > 0.5.
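The stated guarantee can be sketched as a helper; the function and its name are mine, and it simply encodes the condition quoted above.

```python
def ds_guaranteed(U, Us):
    """True if the condition above guarantees schedulability, given
    total utilization U and the share Us allotted to sporadic tasks."""
    return U <= 1 - Us if Us <= 0.5 else U <= Us
```

For example, with Us = 0.3 a total utilization of 0.6 is guaranteed (0.6 ≤ 0.7), but 0.8 is not.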

The processor utilization is U = Σ(ei/Pi) for n tasks.

Full utilization occurs when the RM schedule meets all deadlines, but the task set ceases to be RM-schedulable if the execution time of any task is increased.

Example: P1 = 5, P2 = 7; e1 = 2, e2 = 3; I1 = I2 = 0. Then U = (2/5) + (3/7) ≈ 0.83.

start end task scheduled instance
0 2 T1 1
2 5 T2 1
5 7 T1 2
7 10 T2 2
10 12 T1 3
12 14 (idle)
14 15 T2 3
15 17 T1 4
17 19 T2 3
19 20 (idle)

But try to increase e1 or e2, and some deadline will be missed.
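Numerically, this example sits just above the sufficient utilization bound of Section 6 and yet is schedulable, illustrating that the bound is sufficient but not necessary (a quick check):

```python
U = 2 / 5 + 3 / 7                 # ≈ 0.8286
bound = 2 * (2 ** 0.5 - 1)        # 2(√2 − 1) ≈ 0.8284
print(U > bound)                  # True: the utilization test fails,
                                  # yet the schedule above meets every deadline
```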

For two tasks, it can be shown that (e1/P1) + (e2/P2) ≤ 2(√2 − 1) is the condition for guaranteed RM schedulability.

Let P2 > P1, so T2 has the lower priority. During one period of T2, there are ⌈P2/P1⌉ releases of T1.

We want to find the maximum e2 as a function of e1 such that the tasks are RM-schedulable.

Case 1: The processor is not executing T1 at time P2.

In this case every iteration of T1 in [0, P2] is completed by time P2. The last iteration of T1 is released at time P1⌊P2/P1⌋. Hence we must have P1⌊P2/P1⌋ + e1 ≤ P2, and the maximum possible value of e2 is

e2 = P2 − e1⌈P2/P1⌉.

The corresponding utilization is

U = (e1/P1) + 1 − (e1⌈P2/P1⌉/P2) = 1 + e1((1/P1) − (⌈P2/P1⌉/P2)),

i.e. U = 1 + e1 × (a quantity that is less than or equal to zero), together with the restriction on e1.

That means the utilization is monotonically non-increasing in e1.

Case 2: The processor is executing T1 at time P2.

In this case P1⌊P2/P1⌋ + e1 > P2. Task T2 must complete execution by the time the last iteration of T1 in [0, P2] is released.

This release occurs at time P1⌊P2/P1⌋, so T2 completes within [0, P1⌊P2/P1⌋].

Within that interval, T1 occupies the processor for the time ⌊P2/P1⌋e1. Hence, e2,max = ⌊P2/P1⌋(P1 − e1), and

U = (P1/P2)⌊P2/P1⌋ + e1((1/P1) − (⌊P2/P1⌋/P2)).

Since (1/P1) − (⌊P2/P1⌋/P2) ≥ 0, U is monotonically non-decreasing in e1.

From the two cases, we find that under full utilization the minimum value of U occurs when P1⌊P2/P1⌋ + e1 = P2.

Let P2/P1 = I + f with integer I and 0 ≤ f < 1, so that ⌈P2/P1⌉ = I + 1 when f > 0.

Then U = 1 + ((P2 − P1⌊P2/P1⌋)/P1) − ((P2 − P1⌊P2/P1⌋)/P2)⌈P2/P1⌉,

or, U = 1 if f = 0, and U = (I + f²)/(I + f) otherwise. Now for f > 0, U is minimized when I is minimized; the minimum is attained for I = 1.

Differentiating U with respect to f gives 2f/(1 + f) − (1 + f²)/((1 + f)²). Setting the derivative to zero, we obtain f = √2 − 1, so that U = 2(√2 − 1).

This is the least upper bound of U under full utilization. For n tasks, it is U = n(2^(1/n) − 1).
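The bound shrinks toward ln 2 as n grows; a quick numeric check:

```python
import math

def rm_bound(n):
    """Least upper bound n(2^(1/n) - 1) on RM-schedulable utilization."""
    return n * (2 ** (1 / n) - 1)

for n in (1, 2, 3, 10):
    print(n, round(rm_bound(n), 4))   # 1.0, 0.8284, 0.7798, 0.7177
print(round(math.log(2), 4))          # limit as n → ∞: 0.6931
```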

It can be shown that to achieve full utilization one should take ei = Pi+1 − Pi for i = 1, ..., n − 1. For full utilization, en = 2P1 − Pn is then required, with Pn < 2P1.

U = (P2 − P1)/P1 + (P3 − P2)/P2 + ... + (Pn − Pn−1)/Pn−1 + (2P1 − Pn)/Pn

= (P2/P1) + (P3/P2) + ... + (Pn/Pn−1) + 2P1/Pn − n.

We must choose P1, P2, ..., Pn such that U is minimized, subject to Pn < 2P1.

8 RM is an optimal static priority algorithm

If any other static priority algorithm can produce a feasible schedule, so can RM.

Proceed by contradiction: suppose some algorithm A generates a feasible schedule for task set T while RM fails.

Then A assigns priorities differently from RM and schedules on that basis.

Consider two tasks Ti and Tj of adjacent priority, with Priority(Ti) = Priority(Tj) + 1 as per A. Further assume that Pi > Pj, so that RM would assign Ti a priority lower than Tj.

Let SA be the schedule produced by A. Consider S′ produced by interchanging the priorities of Ti and Tj.

If all deadlines are met in SA, they are also met in S′, owing to the fact that Pi > Pj.

Finding all such pairs Ti and Tj and exchanging them in this manner, we arrive at the RM priority assignment.

Thus T is RM-schedulable, contradicting the starting hypothesis.

9 EDF is optimal for uniprocessors

Suppose some task set is feasibly scheduled by another algorithm Σ but not by EDF. Let t2 be the earliest absolute deadline missed by EDF.

Let t1 be the last instant before t2 at which EDF had the processor working on a task whose absolute deadline exceeds t2; set t1 = 0 if no such instant exists. Then only tasks with absolute deadlines ≤ t2 are scheduled by EDF in [t1, t2].

Any task executing in that interval has been released at or after t1: at t1 there was no pending task with deadline ≤ t2, for otherwise EDF could not have been working on a task with deadline greater than t2.

Now, among the tasks released in [t1, t2] with absolute deadlines Di ≤ t2, define:

A = those tasks which meet their deadlines under EDF;
B = those tasks which miss their deadlines under EDF.

By the definition of t2, B is non-empty. All tasks in A meet their deadlines both in Σ and in EDF.

Case 1: The processor is continuously busy over [t1, t2] in the EDF schedule.

For the subset A, all deadlines are met in both schedules, so the execution time spent on A by t2 is the same in both: E_EDF(A) = E_Σ(A).

But for the subset B, at least one task is unfinished at t2 under EDF, whereas Σ completes all of them by t2; hence E_EDF(B) < E_Σ(B).

Since EDF kept the processor continuously busy, E_EDF(A) + E_EDF(B) = t2 − t1, and therefore E_Σ(A) + E_Σ(B) > t2 − t1. But all this work is released at or after t1 and due by t2, so Σ would have required more time than t2 − t1 for execution.

This is a contradiction, which proves that EDF is optimal in the sense that if EDF fails, so does any other algorithm Σ.

Case 2: The processor is idle over some part of (t1, t2) in the EDF schedule.

Let t3 be the last instant at which the processor was idle; by definition t3 < t2.

The processor has executed all tasks released prior to t3 (otherwise, how could it be idle?).

So the argument of Case 1 now holds for (t3, t2], and the same contradiction can be shown.
