
Classical scheduling algorithms
for periodic systems

Peter Marwedel
TU Dortmund, Informatik 12
Germany
© Springer, 2010

December 19, 2012. These slides use Microsoft clip arts; Microsoft copyright restrictions apply.
Structure of this course

[Design-flow diagram: Application Knowledge and a design repository feed the flow]
2: Specification & modeling
3: ES-hardware
4: System software (RTOS, middleware, …)
5: Evaluation & validation (energy, cost, performance, …)
6: Application mapping
7: Optimization
8: Test

Numbers denote the sequence of chapters.

© P. Marwedel, Informatik 12, 2012
Classes of mapping algorithms
considered in this course

• Classical scheduling algorithms:
  mostly for independent tasks & ignoring communication, mostly for mono- and homogeneous multiprocessors
• Dependent tasks as considered in architectural synthesis:
  initially designed in a different context, but applicable
• Hardware/software partitioning:
  dependent tasks, heterogeneous systems, focus on resource assignment
• Design space exploration using evolutionary algorithms:
  heterogeneous systems, incl. communication modeling
Periodic scheduling

[Timeline of two periodic tasks T1 and T2]

Each execution instance of a task is called a job.

• The notion of optimality for aperiodic scheduling does not make sense for periodic scheduling.
• For periodic scheduling, the best that we can do is to design an algorithm which will always find a schedule if one exists.
• A scheduler is defined to be optimal iff it will find a schedule if one exists.

Periodic scheduling: Scheduling with
no precedence constraints

Let {Ti} be a set of tasks. Let:

• pi be the period of task Ti,
• ci be the execution time of Ti,
• di be the deadline interval, that is, the time between Ti becoming available and the time until which Ti has to finish execution,
• li be the laxity or slack, defined as li = di − ci,
• fi be the finishing time.

[Timeline: within each period pi, the deadline interval di consists of the execution time ci followed by the laxity li.]

Average utilization: important charac-
terization of scheduling problems:

Average utilization:

  µ = Σ_{i=1..n} ci / pi

Necessary condition for schedulability (with m = number of processors):

  µ ≤ m
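These two formulas can be written down directly; a minimal Python sketch (the function names are my own, not from the slides):

```python
# Average utilization mu = sum(c_i / p_i) and the necessary (but not
# sufficient) schedulability condition mu <= m from the slide.

def average_utilization(tasks):
    """tasks: list of (c_i, p_i) pairs."""
    return sum(c / p for c, p in tasks)

def may_be_schedulable(tasks, m=1):
    """Necessary condition with m processors: mu <= m."""
    return average_utilization(tasks) <= m

tasks = [(2, 5), (4, 7)]               # (execution time, period)
print(average_utilization(tasks))      # 2/5 + 4/7 = 34/35
print(may_be_schedulable(tasks, m=1))
```

Note that µ ≤ m only rules task sets out; a set passing the test may still be unschedulable.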
Independent tasks:
Rate monotonic (RM) scheduling
Most well-known technique for scheduling independent
periodic tasks [Liu, 1973].
Assumptions:
• All tasks that have hard deadlines are periodic.
• All tasks are independent.
• di = pi, for all tasks.
• ci is constant and is known for all tasks.
• The time required for context switching is negligible.
• For a single processor and for n tasks, the following equation holds for the average utilization µ:

  µ = Σ_{i=1..n} ci / pi ≤ n·(2^(1/n) − 1)
Rate monotonic (RM) scheduling
- The policy -

• RM policy: The priority of a task is a monotonically decreasing function of its period. At any time, the highest-priority task among all those that are ready for execution is allocated the processor.

Theorem: If all RM assumptions are met, schedulability is guaranteed.

Maximum utilization for guaranteed
schedulability

Maximum utilization as a function of the number of tasks:

  µ = Σ_{i=1..n} ci / pi ≤ n·(2^(1/n) − 1)

  lim_{n→∞} n·(2^(1/n) − 1) = ln(2)
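The bound and its limit are easy to tabulate; a small sketch (`rm_bound` is my own name):

```python
import math

def rm_bound(n):
    """Liu/Layland least upper utilization bound for n tasks."""
    return n * (2 ** (1 / n) - 1)

for n in (1, 2, 3, 10):
    print(n, round(rm_bound(n), 4))

# the bound falls monotonically toward ln(2) ~ 0.6931 as n grows:
print(round(math.log(2), 4))
```

For n = 1 the bound is exactly 1.0; for n = 2 it is 2·(√2 − 1) ≈ 0.8284, the value proved later in this deck.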

Example of RM-generated schedule

• T1 preempts T2 and T3.
• T2 and T3 do not preempt each other.

Failing RMS

• Task 1: period 5, execution time 3
• Task 2: period 8, execution time 3

µ = 3/5 + 3/8 = 24/40 + 15/40 = 39/40 = 0.975
2·(2^(1/2) − 1) ≈ 0.828 < µ, so the RM bound is violated.

Case of failing RM scheduling

• Task 1: period 5, execution time 2
• Task 2: period 7, execution time 4

µ = 2/5 + 4/7 = 14/35 + 20/35 = 34/35 ≈ 0.97
2·(2^(1/2) − 1) ≈ 0.828 < µ

[Timeline: Task 2 misses its deadline; the missing computations are scheduled in the next period.]
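This failure can be reproduced with a small discrete-time simulation of preemptive fixed-priority scheduling (a sketch, not from the slides; for simplicity, unfinished work is dropped at a miss, whereas the slide carries it into the next period):

```python
def simulate_rm(tasks, horizon):
    """Preemptive fixed-priority simulation; tasks = (c, p) pairs,
    sorted by RM priority (shortest period first), deadlines = periods.
    Returns (task_index, time) pairs for every missed deadline."""
    remaining = [0] * len(tasks)       # unfinished work per task
    misses = []
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:             # a new job is released
                if remaining[i] > 0:   # previous job not done: deadline miss
                    misses.append((i, t))
                remaining[i] = c
        for i in range(len(tasks)):    # run highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return misses

# Task 1: (c=2, p=5), Task 2: (c=4, p=7) -- the example above
print(simulate_rm([(2, 5), (4, 7)], horizon=35))   # -> [(1, 7)]
```

Task 2 gets only three time units before t = 7 (T1 occupies t = 0, 1 and t = 5, 6), so its first job misses its deadline at t = 7, exactly as in the figure.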
Intuitively: Why does RM fail ?

• Switching to T1 too early, despite the early deadline of T2.

[Timeline: T2 is preempted by T1 and cannot complete before its deadline.]

• No problem if p2 = m·p1, m ∈ ℕ:

[Timeline: T2 fits into the idle times left by T1.]
Critical instants

Definition: A critical instant of a task is the time at which the release of a task will produce the largest response time.

Lemma: For any task, the critical instant occurs if that task is simultaneously released with all higher-priority tasks.

Proof: Let T = {T1, …, Tn} be periodic tasks with ∀i: pi ≤ pi+1.

Source: G. Buttazzo, Hard Real-Time Computing Systems, Kluwer, 2002

Critical instants (1)

Response time of Tn is delayed by tasks Ti of higher priority:

[Timeline: Tn delayed by two instances of Ti; response time cn + 2·ci]

Delay may increase if Ti starts earlier:

[Timeline: Tn delayed by three instances of Ti; response time cn + 3·ci]

Maximum delay is achieved if Tn and Ti start simultaneously.
Critical instants (2)

Repeating the argument for all i = 1, …, n−1:

• The worst-case response time of a task occurs when it is released simultaneously with all higher-priority tasks. q.e.d.

• Schedulability is checked at the critical instants.
• If all tasks of a task set are schedulable at their critical instants, they are schedulable at all release times.
• This observation helps in designing examples.
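One standard way to perform that check (not spelled out on the slide) is response-time analysis: starting from the critical instant, iterate Ri = ci + Σ_{j<i} ⌈Ri/pj⌉·cj to a fixed point and compare the result with the deadline. A hedged Python sketch:

```python
import math

def response_time(i, tasks):
    """Worst-case response time of task i (tasks = (c, p) pairs sorted by
    priority, highest first), computed from the critical instant by
    fixed-point iteration. Returns None if the deadline (= period) is
    exceeded."""
    c_i, p_i = tasks[i]
    r = c_i
    while True:
        # interference from all higher-priority tasks released in [0, r)
        r_next = c_i + sum(math.ceil(r / p) * c for c, p in tasks[:i])
        if r_next == r:
            return r          # fixed point reached
        if r_next > p_i:
            return None       # unschedulable at the critical instant
        r = r_next

tasks = [(2, 5), (4, 7)]      # the failing RM example
print([response_time(i, tasks) for i in range(len(tasks))])
```

For the failing example, Task 1 has response time 2, while the iteration for Task 2 exceeds its period 7, confirming the miss seen in the simulation.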

The case ∀i: pi+1 = mi·pi

Lemma*: If each task period is a multiple of the period of the next higher-priority task, then schedulability is also guaranteed if µ ≤ 1.

Proof: Assume the schedule of Ti is given. Incorporate Ti+1: Ti+1 fills the idle times of Ti; Ti+1 completes in time if µ ≤ 1.

[Timeline: Ti; Ti+1 placed in the idle times of Ti; the combined schedule T'i+1 is used as the higher-priority task at the next iteration.]
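The lemma can be checked empirically. Below, the periods 2, 4, 8 are pairwise multiples and µ = 1/2 + 1/4 + 2/8 = 1, yet a preemptive RM simulation over two hyper-periods finds no deadline miss (a sketch with task values invented for illustration):

```python
import math

def rm_misses(tasks):
    """Preemptive RM simulation over two hyper-periods (lcm of all periods);
    tasks = (c, p) sorted by period, deadlines = periods.
    Returns the times at which a job is still unfinished at its deadline."""
    horizon = 2 * math.lcm(*[p for _, p in tasks])
    remaining = [0] * len(tasks)
    misses = []
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:             # release; check the previous job
                if remaining[i] > 0:
                    misses.append(t)
                remaining[i] = c
        for i in range(len(tasks)):    # run highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return misses

harmonic = [(1, 2), (1, 4), (2, 8)]   # mu = 1/2 + 1/4 + 2/8 = 1
print(rm_misses(harmonic))            # -> [] : no deadline misses
```

By contrast, the non-harmonic set (2, 5), (4, 7) with the lower utilization 34/35 does miss deadlines under RM.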
* wrong in the book of 2003
More in-depth:
Proof of the RM theorem

Let T = {T1, T2} with p1 < p2.
Assume RM is not used ⇒ prio(T2) is highest:

[Timeline: T2 executes first in each of its periods; T1 must fit into the remainder of its own period p1.]

Schedule is feasible if c1 + c2 ≤ p1   (1)

Define F = ⌊p2/p1⌋: the number of periods of T1 fully contained in T2.
Case 1: c1 ≤ p2 − F·p1

Assume RM is used ⇒ prio(T1) is highest:

Case 1*: c1 ≤ p2 − F·p1
(c1 small enough to be finished before the 2nd instance of T2)

[Timeline: F full periods of T1 within p2; markers at F·p1 and p2.]

Schedulable if (F+1)·c1 + c2 ≤ p2   (2)

* Typos in [Buttazzo 2002]: < and ≤ mixed up
Proof of the RM theorem (3)

Not RM: schedule is feasible if c1 + c2 ≤ p1   (1)
RM: schedulable if (F+1)·c1 + c2 ≤ p2   (2)

From (1): F·c1 + F·c2 ≤ F·p1
Since F ≥ 1: F·c1 + c2 ≤ F·c1 + F·c2 ≤ F·p1
Adding c1: (F+1)·c1 + c2 ≤ F·p1 + c1
Since c1 ≤ p2 − F·p1: (F+1)·c1 + c2 ≤ F·p1 + c1 ≤ p2
Hence: if (1) holds, (2) holds as well.

For case 1: Given tasks T1 and T2 with p1 < p2, if the schedule is feasible by an arbitrary (but fixed) priority assignment, it is also feasible by RM.
Case 2: c1 > p2 − F·p1
(c1 large enough not to finish before the 2nd instance of T2)

[Timeline: markers at F·p1 and p2.]

Schedulable if F·c1 + c2 ≤ F·p1   (3)
c1 + c2 ≤ p1   (1)
Multiplying (1) by F yields F·c1 + F·c2 ≤ F·p1
Since F ≥ 1: F·c1 + c2 ≤ F·c1 + F·c2 ≤ F·p1
⇒ Same statement as for case 1.
Calculation of the least upper utilization bound

Let T = {T1, T2} with p1 < p2.

Proof procedure: compute the least upper bound Ulub as follows:
• Assign priorities according to RM
• Compute the upper bound Uub by setting computation times to fully utilize the processor
• Minimize the upper bound with respect to the other task parameters

As before: F = ⌊p2/p1⌋; c2 is adjusted to fully utilize the processor.

Case 1: c1 ≤ p2 − F·p1

[Timeline: markers at F·p1 and p2.]

Largest possible value of c2 is c2 = p2 − c1·(F+1)
Corresponding upper bound is

  Uub = c1/p1 + c2/p2 = c1/p1 + (p2 − c1·(F+1))/p2
      = 1 + c1/p1 − c1·(F+1)/p2 = 1 + (c1/p2)·{ p2/p1 − (F+1) }

{…} is < 0 ⇒ Uub monotonically decreasing in c1
Minimum occurs for c1 = p2 − F·p1

Case 2: c1 ≥ p2 − F·p1

[Timeline: markers at F·p1 and p2.]

Largest possible value of c2 is c2 = (p1 − c1)·F
Corresponding upper bound is:

  Uub = c1/p1 + c2/p2 = c1/p1 + (p1 − c1)·F/p2
      = F·p1/p2 + c1/p1 − F·c1/p2 = F·p1/p2 + (c1/p2)·{ p2/p1 − F }

{…} is ≥ 0 ⇒ Uub monotonically increasing in c1 (independent of c1 if {…} = 0)
Minimum occurs for c1 = p2 − F·p1, as before.
Utilization as a function of G = p2/p1 − F

For the minimum value of c1 = p2 − F·p1:

  Uub = F·p1/p2 + ((p2 − F·p1)/p2)·(p2/p1 − F)

Let G = p2/p1 − F; then p2 − F·p1 = G·p1 and p1/p2 = 1/(F + G), so

  Uub = (F + G²)/(F + G) = (F + G − (G − G²))/(F + G) = 1 − G·(1 − G)/(F + G)

Since 0 ≤ G < 1: G·(1 − G) ≥ 0 ⇒ Uub increasing in F ⇒
Minimum of Uub for min(F): F = 1 ⇒

  Uub = (1 + G²)/(1 + G)
Proving the RM theorem for n=2 (end)

  Uub = (1 + G²)/(1 + G)

Using the derivative to find the minimum of Uub:

  dUub/dG = (2G·(1 + G) − (1 + G²)) / (1 + G)² = (G² + 2G − 1) / (1 + G)² = 0

  G1 = −1 + √2;  G2 = −1 − √2

Considering only G1, since 0 ≤ G < 1:

  Ulub = (1 + (√2 − 1)²)/(1 + (√2 − 1)) = (4 − 2·√2)/√2 = 2·(√2 − 1) = 2·(2^(1/2) − 1) ≈ 0.83
This proves the RM theorem for the special case of n=2
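The closed-form result can be cross-checked numerically: minimizing Uub(G) = (1 + G²)/(1 + G) over 0 ≤ G < 1 on a fine grid should land at G = √2 − 1 with value 2·(√2 − 1):

```python
import math

def u_ub(g):
    """Upper utilization bound for n=2 as a function of G."""
    return (1 + g * g) / (1 + g)

# brute-force search over a fine grid of G in [0, 1)
grid = [k / 10**5 for k in range(10**5)]
g_min = min(grid, key=u_ub)

print(round(g_min, 4), round(u_ub(g_min), 4))
# analytic values: G = sqrt(2) - 1 ~ 0.4142, U_lub = 2*(sqrt(2) - 1) ~ 0.8284
```

The grid minimum agrees with the derivative-based derivation above to within the grid spacing.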

Properties of RM scheduling

• RM scheduling is based on static priorities. This allows RM scheduling to be used in an OS with static priorities, such as Windows NT.
• No idle capacity is needed if ∀i: pi+1 = mi·pi, i.e. if the period of each task is a multiple of the period of the next higher-priority task; schedulability is then also guaranteed if µ ≤ 1.
• A huge number of variations of RM scheduling exists.
• In the context of RM scheduling, many formal proofs exist.

EDF

• EDF can also be applied to periodic scheduling.
• EDF is optimal for every hyper-period (= least common multiple of all periods)
  ⇒ optimal for periodic scheduling.
• EDF must be able to schedule the example in which RMS failed.
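This can be confirmed by simulation: a discrete-time preemptive EDF scheduler handles the task set (c=2, p=5), (c=4, p=7) that RM could not (a sketch; the helper names are my own):

```python
import math

def edf_misses(tasks):
    """Discrete-time preemptive EDF; tasks = (c, p), deadline = period.
    Simulates two hyper-periods and returns all deadline-miss times."""
    horizon = 2 * math.lcm(*[p for _, p in tasks])
    remaining = [0] * len(tasks)
    deadline = [0] * len(tasks)
    misses = []
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:                 # new job released
                if remaining[i] > 0:       # previous job unfinished: miss
                    misses.append(t)
                remaining[i], deadline[i] = c, t + p
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:                          # earliest absolute deadline first
            remaining[min(ready, key=lambda i: deadline[i])] -= 1
    return misses

print(edf_misses([(2, 5), (4, 7)]))        # -> [] : schedulable under EDF
```

With µ = 34/35 ≤ 1, EDF meets every deadline over the hyper-period of 35, while the RM simulation of the same set misses at t = 7.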

Comparison EDF/RMS

RMS: [Gantt chart]

EDF: [Gantt chart]

• Under EDF, T2 is not preempted, due to its earlier deadline.

EDF: Properties

• EDF requires dynamic priorities
  ⇒ EDF cannot be used with an operating system providing only static priorities.
• However, a paper by Margull and Slomka at DATE 2008 demonstrates how an OS with static priorities can be extended with a plug-in providing EDF scheduling (key idea: delay tasks becoming ready if they shouldn't be executed under EDF scheduling).

Comparison RMS/EDF

                                      RMS                    EDF
Priorities                            Static                 Dynamic
Works with OS with fixed priorities   Yes                    No*
Uses full computational power         No, just up to         Yes
of processor                          µ = n(2^(1/n) − 1)
Possible to exploit full              No                     Yes
computational power of processor
without provisioning for slack

* Unless the plug-in by Slomka et al. is added.


Sporadic tasks

• If sporadic tasks were connected to interrupts, the execution time of other tasks would become very unpredictable.
  ⇒ Introduction of a sporadic task server, periodically checking for ready sporadic tasks;
  ⇒ sporadic tasks are essentially turned into periodic tasks.

Dependent tasks

The problem of deciding whether or not a schedule exists for a set of dependent tasks and a given deadline is NP-complete in general [Garey/Johnson].

Strategies:
1. Add resources, so that scheduling becomes easier
2. Split the problem into a static and a dynamic part so that only a minimum of decisions needs to be taken at run-time
3. Use scheduling algorithms from high-level synthesis
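Strategy 3 can be illustrated with a minimal list scheduler, a workhorse of high-level synthesis (a sketch; the function, the diamond-shaped task graph, and all task values are invented for illustration):

```python
# Minimal list scheduling of dependent tasks on m processors: at each time
# step, start any ready task (all predecessors finished) on a free
# processor. Assumes an acyclic task graph.

def list_schedule(exec_time, preds, m=2):
    """exec_time: {task: c}, preds: {task: set of predecessor tasks}.
    Returns {task: start_time}."""
    finish = {}
    busy_until = [0] * m            # when each processor becomes free
    start = {}
    pending = set(exec_time)
    t = 0
    while pending:
        for task in sorted(pending):
            ready = all(p in finish and finish[p] <= t
                        for p in preds.get(task, ()))
            if ready:
                proc = min(range(m), key=lambda i: busy_until[i])
                if busy_until[proc] <= t:      # a processor is free now
                    start[task] = t
                    finish[task] = t + exec_time[task]
                    busy_until[proc] = finish[task]
                    pending.discard(task)
        t += 1                       # advance time by one unit
    return start

# diamond-shaped dependency graph: a -> {b, c} -> d
times = {"a": 1, "b": 2, "c": 2, "d": 1}
preds = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(list_schedule(times, preds, m=2))
```

With two processors, b and c run in parallel after a, and d starts once both have finished; this is the "static part" of strategy 2 computed entirely off-line.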

Summary

• Periodic scheduling
  • Rate monotonic scheduling
  • EDF
  • Dependent and sporadic tasks (briefly)
