Lieu Sol

Q.3.1: Because sporadic jobs may have varying release times and execution times, the periodic task model may be too inaccurate and can lead to undue underutilization of the processor, even when the inter-release times of jobs are bounded from below and their execution times are bounded from above. As an example, suppose we have a stream of sporadic jobs whose inter-release times are uniformly distributed from 9 to 11 and whose execution times are uniformly distributed from 1 to 3.
a. What are the parameters of the periodic task if we were to use such a task to model the stream?
Sol: For the periodic task model we model a task using the lower bound on its period and the upper bound on its execution time (the worst case). In this case, the period is p = 9 and the execution time is e = 3.
b. Compare the utilization of the periodic task in part (a) with the average utilization of the sporadic job stream.
Sol: The utilization of a periodic task is its execution time divided by its period. In this case:
Uperiodic = eperiodic/pperiodic = 3/9 = 0.3333
Modeling the stream as sporadic jobs, the execution time is a random variable E uniformly distributed from 1 to 3 time units, and the inter-release time is a random variable P uniformly distributed from 9 to 11. Utilization is a random variable that is a function of E and P; in particular, Usporadic = E/P. To find the average value E[U], we integrate u·fU(u), where fU(u) is the probability density function of U, from -infinity to infinity.
We can use the rules of probability to determine fU(u) from fE(e) and fP(p). In this case, after a bit of math we find:
fU(u) =
  0                      for u < 1/11
  121/8 − 1/(8u^2)       for 1/11 ≤ u < 1/9
  5                      for 1/9 ≤ u < 3/11
  9/(8u^2) − 81/8        for 3/11 ≤ u < 1/3
  0                      for u ≥ 1/3

After integrating we find Usporadic = E[U] ≈ 0.20.
The utilization under the periodic task model is about 0.13 higher (13 percentage points) than the average utilization of the sporadic stream.
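As a sanity check on E[U] (a sketch, not part of the original solution), here is a short Monte Carlo estimate of the average utilization; analytically, E[U] = E[E]·E[1/P] = 2·(ln 11 − ln 9)/2 = ln(11/9) ≈ 0.2007:

```python
import math
import random

random.seed(1)
n = 200_000
# U = E/P with E ~ Uniform(1, 3) and P ~ Uniform(9, 11), independent
avg_u = sum(random.uniform(1, 3) / random.uniform(9, 11) for _ in range(n)) / n
print(round(avg_u, 3))             # close to ln(11/9)
print(round(math.log(11 / 9), 4))  # 0.2007
```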

Q.3.2: Consider the real-time program described by the pseudocode below.
Names of jobs are in italic.
At 9 AM, start: have breakfast and go to office;
At 10 AM,
if there is class,
teach;

Else, help students;


When teach or help is done, eat_lunch;
Until 2 PM, sleep;
If there is a seminar,
If topic is interesting,
listen;
Else, read;
Else
write in office;
When seminar is over, attend social hour;
discuss;
jog;
eat_dinner;
work a little more;
end_the_day;
a) Draw a task graph to capture the dependencies among jobs.
Sol: The book was a bit vague on some points, so there will be much flexibility in grading here. I've seen these drawn a number of different ways in different papers. The important part was to capture the timing and dependencies and make clear which dependencies were conditional.
The start times of start, teach, and help are given, so showing the feasible interval for them is important. The only timing constraint is that sleep has to end at 2 PM, so the deadline is looser than one would expect. No timing constraints are given for eat_lunch, so it could be left out, or (10 AM, 2 PM) would be reasonable. The deadline for sleeping is given, but no release time is given for sleep or eat_lunch, so 10 AM is the latest time we are bounded by. There is no mention of any time after sleep, so we have no information on what the deadlines of any other tasks should be, unless we take "end of day" to be the literal end of day at midnight.

b) Use as many precedence graphs as needed to represent all the possible paths of
the program.
Sol: Classical precedence graphs don't have conditional branches, so we have to draw each path separately. Also, a precedence graph carries no timing information.

Q.3.3:

job_1 | job_2 denotes a pipe: The result produced by job_1 is incrementally consumed
by job_2. (As an example, suppose that job_2 reads and displays one character at a time as each
handwritten character is recognized and placed in a buffer by job_1.) Draw a precedence
constraint graph to represent this producer-consumer relation between the jobs.

Sol: To show the pipeline relationship, job_1 is broken into smaller jobs, one per character, with each job depending on the preceding one. Likewise, job_2 is broken up, but in addition to depending on the previous character, each job in job_2 depends on the corresponding character job in job_1.
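The producer-consumer precedence constraints can be written down mechanically. A small sketch (the sub-job names and n = 4 characters are assumptions for illustration):

```python
# Precedence edges for job_1 | job_2, with job_1 and job_2 each split
# into one sub-job per character (J1,k produces character k; J2,k consumes it).
n = 4
edges = []
for k in range(1, n):
    edges.append((f"J1,{k}", f"J1,{k+1}"))  # job_1's sub-jobs run in order
    edges.append((f"J2,{k}", f"J2,{k+1}"))  # job_2's sub-jobs run in order
for k in range(1, n + 1):
    edges.append((f"J1,{k}", f"J2,{k}"))    # character k is produced before consumed
print(len(edges))  # 10
```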

Q.3.4: Draw a task graph to represent the flight control system described in Figure 1-3.
a) Assume the producers and consumers do not explicitly synchronize (i.e., each consumer uses the
latest result generated by each of its producers but does not wait for the completion of the
producer.)
Sol: Producers and consumers do not synchronize, so there are no precedence constraints
between producers and consumers. You may have drawn arrows to show precedence
constraints between each job with the same release time, implied by the program listing. The
whole schedule repeats every 6/180ths = 1/30th of a second.

b) Repeat part (a), assuming that producers and consumers do synchronize.


Sol: The text says the inner loops depend on the outer loops and the avionics tasks, and output depends on the inner loops. If you drew the constraints based on program order, only a few additional arcs need to be drawn, because the program order causes the dependencies to be satisfied.

Q.4.1: The feasible interval of each job in the precedence graph in Figure 4P-1 is given next to its name. The execution times of all jobs are equal to 1.

a) Find the effective release times and deadlines of the jobs in the precedence graph in Figure 4P-1.
Sol:

b) Find an EDF schedule of the jobs.


Sol:

c) A job is said to be at level i if the length of the longest path from the job to jobs that have no successors is i. So, jobs J3, J6 and J9 are at level 0, jobs J2, J5 and J8 are at level 1, and so on. Suppose that the priorities of the jobs are assigned based on their levels: the higher the level, the higher the priority. Find a priority-driven schedule of the jobs in Figure 4P-1 according to this priority assignment.
Sol:

Explanation:
J1 is the only job released at t=0, so it goes first.

At t=1, J2, J4, and J7 have been released. J4 has a level of 3, so it goes first.

At t=2, J4 is done. J7 has the next highest level (2), so it goes next.

At t=3, J7 is done. J3, J5, J8, and J9 are released. J5 has the next highest level (2), so it
runs.

At t=4, J5 is done. Either J2 or J8 could run because both have a level of 1 and both have had their precedence constraints met. At this point J2 has already missed its deadline.

At t=5, either J2 or J8, whichever was run at t=4, is done. The one that was not
previously run gets to run. There are no more level 1 jobs.

At t=6, J3, J6, and J9 are all eligible to run and are all at level 0. They can run in any order according to this scheduling algorithm.

J2 and J3 miss their deadlines. This is not an optimal scheduling algorithm.
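The level of a job, as defined above, is the length of the longest path to a job with no successors, and can be computed by memoized recursion over the precedence graph. A sketch (the example graph here is hypothetical, three chains, not Figure 4P-1 itself):

```python
def levels(succ):
    """succ maps each job to the list of its immediate successors.
    Level of a job = length of the longest path to a job with no successors."""
    memo = {}
    def lvl(j):
        if j not in memo:
            memo[j] = 1 + max((lvl(s) for s in succ.get(j, [])), default=-1)
        return memo[j]
    return {j: lvl(j) for j in succ}

# Hypothetical chains J1 -> J2 -> J3, J4 -> J5 -> J6, J7 -> J8 -> J9
succ = {"J1": ["J2"], "J2": ["J3"], "J3": [],
        "J4": ["J5"], "J5": ["J6"], "J6": [],
        "J7": ["J8"], "J8": ["J9"], "J9": []}
print(levels(succ))  # J3/J6/J9 at level 0, J2/J5/J8 at level 1, J1/J4/J7 at level 2
```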

Q.4.2: The execution times of the jobs in the precedence graph in Figure 4P-2 are all equal to 1, and their release times are identical. Give a nonpreemptive optimal schedule that minimizes the completion time of all jobs on three processors. Describe briefly the algorithm you used to find the schedule.

Sol: The execution times of all jobs are equal to 1 and the release times are identical; a nonpreemptive optimal solution:

Q.4.4: Consider a system that has five periodic tasks, A, B, C, D, and E, and three processors
P1, P2, P3. The periods of A, B, and C are 2 and their execution times are equal to 1. The periods of
D and E are 8 and their execution times are 6. The phase of every task is 0, that is, the first job of
the task is released at time 0. The relative deadline of every task is equal to its period.
a) Show that if the tasks are scheduled dynamically on three processors according to the LST
algorithm, some jobs in the system cannot meet their deadlines.
Sol:
At t=0, A, B, and C have 1 time unit of slack. D and E each have a slack of 2, so A, B, and C run
first.
At t=1, A, B, and C are done running and the slack of D and E is 1, so D and E both get to run.

At t=2, A, B, and C are released again. Their slack is 1, as are the slacks of D and E. Assuming
that once a job starts running on a processor, it cannot change processors, D and E will run
round-robin on two processors with two of A, B, and C with the third running alone.
By time t=4, A, B, and C will have completed, and D and E will have completed one time unit of
work.
At t=4, new jobs in A, B, and C are released with a slack of 1, but D and E have 0 slack. D and E
run on two processors and A, B, and C run round-robin on the third.
At t=5.5, the slack of A, B, and C has fallen to 0. At that point all 5 tasks have a slack of 0 (i.e., they require the processor time from now until their deadline), but there are five jobs and only three processors. At least one job will finish past its deadline.
If jobs are allowed to change processors once they start, things are a bit more complicated. The
five jobs run round-robin on three processors.
At t=1.6667, A, B, and C finish. D and E continue to run until t=2. Every 2 time units D and E
only execute for 1.3333 time units, so at t=6 they have not completed.
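The t = 5.5 claim can be checked directly. Under round-robin at rate 1/3, the slack of an A/B/C job released at t = 4 is (6 − t) minus its remaining execution time (a small sketch):

```python
def slack(t, release=4.0, deadline=6.0, exec_time=1.0, rate=1/3):
    """Slack of a job that has been executing at the given rate since its release."""
    remaining = exec_time - (t - release) * rate
    return (deadline - t) - remaining

print(round(slack(4.0), 6))       # 1.0 at release
print(abs(round(slack(5.5), 6)))  # 0.0 -- at this point all five tasks have zero slack
```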
b) Find a feasible schedule of the five tasks on three processors.
Sol:

Q.4.5: A system contains nine nonpreemptable jobs named Ji, for i = 1, 2, ..., 9. Their execution times are 3, 2, 2, 2, 2, 4, 4, 4, and 9, respectively, their release times are equal to 0, and their deadlines are 12. J1 is the immediate predecessor of J9, and J4 is the immediate predecessor of J5, J6, J7, and J8. There are no other precedence constraints. For all jobs, Ji has a higher priority than Jk if i < k.
a) Draw the precedence graph of the jobs.
Sol:

b) Can the jobs meet their deadlines if they are scheduled on three processors? Explain your
answer.
Sol: All jobs meet their deadline.
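This can be verified with a small nonpreemptive priority list scheduler (a sketch, not the book's algorithm): whenever a processor becomes free, it takes the highest-priority job (lowest index) whose predecessors have all finished.

```python
def list_schedule(exec_time, preds, m=3):
    """Nonpreemptive priority list scheduling on m processors.
    exec_time: {job: time}; preds: {job: set of immediate predecessors}.
    Lower job index = higher priority. Returns {job: finish_time}."""
    finish = {}
    proc_free = [0.0] * m
    pending = set(exec_time)
    while pending:
        # earliest start time of each job whose predecessors have all finished
        ready = {j: max((finish[p] for p in preds.get(j, ())), default=0.0)
                 for j in pending if preds.get(j, set()) <= set(finish)}
        i = min(range(m), key=lambda k: proc_free[k])  # earliest-free processor
        t = proc_free[i]
        avail = [j for j, r in ready.items() if r <= t]
        if not avail:
            proc_free[i] = min(ready.values())  # idle until some job becomes ready
            continue
        j = min(avail)                          # highest-priority ready job
        finish[j] = t + exec_time[j]
        proc_free[i] = finish[j]
        pending.remove(j)
    return finish

exec_time = {1: 3, 2: 2, 3: 2, 4: 2, 5: 2, 6: 4, 7: 4, 8: 4, 9: 9}
preds = {9: {1}, 5: {4}, 6: {4}, 7: {4}, 8: {4}}
finish = list_schedule(exec_time, preds)
print(max(finish.values()))  # 12.0 -- every job meets the common deadline of 12
```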

c) Can the jobs meet their deadlines if we make them preemptable and schedule them
preemptively? Explain your answer.
Sol: Job J9 does not meet its deadline.

d) Can the jobs meet their deadlines if they are scheduled nonpreemptively on four processors?
Explain your answer.
Sol: Job J9 does not meet its deadline.

e) Suppose that due to an improvement of the three processors, the execution time of every job is
reduced by 1. Can the jobs meet their deadlines? Explain your answer.
Sol: Job J9 does not meet its deadline.

Q.4.7: Consider the set of jobs in Figure 4-3. Suppose that the jobs have identical execution times. What maximum execution time can the jobs have and still be feasibly scheduled on one processor? Explain your answer.

Sol: The jobs with their effective release times and deadlines are:
J1 (2, 8), J2 (0, 7), J3 (2, 8), J4 (4, 9), J5 (2, 8), J6 (4, 20), J7 (6, 21)

Four jobs (J1, J3, J5, and J4) must fit entirely within the interval from 2 to 9, which has length 7.

Hence, the maximum execution time of each job is 7/4 = 1.75.
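This bound can be checked with a small preemptive EDF simulation (a sketch; the job parameters are taken from the list above, and scheduling decisions are made only at release and completion times):

```python
def edf_feasible(jobs):
    """jobs: list of (release, deadline, exec_time); preemptive EDF, one processor."""
    releases = sorted({r for r, _, _ in jobs})
    remaining = {i: e for i, (_, _, e) in enumerate(jobs)}
    t = 0.0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= t]
        if not ready:
            t = min(jobs[i][0] for i in remaining)   # idle until next release
            continue
        i = min(ready, key=lambda j: jobs[j][1])     # earliest deadline first
        next_r = min((r for r in releases if r > t), default=float("inf"))
        run = min(remaining[i], next_r - t)          # run until done or next release
        t += run
        remaining[i] -= run
        if remaining[i] <= 1e-12:
            if t > jobs[i][1] + 1e-12:
                return False                          # deadline miss
            del remaining[i]
    return True

def jobs_with(e):
    windows = [(2, 8), (0, 7), (2, 8), (4, 9), (2, 8), (4, 20), (6, 21)]
    return [(r, d, e) for r, d in windows]

print(edf_feasible(jobs_with(1.75)))  # True
print(edf_feasible(jobs_with(1.80)))  # False -- J4 would finish after t = 9
```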

Q.5.1: Each of the following systems of periodic tasks is scheduled and executed according to a cyclic schedule. For each system, choose an appropriate frame size. Preemptions are allowed, but the number of preemptions should be kept small.
a) (6, 1), (10, 2), and (18, 2)
Sol: The frame size has to meet all three criteria discussed in the chapter.
1. f ≥ max(ei), 1 ≤ i ≤ n: f ≥ 2
2. f divides at least one of the periods evenly: f ∈ {2, 3, 5, 6, 9, 10, 18}
3. 2f − gcd(f, pi) ≤ Di, 1 ≤ i ≤ n
f = 2:
2(2) − gcd(2, 6) = 4 − 2 = 2 ≤ 6
2(2) − gcd(2, 10) = 4 − 2 = 2 ≤ 10
2(2) − gcd(2, 18) = 4 − 2 = 2 ≤ 18
f = 3:
2(3) − gcd(3, 6) = 6 − 3 = 3 ≤ 6
2(3) − gcd(3, 10) = 6 − 1 = 5 ≤ 10
2(3) − gcd(3, 18) = 6 − 3 = 3 ≤ 18
f = 5: 2(5) − gcd(5, 6) = 10 − 1 = 9 > 6
f = 6:
2(6) − gcd(6, 6) = 12 − 6 = 6 ≤ 6
2(6) − gcd(6, 10) = 12 − 2 = 10 ≤ 10
2(6) − gcd(6, 18) = 12 − 6 = 6 ≤ 18
f = 9: 2(9) − gcd(9, 6) = 18 − 3 = 15 > 6
f = 10: 2(10) − gcd(10, 6) = 20 − 2 = 18 > 6
f = 18: 2(18) − gcd(18, 6) = 36 − 6 = 30 > 6
Frame sizes f = 2, 3, and 6 satisfy all three criteria; f = 6 is a good choice, since it gives the fewest scheduling points.
b) (8, 1), (15, 3), (20, 4), and (22, 6)
Sol: The frame size has to meet all three criteria discussed in the chapter.
1. f ≥ max(ei), 1 ≤ i ≤ n: f ≥ 6
2. f divides at least one of the periods evenly: f ∈ {8, 10, 11, 15, 20, 22} (after dropping candidates smaller than 6)
3. 2f − gcd(f, pi) ≤ Di, 1 ≤ i ≤ n
f = 8:
2(8) − gcd(8, 8) = 16 − 8 = 8 ≤ 8
2(8) − gcd(8, 15) = 16 − 1 = 15 ≤ 15
2(8) − gcd(8, 20) = 16 − 4 = 12 ≤ 20
2(8) − gcd(8, 22) = 16 − 2 = 14 ≤ 22
f = 10: 2(10) − gcd(10, 8) = 20 − 2 = 18 > 8
f = 11: 2(11) − gcd(11, 8) = 22 − 1 = 21 > 8
f = 15: 2(15) − gcd(15, 8) = 30 − 1 = 29 > 8
f = 20: 2(20) − gcd(20, 8) = 40 − 4 = 36 > 8
f = 22: 2(22) − gcd(22, 8) = 44 − 2 = 42 > 8
The only frame size that works for this set of tasks is f = 8.
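The three criteria translate directly into a small checker (a sketch for task sets with integer periods and Di = pi):

```python
from math import gcd

def valid_frame_sizes(tasks):
    """tasks: list of (p, e) with integer periods and relative deadline D = p.
    Returns all frame sizes f meeting the three cyclic-scheduling constraints."""
    e_max = max(e for _, e in tasks)
    # criterion 2: f divides at least one period
    candidates = {f for p, _ in tasks for f in range(1, p + 1) if p % f == 0}
    return sorted(f for f in candidates
                  if f >= e_max                                       # criterion 1
                  and all(2 * f - gcd(f, p) <= p for p, _ in tasks))  # criterion 3

print(valid_frame_sizes([(8, 1), (15, 3), (20, 4), (22, 6)]))  # [8]
```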

Clock-driven Cyclic Scheduler

- Since the parameters of all jobs with hard deadlines are known, we can construct a static cyclic schedule in advance.
- Processor time allocated to a job equals its maximum execution time.
- The scheduler dispatches jobs according to the static schedule, repeating each hyperperiod.
- The static schedule guarantees that each job completes by its deadline: if no job overruns, all deadlines are met.
- The schedule is calculated off-line, so we can use complex algorithms; the run-time of the scheduling algorithm is irrelevant.
- We can search for a schedule that optimizes some characteristic of the system, e.g. a schedule where the idle periods are nearly periodic, accommodating aperiodic jobs.

Structured Cyclic Schedules

- Arbitrary table-driven cyclic schedules are flexible, but inefficient: they rely on accurate timer interrupts based on the execution times of tasks, and have high scheduling overhead.
- Easier to implement if structure is imposed: make scheduling decisions at periodic intervals (frames) of length f; execute a fixed list of jobs within each frame, disallowing preemption except at frame boundaries; require the phase of each periodic task to be a non-negative integer multiple of the frame size.
- The first job of every task is released at the beginning of a frame, i.e. at time kf where k is a non-negative integer.
- This gives two benefits: the scheduler can easily check for overruns and missed deadlines at the end of each frame, and it can use a periodic clock interrupt rather than a programmable timer.

Q5.1: Each of the following systems of periodic tasks is scheduled and executed according to a cyclic schedule. For each system, choose an appropriate frame size. Preemptions are allowed, but the number of preemptions should be kept small.
c) (4, 0.5), (5, 1.0), (10, 2), and (24, 9)
Sol: The frame size has to meet all three criteria discussed in the chapter.
1. f ≥ max(ei), 1 ≤ i ≤ n: f ≥ 9
2. f divides at least one of the periods evenly: f ∈ {10, 12, 24} (after dropping candidates smaller than 9)
3. 2f − gcd(f, pi) ≤ Di, 1 ≤ i ≤ n
f = 10: 2(10) − gcd(10, 4) = 20 − 2 = 18 > 4
f = 12: 2(12) − gcd(12, 4) = 24 − 4 = 20 > 4
f = 24: 2(24) − gcd(24, 4) = 48 − 4 = 44 > 4
None of the possible frame sizes work, because e4 = 9 is too long. We have to split T4 into smaller tasks. First try e4,1 = 4 and e4,2 = 5:
1. f ≥ max(ei): f ≥ 5
2. f ∈ {5, 6, 8, 10, 12, 24}
3. f = 5: 2(5) − gcd(5, 4) = 10 − 1 = 9 > 4
A frame size of 5 is still too big. We cannot make the frame size any smaller unless we break the task into smaller pieces. Try dividing T4 into three equal-sized pieces with e = 3:
1. f ≥ max(ei): f ≥ 3
2. f ∈ {3, 4, 5, 6, 8, 10, 12, 24}
3. f = 3: 2(3) − gcd(3, 4) = 6 − 1 = 5 > 4
Even three is too big. We need to break up T4 further; try four pieces with execution time 2 and one with execution time 1:
1. f ≥ max(ei): f ≥ 2
2. f ∈ {2, 3, 4, 5, 6, 8, 10, 12, 24}
3. f = 2:
2(2) − gcd(2, 4) = 4 − 2 = 2 ≤ 4
2(2) − gcd(2, 5) = 4 − 1 = 3 ≤ 5
2(2) − gcd(2, 10) = 4 − 2 = 2 ≤ 10
2(2) − gcd(2, 24) = 4 − 2 = 2 ≤ 24
With this set of tasks, f = 2 works.
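A quick check (a sketch) that f = 2 satisfies criterion 3 for the split task set:

```python
from math import gcd

# Part (c) after splitting the (24, 9) task into four slices of 2 and one of 1.
tasks = [(4, 0.5), (5, 1.0), (10, 2)] + [(24, 2)] * 4 + [(24, 1)]
f = 2
# criterion 3 with D = p: 2f - gcd(f, p) <= p for every task
print(all(2 * f - gcd(f, p) <= p for p, _ in tasks))  # True
```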

d) (5, 0.1), (7, 1.0), (12, 6), and (45, 9)

Sol: The frame size has to meet all three criteria discussed in the chapter.
1. f ≥ max(ei), 1 ≤ i ≤ n: f ≥ 9
The smallest period is 5, which is less than the longest execution time. We cannot have a frame size larger than the smallest period, so at this point we know we have to split the (45, 9) task and the (12, 6) task. Splitting (45, 9) into two tasks does not leave many frame-size choices. Try (45, 9) => (45, 3), (45, 3), (45, 3) and (12, 6) => (12, 3), (12, 3):
f ≥ 3
2. f divides at least one of the periods evenly: f ∈ {3, 4, 5, 6, 7, 9, 12, 15, 45} (after dropping candidates smaller than 3)
3. 2f − gcd(f, pi) ≤ Di, 1 ≤ i ≤ n
f = 3:
2(3) − gcd(3, 5) = 6 − 1 = 5 ≤ 5
2(3) − gcd(3, 7) = 6 − 1 = 5 ≤ 7
2(3) − gcd(3, 12) = 6 − 3 = 3 ≤ 12
2(3) − gcd(3, 45) = 6 − 3 = 3 ≤ 45
f = 4: 2(4) − gcd(4, 5) = 8 − 1 = 7 > 5
f = 5:
2(5) − gcd(5, 5) = 10 − 5 = 5 ≤ 5
2(5) − gcd(5, 7) = 10 − 1 = 9 > 7
Larger candidates fail criterion 3 against p1 = 5 as well. The only frame size that works for this set of tasks is f = 3 (assuming the last two tasks are split as described above).

Scheduling Aperiodic Jobs

- Aperiodic jobs are scheduled in the background, after all jobs with hard deadlines scheduled in each frame have completed.
- This delays the execution of aperiodic jobs in preference to periodic jobs. However, there is often no advantage to completing a hard real-time job early, and since an aperiodic job is released in response to an event, the sooner such a job completes, the more responsive the system.
- Hence, minimizing response times for aperiodic jobs is typically a design goal of real-time schedulers.

Slack Stealing

- Periodic jobs are scheduled in frames that end before their deadlines, so there may be some slack time in the frame after the periodic jobs complete.
- Since we know the execution times of the periodic jobs, we can move the slack time to the start of the frame, running the periodic jobs just in time to meet their deadlines.
- Execute aperiodic jobs in the slack time, ahead of the periodic jobs. The cyclic executive keeps track of the slack left in each frame as the aperiodic jobs execute, and preempts them to start the periodic jobs when there is no more slack.
- As long as there is slack remaining in a frame, the cyclic executive returns to examine the aperiodic job queue after each slice completes.
- This reduces response times for aperiodic jobs, but requires accurate timers.

Q.5.1: Each of the following systems of periodic tasks is scheduled and executed
according to a cyclic schedule. For each system, choose an appropriate frame size.
Preemptions are allowed, but the number of preemptions should be kept small.

e) (5, 0.1), (7, 1.0), (12, 6), and (45, 9)

Sol: This is the same task set as part (d); the same analysis applies. With (45, 9) split into three (45, 3) slices and (12, 6) split into two (12, 3) slices, the only frame size that works is f = 3.

f) (7, 5, 1, 5), (9, 1), (12, 3), and (0.5, 23, 7, 21)
Sol: The frame size has to meet all three criteria discussed in the chapter.
1. f ≥ max(ei), 1 ≤ i ≤ n: f ≥ 7
The smallest period is 5, which is less than the longest execution time. We cannot have a frame size larger than the smallest period, so at this point we know we have to split the (0.5, 23, 7, 21) task. Splitting it into two tasks does not work (try it, to see). Split the long task into three: (0.5, 23, 3, 21), (0.5, 23, 2, 21), and (0.5, 23, 2, 21).
f ≥ 3
2. f divides at least one of the periods evenly: f ∈ {3, 4, 5, 6, 9, 12, 23} (after dropping candidates smaller than 3)
3. 2f − gcd(f, pi) ≤ Di, 1 ≤ i ≤ n
f = 3:
2(3) − gcd(3, 5) = 6 − 1 = 5 ≤ 5
2(3) − gcd(3, 9) = 6 − 3 = 3 ≤ 9
2(3) − gcd(3, 12) = 6 − 3 = 3 ≤ 12
2(3) − gcd(3, 23) = 6 − 1 = 5 ≤ 21
f = 4: 2(4) − gcd(4, 5) = 8 − 1 = 7 > 5
f = 5:
2(5) − gcd(5, 5) = 10 − 5 = 5 ≤ 5
2(5) − gcd(5, 9) = 10 − 1 = 9 ≤ 9
2(5) − gcd(5, 12) = 10 − 1 = 9 ≤ 12
2(5) − gcd(5, 23) = 10 − 1 = 9 ≤ 21

Either f = 3 or f = 5 may work, assuming the long task is split as described above. We need to construct a schedule to verify that the tasks can actually be scheduled with those frame sizes.

Scheduling Sporadic Jobs

- So far we assumed there were no sporadic jobs. Sporadic jobs have hard deadlines, but release and execution times that are not known a priori; hence, a clock-driven scheduler cannot guarantee a priori that sporadic jobs complete in time.
- However, the scheduler can determine whether a sporadic job is schedulable when it arrives: perform an acceptance test to check whether the newly released sporadic job can be feasibly scheduled with all the jobs in the system at that time. If there is sufficient slack time in the frames before the new job's deadline, the new sporadic job is accepted; otherwise, it is rejected.
- It can be determined that a new sporadic job cannot be handled as soon as that job is released: earliest possible rejection. If more than one sporadic job arrives at once, they should be queued for acceptance in EDF order.

Practical Considerations

- Handling overruns: jobs are scheduled based on maximum execution time, but failures might cause an overrun. A robust system will handle this by either (1) killing the job and starting an error-recovery task, or (2) preempting the job and scheduling the remainder as an aperiodic job. The choice depends on the usefulness of late results, dependencies between jobs, etc.
- Mode changes: a cyclic scheduler needs to know all parameters of real-time jobs a priori. Switching between modes of operation implies reconfiguring the scheduler and bringing in the code/data for the new jobs. This can take a long time: schedule the reconfiguration job as an aperiodic or sporadic task to ensure other deadlines are met during the mode change.
- Multiple processors: can be handled, but off-line generation of the scheduling table is more complex.

Q.5.2: A system uses the cyclic EDF algorithm to schedule sporadic jobs. The cyclic schedule of periodic tasks in the system uses a frame size of 5, and a major cycle contains 6 frames. Suppose that the initial amounts of slack time in the frames are 1, 0.5, 0.5, 0.5, 1, and 1.
a. Suppose that a sporadic job S1(23, 1) arrives in frame 1, and sporadic jobs S2(16, 0.8) and S3(20, 0.5) arrive in frame 2. In which frames are the accepted sporadic jobs scheduled?
Sol:
S1(23, 1)
Since S1 arrives in frame 1, scheduling decisions about it are made at the start of frame
2. Frame 2 has a slack of 0.5, as does frame 3. Frame 3 ends at t=15 which is well
before S1's deadline. The scheduler accepts S1 at the start of frame 2. If no other jobs
arrive it would finish at the end of frame 3.
S2(16, 0.8)

The scheduler examines S2 at the start of frame 3 (t=10). The deadline, 16, is in frame
4, but there is only 0.5 slack in frame 3, so there is no way S2 can finish before its
deadline. The scheduler rejects S2 at the start of frame 3.
S3(20, 0.5)
The scheduler examines S3 at the start of frame 3 (t=10). Its deadline is 20, which is the start of frame 5. There are 1.0 units of slack between frame 3 and frame 5, so the scheduler needs to see if S3 can be scheduled without making any currently scheduled jobs miss their deadlines. S3 has an earlier deadline than S1, so if S3 were accepted, it would run for 0.5 time units at the end of frame 3 and S1 would run for 0.5 time units at the end of frame 4. Since S1 has already executed for 0.5 time units at the end of frame 2, it will meet its deadline. S3 is accepted at the start of frame 3.

b. Suppose that an aperiodic job with execution time 3 arrives at time 1. When will it be completed, if the system does not do slack stealing?
Sol: Call the aperiodic job A. When all the periodic jobs complete at the end of frame 1,
the scheduler will let A execute until the start of frame 2, 1 time unit later. Frames 2,
3, and 4 have no slack because S1 and S3, from part (a), consume all of it. The scheduler
runs A in the slack at the ends of frames 5 and 6. A completes at t=30, the end of frame
6.
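The acceptance decisions in part (a) can be sketched as a simplified slack check (an illustration only: the real cyclic-EDF test must also update the slack as jobs are accepted; frames are 0-indexed here, while the text counts from 1):

```python
f = 5
slack = [1.0, 0.5, 0.5, 0.5, 1.0, 1.0]   # initial slack of frames 0..5

def acceptance_test(start_frame, deadline, e):
    """Accept a sporadic job, tested at the start of start_frame, if the slack
    in the frames that end by its deadline covers its execution time."""
    last = int(deadline // f)             # index of the first frame ending after the deadline
    return sum(slack[start_frame:last]) >= e

print(acceptance_test(1, 23, 1.0))  # True:  S1 sees 0.5 + 0.5 + 0.5 slack before t = 23
print(acceptance_test(2, 16, 0.8))  # False: S2 sees only 0.5 slack before t = 16
```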

Pros and Cons of Clock-Driven Scheduling

Pros:
- Simplicity
- Easy extension of frame-based decisions to event-driven jobs: decisions are made at clock ticks, events are queued, and polling is time-driven

Cons:
- Hard to maintain and modify
- Release times are fixed and must be known in advance

Q.5.3: Draw a network flow graph that we can use to find a preemptive cyclic schedule of the periodic tasks
T1 = (3, 1, 7); T2 = (4, 1); T3 = (6, 2.4, 8).
Sol:
1. f ≥ max(ei), 1 ≤ i ≤ n: f ≥ 2.4
2. f divides at least one of the periods evenly: f ∈ {3, 4, 6, 12}
3. 2f − gcd(f, pi) ≤ Di, 1 ≤ i ≤ n. The values of 2f − gcd(f, pi) are:

pi | Di | f = 3    | f = 4 | f = 6
 3 |  7 |  3       |  7    |  9 (> 7)
 4 |  4 |  5 (> 4) |  4    | 10 (> 4)
 6 |  8 |  3       |  6    |  6

(f = 12 also fails criterion 3: 2(12) − gcd(12, 3) = 21 > 7.) Only f = 4 satisfies criterion 3 for every task. Hence, for f = 4, the network flow graph is as shown.

T3 can't be scheduled.

Assumptions for Clock-driven scheduling

- Clock-driven scheduling is applicable to deterministic systems.
- A restricted periodic task model: the parameters of all periodic tasks are known a priori. For each mode of operation, the system has a fixed number, n, of periodic tasks.
- For task Ti, each job Ji,k is ready for execution at its release time ri,k and is released pi units of time after the previous job in Ti, such that ri,k = ri,k-1 + pi. Variations in the inter-release times of jobs in a periodic task are negligible.
- Aperiodic jobs may exist. Assume the system maintains a single queue for aperiodic jobs; whenever the processor is available for aperiodic jobs, the job at the head of this queue is executed.
- There are no sporadic jobs.

Notation for Clock-driven scheduling

The 4-tuple Ti = (φi, pi, ei, Di) refers to a periodic task Ti with phase φi, period pi, execution time ei, and relative deadline Di. The default phase of Ti is φi = 0, and the default relative deadline is the period, Di = pi.
Elements of the tuple that have default values are omitted.

The clock-driven approach has many advantages:

- conceptual simplicity;
- we can take into account complex dependencies, communication delays, and resource contentions among jobs in the choice and construction of the static schedule;
- the static schedule is stored in a table; change the table to change the operation mode;
- no need for concurrency control and synchronization mechanisms;
- context-switch overhead can be kept low with large frame sizes.

It is possible to further simplify clock-driven scheduling:

- sporadic and aperiodic jobs may also be time-triggered (interrupts in response to external events are queued and polled periodically);
- the periods may be chosen to be multiples of the frame size.

Clock-driven schedules are easy to validate, test, and certify (by exhaustive simulation and testing). Many traditional real-time applications use clock-driven schedules. This approach is suited for systems (e.g. small embedded controllers) that are rarely modified once built.

Q.5.4: A system contains the following periodic tasks: T1 = (5, 1); T2 = (7, 1, 9); T3 = (10, 3); and T4 = (35, 7).

If the frame size constraint (5-1) is ignored, what are the possible frame sizes?
Sol:
1. f ≥ max(ei), 1 ≤ i ≤ n: this constraint is ignored here.
2. f divides the hyperperiod H = 70 evenly: f ∈ {2, 5, 7, 10, 14, 35}
3. 2f − gcd(f, pi) ≤ Di, 1 ≤ i ≤ n. The values of 2f − gcd(f, pi) are (x marks a violated constraint):

pi | Di | f = 2 | f = 5 | f = 7 | f = 10 | f = 14 | f = 35
 5 |  5 |   3   |   5   | 13(x) | 15(x)  | 27(x)  | 65(x)
 7 |  9 |   3   |   9   |   7   | 19(x)  | 21(x)  | 63(x)
10 | 10 |   2   |   5   | 13(x) |  10    | 26(x)  | 65(x)
35 | 35 |   3   |   5   |   7   |  15    |  21    |  35

The possible frame sizes are f = 2 and f = 5.

Cyclic scheduling: frame size

- Decision points are at regular intervals (frames); within a frame the processor may be idle to accommodate aperiodic jobs.
- The first job of every task is released at the beginning of some frame.

How do we determine the frame size f? The following 3 constraints should be satisfied:

1. f ≥ max(ei) (for 1 ≤ i ≤ n, with n tasks)
Each job may start and complete within one frame: no job is preempted.

2. ⌊pi/f⌋ − pi/f = 0 (for at least one i)
To keep the cyclic schedule short, f must divide the hyperperiod H; this is true if f divides at least one pi.

3. 2f − gcd(pi, f) ≤ Di (for 1 ≤ i ≤ n)
There must be at least one whole frame between the release time and the deadline of every job, so that the job can be feasibly scheduled in that frame.

Constructing a cyclic schedule


Design steps and decisions to consider in the process of constructing a cyclic schedule:
- determine the hyperperiod H;
- determine the total utilization U (if U > 1 the schedule is infeasible);
- choose a frame size that meets the constraints;
- partition jobs into slices, if necessary;
- place the slices in the frames.
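The first two steps are one-liners in code (a sketch; `math.lcm` needs Python 3.9+), shown here on the task set of Q.6.4:

```python
from math import lcm

def hyperperiod(tasks):
    """Hyperperiod H = lcm of the task periods; tasks are (p, e) pairs."""
    return lcm(*(p for p, _ in tasks))

def utilization(tasks):
    """Total utilization U = sum of e/p over all tasks."""
    return sum(e / p for p, e in tasks)

tasks = [(8, 1), (15, 3), (20, 4), (22, 6)]
print(hyperperiod(tasks))            # 1320
print(round(utilization(tasks), 3))  # 0.798
```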

The clock-driven approach has many disadvantages:

- brittle: changes in execution time or the addition of a task often require a new schedule to be constructed;
- release times must be fixed (this is not required in priority-driven systems);
- all combinations of periodic tasks that might execute at the same time must be known a priori: it is not possible to reconfigure the system on line (priority-driven systems do not have this restriction);
- not suitable for many systems that contain both hard and soft real-time applications: in the clock-driven systems previously discussed, aperiodic and sporadic jobs were scheduled in a priority-driven manner (EDF).

Q.6.4: A system T contains four periodic tasks, (8, 1), (15, 3), (20, 4), and (22, 6). Its total utilization is 0.80. Construct the initial segment in the time interval (0, 50) of a rate-monotonic schedule of the system.
Sol: The schedule is as follows:

Schedulability Test for RMA


An important problem that is addressed during the design of a uniprocessor-based real-time system is
to check whether a set of periodic real-time tasks can feasibly be scheduled under RMA. Schedulability
of a task set under RMA can be determined from a knowledge of the worst-case execution times and
periods of the tasks. A pertinent question at this point is how can a system developer determine the
worst-case execution time of a task even before the system is developed. The worst-case execution
times are usually determined experimentally or through simulation studies.
The following are some important criteria that can be used to check the schedulability of a set of tasks
set under RMA.

Necessary Condition
A set of periodic real-time tasks would not be RMA schedulable unless they satisfy the following
necessary condition:
i= ei / pi = ui 1
where ei is the worst case execution time and pi is the period of the task Ti, n is the number of tasks to
be scheduled, and ui is the CPU utilization due to the task Ti. This test simply expresses the fact that
the total CPU utilization due to all the tasks in the task set should be less than 1.

Sufficient Condition
The derivation of the sufficiency condition for RMA schedulability is an important result and was
obtained by Liu and Layland in 1973. A formal derivation of Liu and Layland's result from first
principles is beyond the scope of this discussion. We will subsequently refer to the sufficiency
condition as the Liu and Layland condition. A set of n real-time periodic tasks is schedulable
under RMA if
Σi=1..n ui ≤ n(2^(1/n) − 1)
where ui is the utilization due to task Ti. Let us now examine the implications of this result. If a set of
tasks satisfies the sufficient condition, then it is guaranteed that the set of tasks is RMA
schedulable.
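The two utilization tests above can be sketched in a few lines of Python. The task representation as (period, execution time) pairs follows the convention used in the exercises; the helper names are illustrative, not from Liu and Layland:

```python
def utilization(tasks):
    """Total CPU utilization of a set of (period, execution_time) tasks."""
    return sum(e / p for p, e in tasks)

def passes_necessary_test(tasks):
    """Necessary condition: total utilization must not exceed 1."""
    return utilization(tasks) <= 1

def passes_liu_layland_test(tasks):
    """Sufficient condition: U <= n(2^(1/n) - 1)."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

# The task set of Q.6.4: (8, 1), (15, 3), (20, 4), (22, 6)
tasks = [(8, 1), (15, 3), (20, 4), (22, 6)]
print(round(utilization(tasks), 2))    # → 0.8
# The Liu-Layland bound for n = 4 is about 0.757, so this set fails the
# sufficient test; the test is then indeterminate, not a proof of infeasibility.
print(passes_liu_layland_test(tasks))  # → False
```

Note that failing the sufficient test says nothing either way: this set of Q.6.4 is in fact RM-schedulable, as its schedule constructed above shows.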

Q.6.5: Which of the following systems of periodic tasks are schedulable by the rate-monotonic
algorithm? By the earliest-deadline-first algorithm? Explain your answer.
a.

T = {(8, 3), (9, 3), (15, 3)}


Sol:

URM(3) ≈ 0.780

U = 3/8 + 3/9 + 3/15 ≈ 0.908 > URM(3)

The schedulable utilization test is indeterminate, so use time-demand analysis. For RM, the shortest period has the highest priority.
w1(t) = 3; W1 = 3 ≤ 8, so T1 is schedulable
w2(t) = 3 + ⌈t/8⌉·3 = t; W2 = 6 ≤ 9, so T2 is schedulable
w3(t) = 3 + ⌈t/8⌉·3 + ⌈t/9⌉·3 = t; W3 = 15 ≤ 15, so T3 is schedulable
All tasks are schedulable under RM, therefore the system is schedulable under RM.
U ≤ 1, so the system is also schedulable under EDF.

b.

T = {(8, 4), (12, 4), (20, 4)}

Sol: U = 4/8 + 4/12 + 4/20 ≈ 1.03 > 1

This system is not schedulable by any scheduling algorithm.

c.

T = {(8, 4), (10, 2), (12, 3)}


Sol: U = 4/8 + 2/10 + 3/12 = 0.95 > URM(3) ≈ 0.780
The schedulable utilization test is indeterminate, so use time-demand analysis.
w1(t) = 4; W1 = 4 ≤ 8, so T1 is schedulable
w2(t) = 2 + ⌈t/8⌉·4 = t; W2 = 6 ≤ 10, so T2 is schedulable
w3(t) = 3 + ⌈t/8⌉·4 + ⌈t/10⌉·2 = t; W3 = 15 > 12, so T3 misses its deadline
This system is not schedulable under RM.
U ≤ 1, so this system is schedulable under EDF.

Earliest Deadline First (EDF) Scheduling


In Earliest Deadline First (EDF) scheduling, at every scheduling point the task having the
earliest deadline is taken up for scheduling. The basic principle of this algorithm is very
intuitive and simple to understand. The schedulability test for EDF is also simple. A task set is
schedulable under EDF if and only if the total processor utilization due to the task set is no
greater than 1.
EDF has been proven to be an optimal uniprocessor scheduling algorithm. This means that, if
a set of tasks is not schedulable under EDF, then no other scheduling algorithm can feasibly
schedule this task set. In the simple schedulability test for EDF, we assumed that the period of
each task is the same as its deadline. However, in practical problems the period of a task may
at times be different from its deadline. In such cases, the schedulability test needs to be
changed.
A more efficient implementation of EDF would be as follows. EDF can be implemented by
maintaining all ready tasks in a sorted priority queue. A sorted priority queue can efficiently
be implemented by using a heap data structure. In the priority queue, the tasks are always
kept sorted according to the proximity of their deadline. When a task arrives, a record for it
can be inserted into the heap in O(log2 n) time where n is the total number of tasks in the
priority queue.
At every scheduling point, the next task to run can be found at the top of the heap in O(1) time.
When a task is taken up for scheduling, it is removed from the priority queue; this removal
takes O(log2 n) time.
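A minimal sketch of such a heap-based EDF ready queue, using Python's standard heapq module (the class and method names are illustrative, not a real RTOS API):

```python
import heapq

class EDFQueue:
    """Ready queue that always yields the job with the earliest absolute deadline."""

    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker so jobs with equal deadlines compare cleanly

    def release(self, deadline, job):
        # O(log n) insertion, keyed by absolute deadline
        heapq.heappush(self._heap, (deadline, self._count, job))
        self._count += 1

    def next_job(self):
        # The earliest-deadline job sits at the heap root; popping it
        # restores the heap property in O(log n) time.
        deadline, _, job = heapq.heappop(self._heap)
        return job

q = EDFQueue()
q.release(30, "J1")
q.release(12, "J2")
q.release(25, "J3")
print(q.next_job())  # → J2, the job with the earliest deadline
```

The counter tie-breaker also makes the queue FIFO among jobs with equal deadlines, which keeps the comparison total even when the job objects themselves are not comparable.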

Q.6.6: Give two different explanations of why the periodic tasks (2, 1), (4, 1), and (8, 2) are
schedulable by the rate-monotonic algorithm.
Sol: Priorities are assigned to tasks statically, before the actual execution of the task set.
The Rate Monotonic scheduling scheme assigns higher priority to tasks with smaller periods. It is
preemptive (tasks are preempted by higher priority tasks). It is an optimal scheduling
algorithm among fixed-priority algorithms; if a task set cannot be scheduled with RM, it cannot
be scheduled by any fixed-priority algorithm.


The sufficient schedulability test is given by:

U = Σi=1..n (ei/pi) ≤ n(2^(1/n) − 1)

The term U is said to be the processor utilization factor (the fraction of the processor time
spent on executing the task set), and n is the number of tasks. For n = 3 the bound is
3(2^(1/3) − 1) ≈ 0.78.
In our case: 1/2 + 1/4 + 2/8 = 1, which is not less than 0.78.
The above condition is not necessary; we can apply a somewhat more involved sufficient and
necessary condition test, as follows.
We have to guarantee that all the tasks can be scheduled, in any possible instance. In
particular, if a task can be scheduled in its critical instances, then the schedulability guarantee
condition holds (a critical instance of a task occurs whenever the task is released
simultaneously with all higher priority tasks). For that, we have to use the method as
mentioned in Exercise 6.5.

Q.6.7:

This problem is concerned with the performance and behavior of the rate-monotonic and
earliest-deadline-first algorithms.
a.
Construct the initial segments in the time interval (0, 750) of a rate-monotonic schedule
and an earliest-deadline-first schedule of the periodic tasks (100, 20), (150, 50), and (250, 100),
whose total utilization is 0.93.
Sol:

RM

Note, the third task (the blue one) runs past its deadline from t = 250 to t = 260.

EDF

There are no missed deadlines in this schedule.

b.
Construct the initial segments in the time interval (0, 750) of a rate-monotonic schedule
and an earliest-deadline-first schedule of the periodic tasks (100, 20), (150, 50), and (250, 120),
whose total utilization is 1.01.
Sol:
RM

The third task (the blue one) runs past its deadline from 250 to 280 and from 520 to
560. The third task will continue to be backlogged farther and farther each time a new
job in the task is released, but the first and second task are not affected.

EDF

Task 2 eventually misses its deadline. Once jobs start missing deadlines, almost every
job is going to miss its deadline.

Rate Monotonic vs. EDF


Since the first results published in 1973 by Liu and Layland on the Rate Monotonic (RM) and
Earliest Deadline First (EDF) algorithms, a lot of progress has been made in the schedulability
analysis of periodic task sets. Unfortunately, many misconceptions still exist about the
properties of these two scheduling methods, which usually tend to favor RM over EDF.
Typical wrong statements often heard in technical conferences and even in research papers
claim that RM is easier to analyze than EDF, introduces less runtime overhead, is more
predictable in overload conditions, and causes less jitter in task execution. Since the above
statements are either wrong or imprecise, it is time to clarify these issues in a systematic
fashion, because the use of EDF allows a better exploitation of the available resources and
significantly improves system performance.

Most commercial RTOSes are based on RM. RM is simpler to implement on top of


commercial (fixed priority) kernels.
EDF requires explicit kernel support for deadline scheduling, but gives other advantages.
Less overhead due to preemptions.
More uniform jitter control
Better aperiodic responsiveness.
Two different types of overhead:
Overhead for job release
EDF has more than RM, because the absolute deadline must be updated at each job activation
Overhead for context switch
RM has more than EDF because of the higher number of preemptions

Resource access protocols:


For RM
Non Preemptive Protocol (NPP)
Highest Locker Priority (HLP)
Priority Inheritance (PIP)
Priority Ceiling (PCP)
Under EDF
Non Preemptive Protocol (NPP)
Dynamic Priority Inheritance (D-PIP)
Dynamic Priority Ceiling (D-PCP)
Stack Resource Policy (SRP)

Q.6.8: a) Use the time demand analysis method to show that the rate-monotonic algorithm
will produce a feasible schedule of the tasks (6,1), (8,2) and (15,6).
Sol: U = 1/6 + 2/8 + 6/15 ≈ 0.817
Time-demand analysis:
w1(t) = 1; W1 = 1 ≤ 6, so T1 is schedulable
w2(t) = 2 + ⌈t/6⌉·1; w2(6) = 2 + 1 = 3, so W2 = 3 ≤ 8 and T2 is schedulable
w3(t) = 6 + ⌈t/6⌉·1 + ⌈t/8⌉·2; w3(6) = 6 + 1 + 2 = 9; w3(12) = 6 + 2 + 4 = 12,
so W3 = 12 ≤ 15 and T3 is schedulable

b) Change the period of one of the tasks in part (a) to yield a set of tasks with the maximal total
utilization which is feasible when scheduled using the rate-monotonic algorithm. (Consider only
integer values for period)

Sol: Change P1 such that
w3(15) = 6 + ⌈15/P1⌉·1 + ⌈15/8⌉·2 = 6 + ⌈15/P1⌉ + 4 = 15
=> ⌈15/P1⌉ = 5
=> P1 = 3

c) Change the execution time of one of the tasks in part (a) to yield a set of tasks with the
maximum total utilization which is feasible when scheduled using the rate-monotonic algorithm.
(Consider only integer values for the execution time.)

Sol: Change the execution time of T3 to obtain the maximum possible utilization:

w3(15) = e3 + ⌈15/6⌉·1 + ⌈15/8⌉·2 = 15
=> e3 + 3 + 4 = 15
=> e3 = 8
=> T3 = (15, 8)

Rate Monotonic Scheduling


The term rate monotonic derives from a method of assigning priorities to a set of processes as a
monotonic function of their rates. While rate monotonic scheduling systems use rate monotonic
theory for actually scheduling sets of tasks, rate monotonic analysis can be used on tasks scheduled by
many different systems to reason about schedulability. We say that a task is schedulable if the sum of
its preemption, execution, and blocking is less than its deadline. A system is schedulable if all tasks
meet their deadlines. Rate monotonic analysis provides a mathematical and scientific model for
reasoning about schedulability.

Assumptions
Reasoning with rate monotonic analysis requires the presence of the following assumptions:
Task switching is instantaneous.
Tasks account for all execution time.
Task interactions are not allowed.
Tasks become ready to execute precisely at the beginning of their periods and relinquish the CPU
only when execution is complete.

Task deadlines are always at the start of the next period.


Tasks with shorter periods are assigned higher priorities; the criticality of tasks is not considered.
Task execution is always consistent with its rate monotonic priority: a lower priority task never
executes when a higher priority task is ready to execute.

Q.6.9: The Periodic Tasks (3,1), (4,2), (6,1) are scheduled according to the rate-monotonic
algorithm.

a)

Draw Time Demand Function of the tasks

Sol:

b)

Are the tasks schedulable? Why or why not ?

Sol: No. Based on the Time Demand Function graph, Task 3 does not touch or
go below the dashed line by its deadline at time 6. In other words, it cannot meet
its deadline and is therefore not schedulable.

c)

Can this graph be used to determine whether the tasks are schedulable according to an arbitrary
priority-driven algorithm?

Sol: No. This graph is fundamentally based on a fixed-priority algorithm, which assigns the
same priority to all jobs in each task. In the graph, T2 is built on top of T1 since all jobs in T1
have a higher priority than all jobs in T2; T3 is built on top of T1 and T2 since all jobs in T1 and
T2 have a higher priority than all jobs in T3. This graph does not depict a dynamic-priority
algorithm, such as earliest deadline first (EDF). In EDF, any job in a task can have a higher
priority at a specific moment depending on its deadline compared to the jobs of other tasks.
Therefore, this graph cannot be used to determine the schedulability of an arbitrary priority-driven algorithm.

Time-demand Analysis
Simulate system behaviour at the critical instants. For each job Ji,c released at a critical instant, if Ji,c
and all higher priority tasks complete executing before their relative deadlines, the system can be
scheduled. Compute the total demand for processor time by a job released at a critical instant of a
task, and by all the higher-priority tasks, as a function of time from the critical instant; check whether
this demand can be met before the deadline of the job:
Consider one task, Ti, at a time, starting with the highest priority and working down to the lowest
priority. Focus on a job, Ji, in Ti, where the release time, t0, of that job is a critical instant of Ti.
Compare the time-demand function, wi(t), with the available time, t:
- If wi(t) ≤ t for some t ≤ Di, the job, Ji, meets its deadline, t0 + Di.
- If wi(t) > t for all 0 < t ≤ Di, then the task probably cannot complete by its deadline, and the system
likely cannot be scheduled using a fixed priority algorithm.
Note that this is a sufficient condition, but not a necessary one. Simulation may show that the
critical instant never occurs in practice, so the system could still be feasible.
Use this method to check that all tasks can be scheduled if released at their critical instants; if so,
conclude that the entire system can be scheduled. The time-demand function, wi(t), is a staircase
function with steps at multiples of the higher-priority task periods. Plot the time demand versus
available time graphically to get intuition into the approach.
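The procedure above can be sketched as a short Python function. Tasks are given as (period, execution time) pairs in decreasing priority order, and relative deadlines are assumed equal to periods (an assumption of this sketch, matching the exercises here):

```python
import math

def time_demand_ok(tasks):
    """Worst-case response times via time-demand analysis at the critical instant.

    tasks: list of (period, execution_time) in decreasing priority order,
    with each relative deadline assumed equal to the period.
    Returns, per task, the worst-case response time, or None when the task
    cannot be shown to meet its deadline.
    """
    responses = []
    for i, (p_i, e_i) in enumerate(tasks):
        t = e_i  # start the fixed-point iteration at the task's own demand
        while t <= p_i:
            # demand = own execution + ceil(t/p_k)*e_k for each higher-priority task
            demand = e_i + sum(math.ceil(t / p_k) * e_k
                               for p_k, e_k in tasks[:i])
            if demand == t:  # fixed point: worst-case response time found
                break
            t = demand
        responses.append(t if t <= p_i else None)
    return responses

# Q.6.5(a): T = {(8, 3), (9, 3), (15, 3)}
print(time_demand_ok([(8, 3), (9, 3), (15, 3)]))  # → [3, 6, 15]
```

Running it on the task set of Q.6.5(c), {(8, 4), (10, 2), (12, 3)}, yields None for the third task, matching the deadline miss found there.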

Q.6.10:

Which of the following fixed-priority tasks is not schedulable? Explain your

answer.

T1(5,1)

T2(3,1)

T3(7,2.5)

T4(16,1)

Sol: If Wi(t) ≤ t for some t no later than the deadline, the task is schedulable.

Assume the RM/DM scheduling algorithm is used. Priority: T2 > T1 > T3 > T4.

An index, i, is assigned to each task according to its priority:
T2: i = 1, T1: i = 2, T3: i = 3, T4: i = 4

Check at t = 3, 5, 6, 7, 9, 10, 12, 14, 15, 16:

W1(t) = 1 ≤ t for t = 3, 5, 6, 7, 9, 10, 12, 14, 15, 16 => Schedulable

W2(t) = 1 + ⌈t/3⌉·1
W2(3) = 1 + 1 = 2 ≤ 3 => Schedulable

W3(t) = 2.5 + ⌈t/3⌉·1 + ⌈t/5⌉·1 (check: 2.5 + 1 + 1 = 4.5 => min t = 5)
W3(5) = 2.5 + 2 + 1 = 5.5
W3(6) = 2.5 + 2 + 2 = 6.5
W3(7) = 2.5 + 3 + 2 = 7.5 > 7 => Misses its deadline at 7; Not Schedulable

W4(t) = 1 + ⌈t/3⌉·1 + ⌈t/5⌉·1 + ⌈t/7⌉·2.5 (check: 1 + 1 + 1 + 2.5 = 5.5 => min t = 6)
W4(6) = 1 + 2 + 2 + 2.5 = 7.5
W4(7) = 1 + 3 + 2 + 2.5 = 8.5
W4(9) = 1 + 3 + 2 + 5 = 11
W4(10) = 1 + 4 + 2 + 5 = 12
W4(12) = 1 + 4 + 3 + 5 = 13
W4(14) = 1 + 5 + 3 + 5 = 14 ≤ 14 => Schedulable

T3 is not a schedulable task.

Time Bound in Fixed-Priority Scheduling


Since worst-case response times must be determined repeatedly during the interactive design of
real-time application systems, repeated exact computation of such response times would slow down the
design process considerably. In this research, we identify three desirable properties of estimates of the
exact response times: continuity with respect to system parameters, efficient computability, and
approximability. We derive a technique possessing these properties for estimating the worst-case
response time of sporadic task systems that are scheduled using fixed priorities upon a preemptive
uniprocessor.
When a group of tasks shares a common resource (such as a processor or a communication medium), a
scheduling policy is necessary to arbitrate access to the shared resource. One of the most intuitive
policies consists of assigning Fixed Priorities (FP) to the tasks, so that at each instant in time the
resource is granted to the highest-priority task requiring it at that instant. Depending on the assigned
priority, a task can have a longer or shorter response time, which is the time elapsed from the request
of the resource to the completion of the task.

Q.6.11: Find the maximum possible response time of task T4 in the following fixed-priority
system by solving the equation w4(t) = t iteratively.
T1 = (5,1), T2 = (3,1),

T3 = (8,1.6), and T4 = (18,3.5)

Sol: Solve w4(t) = 3.5 + ⌈t/5⌉·1 + ⌈t/3⌉·1 + ⌈t/8⌉·1.6 = t iteratively.
Iteration 1 (t = 1):
w4(1) = 3.5 + ⌈1/5⌉·1 + ⌈1/3⌉·1 + ⌈1/8⌉·1.6 = 3.5 + 1 + 1 + 1.6 = 7.1
Iteration 2 (t = 7.1):
w4(7.1) = 3.5 + ⌈7.1/5⌉·1 + ⌈7.1/3⌉·1 + ⌈7.1/8⌉·1.6 = 3.5 + 2 + 3 + 1.6 = 10.1
Iteration 3 (t = 10.1):
w4(10.1) = 3.5 + ⌈10.1/5⌉·1 + ⌈10.1/3⌉·1 + ⌈10.1/8⌉·1.6 = 3.5 + 3 + 4 + 3.2 = 13.7
Iteration 4 (t = 13.7):
w4(13.7) = 3.5 + ⌈13.7/5⌉·1 + ⌈13.7/3⌉·1 + ⌈13.7/8⌉·1.6 = 3.5 + 3 + 5 + 3.2 = 14.7
Iteration 5 (t = 14.7):
w4(14.7) = 3.5 + ⌈14.7/5⌉·1 + ⌈14.7/3⌉·1 + ⌈14.7/8⌉·1.6 = 3.5 + 3 + 5 + 3.2 = 14.7
The iteration has converged: the maximum possible response time is 14.7.
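The same fixed-point iteration can be automated; using exact rational arithmetic avoids floating-point pitfalls when repeatedly taking ceilings of values such as 14.7/3 (a sketch with the task parameters of this exercise; it assumes the iteration converges, which holds when the higher-priority utilization is below 1):

```python
from fractions import Fraction
from math import ceil

def response_time(e_i, higher):
    """Iterate w(t) = e_i + sum(ceil(t/p_k)*e_k) until a fixed point is reached.

    e_i: execution time of the task under analysis;
    higher: list of (period, execution_time) of the higher-priority tasks.
    """
    t = e_i
    while True:
        w = e_i + sum(ceil(t / p) * e for p, e in higher)
        if w == t:  # fixed point reached: t is the worst-case response time
            return t
        t = w

# T4 = (18, 3.5) with higher-priority tasks T1 = (5, 1), T2 = (3, 1), T3 = (8, 1.6)
higher = [(5, Fraction(1)), (3, Fraction(1)), (8, Fraction(8, 5))]
w4 = response_time(Fraction(7, 2), higher)
print(float(w4))  # → 14.7
```

Starting from t = e4 = 3.5, the iterates are 8.1, 11.7, 13.7, 14.7, then 14.7 again; the exact iterates differ slightly from the hand computation above but converge to the same fixed point.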


Q.6.13: Find the length of an in-phase level-3 busy interval of the following fixed-priority
tasks:
T1 = (5, 1), T2 = (3,1), T3 = (8, 1.6), and T4 = (18, 3.5)
Sol: The level-3 busy interval is determined by T1, T2, and T3. Solve
t = ⌈t/5⌉·1 + ⌈t/3⌉·1 + ⌈t/8⌉·1.6
The smallest solution is t = 4.6 (check: ⌈4.6/5⌉·1 + ⌈4.6/3⌉·1 + ⌈4.6/8⌉·1.6 = 1 + 2 + 1.6 = 4.6),
so the length of the in-phase level-3 busy interval is 4.6.

Busy Intervals
Definition: A level-i busy interval (t0, t] begins at an instant t0 when
(1) all jobs in Ti released before this instant have completed, and
(2) a job in Ti is released.
The interval ends at the first instant t after t0 when all jobs in Ti released since t0 are complete. For
any t that would qualify as the end of a level-i busy interval, a corresponding t0 exists. During a
level-i busy interval, the processor only executes tasks in Ti; other tasks can be ignored.

Definition: We say that a level-i busy interval is in phase if the first jobs of all tasks that execute in the
interval are released at the same time. For systems in which each task's relative deadline is at most its
period, we argued that an upper bound on a task's response time could be computed by considering a
critical instant scenario in which that task releases a job together with all higher-priority tasks. In
other words, we just consider the first job of each task in an in-phase system. For many
years, people just assumed this approach would work if a task's relative deadline could exceed its
period. Lehoczky showed that this folk wisdom that only each task's first job must be considered is
false by means of a counterexample.
The general schedulability test hinges upon the assumption that the job with the maximum response
time occurs within an in-phase busy interval.

Q.6.21: a) Use the time-demand analysis method to show that the set of periodic tasks {(5,
1), (8, 2), (14, 4)} is schedulable according to the rate-monotonic algorithm.
Sol: The shortest period has the highest priority.
T1 (5, 1): w1(t) = 1; W1 = 1 ≤ 5, so T1 is schedulable
T2 (8, 2): w2(t) = 2 + ⌈t/5⌉·1; W2 = 3 ≤ 8, so T2 is schedulable
T3 (14, 4): w3(t) = 4 + ⌈t/5⌉·1 + ⌈t/8⌉·2; W3 = 8 ≤ 14, so T3 is schedulable

b) Suppose that we want to make the first x units of each request in the task (8, 2) nonpreemptable.
What is the maximum value of x so that the system remains schedulable according to the
rate-monotonic algorithm?
Solution 1:
T={(5,1)(8,2)(14,4)}
T1=(5,1)

T2=(8,2)

T3=(14,4) (in order of priority, T1 being highest)

T2 (8, 2) can be made nonpreemptable for the first 2 time units (its entire duration) and still
allow the system to be scheduled on time.
Solution 2:
If we make the first x units of task (8, 2) nonpreemptable: T3 is unaffected by this change,
since T2 is a higher-priority task anyway. T2 is also unaffected; its response time will not be
hurt by the change (if anything it would improve). Only T1 can be blocked, for at most x
time units:
W1 = 1 + x ≤ 5, so x ≤ 4

x can be at most 4 time units. But since task T2 = (8, 2) only has an execution
time of 2 time units, x can be at most 2 time units.

Q.6.23: A system contains tasks T1 = (10,3), T2 = (16,4), T3 = (40,10) and

T4 = (50,5).
The total blocking due to all factors of the tasks are b1 = 5, b2 = 1, b3 = 4 and b4 = 10,
respectively. These tasks are scheduled on the EDF basis. Which tasks (or task) are (or is)
schedulable? Explain your answer.
Sol: For task Ti to be schedulable under EDF with blocking, we need U + bi/pi ≤ 1, where

U = 3/10 + 4/16 + 10/40 + 5/50
= 0.3 + 0.25 + 0.25 + 0.1
= 0.9
for T1:
0.9 + b1/p1 = 0.9 + 5/10 = 1.4 > 1

...not schedulable

for T2:
0.9 + 1/16 = 0.9 + 0.0625 = 0.9625 ≤ 1 ...schedulable
for T3:
0.9 + 4/40 = 0.9 + 0.1 = 1.0 ≤ 1

...schedulable

for T4:
0.9 + 10/50 = 0.9 + 0.2 = 1.1 > 1

...not schedulable
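The per-task test applied above, U + bi/pi ≤ 1 under EDF, can be sketched as follows. The function name is illustrative, the task and blocking values are those of this exercise, and exact fractions sidestep rounding:

```python
from fractions import Fraction as F

def edf_schedulable_with_blocking(tasks, blocking):
    """For each task Ti, check U + b_i/p_i <= 1 under EDF.

    tasks: list of (period, execution_time) pairs;
    blocking: list of total blocking times b_i, one per task.
    Returns a per-task list of booleans.
    """
    u = sum(F(e, p) for p, e in tasks)  # total utilization U
    return [u + F(b, p) <= 1 for (p, _), b in zip(tasks, blocking)]

tasks = [(10, 3), (16, 4), (40, 10), (50, 5)]
blocking = [5, 1, 4, 10]
print(edf_schedulable_with_blocking(tasks, blocking))
# → [False, True, True, False]: only T2 and T3 are schedulable
```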

Earliest deadline first scheduling


Earliest deadline first (EDF) or least time to go is a dynamic scheduling algorithm used in
real-time operating systems to place processes in a priority queue. Whenever a scheduling event
occurs (a task finishes, a new task is released, etc.) the queue will be searched for the process
closest to its deadline. This process is the next to be scheduled for execution.
When the system is overloaded, the set of processes that will miss deadlines is largely unpredictable
(it will be a function of the exact deadlines and the time at which the overload occurs). This is a
considerable disadvantage to a real-time systems designer. The algorithm is also difficult to implement
in hardware, and there is a tricky issue of representing deadlines in different ranges (deadlines must be
rounded to finite amounts, typically a few bytes at most). If modular arithmetic is used to calculate
future deadlines relative to now, the field storing a future relative deadline must accommodate at least
the value of (("duration" {of the longest expected time to completion} * 2) + "now").
Therefore EDF is not commonly found in industrial real-time computer systems.
Instead, most real-time computer systems use fixed-priority scheduling (usually rate-monotonic
scheduling). With fixed priorities, it is easy to predict that overload conditions will cause the
low-priority processes to miss deadlines, while the highest-priority process will still meet its deadline.
EDF is an optimal scheduling algorithm on preemptive uniprocessors, in the following sense: if a
collection of independent jobs, each characterized by an arrival time, an execution requirement, and a
deadline, can be scheduled (by any algorithm) in a way that ensures all the jobs complete by their
deadlines, then EDF will also schedule this collection of jobs so that they all complete by their deadlines.

Q.6.31:

Interrupts typically arrive sporadically. When an interrupt arrives, interrupt handling


is serviced (i.e., executed on the processor) immediately and in a nonpreemptable fashion. The
effect of interrupt handling on the schedulability of periodic tasks can be accounted for in the
same manner as blocking time. To illustrate this, consider a system of four tasks: T1 = (2.5, 0.5),
T2 = (4, 1), T3 = (10, 1), and T4 = (30, 6). Suppose that there are two streams of interrupts. The
interrelease time of interrupts in one stream is never less than 9, and that of the other stream is
never less than 25. Suppose that it takes at most 0.2 units of time to service each interrupt. Like
the periodic tasks, the interrupt-handling tasks (i.e., the streams of interrupt-handling jobs) are given
fixed priorities. They have higher priorities than the periodic tasks, and the one with a higher rate
(i.e., shorter minimum interrelease time) has a higher priority.
a.
What is the maximum amount of time each job in each periodic task may be delayed from
completion by interrupts?
Sol: If an interrupt arrives while a job or a higher-priority job is running, the job's
completion time is delayed by the interrupt service time, eint. The maximum delay for Ti
occurs when all tasks with higher priority than Ti release jobs at the same time as Ti.
The interrupts behave like higher-priority tasks.
b1 = ⌈e1/pint,1⌉·eint,1 + ⌈e1/pint,2⌉·eint,2 = ⌈0.5/9⌉·0.2 + ⌈0.5/25⌉·0.2 = 0.4
w2(t) = 1 + 0.5·⌈t/2.5⌉ + 0.2·⌈t/9⌉ + 0.2·⌈t/25⌉ = t
W2 = 1.9

The amount of time taken by interrupt handlers between the release of the first job in
T2 and its completion time is:
b2 = 0.2·⌈1.9/9⌉ + 0.2·⌈1.9/25⌉ = 0.4

w3(t) = 1 + 0.5·⌈t/2.5⌉ + 1·⌈t/4⌉ + 0.2·⌈t/9⌉ + 0.2·⌈t/25⌉ = t
W3 = 3.4
b3 = 0.2·⌈3.4/9⌉ + 0.2·⌈3.4/25⌉ = 0.4

w4(t) = 6 + 0.5·⌈t/2.5⌉ + 1·⌈t/4⌉ + 1·⌈t/10⌉ + 0.2·⌈t/9⌉ + 0.2·⌈t/25⌉ = t
W4 = 17.1
b4 = 0.2·⌈17.1/9⌉ + 0.2·⌈17.1/25⌉ = 0.4 + 0.2 = 0.6

b.
Let the maximum delay suffered by each job in T i in part (a) be bi, for i = 1, 2, 3, and 4.
Compute the time-demand functions of the tasks and use the time-demand analysis method to
determine whether every periodic task Ti can meet all its deadlines if Di is equal to pi.
Sol: (This is a bit redundant given part (a) above.)
w1(t) = b1 + e1 = 0.4 + 0.5 = 0.9 = t
W1 = 0.9 ≤ 2.5, so T1 is schedulable

w2(t) = 0.4 + 1 + 0.5·⌈t/2.5⌉ = t
W2 = 1.9 ≤ 4, so T2 is schedulable

w3(t) = 0.4 + 1 + 0.5·⌈t/2.5⌉ + 1·⌈t/4⌉ = t
W3 = 3.4 ≤ 10, so T3 is schedulable

w4(t) = 0.6 + 6 + 0.5·⌈t/2.5⌉ + 1·⌈t/4⌉ + 1·⌈t/10⌉ = t
W4 = 17.1 ≤ 30, so T4 is schedulable

c.
In one or two sentences, explain why the answer you obtained in (b) about the
schedulability of the periodic tasks is correct and the method you use works not only for this system
but also for all independent preemptive periodic tasks.

Sol: The interrupt behavior described in the problem is the same as the behavior of a
high-priority periodic task. Therefore, the amount of time taken handling interrupts can
be analyzed with the same method as high-priority tasks.
