Real‐Time Systems: Model QnA 2010

NPTEL Course: Real‐Time Systems


Model Questions and Answers
1st June, 2010

PROF. RAJIB MALL,


DEPT. OF CSE,
IIT KHARAGPUR.

Contents
Lectures 1-5: Introduction to Real-Time Systems
Lectures 6-12: Real-Time Task Scheduling
Lectures 13-16: Resource Sharing and Dependencies among Real-Time Tasks
Lectures 17-20: Scheduling Real-Time Tasks in Multiprocessor and Distributed Systems
Lectures 21-30: Real-Time Operating Systems
Lectures 31-39: Real-Time Communication
Lecture 40: Real-Time Databases

Lectures 1‐5: Introduction to Real‐Time Systems

Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.

i. A hard real‐time application consists of only hard real‐time tasks.


FALSE. A hard real‐time application may also contain several non‐real‐time
tasks such as logging activities, etc.

ii. Every safety‐critical real‐time system contains a fail‐safe state.


FALSE. Having fail‐safe states in safety‐critical real‐time systems is
meaningless because failure of a safety‐critical system can lead to loss of
lives, cause damage, etc. E.g.: a navigation system on‐board an aircraft.

iii. A deadline constraint between two stimuli is a behavioral


constraint on the environment of the system.
TRUE. It is a behavioral constraint since the constraint is imposed on the
second stimulus event.

iv. Hardware fault‐tolerance techniques are easily adaptable to


provide software fault‐tolerance.
FALSE. Hardware fault‐tolerance is usually achieved using redundancy
techniques. However, the property of statistical correlation of failures for
software renders the technique ineffective.

v. A good algorithm for scheduling of hard real‐time tasks tries to


complete each task in the shortest possible time.
FALSE. A scheduling algorithm for hard real‐time tasks is only concerned
with completing the tasks before the deadlines. Unlike desktop

applications, there is no benefit in completing each task in the shortest possible time.

vi. All hard real‐time systems usually are safety‐critical in nature.


FALSE. Not all hard real‐time systems are safety‐critical in nature. E.g.:
computer games, etc.

vii. It is ensured by the performance constraints on a real-time system that the environment of the system is well-behaved.
FALSE. It is the behavioral constraints on a real-time system that ensure that the environment of the system is well-behaved; performance constraints ensure that the computer system performs satisfactorily.

viii. Soft real‐time tasks do not have any associated time bounds.
FALSE. Soft real‐time tasks also have time bounds associated with them.
Instead of absolute values of time, the constraints are expressed in terms of
the average response times required.

ix. The objective of any good hard real‐time task scheduling


algorithm is to minimize average task response times.
FALSE. A good hard real‐time task scheduling algorithm is concerned with
scheduling the tasks such that all of them can meet their respective
deadlines.

x. The goal of any good real‐time operating system to complete


every hard real‐time task as ahead of its deadline as possible.
FALSE. A good real‐time operating system should try and complete the
tasks such that they meet their respective deadlines.

Q2: With a suitable example explain the difference between the


traditional notion of time and real‐time.

Ans: In a real‐time application, the notion of time stands for the absolute time
which is quantifiable. In contrast, logical time, used in most general categories of applications, deals with a qualitative notion of time and is expressed
using event ordering relations. For example, consider the following part of the
behavior of library automation software used to automate the bookkeeping
activities of a college library: “After a query book command is given by the user,
the details of all the matching books are displayed by the software”.

Q3: What is the difference between a performance constraint and a


behavioral constraint in a real‐time system?

Ans: Performance constraints are the constraints that are imposed on the
response of the system. Behavioral constraints are the constraints that are
imposed on the stimuli generated by the environment. Behavioral constraints
ensure that the environment of a system is well‐behaved, whereas performance
constraints ensure that the computer system performs satisfactorily.

Q4: Explain the important differences between hard, firm and soft real‐
time systems.

Ans: A hard real‐time task is one that is constrained to produce its results within
certain predefined time bounds. The system is considered to have failed
whenever any of its hard real‐time tasks does not produce its required results
before the specified time bound. Unlike a hard real‐time task, even when a firm
real‐time task does not complete within its deadline, the system does not fail. The
late results are merely discarded. In other words, the utility of the results
computed by a firm real-time task becomes zero after the deadline. Soft real-time
tasks also have time bounds associated with them. However, unlike hard and firm
real‐time tasks, the timing constraints on soft real‐time tasks are not expressed as
absolute values. Instead, the constraints are expressed in terms of the average
response times required.

Q5: It is difficult to achieve software fault tolerance as compared to


hardware fault tolerance. Why?

Ans: The popular technique to achieve hardware fault-tolerance is redundancy. However, it is much harder to achieve software fault-tolerance than hardware fault-tolerance. A few approaches modeled on the redundancy techniques used for hardware have been proposed for software, but they are largely ineffective. The reason is the statistical correlation of failures for software: the different versions of a software component show similar failure patterns, i.e., they fail due to identical reasons. Moreover, fault-tolerance using redundancy can be applied to real-time tasks only if they have sufficiently large deadlines.

Q6: What is a "fail-safe" state? Since safety-critical systems do not have a fail-safe state, how is safety guaranteed?

Ans: A fail-safe state of a system is one in which, if the system enters it upon failure, no damage results. All traditional non-real-time systems have one or more fail-safe states.

However, safety-critical systems do not have a fail-safe state. A safety-critical system is one whose failure can cause severe damage, such as loss of life. This implies that safety can be ensured only by making the system highly reliable; that is, the reliability requirement of a safety-critical system is very high.

Q7: Give an example of an extremely safe but unreliable system.

Ans: Document-processing software is an example of a system that is safe but may be unreliable. The software may be extremely buggy and can fail many times during use. But a failure of the software does not usually cause any significant damage or financial loss.

Q8: List the different types of timing constraints that can occur in a
real‐time system?

Ans: The different timing constraints associated with a real‐time system can be
broadly classified into the following categories:

1) Performance constraints
2) Behavioral constraints

Each of the performance and behavioral constraints can further be classified into
the following types:

1) Delay constraint
2) Deadline constraint
3) Duration constraint

Q9: In the following, the partial behavior of a telephone system is given.

a) If you press the button of the handset for less than 15 s, it connects to the local operator. If you press the button for any duration lasting between 15 and 30 s, it connects to the international operator. If you keep the button pressed for more than 30 s, then on releasing it the system produces the dial tone.
b) Once the receiver of the handset is lifted, the dial tone must be
produced by the system within 20 s, otherwise a beeping sound is
produced until the handset is replaced.

Draw the EFSM models for the telephone system.

Ans:
(a) The EFSM model for the problem is shown in the below figure.
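
The original figure is not reproduced in this extract. As a rough stand-in, the following Python sketch encodes the transition logic of part (a); the state and action names are illustrative assumptions, not taken from the course material.

    def button_release_action(press_duration_s):
        """Toy encoding of the behavior described in part (a): the outcome depends on
        how long the handset button was held before release."""
        if press_duration_s < 15:
            return "CONNECT_LOCAL_OPERATOR"
        elif press_duration_s <= 30:
            return "CONNECT_INTERNATIONAL_OPERATOR"
        else:
            return "DIAL_TONE"

    for duration in (5, 20, 45):
        print(duration, "->", button_release_action(duration))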

(b) The EFSM model for the problem is shown in the below figure.

Lectures 6‐12: Real‐Time Task Scheduling

Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.

i. Cyclic schedulers do not require storing a precomputed schedule


unlike table‐driven schedulers.
FALSE. A cyclic scheduler also needs to store a precomputed schedule. However, the precomputed schedule needs to be stored only for one major cycle, since each task in the task set repeats identically in every major cycle.

ii. The upper bound on achievable utilization improves as the


number of tasks in the system being developed increases when
RMA is used for scheduling a set of hard real‐time periodic tasks.
FALSE. Under RMA, the achievable CPU utilization falls as the number of
tasks in the system increases. The utilization is maximum when there is only
one task in the system. As the number of tasks approaches infinity, the utilization bound falls to ln 2 (about 0.69).

iii. For a non‐preemptive operating system, RMA is an optimal static


priority scheduling algorithm for a set of periodic real‐time tasks.
FALSE. RMA is preemptive in nature, and hence is meaningless on a non‐
preemptive operating system.

iv. A pure table‐driven scheduler is not as proficient as a cyclic


scheduler for scheduling a set of hard real‐time tasks.
TRUE. In a cyclic scheduler, the timer is set only once at the application
initialization time, and interrupts only at the frame boundaries. But in a
table‐driven scheduler, a timer has to be set every time a task starts to run.

Prof. R. Mall, IIT Kharagpur Page 9


Real‐Time Systems: Model QnA 2010

This represents a significant overhead and results in degraded system


performance.
v. While scheduling a set of hard real‐time periodic tasks using a
cyclic scheduler, if more than one frame satisfies all the
constraints on frame size then the largest of these frame sizes
should be chosen.
FALSE. For a given task set, it is possible that more than one frame size
satisfies all the three constraints. In such cases, it is better to choose the
shortest frame size. This is because the schedulability of a task improves as more frames become available over a major cycle.

vi. EDF possesses good transient overload handing capability.


FALSE. EDF is not good in handling transient overload conditions. This is a
serious drawback in the EDF scheduling algorithm. When EDF is used to
schedule a set of periodic real‐time tasks, a task overshooting its
completion time can cause some other task(s) to miss their deadlines.

vii. Data dependencies determine the precedence ordering among a


set of tasks.
FALSE. Precedence ordering among a set of tasks can also be achieved
using control dependence, for example, through passing of messages and
events.

viii. Scheduling decisions are made only at the arrival and completion
of tasks in a non‐preemptive event‐driven task scheduler.
TRUE. In event‐driven scheduling, the scheduling points are defined by task
completion and task arrival times. This is because during the course of
execution of a task on the CPU, the task cannot be preempted.

Prof. R. Mall, IIT Kharagpur Page 10


Real‐Time Systems: Model QnA 2010

ix. For uniprocessor systems, determining an optimal schedule for a


set of independent periodic hard real‐time tasks without any
resource‐sharing constraints under static priority conditions is an
NP‐complete problem.
FALSE. Optimal scheduling algorithms on uniprocessors already exist (e.g.,
RMA for static priority, and EDF for dynamic priority). However, finding an
optimal schedule for a set of independent periodic hard real‐time tasks
without any resource‐sharing constraints under static priority conditions on
a multiprocessor is an NP‐complete problem.

x. A set of periodic real‐time tasks scheduled on a uniprocessor


system using RMA scheduling show similar completion time jitter.
TRUE. Completion time jitters are caused by the basic nature of RMA
scheduling, which schedules a task at the earliest opportunity at which it can
run. Thus, the response time of a task depends on how many higher priority
tasks arrive (or were waiting) during the execution of the task.

Q2: Explain the scheduling points of a task scheduling algorithm. How are the scheduling points determined in (i) clock-driven, (ii) event-driven, and (iii) hybrid schedulers?
Ans: The scheduling points of a scheduler are the points on the time line at which
the scheduler makes decisions regarding which task is to be run next. In a clock‐
driven scheduler, the scheduling points are defined at the time instants marked
by interrupts generated by a periodic timer. The scheduling points in an event‐
driven scheduler are generated by occurrence of certain events. For hybrid
schedulers, the scheduling points are defined both through the clock interrupts
and event occurrences.

Q3: What are the distinguishing characteristics of periodic, aperiodic,


and sporadic real‐time tasks?

Ans: A periodic task is one that repeats after a certain fixed time interval. The
precise time instants at which periodic tasks recur are usually demarcated by
clock interrupts. For this reason, periodic tasks are also referred to as clock‐driven
tasks.

A sporadic task is one that recurs at random instants. Each sporadic task is
characterized by a parameter gi which implies that two instances of the sporadic
task have to be separated by a minimum time of gi. An aperiodic task is in many
ways similar to a sporadic task. An aperiodic task can arise at random instants. In
case of aperiodic tasks, the minimum separation gi between two consecutive
instances can be 0. Also, the deadline for an aperiodic task is expressed either as an average value or statistically.

Q4: What is understood by jitter associated with a periodic task?


Mention techniques by which jitter can be overcome.
Ans: Jitter is the deviation of a periodic task from its strict periodic behavior. The
arrival time jitter is the deviation of the task from arriving at the precise periodic
time of arrival. It may be caused by imprecise clocks, or other factors such as
network congestion. Similarly, completion time jitter is the deviation of the completion of a task from precise periodic points. The completion time jitter may be caused by the specific scheduling algorithm employed, which takes up a task for scheduling as per convenience and the load at that instant, rather than scheduling at strict time instants.

Real‐time programmers commonly handle tasks with tight completion time jitter
requirements using any one of the following two techniques:

- If only one or two actions (tasks) have tight jitter requirements, these actions are assigned very high priority. This method works well only when there are a very small number of actions (tasks). When it is used in an application in which the tasks are barely schedulable, it may result in some tasks missing their respective deadlines.
- If jitter must be minimized for an application that is barely schedulable, each task needs to be split into two: one which computes the output but
does not pass it on, and one which passes the output on. This method
involves setting the second task’s priority to very high values and its period
to be the same as that of the first task. An action scheduled with this
approach will run one cycle behind schedule, but the tasks will have tight
completion time jitter.

Q5: Can we consider EDF as a dynamic priority scheduling algorithm for


real‐time tasks?
Ans: EDF scheduling does not directly require any priority value to be computed for any task at any time, and in fact has no explicit notion of task priority. Tasks are scheduled solely on the proximity of their deadlines. However, the longer a task waits in the ready queue, the higher is its chance (probability) of being taken up for scheduling. This can be considered a virtual priority value associated with the task which keeps increasing with time until the task is taken up for scheduling. In this sense, EDF can be considered a dynamic priority scheduling algorithm.
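
As a small illustration of this behavior, the sketch below shows an EDF dispatch decision in Python; the task tuples and their deadline values are made up for the example.

    def edf_pick(ready_tasks):
        """ready_tasks: list of (name, absolute_deadline). EDF simply runs the task
        whose absolute deadline is nearest; no explicit priority value is stored."""
        return min(ready_tasks, key=lambda t: t[1])

    print(edf_pick([("T1", 150), ("T2", 40), ("T3", 60)]))   # ('T2', 40)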

Q6: A real‐time system consists of three tasks T1, T2, and T3. Their
characteristics have been shown in the following table.
Task   Phase (ms)   Execution Time (ms)   Relative Deadline (ms)   Period (ms)
T1     20           10                    20                       20
T2     40           10                    50                       50
T3     70           20                    80                       80

Suppose the tasks are to be scheduled using a table‐driven scheduler.


Compute the length of time for which the schedules have to be stored
in the precomputed schedule table of the scheduler.
Ans: In table‐driven scheduling, the size of the schedule that needs to be stored is
equal to the LCM of the periods of the individual tasks. This value is called the
major cycle of the set of tasks. The tasks in the schedule will automatically repeat

after every major cycle. The major cycle of a set of tasks is LCM of the periods
even when the tasks have arbitrary phasing.

So, major cycle = LCM(20, 50, 80)=400 ms.
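
The same computation can be sketched in a few lines of Python (purely illustrative):

    from math import gcd
    from functools import reduce

    def major_cycle(periods):
        """Major cycle of a periodic task set = LCM of the task periods."""
        return reduce(lambda a, b: a * b // gcd(a, b), periods)

    print(major_cycle([20, 50, 80]))   # 400 (ms)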

Q7: Using a cyclic real‐time scheduler, suggest a suitable frame size that
can be used to schedule three periodic tasks T1, T2, and T3 with the
following characteristics:
Task   Phase (ms)   Execution Time (ms)   Relative Deadline (ms)   Period (ms)
T1     0            20                    100                      100
T2     0            20                    80                       80
T3     0            30                    150                      150

Ans: For the given task set, an appropriate frame size is the one that satisfies all
the required constraints. Let F be the appropriate frame size.

Constraint 1: F >= max{execution time of a task}

=> F >= 30

Constraint 2: The major cycle M for the given task set T is

M= LCM(100, 80, 150) = 1200

M should be an integral multiple of F. This consideration implies that F can take


on the values 30, 40, 60, 100, 120, 150, 200, 300, 400, 600, 1200 and still satisfy
Constraint 1.

Constraint 3: To satisfy this constraint, we need to check whether a selected frame size F satisfies the inequality 2F - GCD(F, pi) <= di for each task Ti.

Let us first try for F = 30.

T1: 2*30 - GCD(30, 100) <= 100

=> 60 - 10 <= 100
=> Satisfied

T2: 2*30 - GCD(30, 80) <= 80

=> 60 - 10 <= 80
=> Satisfied

T3: 2*30 - GCD(30, 150) <= 150

=> 60 - 30 <= 150
=> Satisfied

Therefore, F=30 is a suitable frame size. We can carry out our computations for
the other possible frame sizes, i.e. F = 40, 60, 100, 120, 150, 200, 300, 400, 600,
1200, to find out other possible solutions. However, it is better to choose the
shortest frame size.

Q8: Determine whether the following set of periodic real-time tasks is schedulable on a uniprocessor using RMA.
Task   Start Time (ms)   Processing Time (ms)   Period (ms)   Deadline (ms)
T1     20                25                     150           100
T2     40                7                      40            40
T3     60                10                     60            50
T4     25                10                     30            20

Ans: Let us first compute the total CPU utilization achieved due to the given tasks.
U = Σ(i=1..4) ui = 25/150 + 7/40 + 10/60 + 10/30 = 0.84 < 1
Therefore, the necessary condition is satisfied.

The sufficiency condition is given by

Σ(i=1..n) ui <= n(2^(1/n) - 1)

Here, 4(2^(1/4) - 1) = 0.76, whereas Σ ui = 0.84 > 0.76.

=> Not satisfied.

Although the given set of tasks fails Liu and Layland's test, that test is only a sufficient (pessimistic) condition, so we need to carry out Lehoczky's test.

We need to reorder the tasks according to their decreasing priorities.

Task   Start Time (ms)   Processing Time (ms)   Period (ms)   Deadline (ms)
T4     25                10                     30            20
T2     40                7                      40            40
T3     60                10                     60            50
T1     20                25                     150           100

Testing for task T4: Since e4 <= d4, T4 would meet its first deadline.

Testing for task T2: 7 + ⌈40/30⌉*10 = 7 + 20 = 27 <= 40

=> Satisfied.
=> Task T2 would meet its first deadline.

Testing for task T3: 10 + ⌈60/40⌉*7 + ⌈60/30⌉*10 = 10 + 14 + 20 = 44 <= 50

=> Satisfied.
=> Task T3 would meet its first deadline.

Testing for task T1: 25 + ⌈150/60⌉*10 + ⌈150/40⌉*7 + ⌈150/30⌉*10 = 25 + 30 + 28 + 50 = 133 > 100

=> Not satisfied.
=> Therefore, task T1 would fail to meet its first deadline.

Hence, the given task set is not RMA schedulable.
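
The two tests used above can also be carried out programmatically. The Python sketch below applies the Liu-Layland utilization test and then the same first-deadline inequality used in this answer; it is an illustration only, and the function and variable names are assumptions.

    from math import ceil

    def liu_layland_ok(tasks):
        """tasks: list of (execution_time e, period p, deadline d)."""
        n = len(tasks)
        u = sum(e / p for e, p, _ in tasks)
        return u <= n * (2 ** (1 / n) - 1), round(u, 2)

    def first_deadline_ok(tasks):
        """Apply the same inequality used above: a task's execution time plus the
        ceiling-interference of all higher-priority (shorter-period) tasks must not
        exceed its deadline."""
        ordered = sorted(tasks, key=lambda t: t[1])      # shortest period = highest priority
        verdicts = {}
        for i, (e, p, d) in enumerate(ordered):
            demand = e + sum(ceil(p / pj) * ej for ej, pj, _ in ordered[:i])
            verdicts["period %d" % p] = demand <= d
        return verdicts

    tasks = [(25, 150, 100), (7, 40, 40), (10, 60, 50), (10, 30, 20)]
    print(liu_layland_ok(tasks))      # (False, 0.84): the pessimistic utilization test fails
    print(first_deadline_ok(tasks))   # the task with period 150 misses its first deadline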

Lectures 13‐16: Resource Sharing and Dependencies


among Real‐Time Tasks

Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.

i. RMA is optimal for scheduling access of several hard real‐time


periodic tasks to a certain shared critical resource.
FALSE. RMA schedulers impose no constraints on the orders in which
various tasks execute. Therefore, the schedules produced by RMA might
violate the constraints imposed due to task dependencies.

ii. Unless a suitable resource‐sharing protocol is used, even the


lowest priority task in a real‐time system may suffer from
unbounded priority inversions.
FALSE. Unbounded priority inversion occurs when a higher priority task
waits for a lower priority task to release a resource it needs, and meanwhile
the intermediate priority tasks preempt the lower priority task from CPU
usage repeatedly. The lowest priority task in a system therefore cannot suffer unbounded priority inversion, since there is no still lower priority task for it to wait on.

iii. Scheduling a set of real‐time tasks for access to a set of non‐


preemptable resources using PIP results in unbounded priority
inversions for tasks.
FALSE. PIP allows real-time tasks to share critical resources without letting them incur unbounded priority inversions.

iv. A task can undergo priority inversion for some duration under PCP
even if it does not require any resource.

TRUE. Under PCP, inheritance‐related inversion can occur. In such


scenarios, an intermediate priority task not needing a critical resource is
kept waiting.

v. PCP is an efficient protocol to share a set of serially reusable preemptable resources among a set of real-time tasks.
FALSE. PCP is not well‐suited for sharing a set of serially reusable
preemptable resources among a set of real‐time tasks. EDF or RMA are
more efficient.

vi. A separate queue is maintained for the waiting tasks for each
critical resource in HLP.
FALSE. Unlike PIP, HLP does not maintain a separate queue for each critical
resource. The reason is that whenever a task acquires a resource, it
executes at the ceiling priority of the resource, and the other tasks that
may need this resource do not even get a chance to execute and request
for the resource.

vii. HLP overcomes deadlocks while sharing critical resources among a


set of real‐time tasks.
TRUE. HLP overcomes the deadlock problem possible with PIP.

viii. Under HLP, tasks are single‐blocking.


TRUE. When HLP is used for resource sharing, once a task gets a resource
required by it, it is not blocked any further. This means that when a task
acquires one resource, all the other resources required by it must be free.

ix. Under PCP, the highest priority task does not suffer any inversions
when sharing certain critical resources.
FALSE. Even under PCP, the highest priority task can suffer from direct
and avoidance‐related inversions.

x. Under PCP, the lowest priority task does not suffer any inversions
when sharing certain critical resources.
TRUE. In PCP, the lowest priority task does not suffer from any of direct,
inheritance‐related, or avoidance‐related inversions.

Q2: Explain priority inversion in the context of real‐time scheduling?

Ans: When a lower priority task is already holding a resource, a higher priority
task needing the same resource has to wait and cannot make progress with its
computations. The higher priority task remains blocked until the lower priority
task releases the required non‐preemptable resource. In this situation, the higher
priority task is said to undergo simple priority inversion on account of the lower
priority task for the duration it waits while the lower priority task keeps holding
the resource.

Q3: What can be the types of priority inversions that a task might
undergo on account of a lower priority task under PCP?

Ans: Tasks sharing a set of resources using PCP may undergo three important
types of priority inversions: direct inversion, inheritance‐related inversion, and
avoidance-related inversion.

Direct inversion occurs when a higher priority task waits for a lower priority task
to release a resource that it needs. Inheritance‐related inversion occurs when a
lower priority task is holding a resource and a higher priority task is waiting for it.
Then, the priority of the lower priority task is raised to that of the waiting higher
priority task by the inheritance clause of the PCP. As a result, the intermediate
priority tasks not needing the resource undergo inheritance‐related inversion.

In PCP, when a task requests a resource, its priority is checked against the current system ceiling (CSC). The
requesting task is granted use of the resource only when its priority is greater
than CSC. Therefore, even when a resource that a task is requesting is idle, the
requesting task may be denied access to the resource if the requesting task’s
priority is less than CSC. A task, whose priority is greater than the currently

executing task, but lesser than the CSC and needs a resource that is currently not
in use, is said to undergo avoidance‐related inversion.
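
A toy Python sketch of the grant rule discussed above is given below. The class and data structures are illustrative assumptions; a real PCP implementation would also handle priority inheritance and the ceiling update when resources are released.

    class PCP:
        """Toy illustration of the PCP grant rule: a request is granted only if the
        requesting task's priority is higher than the current system ceiling (CSC),
        or the task itself holds the resource that defines the CSC."""
        def __init__(self, ceilings):           # ceilings: {resource: ceiling priority}
            self.ceilings = ceilings
            self.held = {}                      # resource -> holding task

        def system_ceiling(self):
            return max((self.ceilings[r] for r in self.held), default=-1)

        def request(self, task, task_priority, resource):
            csc = self.system_ceiling()
            defines_csc = any(self.held[r] == task and self.ceilings[r] == csc
                              for r in self.held)
            if resource not in self.held and (task_priority > csc or defines_csc):
                self.held[resource] = task
                return True                     # granted
            return False                        # blocked (direct or avoidance-related inversion)

    pcp = PCP({"R1": 10, "R2": 7})
    print(pcp.request("T_low", 3, "R2"))        # True: nothing held yet, so CSC is undefined
    print(pcp.request("T_mid", 5, "R1"))        # False: 5 <= CSC (7), even though R1 is free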

Q4: How are deadlocks, unbounded priority inversions, and chain blocking prevented using PCP?

Ans: Deadlocks occur only when different (more than one) tasks hold part of
each other’s required resources at the same time, and then they request for the
resources being held by each other. But under PCP, when one task is executing
with some resources, any other task cannot hold a resource that may ever be
needed by this task. That is, when a task is granted one resource, all its required
resources must be free. This prevents the possibility of any deadlock.

PCP overcomes unbounded priority inversions because whenever a high priority


task waits for some resources which are currently being used by a low priority
task, then the executing lower priority task is made to inherit the priority of the
higher priority task. So, the intermediate priority tasks cannot preempt lower
priority task from CPU usage. Therefore, unbounded priority inversions cannot
occur in PCP.

Tasks are single blocking under PCP. That means, under PCP a task can undergo at
most one inversion during its execution. This feature of PCP prevents chain
blocking.

Q5: Can PIP and PCP be considered as greedy algorithms?

Ans: In PIP, whenever a request for a resource is made, the resource will be
allocated to the requesting task if it is free. However, in PCP a resource may not
be granted to a requesting task even if the resource is free. This strategy in PCP helps in avoiding potential deadlocks. Therefore, PIP can be considered a greedy algorithm, whereas PCP cannot.

Q6: The following table shows the details of tasks in a real‐time system.
The tasks have zero phasing and repeat with a period of 90 ms.
Determine a feasible schedule to be used by a table‐driven scheduler.

Task   Computation Time ei (ms)   Deadline di (ms)   Dependency
T1     30                         90                 -
T2     15                         40                 T1, T3
T3     20                         40                 T1
T4     10                         70                 T2

Ans:
Step 1: Sort the tasks in increasing order of their deadlines: T2, T3, T4, T1.

Step 2: Schedule the tasks as late as possible without violating the constraints, working backwards from the deadlines: T2 in [5, 20], T3 in [20, 40], T4 in [50, 60], T1 in [60, 90].

Step 3: Move the tasks as early as possible without altering the order of the schedule: T2 in [0, 15], T3 in [15, 35], T4 in [35, 45], T1 in [45, 75].
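
A rough Python sketch of this two-pass construction (place each task as late as its deadline allows, then slide everything as early as possible) is shown below. It is an illustration only: it ignores the dependency column and breaks deadline ties arbitrarily, so the relative order of T2 and T3 may differ from the schedule above.

    def table_driven_schedule(tasks):
        """tasks: list of (name, execution_time, deadline), all released at time 0."""
        # Pass 1: as-late-as-possible placement, largest deadline first.
        latest = float("inf")
        alap = []
        for name, e, d in sorted(tasks, key=lambda t: t[2], reverse=True):
            finish = min(d, latest)
            start = finish - e
            if start < 0:
                raise ValueError("no feasible schedule for " + name)
            alap.append((start, finish, name))
            latest = start
        # Pass 2: slide every task as early as possible, keeping the same order.
        t, schedule = 0, []
        for start, finish, name in sorted(alap):
            schedule.append((name, t, t + (finish - start)))
            t += finish - start
        return schedule

    tasks = [("T1", 30, 90), ("T2", 15, 40), ("T3", 20, 40), ("T4", 10, 70)]
    print(table_driven_schedule(tasks))
    # [('T3', 0, 20), ('T2', 20, 35), ('T4', 35, 45), ('T1', 45, 75)] -- one feasible schedule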

Lectures 17‐20: Scheduling Real‐Time Tasks in


Multiprocessor and Distributed Systems

Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.

i. By extending EDF, we can generate optimal scheduling schemes


for hard real‐time tasks in multiprocessor computing
environments.
FALSE. Determining an optimal schedule for a set of real‐time tasks on a
multiprocessor or a distributed system is an NP‐hard problem.

ii. It is possible to keep the good clocks of a distributed system


having 12 clocks synchronized using distributed clock
synchronization if only two of the clocks are byzantine.
TRUE. If less than one-third of the clocks are bad or byzantine, then we can have the good clocks approximately synchronized using the distributed clock synchronization technique. Here, only 2 of the 12 clocks are byzantine, which is less than one-third.

iii. Task allocation using bin‐packing algorithm along with task


scheduling at the individual nodes using the EDF algorithm is
optimal in a distributed hard real‐time computing environment.
FALSE. Finding an optimal schedule for a set of independent periodic hard
real‐time tasks without any resource‐sharing constraints under static
priority conditions on a multiprocessor is an NP‐complete problem. Most
scheduling algorithms in a distributed hard real‐time computing
environment are based on heuristics.

iv. Task allocation is done statically in the focused addressing and


bidding algorithm in distributed real‐time systems.

FALSE. The focused addressing and bidding algorithm used for allocating tasks in distributed real-time systems is a dynamic allocation technique; allocation decisions are made at run time as tasks arrive.

v. Dynamic task arrivals can efficiently be handled using the focused


addressing and bidding algorithm in multiprocessor‐based real‐
time systems.
TRUE. Focused addressing and bidding is a dynamic task allocation technique (see (iv) above): tasks arriving at an overloaded processor at run time can be transferred to a focused processor or to a suitable bidder among the lightly loaded processors.

vi. The communication overhead incurred due to Buddy algorithms is


less compared to focused addressing and bidding algorithms in
multiprocessor real‐time task scheduling.
TRUE. In the buddy algorithm, a processor broadcasts its status only when it changes between overloaded and underloaded, and only to its buddy set, whereas focused addressing and bidding maintains status tables through periodic broadcasts. The communication overhead of the buddy algorithm is therefore lower.

vii. In a distributed system, allocating a set of real‐time tasks can be


done optimally using the bin‐packing algorithm.
FALSE. Determining an optimal schedule for a set of real‐time tasks on a
multiprocessor or a distributed system is an NP‐hard problem.

viii. A simple internal synchronization scheme using a time server


makes the synchronization time incrementally delayed by the
average message transmission time after every synchronization
interval in a distributed system where the message
communication time is non‐zero and significant.
FALSE. The synchronization time will not be incrementally delayed by the
average message transmission time. The synchronization time will,
however, be delayed by the message jitter.

Q2: Why are algorithms which can satisfactorily schedule real‐time


tasks on multiprocessors not satisfactory to schedule real‐time tasks on
distributed systems?

Ans: A basic difference between multiprocessor and distributed systems is sharing


of physical memory. Multiprocessor systems (aka. tightly‐coupled systems) are
characterized by the existence of a shared physical memory. In contrast, a
distributed system (aka. loosely‐coupled system) is devoid of any shared physical
memory. In a tightly‐coupled system, the interprocess communication is
inexpensive and is achieved through read and writes to the shared memory.
However, the same is not true for distributed systems where inter‐task
communication times are comparable to the task execution times. Due to this, a
multiprocessor system may use a centralized scheduler/dispatcher whereas a
distributed system cannot.

Q3: Why is it required to synchronize the clocks in a distributed real-time system? Compare the advantages and disadvantages of centralized and distributed clock synchronization.

Ans: Besides the traditional use of clocks in a computer system, clocks are also
used for determining timeouts and time stamping. However, different clocks in a
distributed system tend to diverge since it is almost impossible to have two clocks
that run exactly at the same speed. This lack of synchrony among clocks is
expressed as the clock skew and determines the attendant drifts of the clocks
with time. Lack of synchrony and drift among clocks makes the time stamping and
timeout operations in a distributed real‐time system meaningless. Therefore, to
have meaningful timeouts and time stamping spanning more than one node of a
distributed system, the clocks need to be synchronized.

The main problem with the centralized clock synchronization scheme is that it is susceptible to a single point of failure. Any failure of the master clock causes
breakdown of the synchronization scheme. Distributed clock synchronization
overcomes this handicap by periodically exchanging the clock readings of each
node. However, distributed clock synchronization needs to take care of the fact
that some nodes may have bad or byzantine clocks. A disadvantage of the
distributed clock synchronization is the larger communication overhead that is
incurred in synchronizing clocks as compared to the centralized scheme.

Q4: Modern commercial real-time operating systems use gigahertz clocks, yet the clock resolution provided to applications is rarely finer than a few microseconds. Why?

Ans: With the current technology, it is possible to have clocks of nanosecond granularity. Still, the time granularities provided by modern RTOSs are of the order of microseconds or even milliseconds. The reason can be attributed to the following. The kernel maintains a software clock, which is updated after each interrupt of the hardware clock; a task reads the time using the POSIX function clock_gettime(). The finer the resolution, the more frequent the hardware clock interrupts, and the larger the fraction of processor time spent servicing them. Moreover, the response time of the clock_gettime() call is itself not deterministic and can vary by more than a few hundred nanoseconds, so any software clock resolution finer than this is not meaningful.
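
On a POSIX system this can be observed from user space. The small Python sketch below (using Python's wrappers for clock_getres() and clock_gettime(), available only on Unix-like systems) is purely illustrative:

    import time

    # Resolution the OS advertises for the POSIX real-time clock.
    print("advertised resolution:", time.clock_getres(time.CLOCK_REALTIME), "s")

    # Spread of back-to-back readings of the monotonic clock.
    samples = [time.clock_gettime(time.CLOCK_MONOTONIC) for _ in range(1000)]
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    print("min/max gap between successive readings:", min(deltas), max(deltas), "s")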

Q5: With respect to the communication overhead and the scheduling


proficiency, discuss the relative merits of the focused addressing and
bidding and the buddy schemes.

Ans: The focused addressing and bidding strategy incurs a high communication
overhead in maintaining the system load table at the individual processors.
Window size is an important parameter in determining the communication
overhead incurred. If the window size is increased, then the communication
overhead decreases; however, the information at various processors would be
obsolete. This may lead to a scenario where none of the focused processors bids
due to status change in the window duration. If the window size is too small, then
the information would be reasonably up to date at the individual processors, but
the communication overhead in maintaining the status tables would be
unacceptably high.

The buddy algorithm tries to overcome the high communication overhead of the
focused addressing and bidding algorithm. Unlike focused addressing and bidding,
in the buddy algorithm broadcast does not occur periodically at the end of every

window. A processor broadcasts only when the status of a processor changes


either from overloaded to underloaded or vice versa. Further, whenever the
status of a processor changes, it does not broadcast this information to all
processors and limits it only to a subset of processors called its buddy set.

Q6: In a distributed system with six clocks, the maximum allowable difference between any two clocks is 10 ms. The individual clocks have a maximum rate of drift of 2 * 10^-6. Ignore clock setup times and communication latencies.

a) What is the rate at which the clocks need to resynchronize using


i. a simple central time server method?
b) What is the communication overhead in each of the two
schemes?

Ans:

a) i) When clocks are resynchronized every ΔT s, the maximum drift between any two clocks is limited to 2ρΔT = ε.

=> 2 * 2*10^-6 * ΔT = 10*10^-3
=> ΔT = (10*10^-3) / (2 * 2*10^-6)
=> ΔT = 2500 s.
b) i) Master time server transmits (n‐1) = 5 messages per resynchronization
interval.

Number of resynchronization intervals per hour = (60*60)/2500 = 1.44.

Number of messages transmitted per hour = 1.44 * 5 = 7.2 ≈ 8.
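
The arithmetic above can be reproduced with a few lines of Python (illustrative only; the values are taken from the question):

    eps = 10e-3          # maximum allowed difference between clocks (s)
    rho = 2e-6           # maximum drift rate of each clock
    n   = 6              # number of clocks

    delta_t = eps / (2 * rho)                 # 2*rho*dT = eps  ->  dT = 2500 s
    msgs_per_resync = n - 1                   # master sends one message per slave
    resyncs_per_hour = 3600 / delta_t         # = 1.44
    print(delta_t, msgs_per_resync * resyncs_per_hour)   # 2500.0  7.2 (~8 messages per hour)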

Q7: A distributed real-time system consisting of 10 clocks has up to three clocks which are byzantine. The maximum drift between the clocks has to be less than ε = 1 ms. The maximum drift rate between two clocks is ρ = 5 * 10^-6. Compute the resynchronization interval.

Ans:
We know that

ΔT = ε / (2nρ)

=> ΔT = 10^-3 / (2 * 10 * 5*10^-6)
=> ΔT = 10 s.

Q8: A distributed system has 12 clocks, of which at most two are byzantine. The clocks are required to be resynchronized within 1 ms of each other. The maximum drift rate of the clocks is 6 * 10^-6. Compute

- the rate at which the clocks need to exchange time values, and
- the total number of message exchanges required per hour for synchronization.

Ans:
Due to the byzantine clocks, the difference between two good clocks just after resynchronization can be up to (3*2*ε)/n.

Resynchronization is needed before the clocks diverge by more than ε, i.e., when

2ΔTρ = (nε - 6ε)/n

=> 2ΔTρ = ε/2
=> ΔT = 10^-3 / (2 * 2 * 6*10^-6)
=> ΔT = 41.67 s.

Each node transmits (n‐1) messages per resynchronization interval.

Therefore, total message transmissions per resynchronization interval = n*(n‐1) =


132.

Number of resynchronization intervals per hour = (60*60)/41.67 ≈ 86.4

Number of messages transmitted per hour ≈ 86.4 * 132 ≈ 11405.
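
Again purely as an illustration, the computation can be written out in Python (values from the question; rounding may differ slightly from the figures above):

    eps, rho, n, m = 1e-3, 6e-6, 12, 2      # skew bound, drift rate, clocks, byzantine clocks

    # Just after resynchronization, good clocks may differ by 3*m*eps/n; the remaining
    # margin eps - 3*m*eps/n may be consumed by drift at rate 2*rho.
    delta_t = (eps - 3 * m * eps / n) / (2 * rho)
    msgs_per_interval = n * (n - 1)          # every node sends to every other node
    print(round(delta_t, 2), round(3600 / delta_t * msgs_per_interval))
    # ~41.67 s, roughly 11400 messages per hour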

Lectures 21‐30: Real‐Time Operating Systems

Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.

i. Real‐time processes are scheduled at higher priorities than the


kernel processes in RTLinux.
TRUE. The kernel runs as a low priority background task of RTLinux.

ii. Commercial real‐time operating systems such as PSOS and VRTX


support EDF scheduling of tasks.
FALSE. PSOS and VRTX support RMA by assigning static priorities to tasks. Since EDF requires dynamic priority computation, EDF is not supported on PSOS and VRTX.

iii. POSIX 1003.4 specifies that real‐time processes are to be


scheduled at priorities higher than kernel processes.
FALSE. POSIX 1003.4 does not specify at what priority levels the kernel services are to be executed.

iv. Computation intensive tasks dynamically take on higher priorities


in Unix.
FALSE. Under the Unix operating system, I/O-bound tasks are assigned higher priorities to keep the I/O channels as busy as possible; computation-intensive tasks therefore tend to acquire lower priorities over time.

v. Any real‐time priority level can be assigned to tasks for


implementing PCP in Windows NT with ceiling values computed
on these.
FALSE. To implement PCP in Windows NT, the real‐time tasks need to have
even priorities (i.e. 16, 18, …, 30) only.

vi. Task switching time on the average is larger than task preemption
time.
FALSE. Task switching is a part of the total time taken to preempt a task. In
addition, task preemption also requires comparing the priorities of the
currently running task and the tasks in the ready queue.

vii. In general, segmented addressing incurs lower jitter in memory


access compared to virtual addressing scheme.
TRUE. Memory access using virtual addressing schemes can incur large
jitter depending on whether the required page is in the physical memory or
has been swapped out.

viii. POSIX by ANSI/IEEE enables executable files to be portable across


different Unix machines.
FALSE. The main goal of POSIX is application portability at the source code
level.

ix. In an implementation of PCP at the user‐level, half of the available


priority levels are meaningfully assigned to the tasks if FIFO
scheduling is not supported by the operating system among equal
priority tasks.
TRUE. Tasks are assigned only the even priority levels, so that the odd levels in between can serve as ceiling priorities: if the highest priority among all tasks needing a resource is 2*n, then the ceiling priority of the resource is set to 2*n+1. Only half of the available priority levels can therefore be meaningfully assigned to tasks.

x. For a real‐time operating system which does not support memory


protection, a procedure call and a system call are
indistinguishable.
TRUE. Without memory protection, the kernel and the user tasks run in the same address space and mode, so invoking a kernel service involves no mode or address-space switch and behaves just like an ordinary procedure call.

xi. Watchdog timers are used to start sensor and actuator processing
tasks at regular intervals.
FALSE. Watchdog timers are used to detect if a task misses its deadline, and
then to initiate exception handling procedures upon a deadline miss. It is
also used to trigger a system reset or other corrective action if the main
program hangs due to some fault condition.
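
A toy user-level sketch of this idea in Python is shown below; a real watchdog is typically a hardware timer or an RTOS service, and the names used here are illustrative.

    import threading, time

    def run_with_watchdog(task, deadline_s, on_deadline_miss):
        """Run `task`; if it has not completed within `deadline_s`, invoke the handler
        (e.g., to log the miss or start a recovery action)."""
        watchdog = threading.Timer(deadline_s, on_deadline_miss)
        watchdog.start()
        try:
            task()
        finally:
            watchdog.cancel()              # disarm once the task has finished

    run_with_watchdog(lambda: time.sleep(1.0),    # stands in for an overrunning task
                      deadline_s=0.5,
                      on_deadline_miss=lambda: print("deadline missed - recovery initiated"))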

Q2: List the important features that are required to be supported by an RTOS.

Ans: The following is a list of the important features that an RTOS needs to
support:

- Real-time priority levels
- Fast task preemption
- Predictable and fast interrupt latency
- Support for resource-sharing among real-time tasks
- Requirements on memory management
- Support for asynchronous I/O

Q3: What is the difference between synchronous and asynchronous
I/O? Which one is better suited for use in real‐time applications?

Ans: Asynchronous I/O means non‐blocking I/O. Under synchronous I/O system
calls, a process needs to wait till the hardware has completed the physical I/O.
Thus, in case of synchronous I/O, the process is blocked while it waits for the
results of the system call.

On the other hand, if a process uses asynchronous I/O, then the system call will
return immediately once the I/O request has been passed down to the hardware
or queued in the operating system typically before the physical I/O operation has
even begun. The execution of the process is not blocked because it does not need

Prof. R. Mall, IIT Kharagpur Page 31


Real‐Time Systems: Model QnA 2010

to wait for the results of the system call. This helps in achieving deterministic I/O
times in an RTOS.

Q4: Describe an open system. How does an open system compare with a closed system?

Ans: An open system is a vendor‐neutral environment which allows users to


intermix hardware, software, and networking solutions from different vendors.
Open systems are based on open standards and are not copyrighted. The
important advantages of an open system over a closed system are interoperability
and portability. Other advantages are that it reduces the cost of system
development and the time to market a product. It helps increase the availability
of add‐on software packages, enhances the ease of programming and facilitates
easy integration of separately developed modules.

Q5: What are the shortcomings of Windows NT for developing a hard


real‐time application?

Ans: The following are the major shortcomings of Windows NT when used to
develop real‐time applications:

 Interrupt processing – In Windows NT, the priority level of interrupts is


always higher than that of user‐level threads including the threads of real‐
time class. It is not possible for a user‐level thread to execute at a priority
higher than that of ISRs or DPCs. Therefore, even ISRs and DPCs
corresponding to very low priority tasks can preempt real‐time processes.
As a result, the potential blocking of real‐time tasks due to DPCs can be
large.
- Support for resource sharing protocols – Windows NT does not provide any support to real-time tasks for sharing critical resources among themselves.

Q6: What are the drawbacks in using Unix kernel for developing real‐
time applications?

Ans: The following are the major shortcomings in using traditional Unix for real‐
time application development:

- Non-preemptive kernel – The Unix kernel is non-preemptive, i.e., a process


running in the kernel mode cannot be preempted by other processes. A
consequence of this is that even when a low priority process makes a
system call, the high priority processes would have to wait until the system
call by the low priority process completes. For real‐time applications, this
leads to priority inversion.
- Dynamic priority levels – In traditional Unix systems, real-time tasks cannot
be assigned static priority values. Soon after a programmer sets a priority
value for a task, the OS keeps on altering it during the course of execution
of the task. This makes it very difficult to schedule real‐time tasks using
algorithms such as EDF or RMA.
- Insufficient device driver support – In Unix (System V), a device driver runs in
kernel mode. Therefore, if support for a new device is to be added, then
the driver module has to be linked to the kernel modules – necessitating a
system generation step.
- Lack of real-time file services – In Unix, file blocks are allocated as and
when they are requested by an application. As a consequence, while a task
is writing to a file, it may encounter an error when the disk runs out of
space. In other words, no guarantee is given that disk space would be
available when a task writes a block to a file. Traditional file writing
approaches also result in slow writes because the required space has to be
allocated before writing a block. Another problem with traditional file
systems is that blocks of the same file may not be contiguously located on
the disk. This would result in read operations taking unpredictable times,
resulting in jitter in data access.
- Inadequate timer services support – In Unix, real-time timer support is
insufficient for hard real‐time applications. The clock resolution that is
provided to applications is 10 ms, which is too coarse for many hard real‐
time applications.

Q7: Is it true that even in uniprocessor systems multithreading can


result in faster response times compared to single‐threaded tasks?

Ans: Yes, multithreading can result in faster response times even on a single
processor systems. An advantage of multithreading, even for single‐CPU systems,
is the ability for an application to remain responsive to input. In a single threaded
program, if the main execution thread blocks on a long running task, the entire
application can appear to freeze. Moreover, thread creation and switching are much less costly than the corresponding operations on processes (tasks). By moving such long-running tasks to a worker
thread that runs concurrently with the main execution thread, it is possible for
the application to remain responsive to user input while executing tasks in the
background.
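
A minimal Python sketch of this idea follows; the sleep stands in for a long-running computation or blocking I/O, and the example is illustrative only.

    import threading, time

    def long_running_job():
        time.sleep(2)                         # stands in for a lengthy computation or I/O
        print("background job finished")

    # Move the long task to a worker thread so the main thread stays responsive.
    worker = threading.Thread(target=long_running_job, daemon=True)
    worker.start()
    for i in range(5):
        print("main thread still responding to input", i)
        time.sleep(0.2)
    worker.join()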

Q8: Why is dynamically changing the priority levels of tasks important


for traditional operating systems? How does this property affect real‐
time systems?

Ans: One major design decision in traditional operating systems is to provide


acceptable response times to all the user processes. It is normally observed that
I/O transfer rate is primarily responsible for the slow response time. Processors
are extremely fast compared to the transfer times of I/O devices. To mitigate this
problem, it is desirable to keep the I/O channels as busy as possible. This can be
achieved by assigning the I/O bound tasks high priorities. To keep the I/O
channels busy, any task performing I/O should not be kept waiting very long for
the CPU. For this reason, as soon as the task blocks for I/O, its priority is increased
by a priority recomputation rule. This gives the interactive users good response
time.

However, for hard real‐time systems dynamic shifting of priority values is clearly
inappropriate, as it prevents a task from remaining at the high priority level assigned to it, and also prevents scheduling under popular real-time task scheduling
algorithms such as EDF and RMA.

Q9: Explain the differences between a system call and a function call?
What problems may arise if a system call is made indistinguishable from
a function call?

Ans: Application programs invoke operating system services through system calls. Examples of system calls include operating system services for creating a process, I/O operations, etc. A system call involves a trap to the kernel, because certain operations can only be performed in the kernel mode; an ordinary function call, in contrast, executes entirely within the caller's code at the same privilege level.

In many embedded systems, kernel and user processes execute in the same
address space, i.e., there is no memory protection. This makes debugging
applications difficult, since a run‐away pointer can corrupt the operating system
code making the system ‘freeze’.

Q10: Explain the requirements of a real-time file system. How does it compare with traditional file systems?

Ans: Real‐time file systems overcome the problem of unpredictable read times
and data access jitter by storing files contiguously on the disk. Since the file
system preallocates space, the times for read and write operations are more
predictable.

Lectures 31‐39: Real‐Time Communication


Q1: State whether you consider the following statements to be TRUE or
FALSE. Justify your answer in each case.

i. An example of real‐time VBR traffic is streaming compressed


video transmission.
TRUE. Compressed video frames vary considerably in size, so the bit rate varies with time, while the frames must still be delivered within real-time bounds; streaming compressed video is therefore real-time VBR traffic.

ii. The order of nodes in a logical ring must be the same as their
order in the physical network.
FALSE. The physical order in which the stations are connected to the cable
need not be the same as the order of the stations in the logical ring. This is
because a cable is inherently a broadcast medium. Each station would
receive every frame transmitted, and discard those that are not addressed
to it.

iii. The maximum delay suffered by the highest priority packets at a switch when a multi-level FIFO queue-based service discipline is deployed is generally lower than when a framing-based service discipline is deployed.
TRUE. A multilevel FIFO queue-based service is a work-conserving discipline, while a framing-based discipline is non-work-conserving. Work
conserving disciplines usually cause less delay than non‐work conserving
disciplines.

iv. A priority‐queue based service discipline incurs less processing


overhead at a switch compared to a multilevel FIFO queue.
FALSE. Insertion operations and hence maintaining the priority queue
structure is more complex than FIFO operations.

v. Successful working of the countdown protocol is independent of


whether the priority arbitration transmissions start at either the
msb or lsb end of the priority value, as long as all the nodes agree
on the convention.
FALSE. It is not possible to perform the priority arbitration correctly if the transmissions start at the lsb end of the priority value: a node could then be eliminated by a contender whose overall priority value is lower (see the sketch following this list).

vi. RMA scheduling of real‐time messages is ineffective under global


priority‐based message transmission.
TRUE. RMA or EDF scheduling can be used with a global priority‐based
message transmission but the achieved schedulable utilization of the
channel would be very low.

vii. A calendar‐based reservation protocol yields higher GP(U) than


either a global priority‐based or a bounded‐access type of
protocol for transmitting VBR real‐time traffic among tasks over a
LAN.
FALSE. Calendar-based protocols are pessimistic in nature and are designed keeping the worst-case traffic in mind. Hence, a global priority-based or a
bounded‐access type of protocol would have a higher GP(U).

viii. The primary component in end‐to‐end delay that a message


suffers in transit in a network is the propagation delay.
FALSE. The propagation delay is usually in the order of milliseconds or even
microseconds. But the major component in delay is the queuing delay
which can be of the order of seconds.

ix. Under similar traffic conditions, the maximum delay suffered by


packets in a multilevel FCFS queue is lower than that in a framing‐
based service discipline.

TRUE. As noted in (iii) above, a multilevel FIFO/FCFS queue is a work-conserving discipline, whereas a framing-based discipline is non-work-conserving; under similar traffic, the work-conserving discipline causes lower maximum delay.

x. In any practical real‐time communication protocol, the GP is close


to 0 for utilization lower than the ABU and GP is close to 1 as the
utilization increases beyond ABU.
FALSE. The GP(U) is close to 1 for utilization lower than the ABU, and approaches 0 as the utilization increases beyond the ABU.
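
As referenced in (v) above, the following Python sketch simulates msb-first bit-by-bit priority arbitration on a wired-OR bus. It is an illustration under the stated assumption that a 1 bit dominates the bus; starting from the lsb would let a node with a larger low-order bit eliminate a contender whose overall priority value is higher.

    def arbitrate(priorities, bits=8):
        """In each bit slot every surviving node drives its own priority bit; a node that
        drives 0 but observes a 1 on the bus withdraws. The highest priority value wins."""
        contenders = set(priorities)
        for k in reversed(range(bits)):                  # msb first
            bus = max((p >> k) & 1 for p in contenders)  # wired-OR of the driven bits
            contenders = {p for p in contenders if (p >> k) & 1 == bus}
        return contenders

    print(arbitrate({0b1010, 0b1001, 0b0111}))           # {10}: 0b1010 wins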

Q2: What is the difference between hard and soft real‐time


communication supported by a network?

Ans: Hard real-time communication networks are expected to provide absolute (deterministic) QoS guarantees to the applications, while soft real-time networks provide only statistical guarantees, or none at all.

Q3: Distinguish traffic shaping and policing.

Ans: Rate control can be achieved using either traffic shaping or policing. The
main differences between shaping and policing are summarized below.

Objective: Shaping buffers the packets that are above the committed rates; policing drops the excess packets over the committed rates and does not buffer.

Handling bursts: Shaping uses a leaky bucket to delay traffic, achieving a smoothing effect; policing propagates bursts and does no smoothing.

Advantage: Shaping avoids retransmissions due to dropped packets; policing avoids delay due to queuing.

Disadvantage: Shaping can introduce delays due to queuing; policing can reduce the throughput of the affected streams.

Q4: What is meant by QoS routing?

Ans: The primary goals of QoS routing or constraint-based routing are:

- To select routes that can meet the QoS requirements of a connection.
- To increase the utilization of the network.

While determining a route, QoS routing schemes not only consider the topology
of a network, but also the requirements of the flow, the resource availability at
the links, etc. Therefore, QoS routing would be able to find a longer and lightly‐
loaded path rather than the shortest path that may be heavily loaded. QoS
routing schemes are, therefore, expected to be more successful than the
traditional routing schemes in meeting QoS guarantees requested by the
connections.

Q5: Define the concepts of additive, multiplicative and concave


constraints that are normally used in QoS routing schemes.

Ans: Let d(i, j) denote a chosen constraint for the link (i, j). For any path P = (i, j, k, …, l, m), a constraint d is termed additive if d(P) = d(i, j) + d(j, k) + … + d(l, m). That is, the constraint for a path is the sum of the constraints of the individual links making up the path. For example, end-to-end delay is an additive constraint.

A constraint is termed multiplicative if d(P) = d(i, j) * d(j, k) * … * d(l, m). For example, the reliability constraint is multiplicative.

A constraint is termed concave if d(P) = min{d(i, j), d(j, k), …, d(l, m)}. That is, for a concave constraint, the constraint of a path is the minimum of all the constraints of the individual links making up the path. An example of a concave constraint is bandwidth.
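
These three combination rules can be illustrated with a small Python sketch; the link metrics used are made-up values.

    from functools import reduce
    import operator

    def path_metric(link_values, kind):
        """Combine per-link metrics along a path: 'additive' (e.g. delay) sums them,
        'multiplicative' (e.g. reliability) multiplies them, and 'concave'
        (e.g. bandwidth) takes the minimum."""
        if kind == "additive":
            return sum(link_values)
        if kind == "multiplicative":
            return reduce(operator.mul, link_values, 1)
        if kind == "concave":
            return min(link_values)
        raise ValueError(kind)

    links = [(2.0, 0.99, 100), (3.0, 0.95, 10), (1.0, 0.999, 50)]  # (delay ms, reliability, bandwidth Mbps)
    delays, rels, bws = zip(*links)
    print(path_metric(delays, "additive"),        # 6.0 ms end-to-end delay
          path_metric(rels, "multiplicative"),    # ~0.94 end-to-end reliability
          path_metric(bws, "concave"))            # 10 Mbps bottleneck bandwidth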

Q6: A real-time network consists of four nodes, and uses the IEEE 802.4 protocol. The real-time requirement is that node Ni should be able to transmit up to bi bits over each period of duration Pi ms, where bi and Pi are given in the table below.


Node    bi      Pi (ms)
N1      1 K     10000
N2      4 K     50000
N3      16 K    90000
N4      16 K    90000

Compute a suitable TTRT and obtain suitable values of fi (the total number of bits that can be transmitted by node Ni once every cycle). Assume that the propagation time is negligible compared to TTRT and that the system bandwidth is 1 Mbps.

Ans: From an examination of the requirements, it can be seen that the shortest period (deadline) among all messages is 10000 ms, i.e. 10 s. Therefore, we can select TTRT = 10/2 = 5 s. The channel utilization due to the different nodes would be:

C1/T1 = 1/10 Kb/s = 0.100 Kb/s

C2/T2 = 4/50 Kb/s = 0.080 Kb/s

C3/T3 = 16/90 Kb/s ≈ 0.178 Kb/s

C4/T4 = 16/90 Kb/s ≈ 0.178 Kb/s

C1/T1 + C2/T2 + C3/T3 + C4/T4 = 1/10 + 4/50 + 16/90 + 16/90 = 241/450 Kb/s ≈ 0.536 Kb/s
The token holding times (Hi) of the different nodes can be determined as follows:

H1 = 5*(1/10)*(450/241) = 225/241 s ≈ 0.934 s

H2 = 5*(4/50)*(450/241) = 180/241 s ≈ 0.747 s


H3 = 5*(16/90)*(450/241) = 400/241 s ≈ 1.660 s

H4 = 5*(16/90)*(450/241) = 400/241 s ≈ 1.660 s

(As a check, H1 + H2 + H3 + H4 = 1205/241 = 5 s = TTRT.)

f1 = H1*1 Mbps = 225/241 Mb ≈ 933.61 Kb

f2 = H2*1 Mbps = 180/241 Mb ≈ 746.89 Kb

f3 = H3*1 Mbps = 400/241 Mb ≈ 1659.75 Kb

f4 = H4*1 Mbps = 400/241 Mb ≈ 1659.75 Kb
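The same allocation can be automated with a short script (a sketch only: it simply applies the proportional‐allocation rule Hi = TTRT * (Ci/Ti) / Σ(Cj/Tj) used above, with exact fractions to avoid rounding errors):

from fractions import Fraction

TTRT = Fraction(5)            # seconds (half of the shortest period, 10 s)
BANDWIDTH_KBPS = 1000         # 1 Mbps expressed in Kb/s

# (b_i in Kb, P_i in seconds) for nodes N1..N4, taken from the table above
nodes = [(1, 10), (4, 50), (16, 90), (16, 90)]

# Per-node utilization u_i = b_i / P_i (Kb/s) and the total utilization
u = [Fraction(b, p) for b, p in nodes]
total_u = sum(u)

# Token holding times allocated in proportion to utilization: H_i = TTRT * u_i / total_u
H = [TTRT * ui / total_u for ui in u]
assert sum(H) == TTRT         # the whole TTRT is shared out among the four nodes

# f_i = H_i * bandwidth: bits a node may transmit per token visit
for i, (hi, ui) in enumerate(zip(H, u), start=1):
    print(f"N{i}: u = {float(ui):.4f} Kb/s, H = {float(hi):.3f} s, "
          f"f = {float(hi) * BANDWIDTH_KBPS:.2f} Kb")
# N1: H ≈ 0.934 s, f ≈ 933.61 Kb; N2: H ≈ 0.747 s, f ≈ 746.89 Kb;
# N3, N4: H ≈ 1.660 s, f ≈ 1659.75 Kb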

Q7: Describe any two traffic specification models which can satisfactorily be used to specify bursty traffic.

Ans: The following two models are satisfactory for bursty traffic:
• (Xmin, Xavg, Smax, I) model – In this model, Xmin specifies the minimum inter‐arrival time between two packets, Smax specifies the maximum packet size, and I is an interval over which the observations are valid. Xavg is the average inter‐arrival time of packets over an interval of size I. A connection satisfies the (Xmin, Xavg, Smax, I) model if it satisfies the (Xmin, Smax) model and, additionally, during any interval of length I the average inter‐arrival time of its packets is at least Xavg. In this model, the peak and the average rates of the traffic are given by Smax/Xmin and Smax/Xavg, respectively. Note that this model bounds both the peak and the average rates of the traffic. Therefore, it provides a more accurate characterization of VBR traffic than either the (Xmin, Smax) model or the (r, T) model.
• (σ, ρ) model – In this model, σ is the maximum burst size and ρ is the long‐term average rate of the traffic source. The average rate ρ is calculated by observing the number of bits generated over a sufficiently large duration and dividing it by the length of that duration. A connection satisfies the (σ, ρ) model if, during any interval of length t, the number of bits generated by the connection in that interval does not exceed σ + ρ*t. This model can satisfactorily be used to characterize bursty traffic sources, as the sketch below illustrates.
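A conformance check for the (σ, ρ) model can be sketched as follows (illustrative only; the arrival trace and the parameter values are made up, with times in seconds and packet sizes in bits):

def conforms_sigma_rho(arrivals, sigma, rho):
    """Check whether a packet arrival trace satisfies the (sigma, rho) model:
    over every interval, the bits generated must not exceed sigma + rho * length.
    It suffices to test intervals starting at an arrival instant, since those
    are the binding ones."""
    for i, (t1, _) in enumerate(arrivals):
        total = 0
        for t2, size in arrivals[i:]:
            total += size
            if total > sigma + rho * (t2 - t1):
                return False
    return True

# A hypothetical bursty source: a 4-packet burst followed by two sparse packets.
trace = [(0.00, 1000), (0.01, 1000), (0.02, 1000), (0.03, 1000),
         (1.00, 1000), (2.00, 1000)]
print(conforms_sigma_rho(trace, sigma=4000, rho=1000))    # True
print(conforms_sigma_rho(trace, sigma=2000, rho=1000))    # False (burst exceeds sigma)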


Lecture 40: Real‐Time Databases

Q1: State whether the following statements are TRUE or FALSE. Justify
your answer in each case.

i. In real‐time applications, a set of temporal data that is absolutely consistent is guaranteed to be relatively consistent.
FALSE. Absolute and relative consistency are defined by different conditions: absolute consistency requires each data item to have been recorded sufficiently recently with respect to the current time, whereas relative consistency requires the data items of the set to have been recorded sufficiently close to each other in time. One does not imply the other.

ii. When the data contention among transactions is low, the performance of the 2PL‐WP protocol is better than the basic 2PL protocol, but the performance becomes worse under high data contentions.
TRUE. It is true that under low data contention, the performance of 2PL‐WP
protocol is better than the basic 2PL protocol. Its performance becomes
worse than the 2PL protocol under high data contentions because it has to
perform unnecessary priority checks.

iii. 2PL‐WP protocol is free from deadlocks.


FALSE. The following example shows two transactions T1 and T2 which need
access to two data items d1 and d2.

T1: Lock d1, Lock d2, Unlock d2, Unlock d1

T2: Lock d2, Lock d1, Unlock d1, Unlock d2

Assume that T1 has higher priority than T2. T2 starts running first and locks
data item d2. After some time, T1 locks d1 and then tries to lock d2 which is
being held by T2. As a consequence T1 blocks, and T2 needs to lock the data
item d1 being held by T1. Now, the transactions T1 and T2 are deadlocked.


iv. 2PL‐HP protocol is free from priority inversion and deadlocks.


TRUE.

v. Once a transaction reaches the validation phase under the OCC‐BC protocol, it is guaranteed to commit.
TRUE.

vi. If in a real‐time database, the choice is based purely on performance (percentage of tasks meeting their deadlines) considerations, then the OCC protocol should be used.
FALSE. The SCC protocol performs better than the OCC protocol when
transactions have tight deadlines and the load is moderate.

vii. Before a committing transaction can commit in OCC‐BC protocol, the transaction needs to check for possible conflicts with already committed transactions.
FALSE. In OCC‐BC protocol, there is no need for the committing transaction
to check for conflicts with already committed transactions, because if it
were in conflict with any of the committed transactions, it would have
already been aborted.

viii. The serialization order is the same as the committing order of a transaction in a pessimistic concurrency control protocol.
TRUE. This is true for all concurrency control protocols.

ix. RAID disks are generally used in real‐time databases for data
storage.
FALSE. Real‐time databases are usually in‐memory databases. Using disks
for storage will introduce unpredictable delays in read/write operations.


x. OCC protocols are free from deadlocks.


TRUE.

xi. Forward validations are performed by all OCC protocols just before the commitment of a transaction.
FALSE. This is true only for the Forward OCC protocol.

Q2: List the important differences between a real‐time database and a conventional database.

Ans: The following are the main differences between a real‐time database and a
conventional database:

• Unlike traditional databases, timing constraints are associated with the different operations carried out on real‐time databases.
• Real‐time databases have to deal with temporal data, in contrast to the static data handled by traditional databases.
• The performance metrics that are meaningful for the transactions of these two types of databases are very different.

Q3: Discuss which category of concurrency control protocol is best suited under what circumstances.

Ans: Locking‐based concurrency protocols tend to reduce the degree of concurrent execution of transactions as they construct serializable schedules. On the other hand, optimistic approaches attempt to increase parallelism to its maximum, but they prune some of the transactions in order to satisfy serializability. In 2PL‐HP, a transaction could be restarted by, or wait for, another transaction that will itself be aborted later. Such restarts and/or waits cause performance degradation.

Unlike 2PL‐HP, in optimistic approaches incorporating broadcast commit schemes, only validating transactions can cause restarts of other transactions. Since all validating transactions are guaranteed commitment at completion, all restarts generated by such optimistic protocols are useful. Further, the OCC protocols have the advantage of being non‐blocking and free of deadlocks – a very desirable feature in real‐time applications.

At zero contention, all three types of protocols result in identical performance. However, pessimistic protocols perform better as the load (and therefore the number of conflicts) becomes higher. On the other hand, SCC performs better than both OCC and pessimistic protocols when transactions have tight deadlines and the load is moderate.

Q4: The traditional 2PL protocol is not suitable for use in real‐time databases. Why?

Ans: The conventional 2PL protocol is unsatisfactory for real‐time applications for several reasons: the possibility of priority inversions, long blocking delays, lack of consideration of timing information, and deadlocks. 2PL is also restrictive and prevents concurrent executions that could easily have been allowed without causing any violation of the consistency of the database.

Q5: Given that a relative consistency set R = {position, velocity, acceleration} and Rrvi = 100 ms, and the following data items: position = (25 m, 2500 ms, 200 ms), velocity = (300 m/s, 2550 ms, 300 ms), acceleration = (20 m/s2, 2425 ms, 200 ms), current time = 2600 ms. Are the given data items absolutely valid? Also, are they relatively consistent?

Ans: position is absolutely valid, as (2600‐2500) = 100 < 200

velocity is also absolutely valid, as (2600‐2550) = 50 < 300

acceleration is also absolutely valid, as (2600‐2425) = 175 < 200

For relative consistency, we have to check whether the different data items are pair‐wise consistent. It can easily be checked that the given set of data is not relatively consistent, since for velocity and acceleration: (2550‐2425) = 125, which is not less than 100.
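The checks can be written out as a short sketch (illustrative only; each data item is represented as a (value, timestamp, avi) triple, as in the question):

from itertools import combinations

# (value, observation timestamp in ms, absolute validity interval in ms)
data = {
    "position":     (25,  2500, 200),
    "velocity":     (300, 2550, 300),
    "acceleration": (20,  2425, 200),
}
current_time = 2600    # ms
rvi = 100              # relative validity interval of the set, in ms

# Absolute consistency: every item is recent enough with respect to the current time.
absolutely_valid = all(current_time - ts < avi for _, ts, avi in data.values())

# Relative consistency: every pair of items was observed within rvi of each other.
relatively_consistent = all(abs(data[a][1] - data[b][1]) < rvi
                            for a, b in combinations(data, 2))

print(absolutely_valid)          # True  (100 < 200, 50 < 300, 175 < 200)
print(relatively_consistent)     # False (velocity vs acceleration: 125 is not < 100)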
