Questions and Answers
Contents
Lectures 1‐5: Introduction to Real‐Time Systems........................................................................... 3
Lectures 6‐12: Real‐Time Task Scheduling...................................................................................... 9
Lectures 13‐16: Resource Sharing and Dependencies among Real‐Time Tasks........................... 18
Lectures 17‐20: Scheduling Real‐Time Tasks in Multiprocessor and Distributed Systems .......... 23
Lectures 21‐30: Real‐Time Operating Systems ............................................................................. 29
Lectures 31‐39: Real‐Time Communication.................................................................................. 36
Lecture 40: Real‐Time Databases ................................................................................................. 42
Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.
viii. Soft real‐time tasks do not have any associated time bounds.
FALSE. Soft real‐time tasks also have time bounds associated with them.
Instead of absolute values of time, the constraints are expressed in terms of
the average response times required.
Ans: In a real‐time application, the notion of time stands for the absolute time
which is quantifiable. In contrast to real time, logical time, used in most general
applications, deals with a qualitative notion of time and is expressed
using event ordering relations. For example, consider the following part of the
behavior of library automation software used to automate the bookkeeping
activities of a college library: “After a query book command is given by the user,
the details of all the matching books are displayed by the software”.
Ans: Performance constraints are the constraints that are imposed on the
response of the system. Behavioral constraints are the constraints that are
imposed on the stimuli generated by the environment. Behavioral constraints
ensure that the environment of a system is well‐behaved, whereas performance
constraints ensure that the computer system performs satisfactorily.
Q4: Explain the important differences between hard, firm and soft real‐
time systems.
Ans: A hard real‐time task is one that is constrained to produce its results within
certain predefined time bounds. The system is considered to have failed
whenever any of its hard real‐time tasks does not produce its required results
before the specified time bound. Unlike a hard real‐time task, even when a firm
real‐time task does not complete within its deadline, the system does not fail. The
late results are merely discarded. In other words, the utility of the results
computed by a real‐time task becomes zero after the deadline. Soft real‐time
tasks also have time bounds associated with them. However, unlike hard and firm
real‐time tasks, the timing constraints on soft real‐time tasks are not expressed as
absolute values. Instead, the constraints are expressed in terms of the average
response times required.
Ans: A fail‐safe state of a system is one which, if entered when the system fails,
ensures that no damage results. All traditional non‐real‐time systems have one or
more fail‐safe states.
Q8: List the different types of timing constraints that can occur in a
real‐time system?
Ans: The different timing constraints associated with a real‐time system can be
broadly classified into the following categories:
1) Performance constraints
2) Behavioral constraints
Each of the performance and behavioral constraints can further be classified into
the following types:
1) Delay constraint
2) Deadline constraint
3) Duration constraint
Ans:
(a) The EFSM model for the problem is shown in the figure below.
(b) The EFSM model for the problem is shown in the figure below.
Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.
viii. Scheduling decisions are made only at the arrival and completion
of tasks in a non‐preemptive event‐driven task scheduler.
TRUE. In event‐driven scheduling, the scheduling points are defined by task
completion and task arrival times. This is because during the course of
execution of a task on the CPU, the task cannot be preempted.
Ans: A periodic task is one that repeats after a certain fixed time interval. The
precise time instants at which periodic tasks recur are usually demarcated by
clock interrupts. For this reason, periodic tasks are also referred to as clock‐driven
tasks.
A sporadic task is one that recurs at random instants. Each sporadic task is
characterized by a parameter gi which implies that two instances of the sporadic
task have to be separated by a minimum time of gi. An aperiodic task is in many
ways similar to a sporadic task. An aperiodic task can arise at random instants. In
case of aperiodic tasks, the minimum separation gi between two consecutive
instances can be 0. Also, the deadline for an aperiodic task is expressed either as
an average value or statistically.
Real‐time programmers commonly handle tasks with tight completion time jitter
requirements using any one of the following two techniques:
If only one or two actions (tasks) have tight jitter requirements, these
actions are assigned very high priority. This method works well only when
there are a very small number of actions (tasks). When it is used in an
application in which the tasks are barely schedulable, it may result in some
tasks missing their respective deadlines.
If jitter must be minimized for an application that is barely schedulable,
each task needs to be split into two: one which computes the output but
does not pass it on, and one which passes the output on. This method
involves setting the second task’s priority to very high values and its period
to be the same as that of the first task. An action scheduled with this
approach will run one cycle behind schedule, but the tasks will have tight
completion time jitter.
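To make the second technique concrete, the following toy sketch (purely illustrative, not a real scheduler) mimics the split: the output part, run first in every cycle as if it had the highest priority, releases the result computed in the previous cycle, so the release instant hardly depends on how long the computation takes.

```python
# Illustrative sketch of the task-splitting technique for tight output jitter.
# The "output task" runs first in each cycle (i.e., at the highest priority)
# and emits the result computed in the PREVIOUS cycle; the "compute task"
# may finish at varying points within the cycle without affecting the output
# instant. The schedule is simulated by a plain loop.
import random

pending = None   # result produced in the current cycle
ready = None     # result handed over from the previous cycle

for cycle in range(5):
    # Output task: the emission instant is pinned to the start of the cycle.
    if ready is not None:
        print(f"cycle {cycle}: emit result of cycle {cycle - 1}: {ready}")

    # Compute task: its completion time varies from cycle to cycle.
    pending = round(random.uniform(0.0, 1.0), 3)

    # Hand the result over; it will be emitted at the start of the next cycle.
    ready = pending
```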
Q6: A real‐time system consists of three tasks T1, T2, and T3. Their
characteristics have been shown in the following table.
Task    Phase (ms)    Execution Time (ms)    Relative Deadline (ms)    Period (ms)
T1      20            10                     20                        20
T2      40            10                     50                        50
T3      70            20                     80                        80
Ans: The schedule repeats after every major cycle. The major cycle of a set of tasks
is the LCM of their periods, even when the tasks have arbitrary phasing.
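As a quick illustration of this point, the major cycle of the task set above can be computed as the LCM of the periods; a minimal sketch (assuming Python 3.9+ for math.lcm):

```python
# Major cycle (hyperperiod) of the task set above: LCM of the task periods.
from math import lcm

periods_ms = [20, 50, 80]       # periods of T1, T2, T3 from the table
major_cycle = lcm(*periods_ms)
print(major_cycle)              # 400: the schedule repeats every 400 ms
```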
Q7: Using a cyclic real‐time scheduler, suggest a suitable frame size that
can be used to schedule three periodic tasks T1, T2, and T3 with the
following characteristics:
Task    Phase (ms)    Execution Time (ms)    Relative Deadline (ms)    Period (ms)
T1      0             20                     100                       100
T2      0             20                     80                        80
T3      0             30                     150                       150
Ans: For the given task set, an appropriate frame size F is one that satisfies all the
required constraints.
First, F must be at least as large as the largest execution time: F >= 30.
Next, F should squarely divide the major cycle, and every task instance must be able
to complete between its arrival and its deadline, i.e. 2F − gcd(F, pi) <= di for every
task Ti. Checking F = 30:
For T1: 2 × 30 − gcd(30, 100) = 60 − 10 = 50 <= 100. Satisfied.
For T2: 2 × 30 − gcd(30, 80) = 60 − 10 = 50 <= 80. Satisfied.
For T3: 2 × 30 − gcd(30, 150) = 60 − 30 = 30 <= 150. Satisfied.
Therefore, F = 30 is a suitable frame size. We can carry out similar computations for
the other possible frame sizes, i.e. F = 40, 60, 100, 120, 150, 200, 300, 400, 600,
1200, to find the other possible solutions. However, it is better to choose the
shortest frame size.
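The search for suitable frame sizes can be sketched as follows (illustrative only), using the constraints assumed in the checks above: F at least as large as the largest execution time, F dividing the major cycle, and 2F − gcd(F, pi) <= di for every task.

```python
# Enumerate candidate frame sizes for the task set of Q7.
from math import gcd, lcm

tasks = [            # (execution time e, period p, relative deadline d) in ms
    (20, 100, 100),  # T1
    (20, 80, 80),    # T2
    (30, 150, 150),  # T3
]

major_cycle = lcm(*(p for _, p, _ in tasks))          # 1200 ms
candidates = [
    F for F in range(max(e for e, _, _ in tasks), major_cycle + 1)
    if major_cycle % F == 0                           # integral frames per cycle
    and all(2 * F - gcd(F, p) <= d for _, p, d in tasks)
]
print(candidates)    # the smallest valid frame size, 30, is the one chosen above
```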
Ans: Let us first compute the total CPU utilization due to the given tasks:
U = Σui = 25/150 + 7/40 + 10/60 + 10/30 = 0.84 <= 1
Therefore, the necessary condition is satisfied.
Next, applying the Liu and Layland criterion, Σui <= n(2^(1/n) − 1):
for n = 4, the bound is 4(2^(1/4) − 1) ≈ 0.76, whereas U = 0.84.
Since 0.84 > 0.76, the criterion is not satisfied.
Although the given set of tasks fails the Liu and Layland test, which is pessimistic
in nature, we need to carry out Lehoczky's test.
Testing for task T4: since e4 <= d4, T4 would meet its first deadline.
Testing for task T2: 7 + ⌈40/30⌉ × 10 = 27 <= 40. Satisfied.
Task T2 would meet its first deadline.
Testing for task T3: 10 + ⌈60/40⌉ × 7 + ⌈60/30⌉ × 10 = 44 <= 50. Satisfied.
Task T3 would meet its first deadline.
Testing for task T1: 25 + ⌈150/60⌉ × 10 + ⌈150/40⌉ × 7 + ⌈150/30⌉ × 10 = 133 > 100.
Not satisfied.
Therefore, task T1 would fail to meet its first deadline.
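The two checks carried out above can be reproduced with a short sketch; the task parameters are the ones inferred from the worked figures, and T4's deadline is assumed equal to its period since it is not stated explicitly.

```python
# Liu-Layland utilization test and the Lehoczky-style first-deadline check.
from math import ceil

tasks = [            # (execution e, period p, deadline d), highest priority first
    (10, 30, 30),    # T4 (deadline assumed equal to its period)
    (7, 40, 40),     # T2
    (10, 60, 50),    # T3
    (25, 150, 100),  # T1
]

U = sum(e / p for e, p, _ in tasks)
n = len(tasks)
print(f"U = {U:.2f}, Liu-Layland bound = {n * (2 ** (1 / n) - 1):.2f}")

for i, (e, p, d) in enumerate(tasks):
    # Demand on the CPU by this task plus all higher-priority tasks.
    demand = e + sum(ceil(p / pk) * ek for ek, pk, _ in tasks[:i])
    print(f"priority level {i}: demand {demand} <= deadline {d}? {demand <= d}")
```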
Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.
iv. A task can undergo priority inversion for some duration under PCP
even if it does not require any resource.
TRUE. Under PCP, an intermediate priority task that does not need any
resource can still undergo inheritance‐related inversion: when a lower
priority task holding a resource has its priority raised to that of a blocked
higher priority task, the intermediate priority task is kept from running
for that duration.
vi. A separate queue is maintained for the waiting tasks for each
critical resource in HLP.
FALSE. Unlike PIP, HLP does not maintain a separate queue for each critical
resource. The reason is that whenever a task acquires a resource, it
executes at the ceiling priority of the resource, and the other tasks that
may need this resource do not even get a chance to execute and request
for the resource.
ix. Under PCP, the highest priority task does not suffer any inversions
when sharing certain critical resources.
FALSE. Even under PCP, the highest priority task can suffer from direct
and avoidance‐related inversions.
x. Under PCP, the lowest priority task does not suffer any inversions
when sharing certain critical resources.
TRUE. In PCP, the lowest priority task does not suffer from any of direct,
inheritance‐related, or avoidance‐related inversions.
Ans: When a lower priority task is already holding a resource, a higher priority
task needing the same resource has to wait and cannot make progress with its
computations. The higher priority task remains blocked until the lower priority
task releases the required non‐preemptable resource. In this situation, the higher
priority task is said to undergo simple priority inversion on account of the lower
priority task for the duration it waits while the lower priority task keeps holding
the resource.
Q3: What can be the types of priority inversions that a task might
undergo on account of a lower priority task under PCP?
Ans: Tasks sharing a set of resources using PCP may undergo three important
types of priority inversions: direct inversion, inheritance‐related inversion, and
avoidance inversion.
Direct inversion occurs when a higher priority task waits for a lower priority task
to release a resource that it needs. Inheritance‐related inversion occurs when a
lower priority task is holding a resource and a higher priority task is waiting for it.
Then, the priority of the lower priority task is raised to that of the waiting higher
priority task by the inheritance clause of the PCP. As a result, the intermediate
priority tasks not needing the resource undergo inheritance‐related inversion.
In PCP, when a task requests a resource, its priority is checked against the current
system ceiling (CSC). The requesting task is granted use of the resource only when its priority is greater
than CSC. Therefore, even when a resource that a task is requesting is idle, the
requesting task may be denied access to the resource if the requesting task’s
priority is less than CSC. A task, whose priority is greater than the currently
executing task, but lesser than the CSC and needs a resource that is currently not
in use, is said to undergo avoidance‐related inversion.
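A rough sketch of this grant rule follows (not a complete PCP implementation; the class, the names, and the convention that a larger number means a higher priority are illustrative).

```python
# Resource-grant rule of PCP: grant only if the resource is free AND the
# requester's priority exceeds the current system ceiling (CSC), computed
# over the resources currently held by OTHER tasks.
class Task:
    def __init__(self, name, priority):
        self.name, self.priority = name, priority

def can_grant(task, resource, held_resources, ceilings):
    """held_resources: {resource: holding task}; ceilings: {resource: ceiling}."""
    if resource in held_resources:                       # resource is busy
        return False
    csc = max(
        (ceilings[r] for r, holder in held_resources.items() if holder is not task),
        default=-1,                                      # no resource in use
    )
    return task.priority > csc                           # avoidance rule

# R1 (ceiling 5) is held by a low-priority task; a priority-4 task asking for
# the free resource R2 is still refused, because 4 is not greater than CSC = 5.
ceilings = {"R1": 5, "R2": 3}
held = {"R1": Task("T_low", 1)}
print(can_grant(Task("T_mid", 4), "R2", held, ceilings))   # False
```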
Ans: Deadlocks occur only when different (more than one) tasks hold part of
each other’s required resources at the same time, and then they request for the
resources being held by each other. But under PCP, when one task is executing
with some resources, any other task cannot hold a resource that may ever be
needed by this task. That is, when a task is granted one resource, all its required
resources must be free. This prevents the possibility of any deadlock.
Tasks are single blocking under PCP. That means, under PCP a task can undergo at
most one inversion during its execution. This feature of PCP prevents chain
blocking.
Ans: In PIP, whenever a request for a resource is made, the resource will be
allocated to the requesting task if it is free. However, in PCP a resource may not
be granted to a requesting task even if the resource is free. This strategy in PCP
helps in avoiding potential deadlocks.
Q6: The following table shows the details of tasks in a real‐time system.
The tasks have zero phasing and repeat with a period of 90 ms.
Determine a feasible schedule to be used by a table‐driven scheduler.
Task    Computation Time ei (ms)    Deadline di (ms)    Dependency
T1      30                          90                  ‐
T2      15                          40                  T1, T3
T3      20                          40                  T1
T4      10                          70                  T2
Ans:
Step 1: Sort the tasks in increasing order of their deadlines: T2, T3, T4, T1.
[The resulting table‐driven schedule over the 90 ms major cycle was shown as a timeline figure, not reproduced here.]
Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.
FALSE. Focused addressing and bidding algorithm used for allocating tasks
in distributed real‐time systems is a dynamic allocation technique.
Ans: Besides the traditional use of clocks in a computer system, clocks are also
used for determining timeouts and time stamping. However, different clocks in a
distributed system tend to diverge since it is almost impossible to have two clocks
that run exactly at the same speed. This lack of synchrony among clocks is
expressed as the clock skew and determines the attendant drifts of the clocks
with time. Lack of synchrony and drift among clocks makes the time stamping and
timeout operations in a distributed real‐time system meaningless. Therefore, to
have meaningful timeouts and time stamping spanning more than one node of a
distributed system, the clocks need to be synchronized.
Ans: The focused addressing and bidding strategy incurs a high communication
overhead in maintaining the system load table at the individual processors.
Window size is an important parameter in determining the communication
overhead incurred. If the window size is increased, then the communication
overhead decreases; however, the information at various processors would be
obsolete. This may lead to a scenario where none of the focused processors bids
due to status change in the window duration. If the window size is too small, then
the information would be reasonably up to date at the individual processors, but
the communication overhead in maintaining the status tables would be
unacceptably high.
The buddy algorithm tries to overcome the high communication overhead of the
focused addressing and bidding algorithm. Unlike focused addressing and bidding,
in the buddy algorithm status broadcasts do not occur periodically at the end of
every window. Instead, a node announces its status only when its state changes
(for example, from underloaded to overloaded), and only to a small subset of
processors known as its buddy set.
Ans:
Ans:
We know that ΔT = ε/(2nρ). Therefore,
ΔT = 10⁻³/(2 × 10 × 5 × 10⁻⁶) = 10 s.
Ans: The maximum difference that the Byzantine clocks can introduce between two
good clocks is (3 × 2 × ε)/n = 6ε/n, with two Byzantine clocks among the n clocks.
For the good clocks to remain synchronized to within ε, the drift accumulated
between two resynchronizations must satisfy
2ΔTρ <= (nε − 6ε)/n = ε/2 (here n = 12),
so that ΔT = ε/(4ρ) = 10⁻³/(2 × 2 × 6 × 10⁻⁶) = 41.67 s.
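A small sketch evaluating the two resynchronization intervals computed above, with the values as reconstructed from the worked answers:

```python
# Resynchronization intervals for the two clock-synchronization examples above.
def resync_interval_simple(eps, n, rho):
    """Non-Byzantine case: delta_T = eps / (2 * n * rho)."""
    return eps / (2 * n * rho)

def resync_interval_byzantine(eps, rho):
    """Case worked above (2 Byzantine clocks out of 12): 2*dT*rho = eps/2."""
    return eps / (4 * rho)

print(resync_interval_simple(1e-3, 10, 5e-6))    # 10.0 s
print(resync_interval_byzantine(1e-3, 6e-6))     # ~41.67 s
```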
Q1: State whether the following statements are TRUE or FALSE. Justify
your answer.
vi. Task switching time on the average is larger than task preemption
time.
FALSE. Task switching is a part of the total time taken to preempt a task. In
addition, task preemption also requires comparing the priorities of the
currently running task and the tasks in the ready queue.
xi. Watchdog timers are used to start sensor and actuator processing
tasks at regular intervals.
FALSE. Watchdog timers are used to detect if a task misses its deadline, and
then to initiate exception handling procedures upon a deadline miss. It is
also used to trigger a system reset or other corrective action if the main
program hangs due to some fault condition.
Ans: The following is a list of the important features that an RTOS needs to
support:
Ans: Asynchronous I/O means non‐blocking I/O. Under synchronous I/O system
calls, a process needs to wait till the hardware has completed the physical I/O.
Thus, in case of synchronous I/O, the process is blocked while it waits for the
results of the system call.
On the other hand, if a process uses asynchronous I/O, then the system call will
return immediately once the I/O request has been passed down to the hardware
or queued in the operating system typically before the physical I/O operation has
even begun. The execution of the process is not blocked because it does not need
to wait for the results of the system call. This helps in achieving deterministic I/O
times in an RTOS.
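A minimal sketch of the difference on a POSIX system, using Python's os module (illustrative only; a real RTOS would expose its own asynchronous I/O primitives):

```python
# Blocking vs non-blocking reads on a pipe (assumes a POSIX system).
import os

r_fd, w_fd = os.pipe()

# With a blocking descriptor, os.read() would suspend the caller until data
# arrives (synchronous I/O). Marking the descriptor non-blocking makes the
# call return immediately even when no data is available yet.
os.set_blocking(r_fd, False)

try:
    data = os.read(r_fd, 1024)       # returns at once
except BlockingIOError:
    data = None                      # no data yet; the task keeps running

# ... the task can do useful work here and poll (or use select/poll) later ...

os.write(w_fd, b"sensor sample")
print(os.read(r_fd, 1024))           # data is now available
```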
Q4: Describe an open system. How does an open system compare with
a closed system?
Ans: The following are the major shortcomings of Windows NT when used to
develop real‐time applications:
Q6: What are the drawbacks in using Unix kernel for developing real‐
time applications?
Ans: The following are the major shortcomings in using traditional Unix for real‐
time application development:
Ans: Yes, multithreading can result in faster response times even on a single
processor system. An advantage of multithreading, even for single‐CPU systems,
is the ability for an application to remain responsive to input. In a single threaded
program, if the main execution thread blocks on a long running task, the entire
application can appear to freeze. Moreover, creating and switching threads is much
cheaper than creating and switching tasks (processes). By moving such long running tasks to a worker
thread that runs concurrently with the main execution thread, it is possible for
the application to remain responsive to user input while executing tasks in the
background.
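A minimal sketch of this idea (the job and its duration are made up for illustration):

```python
# Keeping the main thread responsive by moving a long running job to a
# worker thread, even on a single-CPU system.
import threading
import time

def long_running_job():
    time.sleep(2)                    # stands in for a lengthy computation or I/O wait
    print("background job finished")

worker = threading.Thread(target=long_running_job, daemon=True)
worker.start()                       # the main thread is not blocked by the job

for i in range(4):                   # main thread keeps handling "user input"
    print(f"main thread still responsive ({i})")
    time.sleep(0.5)

worker.join()
```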
However, for hard real‐time systems, dynamic shifting of priority values is clearly
inappropriate, as it prevents tasks from being scheduled at constant high priority
levels, and it also prevents scheduling tasks under popular real‐time task scheduling
algorithms such as EDF and RMA.
Q9: Explain the differences between a system call and a function call?
What problems may arise if a system call is made indistinguishable from
a function call?
Ans: Application programs invoke operating system services through system calls.
Examples of system calls include operating system services for creating a process,
performing I/O operations, etc. A system call traps into the kernel and is executed
in kernel mode, whereas an ordinary function call executes entirely within the user
process; this is necessary because certain operations can only be performed in the
kernel mode.
In many embedded systems, kernel and user processes execute in the same
address space, i.e., there is no memory protection. This makes debugging
applications difficult, since a run‐away pointer can corrupt the operating system
code making the system ‘freeze’.
Ans: Real‐time file systems overcome the problem of unpredictable read times
and data access jitter by storing files contiguously on the disk. Since the file
system preallocates space, the times for read and write operations are more
predictable.
ii. The order of nodes in a logical ring must be the same as their
order in the physical network.
FALSE. The physical order in which the stations are connected to the cable
need not be the same as the order of the stations in the logical ring. This is
because a cable is inherently a broadcast medium. Each station would
receive every frame transmitted, and discard those that are not addressed
to it.
TRUE.
Ans: Rate control can be achieved using either traffic shaping or policing. The
main differences between shaping and policing are given in the following table.
                 Shaping                                          Policing
Objective        Buffers the packets that are above the           Drops the excess packets over the
                 committed rate.                                  committed rate; does not buffer.
Handling bursts  Uses a leaky bucket to delay traffic,            Propagates bursts; does no smoothing.
                 achieving a smoothing effect.
Advantage        Avoids retransmissions due to dropped packets.   Avoids delay due to queuing.
Disadvantage     Can introduce delays due to queuing.             Can reduce the throughput of the
                                                                  affected streams.
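A toy sketch of the behavioural difference (token accounting is deliberately simplified; the classes and numbers are illustrative):

```python
# Shaping buffers packets that exceed the committed rate; policing drops them.
from collections import deque

class Shaper:
    def __init__(self, tokens):
        self.tokens, self.queue = tokens, deque()
    def accept(self, pkt):
        if self.tokens > 0:
            self.tokens -= 1
            return f"send {pkt}"
        self.queue.append(pkt)            # buffer the excess packet for later
        return f"queued {pkt}"

class Policer:
    def __init__(self, tokens):
        self.tokens = tokens
    def accept(self, pkt):
        if self.tokens > 0:
            self.tokens -= 1
            return f"send {pkt}"
        return f"dropped {pkt}"           # no buffering at all

shaper, policer = Shaper(tokens=2), Policer(tokens=2)
for pkt in ["p1", "p2", "p3"]:
    print(shaper.accept(pkt), "|", policer.accept(pkt))
```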
While determining a route, QoS routing schemes not only consider the topology
of a network, but also the requirements of the flow, the resource availability at
the links, etc. Therefore, QoS routing would be able to find a longer and lightly‐
loaded path rather than the shortest path that may be heavily loaded. QoS
routing schemes are, therefore, expected to be more successful than the
traditional routing schemes in meeting QoS guarantees requested by the
connections.
Ans: Let d(i, j) denote a chosen constraint for the link (i, j). For any path P = (i, j, k,
…, l, m), a constraint d would be termed additive if d(P) = d(i, j) + d(j, k) + … + d(l,
m). That is, the constraint for a path is the sum of the constraints of the individual
links making up the path. For example, end‐to‐end delay is an additive constraint.
A constraint is termed concave if d(P) = min{d(i, j), d(j, k), …, d(l, m)}. That is, for a
concave constraint, the constraint of a path is the minimum of all the constraints
of the individual links making up the path. An example of concave constraint is
bandwidth.
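A small sketch with made-up link values shows how the two kinds of constraints compose along a path:

```python
# End-to-end delay is additive along a path; bandwidth is concave (minimum).
delay = {("i", "j"): 5, ("j", "k"): 8, ("k", "m"): 3}         # ms per link
bandwidth = {("i", "j"): 10, ("j", "k"): 4, ("k", "m"): 7}    # Mb/s per link

path = [("i", "j"), ("j", "k"), ("k", "m")]

print(sum(delay[link] for link in path))       # additive: 16 ms end to end
print(min(bandwidth[link] for link in path))   # concave: 4 Mb/s bottleneck
```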
Q6: A real‐time network consists of four nodes, and uses IEEE 802.4
protocol. The real‐time requirement is that node Ni should able to
transmit up to bi bits over each period of duration Pi ms, where bi and Pi
are given in the table below.
Node    bi (bits)    Pi (ms)
N1      1 K          10000
N2      4 K          50000
N3      16 K         90000
N4      16 K         90000
Ans: From an examination of the requirements, it can be seen that the shortest
deadline among all messages is 10000 ms. Therefore, we can select TTRT =
10000/2 ms = 5 s. The channel utilization due to the different nodes would be:
C1/T1 = 1/10 Kb/s
C2/T2 = 4/50 Kb/s
C3/T3 = 16/90 Kb/s
C4/T4 = 16/90 Kb/s
The total utilization is therefore
C1/T1 + C2/T2 + C3/T3 + C4/T4 = 1/10 + 4/50 + 16/90 + 16/90 = 241/450 ≈ 0.54 Kb/s.
The token holding times (Hi) of the different nodes can then be determined by
dividing the TTRT among the nodes in proportion to their utilizations:
H1 = 5 × (1/10)/(241/450) = 225/241 ≈ 0.93 s
H2 = 5 × (4/50)/(241/450) = 180/241 ≈ 0.75 s
H3 = 5 × (16/90)/(241/450) = 400/241 ≈ 1.66 s
H4 = 5 × (16/90)/(241/450) = 400/241 ≈ 1.66 s
(The four token holding times together account for the entire TTRT of 5 s.)
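The proportional allocation used above can be sketched directly from the node parameters in the table (illustrative only):

```python
# Token holding times: share the TTRT among nodes in proportion to their
# utilization b_i / P_i (b_i in Kb, P_i in seconds).
nodes = {
    "N1": (1, 10),
    "N2": (4, 50),
    "N3": (16, 90),
    "N4": (16, 90),
}
TTRT = 5.0                                        # seconds

util = {n: b / p for n, (b, p) in nodes.items()}  # Kb/s per node
total = sum(util.values())

for n, u in util.items():
    print(f"H_{n} = {TTRT * u / total:.2f} s")
```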
Ans: The following two models are satisfactory for bursty traffic:
(Xmin, Xavg, Smax, I) model – In this model, Xmin specifies the minimum inter‐
arrival time between two packets. Smax specifies the maximum packet size
and I is an interval over which the observations are valid. Xavg is the average
inter‐arrival time of packets over an interval of size I. A connection would
satisfy (Xmin, Xavg, Smax, I) model if it satisfies (Xmin, Smax) model, and
additionally, during any interval of length I the average inter‐arrival time of
packets is Xavg. In this model, the peak and the average rates of the traffic
are given by Smax/Xmin and Smax/Xavg, respectively. Note that this model
bounds both peak and average rates of the traffic. Therefore, this model
provides a more accurate characterization of VBR traffic compared to either
(Xmin, Smax) model or (r, T) model.
(σ, ρ) model – In this model, σ is the maximum burst size and ρ is the long
term average rate of the traffic source. Average traffic ρ is calculated by
observing the number of packets generated over a sufficiently large
duration and dividing this by the size of the duration. A connection would
satisfy the (σ, ρ) model if, during any interval of length t, the number of bits
generated by the connection in that interval is less than σ + ρ*t. This
model can satisfactorily be used to model bursty traffic sources.
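A sketch of a (σ, ρ) conformance check over an illustrative arrival trace (times and packet sizes are made up):

```python
# (sigma, rho) check: over every interval, bits generated must stay below
# sigma + rho * (interval length).
def conforms(arrivals, sigma, rho):
    """arrivals: list of (time, bits), sorted by time."""
    for i, (t_start, _) in enumerate(arrivals):
        total = 0
        for t, bits in arrivals[i:]:
            total += bits
            if total >= sigma + rho * (t - t_start):
                return False
    return True

arrivals = [(0, 400), (1, 400), (10, 200), (20, 200)]   # (ms, bits)
print(conforms(arrivals, sigma=1000, rho=100))           # True for this trace
```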
Q1: State whether the following statements are TRUE or FALSE. Justify
your answer in each case.
Assume that T1 has higher priority than T2. T2 starts running first and locks
data item d2. After some time, T1 locks d1 and then tries to lock d2 which is
being held by T2. As a consequence T1 blocks, and T2 needs to lock the data
item d1 being held by T1. Now, the transactions T1 and T2 are deadlocked.
ix. RAID disks are generally used in real‐time databases for data
storage.
FALSE. Real‐time databases are usually in‐memory databases. Using disks
for storage will introduce unpredictable delays in read/write operations.
Ans: The following are the main differences between a real‐time database and a
conventional database:
For relative consistency, we have to check whether the different data items are
pair‐wise consistent. It can be easily checked that the given set of data is not
relatively consistent, since for velocity and acceleration the difference of
timestamps (2550 − 2425) = 125 is not less than 100.
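A small sketch of such a pairwise check; the velocity and acceleration timestamps are the ones used above, while the position timestamp is an assumed illustrative value.

```python
# Relative consistency: every pair of data items in the set must have
# timestamps within the allowed bound (100, as in the example above).
from itertools import combinations

timestamps = {"position": 2500, "velocity": 2550, "acceleration": 2425}
BOUND = 100

consistent = all(
    abs(timestamps[a] - timestamps[b]) < BOUND
    for a, b in combinations(timestamps, 2)
)
print(consistent)   # False: velocity and acceleration differ by 125
```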