RTS M-4 Notes
ON
MODULE 4
Operating Systems
By
Suprith P G
Assistant Professor
Department Of ECE
GM Institute of Technology, Davangere
REAL-TIME SYSTEMS 18EC731
OPERATING SYSTEMS
MODULE-4
Introduction:
The traditional approach is to incorporate all the requirements inside a general-purpose operating system, as illustrated in Figure 6.1. Access to the hardware of the system and to the I/O devices is through the operating system. In many real-time and multiprogramming systems, restriction of access is enforced by hardware and software traps; in single-job operating systems, access through the OS is not usually enforced. In addition to supporting and controlling the basic activities, the OS provides various utility programs, e.g. loaders, linkers, assemblers and debuggers, as well as run-time support for high-level languages.
A general-purpose OS will provide some facilities that are not required in a particular application, and being forced to include them adds unnecessarily to the system overheads. Usually, during installation of an OS, certain features can be selected or omitted.
Recently, operating systems which provide only a minimum kernel or nucleus have become popular; additional features can be added by the application programmer writing in a high-level language. This structure is shown in Figure 6.2. In this type of OS the distinction between the OS and the application software becomes blurred. The approach has many advantages for applications that involve small, embedded systems.
The relationship between various sections of a simple OS, the computer hardware and the user is
illustrated in figure 6.3. The command processor provides a means by which the user can
communicate with the OS from the computer console device.
The actual processing of the commands issued by the user is done by the BDOS (Basic Disk
Operating System) which also handles the input, output and the file operations on the disks.
Application programs will normally communicate with the hardware of the system through
system calls which are processed by the BDOS.
The BIOS (Basic IO system) contains the various device drivers which manipulate the physical
devices and this section of the OS may vary from implementation to implementation as it has to
operate directly with the underlying hardware of the computer.
Devices are treated as logical and physical units. Logical devices are software constructs used to simplify the user interface; user programs perform I/O to logical devices and the BDOS connects the logical devices to physical devices. The actual operation of the physical devices is performed by the software in the BIOS.
Access to the OS is by means of subroutine calls, and information is passed in the CPU registers of the machine. These functions cannot be called directly from most high-level languages, and this provides isolation between the OS and a programmer using a high-level language.
There are many different types of OSs, and until the early 1980s there was a clear distinction between OSs designed for use in real-time applications and other types of OS. For example, Module 2 enables us to construct multitasking real-time applications that run on top of a single-user, single-task OS, whereas OSs such as UNIX and OS/2 support multi-user, multitasking applications. The function of a multi-user OS is illustrated in Figure 6.4.
In a multitasking OS it is assumed that there is a single user and that the various tasks cooperate to serve the requirements of the user. Cooperation requires that the tasks communicate with each other and share common data. This is illustrated in Figure 6.5. In a good multitasking OS, task communication and data sharing will be regulated so that the OS is able to prevent inadvertent communication or data access and hence protect data which is private to a task.
A task may use another task, that is, it may require certain activities which are contained in another task to be performed, and it may itself be used by another task. Thus tasks may need to communicate with one another.
In summary, a real-time multitasking OS has to support the resource sharing and the timing requirements of the tasks, and the functions can be divided as follows:
Resource control: control of all shared resources other than memory and CPU time.
Inter-task communication and synchronisation: provision of support mechanisms to provide safe communication between tasks and to enable tasks to synchronise their activities.
In addition to the above, the system has to provide the standard features such as support for disk files, basic I/O device drivers and utility programs. The typical structure is illustrated in Figure 6.6. The overall control of the system is provided by the task management module, which is responsible for allocating the use of the CPU. This module is often referred to as the monitor or as the executive control program.
Scheduling Strategies:
If we consider the scheduling of time allocation on a single CPU there are two basic strategies:
Cyclic.
Pre-emptive.
1. Cyclic: The first of these, cyclic, allocates the CPU to each task in turn. The task uses the CPU for as long as it wishes; when it no longer requires it, the scheduler allocates the CPU to the next task in the list. In general this approach is too restrictive, since it requires that the tasks have similar execution times. It is also difficult to deal with random events using this method.
2. Pre-emptive: There are many pre-emptive strategies. All involve the possibility that a task will be interrupted (hence the term pre-emptive) before it has completed a particular invocation. A consequence of this is that the executive has to make provision to save the volatile environment of each task, since at some later time it will be allocated CPU time and will want to continue from the exact point at which it was interrupted.
1. Tasks are allocated a priority level, and at the end of a predetermined time slice the task with the highest priority of those ready to run is chosen and given control of the CPU (a rough sketch of this selection is given after this list).
2. Task priorities may be fixed (static priority system) or may be changed during system execution (dynamic priority system).
3. Changing priorities is risky, as it makes it much harder to predict and test the behaviour of the system.
4. The task management system also has to deal with the handling of interrupts. These may be hardware interrupts caused by external events, or software interrupts generated by a running task.
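As a rough illustration of point 1 above, the following C sketch shows how a scheduler might select the highest-priority ready task at the end of a time slice. The task table, state names and priority convention are assumptions made for this sketch only, not the interface of any particular RTOS.

#include <stddef.h>

/* Hypothetical task states and task control block (TCB). */
typedef enum { READY, RUNNING, SUSPENDED } task_state_t;

typedef struct {
    const char  *name;
    int          priority;   /* higher number = higher priority (assumed) */
    task_state_t state;
} tcb_t;

#define NUM_TASKS 3
static tcb_t task_table[NUM_TASKS] = {
    { "clock",   10, READY     },
    { "control",  5, READY     },
    { "display",  1, SUSPENDED },
};

/* Called at the end of each time slice: return the highest-priority
 * task that is ready to run, or NULL if none is ready. */
tcb_t *schedule_next(void)
{
    tcb_t *best = NULL;
    for (size_t i = 0; i < NUM_TASKS; i++) {
        if (task_table[i].state == READY &&
            (best == NULL || task_table[i].priority > best->priority)) {
            best = &task_table[i];
        }
    }
    return best;
}

In a real executive the selection would be followed by a context switch which saves the volatile environment of the pre-empted task and restores that of the chosen task.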
Priority Structures:
I. In a real-time system, the designer has to assign priorities to the tasks in the system.
II. The priority will depend on how quickly a task will have to respond to a particular event.
III. Most RTOSs provide facilities such that tasks can be divided into three broad levels (a small illustrative encoding of these bands is given after this list):
IV. Interrupt level: at this level are the service routines for the tasks and devices which require very fast response (measured in milliseconds). Example: the real-time clock task.
V. Clock level: at this level are the tasks which require accurate timing and repetitive processing, such as the sampling and control tasks.
VI. Base level: tasks at this level are of low priority and either have no deadlines to meet or are allowed a wide margin of error in their timing. Tasks at this level may be allocated priorities or may all run at a single priority level.
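A compact way to picture the three bands is as a simple priority encoding. The numeric values below are arbitrary and chosen only to show the ordering (interrupt level above clock level above base level).

/* Hypothetical encoding of the three broad priority bands described
 * above; the exact numeric ranges are an assumption for illustration. */
typedef enum {
    PRIO_INTERRUPT_LEVEL = 200,  /* device/clock service routines     */
    PRIO_CLOCK_LEVEL     = 100,  /* cyclic sampling and control tasks */
    PRIO_BASE_LEVEL      = 10    /* low-priority, loosely timed tasks */
} priority_band_t;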
Clock Level
One interrupt level task will be the real-time clock handling routine, which will be entered at some interval, usually determined by the required activation rate for the most frequently required task. Typical values are 1 to 200 ms. Each clock interrupt is known as a tick and represents the smallest time interval known to the system.
The function of the clock interrupt handling routine is to update the time-of-day clock in the system and to transfer control to the dispatcher. The scheduler selects which task is to run at a particular clock tick. Clock level tasks divide into two categories:
1. CYCLIC: these are tasks which require accurate synchronisation with the outside world.
2. DELAY: these tasks simply wish to have a fixed delay between successive repetitions or to delay their activities for a given period of time.
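A minimal sketch of the clock interrupt handling routine described above, using hypothetical names (tick_count, dispatcher): it updates the time-of-day clock and then transfers control to the dispatcher, which selects the task to run on this tick.

#define TICK_MS 20UL                            /* assumed tick interval    */

static volatile unsigned long tick_count;       /* ticks since system start */
static volatile unsigned long time_of_day_ms;   /* time-of-day clock        */

/* Stub: in a real system this examines the CYCLIC and DELAY tasks
 * and chooses which task is to run on this tick. */
static void dispatcher(void)
{
}

/* Entered on every real-time clock interrupt (every tick). */
void clock_interrupt_handler(void)
{
    tick_count++;
    time_of_day_ms += TICK_MS;   /* update the time-of-day clock       */
    dispatcher();                /* transfer control to the dispatcher */
}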
Cyclic Task.
The cyclic tasks are ordered in a priority which reflects the accuracy of timing required for the task, those which require high accuracy being given the highest priority. Tasks of lower priority within the clock level will have some jitter, since they will have to await completion of the higher-priority tasks.
If the priority order is A, B and C, the interval between successive invocations of each task will be constant (assuming that the execution time for each task is constant). If the priority order is now rearranged so that it is C, A and B, then the activation diagram is as shown in Figure 6.8b, and on every fourth tick of the clock there will be a delay in the timing of tasks A and B. In practice there is unlikely to be any justification for choosing a priority order C, A and B rather than A, B and C. Usually the task with the highest repetition rate will have the most stringent timing requirements and hence will be assigned the highest priority.
Delay Tasks.
The tasks which wish to delay their activities for a fixed period of time, either to allow some external event to complete (for example, a relay may take 20 ms to close) or because they only need to run at certain intervals (for example, to update the operator display), usually run at the base level. When a task requests a delay, its status is changed from runnable to suspended, and it remains suspended until the delay period has elapsed. One method of implementing the delay function is to use a queue of task descriptors, say identified by the name DELAYED. This queue is an ordered list of task descriptors, with the task at the front of the queue being the one whose next running time is nearest to the current time.
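One possible implementation of the DELAYED queue described above is sketched below in C; the task descriptor layout and names are illustrative. Descriptors are kept ordered by wake-up time, so the task at the front of the queue is always the one whose next running time is nearest to the current time.

#include <stddef.h>

/* Hypothetical task descriptor used only for this sketch. */
typedef struct task_desc {
    const char        *name;
    unsigned long      wake_tick;   /* tick at which the delay expires */
    struct task_desc  *next;
} task_desc_t;

static task_desc_t *delayed_head;   /* the DELAYED queue */

/* Insert a task so the queue stays ordered by wake_tick:
 * the task at the front is the next one due to run. */
void delay_task(task_desc_t *t, unsigned long now, unsigned long ticks)
{
    task_desc_t **pp = &delayed_head;

    t->wake_tick = now + ticks;
    while (*pp != NULL && (*pp)->wake_tick <= t->wake_tick)
        pp = &(*pp)->next;
    t->next = *pp;
    *pp = t;                        /* task is now suspended on the queue */
}

With this ordering, on each clock tick the dispatcher only needs to compare the current tick count with the wake-up time of the task at the head of the queue.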
CODE SHARING:
In many applications the same actions have to be carried out in several different tasks. In a
conventional program the actions would be coded as a subroutine and one copy of the subroutine
would be included in the program.
In a multi-tasking system each task must have its own copy of the subroutine or some
mechanism must be provided to prevent one task interfering with the use of the code by another
task. The problems which can arise are illustrated in Figure 6.20.
Two tasks share the subroutine S. If task A is using the subroutine, but before it finishes some event occurs which causes a rescheduling of the tasks, and task B runs and uses the subroutine, then when a return is made to task A, although it will begin to use subroutine S again at the correct place, the values of locally held data will have been changed and will reflect the information processed within the subroutine by task B.
As shown in Figure 6.21, some form of lock mechanism is placed at the beginning of the routine such that, if any task is already using the routine, the calling task will not be allowed entry until the task which is using the routine unlocks it. The use of a lock mechanism to protect a subroutine is an example of the need for mechanisms to support mutual exclusion when constructing an operating system.
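A sketch of the lock mechanism of Figure 6.21, using a POSIX mutex purely as a stand-in for whatever lock primitive the operating system provides: the first task to call the routine locks it, and any other task calling it is held at the entry point until the routine is unlocked.

#include <pthread.h>

static pthread_mutex_t s_lock = PTHREAD_MUTEX_INITIALIZER;
static int s_local_data;        /* data held locally by the subroutine */

/* Shared, serially reusable subroutine S: only one task at a time
 * may execute the body, so s_local_data cannot be corrupted. */
int shared_subroutine_S(int input)
{
    int result;

    pthread_mutex_lock(&s_lock);    /* wait here if another task is inside */
    s_local_data = input * 2;       /* stand-in for the real processing    */
    result = s_local_data + 1;
    pthread_mutex_unlock(&s_lock);  /* let the next waiting task enter     */

    return result;
}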
Re-entrant Code
If the subroutine can be coded such that it does not hold within it any data, that is, it is purely code and any intermediate results are stored in the calling task, then it can be shared safely between tasks; such code is said to be re-entrant. A typical application of re-entrant code is a control algorithm in a process control system with a large number of control loops.
The mechanism is illustrated in Figure 6.23; associated with each control loop is a LOOP descriptor as well as a TASK descriptor. The LOOP descriptor contains information about the measuring and actuation devices for the particular loop: for example, the scaling of the measuring instrument, the actuator limits, the physical addresses of the input and output devices, and the parameters for the PID controller. The PID controller code segment uses the information in the LOOP descriptor and the TASK descriptor to calculate the control value and to send it to the controller. The actual task is made up of the LOOP descriptor, the TASK segment and the PID control code segment. The addition of another loop to the system requires the provision of a new loop descriptor; the actual PID control code remains unchanged.
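The following C sketch mirrors the arrangement of Figure 6.23: the PID code segment holds no data of its own, and everything specific to a loop lives in its LOOP descriptor, so one copy of the code serves any number of loops. The field names and the simplified PID calculation are illustrative assumptions.

/* Hypothetical LOOP descriptor: all data for one control loop. */
typedef struct {
    double kp, ki, kd;        /* PID parameters                 */
    double integral;          /* accumulated integral term      */
    double prev_error;        /* error from the previous sample */
    double out_min, out_max;  /* actuator limits                */
} loop_descriptor_t;

/* Re-entrant PID code segment: holds no data of its own, so several
 * tasks (loops) can use the same copy of this code safely. */
double pid_control(loop_descriptor_t *loop, double setpoint,
                   double measurement, double dt)
{
    double error = setpoint - measurement;
    double derivative = (error - loop->prev_error) / dt;
    double out;

    loop->integral  += error * dt;
    loop->prev_error = error;

    out = loop->kp * error + loop->ki * loop->integral
        + loop->kd * derivative;

    /* clamp to the actuator limits held in the descriptor */
    if (out > loop->out_max) out = loop->out_max;
    if (out < loop->out_min) out = loop->out_min;
    return out;
}

Adding another loop to the system then only requires creating and initialising another loop_descriptor_t; the code segment itself is untouched.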
One of the most difficult areas of programming is the transfer of information to and from external devices. The availability of a well-designed and implemented input/output subsystem (IOSS) in an operating system is essential for efficient programming. The presence of such a system enables the application programmer to perform input or output by means of system calls, either from a high-level language or from the assembler. The IOSS handles all the details of the devices. In a multi-tasking system the IOSS should also deal with all the problems of several tasks attempting to access the same device. A typical IOSS will be divided into two levels, as shown in Figure 6.24. The I/O manager accepts the system calls from the user tasks and transfers the information contained in the calls to the device control block (DCB) for the particular device. The information supplied in the call by the user task will be, for example: the location of a buffer area in which the data to be transferred is stored (output) or is to be stored (input); the amount of data to be transferred; the type of data, for example binary or ASCII; the direction of transfer; and the device to be used.
The actual transfer of the data between the user task and the device will be carried out by the
device driver and this segment of code will make use of other information stored in the DCB. A
separate device driver may be provided for each device or, as is shown in Figure 6.25, a single
driver may be shared between several devices; however, each device will require its own DCB.
The actual data transfer will usually be carried out under interrupt control.
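As a rough picture of the information flow just described, the sketch below shows what a device control block might contain; the struct layout and field names are assumptions for illustration, not the format of any real IOSS.

#include <stddef.h>

typedef enum { IO_INPUT, IO_OUTPUT } io_direction_t;
typedef enum { DATA_BINARY, DATA_ASCII } data_type_t;

/* Hypothetical device control block: filled in by the I/O manager
 * from the user task's system call, read by the device driver. */
typedef struct {
    int             device_id;    /* which physical device to use    */
    void           *buffer;       /* user task's buffer area         */
    size_t          length;       /* amount of data to transfer      */
    data_type_t     data_type;    /* binary or ASCII                 */
    io_direction_t  direction;    /* input or output                 */
    void          (*driver)(void *dcb);  /* driver, possibly shared
                                            between several devices  */
} device_control_block_t;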
There are advantages and disadvantages to each method and a good operating system will
provide the programmer with a choice of actions, although not all options will be available for
every device.
Option 1 is referred to as a non-buffered request, in that the user task and the device have to
rendezvous. In some ways it can be thought of as the equivalent of hardware handshaking - the
user task asks the device 'are you ready?' and waits for a reply from the device before
proceeding.
Option 3 is referred to as a buffered request. It is a form of message passing: the user task passes to the IOSS the equivalent of a letter, consisting of both the message and instructions about the destination of the message, and then continues on the assumption that eventually the message will be delivered, that is, sent to the output device. Usually some mechanism is provided which enables the user task to check if the message has been received: a form of recorded delivery in which the IOSS records that the message has been delivered and allows the user task to check. Buffered input is slightly different, in that the user task invites an external device to send it a message; this can be considered as the equivalent of providing your address to a person or to a group of people. The IOSS will collect the message and deliver it, but it is up to the user task to check its 'mail box' to see if a message has been delivered.
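A toy sketch of a buffered (option 3) output request: the user task hands the IOSS the message and its destination and then continues, and a 'delivered' flag provides the recorded-delivery check mentioned above. The single-slot mailbox and all names are assumptions made to keep the example short.

#include <stdbool.h>
#include <string.h>

#define MSG_MAX 64

/* Single-slot 'mailbox' held inside the IOSS (illustrative only). */
typedef struct {
    int   device_id;            /* destination of the message     */
    char  data[MSG_MAX];        /* the message itself             */
    bool  pending;              /* waiting to be sent to device   */
    bool  delivered;            /* set by the IOSS after transfer */
} io_mailbox_t;

static io_mailbox_t mailbox;

/* Buffered request: copy the message, then return immediately so
 * the user task can continue without waiting for the device. */
bool io_buffered_write(int device_id, const char *msg)
{
    if (mailbox.pending)
        return false;           /* previous message not yet sent  */
    mailbox.device_id = device_id;
    strncpy(mailbox.data, msg, MSG_MAX - 1);
    mailbox.data[MSG_MAX - 1] = '\0';
    mailbox.pending   = true;
    mailbox.delivered = false;
    return true;
}

/* 'Recorded delivery' check: has the IOSS sent the message yet? */
bool io_check_delivered(void)
{
    return mailbox.delivered;
}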
The Monitor:
An alternative solution, which associates the control of mutual exclusion with the resource rather than with the user task, is the monitor, introduced by Brinch Hansen (1973, 1975) and by Hoare (1974). A monitor is a set of procedures that provide access to data or to a device. The procedures are encapsulated inside a module that has the special property that only one task at a time can be actively executing a monitor procedure. It can be thought of as providing a fence around critical data: the operations which can be performed on the data are moved inside the fence, as well as the data itself. The user task thus communicates with the monitor rather than directly with the resource.
Figure 6.30 shows an example of a simple monitor. Two procedures, Write Data and Read Data, provide access to the data. These procedures represent gates through which access to the monitor is obtained; the monitor prevents any other form of access to the critical data. A task wishing to write data calls the procedure Write Data and, as long as no other task is already accessing the monitor, it will be allowed to enter and write new data. If any other task were already using either the Write Data or Read Data operation, then the calling task would be halted at the gate and suspended, since only one task at a time is allowed to be within the monitor fence.
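A minimal monitor sketch along the lines of Figure 6.30, with a POSIX mutex standing in for the rule that only one task may be inside the fence at a time; the procedure names follow the figure, but the implementation details are assumptions.

#include <pthread.h>

/* The critical data and its 'fence'. Only the two gate procedures
 * below can touch monitor_data, and only one task at a time may be
 * inside either of them. */
static pthread_mutex_t monitor_gate = PTHREAD_MUTEX_INITIALIZER;
static int monitor_data;

void WriteData(int value)
{
    pthread_mutex_lock(&monitor_gate);   /* halt at the gate if busy */
    monitor_data = value;
    pthread_mutex_unlock(&monitor_gate);
}

int ReadData(void)
{
    int value;
    pthread_mutex_lock(&monitor_gate);
    value = monitor_data;
    pthread_mutex_unlock(&monitor_gate);
    return value;
}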
There are two main forms of synchronisation involving data transfer. The first involves the producer task simply signalling to say that a message has been produced and is waiting to be collected; the second is to signal that a message is ready and to wait for the consumer task to reach a point where the two tasks can exchange the data.
The first method is simply an extension of the mechanism used in the example in the previous section to signal that a channel was empty or full. Instead of signalling these conditions, a signal is sent each time a message is placed in the channel. Either a generalised semaphore or signal that counts the number of sends and waits, or a counter, has to be used.
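A sketch of the first method using a POSIX counting semaphore to count the sends and waits, as the text suggests; the channel itself is omitted and all names are illustrative.

#include <semaphore.h>

static sem_t messages_available;   /* counts messages placed in the channel */

void channel_init(void)
{
    sem_init(&messages_available, 0, 0);  /* start with no messages */
}

/* Producer: place a message in the channel (not shown), then signal. */
void producer_send(void)
{
    /* ... put message into channel ... */
    sem_post(&messages_available);        /* one more message waiting */
}

/* Consumer: wait until at least one message has been produced. */
void consumer_collect(void)
{
    sem_wait(&messages_available);        /* blocks if count is zero  */
    /* ... take message from channel ... */
}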
LIVENESS:
The liveness of a multitasking system can be threatened by three conditions:
i. Livelock,
ii. Deadlock, and
iii. Indefinite postponement.
Livelock is the condition under which two tasks requiring mutually exclusive access to a set of resources both enter busy-wait routines, but neither can get out of its busy wait because they are waiting for each other. The CPU appears to be doing useful work, hence the term livelock.
Deadlock is the condition in which a set of tasks are in a state such that it is impossible for any of them to proceed: the CPU is free, but there are no tasks that are ready to run.
Indefinite postponement is the condition that occurs when a task is unable to gain access to a resource because some other task always gains access ahead of it.
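As an illustration of how deadlock can arise, the classic case is two tasks acquiring two resources in opposite orders. In the hypothetical sketch below (POSIX mutexes used for the resource locks), if each task obtains its first lock before the other releases, neither can ever proceed.

#include <pthread.h>

static pthread_mutex_t resource_1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t resource_2 = PTHREAD_MUTEX_INITIALIZER;

/* Task A takes resource 1 then resource 2 ... */
void *task_A(void *arg)
{
    pthread_mutex_lock(&resource_1);
    pthread_mutex_lock(&resource_2);   /* may wait forever for task B */
    /* ... use both resources ... */
    pthread_mutex_unlock(&resource_2);
    pthread_mutex_unlock(&resource_1);
    return arg;
}

/* ... while task B takes them in the opposite order: deadlock. */
void *task_B(void *arg)
{
    pthread_mutex_lock(&resource_2);
    pthread_mutex_lock(&resource_1);   /* may wait forever for task A */
    /* ... use both resources ... */
    pthread_mutex_unlock(&resource_1);
    pthread_mutex_unlock(&resource_2);
    return arg;
}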
Minimum OS kernel:
• The idea is to provide a minimum kernel of RTOS support mechanisms and to allow the required additional mechanisms to be constructed for a particular application or group of applications.
Functions:
– A clock interrupt procedure that decrements a time count for relevant tasks.
– A basic task-handling and context-switching mechanism that will support the moving of tasks between queues and the formation of task queues (a compressed sketch of these two functions is given below).
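The sketch below combines the two functions just listed, using assumed names and a singly linked task queue: the clock routine decrements the time count of each delayed task and moves any task whose count reaches zero onto the ready queue.

#include <stddef.h>

/* Illustrative task record for a minimal kernel sketch. */
typedef struct min_task {
    unsigned long    time_count;   /* ticks remaining while delayed */
    struct min_task *next;
} min_task_t;

static min_task_t *delayed_queue;  /* tasks waiting on a time count */
static min_task_t *ready_queue;    /* tasks ready to be dispatched  */

/* Clock interrupt procedure: decrement each delayed task's time
 * count and move any task that reaches zero onto the ready queue. */
void kernel_tick(void)
{
    min_task_t **pp = &delayed_queue;

    while (*pp != NULL) {
        min_task_t *t = *pp;
        if (t->time_count > 0)
            t->time_count--;
        if (t->time_count == 0) {
            *pp = t->next;          /* remove from delayed queue */
            t->next = ready_queue;  /* push onto ready queue     */
            ready_queue = t;
        } else {
            pp = &t->next;
        }
    }
}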
Primitives:
Model Questions:
3. What is task management? List the functions of task management. With a neat diagram, discuss the different tasks.
4. Discuss the significance of memory management and hence explain task chaining and task overlapping.
12. What is code sharing? Explain serially reusable and re-entrant code.
13. List the minimum set of operations that an RTOS kernel needs to support, with examples.
15. Define liveness. List the set of functions and primitives for an RTOS.
16. Explain the problem of shared memory. How are semaphores used to overcome this problem?
18. Three tasks A, B and C are required to run at 1 ms, 6 ms and 25 ms intervals [corresponding to 1 tick, 2 ticks and 4 ticks, if the clock interrupt rate is set at 20 ms]. The task priority order is set as A, B and C, with A having the highest priority. Calculate the delay in invoking task A at every 4th invocation, assuming the tasks run in a cyclic manner.
19. What are the functions of the task management module? Explain the various task states with the help of a state diagram.
22. Three cyclic tasks A, B and C are required to run every 1 tick, 2 ticks and 3 ticks respectively (1 tick is equal to 20 ms). Assume tasks A, B and C consume 5 ms, 8 ms and 10 ms respectively. Draw the task activation diagram for each priority order (context switching time is 0): i. A (highest), B and C; ii. B (highest), A and C.
23. What are the basic functions of the task management module? With system commands, explain the RTOS task state diagram.
24. What do you mean by minimum operating system kernel? List its functions.
25. What is code sharing? How do you overcome code sharing problems? Explain.
27. Explain the different mechanisms supported by an RTOS for the transfer of data between tasks.