
REAL TIME OPERATING SYSTEM
M.E I-I Semester
Arbaaz Khan
161020744001
NSAKCET
UNIT I
1. Unix Kernel:
The UNIX operating system is made up of four parts: the kernel, the shell, the commands and utilities, and the files and directories.

• Kernel: The kernel is the hub of the UNIX operating system: it allocates time and memory to programs and handles file management and communications in response to system calls.
• Shell: The shell is the utility that processes your requests. When you type a command at your terminal, the shell interprets the command and calls the program that you want. The shell uses standard syntax for all commands. The C shell, Bourne shell and Korn shell are the best-known shells, and most Unix variants ship with at least one of them.
• Commands and Utilities: There are various commands and utilities which you can make use of in your day-to-day activities; cp, mv, cat and grep are a few examples. There are over 250 standard commands plus numerous others provided through third-party software, and all the commands come with various options.
• Files and Directories: All the data of Unix is organized into files. All files are then organized into directories. These
directories are further organized into a tree-like structure called the filesystem.

2. File system:
A Linux file system is a structured collection of files on a disk drive or a partition. A partition is a segment of a storage device that contains specific data; a machine can have several partitions, and generally every partition contains a file system.

The Linux file system contains the following sections:

• The root directory (/)


• A specific data storage format (EXT3, EXT4, BTRFS, XFS and so on)
• A partition or logical volume having a particular file system.
• The Linux file system has a hierarchical structure, with a root directory and its subdirectories; all other directories can be accessed from the root directory. A partition usually holds a single file system, although it may hold more than one.
• A file system is designed to manage and provide space for non-volatile storage data. All file systems require a namespace, that is, a naming and organizational methodology. The namespace defines the naming process, the length of file names, and the subset of characters that can be used in them. It also defines the logical structure of files on a memory segment, such as the use of directories for organizing specific files. Once a namespace is described, a metadata description must be defined for each file.
• The data structure needs to support a hierarchical directory structure; this structure is used to describe the available and used disk space for a particular block. It also holds other details about the files, such as file size and the date & time of creation, update, and last modification.
• Also, it stores advanced information about the section of the disk, such as partitions and volumes.
• The advanced data and the structures that it represents contain the information about the file system stored on the
drive; it is distinct and independent of the file system metadata.
• The Linux file system software is implemented as a two-part architecture, described below.

• The file system requires an API (application programming interface) to access the function calls that interact with file system components such as files and directories. The API facilitates tasks such as creating, deleting, and copying files, and it provides the algorithms that define the arrangement of files on the file system (a short sketch follows this list).
• These parts together are called the Linux virtual file system. It provides a single set of commands for the kernel and developers to access any file system; the virtual file system in turn relies on a specific file system driver to provide an interface to each file system.
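As a rough illustration of the kind of operations the API exposes, here is a minimal C sketch using the standard POSIX calls open(), write(), close() and unlink(); the file name demo.txt is just an example:

    /* Minimal sketch of file manipulation through the POSIX API.
       The path "demo.txt" is illustrative. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create (or truncate) a file and obtain a descriptor from the kernel. */
        int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd == -1) {
            perror("open");
            return 1;
        }
        /* write() is a system call; the virtual file system routes it to the
           specific file system driver (EXT4, XFS, ...). */
        write(fd, "hello\n", 6);
        close(fd);

        /* Deleting a file is likewise a single call. */
        unlink("demo.txt");
        return 0;
    }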
3. Process:
A process is defined as an entity which represents the basic unit of work to be implemented in the system.

An instance of a running program is called a process. Every time you run a shell command, a program is run and a process is
created for it. Each process in Linux has a process id (PID) and it is associated with a particular user and group account.

Linux is a multiprocessing operating system; its objective is to have a process running on each CPU in the system at all times, to maximize CPU utilization. If there are more processes than CPUs (and there usually are), the remaining processes must wait until a CPU becomes free before they can run.

The kernel itself is not a process but a process manager. The process/kernel model assumes that processes that require a kernel
service use specific programming constructs called system calls. Each system call sets up the group of parameters that identifies
the process request and then executes the hardware-dependent CPU instruction to switch from User Mode to Kernel Mode.

"Real time" (for a process) refers to the scheduling algorithm, or the thinking the kernel does when it decides which process
gets to run. A real time process will preempt all other processes (of lesser scheduling weight) when an interrupt is received and
it needs to run.
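As a small illustration, the following C sketch prints the identifiers just described, using the standard calls getpid(), getppid() and getuid():

    /* Every process has a PID and is associated with a user account. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        printf("pid  = %d\n", (int)getpid());   /* this process's ID   */
        printf("ppid = %d\n", (int)getppid());  /* parent process's ID */
        printf("uid  = %d\n", (int)getuid());   /* owning user's ID    */
        return 0;
    }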

4. Concurrent Execution & Interrupts:


Concurrency is the execution of multiple instruction sequences at the same time. It happens in an operating system when several process threads are running in parallel. Running process threads communicate with each other through shared memory or message passing. Concurrency involves sharing resources, which can cause problems such as deadlock and resource starvation.
It helps in techniques like coordinating the execution of processes, memory allocation, and execution scheduling for maximizing throughput.

Principles of Concurrency:
Both interleaved and overlapped processes can be viewed as examples of concurrent processes; both present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
• The activities of other processes
• The way operating system handles interrupts
• The scheduling policies of the operating system

Problems in Concurrency:
• Sharing global resources
Sharing global resources safely is difficult. If two processes both make use of a global variable and both perform reads and writes on that variable, then the order in which the various reads and writes are executed is critical.
• Optimal allocation of resources
It is difficult for the operating system to manage the allocation of resources optimally.
• Locating programming errors
It is very difficult to locate a programming error because reports are usually not reproducible.
• Locking the channel
It may be inefficient for the operating system to simply lock a channel and prevent its use by other processes.

Advantages of Concurrency:
• Running of multiple applications
It enables multiple applications to run at the same time.
• Better resource utilization
Resources that are unused by one application can be used by other applications.
• Better average response time
Without concurrency, each application has to be run to completion before the next one can be run.
• Better performance
It enables the better performance by the operating system. When one application uses only the processor and another
application uses only the disk drive then the time to run both applications concurrently to completion will be shorter
than the time to run each application consecutively.
Drawbacks of Concurrency:
• It is required to protect multiple applications from one another.
• It is required to coordinate multiple applications through additional mechanisms.
• Additional performance overheads and complexities in operating systems are required for switching among
applications.
• Sometimes running too many applications concurrently leads to severely degraded performance.
Issues of Concurrency:
• Non-atomic
Operations that are non-atomic but interruptible by multiple processes can cause problems.
• Race conditions
A race condition occurs when the outcome depends on which of several processes gets to a point first (see the sketch after this list).
• Blocking
Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.
• Starvation
It occurs when a process does not obtain the service it needs to make progress.
• Deadlock
It occurs when two processes are blocked waiting for each other, so neither can proceed to execute.
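A minimal sketch of a race condition using POSIX threads (compile with -pthread); the final count is frequently below the expected 2,000,000 because counter++ is not atomic:

    /* Two threads increment a shared counter without synchronization;
       counter++ is a load, add, store sequence, so updates can be lost. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;              /* unprotected read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 2000000, but often less: a race condition. */
        printf("counter = %ld\n", counter);
        return 0;
    }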

Interrupts:
An interrupt is an event that changes the sequence of instructions executed by the processor.
There are two different kinds of interrupts:
1. Synchronous interrupt (Exception) produced by the CPU while processing instructions
2. Asynchronous interrupt (Interrupt) issued by other hardware devices

Exceptions are caused by programming errors (e.g. divide error, page fault, overflow) and must be handled by the kernel, which sends a signal to the program and tries to recover from the error.
Exceptions are classified into two kinds:
• Processor-detected exceptions, generated by the CPU when it detects an anomalous condition; these are divided into three groups: faults (can generally be corrected), traps (reported immediately after the execution of the trapping instruction), and aborts (serious errors).
• Programmed exceptions, requested by the programmer and handled like traps.

Interrupts can be issued by I/O devices (keyboard, network adapter, ...), interval timers and (on multiprocessor systems) other CPUs. When an interrupt occurs, the CPU must suspend its current instruction stream and execute the newly arrived interrupt's handler. It needs to save the interrupted process's state in order to (probably) resume it after the interrupt is handled.
Handling interrupts is a sensitive task:
• Interrupts can occur at any time; the kernel tries to get it out of the way as soon as possible
• An interrupt can be interrupted by another interrupt
• There are regions in the kernel which must not be interrupted at all

Two different interrupt levels are defined:


• Maskable interrupts, issued by I/O devices; they can be in two states, masked or unmasked. Only unmasked interrupts get processed.
• Nonmaskable interrupts; critical malfunctions (i.e. hardware failure); always processed by the CPU.

Every hardware device has its own Interrupt Request (IRQ) line. The IRQs are numbered starting from 0. All IRQ lines are
connected to a Programmable Interrupt Controller (PIC). The PIC listens on IRQs and assigns them to the CPU. It is also
possible to disable a specific IRQ line.

Modern multiprocessing Linux systems generally include the newer Advanced PIC (APIC), which distributes IRQ requests
equally between the CPUs.

The mid-step between an interrupt or exception and its handling is the Interrupt Descriptor Table (IDT). This table associates each interrupt or exception vector (a number) with a specified handler (e.g. the divide error exception is handled by the function divide_error()). Through the IDT, the kernel knows exactly how to handle the interrupt or exception that occurred.

5. Process Management:

• A process is usually defined as the instance of the running program. Process management is an integral part of any
modern day operating system (OS). Process management describes how the operating systems manage the multiple
processes running at a particular instance of time.
• The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources
of each process from other processes and enable synchronization among processes. To meet these requirements, the
OS must maintain a data structure for each process, which describes the state and resource ownership of that process,
and which enables the OS to exert control over each process.
• Several processes can run at the same time and must all be handled by the operating system. The kernel needs to maintain data for each process it is running. The data tells the OS what state the process is in, what files are open, which user is running it, etc. This information is maintained in a process structure, also called the Process Control Block (PCB).
• The operating system tracks processes through a five-digit ID number known as the pid or the process ID. Each
process in the system has a unique pid.
• Pids eventually repeat because all the possible numbers are used up and the next pid rolls or starts over. At any point
of time, no two processes with the same pid exist in the system because it is the pid that Unix uses to track each process.

Every application (program) comes into execution by means of a process; a process is a running instance of a program. Processes are created through different system calls; the most popular are fork() and exec().

fork() creates a new process by duplicating the calling process. The new process, referred to as the child, is an exact duplicate of the calling process, referred to as the parent, except for the following:
1. The child has its own unique process ID, and this PID does not match the ID of any existing process group.
2. The child’s parent process ID is the same as the parent’s process ID.
3. The child does not inherit its parent’s memory locks and semaphore adjustments.
4. The child does not inherit outstanding asynchronous I/O operations from its parent nor does it inherit any asynchronous
I/O contexts from its parent.

Return value of fork()


On success, the PID of the child process is returned in the parent, and 0 is returned in the child. On failure, -1 is returned in the parent, no child process is created, and errno is set appropriately.

The exec() family of functions replaces the current process image with a new process image. It loads the program into the current process space and runs it from the entry point.

The main difference between fork() and exec() is:


The fork system call creates a new process. The new process created by fork() is a copy of the current process except for the
returned value. The exec() system call replaces the current process with a new program.
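A minimal C sketch of both calls together: the parent forks a child, the child replaces itself with ls via execlp(), and the parent waits (the choice of ls is illustrative):

    /* fork() duplicates the caller; exec*() replaces the child's image. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == -1) {            /* failure: no child was created */
            perror("fork");
            exit(1);
        } else if (pid == 0) {      /* child: fork() returned 0 */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");       /* reached only if exec failed */
            _exit(127);
        } else {                    /* parent: fork() returned the child's PID */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }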
6. Programming with system calls:
System calls in Unix are used for file system control, process control, interprocess communication, etc. Access to the Unix kernel is only available through these system calls. Generally, system calls are similar to function calls; the difference is that they transfer control from the user process to the kernel. There are around 80 system calls in the Unix interface currently. Details about some of the important ones are given below. (The names shown are the Windows API forms of these calls; rough Unix counterparts are noted in parentheses.) A short pipe example follows the table.

System Call         Description

CreateProcess()     A new process is created using this call (Unix: fork()/exec()).

ExitProcess()       This system call is used to exit a process (Unix: exit()).

CreateFile()        A file is created or opened using this system call (Unix: open()).

ReadFile()          Data is read from the file using this system call (Unix: read()).

WriteFile()         Data is written into the file using this system call (Unix: write()).

CloseHandle()       This system call closes the file currently in use (Unix: close()).

SetTimer()          This system call sets the alarm or the timer of a process (Unix: alarm()).

CreatePipe()        A pipe is created using this system call (Unix: pipe()).

SetFileSecurity()   This system call sets the security of a file (Unix: chmod()).

SetConsoleMode()    This sets the input mode or output mode of the console's input buffer or output screen buffer.

ReadConsole()       This reads characters from the console input buffer (Unix: read() on the terminal).

WriteConsole()      This writes characters into the console output buffer (Unix: write() on the terminal).
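As a small illustration of one of these facilities, the following C sketch uses the Unix pipe() and fork() calls to pass a message from a child to its parent:

    /* A pipe carrying one message from child to parent. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];                 /* fds[0] = read end, fds[1] = write end */
        char buf[32];

        if (pipe(fds) == -1) {
            perror("pipe");
            return 1;
        }
        if (fork() == 0) {          /* child writes */
            close(fds[0]);
            write(fds[1], "ping", 5);   /* 5 bytes include the '\0' */
            close(fds[1]);
            _exit(0);
        }
        close(fds[1]);              /* parent reads */
        read(fds[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        close(fds[0]);
        wait(NULL);
        return 0;
    }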


7. Process Scheduling:
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
The scheduler is the component of the kernel that selects which process to run next. The scheduler (or process scheduler,
as it is sometimes called) can be viewed as the code that divides the finite resource of processor time between the runnable
processes on a system.
Scheduling falls into one of two general categories:
• Pre-emptive Scheduling: In Preemptive Scheduling, the tasks are mostly assigned with their priorities. Sometimes it
is important to run a task with a higher priority before another lower priority task, even if the lower priority task is still
running. The lower priority task holds for some time and resumes when the higher priority task finishes its execution.

• Non-Pre-emptive Scheduling: In this type of scheduling method, once the CPU has been allocated to a specific process, the process keeps the CPU until it releases it, either by switching context or by terminating; the currently executing process gives up the CPU voluntarily. It is the only method that can be used across various hardware platforms, because unlike preemptive scheduling it does not need special hardware (for example, a timer).

Types of CPU scheduling Algorithm


There are mainly six types of process scheduling algorithms
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling

1. First Come First Serve


First Come First Serve is the full form of FCFS. It is the easiest and simplest CPU scheduling algorithm. In this type of algorithm, the process which requests the CPU first gets the CPU allocation first. This scheduling method can be managed with a FIFO queue.
As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the queue. So, when the CPU becomes free, it is assigned to the process at the head of the queue.

Characteristics of FCFS method:


• It is a non-preemptive scheduling algorithm: once a process has the CPU, it keeps it until it finishes.
• Jobs are always executed on a first-come, first-served basis.
• It is easy to implement and use.
• However, this method is poor in performance, and the general wait time is quite high; a small worked example follows.
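As a small worked example (burst times assumed for illustration): suppose processes P1, P2 and P3 arrive at time 0 with burst times of 24, 3 and 3 time units. Under FCFS they run in arrival order, so the waiting times are 0, 24 and 27, giving an average waiting time of (0 + 24 + 27)/3 = 17. Had the two short jobs run first, the average would have been only (0 + 3 + 6)/3 = 3; this "convoy effect" is why FCFS's average wait time is poor.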

2. Shortest Remaining Time


The full form of SRT is Shortest Remaining Time. It is also known as SJF preemptive scheduling. In this method, the processor is allocated to the task that is closest to its completion. This method prevents a newer ready-state process from holding up the completion of an older process.

Characteristics of SRT scheduling method:


• This method is mostly applied in batch environments where short jobs are required to be given preference.
• This is not an ideal method to implement in a shared system where the required CPU time is unknown.
• Each process is associated with the length of its next CPU burst; the operating system uses these lengths to schedule the process with the shortest possible time first.

3. Priority Based Scheduling


Priority scheduling is a method of scheduling processes based on priority: the scheduler selects tasks to work on according to their priority.
Priority scheduling lets the OS assign explicit priorities. Processes with higher priority are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority can be decided based on memory requirements, time requirements, etc.
4. Round-Robin Scheduling
Round robin is the oldest and simplest scheduling algorithm. Its name comes from the round-robin principle, where each person gets an equal share of something in turn. It is mostly used for scheduling in multitasking systems. This algorithm provides starvation-free execution of processes.

Characteristics of Round-Robin Scheduling


• Round robin is a hybrid, clock-driven model.
• A minimum time slice is assigned for a specific task to be processed, though it may vary for different processes.
• Because every process is guaranteed the CPU within a bounded interval, round robin suits systems that must respond to events within a specific time limit.

5. Shortest Job First


SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest execution time is selected for execution next. This scheduling method can be preemptive or non-preemptive. It significantly reduces the average waiting time for other processes awaiting execution.

Characteristics of SJF Scheduling


• Each job is associated with the unit of time it needs to complete.
• In this method, when the CPU is available, the next process or job with the shortest completion time is executed first.
• It is commonly implemented with a non-preemptive policy.
• This algorithm is useful for batch-type processing, where waiting for jobs to complete is not critical.
• It improves throughput by running shorter jobs first, as these mostly have a shorter turnaround time.

6. Multiple-Level Queues Scheduling


This algorithm separates the ready queue into various separate queues. In this method, processes are assigned to a queue
based on a specific property of the process, like the process priority, size of the memory, etc.
However, this is not an independent scheduling OS algorithm as it needs to use other types of algorithms in order to
schedule the jobs.

Characteristic of Multiple-Level Queues Scheduling:


• Multiple queues should be maintained for processes with some characteristics.
• Every queue may have its separate scheduling algorithms.
• Priorities are given for each queue.

The Purpose of a Scheduling algorithm


Here are the reasons for using a scheduling algorithm:
• The CPU uses scheduling to improve its efficiency.
• It helps you to allocate resources among competing processes.
• The maximum utilization of CPU can be obtained with multi-programming.
• The processes which are to be executed are kept in the ready queue.

8. Shell programming and filters:


When you login to a Unix system, a program called a shell process is run for you. A shell process is a command interpreter that
provides you with an interface to the operating system. A shell script is just a file of commands, normally executed by a shell
process that was spawned to run the script. The contents of the script file can just be ordinary commands as would be entered
at the command prompt, but all standard command interpreters also support a scripting language to provide control flow and
other capabilities analogous to those of high level languages.
There is more than one shell available on most Unix systems. The most popular ones in use are:
• Bourne shell (sh) - Unix System V (developed by Steve Bourne at Bell Laboratories; the "grandfather of all UNIX shells")
• C shell (csh) - Berkeley Unix; so known because its commands are C-like (created as an alternative to the "bare bones" provided by the Bourne shell)
• Korn shell (ksh) - extends the Bourne shell (developed by David Korn at Bell Laboratories; sites often have the 1988 version rather than the 1993 version)
• GNU Bourne-Again SHell (bash) - extends the Bourne shell and also has features from the Korn shell
• Z shell (zsh) - designed by Paul Falstad of Princeton; a superset of the Korn shell, with added C shell features
• Enhanced C shell (tcsh) - an extension of the C shell (note: there are strong dissents regarding the C shell)

The shell, or command interpreter, is a program started after the user session is opened by the login process. The shell stays active until it receives the character that signals a request to terminate execution, at which point it informs the operating system kernel of that fact.
Each user obtains their own separate instance of the shell. The shell prints a prompt on the screen showing its readiness to read the next command.

The shell interpreter works based on the following scenario:


1. Displays a prompt.
2. Waits for text to be entered from the keyboard.
3. Analyses the command line and finds a command.
4. Submits the command to the kernel for execution.
5. Accepts the answer from the kernel and again waits for user input.

A filter is a small and (usually) specialized program in Unix-like operating systems that transforms plain text (i.e., human
readable) data in some meaningful way and that can be used together with other filters and pipes to form a series of operations
that produces highly specific results.
As is generally the case with command-line (i.e., all-text mode) programs in Unix-like operating systems, filters read data from standard input and write to standard output. Standard input is the source of data for a program; by default it is text typed at the keyboard, but it can be redirected to come from a file or from the output of another program. Standard output is the destination of output from a program; by default it is the display screen. This means that if the output of a command is not redirected to a file or another device (such as a printer) or piped to another filter for further processing, it will be sent to the monitor, where it will be displayed.
Consider, for example, the pipeline
    ls /sbin | grep mk | sort -r | head
The ls command lists the contents of /sbin and pipes its output (using the pipe operator, which is designated by the vertical bar character) to the filter grep, which searches for all files and directories that contain the letter sequence mk in their names. grep then pipes its output to the sort filter, which, with its -r option, sorts it in reverse alphabetical order. sort, in turn, pipes its output to the head filter, which by default displays the first ten lines of its input.

9. Portable Operating System Interface (POSIX):


POSIX stands for Portable Operating System Interface, and is an IEEE standard designed to facilitate application portability.
POSIX is an attempt by a consortium of vendors to create a single standard version of UNIX. If they are successful, it will make it easier to port applications between hardware platforms: enterprises using computers wanted to be able to develop programs that could be moved among different manufacturers' computer systems without having to be recoded.

There are more than ten parts to the POSIX standard, but two are widely available.

POSIX.1 defines C programming interfaces (that is, a library of system calls) for files, processes, and terminal I/O. To support
the library, a POSIX system must implement a Hierarchical File System (HFS).

POSIX.2 defines a "shell" command interpreter and utilities (e.g., ls to list files).

Certain important standards are not covered by POSIX (for example, spooling, batch processing, and NLS -- Native Language
Support). NLS is defined by the X/Open standard.

IEEE Standard 1003.13:


POSIX 1003.13 is a subprofile standard of 1003.1-2001.
• It allows diverse realtime operating systems "clothed" with a runtime library to comply.
• It standardizes the application-to-RTOS API, allowing considerable application code portability between different RTOS offerings; such portability had not been possible in the past.
• RTOS+wrapper offerings can be compared and compete directly.
• There are currently four profiles.

POSIX Real Time Profile:


Defines four real-time system subsets (profiles)
• Minimal: Small embedded systems
➢ Platform: Small embedded system, with no MMU, no disk, no terminal
➢ Model: controller of a “Toaster”
• Controller: Industrial controllers
➢ Platform: Special purpose controller, with no MMU, but with a disk containing a simplified file system
➢ Model: industrial robot controller
• Dedicated: Large embedded systems
➢ Platform: Large embedded system with file system on disk, with an MMU; software is complex and requires memory protection and network communications.
➢ Models: avionics controller, cellular phone cell node
• Multi-Purpose: Large general-purpose systems with realtime requirements
➢ Platform: Large real-time system with all the features, including a development environment, network communications, file system on disk, terminal and graphical user interfaces, etc.
➢ Model: workstation with realtime requirements:
▪ air traffic control systems
▪ telemetry systems for Formula One racing cars

Fig.: POSIX 1003.13 Profiles

POSIX versus traditional Unix signals:
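In brief: the traditional Unix signal() call installs handlers whose semantics vary between systems (System V reset the disposition to SIG_DFL once a handler fired, so signals could be lost), while POSIX sigaction() installs persistent handlers with explicit control over masking and restart behaviour. A minimal sketch of the POSIX style:

    /* POSIX sigaction() vs. traditional signal(): sigaction keeps the
       handler installed and lets flags control restart/masking behaviour. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int sig)
    {
        (void)sig;
        got_signal = 1;             /* only async-signal-safe work here */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);   /* no extra signals blocked in the handler */
        sa.sa_flags = SA_RESTART;   /* restart interrupted system calls */
        sigaction(SIGINT, &sa, NULL);

        while (!got_signal)
            pause();                /* sleep until a signal arrives */
        printf("caught SIGINT via sigaction()\n");
        return 0;
    }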

Overheads and Timing Predictability:


UNIT II
1. Hard versus Soft Real-time systems:
A real-time operating system is an operating system intended to serve real-time applications that process data as it comes in,
typically without delay. Real-time systems can be classified into two types:
▪ Hard Real Time Systems
▪ Soft Real Time Systems

Hard Real Time System (Immediate Real-Time System):

A hard real-time system, also referred to as an immediate real-time system, is hardware or software that must operate within the confines of a stringent, predefined deadline. The application is considered to have failed if it does not complete its function within that timeline. The response time of a hard real-time system is on the order of milliseconds, and missing a deadline results in complete or massive system failure; such a system must therefore never miss a deadline.
Soft Real Time System:
A soft real-time system is an operating system in which a critical real-time task gets priority over other tasks and retains that priority until it completes. The response-time requirements of soft real-time systems are not very stringent, so missing a deadline only affects performance, not the entire system.

COMPARISON OF HARD AND SOFT REAL TIME SYSTEMS

Data File Size
Hard: the size of the data files is small or medium.
Soft: the size of the data files is large.

Restrictive Nature
Hard: a hard real-time system is very restrictive.
Soft: a soft real-time system is less restrictive in nature.

Response Time
Hard: the response time is on the order of milliseconds; missing a deadline results in complete or massive system failure, so the system must not miss a deadline.
Soft: the response-time requirements are not very stringent; missing a deadline only affects performance, not the entire system, and deadlines may occasionally be missed.

Peak Load
Hard: peak-load performance should be predictable and should not violate the predefined deadlines.
Soft: a degraded operation in a rarely occurring peak load can be tolerated.

Conditional Requirement
Hard: the system must remain synchronous with the state of the environment at all times.
Soft: the system will slow down its response time if the load is very high.

Database Size & Integrity
Hard: most hard real-time systems have small databases and occasionally require short-term integrity.
Soft: most soft real-time systems have larger databases and require long-term integrity.

Error Handling
Hard: in case of an error, the computation is rolled back, or recovery is of limited use.
Soft: in case of an error, the computation is rolled back to a previously established checkpoint to initiate a recovery action.

Completion of Task/Activity
Hard: completion of a task or activity is predefined or deterministic.
Soft: completion of a task or activity is probabilistic.

Validation
Hard: users obtain validation when required.
Soft: users do not always obtain validation.

Examples
Hard: inkjet printer systems, railway signalling systems, air traffic control systems, nuclear reactor control systems, anti-missile systems.
Soft: DVD players, electronic games, multimedia systems, web browsing, online transaction systems, telephone switches, virtual reality, weather stations, mobile communication, etc.
2. Jobs & Processors:
▪ Each unit of work that is scheduled and executed by the system is a job.
▪ A set of related jobs which jointly provides some system function is called a task.
▪ Computation of a control law is a job; so is the computation of an FFT (Fast Fourier Transform) of sensor data, transmission of a data packet, file retrieval, and so on.
▪ Rather than using different verbs (e.g. compute, transmit) for different types of jobs, we say a job executes or is executed by the system.
▪ The jobs mentioned above execute on a CPU, a network or a disk. All these resources are called processors, except in cases where we want to be specific.
▪ Processors have attributes (pre-emptivity, context switch time, speed). Two processors are of the same type if they are functionally identical and can be used interchangeably.
▪ According to this model, the basic components of any real-time application are jobs. The operating system treats each job as a unit of work and allocates processors and resources to it.
3. Hard and Soft Timing Constraints:
3.1. Deadline:
A deadline is a timing milestone. If a deadline is missed by a computer-controller, the controlled system may transit to an
undesirable state.
In hard real-time systems, according to the usual definition, a deadline that is not met can lead to a catastrophic failure. This
means that the criteria used to establish deadlines are safety based. Control system engineers, on the other hand, use performance
criteria to establish the desired response time of a controlling computer.
The deadlines suggested by these scientific communities are not mutually exclusive, but are different entities perceived in particular, equally important contexts. Moreover, they show the disassociation of a controller's timing constraints into those related to safety (hard deadlines) and those related to performance (performance deadlines).
Performance deadlines are usually more confining than hard deadlines. Therefore, a computer controller designed to meet performance deadlines does not drive the controlled system to an unsafe state as soon as one of them is missed, but only later, when a hard deadline is violated. Performance and hard deadlines are thus separated by a grace time. This notion can help in the design of low-cost, yet highly reliable, control systems.
3.2. Time Constraints:
Real-time systems are usually classified into soft and hard. Classically, in a soft real-time system, missing a deadline is inconvenient but not damaging to the environment; in a hard real-time system, missing a deadline can be catastrophic, and thus unacceptable.
The traditional view of the temporal merit of a hard real-time computation (i.e., the relationship between the computation's completion time and the resulting temporal merit of that computation) is usually modelled by a step time-value function: if a controller service is completed before a given deadline it yields a constant positive value, while completing it any time later may incur a catastrophic failure. From this point of view, hard deadlines are established in a safety-based context.
This means that when a computer is part of a hard real-time system, all the software running on it has to be tuned to satisfy all
controlled system deadlines.
It is commonly accepted that a controlled process can sporadically tolerate a missed deadline, if not by much. This notion
presupposes a controller not tuned to meet hard deadlines but some other kind of time limit.
However, the characterisation of a deadline is by itself a relatively unexplored problem in the real-time community. Most of
the literature seems to consider that deadlines are somehow provided by others, possibly by control system engineers. Moreover,
techniques to calculate systems’ deadlines are very seldom presented.
Nevertheless, soft and hard deadlines are universally used, and many suggestions appear in the literature reasoning about the
existence of other kinds of deadlines, besides these classical ones. Moreover, there is a growing tendency to classify real-time
services according to their associated benefit/cost functions, and to establish on them a set of pertinent points of time concerning
application goals. This means that there is a growing feeling that traditional definitions and interpretations of deadlines are poor
since they cannot describe reality in a satisfactory way nor can be explicitly employed for a best-effort scheduling.
3.3. Hard and Soft Timing Constraint:
According to a commonly used definition, a timing constraint or deadline is hard if the failure to meet it is considered to be a
fatal fault. A hard deadline is imposed on a job because a late result produced by the job after the deadline may have disastrous
consequences. (As examples, a late command to stop a train may cause a collision, and a bomb dropped too late may hit a
civilian population instead of the intended military target.)
In contrast, the late completion of a job that has a soft deadline is undesirable. However, a few misses of soft deadlines do no
serious harm; only the system’s overall performance becomes poorer and poorer when more and more jobs with soft deadlines
complete late.
This definition of hard and soft deadlines invariably leads to the question of whether the consequence of a missed deadline is
indeed serious enough. The question of whether a timing constraint is hard or soft degenerates to that of how serious is serious.
In real-time systems literature, the distinction between hard and soft timing constraints is sometimes stated quantitatively in
terms of the usefulness of results (and therefore the overall system performance) as functions of the tardinesses of jobs.

The tardiness of a job measures how late it completes relative to its deadline. Its tardiness is zero if the job completes at or before its deadline; otherwise, if the job is late, its tardiness is equal to the difference between its completion time (i.e., the time instant at which it completes execution) and its deadline.
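Restating the definition compactly: tardiness = max(0, completion time - deadline); a job that finishes early has tardiness zero.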

The usefulness of a result produced by a soft real-time job (i.e., a job with a soft deadline) decreases gradually as the tardiness
of the job increases, but the usefulness of a result produced by a hard real-time job (i.e., a job with a hard deadline) falls off
abruptly and may even become negative when the tardiness of the job becomes larger than zero.

The deadline of a job is softer if the usefulness of its result decreases at a lower rate. By this means, we can define a spectrum
of hard/soft timing constraints. This quantitative measure of hardness and softness of deadlines is sometimes useful.

It is certainly more appealing to computer scientists and engineers who have been trained not to rely on handwaving, qualitative
measures. However, there is often no natural choice of usefulness functions. When choices are made, it is difficult to validate
that the choices are sound and that different measures of the overall system performance as functions of tardinesses indeed
behave as specified by the usefulness functions.

Consequently, this kind of quantitative measure is not as rigorous as it appears to be. Sometimes, we see this distinction made
on the basis of whether the timing constraint is expressed in deterministic or probabilistic terms. If a job must never miss its
deadline, then the deadline is hard. On the other hand, if its deadline can be missed occasionally with some acceptably low
probability, then its timing constraint is soft.

An example is a requirement that the system recovery job or a point-of-sale transaction complete within one minute 99.999 percent of the time; in other words, the probability of failing to meet the one-minute relative deadline is 10^-5. This definition completely ignores the consequence of a timing failure. In our example, if the failure of an on-time recovery could cause loss of life and property, we would require a rigorous demonstration that the timing failure probability is indeed never more than 10^-5. However, we would not require a demonstration of nearly the same rigor for a credit validation.

4. SCHEDULING:
When a computer is multiprogrammed, it frequently has multiple processes or threads competing for the CPU at the same time.
This situation occurs whenever two or more of them are simultaneously in the ready state. If only one CPU is available, a choice
has to be made which process to run next. The part of the operating system that makes the choice is called the scheduler, and
the algorithm it uses is called the scheduling algorithm. These topics form the subject matter of the following sections. Many
of the same issues that apply to process scheduling also apply to thread scheduling, although some are different. When the
kernel manages threads, scheduling is usually done per thread, with little or no regard to which process the thread belongs.
Initially we will focus on scheduling issues that apply to both processes and threads. Later on, we will explicitly look at thread scheduling and some of the unique issues it raises.

Rate Monotonic Scheduling Algorithm:


The Rate Monotonic (RM) scheduling algorithm is a simple rule that assigns priorities to tasks according to their time periods: the task with the smallest period has the highest priority, and the task with the longest period has the lowest priority. Since a task's period does not change, its priority does not change over time either; rate monotonic is therefore a fixed-priority algorithm. Priorities are decided before execution starts and do not change afterwards.

Rate monotonic scheduling works on the principle of preemption. Preemption occurs on a given processor when a higher-priority task blocks a lower-priority task from execution; this blocking follows from the priority levels of the tasks in a given task set. Rate monotonic is a preemptive algorithm, which means that if a task with a shorter period arrives during execution, it gains a higher priority and can block or preempt the currently running task. In RM, priorities are assigned according to time period: the priority of a task is inversely proportional to its time period, so the task with the smallest period has the highest priority and the task with the largest period has the lowest. A task set is schedulable under the rate monotonic scheduling algorithm if it satisfies the following equation (the Liu and Layland utilization bound):

    U = (C1/T1) + (C2/T2) + ... + (Cn/Tn) <= n(2^(1/n) - 1)

where n = the number of processes in the process set,
Ci = the computation time of process i,
Ti = the time period of process i, and
U = the processor utilization.

For example, consider a task set that consists of the following three tasks:

Table: Task Set

    Task   Execution time Ci   Period Ti
    T1     0.5                 3
    T2     1                   4
    T3     2                   6

U = 0.5/3 + 1/4 + 2/6 = 0.167 + 0.25 + 0.333 = 0.75


The processor utilization 0.75 is less than 1 (100%) and also below the bound 3(2^(1/3) - 1) ≈ 0.780, so the task set satisfies the rate monotonic schedulability equation above and is schedulable.
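The bound is easy to check mechanically; a minimal C sketch (compile with -lm) using the task values from the table above:

    /* Liu & Layland rate monotonic schedulability test:
       U = sum(Ci/Ti) <= n * (2^(1/n) - 1). */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double C[] = {0.5, 1.0, 2.0};   /* execution times */
        double T[] = {3.0, 4.0, 6.0};   /* periods         */
        int n = 3;

        double U = 0.0;
        for (int i = 0; i < n; i++)
            U += C[i] / T[i];

        double bound = n * (pow(2.0, 1.0 / n) - 1.0);
        /* The test is sufficient but not necessary, so a failure is
           "inconclusive" rather than "unschedulable". */
        printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
               U <= bound ? "schedulable under RM" : "RM test inconclusive");
        return 0;
    }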

Figure: RM scheduling of Task set of above table

The RM schedule of the task set in the table above is shown in the figure. The explanation is as follows:

1. According to the RM scheduling algorithm, the task with the shortest period has the highest priority, so T1 has high priority, T2 has intermediate priority and T3 has the lowest priority. At t=0 all the tasks are released; T1 has the highest priority, so it executes first, till t=0.5.

2. At t=0.5, task T2 has higher priority than T3, so it executes for one time unit, till t=1.5. After its completion, only one task remains in the system, T3, so it starts executing and runs till t=3.

3. At t=3, T1 is released; as it has higher priority than T3, it preempts T3 and executes till t=3.5. After that, the remaining part of T3 executes.

4. At t=4, T2 is released and completes its execution, as no other task is running in the system at this time.

5. At t=6, both T1 and T3 are released at the same time, but T1 has higher priority due to its shorter period, so it preempts T3 and executes till t=6.5; after that T3 starts running and executes till t=8.

6. At t=8, T2, with higher priority than T3, is released, so it preempts T3 and starts its execution.

7. At t=9, T1 is released again; it preempts T3 and executes first, and at t=9.5 T3 executes its remaining part. The execution continues in this manner.

Earliest Deadline First (EDF):

It is an optimal dynamic-priority scheduling algorithm used in real-time systems. It can be used for both static and dynamic real-time scheduling.

EDF uses priorities for scheduling jobs, assigning them according to the absolute deadline: the task whose deadline is closest gets the highest priority. Priorities are assigned and changed in a dynamic fashion. EDF is very efficient compared with other real-time scheduling algorithms: it can drive CPU utilization to about 100% while still guaranteeing the deadlines of all the tasks.

In EDF, if the CPU usage is less than 100%, it means that all the tasks have met their deadlines. EDF finds an optimal feasible schedule; a feasible schedule is one in which all the tasks in the system execute within their deadlines. If EDF is not able to find a feasible schedule for all the tasks in the real-time system, then no other task scheduling algorithm can give a feasible schedule either (although, as noted under the limitations below, EDF behaves poorly under transient overload). All tasks that are ready for execution should announce their deadline to EDF when they become runnable.

The EDF scheduling algorithm does not need the tasks or processes to be periodic, nor do the tasks or processes require a fixed CPU burst time. In EDF, any executing task can be preempted if another periodic instance with an earlier deadline is ready for execution and becomes active. Preemption is allowed in the Earliest Deadline First scheduling algorithm.
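A minimal C sketch of the core EDF decision, choosing among ready tasks the one with the earliest absolute deadline (the struct layout and values are illustrative, taken from the example that follows):

    /* EDF dispatch decision: among ready tasks, pick the one whose
       absolute deadline is earliest. */
    #include <stdio.h>

    struct task {
        const char *name;
        int ready;              /* 1 if runnable */
        double abs_deadline;    /* absolute deadline of current instance */
    };

    static struct task *edf_pick(struct task *t, int n)
    {
        struct task *best = NULL;
        for (int i = 0; i < n; i++)
            if (t[i].ready && (!best || t[i].abs_deadline < best->abs_deadline))
                best = &t[i];
        return best;            /* NULL when nothing is ready */
    }

    int main(void)
    {
        /* Situation at time 50 in the example below: P1's deadline is 100,
           P2's is 75, so EDF keeps P2 running. */
        struct task set[] = {
            {"P1", 1, 100.0},
            {"P2", 1, 75.0},
        };
        struct task *next = edf_pick(set, 2);
        printf("run next: %s\n", next ? next->name : "(idle)");
        return 0;
    }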

Example:
Consider two processes P1 and P2.

Let the period of P1 be p1 = 50 and its processing time t1 = 25.

Let the period of P2 be p2 = 75 and its processing time t2 = 30.

Steps of the solution:

1. The deadline of P1 is earlier, so the priority of P1 > P2.
2. Initially P1 runs and completes its execution of 25 time units.
3. At time 25, P2 starts executing and runs until time 50, when P1 becomes able to execute again.
4. Now, comparing the deadlines, (P1, P2) = (100, 75), so P2 continues to execute.
5. P2 completes its processing at time 55.
6. P1 starts executing and runs until time 75, when P2 becomes able to execute again.
7. Now, comparing the deadlines again, (P1, P2) = (100, 150), so P1 continues to execute.
8. The above steps repeat.
9. Finally, at time 150, both P1 and P2 have the same deadline, so P2 continues to execute until its processing time ends, after which P1 starts to execute.

Limitations of EDF scheduling algorithm:


• Transient Overload Problem
• Resource Sharing Problem
• Efficient Implementation Problem

Allowing for Preemptive and Exclusion Condition:


UNIT III
1. Embedded Operating Systems
Embedded systems run on the computers that control devices that are not generally thought of as computers and which do not accept user-installed software. Typical examples are microwave ovens, TV sets, cars, DVD recorders, cell phones and MP3 players. The main property which distinguishes embedded systems from handhelds is the certainty that no untrusted software will ever run on them: you cannot download new applications to your microwave oven, because all the software is in ROM. This means that there is no need for protection between applications, leading to some simplification. Systems such as QNX and VxWorks are popular in this domain.

2. Differences between Traditional OS and RTOS.


A Real Time Operating System, commonly known as an RTOS, is a software component that rapidly
switches between tasks, giving the impression that multiple programs are being executed at the same time on
a single processing core.

The difference between an OS (Operating System) such as Windows or Unix and an RTOS (Real Time Operating System) found in embedded systems is the response time to external events. OSs typically provide a non-deterministic, soft real-time response, where there are no guarantees as to when each task will complete, but they will try to stay responsive to the user. An RTOS differs in that it typically provides a hard real-time response: a fast, highly deterministic reaction to external events.

A traditional OS, or General Purpose OS (GPOS), is used for systems and applications that are not time-critical; examples are Windows, Linux and Unix. An RTOS is used for time-critical systems; examples are VxWorks and uCos.

RTOS vs. GPOS

Scheduling: An RTOS has "unfair" scheduling, i.e. scheduling is based on priority. A GPOS has fair scheduling, which can be adjusted dynamically for optimized throughput.

Kernel: An RTOS kernel is pre-emptive, either completely or to a maximum degree. A GPOS kernel is non-preemptive or has long non-preemptive code sections.

Priority inversion: In an RTOS, priority inversion is a major issue. In a GPOS, it usually remains unnoticed.

Predictability: An RTOS has predictable behavior. A GPOS has no predictability.

Assumptions: An RTOS works under worst-case assumptions. A GPOS optimizes for the average case.

Focus: An RTOS focuses on very fast response time. A regular OS focuses on computing throughput.

Memory: An RTOS does not have a large memory. A GPOS has a large memory.

Usage: RTOSes are generally embedded in devices that require real-time response. OSes are used in a wide variety of applications.

Design: RTOSes use either a time-sharing design or an event-driven design. OSes use a time-sharing design to allow for multitasking.

Coding: The coding of an RTOS is stricter. The coding of a GPOS is not as strict.
3. Real time System Concepts:
Real-time systems are characterized by the fact that severe consequences will result if logical as well as timing correctness properties
of the system are not met. There are two types of real-time systems: SOFT and HARD.
In a SOFT real-time system, tasks are performed by the system as fast as possible, but the tasks don't have to finish by specific
times. In HARD real-time systems, tasks have to be performed not only correctly but on time. Most real-time systems have a combination of SOFT and HARD requirements.
Real-time applications cover a wide range. Most applications for real-time systems are embedded. This means that the computer
is built into a system and is not seen by the user as being a computer. Examples of embedded systems are:
Process control:
• Food processing
• Chemical plants
Automotive:
• Engine controls
• Anti-lock braking systems
Office automation:
• FAX machines
• Copiers
Computer peripherals:
• Printers
• Terminals
• Scanners
• Modems
Robots
Aerospace:
• Flight management systems
• Weapons systems
• Jet engine controls
Domestic:
• Microwave ovens
• Dishwashers
• Washing machines
• Thermostats

Real-time software applications are typically more difficult to design than non-real-time applications. This chapter describes real-
time concepts.

1. Foreground/Background Systems:
These systems are called foreground/background or super-loops. An application consists of an infinite loop that calls
modules (that is, functions) to perform the desired operations (background). Interrupt Service Routines (ISRs) handle
asynchronous events (foreground). Foreground is also called interrupt level while background is called task level. Critical
operations must be performed by the ISRs to ensure that they are dealt with in a timely fashion.
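A minimal C sketch of this structure; the interrupt hook-up is platform-specific and omitted, and uart_rx_isr is an illustrative name:

    /* Foreground/background sketch: an ISR (foreground) sets a flag, the
       infinite background loop polls it. */
    #include <stdbool.h>

    static volatile bool data_ready = false;

    void uart_rx_isr(void)           /* foreground: hooked to an interrupt */
    {
        data_ready = true;           /* minimal, time-critical work only */
    }

    static void process_data(void) { /* background work goes here */ }

    int main(void)
    {
        for (;;) {                   /* background super-loop */
            if (data_ready) {
                data_ready = false;
                process_data();
            }
            /* ...call other task-level modules here... */
        }
    }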

2. Critical Section of Code:


A critical section of code, also called a critical region, is code that needs to be treated indivisibly. Once the section of code
starts executing, it must not be interrupted. To ensure this, interrupts are typically disabled before the critical code is
executed and enabled when the critical code is finished (see also Shared Resource).
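A minimal sketch, assuming hypothetical DISABLE_INTERRUPTS()/ENABLE_INTERRUPTS() macros; a real port would map them to processor instructions that clear and set the interrupt-enable flag:

    /* Critical region sketch: interrupts disabled around an indivisible
       update. The two macros below are hypothetical placeholders. */
    #define DISABLE_INTERRUPTS()    /* platform-specific */
    #define ENABLE_INTERRUPTS()     /* platform-specific */

    static volatile long shared_counter;

    void increment_shared(void)
    {
        DISABLE_INTERRUPTS();       /* enter critical section */
        shared_counter++;           /* must not be interrupted */
        ENABLE_INTERRUPTS();        /* leave critical section */
    }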

3. Resource:
A resource is any entity used by a task. A resource can thus be an I/O device such as a printer, a keyboard, a display, etc.
or a variable, a structure, an array, etc.

4. Shared Resource:
A shared resource is a resource that can be used by more than one task. Each task should gain exclusive access to the shared
resource to prevent data corruption. This is called Mutual Exclusion.

5. Multitasking:
Multitasking is the process of scheduling and switching the CPU (Central Processing Unit) between several tasks; a single
CPU switches its attention between several sequential tasks. Multitasking is like foreground/background with multiple
backgrounds. Multitasking maximizes the utilization of the CPU and also provides for modular construction of
applications.
One of the most important aspects of multitasking is that it allows the application programmer to manage
complexity inherent in real-time applications. Application programs are typically easier to design and maintain if
multitasking is used.
6. Task:
A task, also called a thread, is a simple program that thinks it has the CPU all to itself. The design process for a real-time application involves splitting the work to be done into tasks, each responsible for a portion of the problem. Each task is assigned a priority, its own set of CPU registers, and its own stack area.

Each task typically is an infinite loop that can be in any one of five states: DORMANT, READY, RUNNING, WAITING
FOR AN EVENT, or INTERRUPTED (see Figure 2-3). The DORMANT state corresponds to a task which resides in
memory but has not been made available to the multitasking kernel. A task is READY when it can execute but its priority
is less than the currently running task. A task is RUNNING when it has control of the CPU. A task is WAITING FOR AN
EVENT when it requires the occurrence of an event (waiting for an I/O operation to complete, a shared resource to be
available, a timing pulse to occur, time to expire etc.). Finally, a task is INTERRUPTED when an interrupt has occurred
and the CPU is in the process of servicing the interrupt. Figure 2-3 also shows the functions provided by µC/OS-II to make
a task switch from one state to another.

Fig.: Task States
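The five states map naturally onto a C enumeration; the names below follow the text rather than any particular kernel's identifiers:

    /* The five task states as a C enumeration. */
    enum task_state {
        TASK_DORMANT,       /* in memory, not yet known to the kernel      */
        TASK_READY,         /* runnable, but a higher-priority task runs   */
        TASK_RUNNING,       /* currently owns the CPU                      */
        TASK_WAITING,       /* blocked until an event occurs               */
        TASK_INTERRUPTED    /* preempted while an interrupt is serviced    */
    };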

7. Context Switch (or Task Switch):


When a multitasking kernel decides to run a different task, it simply saves the current task's context (CPU registers) in the current task's context storage area – its stack (see Figure 2-2). Once this operation is performed, the new task's context is restored from its storage area and execution of the new task's code resumes. This process is called a
context switch or a task switch. Context switching adds overhead to the application. The more registers a CPU has, the
higher the overhead. The time required to perform a context switch is determined by how many registers have to be
saved and restored by the CPU. Performance of a real-time kernel should not be judged on how many context switches
the kernel is capable of doing per second.

8. Kernel: The kernel is the part of a multitasking system responsible for the management of tasks (that is, for managing
the CPU's time) and communication between tasks. The fundamental service provided by the kernel is context switching.
The use of a real-time kernel will generally simplify the design of systems by allowing the application to be divided into
multiple tasks managed by the kernel. A kernel will add overhead to your system because it requires extra ROM (code
space), additional RAM for the kernel data structures but most importantly, each task requires its own stack space which
has a tendency to eat up RAM quite quickly. A kernel will also consume CPU time (typically between 2 and 5%).
Single chip microcontrollers are generally not able to run a real-time kernel because they have very little RAM.

A kernel can allow you to make better use of your CPU by providing you with indispensable services such as semaphore
management, mailboxes, queues, time delays, etc. Once you design a system using a real-time kernel, you will not want
to go back to a foreground/background system.

9. Scheduler: The scheduler, also called the dispatcher, is the part of the kernel responsible for determining which task will
run next. Most real-time kernels are priority based. Each task is assigned a priority based on its importance. The priority
for each task is application specific. In a priority-based kernel, control of the CPU will always be given to the highest
priority task ready-to-run. When the highest-priority task gets the CPU, however, is determined by the type of kernel
used. There are two types of priority-based kernels: non-preemptive and preemptive.

10. Non-Preemptive Kernel:


Non-preemptive kernels require that each task does something to explicitly give up control of the CPU. To maintain the
illusion of concurrency, this process must be done frequently. Non-preemptive scheduling is also called cooperative
multitasking; tasks cooperate with each other to share the CPU. Asynchronous events are still handled by ISRs. An ISR
can make a higher priority task ready to run, but the ISR always returns to the interrupted task. The new higher priority
task will gain control of the CPU only when the current task gives up the CPU.

One of the advantages of a non-preemptive kernel is that interrupt latency is typically low (see the later
discussion on interrupts). At the task level, non-preemptive kernels can also use non-reentrant functions (discussed later).
Non-reentrant functions can be used by each task without fear of corruption by another task. This is because each task can
run to completion before it relinquishes the CPU. Non-reentrant functions, however, should not be allowed to give up
control of the CPU.
Task-level response using a non-preemptive kernel can be much lower than with foreground/background
systems because task-level response is now given by the time of the longest task.
Another advantage of non-preemptive kernels is the lesser need to guard shared data through the use of
semaphores. Each task owns the CPU and you don't have to fear that a task will be preempted. This is not an absolute
rule, and in some instances, semaphores should still be used. Shared I/O devices may still require the use of mutual
exclusion semaphores; for example, a task might still need exclusive access to a printer.
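
A minimal sketch of the cooperative idea follows, with invented names (task_a, task_b): each "task" runs one slice of work and then returns, which is how it gives up the CPU; nothing can preempt it in between.

typedef void (*task_fn)(void);

static void task_a(void) { /* ... one slice of task A's work, then return ... */ }
static void task_b(void) { /* ... one slice of task B's work, then return ... */ }

int main(void)
{
    task_fn tasks[] = { task_a, task_b };
    const int n = (int)(sizeof(tasks) / sizeof(tasks[0]));

    for (;;) {                  /* the non-preemptive "kernel" loop      */
        for (int i = 0; i < n; i++) {
            tasks[i]();         /* runs until it voluntarily returns     */
        }
    }
}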

NOTE: For more concepts refer MicroC OS II_ The Real Time Kernel Book from Pg. No. 50
4. RTOS Kernel
The RTOS kernel provides an abstraction layer that hides from application software the hardware details of the processor (or set of processors) on which the application software runs.
1. The central component of most operating systems is called the kernel.
2. The kernel manages the system's resources and the communication among tasks.
3. The kernel provides the most basic interface between the computer itself and the rest of the operating system.
4. The kernel is responsible for the management of the central processor.
5. The kernel includes the dispatcher, which allocates the central processor, determines the cause of an interrupt and initiates its processing, and provides for communication among the various system and user tasks currently active in the system.
6. Kernel is the core of an operating system.
Basic function of RTOS Kernel:

• Task Management
• Task Scheduling
• Task Synchronization
• Memory Management
• Time Management
• Interrupt Handling
• Device I/O Management
4.1. Task Assignment:
The application is divided into small, schedulable, sequential program units known as 'threads' or 'tasks'. This is done to achieve concurrency in a real-time application. Task management by the kernel includes real-time task creation, termination, changing priorities, etc. Task creation involves creating a Task Control Block (TCB), which holds information such as the task ID, priority, and task state (e.g., idle, running, ready, terminated).
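An illustrative TCB might look like the struct below; real kernels (e.g., µC/OS-II's OS_TCB) keep similar fields, though the names and details here are invented for the sketch.

typedef enum { TASK_DORMANT, TASK_READY, TASK_RUNNING,
               TASK_WAITING, TASK_ISR } task_state_t;

typedef struct tcb {
    unsigned int  id;         /* unique task identifier          */
    unsigned char priority;   /* scheduling priority             */
    task_state_t  state;      /* current state of the task       */
    void         *stack_ptr;  /* saved stack pointer (context)   */
    struct tcb   *next;       /* link in the kernel's task list  */
} tcb_t;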
DORMANT
• The DORMANT state corresponds to a task that resides in memory but has not been made available to the multitasking
kernel.
READY
• A task is READY when it can be executed but its priority is less than that of the task currently being run.
• In this state, the task actively competes with all other ready tasks for the processor’s execution time.
• The kernel’s scheduler uses the priority of each task to determine which task to move to the running state.
RUNNING
• A task is RUNNING when it has control of the CPU and is currently being executed.
• On a single-processor system, only one task can run at a time.
• When a task is preempted by a higher priority task, it moves to the ready state.
• It can also move to the blocked state by:
–making a call that requests an unavailable resource
–making a call that requests to wait for an event to occur
–making a call to delay the task for some duration
WAITING
•A task is WAITING when it requires the occurrence of an event (waiting for an I/O operation to complete, a shared resource
to be available, a timing pulse to occur, time to expire, etc.).
BLOCKED
• A task is BLOCKED when it cannot proceed because a resource or event it needs is not yet available; a blocked task consumes no CPU time.
• CPU starvation occurs when higher-priority tasks use all of the CPU execution time and lower-priority tasks do not get to run.
A blocked task becomes ready again when its blocking condition is met:
–a semaphore token for which the task is waiting is released
–a message on which the task is waiting arrives in a message queue
–a time delay imposed on the task expires
ISR (Interrupt Service Routine)
• A task is in ISR state when an interrupt has occurred and the CPU is in the process of servicing the interrupt.
4.2. Task Priorities:
Task priorities can be decided based on guidelines such as:
1. Assign high priority to interrupt-processing tasks, particularly if they are in the user-interaction flow.
2. Short-duration tasks can be assigned high priority.
3. I/O-bound tasks may end up having low priority.
4.3. Task Synchronization:
Synchronization and messaging provide the necessary communication between tasks within a system and between tasks in different systems. Event flags are used to synchronize internal activities, while message queues and mailboxes are used to send messages between systems. Access to common data areas is coordinated with semaphores.
4.4. Scheduling:
1. Scheduling is the process of deciding how to commit resources between a variety of possible tasks. Time can be specified
(scheduling a flight to leave at 8:00) or floating as part of a sequence of events.
2. Scheduling is a key concept in computer multitasking, multiprocessing operating system and real-time operating system designs.
3. Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs. This assignment is carried out by software known as the scheduler and the dispatcher.
The scheduler is concerned mainly with:
• CPU utilization – keeping the CPU as busy as possible.
• Throughput – the number of processes that complete their execution per unit of time.
• Turnaround time – the total time between submission of a process and its completion.
• Waiting time – the amount of time a process has been waiting in the ready queue.
• Response time – the time from when a request is submitted until the first response is produced.
• Fairness – equal CPU time to each thread.
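As a quick worked example: if a process is submitted at t = 0, first gets the CPU at t = 2, and completes at t = 9, its response time is 2 and its turnaround time is 9; assuming it then ran uninterrupted (a CPU burst of 7 time units), its waiting time is 9 − 7 = 2.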
4.5. Task Scheduling:
1. Schedulers are the part of the kernel responsible for determining which task runs next.
2. Most real-time kernels use priority-based scheduling:
• Each task is assigned a priority based on its importance.
• The priority is application-specific.
3. Scheduling can be handled automatically by the kernel.
4. Many kernels also provide a set of API calls that allow developers to control the state changes (manual scheduling).
5. Non-real-time systems usually use non-preemptive scheduling: once a task starts executing, it completes its full execution.
6. Most RTOSs perform priority-based preemptive task scheduling.
7. The basic rule of priority-based preemptive task scheduling, sketched below:
• The highest-priority task that is ready to run must be the task that is running.
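A minimal sketch of that rule, assuming the illustrative tcb_t from section 4.1 and the µC/OS-II convention that a lower number means a higher priority:

tcb_t *scheduler_pick_next(tcb_t *task_list)
{
    tcb_t *best = 0;

    for (tcb_t *t = task_list; t != 0; t = t->next) {
        if (t->state != TASK_READY && t->state != TASK_RUNNING)
            continue;                        /* skip blocked/dormant tasks     */
        if (best == 0 || t->priority < best->priority)
            best = t;                        /* lower number = higher priority */
    }
    return best;   /* the task that must be running; preempt if it changed */
}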
4.6. Inter-task Communication:

• Tasks are the basic building blocks of RTOS software; a task is essentially a subroutine. Tasks must be able to communicate with one another to coordinate their activities and to share data.
• Kernel objects are used for inter-task communication. These include message queues, mailboxes, pipes, shared memory, signals, and remote procedure calls (RPC).
4.7. Context switch:

• Also called a task switch or process switch.
• Occurs when the scheduler switches the CPU from one task to another.
• Although each process can have its own address space, all processes have to share the CPU registers.
• The kernel ensures that each such register is loaded with the value it had when the process was suspended.
Each task has its own context:
• The state of the CPU registers required for the task to run.
• While a task is running, its context is highly dynamic.
• The context of a task is stored in its process descriptor (task control block).
Operations performed during a context switch (sketched below):
• Save the context of the current process.
• Load the context of the new process.
• If page tables are used, update the page table entries.
• Flush those TLB entries that belonged to the old process.
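
The four operations can be summarized in the schematic below, reusing the illustrative tcb_t sketched earlier. The helper names are invented placeholders: on real hardware the register save/restore is port-specific assembly, and the paging steps apply only to processors with an MMU.

extern void save_cpu_registers(tcb_t *t);   /* placeholder: assembly on a real CPU */
extern void load_cpu_registers(tcb_t *t);   /* placeholder: assembly on a real CPU */
extern void switch_page_table(tcb_t *t);    /* placeholder: MMU-specific           */
extern void flush_tlb_entries(tcb_t *t);    /* placeholder: MMU-specific           */

void context_switch(tcb_t *current, tcb_t *next)
{
    save_cpu_registers(current);   /* 1. save the context of the current task */
    load_cpu_registers(next);      /* 2. load the context of the new task     */
    switch_page_table(next);       /* 3. update page table entries, if any    */
    flush_tlb_entries(current);    /* 4. flush TLB entries of the old task    */
}                                  /* execution resumes inside 'next'         */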

4.8. Foreground ISRs & Background Tasks:

• Small systems of low complexity are generally designed as shown


• These systems are called foreground/background systems
• An application consists of an infinite loop that calls modules (functions) to
perform desired operations (background)
• Interrupt service routines (ISRs) handle asynchronous events (foreground)
• Foreground is also called interrupt level; background is called task level.
• Critical operations must be performed by the ISRs to ensure that they are
dealt with in a timely fashion
– Because of this, ISRs have a tendency to take longer than they should
• Also, information for a background module that an ISR makes available is
not processed until the background routine gets its turn to execute, which is
called the task-level response
• The worst-case task-level response time depends on how long the
background loop takes to execute
• Most high-volume microcontroller-based applications (e.g., microwave ovens, telephones, toys, and so on) are designed as
foreground/background systems
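
A minimal foreground/background sketch appears below; the UART helper names (read_uart_byte, process_byte) are invented for illustration. The ISR does only the time-critical capture, and the background loop provides the task-level response.

#include <stdbool.h>

extern unsigned char read_uart_byte(void);        /* illustrative hardware access */
extern void process_byte(unsigned char b);        /* illustrative background work */

static volatile bool          data_ready = false; /* set by ISR, cleared below    */
static volatile unsigned char rx_byte;

void uart_rx_isr(void)               /* foreground: handles the async event */
{
    rx_byte    = read_uart_byte();   /* time-critical part only             */
    data_ready = true;               /* hand the rest to the background     */
}

int main(void)
{
    for (;;) {                       /* background: the infinite loop       */
        if (data_ready) {
            data_ready = false;
            process_byte(rx_byte);   /* task-level response happens here    */
        }
        /* ... other background modules ... */
    }
}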

4.9. Critical Section:


When more than one process accesses the same code segment, that segment is known as a critical section. A critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data.
In simple terms, a critical section is a group of instructions/statements or region of code that needs to be executed atomically, such as code accessing a shared resource (file, input or output port, global data, etc.).
In concurrent programming, if one thread tries to change the value of shared data at the same time as another thread tries to read the value (i.e., a data race across threads), the result is unpredictable.
Access to such shared variables (shared memory, shared files, shared ports, etc.) must therefore be synchronized. A few programming languages have built-in support for synchronization.
It is critical to understand race conditions when writing kernel-mode code (a device driver, kernel thread, etc.), since the programmer can directly access and modify kernel data structures.
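The sketch below shows why: counter++ is a read-modify-write sequence, so two contexts interleaving it can lose an update. One classic embedded fix is to make the section atomic by briefly disabling interrupts; the disable/enable helpers are placeholders for port-specific instructions (e.g., CPSID/CPSIE on ARM Cortex-M).

static volatile int counter;

extern void disable_interrupts(void);   /* placeholder for a port-specific instruction */
extern void enable_interrupts(void);    /* placeholder for a port-specific instruction */

void unsafe_increment(void)
{
    counter++;                 /* NOT atomic: load, add, store - a data race  */
}

void safe_increment(void)
{
    disable_interrupts();      /* no ISR or task switch can occur ...         */
    counter++;                 /* ... so the read-modify-write is atomic      */
    enable_interrupts();       /* leave the critical section                  */
}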
4.10. Reentrant Functions:
A function is said to be reentrant if it can be interrupted in the course of its execution, the interrupt service routine run, and the interrupted function then resumed, without hampering its earlier course of action. Reentrant functions are used in applications like hardware interrupt handling, recursion, etc.
A function has to satisfy certain conditions to be called reentrant:
1. It should not use global or static data. Though there is no absolute restriction, this is generally not advised, because the interrupt may change certain global values, and resuming the interrupted reentrant function with the new data may give undesired results.
2. It should not modify its own code. This is important because the course of action of the function should remain the same throughout the code. This may be allowed, however, if the interrupt routine uses a local copy of the reentrant function each time it uses different values, or before and after the interrupt.
3. It should not call another non-reentrant function.
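A small illustration of condition 1: the first function below is non-reentrant because its static buffer is shared by every caller, while the second keeps all state in caller-supplied storage and is therefore reentrant.

/* Non-reentrant: an interrupt that re-enters this function overwrites
   the static buffer and corrupts the interrupted caller's result. */
char *to_hex_bad(unsigned char v)
{
    static char buf[3];                      /* single shared buffer: the hazard */
    const char *digits = "0123456789ABCDEF";
    buf[0] = digits[v >> 4];
    buf[1] = digits[v & 0x0F];
    buf[2] = '\0';
    return buf;
}

/* Reentrant: all state lives on the caller's side, so concurrent or
   interrupted invocations cannot disturb each other. */
void to_hex_good(unsigned char v, char out[3])
{
    const char *digits = "0123456789ABCDEF";
    out[0] = digits[v >> 4];
    out[1] = digits[v & 0x0F];
    out[2] = '\0';
}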
4.11. Inter process Communication (IPC):
Inter-process communication (IPC) is used for exchanging data between multiple threads in one or more processes or programs. The processes may be running on a single computer or on multiple computers connected by a network.
IPC is a set of programming interfaces that allow a programmer to coordinate activities among various program processes that can run concurrently in an operating system. This allows a specific program to handle many user requests at the same time.
Since every single user request may result in multiple processes running in the operating system, the processes may need to communicate with each other. Each IPC approach has its own advantages and limitations, so it is not unusual for a single program to use several IPC methods.
Message queue: -
• A message queue is a buffer-like data structure through which tasks and ISRs communicate with each other by sending and receiving messages, and through which they synchronize with data.
• It temporarily stores messages from a sender until the intended receiver is ready to read them.
• A message queue has a queue control block, a queue name, a unique ID, memory buffers, and a queue length. The kernel allocates the memory for the message queue, ID, control block, etc.
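A usage sketch in the µC/OS-II style (OSQCreate/OSQPost/OSQPend) follows; the producer/consumer helpers are invented for illustration, the µC/OS-II headers are assumed, and other kernels use different names for the same pattern.

#define Q_SIZE 16

static void     *q_storage[Q_SIZE];   /* buffer for message pointers                */
static OS_EVENT *msg_q;               /* msg_q = OSQCreate(&q_storage[0], Q_SIZE);  */

extern void *produce_data(void);      /* illustrative helpers */
extern void  consume_data(void *msg);

void producer_task(void *pdata)
{
    (void)pdata;
    for (;;) {
        OSQPost(msg_q, produce_data());      /* send; returns an error if full      */
    }
}

void consumer_task(void *pdata)
{
    INT8U err;
    (void)pdata;
    for (;;) {
        void *msg = OSQPend(msg_q, 0, &err); /* wait (forever) for a message        */
        consume_data(msg);
    }
}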
Mail box: -
• In general, mailboxes are similar to message queues. The mailbox technique for inter-task communication in RTOS-based systems is used for one-way messaging.
• A task/thread creates a mailbox to send messages, and receiver tasks can subscribe to the mailbox. The thread which creates the mailbox is known as the mailbox server.
• The others are known as clients. The RTOS has functions to create, write to, and read from a mailbox. The number of messages (limited or unlimited) in a mailbox is decided by the RTOS.
Pipes: -
• Pipes are kernel objects used for unstructured data exchange between tasks and also facilitate synchronization among tasks. A pipe provides a simple data transfer facility.
Shared Memory: -
• Shared memory is the simplest way of inter-process communication. The sender process writes data into shared memory and the receiver process reads it.
Signal Function: -
• The operating system provides the signal mechanism for messaging among tasks (processes).
Remote Procedure Call (RPC) and Sockets: -
• RPC is a mechanism used by a process (task) to call a procedure of another process running on the same or a different CPU in the network.
• Sockets are used for RPC communication and establish full-duplex communication between tasks.
Semaphore: -
A semaphore (sometimes called a semaphore token) is a kernel object that one or more threads of execution can acquire or release
for the purposes of synchronization or mutual exclusion.
RTOSs support many different types of semaphores, including binary, counting, and mutual-exclusion (mutex) semaphores.
• A binary semaphore can have a value of either 0 or 1.
• When a binary semaphore’s value is 0, the semaphore is considered unavailable (or empty); when the value is 1, the binary
semaphore is considered available (or full).

• A counting semaphore uses a count to allow it to be acquired or released multiple times.


• When creating a counting semaphore, assign the semaphore a count that denotes the number of semaphore tokens it has
initially.

The meaning of the signal is implied by the semaphore object, so you need one semaphore for each purpose. The most common
type of semaphore is a binary semaphore, which triggers activation of a task. The typical design pattern is that a task contains a main
loop with an RTOS call to “take” the semaphore. If the semaphore is not yet signalled, the RTOS blocks the task from executing
further until some task or interrupt routine “gives” the semaphore, i.e., signals it.
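That pattern, sketched in the µC/OS-II style (OSSemCreate/OSSemPend/OSSemPost), with the µC/OS-II headers assumed and the ISR and task bodies left as placeholders:

static OS_EVENT *event_sem;   /* created once at startup: event_sem = OSSemCreate(0); */

void my_isr_handler(void)
{
    /* ... time-critical work ... */
    OSSemPost(event_sem);               /* "give": makes the waiting task ready */
}

void worker_task(void *pdata)
{
    INT8U err;
    (void)pdata;
    for (;;) {
        OSSemPend(event_sem, 0, &err);  /* "take": block until signalled        */
        /* ... process the event at task level ... */
    }
}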

Mutex: -
Mutexes are used to protect access to a shared resource. A mutex is created and then passed between the threads (they can acquire
and release the mutex).
A mutex is essentially a binary semaphore used for mutual exclusion between tasks, to protect a critical section. Internally it works much the same way as a binary semaphore, but it is used in a different way: it is "taken" before the critical section and "given" right after, i.e., in the same task. A mutex typically stores the current "owner" task and may boost its scheduling priority to avoid a problem called "priority inversion".
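
The take-before/give-after pattern, sketched in the µC/OS-II style, where OSMutexCreate() also takes a priority used internally to counter priority inversion; PIP_PRIO and print_report() are illustrative placeholders, and the µC/OS-II headers are assumed.

static OS_EVENT *printer_mutex;    /* printer_mutex = OSMutexCreate(PIP_PRIO, &err); */

extern void print_report(void);    /* illustrative user of the shared printer */

void report_task(void *pdata)
{
    INT8U err;
    (void)pdata;
    for (;;) {
        OSMutexPend(printer_mutex, 0, &err); /* take: enter the critical section */
        print_report();                      /* exclusive access to the printer  */
        OSMutexPost(printer_mutex);          /* give: leave the critical section */
        OSTimeDly(100);                      /* let other tasks run              */
    }
}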
