UNIT-II RTOS Embedded Systems

Real Time Operating Systems

Definition: An RTOS is an OS for response time-controlled and event-controlled
processes. The processes have predictable latencies. An RTOS is an OS for systems
having real timing constraints and deadlines on the tasks, ISTs (interrupt service
threads) and ISRs (interrupt service routines).

(Or)

“The ability of the operating system to provide a required level of service in a bounded
response time.”

 It responds to inputs immediately (in real time).

 Each task is completed within a specified time limit.

 In real-life situations, such as controlling a traffic signal, a nuclear reactor or an aircraft,
the operating system has to respond quickly.


Round-Robin with Priority Algorithms
 The Round-Robin algorithm can be slightly modified by assigning priority levels to some or
all of the tasks.
 A high-priority task can interrupt the currently running task and get the CPU.
 This algorithm can meet the desired response time for a high-priority task.
 Example: Bar Code Scanner.

Shortest Job First Algorithms


 The task that will take the minimum time to execute is given priority.
 This approach completes the maximum number of tasks, however, some long tasks may have to
wait indefinitely (starvation).
Real Time Operating Systems
 Those systems in which the correctness of the system depends not only on the logical
result of computation, but also on the time at which the results are produced.

 A real-time system is a system that ensures the exact time requirements for a job.
Soft Real Time Systems
 In a soft real-time system, tasks are completed as fast as possible without having to
be completed within a specified timeframe.
 In a soft real-time system, it is considered undesirable, but not catastrophic, if deadlines
are occasionally missed.
 Also known as “best effort” systems.
 Most modern operating systems can serve as the base for a soft real-time system.
 One in which deadlines are mostly met.
 Soft real time means that only the precedence and sequence of the task operations are
defined; interrupt latencies and context-switching latencies are small, but there can be
small deviations between the expected and observed task latencies, and a few deadline
misses are accepted.
Soft Real Time Systems

 Examples: Multimedia transmission and reception, networking, telecom (cellular)
networks, web sites and services, and computer games.
Soft Real Time Tasks:
 The worst-case preemption period for a soft real-time task may be about a few milliseconds.
 Mobile phones, digital cameras and orchestra-playing robots are examples of soft real-time
systems.
Hard Real Time Systems
 In a hard real-time operating system however, not only must tasks be completed
within the specified timeframe, but they must also be completed correctly.
 A hard real-time system has time-critical deadlines that must be met; otherwise a
catastrophic system failure can occur.
 Requires formal verification/guarantees of being able to always meet its hard deadlines
(except for fatal errors).
 Hard real time means strict about adherence to each task deadline.
 When an event occurs, it should be serviced within the predictable time at all times
in a given hard real time system.
 The worst-case preemption period for a hard real-time task should be less than a
few microseconds.
Hard Real Time Systems
 A hard real-time RTOS is one which has predictable performance with no deadline misses,
even in the case of sporadic tasks (sudden bursts of events requiring
attention).
 Examples: Air traffic control, vehicle subsystems control, Nuclear power plant
control, Automobile engine control system and antilock brake.
Characteristics of RTOS
Basic Functions of RTOS
 Task management
 Task synchronization
Avoid priority inversion
 Task scheduling
 Interrupt handling
 Memory management
No virtual memory for hard RT tasks
 Exception handling (important)
Task

 A task (also called a thread) is a program, or part of a program, that can be executed and run on the processor.
 A task is an independent thread of execution that can compete with other concurrent tasks
for processor execution time.
 A task is schedulable.
 The design process for a real-time application involves splitting the work to be done into
tasks which are responsible for a portion of the problem.
 Each task is assigned a priority, its own set of CPU registers, and its own stack area.
Task States
 DORMANT
 READY
 RUNNING
 DELAYED
 PENDING
 BLOCKED
 INTERRUPTED
Task States

DORMANT-The DORMANT state corresponds to a task that resides in memory but has not
been made available to the multitasking kernel.
READY-A task is READY when it can be executed but its priority is less than that of the
task currently being run.
In this state, the task actively competes with all other ready tasks for the processor’s
execution time.
The kernel’s scheduler uses the priority of each task to determine which task to move to the
running state.
Task States

RUNNING-A task is RUNNING when it has control of the CPU and it's currently being
executed.
On a single-processor system, only one task can run at a time.
•When a task is preempted by a higher priority task, it moves to the ready state.
•It also can move to the blocked state.
–Making a call that requests an unavailable resource
–Making a call that requests to wait for an event to occur
–Making a call to delay the task for some duration
Task States
 WAITING-A task is WAITING when it requires the occurrence of an event (waiting for
an I/O operation to complete, a shared resource to be available, a timing pulse to occur,
time to expire, etc.).
 BLOCKED-A task is BLOCKED when it waits for a blocking condition to be cleared. CPU
starvation occurs when higher-priority tasks use all of the CPU execution time and
lower-priority tasks do not get to run.
 A blocked task becomes ready again when the condition it is waiting on is met, for example:
 A semaphore token for which the task is waiting is released.
 A message on which the task is waiting arrives in a message queue.
 A time delay imposed on the task expires.
 ISR(Interrupt Service Routine)-A task is in ISR state when an interrupt has occurred and
the CPU is in the process of servicing the interrupt.
Task Scheduling
 Schedulers are parts of the kernel responsible for determining which task runs next
 Most real-time kernels use priority-based scheduling
• Each task is assigned a priority based on its importance
• The priority is application-specific
 The scheduling can be handled automatically.
 Many kernels also provide a set of API calls that allows developers to control the state changes.
 Manual scheduling
 Non-real-time systems usually use non-preemptive scheduling
• Once a task starts executing, it completes its full execution
 Most RTOS perform priority-based preemptive task scheduling.
 Basic rules for priority based preemptive task scheduling
• The Highest Priority Task that is Ready to Run, will be the Task that Must be Running.
Why Scheduling is necessary for RTOS?
 More information about the tasks is known:
• Number of tasks.
• Resource Requirements.
• Execution time.
• Deadlines.
 Since the system is more deterministic, better scheduling algorithms can be devised.
Scheduling Algorithms
 Depending upon the requirements of the embedded system, the scheduling algorithm needs to be chosen.
• First-In First-Out
• Round-Robin Algorithm
• Round Robin With Priority
• Shortest Job First
• Non-Preemptive Multi-tasking
• Preemptive Multi-tasking
First-In First-Out

Fig: FIFO scheduling – Task-1, Task-2 and Task-3 wait in a queue and are served by the scheduler in arrival order.

• The tasks which are ready to run are kept in a queue and the CPU serves them on a first-come,
first-served basis.

Round-Robin Algorithm
Fig: Round-Robin scheduling – Task-1, Task-2 and Task-3 each receive a fixed time slice in turn.

• The kernel assigns a certain amount of CPU time to each task waiting in the queue.
• The time slice allocated to each task is called Quantum.
Non-Preemptive Multi-Tasking Algorithms
Fig: Non-preemptive multitasking – a low-priority task is running when an ISR makes a high-priority task ready to run; the high-priority task gets the CPU only after the running task releases it.

 This is also called cooperative multitasking, as the tasks have to cooperate with one
another to share CPU time.

Preemptive Multi-Tasking Algorithms

Fig: Preemptive multitasking – a low-priority task is running when an ISR makes a high-priority task ready to run; the high-priority task immediately preempts the CPU, and the low-priority task resumes after the high-priority task releases the CPU.

 The high-priority task is always executed by the CPU, preempting the lower-priority
task.
 All real-time operating systems implement this scheduling algorithm.
Rate Monotonic Analysis
 It is used to calculate the percentage of CPU time utilized by the tasks and to assign
priorities to tasks.
 Priorities are proportional to the frequency of execution.
 There are two types of priority assignments,
• Static priority assignment
• Dynamic priority assignment
 Static priority assignment: A task will be assigned a priority at the time of creating the
task and it remains the same.
 Dynamic priority assignment: the priority of the task can be changed during the
execution time.
 It is not easy to assign priorities to tasks manually; with the help of Rate Monotonic Analysis,
priorities can be assigned to the tasks.
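As a hedged illustration of the analysis, the C sketch below computes the total CPU utilization of a task set (the execution times and periods are made-up values) and compares it with the classic Liu-Layland bound n(2^(1/n) - 1); if the utilization is below the bound, the task set is schedulable with rate-monotonic (frequency-proportional) priorities.

#include <math.h>
#include <stdio.h>

/* Rate Monotonic Analysis sketch (illustrative values only):
 * each task i has a worst-case execution time C[i] and a period T[i].
 * Under RMA, a shorter period (higher frequency) means a higher priority.
 * The task set is schedulable if the total utilization
 * U = sum(C[i]/T[i]) does not exceed the bound n*(2^(1/n) - 1). */
int main(void)
{
    const double C[] = { 1.0, 2.0, 3.0 };    /* execution times in ms (assumed) */
    const double T[] = { 10.0, 20.0, 40.0 }; /* periods in ms (assumed)         */
    const int n = sizeof(C) / sizeof(C[0]);

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];                    /* per-task utilization            */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);

    printf("CPU utilization U = %.3f, RMA bound = %.3f\n", U, bound);
    puts(U <= bound ? "Task set is schedulable under RMA"
                    : "RMA bound exceeded - further analysis needed");
    return 0;
}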
Task Management Function Calls
 Create a Task
 Delete a Task
 Suspend a Task
 Resume a Task
 Change priority of a task
 Query a Task.
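The exact names of these calls differ from kernel to kernel. As one illustration only, the sketch below uses FreeRTOS-style calls (xTaskCreate, vTaskPrioritySet, vTaskSuspend, vTaskResume, vTaskDelete); the task body and the LED helper are assumptions made for the example.

#include "FreeRTOS.h"
#include "task.h"

/* A task is an independent function that normally runs in an infinite loop. */
static void vBlinkTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* toggle_led();  hypothetical board-specific helper */
        vTaskDelay(pdMS_TO_TICKS(500));   /* block the task for 500 ms */
    }
}

void create_tasks(void)
{
    TaskHandle_t xHandle = NULL;

    /* Create a task: entry function, name, stack depth (in words),
     * parameter, priority, and a handle used by the later calls. */
    xTaskCreate(vBlinkTask, "Blink", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, &xHandle);

    vTaskPrioritySet(xHandle, tskIDLE_PRIORITY + 2);  /* change priority  */
    vTaskSuspend(xHandle);                            /* suspend the task */
    vTaskResume(xHandle);                             /* resume the task  */
    vTaskDelete(xHandle);                             /* delete the task  */
}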
Semaphores Cont…

 Let us consider two tasks that want to access a display. The display is a shared resource; in order
to control access to it, a semaphore is created.
 If Task 1 wants to access the display, it acquires the semaphore, uses the display and then
releases the semaphore.
 If both tasks want to access the resource simultaneously, the kernel has to give the semaphore
to only one of the tasks.
 This allocation may be based on the priority of the tasks or on a FIFO basis.
 If a number of tasks have to access the same resource, then the tasks are kept in a queue and
each task acquires the semaphore one by one.
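A minimal sketch of this acquire-use-release pattern, assuming FreeRTOS-style semaphore calls and a hypothetical display_print() driver function:

#include "FreeRTOS.h"
#include "semphr.h"

/* Protecting a shared display with a binary semaphore. */
static SemaphoreHandle_t xDisplaySem;

void init_display_semaphore(void)
{
    xDisplaySem = xSemaphoreCreateBinary();
    xSemaphoreGive(xDisplaySem);          /* make the resource available initially */
}

void task_print(const char *msg)
{
    /* Acquire the semaphore (wait up to 100 ms), use the display, release it. */
    if (xSemaphoreTake(xDisplaySem, pdMS_TO_TICKS(100)) == pdTRUE) {
        /* display_print(msg);  hypothetical shared-resource access */
        (void)msg;
        xSemaphoreGive(xDisplaySem);
    }
}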
Resource Synchronization
 When multiple tasks are running, two or more tasks may need to share the same resource.
 In order to access a shared resource, there should be a mechanism that enforces discipline on the
access.
 That mechanism is known as Resource Synchronization.

Fig: Resource Synchronization – Task 1 and Task 2 share a Display Unit.


Task Synchronization
 Let us consider one task that reads data from an ADC and writes it into memory.
 Another task reads the data from memory and sends it to a DAC. However, the read operation must take place
only after the write operation, and it has to be done very fast with minimal time delay.
 This is done through a well defined procedure called task synchronization.

Fig: Task Synchronization – one task writes the ADC data into memory; another task reads the data from memory and sends it to the DAC.
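A minimal sketch of such task synchronization, assuming FreeRTOS-style calls; read_adc(), write_dac() and the shared sample variable are hypothetical:

#include "FreeRTOS.h"
#include "semphr.h"

/* The writer task signals the reader task through a binary semaphore
 * once new ADC data has been placed in memory. */
static SemaphoreHandle_t xDataReady;
static int sample;

void init_sync(void)
{
    xDataReady = xSemaphoreCreateBinary();
}

void writer_task(void)
{
    /* sample = read_adc();  write the ADC data into memory */
    xSemaphoreGive(xDataReady);           /* signal: data is ready */
}

void reader_task(void)
{
    /* Block until the writer signals, then forward the data with minimal delay. */
    if (xSemaphoreTake(xDataReady, portMAX_DELAY) == pdTRUE) {
        /* write_dac(sample); */
    }
}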


Display Semaphores
Fig: Display Semaphore – Task 1 acquires the semaphore, uses the display and releases the semaphore; Task 2 then acquires the semaphore, uses the display and releases it.


Semaphores
 A semaphore is just an integer.
 There are two types
• Counting Semaphores
• Binary Semaphores
 Counting Semaphores: a counting semaphore can take integer values greater than 1.
 Binary Semaphores: Binary Semaphores will take the values either 0 or 1.
 Semaphore is a kernel object that is used for both resource synchronization and task
synchronization.
 A semaphore is like a key to enter a house.
Counting Semaphores
Fig: Counting Semaphore – a pool of 10 buffers (Buffer 1 … Buffer 10) shared by Task 1 to Task n.


Semaphores Cont…
 Let us consider a pool of 10 buffers that needs to be shared by a number of tasks. Any task can
write to a buffer.
 Here a counting semaphore is used, and its initial value is set to 10.
 Whenever a task acquires a semaphore, the value is decremented by 1 and whenever a task
releases a semaphore, the value is incremented by 1.
 When the value reaches 0, the shared resource (a free buffer) is no longer available.
 A counting semaphore is like having multiple keys to enter a house.
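A hedged sketch of this buffer-pool pattern, using FreeRTOS-style counting-semaphore calls; get_free_buffer() and return_buffer() are hypothetical helpers:

#include "FreeRTOS.h"
#include "semphr.h"

#define NUM_BUFFERS 10

static SemaphoreHandle_t xBufferSem;

void init_buffer_pool(void)
{
    /* max count = 10, initial count = 10: all buffers are free to start with */
    xBufferSem = xSemaphoreCreateCounting(NUM_BUFFERS, NUM_BUFFERS);
}

void producer_task(void)
{
    /* Block until at least one buffer is free; the count is decremented by 1. */
    if (xSemaphoreTake(xBufferSem, portMAX_DELAY) == pdTRUE) {
        /* void *buf = get_free_buffer();  write data into buf ...            */
        /* return_buffer(buf);                                                */
        xSemaphoreGive(xBufferSem);   /* buffer free again: count incremented */
    }
}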
Semaphores Management Function Calls
 The OS API provides the following function calls for semaphore Management
• Create Semaphore
• Delete Semaphore
• Acquire Semaphore
• Release Semaphore
• Query a Semaphore
 Semaphores can be used for several activities such as
• To control access to a shared resource.
• To indicate the occurrence of events.
• To synchronize the activities of task.
Deadlock
 Deadlock occurs when two or more tasks wait for a resource being held by another task. If
the resource is not released for a long time due to some problem in that task, then the
system may be reset by the watchdog timer.
 To avoid deadlock, a time limit can be set on how long a task waits for a resource, as shown in
the sketch below.
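A brief sketch of setting such a time limit, assuming FreeRTOS-style calls; recover_or_report() is a hypothetical error handler:

#include "FreeRTOS.h"
#include "semphr.h"

/* Avoid waiting forever on a resource by using a timeout instead of blocking indefinitely. */
void take_with_timeout(SemaphoreHandle_t xSem)
{
    /* Wait at most 50 ms for the semaphore. */
    if (xSemaphoreTake(xSem, pdMS_TO_TICKS(50)) != pdTRUE) {
        /* Time limit expired: back off, release any resources already held,
         * and retry later instead of deadlocking. */
        /* recover_or_report(); */
    } else {
        /* ... use the resource ... */
        xSemaphoreGive(xSem);
    }
}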
Mailbox
 It is a kernel object used for inter-task communication
 It is just like a normal postal mailbox.
 A task posts a message in a mailbox and another task reads the message.

Mailbox Management Function calls


 Create a mailbox
 Delete a mailbox
 Query a mailbox
 Post a message in a mailbox
 Read a message from a mailbox
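The mailbox object itself is kernel-specific. In FreeRTOS, for example, a mailbox is commonly modelled as a queue of length one that the sender overwrites and the reader peeks; the sketch below follows that pattern and uses an int message type purely for illustration:

#include "FreeRTOS.h"
#include "queue.h"

/* Mailbox sketch: a queue that holds exactly one message. */
static QueueHandle_t xMailbox;

void init_mailbox(void)
{
    xMailbox = xQueueCreate(1, sizeof(int));
}

void post_reading(int value)
{
    /* Overwrite whatever is currently in the mailbox with the latest value. */
    xQueueOverwrite(xMailbox, &value);
}

int read_latest(void)
{
    int value = 0;
    /* Peek leaves the message in the mailbox so other readers can also see it. */
    xQueuePeek(xMailbox, &value, portMAX_DELAY);
    return value;
}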
Message Queues
 Message queues can be considered as an array of mailboxes.
 It is a kernel object used for inter-task communication. A task posts a message in the message
queue and another task reads the message.
 Some of the applications of Message Queues are
• Taking the inputs from keyboard.
• To display output.
• Reading voltage from sensors or transducers.
• Data packet transmission in a network.
Message Queues cont…
 In each of these applications, a task deposits the message in the message queue and other
task can take the messages.
 Based on the application, the highest priority task or the first task waiting in the queue can
take the message.
 At the time of creating queue, the queue is given a name or ID, queue length, sending task
waiting list and receiving task waiting list.
 The applications of message queues are
1. One-way data communication.
2. Two-way data communication.
3. Broadcast data communication.
Message Queue Management Function calls
 Create a queue
 Delete a queue
 Flush a queue
 Post a message in a queue
 Post a message in front of queue
 Read message from queue
 Broadcast a message
 Show queue information
 Show queue waiting list
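As an illustration of how these calls fit together, the hedged FreeRTOS-style sketch below creates a queue of ten integer readings; a sensor task posts messages and a display task reads them (read_sensor() and show() are hypothetical helpers):

#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

static QueueHandle_t xSensorQueue;

void init_queue(void)
{
    xSensorQueue = xQueueCreate(10, sizeof(int));   /* queue length 10 */
}

void sensor_task(void *pv)
{
    (void)pv;
    for (;;) {
        int reading = 0;   /* reading = read_sensor(); */
        xQueueSend(xSensorQueue, &reading, portMAX_DELAY);   /* post a message */
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

void display_task(void *pv)
{
    (void)pv;
    for (;;) {
        int reading;
        if (xQueueReceive(xSensorQueue, &reading, portMAX_DELAY) == pdTRUE) {
            /* show(reading);  read a message and act on it */
        }
    }
}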
Event Registers
 A task can have an event register in which the bits correspond to different events.
 Each bit in an event register can be used to obtain the status of an event.
 A task can have an event register and other tasks can set/clear the bits in the event
register to inform the status of an event.
 The meaning of 1 or 0 for a particular bit has to be decided beforehand.

Event Registers Management Function Calls


 Create an event register.
 Delete an event register.
 Query an event register.
 Set an event flag.
 Clear an event flag.
Event Registers

Fig: 16-Bit Event Register – each bit (0–15) records the status of one event; individual bits are associated with particular tasks (e.g. Task#2, Task#4, Task#5, Task#7).
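A hedged sketch of using such an event register, based on FreeRTOS event groups (one possible implementation of the idea); the two event bits and the task bodies are assumptions made for the example:

#include "FreeRTOS.h"
#include "event_groups.h"

/* Each bit stands for one event; the meaning of every bit is agreed beforehand. */
#define EVT_ADC_DONE   (1 << 0)
#define EVT_KEY_PRESS  (1 << 1)

static EventGroupHandle_t xEvents;

void init_events(void)
{
    xEvents = xEventGroupCreate();        /* create the event register */
}

void adc_task(void)
{
    /* ... conversion finished ... */
    xEventGroupSetBits(xEvents, EVT_ADC_DONE);     /* set an event flag */
}

void consumer_task(void)
{
    /* Wait until both events have occurred, clearing the bits on exit. */
    xEventGroupWaitBits(xEvents, EVT_ADC_DONE | EVT_KEY_PRESS,
                        pdTRUE,    /* clear the flags on exit */
                        pdTRUE,    /* wait for all bits       */
                        portMAX_DELAY);
}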


Pipes
 Pipe is a kernel object. In pipes, the output of one task is passed on as the input of another
task.
 By using pipes, task-to-task or ISR-to-task data transfer can take place.
 Pipes can also be used for inter-task communication.

Fig: Pipe – one task writes data to the pipe and another task reads data from the pipe.
Pipes
 One task may send data packets through one pipe and the other task may send the
acknowledgements back through the other pipe.
 The pipe concept is also used in Unix/Linux.

Fig: Pipes for inter-task communication – Task 1 and Task 2 exchange data and acknowledgements through a pair of pipes.


Pipe Management Function Calls
 Create a pipe.
 Open a pipe.
 Close a pipe.
 Read the pipe.
 Write to the pipe.
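Since the pipe concept also exists in Unix/Linux, a minimal POSIX sketch is shown below; in an RTOS the corresponding create/open/read/write calls are kernel-specific:

#include <stdio.h>
#include <unistd.h>

/* Minimal POSIX pipe sketch: data written to fd[1] is read back from fd[0]. */
int main(void)
{
    int fd[2];
    char buf[32];

    if (pipe(fd) != 0)                       /* create the pipe */
        return 1;

    write(fd[1], "sensor packet", 14);       /* writer end of the pipe */
    ssize_t n = read(fd[0], buf, sizeof(buf)); /* reader end of the pipe */
    printf("received %zd bytes: %s\n", n, buf);

    close(fd[0]);
    close(fd[1]);
    return 0;
}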
Priority Inversion Problem
Priority Inheritance
