RTOS - Module 3
KTU ece RTOS mod 3
Uploaded by joyalgigi799

ECT426: REAL TIME OPERATING SYSTEMS
MODULE - 3, Part 2
MODULE 3
• Real Time Operating Systems: Structure and characteristics of Real Time Systems
• Task: Task states
• Task synchronization - Semaphores - types
• Inter task communication

RTOS - Real time operating system
• A real time operating system, popularly known as an RTOS, provides the controller with the ability to respond to input and complete tasks within a specific period of time, based on priority.
• An RTOS might sound like just any other embedded program or firmware, but it is built on the architecture of an operating system. Hence, like any operating system, an RTOS can allow multiple programs to execute at the same time, supporting multiplexing.
Difference between RTOS & Operating System
1. Basis of execution: In an operating system, time sharing is the basis of execution of processes. In a real time system, processes are executed on the basis of the order of their priority.
2. Purpose: An operating system acts as an interface between the hardware and software of a system. A real time system is designed to carry out its execution for real world problems.
3. Memory management: Managing memory is not a critical issue in the execution of an operating system. In a real time system, memory management is difficult because memory is allocated based on the real time issue, which itself is critical.
4. Applications: Operating systems are used in offices, data centers, systems for home equipment, etc. Real time systems are used for controlling aircraft or nuclear reactors, scientific research, etc.
5. Examples: Operating systems - Microsoft Windows, Linux. Real time systems - VxWorks, QNX, Windows CE.
Structure of RTOS
A real-time operating system (RTOS) is a program that schedules execution in a timely manner, manages system resources, and provides a consistent foundation for developing application code. Application code designed on an RTOS can be quite diverse, ranging from a simple application for a digital stopwatch to a much more complex application for aircraft navigation.
Good RTOSes, therefore, are scalable in order to meet different sets of requirements for different applications. For example, in some applications, an RTOS comprises only a kernel, which is the core supervisory software that provides minimal logic, scheduling, and resource-management algorithms. Every RTOS has a kernel.
On the other hand, an RTOS can be a combination of various modules, including the kernel, a file system, networking protocol stacks, and other components required for a particular application.
RTOS architecture_ Introduction to RTOS

• https://robocraze.com/blogs/post/architecture-of-rtos-part-1
• https://electricalfundablog.com/rtos-real-time-operating-system/
Characteristics of Real-time System:
Time Constraints: The time constraints associated with real-time systems simply mean the time interval allotted for the response of the ongoing program. This deadline means that the task should be completed within this time interval.

Correctness: Correctness is one of the prominent parts of real-time systems. A real-time system produces a correct result within the given time interval. If the result is not obtained within the given time interval, it is not considered correct.
Key Characteristics of an RTOS
An application's requirements define the requirements of its underlying RTOS. Some of the more common attributes are:
• Reliability
• Predictability
• Performance
• Compactness
• Scalability
Key Characteristics of an RTOS
• Reliability: Embedded systems must be reliable.
Depending on the application, the system might need to
operate for long periods without human intervention.
• Predictability: The term deterministic describes RTOSes
with predictable behaviour, in which the completion of
operating system calls occurs within known timeframes.
• Performance: An embedded system must perform fast
enough to fulfill its timing requirements.
Key Characteristics of an RTOS
• Compactness: Application design constraints and cost
constraints help determine how compact an embedded
system can be. For example, a cell phone clearly must be
small, portable, and low cost.

• Scalability: Because RTOSes can be used in a wide variety of embedded systems, they must be able to scale up or down to meet application-specific requirements.
Task
• A task is an independent thread of execution that can compete with other concurrent tasks for processor execution time.
• Task is the term used for a process in RTOSes for embedded systems.
• For example, VxWorks and µC/OS-II are RTOSes which use the term task.
• Each task has:
  • a Task Control Block (TCB)
  • a unique ID and common memory
  • states (idle, ready, running, blocked, finished)
Task Control Block (TCB)
A task, its associated parameters, and
supporting data structures.
Task
Examples of system tasks include:
• Initialization or startup task: initializes the system and creates and starts system tasks.
• Idle task: uses up processor idle cycles when no other activity is present.
• Logging task: logs system messages.
• Exception-handling task: handles exceptions.
• Debug agent task: allows debugging with a host debugger.
Task States and Scheduling
Task States and Scheduling
Three main states are used in most typical preemptive-
scheduling kernels, including:
• ready state: the task is ready to run but cannot because a higher priority task is executing.
• blocked state: the task has requested a resource that is not available, has requested to wait until some event occurs, or has delayed itself for some duration.
• running state: the task is the highest priority task and is running.
Ready State
• When a task is first created and made ready to run, the
kernel puts it into the ready state.
• Tasks in the ready state cannot move directly to the blocked state. A task first needs to run so it can make a blocking call, which is a call to a function that cannot immediately run to completion, thus putting the task in the blocked state.
• Ready tasks, therefore, can only move to the running
state.
• In this example, tasks 1, 2, 3, 4, and 5 are ready to run, and the
kernel queues them by priority in a task-ready list.
• Task 1 is the highest priority task (70); tasks 2, 3, and 4 are at
the next-highest priority level (80); and task 5 is the lowest
priority (90). The following steps explain how a kernel might use the task-ready list to move tasks to and from the ready state:
1. Tasks 1, 2, 3, 4, and 5 are ready to run and are waiting in the task-ready list.
2. Because task 1 has the highest priority (70), it is the first task
ready to run. If nothing higher is running, the kernel removes
task 1 from the ready list and moves it to the running state.
3. During execution, task 1 makes a blocking call. As a result, the
kernel moves task 1 to the blocked state; takes task 2, which is
first in the list of the next-highest priority tasks (80), off the ready
list; and moves task 2 to the running state.

4. Next, task 2 makes a blocking call. The kernel moves task 2 to the blocked state; takes task 3, which is next in line of the priority 80 tasks, off the ready list; and moves task 3 to the running state.

5. As task 3 runs, it frees the resource that task 2 requested. The kernel returns task 2 to the ready state and inserts it at the end of the list of tasks ready to run at priority level 80. Task 3 continues as the currently running task.
Running State
• On a single-processor system, only one task can run at a time.
In this case, when a task is moved to the running state, the
processor loads its registers with this task's context. The
processor can then execute the task's instructions and
manipulate the associated stack.
A running task can move to the blocked state in any of the
following ways:
• by making a call that requests an unavailable resource,
• by making a call that requests to wait for an event to occur, and
• by making a call to delay the task for some duration.

In each of these cases, the task is moved from the running state
to the blocked state
Blocked State
• The possibility of blocked states is extremely important
in real-time systems because without blocked states,
lower priority tasks could not run. If higher priority tasks
are not designed to block, CPU starvation can result.
A blocked task can move to the ready state when one of its blocking conditions is met. Such conditions include the following:
• a semaphore token for which a task is waiting is released,
• a message on which the task is waiting arrives in a message queue, and
• a time delay imposed on the task expires.
• When a task becomes unblocked, the task might move
from the blocked state to the ready state if it is not the
highest priority task.
• If the unblocked task is the highest priority task, the task
moves directly to the running state.
Task Creation and Deletion
• The task is first created and put into a suspended state; then, the
task is moved to the ready state when it is started.
• During the deletion process, a kernel terminates the task and frees memory by deleting the task's TCB and stack.
However, when tasks execute, they can acquire memory or access
resources using other kernel objects. If the task is deleted incorrectly,
the task might not get to release these resources. For example, assume
that a task acquires a semaphore token to get exclusive access to a
shared data structure. While the task is operating on this data
structure, the task gets deleted. If not handled appropriately, this
abrupt deletion of the operating task can result in:
• a corrupt data structure, due to an incomplete write operation,
• an unreleased semaphore, which will not be available for other tasks
that might need to acquire it, and
• an inaccessible data structure, due to the unreleased semaphore.
As a result, premature deletion of a task can result in memory or resource leaks.
Task Scheduling
Obtaining Task Information
Semaphore
A semaphore (sometimes called a semaphore
token ) is a kernel object that one or more threads of
execution can acquire or release for the purposes of
synchronization or mutual exclusion.

Watch this animation: https://www.youtube.com/watch?v=LIzTbA3cAWY
A semaphore, its associated parameters,
and supporting data structures
• A semaphore is like a key that allows a task to carry out
some operation or to access a resource. If the task can
acquire the semaphore, it can carry out the intended
operation or access the resource. A single semaphore can
be acquired a finite number of times.
• The kernel tracks the number of times a semaphore has
been acquired or released by maintaining a token count,
which is initialized to a value when the semaphore is
created.
• As a task acquires the semaphore, the token count is
decremented; as a task releases the semaphore, the count is
incremented.
• If the token count reaches 0, the semaphore has no tokens left. A requesting task, therefore, cannot acquire the semaphore, and the task blocks if it chooses to wait for the semaphore to become available.
Types of semaphores
Different types of semaphores, including

• Binary,
• Counting, and
• Mutual-exclusion (Mutex) Semaphores.
Binary Semaphores
• A binary semaphore can have a value of either 0 or 1.
• When a binary semaphore’s value is 0, the semaphore
is considered unavailable (or empty);
• when the value is 1, the binary semaphore is
considered available (or full ).
Counting Semaphores
• A counting semaphore uses a count to allow it to be acquired or released multiple times.
• When creating a counting semaphore, assign the semaphore a count that denotes the number of semaphore tokens it has initially.
• If the initial count is 0, the counting semaphore is created in the unavailable state.
• If the count is greater than 0, the semaphore is created in the available state, and the number of tokens it has equals its count.
Counting Semaphores
Mutual Exclusion (Mutex) Semaphores
• A mutex is initially created in the unlocked state, in which it can be acquired by a task.
• After being acquired, the mutex moves to the locked state.
• Conversely, when the task releases the mutex, the mutex returns to the unlocked state.
• Note that some kernels might use the terms lock and unlock for a mutex instead of acquire and release.
Mutual Exclusion (Mutex)
Semaphores
Mutual Exclusion (Mutex)
Semaphores
• Depending on the implementation, a mutex can
support additional features not found in binary or
counting semaphores.

• These key differentiating features include:
  • ownership,
  • recursive locking,
  • task deletion safety, and
  • priority inversion avoidance protocols.
Mutex Ownership
• Ownership of a mutex is gained when a task first locks the mutex by acquiring it.
• Conversely, a task loses ownership of the mutex when it unlocks it by releasing it.
• When a task owns the mutex, it is not possible for any other task to lock or unlock that mutex.
Recursive Locking
• Many mutex implementations also support recursive
locking, which allows the task that owns the mutex to
acquire it multiple times in the locked state.
• The mutex with recursive locking is called a recursive
mutex.
• This type of mutex is most useful when a task requiring
exclusive access to a shared resource calls one or more
routines that also require access to the same resource.
• The count used for the mutex tracks the number of times
that the task owning the mutex has locked or unlocked the
mutex.
• The count used for the counting semaphore tracks the
number of tokens that have been acquired or released by
any task.
Task Deletion Safety
• Some mutex implementations also have built-in task deletion safety.
• Premature task deletion is avoided by using task deletion locks when a task locks and unlocks a mutex.
• Enabling this capability within a mutex ensures that while a task owns the mutex, the task cannot be deleted.
Priority Inversion Avoidance Protocols
• Priority inheritance protocol:
  • Ensures that the priority level of the lower priority task that has acquired the mutex is raised to that of the higher priority task that has requested the mutex when inversion happens.
  • The priority of the raised task is lowered to its original value after the task releases the mutex that the higher priority task requires.
• Ceiling priority protocol:
  • Ensures that the priority level of the task that acquires the mutex is automatically set to the highest priority of all possible tasks that might request that mutex when it is first acquired, until it is released.
Typical Semaphore Operations
• creating and deleting semaphores,
• acquiring and releasing semaphores,
• clearing a semaphore's task-waiting list, and
• getting semaphore information.


Creating and Deleting Semaphores
• When a semaphore is deleted, blocked tasks in its task-waiting list are unblocked and moved either to the ready state or to the running state.
Acquiring and Releasing Semaphores
• The operations for acquiring and releasing a semaphore might have different names, depending on the kernel: for example, take and give, sm_p and sm_v, pend and post, and lock and unlock.
• Any task can release a binary or counting semaphore; however, a mutex can only be released (unlocked) by the task that first acquired (locked) it.
Clearing Semaphore Task-Waiting
Lists
• To clear all tasks waiting on a semaphore task-waiting
list, some kernels support a flush operation.
• The flush operation is useful for broadcast signalling
to a group of tasks.
Getting Semaphore Information
A mutex is a type of semaphore, but it is used for LOCKING in multitasking.
• Basic: A semaphore is a signaling mechanism; a mutex is a locking mechanism.
• Existence: A semaphore is an integer variable; a mutex is an object.
• Function: A semaphore allows multiple tasks to access a finite instance of resources; a mutex allows multiple tasks to access a single resource, but not simultaneously.
• Ownership: A semaphore's value can be changed by any process acquiring or releasing the resource; a mutex object's lock is released only by the process that has acquired the lock on it.
• Categorization: A semaphore can be categorized into counting and binary semaphores; a mutex is not categorized further.
• Resources occupied: If all resources are being used, the process requesting a resource performs a wait() operation and blocks; if a mutex object is already locked, the process requesting it waits and is queued.
Inter task communication
mechanisms
• Message queues
• Pipes
• Event registers
• Signals
Introduction
• Different tasks in an embedded system typically must share the same hardware and software resources, or may rely on each other in order to function correctly.
• Hence embedded OSes provide different mechanisms that allow tasks in a multitasking system to intercommunicate and synchronize their behaviour, so as to coordinate their functions, avoid problems, and allow tasks to run simultaneously in harmony.
• Embedded OSes commonly implement inter process communication (IPC) and synchronization algorithms based upon one or some combination of:
  • memory sharing,
  • message passing, and
  • signalling mechanisms.

Memory sharing
Message Queues
• A message queue is a buffer-like object through which
tasks and ISRs send and receive messages to
communicate and synchronize with data.
• A message queue is like a pipeline.
• It temporarily holds messages from a sender until the
intended receiver is ready to read them.
• This temporary buffering decouples a sending and
receiving task; that is, it frees the tasks from having to
send and receive messages simultaneously.
• EG:
• a temperature value from a sensor,
• a text message to print to an LCD,
• a keyboard event.
• Primary intertask communication mechanism within a
single CPU.
• Allow a variable number of messages (varying in
length) to be queued in first-in-first-out (FIFO) order.
• Any task or ISR can send messages to the message
queue
• Any task can receive messages from the message queue
• Multiple tasks can send to and receive from the same
message queue
A message queue, its associated parameters,
and supporting data structures.
• It is the kernel's job to assign a unique ID to a message queue and to create its QCB and task-waiting list. The kernel also takes developer-supplied parameters, such as the length of the queue and the maximum message length, to determine how much memory is required for the message queue.
• After the kernel has this information, it allocates memory for the message queue from either a pool of system memory or some private memory space.
• The message queue itself consists of a number of elements, each of which can hold a single message. The elements holding the first and last messages are called the head and tail respectively. Some elements of the queue may be empty (not containing a message). The total number of elements (empty or not) in the queue is the total length of the queue. The developer specifies the queue length when the queue is created.
The state diagram for a message queue.
• As with other kernel objects, message queues follow the
logic of a simple FSM, as shown in Figure. When a message
queue is first created, the FSM is in the empty state.
• If a task attempts to receive messages from this message
queue while the queue is empty, the task blocks and, if it
chooses to, is held on the message queue’s task-waiting list,
in either a FIFO or priority-based order.
Typical Message Queue Operations
Typical message queue operations include the following:
• creating and deleting message queues,
• sending and receiving messages, and
• obtaining message queue information.
Creating and Deleting Message Queues
Sending and Receiving Messages
Obtaining Message Queue Information
Typical Message Queue Use
• non-interlocked, one-way data communication,
• interlocked, one-way data communication,
• interlocked, two-way data communication, and
• broadcast communication
Non-Interlocked, One-Way Data Communication

The activities of tSourceTask and tSinkTask are not synchronized. tSourceTask simply sends a message; it does not require acknowledgement from tSinkTask.
tSourceTask ()
{

Send message to message queue


}
tSinkTask ()
{

Receive message from message queue

}
Interlocked, One-Way Data Communication

Here a sending task might require a handshake (acknowledgement) that the receiving task has been successful in receiving the message. This process is called interlocked communication, in which the sending task sends a message and waits to see if the message is received.
tSourceTask ()
{
:
Send message to message queue
Acquire binary semaphore
:
}
tSinkTask ()
{
:
Receive message from message queue
Give binary semaphore
:
}
Interlocked, Two-Way Data Communication

● In this case, tClientTask sends a request to tServerTask via a message queue.
● tServerTask fulfills that request by sending a message back to tClientTask.
● Two separate message queues are required for full-duplex communication.
tClientTask ()
{
:
Send a message to the requests queue
Wait for message from the server queue
:
}
tServerTask ()
{
:
Receive a message from the requests queue
Send a message to the client queue
:
}
Broadcast Communication

● Message broadcasting is a one-to-many-task relationship.
● tBroadcastTask sends the message on which multiple tSinkTasks are waiting.
tBroadcastTask ()
{
:
Send broadcast message to queue
:
}
Note: similar code for tSignalTasks 1, 2, and 3.
tSinkTask ()
{
:
Receive message on queue
:
}
PIPES
• Pipes are kernel objects that provide unstructured data
exchange and facilitate synchronization among tasks.
• Two descriptors, one for each end of the pipe (one end for
reading and one for writing), are returned when the pipe is
created.
• Data is written via one descriptor and read via the other.
• The data remains in the pipe as an unstructured byte stream.
• Data is read from the pipe in FIFO order.
A common pipe (unidirectional).
Common pipe operation.

● A pipe provides a simple data flow facility so that the reader becomes
blocked when the pipe is empty, and the writer becomes blocked when
the pipe is full.
● Typically, a pipe is used to exchange data between a data-producing
task and a data-consuming task.
• A pipe is conceptually similar to a message queue but with
significant differences.
• For example,
○ unlike a message queue, a pipe does not store multiple
messages. Instead, the data that it stores is not structured,
but consists of a stream of bytes.
○ Also, the data in a pipe cannot be prioritized; the data flow is strictly first-in, first-out (FIFO).
○ Finally, as is described below, pipes support the powerful
select operation, and message queues do not.
Pipe Control Blocks
● Pipes can be dynamically created or destroyed.
● The kernel creates and maintains pipe-specific information in an
internal data structure called a pipe control block .
● A pipe control block contains a kernel-allocated data buffer for the
pipe’s input and output operation. The size of this buffer is
maintained in the control block and is fixed when the pipe is created;
it cannot be altered at run time.
● The current data byte count, along with the current input and output position indicators, are part of the pipe control block.
  ○ The current data byte count indicates the amount of readable data in the pipe.
  ○ The input position specifies where the next write operation begins in the buffer.
  ○ Similarly, the output position specifies where the next read operation begins.
Pipe Control Blocks

● Two task-waiting lists are associated with each pipe.


● One waiting list keeps track of tasks that are waiting to write into the pipe
while it is full; the other keeps track of tasks that are waiting to read from
the pipe while it is empty.
Pipe States

● Pipe has 3 states viz. Empty, Non-empty and Full.


● Empty state is when pipe is empty, Non-empty state is when
the pipe has some data and Full state is when the pipe is full.
Typical Pipe Operations
• create and destroy a pipe,
• read from or write to a pipe,
• issue control commands on the pipe, and
• select on a pipe.
Pipe- device Functions
1. pipeDevCreate ( ) ─ for creating a device
2. open ( ) ─ for opening the device to enable its use from
beginning of its allocated buffer.
3. connect ( ) ─ for connecting a thread or task inserting
bytes to the thread or task deleting bytes from the pipe.
4. write ( ) ─ function for inserting (writing) into the pipe
from the bottom of the empty memory space in the buffer
allotted to it.
5. read ( ) ─ function for deleting (reading) from the pipe
from the bottom of the unread memory spaces in the
buffer filled after writing into the pipe.
6. close ( ) ─ for closing the device to enable its use from
beginning of its allocated buffer only after opening it again.
Typical Uses of Pipes
• Because a pipe is a simple data channel, it is mainly used for task-
to-task or ISR-to-task data transfer.
• Another common use of pipes is for inter-task synchronization.
• Inter-task synchronization can be made asynchronous for both
tasks by using the select operation.
• Task A and task B open two pipes for inter-task
communication.
• The first pipe is opened for data transfer from task A to
task B.
• The second pipe is opened for acknowledgement (another
data transfer) from task B to task A.
• Both tasks issue the select operation on the pipes.
• Task A can wait asynchronously for the data pipe to
become writeable (task B has read some data from the
pipe).
• That is, task A can issue a non-blocking call to write to the
pipe and perform other operations until the pipe becomes
writeable.
• Task A can also wait asynchronously for the arrival of the transfer acknowledgement from task B on the other pipe.
• Similarly, task B can wait asynchronously for the arrival of data from task A on the data pipe.
Event Registers
• Some kernels provide a special register as part of each task's control block. This register is called an event register.
• An event register is an object belonging to a task and consists of a group of binary event flags used to track the occurrence of specific events.
• Depending on a given kernel's implementation of this mechanism, an event register can be 8, 16, or 32 bits wide, maybe even more.
• Each bit in the event register is treated like a binary flag (also called an event flag) and can be set or cleared individually.
Event Registers
Event Registers
• Through the event register, a task can check for the
presence of events that can control its execution.
• An external source, such as another task or an ISR,
can set bits in the event register to inform the task
that a particular event has occurred.
Event Register Control Blocks
● The kernel creates an event register control block as part of the task control block when creating a task.
Event Register Control Blocks
• The task specifies the set of events it wishes to receive.
This set of events is maintained in the wanted events
register.
• Similarly, arrived events are kept in the received
events register. The task indicates a timeout to specify
how long it wishes to wait for the arrival of certain
events.
• The kernel wakes up the task when this timeout has
elapsed if no specified events have arrived at the task.
• Using the notification conditions, the task directs the kernel as to when it wishes to be notified (awakened).
SIGNALS
• A signal is a software interrupt that is generated when an event has
occurred.
• It diverts the signal receiver from its normal execution path and
triggers the associated asynchronous processing.
• Essentially, signals notify tasks of events that occurred during the execution of other tasks or ISRs.
• As with normal interrupts, these events are asynchronous to the notified task and do not occur at any predetermined point in the task's execution.
• The difference between a signal and a normal interrupt is that signals are so-called software interrupts, which are generated via the execution of some software within the system.
• By contrast, normal interrupts are usually generated by the arrival of an interrupt signal on one of the CPU's external pins. They are not generated by software within the system but by external devices.
SIGNALS
• The number and type of signals defined is both system-
dependent and RTOS dependent.
• An easy way to understand signals is to remember that each
signal is associated with an event.
• The event can be either unintentional, such as an illegal
instruction encountered during program execution, or the event
may be intentional,
such as a notification to one task from another that it is about to
terminate.
• While a task can specify the particular actions to undertake when a signal arrives, the task has no control over when it receives signals.
SIGNALS
• When a signal arrives, the task is diverted from its normal
execution path, and the corresponding signal routine is invoked.
• The terms signal routine, signal handler, asynchronous event handler, and asynchronous signal routine are interchangeable.
Signal control block
• If the underlying kernel provides a signal facility, it
creates the signal control block as part of the task control
block
Exceptions and Interrupts
• An exception is any event that disrupts the normal
execution of the processor and forces the processor into
execution of special instructions in a privileged state.
• Exceptions can be classified into two categories:
1. Synchronous exceptions
2. Asynchronous exceptions.
• Exceptions raised by internal events, such as events
generated by the execution of processor instructions, are
called synchronous exceptions.
Exceptions and Interrupts
• Example:
1. On some processor architectures, the read and the write
operations must start at an even memory address for
certain data sizes. Read or write operations that begin at
an odd memory address cause a memory access error
event and raise an exception (called an alignment
exception ).
2. An arithmetic operation that results in a division by
zero raises an exception.
Exceptions and Interrupts
• Exceptions raised by external events, which are events that do
not relate to the execution of processor instructions, are called
asynchronous exception s.
• In general, these external events are associated with
hardware signals.

Examples:
1. Pushing the reset button on the embedded board triggers an asynchronous exception (called the system reset exception).
2. The communications processor module that has become an integral part of many embedded designs is another example of an external device that can raise asynchronous exceptions when it receives data packets.
Exceptions and Interrupts
• An interrupt, sometimes called an external interrupt, is
an asynchronous exception triggered by an event that an
external hardware device generates.
• Interrupts are one class of exception.
• The event source for a synchronous exception is internally
generated from the processor due to the execution of some
instruction.
• On the other hand, the event source for an asynchronous
exception is an external hardware device.
• Exceptions- synchronous exceptions
• Interrupts- asynchronous exceptions.
Applications of Exceptions and Interrupts
Exceptions and interrupts help the embedded engineer in three areas:
• internal errors and special conditions management,
• hardware concurrency, and
• service requests management.

Programmable Interrupt Controllers
and External Interrupts
• Most embedded designs have more than one source of
external interrupts, and these multiple external interrupt
sources are prioritized.
• To understand how this process is handled, a clear
understanding of the concept of a programmable
interrupt controller (PIC) is required.
Two main functionalities:
• Prioritizing multiple interrupt sources so that at any time the
highest priority interrupt is presented to the core CPU for
processing.
• Offloading the core CPU with the processing required to determine
an interrupt's exact source.
Programmable interrupt controller.
Interrupt table
Classification of General Exceptions
❑ Asynchronous-non-maskable
❑ Asynchronous-maskable
❑ Synchronous-precise
❑ Synchronous-imprecise
Asynchronous exceptions
• Asynchronous exceptions are classified into maskable
and non-maskable exceptions.
• External interrupts are asynchronous exceptions.
• Asynchronous exceptions that can be blocked or enabled
by software are called maskable exceptions.
• Similarly, asynchronous exceptions that cannot be
blocked by software are called non-maskable exceptions.
• Non-maskable exceptions are always acknowledged by
the processor and processed immediately.
Synchronous exceptions
• Synchronous exceptions can be classified into precise and
imprecise exceptions.
• With precise exceptions, the processor's program counter
points to the exact instruction that caused the exception,
which is the offending instruction, and the processor knows
where to resume execution upon return from the exception.
• With modern architectures that incorporate instruction and
data pipelining, exceptions are raised to the processor in the
order of written instruction, not in the order of execution.
• In particular, the architecture ensures that the instructions
that follow the offending instruction and that were started in
the instruction pipeline during the exception do not affect the
CPU state.
Classification of General Exceptions
Processing General Exceptions
The processor takes the following steps when an exception
or an external interrupt is raised:
• Save the current processor state information.
• Load the exception or interrupt handling function into the program counter.
• Transfer control to the handler function and begin execution.
• Restore the processor state information after the handler function completes.
• Return from the exception or interrupt and resume previous execution.
Processing General Exceptions
A typical handler function does the following:
• Switch to an exception frame or an interrupt stack.
• Save additional processor state information.
• Mask the current interrupt level but allow higher priority interrupts to occur.
Store processor state information onto
stack
• Stacks are used for the storage requirement of saving processor
state information.
Task TCB and stack
Loading exception vector
Nested Exceptions and Stack
Overflow
Nested Exceptions and Stack
Overflow
• Nested exceptions refer to the ability for higher priority
exceptions to preempt the processing of lower priority
exceptions.
• Much like a context switch for tasks when a higher
priority one becomes ready, the lower priority exception
is preempted, which allows the higher priority ESR to
execute.
Nested interrupts and stack
overflow.
Nested interrupts and stack
overflow.
• When interrupts can nest, the application stack must be
large enough to accommodate the maximum
requirements for the application's own nested function
invocation, as well as the maximum exception or
interrupt nesting possible, if the application executes
with interrupts enabled.
• When data is copied onto the stack past the statically defined limits, the data gets corrupted; this is a stack overflow.
Exception Handlers
• Exception Frames: The exception frame is also called the interrupt stack in the context of asynchronous exceptions.
Exception timing
• The interrupt latency, TB, refers to the interval between the time when the interrupt is raised and the time when the ISR begins to execute.
