Software Design of Real-time Systems

Real time in depth

Uploaded by thomasmanazi

SOFTWARE DESIGN OF REAL-TIME SYSTEMS
In designing real-time systems, software must be structured to handle
predictable, low-latency tasks efficiently. Key design considerations
include modularity, efficient inter-task communication, and
minimizing context-switching. Achieving efficient real-time software
design involves several techniques that ensure predictable and timely
execution across tasks.
Here’s how these principles are implemented to meet real-time
constraints:
1. Modular design:
• Separation of Concerns: Break down the system into modules or
subsystems where each module has a single responsibility. This
approach minimizes dependencies and allows easier testing and
timing analysis.
• Example: In a medical monitoring system, separate modules can
handle data acquisition (e.g., heart rate and blood pressure readings),
data processing (e.g., calculating average rates), and alert
management. This separation ensures that critical data readings aren't
delayed by non-essential tasks like data logging.
2. Prioritized Task Scheduling:
• Scheduling Algorithms: Use algorithms like Rate-Monotonic (RM) or
Earliest Deadline First (EDF) that prioritize tasks according to their
urgency or deadline, ensuring high-priority tasks execute on time.
• Example: In an autonomous vehicle, the braking-system task should
have a higher priority than infotainment tasks (e.g., radio, Bluetooth,
vehicle information such as fuel level). This ensures that critical tasks
(like braking when an obstacle is detected) are handled immediately,
irrespective of less critical processes.
3. Efficient Interrupt Handling:
• Low-Latency Interrupts: Minimize interrupt handling time by
keeping interrupt service routines (ISRs) short and efficient. Quick
responses prevent delays in other system tasks. Interrupts are often
used in real-time systems for handling events like sensor updates.
• Example: In a temperature control system, an interrupt might be
triggered when temperature exceeds a certain threshold, activating
cooling mechanisms. Keeping this ISR minimal and efficient
prevents other tasks, like data logging, from being delayed.
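The "keep the ISR minimal" pattern can be sketched in Python (a real ISR runs in hardware interrupt context; here a plain function call stands in for the interrupt, and the names and threshold are illustrative): the handler only records the event, while a normal task does the actual processing later.

```python
from collections import deque

events = deque()  # filled by the "ISR", drained by an ordinary task

def temperature_isr(reading):
    """Keep the ISR minimal: record the event and return immediately."""
    events.append(reading)

def control_task(threshold=30.0):
    """Deferred processing runs outside interrupt context."""
    actions = []
    while events:
        reading = events.popleft()
        if reading > threshold:
            actions.append(("cool", reading))  # activate cooling
    return actions

temperature_isr(25.0)
temperature_isr(35.5)
print(control_task())  # only the over-threshold reading triggers cooling
```

Because the handler does nothing but enqueue, its worst-case execution time is tiny and easy to bound, which is exactly what low-latency interrupt handling requires.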

4. Minimizing Context Switching:


• Task Optimization: Limit task switching by organizing tasks based
on priority and avoiding preemption unless necessary. Context
switching, while essential for multitasking, can add overhead that
impacts timing predictability.
5. Resource Management and Synchronization:
• Semaphore and Mutex Locks: Implement semaphores and mutexes to
control access to shared resources. Priority inversion protocols, like priority
inheritance, prevent lower-priority tasks from blocking higher-priority
tasks.
• Example: In a robotic arm control system, mutexes can be used to prevent
multiple tasks from accessing the arm's position control simultaneously,
avoiding timing conflicts and resource-related delays.
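A minimal sketch of mutex-protected access to a shared resource, using Python's threading module (the shared "arm position" is a stand-in for real actuator state):

```python
import threading

arm_position = {"angle": 0}
arm_lock = threading.Lock()  # only one task may command the arm at a time

def move_arm(delta, steps=1000):
    for _ in range(steps):
        with arm_lock:  # critical section: the read-modify-write is atomic
            arm_position["angle"] += delta

t1 = threading.Thread(target=move_arm, args=(1,))
t2 = threading.Thread(target=move_arm, args=(-1,))
t1.start(); t2.start()
t1.join(); t2.join()
print(arm_position["angle"])  # 0: the lock kept the opposing updates consistent
```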
6. Deterministic Memory Management:
• Pre-Allocation and Static Memory: Avoid dynamic memory allocation (e.g.,
malloc) during runtime, as it introduces unpredictable delays. Instead, use
static memory allocation or pre-allocate memory during system
initialization.
• Example: In a flight control system, memory for navigation data should be
statically allocated at the start to avoid delays during critical flight phases
caused by on-demand memory allocation.
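A pre-allocated buffer pool along these lines can be sketched as follows (the sizes and the fail-fast policy on exhaustion are illustrative choices):

```python
class BufferPool:
    """All buffers are allocated once at initialization; acquire/release
    are O(1) list operations with no heap growth at runtime."""

    def __init__(self, count, size):
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self):
        if not self._free:
            raise RuntimeError("pool exhausted")  # fail fast, never call malloc
        return self._free.pop()

    def release(self, buf):
        self._free.append(buf)

pool = BufferPool(count=4, size=64)
buf = pool.acquire()
buf[0] = 0xFF
pool.release(buf)
```

Because every allocation after start-up is a constant-time pop from a prebuilt list, the worst-case allocation time is known, which is the property deterministic memory management is after.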
7. Testing and Timing Analysis:
• Worst-Case Execution Time (WCET) Analysis: Perform WCET analysis for
each task to ensure it meets deadlines under all conditions.
• Real-Time Testing: Simulate the system in real-world scenarios to validate
that the tasks consistently meet their timing constraints.
• Example: In a healthcare monitoring system, WCET analysis of the patient
data acquisition task ensures it always completes within its allotted time,
allowing other critical tasks, like alarm triggering, to operate reliably.
These practices collectively build a reliable, predictable framework for real-
time systems, ensuring they meet the strict timing constraints essential for safe
and efficient operation.
SCHEDULING CONCEPTS OF REAL-TIME
SYSTEMS
Scheduling is fundamental in real-time systems, ensuring tasks are completed
within their required time constraints. The two main scheduling approaches
are fixed-priority scheduling and dynamic scheduling, each suited to different
types of real-time needs:

1. Fixed-Priority Scheduling:
In fixed-priority scheduling, each task is assigned a static priority level,
typically based on its frequency or criticality. The Rate-Monotonic (RM)
algorithm is a popular fixed-priority method, where tasks with shorter
periods are given higher priorities. This ensures that tasks that need to run
more frequently are less likely to be delayed by lower-priority tasks.
• Example: Consider a real-time control system in an industrial setting
where sensors monitor machinery every 10 ms, while data logging occurs
every second. Using RM scheduling, the sensor monitoring task would
receive higher priority due to its shorter period. This guarantees that time-
sensitive readings are processed immediately, without being delayed by
the lower-priority data logging task.
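The RM assignment in this example can be sketched in a few lines of Python (the task names and periods are illustrative):

```python
# Illustrative task set: (name, period in ms). Under Rate-Monotonic
# scheduling, the shorter the period, the higher the priority.
tasks = [("data_logging", 1000), ("sensor_monitor", 10)]

# Sort ascending by period; rank 0 is the highest priority.
rm_order = sorted(tasks, key=lambda t: t[1])
priorities = {name: rank for rank, (name, _) in enumerate(rm_order)}
print(priorities)  # {'sensor_monitor': 0, 'data_logging': 1}
```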
2. Dynamic Priority Scheduling:
• Dynamic priority scheduling assigns priorities based on task deadlines,
which can change over time. The Earliest Deadline First (EDF) algorithm is
a common dynamic approach, where tasks closest to their deadline are
prioritized. EDF is particularly useful in systems with variable task
frequencies or where tasks arrive at irregular intervals.
• Example: In a multimedia streaming system, video packets have
different deadlines based on playback time. EDF scheduling allows
the system to adjust priorities dynamically, ensuring packets
needed sooner are processed first. This helps prevent playback
delays, enhancing the viewer experience.
3. Other Scheduling Techniques:
• Round-Robin Scheduling is used when tasks have equal priority,
distributing CPU time evenly among them. It’s common in systems
where fairness is important, though it’s less effective in strictly
time-bound applications.
• Cyclic Scheduling involves executing tasks in a predefined,
repetitive sequence; it is suitable for simple, predictable environments.
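A cyclic schedule can be represented as a fixed frame table that the runtime simply cycles through; the frame contents here are illustrative:

```python
# Each entry is one frame of the minor cycle; the whole table is one
# major cycle, fixed at design time. Runtime work is just an index lookup.
schedule = [
    ["read_sensors"],
    ["read_sensors", "filter"],
    ["read_sensors"],
    ["read_sensors", "log"],
]

def frame_tasks(tick):
    """Return the tasks to run in this tick; wraps around each major cycle."""
    return schedule[tick % len(schedule)]

print(frame_tasks(0), frame_tasks(5))
```

Since the table is fixed offline, the execution order is fully predictable, at the cost of flexibility when task timing changes.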
4. Combining Scheduling Techniques:
• Some real-time systems use a hybrid approach, combining fixed-priority
and dynamic scheduling. This allows the system to handle a mix of high-
frequency, fixed tasks (like sensor monitoring) and tasks with dynamic
priorities (like deadline-driven tasks).
• Example: In an automated factory, sensor monitoring tasks use fixed-
priority scheduling, while tasks related to order deadlines, such as
packaging, might use EDF for flexibility.
• By carefully selecting a scheduling strategy suited to the specific
requirements of the system, designers can ensure that critical tasks meet
their timing constraints, even under high load.
RESOURCE HANDLING FOR REAL-TIME
SYSTEMS
• Effective resource management is crucial to avoid conflicts.
Semaphores and mutexes are often used to control access to shared
resources, with protocols like priority inheritance to prevent priority
inversion. For example, in a robotic assembly line, resource locks
prevent multiple tasks from simultaneously accessing a conveyor belt
control, ensuring each task's timing is predictable.
Here's a detailed description of how resource handling is typically
managed in real-time systems:
1. Task Scheduling
• Real-Time Scheduling: The core of an RTOS is its ability to schedule tasks
based on their timing requirements. RTOS uses scheduling algorithms like
Rate-Monotonic Scheduling (RMS) or Earliest Deadline First (EDF) to ensure
tasks are executed in time for critical operations.
• Preemption: In most RTOS, tasks can be preempted to allow higher-priority
tasks to run. This is vital in ensuring time-sensitive tasks are completed
promptly.
• Priority-based Scheduling: Tasks are assigned priorities, and the RTOS will
allocate resources based on these priorities. In Hard Real-Time Systems,
meeting deadlines is crucial, so the scheduler ensures tasks meet their
deadlines based on priority.
2. Memory Management
• Memory Allocation: Memory is allocated in a way that ensures tasks have
enough space for execution without causing delays. RTOS usually employs
a fixed memory partitioning scheme to avoid fragmentation, ensuring
predictable behavior.
• Memory Protection: RTOS often implements memory protection
mechanisms to ensure that one task cannot interfere with another’s
memory. This prevents unpredictable results and system crashes.
• Deterministic Allocation: Memory allocation must be deterministic,
meaning the time taken to allocate memory must be predictable. Dynamic
memory allocation can be avoided in critical systems to prevent issues like
fragmentation.
3. Interrupt Handling
• Interrupts are used in RTOS to allow immediate attention to events. The
system must handle interrupts in a predictable way to meet time constraints.
• Interrupt Latency: The time it takes for an interrupt to be processed should
be as short as possible, as long latencies can cause system failures in real-
time environments.
• Prioritized Interrupts: RTOS often uses prioritized interrupts where higher-
priority interrupts are handled first.

4. I/O Resource Management


• Efficient I/O Handling: Since real-time systems often involve hardware
interfaces (like sensors or actuators), the RTOS must manage I/O devices
efficiently, ensuring that I/O operations do not block critical tasks.
• Buffering: Data buffering techniques are used to store incoming or outgoing
data temporarily, preventing resource starvation in case of delays or high
workloads.
• Real-time I/O Scheduling: This ensures that high-priority I/O requests are
handled before lower-priority requests, avoiding delays that could affect
task deadlines.

5. Synchronization and Communication


• Mutexes and Semaphores: To avoid conflicts and ensure that resources are
not accessed concurrently inappropriately, RTOS uses synchronization
mechanisms like mutexes (mutual exclusion) and semaphores.
• Message Queues: For communication between tasks, RTOS often
implements message queues, where tasks can send data or request
resources in a thread-safe manner. This ensures that tasks are synchronized
without risking data corruption.
• Priority Inversion Handling: Priority inversion occurs when a lower-
priority task holds a resource needed by a higher-priority task. RTOS
often includes protocols like Priority Inheritance or Priority Ceiling to
avoid priority inversion.
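Priority inheritance can be illustrated with a small helper, using the common RTOS convention that a smaller number means a higher priority (the priority values are illustrative):

```python
def effective_priority(base_priority, waiter_priorities):
    """Priority inheritance: a lock holder temporarily runs at the priority
    of its most urgent waiter (smaller number = higher priority), so a
    medium-priority task cannot preempt it while it holds the lock."""
    return min([base_priority] + list(waiter_priorities))

# A low-priority task (10) holds a mutex needed by a high-priority task (1):
print(effective_priority(10, [1, 5]))  # the holder runs at priority 1
print(effective_priority(3, []))       # no waiters: base priority unchanged
```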
6. Time Management
• Timers: RTOS relies heavily on timers to schedule tasks and track time
constraints. These timers are used for triggering tasks or generating
periodic events to ensure timely execution.
• Time-triggered Execution: Some RTOS systems use a time-triggered
approach where tasks are executed at specific time intervals, like a
heartbeat, making resource allocation deterministic.
7. Resource Isolation
• Hard Isolation: In safety-critical or high-reliability real-time systems,
tasks or applications are isolated from each other using hardware or
software mechanisms to prevent one task from impacting another’s
resources.
• Soft Isolation: Some systems use software techniques to isolate tasks. For
instance, by controlling access to shared resources through software-
managed locks or monitors, tasks are prevented from interfering with
each other.
8. Power Management
• Dynamic Power Management: In many real-time embedded systems
(especially mobile devices), power consumption is a concern. RTOS
manage the power usage of hardware components by dynamically
adjusting the system's states (e.g., putting the CPU to sleep when not in
use) while ensuring timing constraints are met.
9. Failure Handling and Fault Tolerance
• Fail-safe Mechanisms: RTOS include mechanisms to ensure that even
in the event of resource failures (e.g., memory errors, device failures),
the system can either recover gracefully or at least continue operating
without catastrophic failure.
• Redundancy and Recovery: Systems may include redundant resources
(like dual processors or multiple communication paths) and recovery
mechanisms to maintain operation if a primary resource fails.
10. Real-Time Resource Allocation
• Resource Reservation: In some advanced RTOS, resources like CPU time,
memory, and I/O devices are reserved ahead of time for critical tasks. This
guarantees that the resources are available when needed, ensuring real-
time performance is maintained.
• Admission Control: This ensures that only tasks that can meet their
deadlines are admitted into the system. If a new task cannot meet the
required constraints, it is rejected.
PROCESS SCHEDULING AND SCHEDULING
ALGORITHM IN REAL-TIME SYSTEMS
1. Process Scheduling in Real-Time Systems
• In a typical computing system, process scheduling is about
determining the order in which processes will be executed. In a real-
time operating system (RTOS), scheduling becomes more complex
due to the need to meet deadlines and ensure deterministic behavior.

Key Characteristics of Real-Time Process Scheduling:


Preemptive or Non-Preemptive Scheduling: Real-time systems
often use preemptive scheduling, where higher-priority tasks can
interrupt lower-priority tasks. This is necessary for meeting strict
deadlines.
Priority-based Scheduling: Each task is assigned a priority, and the
scheduler ensures that tasks with higher priority are executed before
those with lower priority.
Deadline Requirements: Tasks in real-time systems usually have a
deadline by which they must finish execution. The scheduler must
ensure that tasks are completed before their deadlines.
Deterministic Execution: The time between the scheduling of a task
and its actual execution must be predictable and bounded.

2. Scheduling Algorithms for Real-Time Systems


Several scheduling algorithms are used in real-time systems to manage
processes and resources efficiently. The choice of scheduling algorithm
depends on the type of real-time system (hard or soft) and the nature of
the tasks.
A. Fixed-Priority Scheduling (Rate-Monotonic Scheduling (RMS)):
• Algorithm Description: The Rate-Monotonic Scheduling algorithm assigns fixed
priorities to tasks based on their periodicity. The task with the shortest period
(i.e., the most frequent) gets the highest priority.

• Ideal for: Periodic tasks with known deadlines (typically hard real-time).
Properties:
a) Pre-emptive: Higher-priority tasks can pre-empt lower-priority tasks.
b) Optimal for fixed-priority algorithms: RMS is optimal for a set of periodic tasks
where deadlines are equal to periods and tasks have fixed execution times.
c) Task Feasibility: A system is schedulable using RMS if the total CPU utilization
does not exceed the bound n(2^{1/n} - 1) for n tasks, which approaches about 69% as n grows.

Example: If you have three tasks with periods of 5, 10, and 20 ms, then the task
with the 5 ms period will have the highest priority, the task with the 10 ms period
will have a medium priority, and the task with the 20 ms period will have the
lowest priority.
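The RMS feasibility test follows directly from the Liu and Layland bound; in the sketch below, the execution times added to the 5/10/20 ms example are assumptions for illustration:

```python
def rms_bound(n):
    """Liu & Layland utilization bound for n tasks: n * (2**(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

def rms_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs with deadline = period."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rms_bound(len(tasks))

# Periods 5, 10, 20 ms with assumed execution times 1, 2, 4 ms:
print(rms_bound(3))  # about 0.78 for three tasks
print(rms_schedulable([(1, 5), (2, 10), (4, 20)]))  # U = 0.6 -> schedulable
```

Note the test is sufficient but not necessary: a task set above the bound may still be schedulable, but then an exact response-time analysis is needed.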
B. Dynamic-Priority Scheduling (Earliest Deadline First (EDF)):
• Algorithm Description: EDF is a dynamic scheduling algorithm that assigns
priorities based on the task's absolute deadline. The task with the nearest deadline
is given the highest priority.
• Ideal for: Systems with variable or aperiodic tasks, where tasks can arrive at any
time and have deadlines.
Properties:
a) Preemptive: The task with the closest deadline preempts the others.
b) Optimal on a single processor: EDF is optimal for single-processor systems, meaning
that if a task set can be scheduled by any algorithm, it can also be scheduled by EDF.
c) Schedulability Test: If the CPU utilization is less than or equal to 100%, the tasks
are guaranteed to be schedulable using EDF.

• Example: Task A has a deadline at 8 ms, task B has a deadline at 12 ms, and task C
has a deadline at 20 ms. If all tasks arrive at the same time, task A will have the
highest priority, followed by task B, then task C.
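Both the EDF ordering in this example and the U ≤ 1 schedulability test can be sketched as follows (the execution times and periods in the utilization example are illustrative):

```python
def edf_order(tasks):
    """tasks: list of (name, absolute_deadline_ms), all released at the
    same time; earliest deadline runs first."""
    return [name for name, _ in sorted(tasks, key=lambda t: t[1])]

def edf_schedulable(tasks):
    """tasks: list of (execution_time, period) with deadline = period.
    On one processor, EDF succeeds exactly when total utilization <= 1."""
    return sum(c / p for c, p in tasks) <= 1.0

print(edf_order([("C", 20), ("A", 8), ("B", 12)]))  # ['A', 'B', 'C']
print(edf_schedulable([(2, 8), (3, 12), (5, 20)]))  # U = 0.75 -> True
```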
C. Least Laxity First (LLF):
• Algorithm Description: LLF is a dynamic priority algorithm where
tasks are scheduled based on their laxity, which is the difference
between the time left until the deadline and the time required to
finish the task. Tasks with the least laxity are given the highest
priority.
• Ideal for: Tasks with varying execution times and deadlines.

Properties:
a) Pre-emptive: Similar to EDF, it uses pre-emption, but tasks are pre-
empted based on their laxity rather than deadline.
b) Less efficient than EDF: While LLF is optimal in theory, it is not
widely used in practice because it can incur high overhead due to
frequent pre-emption.
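The laxity computation and LLF task selection can be sketched as follows (task names and timing values are illustrative):

```python
def laxity(deadline, remaining_exec, now=0):
    """Slack a task can still afford before it must run without
    interruption to finish by its deadline."""
    return deadline - now - remaining_exec

def llf_pick(tasks, now=0):
    """tasks: list of (name, deadline, remaining_exec); least laxity wins."""
    return min(tasks, key=lambda t: laxity(t[1], t[2], now))[0]

tasks = [("A", 10, 3), ("B", 8, 6), ("C", 15, 2)]
print(llf_pick(tasks))  # B: laxity 2, versus 7 for A and 13 for C
```

Because laxities change as tasks execute, a strict LLF scheduler must recompute and possibly switch tasks very often, which is the overhead problem noted above.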
D. Round-Robin Scheduling (RR)
• In real-time systems, round-robin can be adapted for scheduling tasks with
equal priorities, especially in soft real-time systems where deadlines are
less stringent. Round-robin works by rotating through tasks in a circular
queue, ensuring that each task gets a fair share of the CPU time.
• Ideal for: Soft real-time or non-pre-emptive scheduling where deadlines
are flexible.
Properties:
a) Time-sliced: Tasks execute for a fixed time slice (quantum) and are then moved
to the back of the queue.
b) Fairness: Ensures all tasks get an equal share of CPU time, which can lead
to missed deadlines in hard real-time systems but works well for less critical
applications.
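The rotation through the circular queue can be simulated in a few lines (task names, remaining times, and the quantum are illustrative):

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: dict of name -> remaining execution time.
    Returns the order in which tasks complete."""
    queue = deque(tasks.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum            # run for at most one time slice
        if remaining > 0:
            queue.append((name, remaining))  # back of the queue
        else:
            order.append(name)
    return order

print(round_robin({"t1": 3, "t2": 1, "t3": 2}, quantum=1))
# ['t2', 't3', 't1']: shortest task finishes first, but nothing starves
```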
COMPUTING RESOURCE MANAGEMENT
FOR REAL-TIME SYSTEMS
This involves the efficient allocation and scheduling of system
resources (CPU time, memory, I/O devices, etc.). Real-time systems are
required to manage these resources in a way that guarantees
performance, timeliness, and predictability, as failing to meet deadlines
could lead to system failure or catastrophic consequences. Below is an
overview of how computing resources are managed in real-time
systems.
1. CPU Resource Management
• The CPU is often the most critical resource in real-time systems
because it directly impacts task execution time and the ability to meet
deadlines. Efficient CPU scheduling is essential for ensuring that the
system’s timing constraints are respected.
 CPU Scheduling Algorithms
Fixed Priority Scheduling:
• Rate-Monotonic Scheduling (RMS): In this fixed-priority algorithm,
tasks with shorter periods are given higher priorities. It’s optimal
for periodic tasks with known execution times and deadlines.
• Deadline-Monotonic Scheduling (DMS): Similar to RMS, but tasks
are prioritized based on their deadlines rather than periods. Tasks
with earlier deadlines are given higher priorities.
Dynamic Priority Scheduling:
• Earliest Deadline First (EDF): A dynamic priority algorithm where tasks
with the earliest deadlines are given the highest priority. It works well
for both periodic and aperiodic tasks.
• Least Laxity First (LLF): Tasks are scheduled based on their laxity,
which is the time remaining until the deadline minus the time required
to finish the task. The task with the least laxity is given the highest
priority.
• Round-Robin Scheduling: Suitable for soft real-time systems, where
tasks with equal priority are given equal time slices in a rotating
manner.
 Preemption and Context Switching:
• Preemption allows high-priority tasks to interrupt and preempt lower-
priority tasks to meet deadlines. Preemption is essential in hard real-
time systems.
• Context Switching: The RTOS must efficiently handle context switching,
where the state of one task is saved and the state of another is restored.
Minimizing context-switching overhead is key to improving real-time
performance.
2. Memory Resource Management
Memory management in real-time systems is critical because tasks may
require real-time access to memory, and delays in memory allocation or
deallocation can lead to missed deadlines or system instability.
 Memory Allocation and Deallocation
I. Static Memory Allocation: In many real-time systems, memory
allocation is done statically at compile-time to avoid unpredictable delays
associated with dynamic memory allocation.
II. Dynamic Memory Allocation: some systems allow dynamic memory
allocation, using techniques like memory pools or fixed-size blocks to
reduce fragmentation and minimize allocation delays.

 Memory Partitioning: Memory is often partitioned into regions for
different tasks or processes. This avoids conflicts and ensures that one task
cannot overwrite the memory of another, ensuring deterministic access to
memory.
 Memory Protection and Isolation
I. Memory Protection: To ensure that tasks don’t interfere with each
other’s memory space, memory protection mechanisms like hardware-
based memory protection (e.g., MMU) or software isolation are
employed.
II. Isolation: In safety-critical systems, tasks may be isolated from each
other using virtual memory techniques or software-managed isolation to
avoid the failure of one task affecting others.

3. I/O Resource Management


• Managing I/O resources efficiently in real-time systems is essential,
especially when handling time-sensitive devices like sensors, actuators, and
communication interfaces.
 I/O Scheduling
o Prioritized I/O Scheduling: Real-time systems use priority-based I/O
scheduling, where I/O operations for high-priority tasks are completed
before those of lower-priority tasks.
o Polling vs. Interrupts: In real-time systems, polling is typically avoided in
favor of interrupt-driven I/O, where the system responds immediately to
changes in I/O status. Interrupts reduce latency by directly invoking
handlers for time-critical events.
o Real-Time I/O Queues: For non-time-critical I/O tasks, RTOS can implement
queues where I/O operations are executed in sequence, ensuring that high-
priority tasks are not blocked by low-priority I/O operations.
o Buffered I/O: Real-time systems often use circular buffers or FIFO (First-In,
First-Out) queues to temporarily store I/O data. This helps in handling
bursty I/O traffic without blocking critical tasks.
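A circular (ring) buffer of the kind described can be sketched as follows; the fail-fast behaviour on overflow is one possible policy, and some designs overwrite the oldest sample instead:

```python
class RingBuffer:
    """Fixed-capacity FIFO: a producer (e.g. an ISR) writes, a consumer
    task reads. Capacity is fixed up front, so no allocation at runtime."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = self._tail = self._count = 0

    def put(self, item):
        if self._count == len(self._buf):
            return False  # full: the caller decides whether to drop or retry
        self._buf[self._tail] = item
        self._tail = (self._tail + 1) % len(self._buf)
        self._count += 1
        return True

    def get(self):
        if self._count == 0:
            return None  # empty
        item = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._count -= 1
        return item

rb = RingBuffer(3)
for sample in (10, 20, 30, 40):
    rb.put(sample)         # the fourth put returns False: buffer is full
print(rb.get(), rb.get())  # 10 20, in arrival order
```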
4. Synchronization and Coordination
 Synchronization is critical in real-time systems to avoid conflicts and
ensure tasks access shared resources safely. Real-time systems use various
synchronization primitives to handle concurrency between tasks.
a) Mutual Exclusion
 Mutexes: A mutual exclusion lock (mutex) is used to ensure that only one
task can access a shared resource at a time.
 Semaphores: Semaphores can be used to signal tasks when certain
conditions are met (e.g., when a resource becomes available).
 Priority Inversion Prevention: To avoid priority inversion, the RTOS may
use priority inheritance (where a lower-priority task inherits the priority of
a higher-priority task it is blocking) or priority ceiling protocols.
b) Inter-task Communication
 Message Passing: Tasks communicate with each other using message
queues or mailboxes.
 Signals and Events: Tasks can also synchronize using signals or event flags,
which trigger certain actions based on pre-defined conditions.

5. Time and Clock Management


Real-time systems often rely on accurate and reliable time management to
ensure that tasks meet their deadlines.
 System Clock: The system clock provides the base time reference for task
scheduling.
 Tickless Time Management: Some RTOS implementations use a tickless
kernel to reduce the overhead caused by periodic clock interrupts. The
system remains idle until it needs to wake up for task scheduling or to
handle an event.
 Timers and Delays
Timers are used to schedule tasks for execution at specific times or after a
delay. Real-time systems use hardware timers or software timers that trigger
interrupts to handle time-critical tasks.
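A software-timer queue can be sketched with a min-heap keyed by expiry time (the callback names and tick values are illustrative):

```python
import heapq

class TimerQueue:
    """Software timers kept in a min-heap ordered by expiry time; each
    clock tick pops every timer whose deadline has passed."""

    def __init__(self):
        self._heap = []

    def start(self, expiry, callback_name):
        heapq.heappush(self._heap, (expiry, callback_name))

    def tick(self, now):
        fired = []
        while self._heap and self._heap[0][0] <= now:
            fired.append(heapq.heappop(self._heap)[1])
        return fired

tq = TimerQueue()
tq.start(5, "sample_sensor")
tq.start(12, "send_heartbeat")
print(tq.tick(10))  # ['sample_sensor']: only the expired timer fires
```

Checking the heap top is O(1), so the per-tick cost is bounded by the number of timers that actually expire, which keeps tick processing predictable.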
6. Resource Reservation and Admission Control
Resource reservation techniques ensure that sufficient resources are available
to meet the deadlines of tasks. Admission control policies prevent the system
from being overloaded with tasks it cannot handle.
 Resource Reservation
• Pre-reserving resources ensures that tasks will always have access to the
CPU, memory, and I/O resources they need.
 Admission Control
• Admission Control ensures that only tasks that can be guaranteed to meet
their deadlines are admitted into the system.
7. Power Management
In many real-time embedded systems, especially battery-powered devices,
efficient power management is critical.

 Dynamic Voltage and Frequency Scaling (DVFS): This technique allows
the system to adjust the CPU frequency and voltage dynamically to save
power while ensuring tasks are completed within their deadlines.
 Sleep Modes and Task Scheduling: Some real-time systems allow parts of
the system to enter sleep modes when they are not in use.
OPERATING SYSTEMS FOR REAL-TIME
APPLICATIONS: FEATURES OF RTOSs
Real-Time Operating Systems (RTOS) are specialized operating
systems designed to meet the unique needs of real-time applications,
where tasks must be executed within specific time constraints.
Here are the key features of RTOSs that make them suitable for real-time
applications:
1. Deterministic Behaviour (Predictability)
• How it Works: RTOSs prioritize task scheduling and resource allocation
in a way that avoids uncertainty, ensuring that tasks meet their
deadlines, even in the presence of heavy system load.
• Example: A real-time system controlling a medical device must process
input data from sensors and provide feedback within milliseconds to
avoid compromising patient safety.
2. Real-Time Task Scheduling
 Preemptive Scheduling: RTOSs use preemptive scheduling algorithms, which allow
high-priority tasks to interrupt or preempt lower-priority tasks to meet deadlines.
 Priority-based Scheduling: RTOSs often use priority scheduling algorithms such as
Rate-Monotonic Scheduling (RMS) or Earliest Deadline First (EDF) to assign higher
priorities to more time-sensitive tasks.
 Fixed or Dynamic Scheduling: Depending on the RTOS, the scheduler might assign
tasks based on fixed priorities or dynamically adjust priorities to ensure deadlines are
met.
• Example: In automotive systems, an RTOS ensures that safety-critical tasks, like
brake control, always receive CPU time before less critical tasks, such as
infotainment.

3. Interrupt Handling
 Low-Latency Interrupt Response: RTOSs are designed to handle interrupts with
minimal latency. Interrupt service routines (ISRs) are executed quickly to process
time-sensitive events.
 Nested Interrupts: Some RTOSs allow nested interrupts, meaning the
system can handle a higher-priority interrupt even if the system is currently
processing a lower-priority interrupt.
• Example: In a drone control system, sensor data interrupts must be processed
immediately to adjust the flight path in response to environmental changes.

4. Multitasking and Multi-threading


 Multitasking Support: RTOSs support multitasking, enabling multiple
tasks (or threads) to run concurrently. Each task is allocated a time slice or is
pre-empted based on priority to ensure deadlines are met.
 Multi-threading: RTOSs typically support multi-threading, where different
tasks or sub-tasks within the same application run in parallel, improving
system efficiency.
• Example: In industrial automation, an RTOS can manage various tasks like
sensor data collection, actuator control, and diagnostics in parallel, ensuring
that each task runs on time.
5. Inter-process Communication (IPC)
 Message Queues: RTOSs provide mechanisms such as message queues and
mailboxes for tasks to communicate with each other. These are used to exchange
data or synchronize actions between tasks.
 Semaphores and Mutexes: Semaphores and mutexes are commonly used for task
synchronization and mutual exclusion to prevent race conditions.
 Event Flags: Event flags are often used to signal tasks about important changes or
events in the system.
• Example: In a networked embedded system, tasks may communicate with each
other using message queues to share data and synchronize the system's operation.

6. Real-Time Clock and Timers


 Timers and Delays: RTOSs also offer timers that can delay or trigger task execution
after a specified time, which is essential for handling periodic tasks or timeouts.
• Example: In robotics, a real-time clock is used to ensure that movements are
synchronized with the external environment, such as adjusting the arm position
based on sensor readings at precise intervals.
7. Synchronization and Mutual Exclusion
• Critical Section Management: RTOSs provide mechanisms for managing critical
sections where shared resources or data are accessed.
• Priority Inheritance: In RTOSs, priority inheritance protocols are used to avoid
priority inversion, where a low-priority task blocks a high-priority task
• Example: In a real-time air traffic control system, synchronization between tasks
that manage radar data and control signals is essential to ensure that critical flight
data is handled correctly.
8. Resource Management
• CPU Allocation: The RTOS allocates CPU resources based on task priorities,
ensuring that high-priority tasks are given sufficient CPU time to meet their
deadlines.
• I/O Resource Management: In addition to CPU management, RTOSs allocate and
manage other resources, such as memory, communication interfaces, and I/O
devices, ensuring that critical tasks have uninterrupted access to them.
• Example: In a telecommunications system, an RTOS allocates CPU time for call-
handling tasks and ensures that communication channels are managed efficiently.
9. Power Management
• Efficient Power Consumption: RTOSs may include features to minimize power
consumption in embedded systems, such as sleep modes and dynamic voltage
scaling (DVS), allowing the system to reduce power usage during periods of
inactivity while still meeting real-time deadlines.
• Example: In a battery-powered embedded system like a wearable device, an
RTOS manages power consumption by controlling the CPU's power states while
ensuring that critical tasks, like heart rate monitoring, continue uninterrupted.
10. Real-Time Networking
• Communication Protocols: Many RTOSs support real-time networking protocols,
such as CAN (Controller Area Network) or Ethernet, ensuring predictable and
timely communication in systems like automotive or industrial automation.
• Quality of Service (QoS): RTOSs may provide mechanisms to guarantee quality
of service (QoS) for network communications, ensuring that critical data is
delivered within specified time windows.
• Example: In autonomous driving, an RTOS ensures that critical vehicle data is delivered within its required time window.
SCHEDULING FRAMEWORK IN REAL-TIME
OPERATING SYSTEMS
• A scheduling framework in real-time operating systems is a structured
set of mechanisms, algorithms, and policies designed to manage and
prioritize tasks while meeting the strict timing requirements of the
system.
• Unlike traditional operating systems, real-time systems often involve
time-sensitive applications where missing a deadline could lead to
system failure or degraded performance. Scheduling framework is a
solid strategy used to meet a program’s temporal requirements in some
real-time environment by ordering the use of system resources.
• The framework ensures predictability, timeliness, and reliability, which
are crucial for real-time tasks.
• For real-time operating systems, it is of the utmost importance that the
scheduling algorithms applied produce a predictable schedule; that is, at
all times it is known which task will execute next.
Principal classes of scheduling policies
Pre-runtime scheduling: its objective is to manually or semi-automatically create
a feasible schedule offline which guarantees the execution order of tasks and
prevents conflicting access to shared resources.
• It takes context-switching overhead into account and reduces its cost,
increasing the chance that a feasible schedule can be found.
Runtime scheduling: in this class, fixed or dynamic priorities are assigned and
resources are allocated on a priority basis.
• It relies on relatively complex runtime mechanisms for task synchronization and
intertask communication.
Key components and functions of a scheduling framework in real-time
operating systems
1. Task and job management
• Tasks and jobs: A real-time system handles tasks that can be either periodic
(repeated at fixed intervals) or aperiodic (triggered by events). Each task may
carry timing attributes such as an execution time and a deadline.
• Task Control Blocks (TCBs): the framework maintains a Task Control Block
for each task, containing essential information like task priority,
state (ready, running, waiting), execution time, deadline, and dependencies.
• Task queues: the framework organizes tasks into queues based on their
states:
 Ready queue: holds tasks waiting to execute.
 Blocked queue: holds tasks waiting for resources or events.
 Periodic queue: holds periodic tasks awaiting their next start time.
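The TCB and per-state queues described above can be sketched minimally in Python. The field names and task names here are invented for illustration; a real RTOS TCB would be a C structure with many more fields.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class TCB:
    """Minimal Task Control Block with the fields named above."""
    name: str
    priority: int                 # lower number = more urgent in this sketch
    state: str = "ready"          # ready / running / waiting
    exec_time: float = 0.0
    deadline: float = float("inf")

# Per-state queues, as described above.
ready_queue = deque()    # tasks waiting to execute
blocked_queue = deque()  # tasks waiting for resources or events

ready_queue.append(TCB("sensor_read", priority=1, deadline=10.0))
ready_queue.append(TCB("logger", priority=5, deadline=100.0))

# Dispatch the highest-priority ready task.
next_task = min(ready_queue, key=lambda t: t.priority)
print(next_task.name)  # sensor_read
```

A real dispatcher would also move the chosen TCB to the running state and save/restore CPU context; the sketch shows only the bookkeeping.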
2. Scheduling algorithms and policies
Static scheduling: the priority of each periodic task is fixed relative to other
tasks. The framework employs static scheduling algorithms where task
priorities and timing are determined offline. A seminal fixed-priority
algorithm is the rate-monotonic scheduling (RMS) algorithm of Liu and
Layland, 1973.
• Rate-monotonic scheduling (RMS) is an optimal fixed-priority
algorithm for the basic task model, in which a task with a shorter period is
assigned a higher priority.
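The classic Liu and Layland schedulability test for RMS can be sketched as follows: a set of n periodic tasks is guaranteed schedulable if total CPU utilization U = Σ Cᵢ/Tᵢ does not exceed n(2^(1/n) − 1). The task parameters below are invented example values.

```python
def rms_schedulable(tasks):
    """Liu & Layland sufficient (not necessary) test for RMS.

    tasks: list of (execution_time, period) pairs.
    Returns (utilization, bound, passes_test).
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)   # approaches ln 2 ≈ 0.693 as n grows
    return utilization, bound, utilization <= bound

# Three periodic tasks: (C=1, T=4), (C=1, T=5), (C=2, T=10)
u, bound, ok = rms_schedulable([(1, 4), (1, 5), (2, 10)])
print(f"U={u:.3f}, bound={bound:.3f}, schedulable={ok}")
# U=0.650, bound=0.780, schedulable=True
```

Note the test is only sufficient: a task set that fails it may still be schedulable, and an exact answer requires response-time analysis.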
Dynamic scheduling: in contrast to fixed-priority scheduling, in dynamic-priority
schemes the priority of a task with respect to that of other tasks changes as tasks
are released and completed. The framework uses dynamic scheduling, where
priorities are assigned based on runtime conditions.
• One of the best-known dynamic algorithms is the earliest deadline first
approach (EDF).
• Earliest Deadline First (EDF): it deals with deadlines rather than
execution times. At any time, the ready task with the earliest deadline has the
highest priority.
• Least Laxity First (LLF): tasks with the least laxity (time remaining before the deadline
minus required execution time) are prioritized to minimize missed deadlines.
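The two dynamic selection rules above can be sketched as one-line priority functions. The job records are invented example values; note the two policies can pick different jobs from the same ready set.

```python
def edf_next(ready):
    """Earliest Deadline First: run the ready job with the nearest deadline."""
    return min(ready, key=lambda job: job["deadline"])

def llf_next(ready, now):
    """Least Laxity First: laxity = deadline - now - remaining execution time."""
    return min(ready, key=lambda job: job["deadline"] - now - job["remaining"])

ready = [
    {"name": "A", "deadline": 12, "remaining": 2},   # laxity at t=0: 10
    {"name": "B", "deadline": 10, "remaining": 8},   # laxity at t=0: 2
    {"name": "C", "deadline": 11, "remaining": 10},  # laxity at t=0: 1
]
print(edf_next(ready)["name"])      # B — earliest deadline (10)
print(llf_next(ready, 0)["name"])   # C — least laxity (11 - 0 - 10 = 1)
```

Here EDF picks B because its deadline is nearest, while LLF picks C because it has the least slack before its deadline; in a running scheduler these functions would be re-evaluated at every scheduling point.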
Preemptive and Non-preemptive scheduling
• Preemptive scheduling: higher-priority tasks can interrupt lower-priority ones,
allowing critical tasks to execute as soon as they become ready.
• Non-preemptive scheduling: tasks run to completion once started, reducing
context-switching overhead but possibly delaying high-priority tasks.
Round-robin scheduling
• Several tasks are executed sequentially to completion, often in conjunction with a
cyclic code structure.
• With time slicing, each executable task is assigned a fixed time
quantum (time slice) in which to execute. A fixed-rate clock is used to
initiate an interrupt at a rate corresponding to the time slice.
• The dispatched task executes until it completes or its time slice expires, as
indicated by the clock interrupt.
• If the task does not execute to completion, its context must be saved and
the task is placed at the end of the round-robin queue; then the context of
the next executable task is restored and it resumes execution.
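The save-context / requeue / dispatch loop described above can be simulated with a queue of remaining execution times. Task names and durations are invented example values.

```python
from collections import deque

def round_robin(tasks, quantum=1):
    """Simulate round-robin time slicing.

    tasks: dict of task name -> total execution units needed.
    Returns task names in the order they run to completion.
    """
    queue = deque(tasks.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()    # dispatch the next ready task
        remaining -= quantum                 # run for one time slice
        if remaining <= 0:
            finished.append(name)            # task ran to completion
        else:
            queue.append((name, remaining))  # save context, back of the queue
    return finished

print(round_robin({"A": 3, "B": 1, "C": 2}))  # ['B', 'C', 'A']
```

Shorter tasks finish earlier (B needs one slice, C two, A three), which is the fairness property round-robin is chosen for; a real dispatcher would additionally save and restore register state at each preemption.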
Cyclic code scheduling
This is a scheduler that deterministically interleaves and sequentializes
the execution of periodic tasks on a CPU according to a pre-runtime
schedule.
• In the cyclic code approach, scheduling decisions are made
periodically (only at the beginning of every frame), and the time
intervals between scheduling decision points are referred to as frames
or minor cycles; each frame has a length called the frame size.
• Frames must be sufficiently long so that every task can start and
complete within a single frame.
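A cyclic executive of this kind reduces to walking a fixed table, one frame per clock tick. The task names and frame contents below are invented for illustration; real executives burn or sleep the remainder of each frame until the next timer interrupt.

```python
# Each frame (minor cycle) lists the tasks to run in that frame; one pass
# through the table is the major cycle.
log = []
read_sensor = lambda: log.append("read_sensor")
filter_data = lambda: log.append("filter_data")
update_display = lambda: log.append("update_display")

schedule = [
    [read_sensor, filter_data],      # frame 0
    [read_sensor, update_display],   # frame 1
    [read_sensor, filter_data],      # frame 2
    [read_sensor],                   # frame 3
]

for frame in schedule:               # one major cycle
    for task in frame:               # each task starts and completes in its frame
        task()
    # a real executive would now idle until the next frame-start interrupt

print(log)
```

Because the table is built before runtime, the interleaving is fully deterministic: here `read_sensor` runs every frame (highest rate) while the others run at half that rate.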
3. Priority management
• Fixed priorities: some tasks have fixed priorities determined at the
design stage, commonly in hard real-time systems
• Dynamic priorities: the framework can also assign priorities at
runtime, adapting based on the task’s urgency or deadline.
• Priority inversion handling: the framework includes mechanisms like
priority inheritance or priority ceiling to address priority inversion,
where a lower-priority task holds a resource needed by a higher-
priority task, delaying its execution.
4. Deadlines and time constraints
• Deadline monitoring: the framework ensures that tasks meet their
deadlines by constantly monitoring their progress and, if needed,
adjusting priorities. Normally, there are three types of deadlines:
 Hard deadlines: must be met to avoid system failure.
 Soft deadlines: missing these leads to degraded system performance.
 Firm deadlines: missing these deadlines makes the task’s output useless
but does not compromise system integrity.
5. Schedulability analysis
• Offline analysis: before deployment, the framework may use offline
analysis to confirm that all tasks can meet their deadlines based on the
chosen scheduling algorithm.
• Online schedulability checks: at runtime, the framework may perform
periodic checks to ensure that deadlines can still be met, adapting
priorities or rejecting new workload when they cannot.
SEMAPHORES, DEADLOCK, AND PRIORITY
INVERSION PROBLEM
In Real-Time Operating Systems (RTOS), semaphores, deadlock,
and priority inversion are crucial concepts that help manage
resources and synchronization between tasks. Let's explore each of
these concepts in the context of RTOS and how they relate to
managing computing resources effectively.
1. Semaphores in RTOS
• Semaphores are synchronization tools used in real-time operating
systems to control access to shared resources and ensure mutual
exclusion. They are used to coordinate access to critical sections
of code or hardware to avoid conflicts or resource contention.
Types of Semaphores:
Binary Semaphore (Mutex):
• Definition: A binary semaphore, also known as a mutex (short for
mutual exclusion), can take only two values, typically 0 and 1. It is used
to ensure that only one task or thread can access a shared resource at a
time.
• Use Case: In a real-time embedded system, a binary semaphore can be
used to control access to a shared sensor resource, ensuring that only
one task can read from the sensor at any given moment.
Counting Semaphore:
• Definition: A counting semaphore allows the semaphore to take any
integer value and is used to manage access to a pool of resources, such
as multiple identical devices or buffers.
• Use Case: In a system with multiple printers, a counting semaphore can
track the number of available printers, allowing multiple tasks to print
concurrently up to the number of printers in the pool.
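The printer-pool use case can be sketched with Python's `threading.Semaphore`; the scenario (two printers, five jobs) and variable names are invented for illustration.

```python
import threading
import time

printers = threading.Semaphore(2)   # pool of two identical printers
in_use = 0
peak = 0
stats_lock = threading.Lock()       # protects the in_use/peak counters

def print_job(doc_id):
    global in_use, peak
    with printers:                  # wait (P): blocks while both printers are busy
        with stats_lock:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)            # "printing" the document
        with stats_lock:
            in_use -= 1
    # signal (V) happens automatically when the with-block exits

jobs = [threading.Thread(target=print_job, args=(i,)) for i in range(5)]
for t in jobs: t.start()
for t in jobs: t.join()
print("peak concurrent printers:", peak)  # never exceeds 2
```

The semaphore's count enforces the invariant directly: no matter how many jobs arrive, at most two are ever inside the `with printers:` block at once.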
Operations on Semaphores:
• Wait (P Operation): A task or thread that needs to access a shared resource
must perform the wait operation. If the semaphore value is greater than zero,
it is decremented, and the task can proceed. If the semaphore value is zero,
the task will be blocked until the semaphore becomes available again.
• Signal (V Operation): After completing the task or releasing the resource, the
task performs the signal operation to increment the semaphore value,
potentially unblocking another waiting task.
• Semaphore Example: Consider a scenario in a real-time audio processing
system where a task is responsible for writing to an audio buffer, and another
task reads from it. A binary semaphore is used to ensure mutual exclusion,
preventing one task from reading and writing to the buffer at the same time.
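The wait (P) / signal (V) discipline around a critical section can be sketched as follows; the shared counter stands in for the audio buffer, and the iteration counts are invented example values.

```python
import threading

buffer_sem = threading.Semaphore(1)  # binary semaphore: 1 = free, 0 = taken
samples_written = 0                  # stands in for the shared audio buffer

def writer():
    global samples_written
    for _ in range(10000):
        buffer_sem.acquire()         # wait (P): blocks while another task holds it
        samples_written += 1         # critical section: touch the shared buffer
        buffer_sem.release()         # signal (V): let a waiting task proceed

t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=writer)
t1.start(); t2.start()
t1.join(); t2.join()
print(samples_written)  # 20000 — no updates lost to interleaving
```

Without the semaphore, the read-modify-write on the shared state could interleave between the two writers and lose updates; with it, each increment is atomic with respect to the other task.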
2. Deadlock in RTOS
• A deadlock occurs when a set of tasks become blocked because each task is
waiting for a resource that is held by another task in the same set, and none
of the tasks can proceed. Deadlocks can lead to a complete halt in the
execution of tasks, causing the system to fail.
Conditions for Deadlock (Coffman’s Conditions):
For a deadlock to occur, all of the following conditions must be true:
 Mutual Exclusion: At least one resource must be held in a non-shareable
mode (i.e., only one task can access the resource at a time).
 Hold and Wait: A task that is holding one resource is waiting for another
resource that is currently being held by another task.
 No Preemption: Resources cannot be forcibly taken from a task; they must
be released voluntarily.
 Circular Wait: There must be a circular chain of tasks, each of which is
waiting for a resource held by the next task in the chain.
Deadlock Example: Consider a system with two tasks, Task A and Task B, and
two resources, Resource 1 and Resource 2. If Task A holds Resource 1 and waits
for Resource 2, while Task B holds Resource 2 and waits for Resource 1, a
deadlock occurs because neither task can proceed.
Avoiding and Preventing Deadlock:
Resource Allocation Graph (RAG): The system can track resource allocation
using a graph and detect cycles that indicate deadlocks.
Resource Preemption: By allowing the system to take back resources from tasks
(preempt them), deadlocks can be avoided, but this must be handled carefully in
real-time systems to avoid violating task deadlines.
Order Resource Requests: By defining a global order in which resources must be
requested, the system can avoid the circular wait condition and prevent
deadlocks.
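The resource-ordering rule can be sketched directly: if every task acquires locks in the same global order, the circular-wait condition can never arise. The two resources and task names below are invented for illustration.

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()
ORDERED = [r1, r2]   # global order: r1 always before r2

done = []

def task(name):
    for lock in ORDERED:            # acquire in the global order only
        lock.acquire()
    done.append(name)               # use both resources
    for lock in reversed(ORDERED):  # release in reverse order
        lock.release()

a = threading.Thread(target=task, args=("A",))
b = threading.Thread(target=task, args=("B",))
a.start(); b.start()
a.join(timeout=5); b.join(timeout=5)
print(sorted(done))  # ['A', 'B'] — both complete; no deadlock possible
```

Contrast this with the Task A / Task B example above: the deadlock there arises precisely because the two tasks acquire the same resources in opposite orders.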
3. Priority Inversion Problem in RTOS
• Priority inversion occurs when a lower-priority task holds a resource that a
higher-priority task needs, causing the higher-priority task to be blocked. This
situation can cause a system to miss deadlines, which is especially problematic
in real-time systems where tasks must meet strict timing requirements.
Priority Inversion Example:
Scenario: Task A has the highest priority, Task B has medium priority, and Task C
has the lowest priority. Task C acquires a resource; Task A then requests that
resource and blocks. Task B, which does not need the resource, preempts Task C and
runs to completion before Task C can release the resource. As a result, Task A has to
wait for both Task B and Task C, even though it should have executed first due to its
higher priority.
 Solutions to Priority Inversion:
• Priority Inheritance Protocol (PIP):
Description: When a low-priority task holds a resource needed by a high-priority
task, the low-priority task temporarily inherits the higher priority until it releases
the resource. This prevents lower-priority tasks from blocking higher-priority ones.
Example: In an RTOS for embedded systems, if Task B (medium priority) holds a
resource needed by Task A (high priority), Task B temporarily inherits Task A’s
priority until it releases the resource. This ensures that Task A is not blocked by
Task B unnecessarily.
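A simplified model of priority inheritance (not any real RTOS API): a holder's effective priority is the maximum of its own base priority and the priorities of all tasks blocked on resources it holds. Task names and priority values are invented, with higher numbers meaning higher priority.

```python
def effective_priority(task, tasks):
    """Priority inheritance: a resource holder runs at the highest priority
    among its own base priority and every task blocked on a resource it holds.
    Higher number = higher priority in this sketch."""
    inherited = [
        t["priority"] for t in tasks
        if t.get("blocked_on") in task["holds"]
    ]
    return max([task["priority"]] + inherited)

task_a = {"name": "A", "priority": 3, "holds": set(), "blocked_on": "res"}
task_b = {"name": "B", "priority": 2, "holds": {"res"}}
task_c = {"name": "C", "priority": 1, "holds": set()}
tasks = [task_a, task_b, task_c]

# B holds the resource A is blocked on, so B inherits A's priority (3)
# and cannot be preempted by anything below that until it releases "res".
print(effective_priority(task_b, tasks))  # 3
print(effective_priority(task_c, tasks))  # 1 (holds nothing, inherits nothing)
```

The inheritance lasts only while the resource is held; on release, the holder drops back to its base priority and the blocked high-priority task runs.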
• Priority Ceiling Protocol (PCP):
Description: This approach assigns a ceiling priority to each resource, which
is the highest priority of any task that might access it. When a task locks a
resource, it temporarily assumes the ceiling priority, thus preventing lower-
priority tasks from interfering.
Example: In a distributed real-time control system, when a task locks a critical
resource, it automatically operates at the ceiling priority to prevent other
tasks from interfering with it.
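The ceiling computation itself is simple: each resource's ceiling is the highest priority among the tasks that may ever lock it. The task set and priority values below are invented for illustration (higher number = higher priority).

```python
def ceiling_priority(resource_users):
    """Priority ceiling of a resource: the highest base priority of any
    task that may ever lock it."""
    return max(resource_users.values())

# Tasks that may lock a shared bus, with assumed base priorities:
bus_users = {"control": 5, "telemetry": 2, "logging": 1}
ceiling = ceiling_priority(bus_users)
print(ceiling)  # 5 — any task locking the bus runs at priority 5 while holding it
```

Because even the logging task runs at the ceiling while holding the bus, no medium-priority task can preempt it mid-critical-section, which bounds the blocking the control task can ever experience.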
Managing Synchronization and Resource Contention in RTOS
To prevent deadlock and priority inversion from causing failures or delays in real-
time systems, RTOSs implement various synchronization mechanisms and
resource management policies:
a) Mutexes and Semaphores: These provide simple but effective ways to manage
resource access and ensure that critical sections are protected. Binary semaphores
(mutexes) can be used for mutual exclusion, and counting semaphores can handle
multiple instances of shared resources.
b) Task Prioritization: Assigning appropriate priorities to tasks based on their
criticality and time constraints helps in managing the priority inversion problem.
Priority inheritance and priority ceiling protocols are key mechanisms for
handling priority inversion in RTOS.
c) Timeouts and Deadlock Detection: In some cases, the RTOS can monitor tasks
for possible deadlocks and implement timeouts.
d) Preemption and Scheduling: Preemptive scheduling, along with priority-based
task execution, helps prevent lower-priority tasks from blocking higher-priority
ones, particularly in hard real-time systems.
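Point (c) — timeouts as a deadlock guard — can be sketched with a bounded lock acquisition: rather than blocking forever, a task gives up after a timeout and can retry, log the failure, or escalate. The timeout values and function name are invented for illustration.

```python
import threading

lock = threading.Lock()

def guarded_work(timeout=0.1):
    """Try to enter the critical section, but give up after `timeout`
    seconds instead of blocking forever (a simple deadlock guard)."""
    if lock.acquire(timeout=timeout):
        try:
            return "done"        # critical section would run here
        finally:
            lock.release()
    return "timed out"           # caller can retry, log, or escalate

lock.acquire()                          # simulate another task holding the lock
first = guarded_work(timeout=0.05)      # "timed out"
lock.release()
second = guarded_work(timeout=0.05)     # "done"
print(first, second)
```

In a hard real-time system the timeout must be chosen so that detection plus recovery still fits the task's deadline, which is why timeouts complement rather than replace the prevention techniques above.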