Software Design of Real-time Systems
In designing real-time systems, software must be structured to handle
predictable, low-latency tasks efficiently. Key design considerations
include modularity, efficient inter-task communication, and
minimizing context-switching. Achieving efficient real-time software
design involves several techniques that ensure predictable and timely
execution across tasks.
Here’s how these principles are implemented to meet real-time
constraints:
1. Modular design:
• Separation of Concerns: Break down the system into modules or
subsystems where each module has a single responsibility. This
approach minimizes dependencies and allows easier testing and
timing analysis.
• Example: In a medical monitoring system, separate modules can
handle data acquisition (e.g., heart rate and blood pressure readings),
data processing (e.g., calculating average rates), and alert
management. This separation ensures that critical data readings aren't
delayed by non-essential tasks like data logging.
2. Prioritized Task Scheduling:
• Scheduling Algorithms: Use algorithms like Rate-Monotonic (RM) or
Earliest Deadline First (EDF) that prioritize tasks according to their
urgency or deadline, ensuring high-priority tasks execute on time.
• Example: In an autonomous vehicle, the braking-system task should
have a higher priority than infotainment tasks (e.g., radio, Bluetooth,
vehicle information such as fuel levels). This ensures that critical
tasks (like braking when an obstacle is detected) are handled
immediately, irrespective of less critical processes.
3. Efficient Interrupt Handling:
• Low-Latency Interrupts: Minimize interrupt handling time by
keeping interrupt service routines (ISRs) short and efficient. Quick
responses prevent delays in other system tasks. Interrupts are often
used in real-time systems for handling events like sensor updates.
• Example: In a temperature control system, an interrupt might be
triggered when temperature exceeds a certain threshold, activating
cooling mechanisms. Keeping this ISR minimal and efficient
prevents other tasks, like data logging, from being delayed.
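The pattern above, keeping the ISR to a few cheap operations and deferring the heavy work to the main loop, can be sketched in Python (illustrative only: `temperature_isr`, the event queue, and the 75-degree threshold are assumptions, not taken from a real RTOS):

```python
from collections import deque

# Hypothetical sketch: the "ISR" only records the event and returns;
# slower work (threshold checks, logging) happens later in the main loop.
event_queue = deque()

def temperature_isr(reading):
    """Minimal ISR body: enqueue the event and return immediately."""
    event_queue.append(reading)   # O(1), no I/O, no blocking

def process_events(threshold=75.0):
    """Deferred processing, run from the main loop rather than the ISR."""
    actions = []
    while event_queue:
        reading = event_queue.popleft()
        if reading > threshold:
            actions.append(("activate_cooling", reading))
    return actions

temperature_isr(80.2)
temperature_isr(70.1)
print(process_events())   # only the 80.2 reading exceeds the threshold
```

Because the ISR never performs the comparison or any logging itself, its worst-case execution time stays small and predictable.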
SCHEDULING TECHNIQUES FOR REAL-TIME SYSTEMS
1. Fixed-Priority Scheduling:
In fixed-priority scheduling, each task is assigned a static priority level,
typically based on its frequency or criticality. The Rate-Monotonic (RM)
algorithm is a popular fixed-priority method, where tasks with shorter
periods are given higher priorities. This ensures that tasks that need to run
more frequently are less likely to be delayed by lower-priority tasks.
• Example: Consider a real-time control system in an industrial setting
where sensors monitor machinery every 10 ms, while data logging occurs
every second. Using RM scheduling, the sensor monitoring task would
receive higher priority due to its shorter period. This guarantees that time-
sensitive readings are processed immediately, without being delayed by
the lower-priority data logging task.
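The RM priority assignment in this example can be sketched directly (task names and periods follow the example above; the convention that priority 0 is highest is an assumption of this sketch):

```python
# Rate-monotonic rule: the shorter a task's period, the higher its priority.
tasks = [
    {"name": "sensor_monitor", "period_ms": 10},    # runs every 10 ms
    {"name": "data_logger",    "period_ms": 1000},  # runs every second
]

# Sort by period and assign static priorities (0 = highest here).
by_period = sorted(tasks, key=lambda t: t["period_ms"])
for prio, task in enumerate(by_period):
    task["priority"] = prio

for t in by_period:
    print(t["name"], "-> priority", t["priority"])
```

The sensor task ends up with the higher priority purely because of its shorter period; no runtime information is needed, which is what makes RM a fixed-priority scheme.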
2. Dynamic Priority Scheduling:
• Dynamic priority scheduling assigns priorities based on task deadlines,
which can change over time. The Earliest Deadline First (EDF) algorithm is
a common dynamic approach, where tasks closest to their deadline are
prioritized. EDF is particularly useful in systems with variable task
frequencies or where tasks arrive at irregular intervals.
• Example: In a multimedia streaming system, video packets have
different deadlines based on playback time. EDF scheduling allows
the system to adjust priorities dynamically, ensuring packets
needed sooner are processed first. This helps prevent playback
delays, enhancing the viewer experience.
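A minimal sketch of EDF dispatching, assuming the ready queue is a heap keyed by absolute deadline (the packet names and deadline values are invented for illustration):

```python
import heapq

# Ready queue ordered by absolute deadline: the dispatcher always
# runs the task whose deadline is nearest.
ready = []
heapq.heappush(ready, (40, "video_packet_2"))   # deadline at t = 40 ms
heapq.heappush(ready, (25, "video_packet_1"))   # deadline at t = 25 ms
heapq.heappush(ready, (60, "audio_packet_1"))   # deadline at t = 60 ms

deadline, task = heapq.heappop(ready)
print("dispatch:", task, "deadline:", deadline)  # video_packet_1 runs first
```

Because priorities are derived from deadlines rather than fixed at design time, newly arriving packets with tight deadlines automatically move to the front.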
3. Other Scheduling Techniques:
• Round-Robin Scheduling is used when tasks have equal priority,
distributing CPU time evenly among them. It’s common in systems
where fairness is important, though it’s less effective in strictly
time-bound applications.
• Cyclic Scheduling involves scheduling tasks in a predefined,
repetitive sequence, and is suitable for simple, predictable environments.
4. Combining Scheduling Techniques:
• Some real-time systems use a hybrid approach, combining fixed-priority
and dynamic scheduling. This allows the system to handle a mix of high-
frequency, fixed tasks (like sensor monitoring) and tasks with dynamic
priorities (like deadline-driven tasks).
• Example: In an automated factory, sensor monitoring tasks use fixed-
priority scheduling, while tasks related to order deadlines, such as
packaging, might use EDF for flexibility.
• By carefully selecting a scheduling strategy suited to the specific
requirements of the system, designers can ensure that critical tasks meet
their timing constraints, even under high load.
RESOURCE HANDLING FOR REAL-TIME
SYSTEMS
• Effective resource management is crucial to avoid conflicts.
Semaphores and mutexes are often used to control access to shared
resources, with protocols like priority inheritance to prevent priority
inversion. For example, in a robotic assembly line, resource locks
prevent multiple tasks from simultaneously accessing a conveyor belt
control, ensuring each task's timing is predictable.
Here's a detailed description of how resource handling is typically
managed in real-time systems:
1. Task Scheduling
• Real-Time Scheduling: The core of an RTOS is its ability to schedule tasks
based on their timing requirements. RTOS uses scheduling algorithms like
Rate-Monotonic Scheduling (RMS) or Earliest Deadline First (EDF) to ensure
tasks are executed in time for critical operations.
• Preemption: In most RTOS, tasks can be preempted to allow higher-priority
tasks to run. This is vital in ensuring time-sensitive tasks are completed
promptly.
• Priority-based Scheduling: Tasks are assigned priorities, and the RTOS will
allocate resources based on these priorities. In Hard Real-Time Systems,
meeting deadlines is crucial, so the scheduler ensures tasks meet their
deadlines based on priority.
2. Memory Management
• Memory Allocation: Memory is allocated in a way that ensures tasks have
enough space for execution without causing delays. RTOS usually employs
a fixed memory partitioning scheme to avoid fragmentation, ensuring
predictable behavior.
• Memory Protection: RTOS often implements memory protection
mechanisms to ensure that one task cannot interfere with another’s
memory. This prevents unpredictable results and system crashes.
• Deterministic Allocation: Memory allocation must be deterministic,
meaning the time taken to allocate memory must be predictable. Dynamic
memory allocation can be avoided in critical systems to prevent issues like
fragmentation.
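A fixed-partition allocator of the kind described above can be sketched as follows (a simplified model: `FixedBlockPool` and its sizes are hypothetical, and a real RTOS pool hands out raw memory rather than block indices):

```python
class FixedBlockPool:
    """Sketch of a fixed-partition allocator: every block has the same
    size, so allocate/free are O(1) and fragmentation cannot occur."""

    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.free_list = list(range(num_blocks))  # indices of free blocks

    def allocate(self):
        if not self.free_list:
            return None        # pool exhausted: fail fast, never block
        return self.free_list.pop()

    def free(self, block_id):
        self.free_list.append(block_id)

pool = FixedBlockPool(block_size=64, num_blocks=4)
ids = [pool.allocate() for _ in range(4)]  # take all four blocks
print(pool.allocate())                     # None: deterministic failure
pool.free(ids[0])                          # return one block
print(pool.allocate() is not None)         # allocation succeeds again
```

Both the success and failure paths take constant, predictable time, which is exactly the deterministic-allocation property the text asks for; general-purpose heap allocators cannot guarantee this.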
3. Interrupt Handling
• Interrupts are used in RTOS to allow immediate attention to events. The
system must handle interrupts in a predictable way to meet time constraints.
• Interrupt Latency: The time it takes for an interrupt to be processed should
be as short as possible, as long latencies can cause system failures in real-
time environments.
• Prioritized Interrupts: RTOS often uses prioritized interrupts where higher-
priority interrupts are handled first.
A. Fixed-Priority Scheduling (Rate-Monotonic Scheduling (RMS)):
• Ideal for: Periodic tasks with known deadlines (typically hard real-time).
Properties:
a) Pre-emptive: Higher-priority tasks can pre-empt lower-priority tasks.
b) Optimal for fixed-priority algorithms: RMS is optimal for a set of periodic tasks
where deadlines are equal to periods and tasks have fixed execution times.
c) Task Feasibility: A task set is schedulable under RMS if total CPU utilization
does not exceed n(2^(1/n) - 1) for n tasks; this bound falls toward ln 2, about 69%, as n grows.
Example: If you have three tasks with periods of 5, 10, and 20 ms, then the task
with the 5 ms period will have the highest priority, the task with the 10 ms period
will have a medium priority, and the task with the 20 ms period will have the
lowest priority.
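The utilization test in point (c) can be checked numerically. In the sketch below the execution times paired with the example periods of 5, 10, and 20 ms are assumed values chosen for illustration:

```python
def rm_utilization_bound(n):
    """Liu & Layland bound for RMS: U <= n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

# Example task set: (execution_time_ms, period_ms); the execution
# times are assumptions, the periods match the example above.
tasks = [(1, 5), (2, 10), (4, 20)]

u = sum(c / p for c, p in tasks)           # total CPU utilization
bound = rm_utilization_bound(len(tasks))

print(f"U = {u:.3f}, bound = {bound:.3f}")
print("schedulable by the RM test:", u <= bound)
```

Here U = 0.6 and the three-task bound is about 0.780, so the set passes. Note the test is sufficient but not necessary: a set that exceeds the bound may still be schedulable, which an exact response-time analysis would reveal.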
B. Dynamic-Priority Scheduling (Earliest Deadline First (EDF)):
• Algorithm Description: EDF gives the highest priority to the ready task
whose absolute deadline is nearest, re-evaluating priorities as tasks
arrive and complete.
• Example: Task A has a deadline at 8 ms, task B has a deadline at 12 ms, and task C
has a deadline at 20 ms. If all tasks arrive at the same time, task A will have the
highest priority, followed by task B, then task C.
C. Least Laxity First (LLF):
• Algorithm Description: LLF is a dynamic priority algorithm where
tasks are scheduled based on their laxity, which is the difference
between the time left until the deadline and the time required to
finish the task. Tasks with the least laxity are given the highest
priority.
• Ideal for: Tasks with varying execution times and deadlines.
Properties:
a) Pre-emptive: Similar to EDF, it uses pre-emption, but tasks are pre-
empted based on their laxity rather than deadline.
b) Less efficient than EDF: While LLF is optimal in theory, it is not
widely used in practice because it can incur high overhead due to
frequent pre-emption.
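The laxity computation that drives LLF can be sketched as follows (the task names, deadlines, and remaining execution times are assumed values):

```python
def laxity(now, deadline, remaining_exec):
    """Laxity = time left until the deadline minus the time still
    needed to finish the task."""
    return (deadline - now) - remaining_exec

# Hypothetical ready tasks at t = 0: (name, deadline_ms, remaining_ms).
ready = [("A", 10, 4), ("B", 12, 8), ("C", 30, 5)]

# LLF dispatches the task with the smallest laxity.
chosen = min(ready, key=lambda t: laxity(0, t[1], t[2]))
print("dispatch:", chosen[0])   # B: laxity 4, vs. 6 for A and 25 for C
```

Note that task B wins even though task A has the earlier deadline, because B has less slack; this sensitivity to remaining execution time is also why laxities must be recomputed constantly, causing the overhead mentioned above.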
D. Round-Robin Scheduling (RR):
• Algorithm Description: Tasks of equal priority share the CPU in fixed
time slices, each taking a turn in rotation. RR provides fairness but no
deadline guarantees, so it is usually reserved for non-critical tasks.
3. Interrupt Handling
Low-Latency Interrupt Response: RTOSs are designed to handle interrupts with
minimal latency. Interrupt service routines (ISRs) are executed quickly to process
time-sensitive events.
Nested Interrupts: Some RTOSs allow nested interrupts, meaning the
system can handle a higher-priority interrupt even if the system is currently
processing a lower-priority interrupt.
• Example: In a drone control system, sensor data interrupts must be processed
immediately to adjust the flight path in response to environmental changes.
4. Priority Management
• Fixed priorities: some tasks have fixed priorities determined at the
design stage, commonly in hard real-time systems.
• Dynamic priorities: the framework can also assign priorities at
runtime, adapting based on the task’s urgency or deadline.
• Priority inversion handling: the framework includes mechanisms like
priority inheritance or priority ceiling to address priority inversion,
where a lower-priority task holds a resource needed by a higher-
priority task, delaying its execution.
5. Deadlines and Time Constraints
• Deadline monitoring: the framework ensures that tasks meet their
deadlines by constantly monitoring their progress and, if needed,
adjusting priorities. Normally, there are three types of deadlines:
Hard deadlines: must be met to avoid system failure.
Soft deadlines: missing these leads to degraded system performance.
Firm deadlines: missing these deadlines makes the task’s output useless
but does not compromise system integrity.
6. Schedulability Analysis
• Offline analysis: before deployment, the framework may use offline
analysis to confirm that all tasks can meet their deadlines based on the
chosen scheduling algorithm.
• Online schedulability checks: at runtime, the framework may perform
periodic checks to ensure that deadlines can still be met, adapting
priorities if necessary.
SEMAPHORES, DEADLOCK, AND PRIORITY
INVERSION PROBLEM
In Real-Time Operating Systems (RTOS), semaphores, deadlock,
and priority inversion are crucial concepts that help manage
resources and synchronization between tasks. Let's explore each of
these concepts in the context of RTOS and how they relate to
managing computing resources effectively.
1. Semaphores in RTOS
• Semaphores are synchronization tools used in real-time operating
systems to control access to shared resources and ensure mutual
exclusion. They are used to coordinate access to critical sections
of code or hardware to avoid conflicts or resource contention.
Types of Semaphores:
Binary Semaphore (Mutex):
• Definition: A binary semaphore, also known as a mutex (short for
mutual exclusion), can take only two values, typically 0 and 1. It is used
to ensure that only one task or thread can access a shared resource at a
time.
• Use Case: In a real-time embedded system, a binary semaphore can be
used to control access to a shared sensor resource, ensuring that only
one task can read from the sensor at any given moment.
Counting Semaphore:
• Definition: A counting semaphore allows the semaphore to take any
integer value and is used to manage access to a pool of resources, such
as multiple identical devices or buffers.
• Use Case: In a system with multiple printers, a counting semaphore can
track the number of available printers, allowing tasks to print
concurrently until all printers are in use.
Operations on Semaphores:
• Wait (P Operation): A task or thread that needs to access a shared resource
must perform the wait operation. If the semaphore value is greater than zero,
it is decremented, and the task can proceed. If the semaphore value is zero,
the task will be blocked until the semaphore becomes available again.
• Signal (V Operation): After completing the task or releasing the resource, the
task performs the signal operation to increment the semaphore value,
potentially unblocking another waiting task.
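The P and V operations can be sketched with a condition variable (a simplified teaching model; production code would normally use the RTOS's native semaphore, or `threading.Semaphore` in Python, rather than this hand-rolled class):

```python
import threading

class CountingSemaphore:
    """Sketch of the P (wait) and V (signal) operations."""

    def __init__(self, value=1):      # value=1 gives a binary semaphore
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                   # P operation
        with self._cond:
            while self._value == 0:
                self._cond.wait()     # block until a signal arrives
            self._value -= 1          # claim one unit of the resource

    def signal(self):                 # V operation
        with self._cond:
            self._value += 1          # release one unit
            self._cond.notify()       # wake one blocked task, if any

sem = CountingSemaphore(value=2)      # e.g., a pool of two resources
sem.wait()
sem.wait()                            # both resources now in use
sem.signal()                          # release one; a waiter could proceed
```

The `while` loop in `wait` (rather than a plain `if`) guards against spurious wakeups, a standard idiom for condition-variable code.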
• Semaphore Example: Consider a real-time audio processing system where
one task writes to an audio buffer and another task reads from it. A
binary semaphore enforces mutual exclusion, so the buffer is never being
read and written at the same time.
2. Deadlock in RTOS
• A deadlock occurs when a set of tasks become blocked because each task is
waiting for a resource that is held by another task in the same set, and none
of the tasks can proceed. Deadlocks can lead to a complete halt in the
execution of tasks, causing the system to fail.
Conditions for Deadlock (Coffman’s Conditions):
For a deadlock to occur, all of the following conditions must be true:
Mutual Exclusion: At least one resource must be held in a non-shareable
mode (i.e., only one task can access the resource at a time).
Hold and Wait: A task that is holding one resource is waiting for another
resource that is currently being held by another task.
No Preemption: Resources cannot be forcibly taken from a task; they must
be released voluntarily.
Circular Wait: There must be a circular chain of tasks, each of which is
waiting for a resource held by the next task in the chain.
Deadlock Example: Consider a system with two tasks, Task A and Task B, and
two resources, Resource 1 and Resource 2. If Task A holds Resource 1 and waits
for Resource 2, while Task B holds Resource 2 and waits for Resource 1, a
deadlock occurs because neither task can proceed.
Avoiding and Preventing Deadlock:
Resource Allocation Graph (RAG): The system can track resource allocation
using a graph and detect cycles that indicate deadlocks.
Resource Preemption: By allowing the system to take back resources from tasks
(preempt them), deadlocks can be avoided, but this must be handled carefully in
real-time systems to avoid violating task deadlines.
Order Resource Requests: By defining a global order in which resources must be
requested, the system can avoid the circular wait condition and prevent
deadlocks.
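The resource-ordering rule can be sketched as follows (a toy model: `acquire_in_order` and the numeric resource ids are assumptions; the point is that every task locks resources in the same global order, which makes the circular wait condition impossible):

```python
import threading

# Each shared resource gets a global id; locks must always be taken
# in ascending id order, regardless of the order a task asks for them.
resource1 = (1, threading.Lock())
resource2 = (2, threading.Lock())

def acquire_in_order(*resources):
    """Acquire locks sorted by global id; return them for later release."""
    ordered = sorted(resources, key=lambda r: r[0])
    for _, lock in ordered:
        lock.acquire()
    return ordered

def release(ordered):
    for _, lock in reversed(ordered):
        lock.release()

# Task B asks for the resources in the "wrong" textual order, but the
# helper normalizes the acquisition order, so no cycle can form with a
# task that requested (resource1, resource2).
held = acquire_in_order(resource2, resource1)
release(held)
print("both locks acquired and released in global order")
```

In the Task A / Task B example above, both tasks would be forced to take Resource 1 before Resource 2, so the hold-one-wait-for-the-other cycle can never arise.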
3. Priority Inversion Problem in RTOS
• Priority inversion occurs when a lower-priority task holds a resource that a
higher-priority task needs, causing the higher-priority task to be blocked. This
situation can cause a system to miss deadlines, which is especially problematic
in real-time systems where tasks must meet strict timing requirements.
Priority Inversion Example:
Scenario: Task A has the highest priority, Task B has medium priority, and Task C
has the lowest priority. Task A requests a resource, but Task B, which has the
resource, is still running. Task C, which does not need the resource, starts running
and finishes before Task B releases the resource. As a result, Task A has to wait until
Task C finishes, even though it should have been executed earlier due to its higher
priority.
Solutions to Priority Inversion:
• Priority Inheritance Protocol (PIP):
Description: When a low-priority task holds a resource needed by a high-priority
task, the low-priority task temporarily inherits the higher priority until it releases
the resource. This prevents lower-priority tasks from blocking higher-priority ones.
Example: In an RTOS for embedded systems, if Task B (medium priority) holds a
resource needed by Task A (high priority), Task B temporarily inherits Task A’s
priority until it releases the resource. This ensures that Task A is not blocked by
Task B unnecessarily.
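The inheritance rule itself reduces to a one-line calculation in a toy model (the numeric convention here, lower value means higher priority, is an assumption of this sketch):

```python
# Toy sketch of the priority-inheritance rule: while a task holds a
# resource that higher-priority tasks are blocked on, it runs at the
# highest priority among itself and its blockers.
def effective_priority(base_priority, blocked_priorities):
    """Priority the resource holder runs at under priority inheritance
    (lower number = higher priority in this sketch)."""
    return min([base_priority] + list(blocked_priorities))

# Task B (base priority 2) holds a mutex; Task A (priority 0) blocks on it.
print(effective_priority(2, [0]))   # B temporarily runs at priority 0
# Once B releases the mutex, no one is blocked and B reverts to priority 2.
print(effective_priority(2, []))
```

With B boosted to priority 0, the medium-priority Task C from the earlier scenario can no longer preempt it, so the inversion window is bounded by B's critical section alone.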
• Priority Ceiling Protocol (PCP):
Description: This approach assigns a ceiling priority to each resource, which
is the highest priority of any task that might access it. When a task locks a
resource, it temporarily assumes the ceiling priority, thus preventing lower-
priority tasks from interfering.
Example: In a distributed real-time control system, when a task locks a critical
resource, it automatically operates at the ceiling priority to prevent other
tasks from interfering with it.
Managing Synchronization and Resource Contention in RTOS
To prevent deadlock and priority inversion from causing failures or delays in real-
time systems, RTOSs implement various synchronization mechanisms and
resource management policies:
a) Mutexes and Semaphores: These provide simple but effective ways to manage
resource access and ensure that critical sections are protected. Binary semaphores
(mutexes) can be used for mutual exclusion, and counting semaphores can handle
multiple instances of shared resources.
b) Task Prioritization: Assigning appropriate priorities to tasks based on their
criticality and time constraints helps in managing the priority inversion problem.
Priority inheritance and priority ceiling protocols are key mechanisms for
handling priority inversion in RTOS.
c) Timeouts and Deadlock Detection: In some cases, the RTOS can monitor tasks
for possible deadlocks and implement timeouts.
d) Preemption and Scheduling: Preemptive scheduling, along with priority-based
task execution, helps prevent lower-priority tasks from blocking higher-priority
ones, particularly in hard real-time systems.