Lecture 04: Real-Time Operating Systems (RTOS)
Real-Time Systems
 Severe consequences result if logical or timing
  correctness is not met
 Two types exist
   ◼ Soft real-time
       Tasks are performed as fast as possible
       Late completion of jobs is undesirable but not fatal.
       System performance degrades as more and more jobs miss
        deadlines
       Example:
         ◼ Online Databases
Real-Time Systems (cont.)
  ◼ Hard real-time
      Tasks have to be performed on time
      Failure to meet deadlines is fatal
      Example:
        ◼ Flight Control System
  ◼ These are qualitative definitions
Hard and Soft Real Time Systems
 Hard Real Time System
  ◼ Validation by provably correct procedures or
    extensive simulation that the system always meets
    the timing constraints
 Soft Real Time System
  ◼ Demonstration of jobs meeting some statistical
    constraints suffices.
 Example – Multimedia System
  ◼ 25 frames per second on average
Most Real-Time Systems are embedded
 An embedded system is a computer built into a
  system but not seen by users as being a computer
 Examples
   ◼   FAX machines
   ◼   Copiers
   ◼   Printers
   ◼   Scanners
   ◼   Routers
   ◼   Robots
Role of an OS in Real Time Systems
 Standalone Applications
  ◼ Often no OS involved
  ◼ Microcontroller-based Embedded Systems
 Some Real Time Applications are huge & complex
  ◼   Multiple threads
  ◼   Complicated Synchronization Requirements
  ◼   File system / Network / Windowing support
  ◼   OS primitives reduce the software design time
Features of Real Time OS (RTOS)
   Scheduling.
   Resource Allocation.
   Interrupt Handling.
   Other issues like kernel size.
Foreground/Background Systems
 Small systems of low complexity
 These systems are also called “super-loops”
 An application consists of an infinite loop of
  desired operations (background)
 Interrupt service routines (ISRs) handle
  asynchronous events (foreground)
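A minimal sketch of such a super-loop, with an ISR setting a flag that the background loop polls. The UART names and the `background_step` helper are illustrative, not from any particular system; a real application would call `background_step` from an infinite `for (;;)` loop.

```c
#include <stdint.h>

/* Flags set by the foreground (ISR); names are illustrative. */
static volatile uint8_t rx_ready = 0;
static volatile uint8_t rx_byte  = 0;
static unsigned processed = 0;

/* Foreground: the ISR records the asynchronous event. */
void uart_rx_isr(uint8_t byte)
{
    rx_byte  = byte;
    rx_ready = 1;
}

/* One pass of the background loop. A real system runs this forever:
   for (;;) { background_step(); } */
void background_step(void)
{
    if (rx_ready) {      /* task-level response: handled when its turn comes */
        rx_ready = 0;
        processed++;     /* ... process rx_byte here ... */
    }
}
```

Note how the event is only *recorded* in the ISR; the actual processing waits until the background loop reaches it, which is exactly the task-level response delay discussed above.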
Foreground/Background Systems (cont.)
 Critical operations must be performed by the
  ISRs to ensure timing correctness
 Thus, ISRs tend to take longer than they should
 Task-Level Response
  ◼ Information for a background module is not
    processed until the module gets its turn
Foreground/Background Systems (cont.)
 The execution time of typical code is not
  constant
 If the code is modified, the timing of the loop is
  affected
 Most high-volume microcontroller-based
  applications are F/B systems
   ◼ Microwave ovens
   ◼ Telephones
   ◼ Toys
Foreground/Background Systems (cont.)
 From a power consumption point of view, it might be
  better to halt and perform all processing in ISRs
Multitasking Systems
 Like F/B systems with multiple backgrounds
 Allow programmers to manage complexity
  inherent in real-time applications
Multitasking Systems (cont.)
[figure omitted]
Scheduling in RTOS
 More information about the tasks is known
  ◼   Number of tasks
  ◼   Resource Requirements
  ◼   Execution time
  ◼   Deadlines
 Because the system is more deterministic, better
  scheduling algorithms can be devised.
Scheduling Algorithms in RTOS
 Clock Driven Scheduling
 Weighted Round Robin Scheduling
 Priority Scheduling
Scheduling Algorithms in RTOS (cont.)
 Clock Driven
  ◼ All parameters about jobs (execution time/deadline)
    known in advance.
  ◼ Schedule can be computed offline or at some
    regular time instances.
  ◼ Minimal runtime overhead.
  ◼ Not suitable for many applications.
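A common clock-driven implementation is the cyclic executive: the schedule is a table computed offline, and at run time each timer tick merely indexes it, which is why the runtime overhead is minimal. The jobs, frame count, and table contents below are illustrative.

```c
/* Sketch of a cyclic executive for clock-driven scheduling. */
typedef void (*job_fn)(void);

static unsigned a_runs = 0, b_runs = 0;
static void job_A(void) { a_runs++; /* ... real work ... */ }
static void job_B(void) { b_runs++; /* ... real work ... */ }

#define FRAMES 4   /* frames per major cycle */

/* Offline-computed schedule: which job owns each frame (0 = idle). */
static job_fn schedule[FRAMES] = { job_A, job_B, job_A, 0 };

/* Called at every frame boundary, e.g. from a timer interrupt. */
void on_frame_tick(unsigned tick)
{
    job_fn job = schedule[tick % FRAMES];
    if (job)
        job();
}
```

Because the table is fixed in advance, the approach only works when all execution times and deadlines are known, as the slide notes.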
Scheduling Algorithms in RTOS (cont.)
 Weighted Round Robin
  ◼ Jobs scheduled in FIFO manner
   ◼ The time quantum given to a job is proportional to its weight
   ◼ Example use: high-speed switching networks
       QoS guarantees
  ◼ Not suitable for precedence constrained jobs.
       Job B can run only after Job A. No point in giving a time
        quantum to Job B before Job A completes.
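One round of weighted round robin can be sketched as follows; the `BASE_TICKS` value and the job structure are illustrative assumptions, not from a real switch implementation.

```c
#include <stddef.h>

/* Sketch: one WRR round. Jobs are served in FIFO order, and each
   receives weight * BASE_TICKS ticks of service per round. */
#define BASE_TICKS 2

typedef struct {
    unsigned weight;
    unsigned remaining;   /* outstanding work, in ticks */
} job_t;

/* Runs one round over all jobs; returns total ticks consumed. */
unsigned wrr_round(job_t *jobs, size_t n)
{
    unsigned used = 0;
    for (size_t i = 0; i < n; i++) {              /* FIFO order */
        unsigned q   = jobs[i].weight * BASE_TICKS;
        unsigned run = jobs[i].remaining < q ? jobs[i].remaining : q;
        jobs[i].remaining -= run;
        used += run;
    }
    return used;
}
```

A job with weight 3 thus gets three times the service per round of a job with weight 1, which is how bandwidth (QoS) guarantees are apportioned.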
Scheduling Algorithms in RTOS (cont.)
 Priority Scheduling
  ◼ Processor never left idle when there are ready tasks
  ◼ Processor allocated to processes according to
    priorities
  ◼ Priorities
      Static - at design time
      Dynamic - at runtime
Priority Scheduling
 Earliest Deadline First (EDF)
   ◼ Process with earliest deadline given highest priority
 Least Slack Time First (LSF)
    ◼ slack = time to deadline − remaining execution time
 Rate Monotonic Scheduling (RMS)
   ◼ For periodic tasks
    ◼ A task's priority is inversely proportional to its period
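The EDF rule is simple to state in code: among the ready tasks, pick the one with the earliest absolute deadline. The task structure below is a minimal illustrative layout, not a real kernel's.

```c
#include <stddef.h>

typedef struct {
    int      id;
    unsigned deadline;   /* absolute deadline, in ticks */
    int      ready;      /* nonzero if ready to run */
} task_t;

/* Returns the index of the ready task with the earliest deadline,
   or -1 if no task is ready. */
int edf_pick(const task_t *tasks, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].deadline < tasks[best].deadline))
            best = (int)i;
    }
    return best;
}
```

LSF works the same way but compares slack values instead of deadlines, and RMS compares (fixed) periods.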
Schedulers
 Also called “dispatchers”
 Schedulers are parts of the kernel responsible
  for determining which task runs next
 Most real-time kernels use priority-based
  scheduling
   ◼ Each task is assigned a priority based on its
     importance
   ◼ The priority is application-specific
Priority-Based Kernels
 There are two types
  ◼ Non-preemptive
  ◼ Preemptive
Non-Preemptive Kernels
 Perform “cooperative multitasking”
   ◼ Each task must explicitly give up control of the CPU
   ◼ This must be done frequently to maintain the illusion of
     concurrency
 Asynchronous events are still handled by ISRs
   ◼ ISRs can make a higher-priority task ready to run
    ◼ But ISRs always return to the interrupted task
Non-Preemptive Kernels (cont.)
[figure omitted]
Advantages of Non-Preemptive Kernels
 Interrupt latency is typically low
 Can use non-reentrant functions without fear of
  corruption by another task
   ◼ Because each task can run to completion before it
     relinquishes the CPU
   ◼ However, non-reentrant functions should not be allowed
     to give up control of the CPU
 Task-level response time is given by the duration
  of the longest task
    ◼ Much lower than with F/B systems
Advantages of Non-Preemptive Kernels (cont.)
 Less need to guard shared data through the use
  of semaphores
   ◼ However, this rule is not absolute
   ◼ Shared I/O devices can still require the use of
     mutual exclusion semaphores
   ◼ A task might still need exclusive access to a printer
Disadvantages of Non-Preemptive Kernels
 Responsiveness
  ◼ A higher priority task might have to wait for a long
    time
  ◼ Response time is nondeterministic
 Very few commercial kernels are non-
  preemptive
Preemptive Kernels
 The highest-priority task ready to run is always
  given control of the CPU
   ◼ If an ISR makes a higher-priority task ready, the
     higher-priority task is resumed (instead of the
     interrupted task)
 Most commercial real-time kernels are
  preemptive
Preemptive Kernels (cont.)
[figure omitted]
Advantages of Preemptive Kernels
 Execution of the highest-priority task is
  deterministic
 Task-level response time is minimized
Disadvantages of Preemptive Kernels
 Should not use non-reentrant functions unless
  exclusive access to these functions is ensured
Reentrant Functions
 A reentrant function can be used by more than
  one task without fear of data corruption
 It can be interrupted and resumed at any time
  without loss of data
 It uses local variables (CPU registers or
  variables on the stack)
 Protect data when global variables are used
Reentrant Function Example
void strcpy(char *dest, const char *src)
{
   /* Copies each character, including the terminating '\0'.
      All state lives in parameters and locals (registers or
      stack), so the function is reentrant. */
   while ((*dest++ = *src++) != '\0') {
      ;
   }
}
Non-Reentrant Function Example
int Temp;                /* global: shared by every caller */
void swap(int *x, int *y)
{
    Temp = *x;           /* if a higher-priority task preempts here */
    *x = *y;             /* and also calls swap(), it overwrites    */
    *y = Temp;           /* Temp and corrupts this task's data      */
}
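The fix follows directly from the reentrancy rules above: move the temporary onto the stack, so every invocation gets its own copy. The function name here is ours.

```c
/* Reentrant version of swap: the temporary is a local variable,
   so an interrupting task that also calls this function cannot
   corrupt a preempted caller's data. */
void swap_reentrant(int *x, int *y)
{
    int temp = *x;   /* lives on this caller's stack */
    *x = *y;
    *y = temp;
}
```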
Non-Reentrant Function Example (cont.)
[figure omitted]
Resource Allocation in RTOS
 Resource Allocation
    ◼ The same issues as with scheduling apply here
    ◼ Resources can be allocated in
        Weighted Round Robin
        Priority Based
  Some resources are non-preemptible
   ◼ Example: semaphores
 Priority inversion problem may occur if priority
  scheduling is used
Priority Inversion Problem
   Common in real-time kernels
   Suppose task 1 has a higher priority than task 2
   Also, task 2 has a higher priority than task 3
   If mutual exclusion is used in accessing a
    shared resource, priority inversion may occur
Priority Inversion Example
[figure omitted]
A Solution to Priority Inversion Problem
 We can correct the problem by raising the
  priority of task 3
  ◼ Just for the time it accesses the shared resource
  ◼ After that, return to the original priority
   ◼ What if task 3 finishes the access before being
     preempted by task 1?
       We incur the overhead for nothing
A Better Solution to the Problem
 Priority Inheritance
   ◼ Automatically change the task priority when needed
   ◼ The task that holds the resource will inherit the
     priority of the task that waits for that resource until
     it releases the resource
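The inheritance rule can be sketched in a few lines. The `tcb_t`/`pi_mutex_t` types, field layout, and function names below are hypothetical, not a real kernel API; blocking and waiting-list management are elided.

```c
/* Illustrative sketch of priority inheritance on a mutex. */
typedef struct {
    int priority;        /* current (possibly inherited) priority */
    int base_priority;   /* priority assigned at design time      */
} tcb_t;

typedef struct { tcb_t *owner; } pi_mutex_t;

/* When 'waiter' blocks on a mutex held by a lower-priority task,
   the owner temporarily inherits the waiter's higher priority. */
void pend_with_inheritance(pi_mutex_t *m, tcb_t *waiter)
{
    if (m->owner && m->owner->priority < waiter->priority)
        m->owner->priority = waiter->priority;   /* inherit */
    /* ... block the waiter until the mutex is released ... */
}

/* On release, the former owner reverts to its base priority. */
void pi_release(pi_mutex_t *m)
{
    if (m->owner)
        m->owner->priority = m->owner->base_priority;
    m->owner = 0;
}
```

Because the priority is raised only while the resource is actually contended, this avoids the wasted overhead of the manual-boost scheme described on the previous slide.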
Priority Inheritance Example
[figure omitted]
Assigning Task Priorities
 Not trivial
 In most systems, not all tasks are critical
    ◼ Non-critical tasks are obviously low priority
 Most real-time systems have a combination of
  soft and hard requirements
A Technique for Assigning Task Priorities
 Rate Monotonic Scheduling (RMS)
  ◼ Priorities are based on how often tasks execute
 Assumption in RMS
  ◼ All tasks are periodic with regular intervals
  ◼ Tasks do not synchronize with one another, share
    data, or exchange data
  ◼ Preemptive scheduling
RMS Example
[figure omitted]
RMS: CPU Time and Number of Tasks
[figure omitted]
RMS: CPU Time and Number of Tasks (cont.)
   The upper bound for an infinite number of tasks
    is ln 2 ≈ 0.693
    ◼ To meet all hard real-time deadlines based on RMS,
      CPU use of all time-critical tasks should be less
      than 70%
    ◼ Note that you can still have non-time-critical tasks
      in a system, so 100% of the CPU time can be used
    ◼ Using 100%, however, is not desirable, because it
      leaves no room for code changes or added features later
RMS: CPU Time and Number of Tasks (cont.)
 Note that, in some cases, the highest-rate task
  might not be the most important task
    ◼ Ultimately, the application dictates the priorities
   ◼ However, RMS is a starting point
Other RTOS issues
 Interrupt Latency should be very small
   ◼ Kernel has to respond to real time events
   ◼ Interrupts should be disabled for minimum possible
     time
 For embedded applications, the kernel size should
  be small
   ◼ Should fit in ROM
 Sophisticated features can be removed
   ◼ No Virtual Memory
   ◼ No Protection
Mutual Exclusion
 The easiest way for tasks to communicate is
  through shared data structures
  ◼ Global variables, pointers, buffers, linked lists, and
    ring buffers
 Must ensure that each task has exclusive access
  to the data to avoid data corruption
Mutual Exclusion (cont.)
 The most common methods are:
  ◼   Disabling interrupts
  ◼   Performing test-and-set operations
  ◼   Disabling scheduling
  ◼   Using semaphores
Disabling and Enabling Interrupts
 The easiest and fastest way to gain exclusive
  access
 Example:
     Disable interrupts;
     Access the resource;
     Enable interrupts;
Disabling and Enabling Interrupts (cont.)
 This is the only way that a task can share
  variables with an ISR
 However, do not disable interrupts for too long,
  because it adversely impacts the interrupt latency
 Good kernel vendors should provide the
  information about how long their kernels will
  disable interrupts
Test-and-Set (TAS) Operations
 Two functions could agree to access a resource
  based on a global variable value
 If the variable is 0, the function has access
   ◼ To prevent the other from accessing the resource, the
     function sets the variable to 1
 TAS operations must be performed indivisibly by
  the CPU (e.g., 68000 family)
 Otherwise, you must disable the interrupts when
  doing TAS on the variable
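On modern toolchains the indivisible TAS primitive is available portably via C11's `atomic_flag`, so no explicit interrupt disabling is needed. A minimal lock sketch (the function names are ours):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* atomic_flag_test_and_set is guaranteed indivisible by the C11
   standard; the flag is clear (0) when the resource is free. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

bool try_acquire(void)
{
    /* test-and-set returns the PREVIOUS value:
       false means the lock was free and is now ours. */
    return !atomic_flag_test_and_set(&lock_flag);
}

void release_lock(void)
{
    atomic_flag_clear(&lock_flag);
}
```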
Semaphores
 Invented by Edsger Dijkstra in the mid-1960s
 Offered by most multitasking kernels
 Used for:
  ◼ Mutual exclusion
  ◼ Signaling the occurrence of an event
  ◼ Synchronizing activities among tasks
Semaphores (cont.)
 A semaphore is a key that your code acquires in
  order to continue execution
 If the key is already in use, the requesting task is
  suspended until the key is released
 There are two types
   ◼ Binary semaphores
       0 or 1
   ◼ Counting semaphores
       >= 0
Semaphore Operations
 Initialize (or create)
   ◼ Value must be provided
   ◼ Waiting list is initially empty
 Wait (or pend)
   ◼ Used for acquiring the semaphore
   ◼ If the semaphore is available (the semaphore value is positive), the
     value is decremented, and the task is not blocked
   ◼ Otherwise, the task is blocked and placed in the waiting list
   ◼ Most kernels allow you to specify a timeout
   ◼ If the timeout occurs, the task will be unblocked and an error code
     will be returned to the task
Semaphore Operations (cont.)
 Signal (or post)
  ◼ Used for releasing the semaphore
  ◼ If no task is waiting, the semaphore value is
    incremented
  ◼ Otherwise, make one of the waiting tasks ready to
    run but the value is not incremented
   ◼ Which waiting task should receive the key?
      Highest-priority waiting task
      First waiting task
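The pend/post semantics above can be sketched with a simplified counting semaphore. A real kernel would block the calling task and keep a waiting list; in this illustrative demo, `sem_pend` merely reports whether the caller would block, and the type and function names are ours.

```c
#include <stdbool.h>

typedef struct { int value; } sem_demo_t;   /* counting semaphore, >= 0 */

bool sem_pend(sem_demo_t *s)          /* wait */
{
    if (s->value > 0) {
        s->value--;                   /* acquired: task is not blocked */
        return true;
    }
    return false;                     /* would block (waiting list) */
}

void sem_post(sem_demo_t *s)          /* signal */
{
    /* No waiters in this demo, so just increment; a kernel would
       instead ready the highest-priority (or first) waiting task
       without incrementing the value. */
    s->value++;
}
```

Initializing the value to 1 gives a binary semaphore usable for mutual exclusion; a larger initial value counts instances of a resource.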
Sharing I/O Devices
[figure omitted]