
Programming Embedded Systems in C Using 8051 Microcontroller

Introduction to Embedded Systems


Embedded systems are specialized computing systems that perform dedicated functions within larger systems. They are typically
constrained by specific real-time operational requirements, memory, and processing power. The 8051 microcontroller is a popular
choice for embedded system applications due to its simplicity, versatility, and widespread availability.

Programming in C for Embedded Systems


Programming the 8051 microcontroller in C involves understanding the specific architecture and capabilities of the microcontroller. C is widely used in embedded systems because of its efficiency and its control over hardware. Embedded C is the most popular language for developing electronic gadgets; each processor used in an electronic system is associated with embedded software.

Embedded C programming plays a key role in making the processor perform a specific function. In day-to-day life we use many electronic devices, such as mobile phones, washing machines, and digital cameras. All of these devices work on microcontrollers programmed in embedded C.

Keywords in Embedded C

#include, #define, sbit, sfr, data, idata, bdata, xdata, code, bit, volatile, interrupt, using, _nop_, _at_, far, near, sfr16, sfr32, reentrant,
register, auto, static, const, unsigned, signed, long, short, typedef, enum

Data Types in Embedded C

● char: 1 byte, used for storing characters or small integers.
● int: 2 bytes, used for storing integers.
● unsigned int: 2 bytes, used for storing positive integers.
● long: 4 bytes, used for storing long integers.
● float: 4 bytes, used for storing floating-point numbers.

Examples:

char a = 'A';
int b = 12345;
unsigned int c = 65535;
long d = 123456789;
float e = 3.14;

Basic Structure of a C Program for 8051

1. Include Header Files: Include necessary header files for the 8051.
2. Define Constants and Variables: Define any constants and global variables.
3. Main Function: Contains the core logic of the program.
4. Function Definitions: Define any additional functions used in the program.

Format:

#include <reg51.h> // Include necessary header files for the 8051

// Define Constants and Macros


#define CONSTANT_NAME value // Example: #define LED_PIN P1_0

// Define Global Variables


data_type variable_name; // Example: unsigned int counter = 0;

// Function Prototypes (Forward Declarations)


void function_name(parameters); // Example: void delay(unsigned int time);
// Main Function
void main() {
// Initialization Code
// Example: P1 = 0x00; // Initialize Port 1 to 0

// Main Loop
while(1) {
// Core Logic

}
}

// Function Definitions
void function_name(parameters) {
// Function Code

}
Examples of Arithmetic Operations
Addition of Two Numbers:

#include <reg51.h>

void main() {
unsigned int num1 = 25;
unsigned int num2 = 17;
unsigned int sum;

sum = num1 + num2;

while(1);
}
Subtraction of Two Numbers:
#include <reg51.h>

void main() {
unsigned int num1 = 25;
unsigned int num2 = 17;
unsigned int difference;

difference = num1 - num2;

while(1);
}

Multiplication of Two Numbers:

#include <reg51.h>

void main() {
unsigned int num1 = 25;
unsigned int num2 = 17;
unsigned long product;
product = num1 * num2;

while(1);
}
Division of Two Numbers:
#include <reg51.h>

void main() {
unsigned int num1 = 25;
unsigned int num2 = 5;
unsigned int quotient;

quotient = num1 / num2;

while(1);
}

Using I/O Ports


Example of blinking an LED connected to P1.0:

#include <reg51.h>

void delay() {
unsigned int i;
for(i = 0; i < 30000; i++);
}

void main() {
while(1) {
P1 = 0x01; // Turn ON the LED connected to P1.0
delay();
P1 = 0x00; // Turn OFF the LED
delay();
}
}

Timers and Counters

Example of generating a delay using Timer 0:


#include <reg51.h>

void Timer0_Delay() {
TMOD = 0x01; // Timer0 mode1 (16-bit)
TH0 = 0xFC; // Load high byte for ~1 ms delay (11.0592 MHz crystal)
TL0 = 0x66; // Load low byte
TR0 = 1; // Start Timer0
while(TF0 == 0); // Wait for overflow
TR0 = 0; // Stop Timer0
TF0 = 0; // Clear overflow flag
}

void main() {
while(1) {
P1 = 0x01; // Turn ON the LED
Timer0_Delay();
P1 = 0x00; // Turn OFF the LED
Timer0_Delay();
}
}
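The reload value 0xFC66 is not arbitrary: on the classic 8051 (12 clocks per machine cycle) with an 11.0592 MHz crystal assumed here, a 1 ms delay needs about 922 machine cycles, so the 16-bit timer is preloaded near 65536 - 922. A host-side sketch of the calculation (`timer0_reload` is a hypothetical helper; integer truncation gives 921 cycles, i.e. 0xFC67, while the listing rounds up to 0xFC66):

```c
/* Reload value for Timer 0 in 16-bit mode:
   one machine cycle = 12 crystal clocks, and the timer counts
   up until it overflows at 65536. */
unsigned int timer0_reload(unsigned long crystal_hz, unsigned long delay_us) {
    unsigned long cycles = (crystal_hz / 12UL) * delay_us / 1000000UL;
    return (unsigned int)(65536UL - cycles);
}
```

Splitting the result into TH0 (high byte) and TL0 (low byte) gives the two register loads shown above.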
Serial Communication
Serial communication is essential for data exchange between the microcontroller and other devices. The 8051 supports UART for
serial communication.

Example of transmitting a character via UART:

#include <reg51.h>

void UART_Init() {
TMOD = 0x20; // Timer1 mode2 (8-bit auto-reload)
TH1 = 0xFD; // 9600 baud rate
SCON = 0x50; // Mode 1, 8-bit UART
TR1 = 1; // Start Timer1
}

void UART_TxChar(char ch) {
SBUF = ch; // Load the data into the serial buffer
while(TI == 0); // Wait until transmission is complete
TI = 0; // Clear the transmit interrupt flag
}

void main() {
unsigned int i;
UART_Init();
while(1) {
UART_TxChar('A'); // Transmit character 'A'
for(i = 0; i < 30000; i++); // Crude software delay
}
}
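The value TH1 = 0xFD also follows from a formula: with SMOD = 0 and Timer 1 in mode 2, baud = crystal / (12 × 32 × (256 - TH1)). Assuming the common 11.0592 MHz crystal, a hypothetical host-side helper:

```c
/* Timer 1 mode 2 (8-bit auto-reload), SMOD = 0:
   baud = crystal / (12 * 32 * (256 - TH1)),
   so    TH1  = 256 - crystal / (384 * baud). */
unsigned char uart_th1(unsigned long crystal_hz, unsigned long baud) {
    return (unsigned char)(256UL - crystal_hz / (384UL * baud));
}
```

11059200 / (384 × 9600) = 3, so TH1 = 256 - 3 = 0xFD, matching the listing.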

Interrupts
The 8051 microcontroller supports five interrupt sources, which can be used to handle specific events.

Example of using an external interrupt (INT0):

#include <reg51.h>

void Ext0_ISR(void) interrupt 0 {
P1 ^= 0x01; // Toggle only the LED on P1.0
}

void main() {
IT0 = 1; // Configure INT0 for falling edge trigger
EX0 = 1; // Enable external interrupt 0
EA = 1; // Enable global interrupts

P1 = 0x01; // Turn ON the LED


while(1);
}
ADC Interfacing
Interfacing an Analog-to-Digital Converter (ADC) with the 8051 allows the microcontroller to process analog signals.
Example of interfacing an ADC (e.g., ADC0804) with the 8051:

#include <reg51.h>

#define ADC_DATA P1 // ADC data port

sbit ADC_RD = P3^7; // Read pin (renamed: reg51.h already defines RD)
sbit ADC_WR = P3^6; // Write pin (renamed: reg51.h already defines WR)
sbit ADC_INTR = P3^5; // Interrupt pin
sbit ADC_CS = P3^4; // Chip Select pin

void ADC_Init() {
ADC_CS = 0; // Enable ADC
ADC_WR = 1; // Set WR high
ADC_RD = 1; // Set RD high
}

unsigned char ADC_Read() {
unsigned char value;
ADC_DATA = 0xFF; // Configure the data port as input
ADC_WR = 0; // Pulse WR low ...
ADC_WR = 1; // ... then high to start conversion
while(ADC_INTR == 1); // Wait for conversion to complete
ADC_RD = 0; // Assert RD to latch the result
value = ADC_DATA; // Read the data
ADC_RD = 1; // Release RD
return value; // Return the data
}

void main() {
unsigned char adc_value;
ADC_Init();

while(1) {
adc_value = ADC_Read();
// Process the ADC value
}
}

Core Concepts in Embedded C Programming

1. Direct Access to Hardware

Embedded C allows direct manipulation of hardware registers, enabling precise control over the microcontroller's peripherals.

Example: Toggling an LED

#include <reg51.h>

void main() {
unsigned int i;
P1 = 0x00; // Set Port 1 to 0

while(1) {
P1 ^= 0x01; // Toggle the first pin of Port 1
for(i = 0; i < 30000; i++); // Simple delay loop
}
}
2. Real-Time Constraints
Embedded systems often need to meet real-time requirements, where tasks must be performed within specific time constraints.

Example: Using a Timer to Create a Precise Delay


#include <reg51.h>

void Timer0_Delay() {
TMOD = 0x01; // Timer0 mode1 (16-bit timer)
TH0 = 0xFC; // Load high byte for ~1 ms delay (11.0592 MHz crystal)
TL0 = 0x66; // Load low byte
TR0 = 1; // Start Timer0
while(TF0 == 0); // Wait for overflow
TR0 = 0; // Stop Timer0
TF0 = 0; // Clear overflow flag
}

void main() {
while(1) {
P1 ^= 0x01; // Toggle LED
Timer0_Delay();
}
}
3. Memory Management
Efficient use of limited memory resources is critical in embedded systems. This includes both RAM and ROM.

Example: Using Static Variables

#include <reg51.h>

void staticExample() {
static unsigned int counter = 0; // Static variable retains its value between function calls
counter++;
P1 = counter;
}

void main() {
unsigned int i;
while(1) {
staticExample();
for(i = 0; i < 30000; i++); // Simple delay loop
}
}
4. Interrupt Handling
Interrupts allow the microcontroller to respond quickly to external events, improving responsiveness and efficiency.

Example: External Interrupt

#include <reg51.h>

void External0_ISR(void) interrupt 0 {
P1 ^= 0x01; // Toggle LED on interrupt
}

void main() {
IT0 = 1; // Configure INT0 for falling edge trigger
EX0 = 1; // Enable external interrupt 0
EA = 1; // Enable global interrupts

while(1);
}
5. Low-Level Bit Manipulation
Embedded C often requires manipulation of individual bits for controlling hardware registers.
Example: Setting and Clearing Bits

#include <reg51.h>

void main() {
P1 |= 0x01; // Set bit 0 of Port 1
P1 &= ~0x01; // Clear bit 0 of Port 1

while(1);
}
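The same idea generalizes with bit-mask macros, a common idiom for working with arbitrary bit positions (the macro names here are illustrative, not from any particular header):

```c
/* Generic single-bit operations on a register or variable. */
#define SET_BIT(reg, n)    ((reg) |=  (1U << (n)))   /* force bit n to 1 */
#define CLEAR_BIT(reg, n)  ((reg) &= ~(1U << (n)))   /* force bit n to 0 */
#define TOGGLE_BIT(reg, n) ((reg) ^=  (1U << (n)))   /* flip bit n */
#define TEST_BIT(reg, n)   (((reg) >> (n)) & 1U)     /* read bit n */
```

With these, `SET_BIT(P1, 3)` would set P1.3 without disturbing the other pins.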
6. Code Efficiency and Optimization
Optimizing code for speed and size is important in resource-constrained embedded systems.

Example: Using Inline Assembly for Optimization

#include <reg51.h>
#include <intrins.h> // Required for _nop_()

void delay(unsigned int time) {
while(time--) {
_nop_(); // Intrinsic no-operation (one machine cycle)
}
}

void main() {
P1 = 0x00;

while(1) {
P1 ^= 0x01; // Toggle LED
delay(30000);
}
}
7. Peripheral Interface Programming
Interfacing with peripherals like ADC, DAC, I2C, SPI, etc., is common in embedded systems.

Example: Reading ADC Value (Assuming an ADC is connected)

#include <reg51.h>

#define ADC_DATA P1

void main() {
unsigned char adc_value;

// Initialization (pseudo-code, depends on specific ADC)


// ADC_Init();

while(1) {
// adc_value = ADC_Read();
adc_value = ADC_DATA; // Read ADC value from Port 1 (example)
P2 = adc_value; // Output ADC value to Port 2 (example)
}
}
8. State Machines
State machines are used to manage the state of an embedded system, making the code more modular and manageable.

Example: Simple State Machine for LED Control


#include <reg51.h>

enum State { OFF, ON, BLINK } state;

void main() {
unsigned int i;

state = OFF;

while(1) {
switch(state) {
case OFF:
P1 = 0x00; // LED off
break;
case ON:
P1 = 0x01; // LED on
break;
case BLINK:
P1 ^= 0x01; // Toggle LED
for(i = 0; i < 30000; i++); // Simple delay
break;
}
}
}
9. Error Handling and Debugging
Implementing error handling and using debugging techniques is crucial for developing reliable embedded systems.

Example: Simple Error Handling

#include <reg51.h>

#define ERROR_CODE 0xFF

void checkError(unsigned char error) {
if (error == ERROR_CODE) {
P1 = 0x00; // Turn off all LEDs to indicate error
while(1); // Halt system
}
}

void main() {
unsigned char error = 0;
unsigned int i;

// Simulate an error
error = ERROR_CODE;

checkError(error);

while(1) {
P1 ^= 0x01; // Toggle LED if no error
for(i = 0; i < 30000; i++); // Simple delay
}
}
Real-Time Operating System (RTOS)
Real-time operating systems (RTOS) are used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines. Such applications include industrial control, telephone switching equipment, flight control, and real-time simulations. With an RTOS, processing time is measured in tenths of seconds or less. The system is time-bound and has fixed deadlines: the processing must occur within the specified constraints, or the system fails.
Examples of real-time systems are airline traffic control systems, command-and-control systems, airline reservation systems, heart pacemakers, networked multimedia systems, robots, etc.
Real-time operating systems can be of four types:

1. Hard Real-Time Operating System: These operating systems guarantee that critical tasks complete within a strict

window of time.
For example, if a robot welding a car body welds too early or too late, the car cannot be sold, so the weld must be
completed exactly on time. Other examples: scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.

2. Soft Real-Time Operating System: This operating system allows some relaxation of the time limit.

For example: multimedia systems, digital audio systems, etc. Real-time systems use explicit, programmer-defined,
and controlled processes; a separate process handles each external event, and a process is activated when its
related event is signaled by an interrupt.
Multitasking is accomplished by scheduling processes for execution independently of each other. Each process is
assigned a priority level corresponding to the relative importance of the event it services, and the processor is
allocated to the highest-priority process. This type of schedule, called priority-based preemptive scheduling, is
used by real-time systems.

3. Firm Real-Time Operating System: An RTOS of this type must also follow deadlines. Missing one rarely causes

catastrophic failure, but it can have unintended consequences, such as a reduction in the quality of the product.
Example: multimedia applications.
4. Deterministic Real-Time Operating System: Consistency is the key in this type of real-time operating system. It

ensures that all tasks and processes execute with predictable timing every time, which makes it suitable for
applications in which timing accuracy is critical. Examples: INTEGRITY, PikeOS.

Key Characteristics of RTOS in Embedded Systems


1. Deterministic Behavior:
RTOS ensures deterministic behavior by providing a predictable and guaranteed response time for tasks. This characteristic is crucial
for applications where timing precision is paramount, such as control systems in robotics or industrial automation.

2. Task Scheduling:
RTOS employs various scheduling algorithms to manage the execution of tasks. Common scheduling algorithms include Rate
Monotonic Scheduling (RMS) and Earliest Deadline First (EDF), which prioritize tasks based on their deadlines.
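The core of EDF mentioned above is simply "run the ready task with the earliest absolute deadline". A minimal host-side illustration (the `Task` struct and `edf_pick` helper are hypothetical, not from any particular RTOS):

```c
/* One schedulable task: its current absolute deadline and
   whether it is ready to run. */
typedef struct { unsigned long deadline; int ready; } Task;

/* EDF selection: return the index of the ready task with the
   earliest deadline, or -1 if no task is ready. */
int edf_pick(const Task *tasks, int n) {
    int best = -1, i;
    for (i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].deadline < tasks[best].deadline))
            best = i;
    }
    return best;
}
```

RMS differs only in that the comparison key is the (fixed) period rather than the (dynamic) absolute deadline.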

3. Interrupt Handling:
Embedded systems often rely on interrupts to respond to external events promptly. RTOS is designed to handle interrupts efficiently,
minimizing latency and ensuring that critical tasks can be executed without delay.

4. Resource Management:
Efficient resource management is vital in embedded systems to allocate and deallocate resources effectively. RTOS provides
mechanisms for managing resources such as memory, CPU time, and peripherals, preventing resource conflicts and enhancing
system stability.

5. Real-Time Clocks and Timers:


RTOS incorporates real-time clocks and timers to keep track of time accurately. This feature is essential for applications that require
precise time synchronization, such as in the field of telecommunications or distributed control systems.

Challenges of Implementing RTOS in Embedded Systems

1. Resource Constraints:
Many embedded systems operate with limited resources, including memory and processing power. Implementing an RTOS in such
environments requires careful optimization to ensure efficient resource utilization without compromising performance.

2. Complexity:
RTOS can introduce complexity to the system, especially for developers accustomed to designing applications for general-purpose
operating systems. Learning to work with real-time constraints and scheduling can be challenging.

3. Cost:
RTOS solutions may come with licensing costs, which can be a concern for projects with strict budget constraints. Open-source
RTOS options, such as FreeRTOS and ChibiOS, provide cost-effective alternatives, but developers must weigh the trade-offs
carefully.

Benefits of RTOS in Embedded Systems

1. Predictable Performance:
RTOS guarantees predictable and consistent performance, ensuring that tasks are executed within specified time frames. This
predictability is crucial for applications where timing precision is paramount.

2. Improved Responsiveness:
The efficient handling of interrupts and prioritized task scheduling in RTOS results in improved system responsiveness. This is
particularly beneficial in applications like real-time control systems and responsive user interfaces.

3. Reliability and Safety:


In safety-critical applications, such as medical devices and automotive systems, the reliability of task execution is paramount. RTOS
enhances system reliability by minimizing the likelihood of task deadline misses, contributing to overall system safety.
4. Optimized Resource Utilization:
RTOS optimizes resource utilization by efficiently managing tasks and resources. This is crucial in embedded systems where
resources are often limited, ensuring that the system operates efficiently without unnecessary overhead.

Real-Time Operating Systems (RTOS) in Embedded Systems: A Closer Look

In the fast-paced world of technology, embedded systems have become ubiquitous, powering devices that range from
the everyday to the extraordinary. Real-Time Operating Systems (RTOS) are the unsung heroes that ensure these embedded
systems meet the precise and time-sensitive demands of their applications. Let’s delve deeper into the significance of RTOS,
exploring specific applications, key components, and emerging trends in the realm of embedded systems.

Applications of RTOS in Embedded Systems

Automotive Systems:
In modern vehicles, numerous embedded systems control critical functions such as engine management, braking, and airbag
deployment. RTOS ensures that these systems respond to real-time events, contributing to vehicle safety and performance.

Medical Devices:
RTOS plays a pivotal role in medical devices, where accuracy and reliability are paramount. From infusion pumps to patient
monitoring systems, RTOS ensures timely and precise execution of tasks, contributing to patient well-being and healthcare efficiency.

Industrial Automation:
Embedded systems are the backbone of industrial automation, controlling processes in manufacturing plants and ensuring seamless
operation of machinery. RTOS facilitates real-time control and coordination, optimizing production efficiency.

Telecommunications:
In the telecommunications sector, RTOS is crucial for managing network protocols, ensuring low-latency communication, and
handling real-time data streams. This is essential for applications like video conferencing, voice over IP (VoIP), and multimedia
streaming.

RTOS scheduling
An RTOS is valued for how quickly it can respond, and the advanced scheduling algorithm is the key component of that responsiveness.

The time-criticality of embedded systems varies from soft real-time washing-machine control systems to hard real-time aircraft safety systems. In situations like the latter, real-time requirements can be met only if the OS scheduler's behavior can be accurately predicted.

Many operating systems give the impression of executing multiple programs at once, but this multi-tasking is something of an illusion.
A single processor core can only run a single thread of execution at any one time. An operating system’s scheduler decides which
program, or thread, to run when. By rapidly switching between threads, it provides the illusion of simultaneous multitasking.

The flexibility of an RTOS scheduler enables a broad approach to process priorities, although an RTOS is more commonly focused
on a very narrow set of applications. An RTOS scheduler should give minimal interrupt latency and minimal thread switching
overhead. This is what makes an RTOS so relevant for time-critical embedded systems.

Aspect | General-purpose operating system | RTOS

Suitability for real-time applications | Not well suited, due to variable latency | Tailored for real-time requirements, ensuring precise timing and low latency

Focus and priority | Balances multitasking and versatility, making it suitable for a wide range of applications | Prioritizes precision and real-time performance over multitasking; best for time-critical tasks

Application choice | Typically used for a broad spectrum of computing needs, offering flexibility | Preferred for embedded systems with strict timing requirements and real-time tasks

Selection criteria | Based on the specific needs and timing constraints of the embedded system | Chosen when deterministic, predictable timing and real-time performance are paramount

MULTIPLE TASKS AND MULTIPLE PROCESSES:

Tasks and Processes in Embedded Systems

● Multiple Tasks in Embedded Systems:
  - Embedded systems often perform more than one function.
  - Different tasks can be triggered by environmental changes.
  - Example: in a telephone answering machine, recording calls and operating the control panel are distinct tasks.
● Defining Tasks:
  - Tasks perform logically distinct operations.
  - Tasks operate at different rates.
  - Example: recording calls and user control-panel operations in an answering machine.
● Processes:
  - A process is a single execution of a program.
  - Running the same program multiple times creates separate processes.
  - Each process has its own state (registers and memory).
● Address Space:
  - Some operating systems keep processes in separate address spaces using memory management units.
  - Lightweight real-time operating systems (RTOSs) may run processes in the same address space (threads).
● Example System - Compression Box:
  - Connected to serial ports on both ends.
  - Input: an uncompressed stream of bytes.
  - Output: a compressed string of bits based on a predefined table.
  - Needs to handle data input and output at different rates.
● Challenges in Handling Rates:
  - Irregular code can result from rate mismatches.
  - Elegant solutions involve data structures like queues.
  - Example: a queue of output bits sent to the serial port in 8-bit sets.
● Ensuring Proper Rates:
  - It is important to process inputs and outputs at the correct rates.
  - Example: spending too much time on output can result in dropped input characters.
● Rate Control Problems:
  - Example: a text compression box needs to handle input and output efficiently.
  - A control panel with a compression-mode button introduces asynchronous input.
  - Asynchronous input (e.g., a user button press) must be handled properly to maintain system functionality.
● Handling Asynchronous Input:
  - The compression-mode button may enable/disable compression.
  - Button presses are unpredictable and must be managed alongside regular operations.
  - Ensuring seamless operation despite asynchronous events is crucial.
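The queue-based fix mentioned above can be sketched as a ring buffer: the input side enqueues data at its own rate, and the output side dequeues whenever the serial port is ready, decoupling the two rates. A host-side sketch (names like `q_put` are illustrative):

```c
#define QSIZE 64  /* capacity chosen for illustration */

typedef struct {
    unsigned char buf[QSIZE];
    int head, tail, count;
} Queue;

void q_init(Queue *q) { q->head = q->tail = q->count = 0; }

/* Enqueue one byte; returns 0 if the queue is full (producer too fast). */
int q_put(Queue *q, unsigned char b) {
    if (q->count == QSIZE) return 0;
    q->buf[q->tail] = b;
    q->tail = (q->tail + 1) % QSIZE;
    q->count++;
    return 1;
}

/* Dequeue one byte; returns 0 if the queue is empty (nothing to send). */
int q_get(Queue *q, unsigned char *b) {
    if (q->count == 0) return 0;
    *b = q->buf[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    return 1;
}
```

In the compression box, the compressor would `q_put` output bits (packed into bytes) and the serial-port routine would `q_get` them at the line rate.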

Multirate Systems

Implementing code that satisfies timing requirements is even more complex when multiple rates of computation must be handled.
Multirate embedded computing systems are very common, including automobile engines, printers, and cell phones. In all these
systems, certain operations must be executed periodically, and each operation is executed at its own rate.
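For periodic processes running at different rates, the natural analysis window is the hyperperiod, the least common multiple of all the periods, since the whole schedule repeats after it. A small sketch, assuming integer periods:

```c
/* Euclid's algorithm for the greatest common divisor. */
unsigned long gcd_ul(unsigned long a, unsigned long b) {
    while (b) { unsigned long t = a % b; a = b; b = t; }
    return a;
}

/* Hyperperiod: least common multiple of all process periods. */
unsigned long hyperperiod(const unsigned long *period, int n) {
    unsigned long h = 1;
    int i;
    for (i = 0; i < n; i++)
        h = h / gcd_ul(h, period[i]) * period[i]; /* lcm(h, period[i]) */
    return h;
}
```

For example, processes with periods 4, 6, and 10 ms repeat their combined pattern every 60 ms.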

Timing Requirements on Processes

Processes can have several different types of timing requirements imposed on them by the application. The timing requirements on a
set of processes strongly influence the type of scheduling that is appropriate. A scheduling policy must define the timing requirements
that it uses to determine whether a schedule is valid. Before studying scheduling proper, we outline the types of process timing
requirements that are useful in embedded system design.

Figure 3.2 illustrates different ways in which we can define two important requirements on processes: release time and deadline.
The release time is the time at which the process becomes ready to execute; this is not necessarily the time at which it actually takes
control of the CPU and starts to run. An aperiodic process is by definition initiated by an event, such as external data arriving or data
computed by another process.

The release time is generally measured from that event, although the system may want to make the process ready at some interval
after the event itself. For a periodically executed process, there are two common possibilities.

In simpler systems, the process may become ready at the beginning of the period. More sophisticated systems, such as those with
data dependencies between processes, may set the release time at the arrival time of certain data, at a time after the start of the
period.

A deadline specifies when a computation must be finished. The deadline for an aperiodic process is generally measured from the
release time, since that is the only reasonable time reference. The deadline for a periodic process may in general occur at some time
other than the end of the period.

Rate requirements are also fairly common. A rate requirement specifies how quickly processes must be initiated.

The period of a process is the time between successive executions. For example, the period of a digital filter is defined by the time
interval between successive input samples.

The process’s rate is the inverse of its period. In a multirate system, each process executes at its own distinct rate.

The most common case for periodic processes is for the initiation interval to be equal to the period. However, pipelined execution of
processes allows the initiation interval to be less than the period. Figure 3.3 illustrates process execution in a system with four CPUs.
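The timing terms above can be captured in a small structure, and the absolute deadline of each instance of a periodic process (one that becomes ready at the start of each period) follows directly. A hypothetical sketch:

```c
typedef struct {
    unsigned long release;   /* time the process first becomes ready */
    unsigned long deadline;  /* relative deadline, measured from release */
    unsigned long period;    /* interval between successive initiations */
} ProcTiming;

/* Absolute deadline of the i-th instance (i = 0, 1, 2, ...) of a
   periodic process released at the beginning of each period. */
unsigned long abs_deadline(const ProcTiming *p, unsigned long i) {
    return p->release + i * p->period + p->deadline;
}
```

The rate is simply the inverse of `period`; note the relative deadline need not equal the period, matching the text.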
Process State and Scheduling

A process will be in one of three states: ready, executing, or waiting.

• Ready - A process enters the ready state when it receives its required data or when it enters a new period.
• Waiting - A process enters the waiting state when it needs data it has not yet received or when it has finished all its work
for the current period.
• Executing - A process enters the executing state only when it has all its data and is chosen to run.

The process state is shown in figure 3.

Each process has its own process control block (PCB): a data structure holding the information the OS uses to control the
process's state. The PCB is stored in the kernel's memory area. The main components of the PCB are
• Process ID
• Process priority
• Parent process
• Child process, and the address of the PCB of the next process to run.
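The PCB components listed above can be sketched as a C structure (a hypothetical layout; real kernels add many more fields, such as saved registers and memory-map information):

```c
enum ProcState { READY, EXECUTING, WAITING };

typedef struct PCB {
    int pid;             /* process ID */
    int priority;        /* scheduling priority */
    struct PCB *parent;  /* parent process */
    struct PCB *child;   /* first child process */
    struct PCB *next;    /* PCB of the next process that will run */
    enum ProcState state;/* current process state */
} PCB;
```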

Scheduling policy in RTOS:


Task Management in Real-Time Operating Systems (RTOS)

Each task in an embedded application runs in an infinite loop. To achieve efficient CPU utilization, an RTOS uses orderly control
transfer between tasks, monitoring system resources and task states to ensure timely execution. A real-time system fails if it doesn't
perform operations at the correct time, with consequences ranging from minor to severe. Kernel services in an RTOS must be fast
and predictable. Real-time applications consist of multiple tasks competing for system resources like memory and CPU time.
Scheduling models in the RTOS manage this competition, preventing any single task from monopolizing resources needed by more
important tasks. Scheduling policies determine task execution, aiming to meet timing requirements and maximize CPU utilization.

2.1. Simple Scheduling

A very simple scheduling policy is known as cyclo static scheduling or sometimes as Time Division Multiple Access Scheduling.
Time Division Multiple Access divides the time into equal-sized time slots over an interval equal to the length of the hyper-period H.
Processes always run in the same time slot. It is depending on the deadlines for some of the processes. Some time slots will be
empty. Since the time slots are of equal size, some short processes may have time left over in their time slot. Utilization is used as a
schedulability measure.The total CPU time of all the processes must be less than the hyper-period. An example of Time division
multiple access is shown in Figure 1.2. It shows three processes scheduled using TDMA.
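The utilization test just stated reduces to a single comparison: the summed CPU time of all processes must fit within the hyperperiod H. A sketch (hypothetical helper name):

```c
/* Cyclostatic/TDMA schedulability: the total CPU time of all
   processes over one hyperperiod must fit within the hyperperiod. */
int tdma_schedulable(const unsigned long *cpu_time, int n,
                     unsigned long hyperperiod) {
    unsigned long total = 0;
    int i;
    for (i = 0; i < n; i++)
        total += cpu_time[i];
    return total <= hyperperiod;
}
```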

Another scheduling policy that is slightly more sophisticated is round robin. Round robin uses the same hyperperiod as cyclostatic scheduling.

2.2 Round robin

Here also, the processes are scheduled in a cyclic fashion. However, if a process does not have any useful work to do, the
round-robin scheduler moves on to the next process in order to fill the time slot with useful work.

Example: All three processes execute during the first hyper-period, but during the second one, P1 has no useful work and is
skipped. The processes are always executed in the same order and are shown in Figure 1.3.
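The skip-if-idle behavior can be illustrated by recording which processes actually run during one hyperperiod (illustrative helper, not from any kernel):

```c
/* One hyperperiod of round-robin: visit processes in fixed order,
   skipping any that has no useful work, and record the execution
   order. Returns the number of processes actually run. */
int round_robin(const int *has_work, int n, int *order) {
    int i, run = 0;
    for (i = 0; i < n; i++)
        if (has_work[i])
            order[run++] = i;
    return run;
}
```

With `has_work = {0, 1, 1}` (P1 idle, as in the example), only P2 and P3 execute, in that fixed order.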
2.3. Pre-emptive Scheduling

Pre-emption is a mechanism to stop the current process and provide service to another process. In a preemptive model,
tasks can be forcibly suspended; this is initiated by an interrupt on the CPU. The OS schedules so that a higher-priority
task, when ready, preempts a lower-priority one by blocking the currently running task. This solves the problem of large
worst-case latency for higher-priority tasks.

In the preemptive scheduling model, a task must be in one of four states:

● Running - In this state, the task is in control of the CPU.

● Ready - In this state, the task is not blocked and is ready to receive control of the CPU when the scheduling policy
indicates it is the highest-priority unblocked task in the system.

Inactive and blocked are the two waiting states.

● Inactive - In the inactive state, the task is blocked and requires initialization in order to become ready.

● Blocked - In the blocked state, the task is waiting for something to happen or for a resource to become available.

There must be a way of interrupting the operation of the less important task and granting the resource to the more
important one. The highest-priority task that is ready to run is then always given control of the CPU. If an ISR makes a
higher-priority task ready, that task is resumed instead of the interrupted one. Most commercial real-time kernels are preemptive.
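The preemption rule just described, "the highest-priority task that is ready always gets the CPU", can be sketched as a selection over the four task states (illustrative code, not a real kernel):

```c
enum { INACTIVE, BLOCKED, READY, RUNNING };

typedef struct { int priority; int state; } RtTask;

/* Preemptive selection: the highest-priority task that is READY
   (or already RUNNING) gets the CPU. A larger number means higher
   priority; returns -1 if nothing can run. */
int preempt_pick(const RtTask *t, int n) {
    int best = -1, i;
    for (i = 0; i < n; i++)
        if ((t[i].state == READY || t[i].state == RUNNING) &&
            (best < 0 || t[i].priority > t[best].priority))
            best = i;
    return best;
}
```

If an ISR moves a blocked high-priority task to READY, the next call picks it instead of the interrupted lower-priority task, which is exactly the preemption described above.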
2.4. Context Switch

The set of registers that defines a process is known as its context, and switching from one process's register set to
another's is known as context switching. The data structure that holds the state of the process is the process control
block (PCB); that is, the context of a process is represented in its PCB. The time a context switch takes depends on the
hardware support. Context-switch time is pure overhead: the system does no useful work while switching.
Example of Context Switch

Consider that there are three processes P1, P2 and P3. Assume that the process P3 is in running state. Suddenly when an I/O
request is received, the scheduler loads the corresponding device driver, and then switches to process P1. When the time slice of P1
is exceeded, it schedules P2. While P2 is executing, an interrupt (corresponding to the previous I/O request) occurs. Hence, a context
switch occurs and interrupt service routine for I/O request is executed. After finishing the interrupt service, context switch again
occurs to resume the previous process P2.
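At its core, each switch in the example above is "save the outgoing process's register set into its PCB, then load the incoming process's register set". A toy host-side sketch of that save/restore step (the register names are illustrative; real kernels do this in assembly on the interrupt stack):

```c
/* A toy "register set" standing in for the CPU context. */
typedef struct { unsigned int acc, psw, sp, pc; } Context;

/* A minimal PCB holding only the saved context. */
typedef struct { Context ctx; } ProcessCB;

/* Save the current CPU context into a process's PCB. */
void save_context(ProcessCB *p, const Context *cpu) { p->ctx = *cpu; }

/* Restore a process's saved context back into the CPU. */
void restore_context(const ProcessCB *p, Context *cpu) { *cpu = p->ctx; }
```

In the example, P2's context is saved before the I/O interrupt service routine runs, and restored afterward so P2 resumes exactly where it was stopped.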

Context Switching in OS (Operating System)

Context switching is a technique used by the operating system to switch a process from one state to another so that it can
perform its function using the system's CPU. When a switch occurs, the OS stores the old running process's status in the
form of registers and assigns the CPU to a new process to execute its tasks. While the new process runs, the previous
process waits in a ready queue; its execution later resumes from the point where it was stopped. Context switching is what
characterizes a multitasking operating system, in which multiple processes share the same CPU to perform multiple tasks
without the need for additional processors in the system.

The need for context switching

Context switching allows a single CPU to be shared across all processes, completing their execution while storing the
status of the system's tasks. When a process is reloaded, its execution resumes at the same point where it was stopped.

Following are the reasons that describe the need for context switching in the Operating system.

1. Switching from one process to another cannot happen directly. Context switching lets the operating system
   move between multiple processes, using the CPU's resources to accomplish their tasks while storing each one's context, so
   that the process can be resumed later at the same point. If the currently running process's context were not stored,
   its data would be lost while switching between processes.

2. If a high-priority process enters the ready queue, the currently running process is preempted so that the high-priority
   process can complete its task.

3. If the running process requires an I/O resource, it is switched out so that another process can use
   the CPU. When the I/O requirement is met, the old process moves back to the ready state to wait for its turn on the CPU.
   Context switching stores the state of the process so that it can resume its task later; otherwise, the process would need to
   restart its execution from the beginning.

4. If an interrupt occurs while a process is running, its status is saved in the form of registers by a context
   switch. After the interrupt is resolved, the process moves from the waiting state back to the ready state and later resumes its
   execution at the point where the interruption occurred.

5. Context switching allows a single CPU to handle multiple process requests concurrently without the need for any
   additional processors.

Example of Context Switching

Suppose that several processes are described by Process Control Blocks (PCBs), and one of them is running on the CPU to execute its task.
While it runs, another process arrives in the ready queue with a higher priority for completing its
task. Context switching replaces the current process with the new process that requires the CPU to finish
its work. While switching, the context switch saves the status of the old process in its PCB. When the old process is later reloaded
onto the CPU, it continues execution from the point at which the new process stopped it. If its state were not saved, it
would have to restart from the beginning. In this way, context switching helps the operating system switch between
processes, storing and reloading each one whenever it needs to execute its tasks.

Context switching triggers

Following are the three types of context switching triggers as follows.

1. Interrupts

2. Multitasking

3. Kernel/User switch

Interrupts: When the CPU has requested data from a disk, for example, and the interrupt arrives, a context switch automatically
transfers control to the interrupt handler, saving only the part of the context the hardware requires so that the interrupt can be handled in less time.

Multitasking: Context switching is the defining characteristic of multitasking: it allows a process to be switched off the CPU so that
another process can run. When a process is switched out, its old state is saved so that its execution can resume at the same point
later.

Kernel/User Switch: This trigger occurs when the operating system switches between user mode and kernel mode.
What is the PCB?

A PCB (Process Control Block) is a data structure the operating system uses to store all the information related to a process.
For example, when a process is created, updated, switched, or terminated, the corresponding information is recorded in its PCB.

Steps for Context Switching

Several steps are involved in context switching between processes. The following diagram represents the context switch from
process P1 to P2 when an interrupt, an I/O need, or a higher-priority process appears in the ready queue of the PCB.

As we can see in the diagram, initially the P1 process is running on the CPU to execute its task, and at the same time another
process, P2, is in the ready state. If an error or interrupt occurs, or the process requires input/output, P1
switches from the running state to the waiting state. Before changing the state of process P1, the context switch saves the context
of P1, in the form of its registers and program counter, into PCB1. It then loads the state of process P2 from
PCB2 and moves P2 from the ready state to the running state.

The following steps are taken when switching from process P1 to process P2:

1. First, the context switch saves the state of the running process P1, in the form of its program counter and registers, into its
   PCB (Process Control Block).

2. Next, update PCB1 for process P1 and move the process to the appropriate queue, such as the ready queue, the I/O queue, or the
   waiting queue.

3. After that, another process is brought into the running state: a new process is selected from the ready state to be
   executed, for example the process with the highest priority.

4. Now, update the PCB (Process Control Block) of the selected process P2. This includes switching its state
   from ready to running, or from another state such as blocked, exit, or suspend.

5. If process P2 was executed earlier, restore its saved status so that it resumes execution at the same
   point where it was interrupted.

Similarly, process P2 is later switched off the CPU so that process P1 can resume execution: P1 is reloaded from PCB1
into the running state and continues its task at the same point. Without the saved context, this information would be lost, and the
process would have to restart its execution from the beginning.
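The save/restore sequence in the steps above can be sketched in C. The `cpu_t` and `pcb_t` types and their field names are hypothetical simplifications: a real context switch runs in privileged mode and manipulates hardware registers directly.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical models of the CPU register file and a PCB. */
typedef struct { uint32_t pc; uint32_t regs[8]; } cpu_t;
typedef enum { READY, RUNNING, WAITING } state_t;
typedef struct { int pid; state_t state; uint32_t pc; uint32_t regs[8]; } pcb_t;

/* Steps 1-5 above: save the outgoing process's context into its PCB,
   move it to the ready queue, then restore the incoming process's
   context from its PCB and mark it running. */
void context_switch(cpu_t *cpu, pcb_t *out, pcb_t *in)
{
    out->pc = cpu->pc;                               /* step 1: save PC    */
    memcpy(out->regs, cpu->regs, sizeof out->regs);  /* step 1: save regs  */
    out->state = READY;                              /* step 2: requeue P1 */
    cpu->pc = in->pc;                                /* step 4: load P2    */
    memcpy(cpu->regs, in->regs, sizeof cpu->regs);
    in->state = RUNNING;                             /* step 5: dispatch   */
}

/* Tiny self-check: P1 (pc=100) is switched out, P2 (pc=200) is switched
   in; returns 1 when every state change matches the steps above. */
int context_switch_demo(void)
{
    cpu_t cpu = { .pc = 100 };
    pcb_t p1 = { .pid = 1, .state = RUNNING };
    pcb_t p2 = { .pid = 2, .state = READY, .pc = 200 };
    context_switch(&cpu, &p1, &p2);
    return cpu.pc == 200 && p1.pc == 100 &&
           p1.state == READY && p2.state == RUNNING;
}
```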
ROHINI College of Engineering and Technology

Priority Based Scheduling


Earliest-Deadline-First Scheduling
Earliest Deadline First (EDF) is one of the best-known algorithms for real-time
processing. It is an optimal dynamic algorithm: in dynamic priority
algorithms, the priority of a task can change during its execution. EDF produces a
valid schedule whenever one exists.
EDF is a preemptive scheduling algorithm that dispatches the process with
the earliest deadline. If an arriving process has an earlier deadline than the running
process, the system preempts the running process and dispatches the arriving
process.
A task with a shorter deadline has a higher priority: EDF always executes the
job with the earliest absolute deadline. It can schedule task sets that the rate
monotonic algorithm cannot. EDF is optimal among all scheduling algorithms that
do not keep the processor idle unnecessarily, and its upper bound on processor
utilization is 100 %.
Whenever a new task arrives, the ready queue is sorted so that the task closest to
the end of its period is assigned the highest priority. The system preempts the
running task if it is no longer at the head of the queue after the sort.
If two tasks have the same absolute deadlines, choose one of the two at
random (ties can be broken arbitrarily). The priority is dynamic since it changes
for different jobs of the same task.
EDF can also be applied to aperiodic task sets. Its optimality guarantees
that the maximum lateness is minimized when EDF is applied.
Many real-time systems do not provide hardware preemption, however, so other
algorithms must be employed.
In scheduling theory, a real-time system comprises a set of real-time tasks;
each task consists of an infinite or finite stream of jobs. The task set can be
scheduled by a number of policies including fixed priority or dynamic priority
algorithms.

EC8791-Embedded and Realtime Systems


The success of a real-time system depends on whether all the jobs of all the
tasks can be guaranteed to complete their executions before their deadlines. If
they can, then we say the task set is schedulable.
The schedulability condition is that the total utilization of the task set must
be less than or equal to 1.
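This schedulability condition is easy to check in code. A minimal sketch, where each task i is described by its worst-case execution time C_i (`exec[i]`) and its period T_i (`period[i]`); the names and task values are illustrative:

```c
/* EDF schedulability for periodic tasks: the set is feasible iff the
   total utilization U = sum(C_i / T_i) is at most 1. */
double edf_utilization(const double *exec, const double *period, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += exec[i] / period[i];   /* add each task's share of the CPU */
    return u;
}

int edf_schedulable(const double *exec, const double *period, int n)
{
    return edf_utilization(exec, period, n) <= 1.0;
}
```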
Implementation of earliest deadline first: Is it really feasible to
implement EDF scheduling?

Task    Arrival    Duration    Deadline

T1         0          10          33
T2         4           3          28
T3         5          10          29
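As an illustration, the task set above can be run through a small discrete-time EDF simulator. This is a teaching sketch, not a production scheduler: it assumes integer time units and single-shot jobs, and the function names are invented here.

```c
/* One task/job: arrival time, execution duration, absolute deadline,
   plus bookkeeping for remaining work and completion time. */
typedef struct { int arrival, duration, deadline, left, finish; } task_t;

/* At each tick, dispatch the arrived, unfinished job with the earliest
   absolute deadline; record each job's completion time. */
void edf_simulate(task_t *t, int n)
{
    for (int i = 0; i < n; i++) { t[i].left = t[i].duration; t[i].finish = -1; }
    for (int time = 0; ; time++) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (t[i].arrival <= time && t[i].left > 0 &&
                (pick < 0 || t[i].deadline < t[pick].deadline))
                pick = i;
        if (pick < 0) {                       /* nothing ready this tick */
            int pending = 0;
            for (int i = 0; i < n; i++) pending |= (t[i].left > 0);
            if (!pending) return;             /* all jobs finished       */
            continue;                         /* idle tick               */
        }
        if (--t[pick].left == 0)              /* run one time unit       */
            t[pick].finish = time + 1;
    }
}

/* Runs the three-task example from the table and returns task i's
   finish time. */
int edf_demo_finish(int i)
{
    task_t ts[3] = { {0, 10, 33, 0, 0}, {4, 3, 28, 0, 0}, {5, 10, 29, 0, 0} };
    edf_simulate(ts, 3);
    return ts[i].finish;
}
```

In this trace T1 runs first, T2 (deadline 28) preempts it at time 4, T3 (deadline 29) runs next, and T1 finishes last; every job completes before its deadline.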

Problems for implementation:


1. Absolute deadlines change for each new task instance, so the priority
needs to be updated every time the task moves back to the ready queue.
2. More importantly, absolute deadlines are always increasing: how can we
associate a finite priority value with an ever-increasing deadline value?


3. Most importantly, absolute deadlines are impossible to compute a priori.


EDF properties :
1. EDF is optimal with respect to feasibility (i.e. schedulability).
2. EDF is optimal with respect to minimizing the maximum lateness.
Advantages
1. It is an optimal algorithm.
2. Periodic, aperiodic and sporadic tasks can all be scheduled using EDF.
3. Gives the best CPU utilization.
Disadvantages
1. Needs a priority queue for storing deadlines.
2. Needs dynamic priorities.
3. Typically has no OS support.
4. Behaves badly under overload.
5. Difficult to implement.
Rate Monotonic Scheduling
Rate Monotonic Priority Assignment (RM) is a static-priority preemptive
scheduling algorithm.
In this algorithm, priority increases with the rate at which a process must
be scheduled: the process with the shortest period gets the highest priority.
The priorities are assigned to tasks before execution and do not change over
time. RM scheduling is preemptive, i.e., a task can be preempted by a task with
higher priority.
In the RM algorithm, the assigned priority is never modified during the runtime
of the system. RM assigns priorities simply in accordance with the tasks' periods:
the shorter the period (i.e., the higher the activation rate), the higher the
priority. RM is thus a scheduling algorithm for periodic task sets.
If a lower-priority process is running and a higher-priority process becomes
ready to run, it preempts the lower-priority process. Each periodic task is
assigned a priority inversely related to its period:


1. The shorter the period, the higher the priority.


2. The longer the period, the lower the priority.
The algorithm's optimality was proven under the following assumptions:
1. Tasks are periodic.
2. Each task must be completed before the next request occurs.
3. All tasks are independent.
4. Run time of each task request is constant.
5. Any non-periodic task in the system has no required deadlines.
RMS is optimal among all fixed priority scheduling algorithms for
scheduling periodic tasks where the deadlines of the tasks equal their periods.
Advantages :
1. Simple to understand.
2. Easy to implement.
3. Stable algorithm.
Disadvantages :
1. Lower CPU utilization.
2. Deals only with independent tasks.
3. Imprecise schedulability analysis.
Comparison between RMS and EDF

Parameters                                                   RMS       EDF

Priorities                                                   Static    Dynamic
Works with an OS with fixed priorities                       Yes       No
Uses the full computational power of the processor           No        Yes
Can exploit the full computational power of the processor
without provisioning for slack                               No        Yes
Priority Inversion
Priority inversion occurs when a low-priority job executes while some
ready higher-priority job waits.
Consider three tasks T1, T2 and T3 with decreasing priorities. Tasks T1 and
T3 share some data or resource that requires exclusive access, while T2 does not
interact with either of the other two tasks.


Task T3 starts at time t0 and locks semaphore s at time t1. At time t2, T1
arrives and preempts T3 inside its critical section. After a while, T1 requests to
use the shared resource by attempting to lock s, but it gets blocked, as T3 is
currently using it. Hence, at time t3, T3 continues to execute inside its critical
section. Next, when T2 arrives at time t4, it preempts T3, as it has a higher
priority and does not interact with either T1 or T3.

The execution time of T2 increases the blocking time of T1, which is no
longer dependent solely on the length of the critical section executed by T3.
When tasks share resources, priority inversions may occur.

Priority inversion is not avoidable; however, in some cases the
inversion can last too long.
Simple solutions :
1. Make critical sections non-preemptable.
2. Execute critical sections at the highest priority of any task that could use them.


The solution to the problem is rather simple: while the low-priority task
blocks a higher-priority task, it inherits the priority of the higher-priority task;
in this way, no medium-priority task can preempt it.
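This priority-inheritance rule can be sketched as follows. The types and function names are hypothetical; a real RTOS implements this inside its mutex primitives, and a complete implementation must also handle nested locks and restore the priority only when every boosting lock has been released.

```c
/* Higher number = higher priority in this sketch. */
typedef struct { int base_prio; int active_prio; } task_t;
typedef struct { task_t *holder; } mutex_t;

/* A task asks for the lock: if it is free, take it; if a higher-priority
   task blocks on it, boost the holder so that no medium-priority task
   can preempt the holder inside its critical section. */
void pi_lock_request(mutex_t *m, task_t *requester)
{
    if (m->holder == 0)
        m->holder = requester;                       /* lock was free  */
    else if (requester->active_prio > m->holder->active_prio)
        m->holder->active_prio = requester->active_prio; /* inherit    */
}

/* On release, the holder drops back to its base priority. */
void pi_unlock(mutex_t *m)
{
    m->holder->active_prio = m->holder->base_prio;
    m->holder = 0;
}

/* Self-check mirroring the T1/T3 scenario above: low-priority T3 holds
   the lock, high-priority T1 blocks on it; returns T3's boosted priority. */
int pi_demo(void)
{
    task_t t3 = {1, 1}, t1 = {3, 3};
    mutex_t m = {0};
    pi_lock_request(&m, &t3);   /* T3 takes the lock          */
    pi_lock_request(&m, &t1);   /* T1 blocks, T3 inherits 3   */
    return t3.active_prio;
}
```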
Timing anomalies
As seen, contention for resources can cause timing anomalies due to
priority inversion and deadlock. Unless controlled, these anomalies can be of
arbitrary duration and can seriously disrupt system timing.
These anomalies cannot be eliminated, but several protocols exist to control
them:
1. Priority inheritance protocol
2. Basic priority ceiling protocol
3. Stack-based priority ceiling protocol
Wait for graph
A wait-for graph is used for representing the dynamic blocking relationships
among jobs. In the wait-for graph of a system, every job that requires some
resource is represented by a vertex labeled with the name of the job.
At any time, the wait-for graph contains an (ownership) edge with label x
from a resource vertex to a job vertex if x units of the resource are allocated to
the job at the time.
The wait-for graph models resource contention. Every serially reusable
resource is modeled, and every job that requires a resource is represented by a
vertex with an edge pointing towards the resource.


Every job holding a resource is represented by an edge pointing away from
the resource and towards the job. A cyclic path in a wait-for graph indicates
deadlock.
J3 has locked the single unit of resource R and J2 is waiting to lock it.
A minimum of two system resources is required for a deadlock.
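Detecting such a cycle is a standard depth-first search. The sketch below represents the wait-for graph as an adjacency matrix over job and resource vertices; the vertex limit, function names, and the example graph in the self-check are illustrative assumptions.

```c
#define MAXV 8   /* illustrative cap on job + resource vertices */

/* DFS over the wait-for graph; mark: 0 = unvisited, 1 = on the current
   DFS path, 2 = fully explored.  A back edge to a vertex on the current
   path means a cycle, i.e. deadlock. */
static int dfs_cycle(int g[MAXV][MAXV], int n, int v, int *mark)
{
    mark[v] = 1;
    for (int w = 0; w < n; w++) {
        if (!g[v][w]) continue;
        if (mark[w] == 1) return 1;                     /* back edge   */
        if (mark[w] == 0 && dfs_cycle(g, n, w, mark)) return 1;
    }
    mark[v] = 2;
    return 0;
}

int has_deadlock(int g[MAXV][MAXV], int n)
{
    int mark[MAXV] = {0};
    for (int v = 0; v < n; v++)
        if (mark[v] == 0 && dfs_cycle(g, n, v, mark))
            return 1;
    return 0;
}

/* Self-check on a hypothetical graph with vertices J1, R1, J2, R2:
   J1 requests R1, R1 is owned by J2, J2 requests R2; closing the cycle
   with "R2 owned by J1" produces a deadlock. */
int deadlock_demo(int cyclic)
{
    int g[MAXV][MAXV] = {0};
    g[0][1] = 1; g[1][2] = 1; g[2][3] = 1;   /* J1->R1->J2->R2        */
    if (cyclic) g[3][0] = 1;                 /* R2->J1 closes the cycle */
    return has_deadlock(g, 4);
}
```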
