Esiot Unit 2
Embedded C programming plays a key role in making a processor perform specific functions. In day-to-day life we use many
electronic devices such as mobile phones, washing machines, and digital cameras. All of these devices work on microcontrollers
that are programmed in embedded C.
Keywords in Embedded C
#include, #define, sbit, sfr, data, idata, bdata, xdata, code, bit, volatile, interrupt, using, _nop_, _at_, far, near, sfr16, sfr32, reentrant,
register, auto, static, const, unsigned, signed, long, short, typedef, enum
1. Include Header Files: Include necessary header files for the 8051.
2. Define Constants and Variables: Define any constants and global variables.
3. Main Function: Contains the core logic of the program.
4. Function Definitions: Define any additional functions used in the program.
Format:
#include <reg51.h> // Header files
// Constants and global variables
void main() {
// Initialization
// Main Loop
while(1) {
// Core Logic
}
}
// Function Definitions
void function_name(parameters) {
// Function Code
}
Examples of Arithmetic Operations
Addition of Two Numbers:
#include <reg51.h>
void main() {
unsigned int num1 = 25;
unsigned int num2 = 17;
unsigned int sum;
sum = num1 + num2; // sum = 25 + 17 = 42
while(1);
}
Subtraction of Two Numbers:
#include <reg51.h>
void main() {
unsigned int num1 = 25;
unsigned int num2 = 17;
unsigned int difference;
difference = num1 - num2; // difference = 25 - 17 = 8
while(1);
}
Multiplication of Two Numbers:
#include <reg51.h>
void main() {
unsigned int num1 = 25;
unsigned int num2 = 17;
unsigned long product;
product = num1 * num2;
while(1);
}
Division of Two Numbers:
#include <reg51.h>
void main() {
unsigned int num1 = 25;
unsigned int num2 = 5;
unsigned int quotient;
quotient = num1 / num2; // quotient = 25 / 5 = 5
while(1);
}
LED Blinking Using a Software Delay:
#include <reg51.h>
void delay() {
unsigned int i;
for(i = 0; i < 30000; i++);
}
void main() {
while(1) {
P1 = 0x01; // Turn ON the LED connected to P1.0
delay();
P1 = 0x00; // Turn OFF the LED
delay();
}
}
LED Blinking Using a Timer Delay:
#include <reg51.h>
void Timer0_Delay() {
TMOD = 0x01; // Timer0 mode1 (16-bit)
TH0 = 0xFC; // Load high byte for 1ms delay
TL0 = 0x66; // Load low byte for 1ms delay
TR0 = 1; // Start Timer0
while(TF0 == 0); // Wait for overflow
TR0 = 0; // Stop Timer0
TF0 = 0; // Clear overflow flag
}
void main() {
while(1) {
P1 = 0x01; // Turn ON the LED
Timer0_Delay();
P1 = 0x00; // Turn OFF the LED
Timer0_Delay();
}
}
Serial Communication
Serial communication is essential for data exchange between the microcontroller and other devices. The 8051 supports UART for
serial communication.
#include <reg51.h>
void UART_Init() {
TMOD = 0x20; // Timer1 mode2 (8-bit auto-reload)
TH1 = 0xFD; // 9600 baud rate
SCON = 0x50; // Mode 1, 8-bit UART
TR1 = 1; // Start Timer1
}
void UART_TxChar(char ch) {
SBUF = ch; // Load character into the serial buffer
while(TI == 0); // Wait until transmission is complete
TI = 0; // Clear the transmit interrupt flag
}
void main() {
unsigned int i;
UART_Init();
while(1) {
UART_TxChar('A'); // Transmit character 'A'
for(i = 0; i < 30000; i++); // Delay
}
}
Interrupts
The 8051 microcontroller supports five interrupt sources, which can be used to handle specific events.
#include <reg51.h>
void ext0_isr() interrupt 0 { // ISR for external interrupt 0
P1 ^= 0x01; // Toggle LED on each interrupt
}
void main() {
IT0 = 1; // Configure INT0 for falling edge trigger
EX0 = 1; // Enable external interrupt 0
EA = 1; // Enable global interrupts
while(1); // Wait for interrupts
}
ADC Interfacing
#include <reg51.h>
sbit CS = P3^0; // ADC chip select (example pin; WR = P3^6 and RD = P3^7 come from reg51.h)
#define ADC_DATA P1 // ADC data bus connected to Port 1 (example wiring)
void ADC_Init() {
CS = 0; // Enable ADC
WR = 1; // Set WR high
RD = 1; // Set RD high
}
unsigned char ADC_Read() {
unsigned char value;
WR = 0; // Pulse WR low...
WR = 1; // ...then high to start a conversion
RD = 0; // Enable the ADC data output
value = ADC_DATA; // Read the converted value
RD = 1;
return value;
}
void main() {
unsigned char adc_value;
ADC_Init();
while(1) {
adc_value = ADC_Read();
P2 = adc_value; // Process the ADC value
}
}
1. Direct Hardware Access
Embedded C allows direct manipulation of hardware registers, enabling precise control over the microcontroller's peripherals.
#include <reg51.h>
void main() {
unsigned int i;
P1 = 0x00; // Set Port 1 to 0
while(1) {
P1 ^= 0x01; // Toggle the first pin of Port 1
for(i = 0; i < 30000; i++); // Simple delay loop
}
}
2. Real-Time Constraints
Embedded systems often need to meet real-time requirements, where tasks must be performed within specific time constraints.
#include <reg51.h>
void Timer0_Delay() {
TMOD = 0x01; // Timer0 mode1 (16-bit timer)
TH0 = 0xFC; // Load high byte for 1ms delay
TL0 = 0x66; // Load low byte for 1ms delay
TR0 = 1; // Start Timer0
while(TF0 == 0); // Wait for overflow
TR0 = 0; // Stop Timer0
TF0 = 0; // Clear overflow flag
}
void main() {
while(1) {
P1 ^= 0x01; // Toggle LED
Timer0_Delay();
}
}
3. Memory Management
Efficient use of limited memory resources is critical in embedded systems. This includes both RAM and ROM.
#include <reg51.h>
void staticExample() {
static unsigned int counter = 0; // Static variable retains its value between function calls
counter++;
P1 = counter;
}
void main() {
unsigned int i;
while(1) {
staticExample();
for(i = 0; i < 30000; i++); // Simple delay loop
}
}
4. Interrupt Handling
Interrupts allow the microcontroller to respond quickly to external events, improving responsiveness and efficiency.
#include <reg51.h>
void ext0_isr() interrupt 0 { // ISR runs on each INT0 event
P1 ^= 0x01; // Respond to the external event
}
void main() {
IT0 = 1; // Configure INT0 for falling edge trigger
EX0 = 1; // Enable external interrupt 0
EA = 1; // Enable global interrupts
while(1);
}
5. Low-Level Bit Manipulation
Embedded C often requires manipulation of individual bits for controlling hardware registers.
Example: Setting and Clearing Bits
#include <reg51.h>
void main() {
P1 |= 0x01; // Set bit 0 of Port 1
P1 &= ~0x01; // Clear bit 0 of Port 1
while(1);
}
6. Code Efficiency and Optimization
Optimizing code for speed and size is important in resource-constrained embedded systems.
#include <reg51.h>
void delay(unsigned int count) {
while(count--); // Simple software delay
}
void main() {
P1 = 0x00;
while(1) {
P1 ^= 0x01; // Toggle LED
delay(30000);
}
}
7. Peripheral Interface Programming
Interfacing with peripherals like ADC, DAC, I2C, SPI, etc., is common in embedded systems.
#include <reg51.h>
#define ADC_DATA P1
void main() {
unsigned char adc_value;
while(1) {
// adc_value = ADC_Read();
adc_value = ADC_DATA; // Read ADC value from Port 1 (example)
P2 = adc_value; // Output ADC value to Port 2 (example)
}
}
8. State Machines
State machines are used to manage the state of an embedded system, making the code more modular and manageable.
#include <reg51.h>
enum State { OFF, ON, BLINK };
enum State state;
void main() {
unsigned int i;
state = OFF;
while(1) {
switch(state) {
case OFF:
P1 = 0x00; // LED off
break;
case ON:
P1 = 0x01; // LED on
break;
case BLINK:
P1 ^= 0x01; // Toggle LED
for(i = 0; i < 30000; i++); // Simple delay
break;
}
}
}
9. Error Handling and Debugging
Implementing error handling and using debugging techniques is crucial for developing reliable embedded systems.
#include <reg51.h>
#define ERROR_CODE 0x01
void checkError(unsigned char error) {
if(error) {
P1 = 0xFF; // Indicate the error on Port 1
while(1); // Halt on error
}
}
void main() {
unsigned char error = 0;
unsigned int i;
// Simulate an error
error = ERROR_CODE;
checkError(error);
while(1) {
P1 ^= 0x01; // Toggle LED if no error
for(i = 0; i < 30000; i++); // Simple delay
}
}
Real Time Operating System (RTOS)
Real-time operating systems (RTOS) are used in environments where a large number of events, mostly external to the computer
system, must be accepted and processed in a short time or within certain deadlines. Such applications include industrial control,
telephone switching equipment, flight control, and real-time simulations. With an RTOS, processing time is measured in tenths of
seconds or less. This type of system is time-bound and has fixed deadlines; processing must occur within the specified
constraints, otherwise the system fails.
Examples of real-time operating systems are airline traffic control systems, command control systems, airline reservation systems,
heart pacemakers, network multimedia systems, robots, etc.
Real-time operating systems can be of the following types:
1. Hard Real-Time Operating System: These operating systems guarantee that critical tasks are completed within a strict
range of time.
For example, consider a robot hired to weld a car body. If the robot welds too early or too late, the car cannot be sold, so this is
a hard real-time system that requires the robot to complete the welding exactly on time. Other examples include scientific
experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.
2. Soft Real-Time Operating System: This operating system provides some relaxation in the time limit.
Examples: multimedia systems, digital audio systems, etc. Explicit, programmer-defined, and controlled processes
are encountered in real-time systems. A separate process handles each external event; the process is
activated upon the occurrence of the related event, signaled by an interrupt.
Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process
is assigned a level of priority that corresponds to the relative importance of the event that it services. The
processor is allocated to the highest-priority process. This type of scheduling, called priority-based preemptive
scheduling, is used by real-time systems.
3. Firm Real-Time Operating System: An RTOS of this type also has to follow deadlines. However, missing a deadline has a
small impact and may cause unintended consequences, such as a reduction in the quality of the product. Example:
multimedia applications.
4. Deterministic Real-Time Operating System: Consistency is the key in this type of real-time operating system. It
ensures that all tasks and processes execute with predictable timing at all times, which makes it suitable for
applications in which timing accuracy is very important. Examples: INTEGRITY, PikeOS.
Key Functions of an RTOS in Embedded Systems:
2. Task Scheduling:
RTOS employs various scheduling algorithms to manage the execution of tasks. Common scheduling algorithms include Rate
Monotonic Scheduling (RMS) and Earliest Deadline First (EDF), which prioritize tasks based on their deadlines.
3. Interrupt Handling:
Embedded systems often rely on interrupts to respond to external events promptly. RTOS is designed to handle interrupts efficiently,
minimizing latency and ensuring that critical tasks can be executed without delay.
4. Resource Management:
Efficient resource management is vital in embedded systems to allocate and deallocate resources effectively. RTOS provides
mechanisms for managing resources such as memory, CPU time, and peripherals, preventing resource conflicts and enhancing
system stability.
Challenges of Using an RTOS:
1. Resource Constraints:
Many embedded systems operate with limited resources, including memory and processing power. Implementing an RTOS in such
environments requires careful optimization to ensure efficient resource utilization without compromising performance.
2. Complexity:
RTOS can introduce complexity to the system, especially for developers accustomed to designing applications for general-purpose
operating systems. Learning to work with real-time constraints and scheduling can be challenging.
3. Cost:
RTOS solutions may come with licensing costs, which can be a concern for projects with strict budget constraints. Open-source
RTOS options, such as FreeRTOS and ChibiOS, provide cost-effective alternatives, but developers must weigh the trade-offs
carefully.
Advantages of an RTOS:
1. Predictable Performance:
RTOS guarantees predictable and consistent performance, ensuring that tasks are executed within specified time frames. This
predictability is crucial for applications where timing precision is paramount.
2. Improved Responsiveness:
The efficient handling of interrupts and prioritized task scheduling in RTOS results in improved system responsiveness. This is
particularly beneficial in applications like real-time control systems and responsive user interfaces.
A Closer Look
In the fast-paced world of technology, embedded systems have become ubiquitous, powering devices that range from
the everyday to the extraordinary. Real-Time Operating Systems (RTOS) are the unsung heroes that ensure these embedded
systems meet the precise and time-sensitive demands of their applications. Let's delve deeper into the significance of RTOS,
exploring specific applications, key components, and emerging trends in the realm of embedded systems.
Automotive Systems:
In modern vehicles, numerous embedded systems control critical functions such as engine management, braking, and airbag
deployment. RTOS ensures that these systems respond to real-time events, contributing to vehicle safety and performance.
Medical Devices:
RTOS plays a pivotal role in medical devices, where accuracy and reliability are paramount. From infusion pumps to patient
monitoring systems, RTOS ensures timely and precise execution of tasks, contributing to patient well-being and healthcare efficiency.
Industrial Automation:
Embedded systems are the backbone of industrial automation, controlling processes in manufacturing plants and ensuring seamless
operation of machinery. RTOS facilitates real-time control and coordination, optimizing production efficiency.
Telecommunications:
In the telecommunications sector, RTOS is crucial for managing network protocols, ensuring low-latency communication, and
handling real-time data streams. This is essential for applications like video conferencing, voice over IP (VoIP), and multimedia
streaming.
RTOS scheduling
An RTOS is valued for how quickly it can respond, and the advanced scheduling algorithm is the key component in that.
The time-criticality of embedded systems varies from soft real-time washing machine control systems to hard real-time aircraft
safety systems. In situations like the latter, real-time requirements can only be guaranteed if the OS
scheduler's behavior can be accurately predicted.
Many operating systems give the impression of executing multiple programs at once, but this multi-tasking is something of an illusion.
A single processor core can only run a single thread of execution at any one time. An operating system’s scheduler decides which
program, or thread, to run when. By rapidly switching between threads, it provides the illusion of simultaneous multitasking.
The flexibility of an RTOS scheduler enables a broad approach to process priorities, although an RTOS is more commonly focused
on a very narrow set of applications. An RTOS scheduler should give minimal interrupt latency and minimal thread switching
overhead. This is what makes an RTOS so relevant for time-critical embedded systems.
General-purpose OS vs. RTOS:
- Application choice: A general-purpose OS is typically used in a broad spectrum of computing needs, offering flexibility; an RTOS is
preferred for embedded systems with strict timing requirements and real-time tasks.
- Selection criteria: A general-purpose OS should be chosen based on the specific needs and timing constraints of the embedded
system; an RTOS is chosen when deterministic and predictable timing is paramount and real-time performance is a priority.
Multirate Systems
Implementing code that satisfies timing requirements is even more complex when multiple rates of computation must be handled.
Multirate embedded computing systems are very common, including automobile engines, printers, and cell phones. In all these
systems, certain operations must be executed periodically, and each operation is executed at its own rate.
Processes can have several different types of timing requirements imposed on them by the application. The timing requirements on a
set of processes strongly influence the type of scheduling that is appropriate. A scheduling policy must define the timing requirements
that it uses to determine whether a schedule is valid. Before studying scheduling proper, we outline the types of process timing
requirements that are useful in embedded system design.
Figure 3.2 illustrates different ways in which we can define two important requirements on processes: release time and deadline.
The release time is the time at which the process becomes ready to execute; this is not necessarily the time at which it actually takes
control of the CPU and starts to run. An aperiodic process is by definition initiated by an event, such as external data arriving or data
computed by another process.
The release time is generally measured from that event, although the system may want to make the process ready at some interval
after the event itself. For a periodically executed process, there are two common possibilities.
In simpler systems, the process may become ready at the beginning of the period. More sophisticated systems, such as those with
data dependencies between processes, may set the release time at the arrival time of certain data, at a time after the start of the
period.
A deadline specifies when a computation must be finished. The deadline for an aperiodic process is generally measured from the
release time, since that is the only reasonable time reference. The deadline for a periodic process may in general occur at some time
other than the end of the period.
Rate requirements are also fairly common. A rate requirement specifies how quickly processes must be initiated.
The period of a process is the time between successive executions. For example, the period of a digital filter is defined by the time
interval between successive input samples.
The process’s rate is the inverse of its period. In a multirate system, each process executes at its own distinct rate.
The most common case for periodic processes is for the initiation interval to be equal to the period. However, pipelined execution of
processes allows the initiation interval to be less than the period. Figure 3.3 illustrates process execution in a system with four CPUs.
Process State and Scheduling
A process will be in one of three states: ready, executing, or waiting.
• Ready - A process goes into the ready state when it receives its required data and when it enters a new period.
• Waiting - A process goes into the waiting state when it needs data that it has not yet received, or when it has finished all its work
for the current period.
• Executing - A process can go into the executing state only when it has all its data and is ready to run.
Each process will have its own process control block (PCB). The PCB consists of a data structure having the information using
which the OS controls the process state. The PCB is stored in the memory area of the kernel. It consists of the information about the
process state. The main components of PCB are
• Process ID
• Process priority
• Parent process
• Child process and address to the PCB of the next process which will run.
Each task in an embedded application runs in an infinite loop. To achieve efficient CPU utilization, an RTOS uses orderly control
transfer between tasks, monitoring system resources and task states to ensure timely execution. A real-time system fails if it doesn't
perform operations at the correct time, with consequences ranging from minor to severe. Kernel services in an RTOS must be fast
and predictable. Real-time applications consist of multiple tasks competing for system resources like memory and CPU time.
Scheduling models in the RTOS manage this competition, preventing any single task from monopolizing resources needed by more
important tasks. Scheduling policies determine task execution, aiming to meet timing requirements and maximize CPU utilization.
A very simple scheduling policy is known as cyclostatic scheduling, or sometimes as Time Division Multiple Access (TDMA)
scheduling. TDMA divides time into equal-sized slots over an interval equal to the length of the hyperperiod H.
Processes always run in the same time slot; the slot assignment depends on the deadlines of the processes. Some time slots may be
empty, and since the slots are of equal size, some short processes may have time left over in their slot. Utilization is used as a
schedulability measure: the total CPU time of all the processes must be less than the hyperperiod. An example of TDMA
scheduling is shown in Figure 1.2, which shows three processes scheduled using TDMA.
Another scheduling policy that is slightly more sophisticated is round robin. Round robin uses the same hyperperiod as cyclostatic.
Here also, the processes are scheduled in a cyclic fashion. However, if a process does not have any useful work to do, the
round-robin scheduler moves on to the next process in order to fill the time slot with useful work.
Example: All three processes execute during the first hyper-period, but during the second one, P1 has no useful work and is
skipped. The processes are always executed in the same order and are shown in Figure 1.3.
2.3.Pre-emptive Scheduling
Pre-emption is a mechanism to stop a current process and provide service to another process. In a preemptive model, tasks
can be forcibly suspended. This is initiated by an interrupt on the CPU. The OS schedules tasks so that a higher-priority task,
when ready, preempts a lower-priority one by blocking the currently running task. This solves the problem of large worst-case
latency for higher-priority tasks.
●Ready – In this state, the task is not blocked and is ready to receive control of the CPU when the scheduling policy indicates it is
the highest priority task in the system that is not blocked.
● Inactive – In Inactive state, the task is blocked and requires initialization in order to become ready.
● Blocked – In blocked state, the task is waiting for something to happen or for a resource to become available.
There must be a way of interrupting the operation of the lesser task and granting the resource to the more important one. Then the
highest-priority task ready to run is always given control of the CPU. If an ISR makes a higher-priority task ready, the higher-priority
task is resumed (instead of the interrupted task). Most commercial real-time kernels are preemptive.
2.4.Context Switch
The set of registers that defines a process is known as its context. Switching from one process's register set to another's is
known as context switching. The data structure that holds the state of the process is the process control block (PCB);
that is, the context of a process is represented in the PCB. The time a context switch takes depends on hardware support.
Context-switch time is pure overhead: the system does no useful work while switching.
Example of Context Switch
Consider that there are three processes P1, P2 and P3. Assume that the process P3 is in running state. Suddenly when an I/O
request is received, the scheduler loads the corresponding device driver, and then switches to process P1. When the time slice of P1
is exceeded, it schedules P2. While P2 is executing, an interrupt (corresponding to the previous I/O request) occurs. Hence, a context
switch occurs and interrupt service routine for I/O request is executed. After finishing the interrupt service, context switch again
occurs to resume the previous process P2.
Context switching is a technique used by the operating system to switch a process from one state to another so that it can
perform its function using the CPUs in the system. When a switch is performed, the system stores the old running process's status in
the form of registers and assigns the CPU to a new process to execute its tasks. While the new process is running, the
previous process waits in a ready queue. Execution of the old process later resumes at the point where it was stopped.
Context switching defines the characteristics of a multitasking operating system, in which multiple processes share the same CPU
to perform multiple tasks without the need for additional processors in the system.
Following are the reasons that describe the need for context switching in the Operating system.
1. One process cannot switch directly to another in the system. Context switching helps the operating system
switch between multiple processes, use the CPU's resources to accomplish their tasks, and store each context so that the
process can be resumed at the same point later. If the currently running process's context were not stored, its data
would be lost when switching between processes.
2. If a high-priority process arrives in the ready queue, the currently running process is stopped so that the high-priority
process can complete its tasks in the system.
3. If a running process requires I/O resources, it is switched out so that another process can use the CPU.
When the I/O requirement is met, the old process goes back to the ready state to wait for its turn on the CPU.
Context switching stores the state of the process so that it can resume its tasks; otherwise, the process would need to
restart its execution from the beginning.
4. If an interrupt occurs while a process is running, the process's status is saved in registers using context
switching. After the interrupt is resolved, the process switches from the wait state to the ready state and later resumes its
execution at the same point where the operating system interrupted it.
5. Context switching allows a single CPU to handle multiple process requests concurrently without the need for any
additional processors.
Suppose that multiple processes are stored in process control blocks (PCBs). One process is in the running state, executing its task
on the CPU. While it is running, another process with a higher priority arrives in the ready queue and needs the CPU to complete its
task. Here context switching swaps the current process for the new process that requires the CPU. While switching, the context
switch saves the status of the old process in registers. When the old process is later reloaded into the CPU, it resumes execution
from the point at which the new process stopped it. If the state of the process were not saved, execution would have to start again
from the beginning. In this way, context switching helps the operating system switch between processes, storing and reloading each
process when it needs to execute its tasks.
The following events trigger a context switch:
1. Interrupts
2. Multitasking
3. Kernel/User switch
Interrupts: When the CPU requests data to be read from a disk and an interrupt occurs, context switching automatically transfers
control to the interrupt handler, which services the interrupt in less time, and then restores the interrupted work.
Multitasking: Context switching is the characteristic of multitasking that allows a process to be switched off the CPU so that
another process can run. When switching the process, the old state is saved so that the process's execution can resume at the same
point in the system.
Kernel/User Switch: This occurs in operating systems when switching between user mode and kernel mode.
What is the PCB?
A PCB (Process Control Block) is a data structure used by the operating system to store all information related to a process.
For example, information about a process's creation, updates, switching, and termination is stored in the PCB.
There are several steps involved in context switching between processes. The following diagram represents the context switching of
two processes, P1 to P2, when an interrupt occurs, I/O is needed, or a priority-based process arrives in the ready queue of the PCB.
As we can see in the diagram, initially, the P1 process is running on the CPU to execute its task, and at the same time, another
process, P2, is in the ready state. If an error or interruption has occurred or the process requires input/output, the P1 process
switches its state from running to the waiting state. Before changing the state of the process P1, context switching saves the context
of the process P1 in the form of registers and the program counter to the PCB1. After that, it loads the state of the P2 process from
the ready state of the PCB2 to the running state.
1. First, context switching saves the state of process P1 (which is in the running state) in the form of the program counter and
the registers to its PCB (Process Control Block).
2. It then updates PCB1 for process P1 and moves the process to the appropriate queue, such as the ready queue, I/O queue,
or waiting queue.
3. After that, another process gets into the running state: a new process is selected from the ready state to be
executed, for example one with a high priority.
4. Now the PCB (Process Control Block) for the selected process P2 is updated. This includes switching the process state
from ready to running, or from another state such as blocked, exit, or suspended.
5. If the CPU had already executed process P2 before, the status of process P2 is restored so that it resumes its execution at
the same point where the system interrupt occurred.
Similarly, process P2 is switched off from the CPU so that the process P1 can resume execution. P1 process is reloaded from PCB1
to the running state to resume its task at the same point. Otherwise, the information is lost, and when the process is executed again,
it starts execution at the initial level.
ROHINI College of Engineering and Technology
The success of a real-time system depends on whether all the jobs of all the tasks can be guaranteed to complete their executions
before their deadlines. If they can, then we say the task set is schedulable.
For earliest-deadline-first (EDF) scheduling on a single processor, the schedulability condition is that the total utilization of the
task set must be less than or equal to 1.
Implementation of Earliest Deadline First: Is it really not feasible to implement EDF scheduling?
The solution to the priority inversion problem is rather simple: while a low-priority task blocks a
higher-priority task, it inherits the priority of the higher-priority task; in
this way, no medium-priority task can preempt it.
Timing anomalies
As seen, contention for resources can cause timing anomalies due to
priority inversion and deadlock. Unless controlled, these anomalies can be of
arbitrary duration and can seriously disrupt system timing.
These anomalies cannot be eliminated entirely, but several protocols exist to control
them:
1. Priority inheritance protocol
2. Basic priority ceiling protocol
3. Stack-based priority ceiling protocol
Wait-for Graph
A wait-for graph is used to represent dynamic blocking relationships
among jobs. In the wait-for graph of a system, every job that requires some
resource is represented by a vertex labeled with the name of the job.
At any time, the wait-for graph contains an (ownership) edge with label x
from a resource vertex to a job vertex if x units of the resource are allocated to
the job at that time.
The wait-for graph is used to model resource contention. Every serially reusable
resource is modeled by a vertex, and every job that requires a resource is modeled by a vertex
with an edge pointing toward the resource.