5.3 Synchronisation Techniques

• Ensure that a process does not hold any other resources when it requests a resource. This can be achieved by implementing the following set of rules/guidelines in allocating resources to processes.

1. A process must request all its required resources, and the resources should be allocated before the process begins its execution (see the sketch after this list).

2. Grant resource allocation requests from processes only if the process does not currently hold any resources.

• Ensure that resource preemption (resource releasing) is possible at the operating system level. This can be achieved by implementing the following set of rules/guidelines in resource allocation and releasing.
1. Release all the resources currently held by a process if a request made by the process for a new resource cannot be fulfilled immediately.

2. Add the resources which are preempted (released) to a resource list describing the
resources which the process requires to complete its execution.

3. Reschedule the process for execution only when the process gets its old resources
and the new resource which is requested by the process.
Imposing these criteria may introduce negative impacts like low resource utilization and starvation of processes.
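As a minimal sketch of rule 1 above (request everything up front), Win32's WaitForMultipleObjects with bWaitAll set to TRUE grants a whole set of mutex-guarded resources in one all-or-nothing step, so a thread never holds one resource while waiting for another. The resource handles and thread structure below are illustrative assumptions, not from the text.

#include <windows.h>
#include <stdio.h>

/* Two hypothetical resources, each guarded by a mutex. */
static HANDLE hResource[2];

DWORD WINAPI worker(LPVOID arg)
{
    /* Rule 1: request ALL required resources before starting execution.
       bWaitAll = TRUE makes the grant all-or-nothing, so the thread never
       holds one resource while waiting for the other. */
    WaitForMultipleObjects(2, hResource, TRUE, INFINITE);

    puts("Holding both resources; doing work...");

    ReleaseMutex(hResource[1]);
    ReleaseMutex(hResource[0]);
    return 0;
}

int main(void)
{
    HANDLE hThread;
    hResource[0] = CreateMutex(NULL, FALSE, NULL);
    hResource[1] = CreateMutex(NULL, FALSE, NULL);
    hThread = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    WaitForSingleObject(hThread, INFINITE);
    return 0;
}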

Livelock: The livelock condition is similar to the deadlock condition except that a process in the livelock condition changes its state with time. While in deadlock a process enters the wait state for a resource and continues in that state forever without making any progress in its execution, in a livelock condition a process always does something but is unable to make any progress towards completing its execution. The livelock condition is better explained with a real-world example: two people attempting to cross each other in a narrow corridor. Both persons move aside to allow the other person to cross, but keep mirroring each other's movements. Since the corridor is narrow, neither of them is able to get past the other. Here both persons perform some action, but they are still unable to achieve their target of crossing each other. We will make the livelock scenario clearer in a later section of this chapter, The Dining Philosophers' Problem.

Starvation: In the multitasking context, starvation is the condition in which a process does not get the resources required to continue its execution for a long time. As time progresses, the process starves for resources. Starvation may arise due to various conditions, like being a byproduct of the preventive measures against deadlock, scheduling policies favoring high priority tasks and tasks with the shortest execution time, etc.

3.3.1.3 The Dining Philosophers' Problem: The 'Dining Philosophers' Problem' is an interesting example of synchronization issues in resource utilization. The terms 'dining', 'philosophers', etc. may sound awkward in the operating system context, but it is the best way to explain technical things abstractly using non-technical terms. Now coming to the problem definition:
Five philosophers (It can be 'n'. The number 5 is taken for illustration) are sitting
around a round table, involved in eating and brainstorming. At any point of time each
philosopher will be in any one of the three states: eating, hungry or brainstorming. (While
eating the philosopher is not involved in brainstorming and while brainstorming the
philosopher is not involved in eating). For eating, each philosopher requires two forks. There are only five forks available on the dining table ('n' forks for 'n' philosophers), arranged such that there is exactly one fork between every two philosophers. A philosopher can use only the forks on his/her immediate left and right, and only in this order: pick up the left fork first and then the right fork. Analyze the situation and explain the possible outcomes of this scenario.

Let's analyze the various scenarios that may occur in this situation.

Scenario 1: All the philosophers are involved in brainstorming together and try to eat together. Each philosopher picks up the left fork and is unable to proceed, since two forks are required
for eating the spaghetti present in the plate. Philosopher 1 thinks that Philosopher 2, sitting to his/her right, will put the fork down and waits for it. Philosopher 2 thinks that Philosopher 3, sitting to his/her right, will put the fork down and waits for it, and so on. This forms a circular chain of un-granted requests. If the philosophers continue in this state, each waiting for the fork held by the philosopher sitting to his/her right, they will make no progress in eating, and this results in starvation of the philosophers and deadlock.

Figure: Visualization of the 'Dining Philosophers' Problem'

Scenario 2: All the philosophers start brainstorming together. One of the philosophers becomes hungry and picks up the left fork. When this philosopher is about to pick up the right fork, the philosopher sitting to his/her right also becomes hungry and tries to grab his/her own left fork, which is the right fork of the neighboring philosopher who is trying to lift it, resulting in a 'race condition'.
Scenario 3: All the philosophers are involved in brainstorming together and try to eat together. Each philosopher picks up the left fork and is unable to proceed, since two forks are required for eating the spaghetti present in the plate. Each of them anticipates that the adjacently sitting philosopher will put his/her fork down, waits for a fixed duration, and after this puts his/her own fork down. Each of them again tries to lift the fork after a fixed duration of time. Since all philosophers are trying to lift the forks at the same time, none of them will be able to grab two forks. This condition leads to livelock and starvation of the philosophers, where each philosopher tries to do something but is unable to make any progress in achieving the target.

The figure illustrates these scenarios.

Solution: We need to find alternative solutions to avoid the deadlock, livelock, racing and starvation conditions that may arise due to the concurrent access of forks by philosophers. This situation can be handled in many ways, by allocating the forks using different allocation techniques including round-robin allocation, FIFO allocation, etc.

But the requirement is that the solution should be optimal, avoiding deadlock and starvation of the philosophers and allowing the maximum number of philosophers to eat at a time. One solution that we could think of is:
• Imposing rules on how the philosophers access the forks, like: a philosopher should put down the fork he/she already has in hand (the left fork) after waiting for a fixed duration for the second fork (the right fork), and should wait for a fixed time before making the next attempt.

This solution works fine to some extent, but if all the philosophers try to lift the forks at the same time, a livelock situation results.

Another solution, which gives maximum concurrency, is that each philosopher acquires a semaphore (mutex) before picking up any fork. When a philosopher feels hungry, he/she checks whether the philosophers sitting to his/her left and right are already using the forks, by checking the state of the associated semaphores. If the forks are in use by the neighboring philosophers, the philosopher waits till the forks are available. When finished eating, a philosopher puts the forks down and informs the philosophers sitting to his/her left and right who are hungry (waiting for the forks), by signaling the semaphores associated with the forks. A sketch of this approach is given below, after the figure.
Figure: The 'Real Problems' in the 'Dining Philosophers problem' (a) Starvation
and Deadlock (b) Racing (c) Livelock and Starvation
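The book's own code for this solution is not reproduced in this excerpt. Below is a minimal Win32 sketch of the idea just described, using the classic state-array formulation: a mutex guards the shared state and one semaphore per philosopher implements the 'wait for forks'/'inform hungry neighbors' steps (all names and the Sleep() intervals are our assumptions).

#include <windows.h>
#include <stdio.h>

#define N        5
#define LEFT(i)  (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)

enum { THINKING, HUNGRY, EATING };

static int    state[N];    /* what each philosopher is currently doing   */
static HANDLE hMutex;      /* protects the state array                   */
static HANDLE hSelf[N];    /* per-philosopher semaphore; signaled when
                              he/she is allowed to eat                   */

static void test(int i)    /* grant the forks to philosopher i if free   */
{
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING)
    {
        state[i] = EATING;
        ReleaseSemaphore(hSelf[i], 1, NULL);   /* wake philosopher i     */
    }
}

static void take_forks(int i)
{
    WaitForSingleObject(hMutex, INFINITE);
    state[i] = HUNGRY;
    test(i);                                   /* try to grab both forks */
    ReleaseMutex(hMutex);
    WaitForSingleObject(hSelf[i], INFINITE);   /* block if forks in use  */
}

static void put_forks(int i)
{
    WaitForSingleObject(hMutex, INFINITE);
    state[i] = THINKING;
    test(LEFT(i));                             /* signal hungry left neighbor  */
    test(RIGHT(i));                            /* signal hungry right neighbor */
    ReleaseMutex(hMutex);
}

DWORD WINAPI philosopher(LPVOID arg)
{
    int i = (int)(INT_PTR)arg;
    while (1)
    {
        Sleep(100);        /* brainstorm */
        take_forks(i);
        Sleep(100);        /* eat        */
        put_forks(i);
    }
    return 0;
}

int main(void)
{
    HANDLE hThread[N];
    int i;
    hMutex = CreateMutex(NULL, FALSE, NULL);
    for (i = 0; i < N; i++)
        hSelf[i] = CreateSemaphore(NULL, 0, 1, NULL);
    for (i = 0; i < N; i++)
        hThread[i] = CreateThread(NULL, 0, philosopher,
                                  (LPVOID)(INT_PTR)i, 0, NULL);
    WaitForMultipleObjects(N, hThread, TRUE, INFINITE);
    return 0;
}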

We will discuss semaphores and mutexes in a later section of this chapter. In the operating system context, the dining philosophers represent the processes and the forks represent the resources. The dining philosophers' problem is an analogy for processes competing for shared resources and the different problems, like racing, deadlock, starvation and livelock, arising from the competition.

3.3.1.4 Producer-Consumer/Bounded Buffer Problem: The Producer-Consumer problem is a common data sharing problem where two processes concurrently access a shared buffer of fixed size. A thread/process which produces data is called the 'producer thread/process' and a thread/process which consumes the data produced by a producer thread/process is known as the 'consumer thread/process'. Imagine a situation where the producer thread keeps on producing data and putting it into the buffer, the consumer thread keeps on consuming the data from the buffer, and there is no synchronization between the two. The producer may produce data at a faster rate than the rate at which it is consumed by the consumer. This will lead to 'buffer overrun', where the producer tries to put data into a full buffer. If the consumer consumes data at a faster rate than the rate at which it is produced, it will lead to 'buffer under-run', in which the consumer tries to read from an empty buffer. Both of these conditions will lead to inaccurate data and data loss.
The following code snippet illustrates the producer-consumer problem.
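The original snippet is not reproduced in this excerpt; what follows is a minimal Win32 reconstruction based on the description below: a producer filling a 20-element shared buffer with random numbers and a consumer reading it, with no synchronization between the two (thread names, sleep intervals and the printf-based consumption are assumptions).

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define BUFFER_SIZE 20

static int buffer[BUFFER_SIZE];   /* shared buffer, deliberately unsynchronized */

DWORD WINAPI producer_thread(LPVOID arg)
{
    int index = 0;
    while (1)
    {
        buffer[index] = rand();             /* produce a random number          */
        index = (index + 1) % BUFFER_SIZE;  /* wrap to the bottom when full     */
        Sleep(10);
    }
    return 0;
}

DWORD WINAPI consumer_thread(LPVOID arg)
{
    int index = 0;
    while (1)
    {
        printf("%d\n", buffer[index]);      /* consume (read) the data          */
        index = (index + 1) % BUFFER_SIZE;  /* wrap to the bottom when consumed */
        Sleep(10);
    }
    return 0;
}

int main(void)
{
    HANDLE hThreads[2];
    hThreads[0] = CreateThread(NULL, 0, producer_thread, NULL, 0, NULL);
    hThreads[1] = CreateThread(NULL, 0, consumer_thread, NULL, 0, NULL);
    WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
    return 0;
}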
Here the 'producer thread' produces random numbers and puts them in a buffer of size 20. If the 'producer thread' fills the buffer completely, it restarts filling the buffer from the bottom. The 'consumer thread' consumes the data produced by the 'producer thread'. For consuming the data, the 'consumer thread' reads the buffer which is shared with the 'producer thread'. Once the 'consumer thread' consumes all the data, it starts consuming the data from the bottom of the buffer. These two threads run independently and are scheduled for execution based on the scheduling policies adopted by the OS. The different situations that may arise based on the scheduling of the 'producer thread' and 'consumer thread' are listed below.

1. 'Producer thread' is scheduled more frequently than the 'consumer thread': There are chances of the 'producer thread' overwriting the data in the buffer. This leads to inaccurate data.

2. 'Consumer thread' is scheduled more frequently than the 'producer thread': There are chances of the 'consumer thread' reading the old data in the buffer again. This will also lead to inaccurate data.

The output of the above program when executed on a Windows XP machine is shown in the figure below. The output shows that the consumer thread runs faster than the producer thread, most often leading to buffer under-run and thereby inaccurate data.

Note
It should be noted that the scheduling of the threads 'producer_thread' and 'consumer_thread' is dependent on the OS kernel's scheduling policy, and you may not get the same output every time you run this piece of code on Windows XP.

The producer-consumer problem can be rectified in various ways. One simple solution is 'sleep and wake-up'. The 'sleep and wake-up' approach can be implemented using various process synchronization techniques like semaphores, mutexes, monitors, etc. We will discuss them in a later section of this chapter. A sketch of a semaphore based fix is given below, after the figure.
Figure: Output of win32 program illustrating producer-consumer problem
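One possible sleep-and-wake-up fix, sketched with Win32 counting semaphores: the producer sleeps when the buffer is full and the consumer sleeps when it is empty, each waking the other by signaling a semaphore. The semaphore names and overall structure are our assumptions, not the text's own implementation.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define BUFFER_SIZE 20

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;

static HANDLE hEmpty;   /* counts empty slots, starts at BUFFER_SIZE */
static HANDLE hFull;    /* counts filled slots, starts at 0          */
static HANDLE hMutex;   /* serializes access to the buffer           */

DWORD WINAPI producer_thread(LPVOID arg)
{
    while (1)
    {
        WaitForSingleObject(hEmpty, INFINITE);  /* sleep if the buffer is full  */
        WaitForSingleObject(hMutex, INFINITE);
        buffer[in] = rand();
        in = (in + 1) % BUFFER_SIZE;
        ReleaseMutex(hMutex);
        ReleaseSemaphore(hFull, 1, NULL);       /* wake a sleeping consumer     */
    }
    return 0;
}

DWORD WINAPI consumer_thread(LPVOID arg)
{
    while (1)
    {
        WaitForSingleObject(hFull, INFINITE);   /* sleep if the buffer is empty */
        WaitForSingleObject(hMutex, INFINITE);
        printf("%d\n", buffer[out]);
        out = (out + 1) % BUFFER_SIZE;
        ReleaseMutex(hMutex);
        ReleaseSemaphore(hEmpty, 1, NULL);      /* wake a sleeping producer     */
    }
    return 0;
}

int main(void)
{
    HANDLE h[2];
    hEmpty = CreateSemaphore(NULL, BUFFER_SIZE, BUFFER_SIZE, NULL);
    hFull  = CreateSemaphore(NULL, 0, BUFFER_SIZE, NULL);
    hMutex = CreateMutex(NULL, FALSE, NULL);
    h[0] = CreateThread(NULL, 0, producer_thread, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, consumer_thread, NULL, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);
    return 0;
}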

3.3.1.5 Readers-Writers Problem: The Readers-Writers problem is a common issue observed in processes competing for limited shared resources. The Readers-Writers problem
is characterized by multiple processes trying to read and write shared data concurrently. A
typical real-world example for the Readers-Writers problem is the banking system where one
process tries to read the account information like available balance and the other process tries
to update the available balance for that account. This may result in inconsistent results. If multiple processes try to read shared data concurrently, it may not create any problems, whereas when multiple processes try to write and read concurrently, it will definitely create inconsistent results. Proper synchronization techniques should be applied to avoid the Readers-Writers problem. We will discuss the various synchronization techniques in a later section of this chapter.
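As a sketch of one such technique, a reader-writer lock lets many readers hold the lock in shared mode while a balance update requires exclusive mode. The sketch below uses Win32's slim reader/writer lock; the lock choice, thread names and the balance variable are our assumptions, not something named in the text.

#include <windows.h>
#include <stdio.h>

static SRWLOCK g_lock = SRWLOCK_INIT;
static long g_balance = 1000;          /* shared account balance */

DWORD WINAPI reader(LPVOID arg)
{
    AcquireSRWLockShared(&g_lock);     /* many readers may enter together */
    printf("balance = %ld\n", g_balance);
    ReleaseSRWLockShared(&g_lock);
    return 0;
}

DWORD WINAPI writer(LPVOID arg)
{
    AcquireSRWLockExclusive(&g_lock);  /* a writer gets the lock alone    */
    g_balance += 100;                  /* update the available balance    */
    ReleaseSRWLockExclusive(&g_lock);
    return 0;
}

int main(void)
{
    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, writer, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, reader, NULL, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);
    return 0;
}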
3.3.1.6 Priority Inversion: Priority inversion is a byproduct of the combination of blocking based (lock based) process synchronization and preemptive priority scheduling. 'Priority inversion' is the condition in which a high priority task needs to wait for a low priority task to release a resource which is shared between the high priority task and the low priority task, while a medium priority task which doesn't require the shared resource continues its execution by preempting the low priority task. The priority based preemptive scheduling technique ensures that a high priority task is always executed first, whereas the lock based process synchronization mechanism (like mutex, semaphore, etc.) ensures that a process will not access a shared resource which is currently in use by another process. The synchronization technique is only interested in avoiding conflicts that may arise due to the concurrent access of the shared resources and is not at all bothered about the priority of the process which tries to access the shared resource. In fact, priority based preemption and lock based synchronization are two contradicting OS primitives. Priority inversion is
better explained with the following scenario: Let Process A, Process B and Process C be three
processes with priorities High, Medium and Low respectively. Process A and Process C share
a variable 'X' and the access to this variable is synchronized through a mutual exclusion
mechanism like Binary Semaphore S.

Imagine a situation where Process C is ready and is picked up for execution by the scheduler
and 'Process C' tries to access the shared variable 'X'. 'Process C' acquires the 'Semaphore S'
to indicate the other processes that it is accessing the shared variable 'X'. Immediately after
'Process C' acquires the 'Semaphore S', 'Process B' enters the 'Ready' state. Since 'Process B'
is of higher priority compared to 'Process C', 'Process C' is preempted, and 'Process B' starts
executing. Now imagine 'Process A' enters the 'Ready' state at this stage. Since 'Process A' is
of higher priority than 'Process B', 'Process B' is preempted, and 'Process A' is scheduled for
execution. 'Process A' involves accessing of shared variable 'X' which is currently being
accessed by 'Process C'. Since 'Process C' acquired the semaphore for signaling the access of
the shared variable 'X', 'Process A' will not be able to access it. Thus 'Process A' is put into
blocked state (This condition is called Pending on resource). Now 'Process B' gets the CPU
and it continues its execution until it relinquishes the CPU voluntarily, enters a wait state, or is preempted by another higher priority task. The highest priority process 'Process A' has to wait
till 'Process C' gets a chance to execute and release the semaphore. This produces unwanted
delay in the execution of the high priority task which is supposed to be executed immediately
when it became 'Ready'. Priority inversion may be sporadic in nature but can lead to potential damage as a result of missing critical deadlines. Literally speaking, priority inversion 'inverts' the priority of a high priority task with that of a low priority task. Proper workaround mechanisms should be adopted for handling the priority inversion problem. The commonly adopted priority inversion workarounds are:


Priority Inheritance: A low-priority task that is currently accessing (by holding the lock) a shared resource requested by a high-priority task temporarily 'inherits' the priority of that high-priority task, from the moment the high-priority task raises the request. Boosting the priority of the low priority task to that of the task which requested the shared resource held by the low priority task eliminates the preemption of the low priority task by other tasks whose priorities are below that of the task requesting the shared resource, and thereby reduces the delay in the high priority task getting the resource it requested. The priority of the low priority task, which was temporarily boosted, is brought back to the original value when it releases the shared resource. Implementing the priority inheritance workaround in the priority inversion problem discussed for the Process A, Process B and Process C example will change the execution sequence as shown in the figure.
Figure: Handling Priority Inversion problem with priority Inheritance.

Priority inheritance is only a workaround and it will not eliminate the waiting delay of the high priority task in getting the resource from the low priority task. It only helps the low priority task to continue its execution and release the shared resource as soon as possible. The moment the low priority task releases the shared resource, the high priority task kicks the low priority task out and grabs the CPU - a true form of selfishness. Priority inheritance handles priority inversion at the cost of run-time overhead at the scheduler. It imposes the overhead of checking the priorities of all tasks which try to access shared resources and adjusting their priorities dynamically.
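On POSIX real-time systems, priority inheritance is typically requested through a mutex attribute rather than implemented by hand; a minimal sketch, assuming a system that supports the PTHREAD_PRIO_INHERIT protocol (this is our illustration, not tied to any specific RTOS named in the text):

#include <pthread.h>

int main(void)
{
    pthread_mutex_t lock;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* Any task that blocks on 'lock' lends its priority to the
       current holder until the holder releases the mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);

    /* ... Process C style code: lock, access shared variable X, unlock ... */
    pthread_mutex_lock(&lock);
    pthread_mutex_unlock(&lock);

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}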

Priority Ceiling: In 'Priority Ceiling', a priority is associated with each shared resource. The priority associated with each resource is the priority of the highest priority task which uses this shared resource. This priority level is called the 'ceiling priority'. Whenever a task accesses a shared resource, the scheduler elevates the priority of the task to the ceiling priority of the resource. If the task which accesses the shared resource is a low priority task, its priority is temporarily boosted to the priority of the highest priority task with which the resource is shared. This eliminates the preemption of the task by medium priority tasks, which would lead to priority inversion. The priority of the task is brought back to the original level once the task completes accessing the shared resource. 'Priority Ceiling' brings the added advantage of sharing resources without the need for synchronization techniques like locks. Since the priority of a task accessing a shared resource is boosted to the highest priority among the tasks sharing that resource, the concurrent access of the shared resource is automatically handled. Another advantage of the 'Priority Ceiling' technique is that all the overheads are at compile time instead of run-time. Implementing the 'priority ceiling' workaround in the priority inversion problem discussed for the Process A, Process B and Process C example will change the execution sequence as shown in the figure.
Figure: Handling Priority Inversion problem with priority Ceiling.

The biggest drawback of 'Priority Ceiling' is that it may produce hidden priority inversion. With the 'Priority Ceiling' technique, the priority of a task is always elevated, regardless of whether another task actually wants the shared resource. This unnecessary priority elevation always boosts the priority of a low priority task to that of the highest priority task among those sharing the resource, and other tasks with priorities higher than that of the low priority task are not allowed to preempt it while it is accessing the shared resource. This always gives the low priority task the luxury of running at high priority when accessing shared resources.
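POSIX exposes the ceiling protocol through the same mutex-attribute interface; a minimal sketch, where the ceiling value 90 is an assumed priority, not something taken from the text:

#include <pthread.h>

int main(void)
{
    pthread_mutex_t lock;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* Every task holding 'lock' runs at the resource's ceiling priority,
       i.e. the priority of the highest task sharing the resource. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 90);  /* assumed ceiling value */
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);     /* caller's priority is raised to 90 */
    /* ... access the shared resource ... */
    pthread_mutex_unlock(&lock);   /* original priority restored        */

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}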
3.3.2 Task Synchronization Techniques
So far we have discussed the various task/process synchronization issues encountered in multitasking systems due to concurrent resource access. Now let's discuss the various techniques used for synchronization of concurrent access in multitasking. Process/Task synchronization is essential for

1. Avoiding conflicts in resource access (racing, deadlock, starvation, livelock, etc.) in a multitasking environment.

2. Ensuring a proper sequence of operations across processes. The producer-consumer problem is a typical example of processes requiring a proper sequence of operations. In the producer-consumer problem, accessing the shared buffer by different processes is not the issue; the issue is that the producer process should write to the shared buffer only if the buffer is not full and the consumer thread should not read from the buffer if it is empty. Hence proper synchronization should be provided to implement this sequence of operations.

3. Communicating between processes.

The code memory area which holds the program instructions (piece of code) for accessing a shared resource (like shared memory, shared variables, etc.) is known as the 'critical section'. In order to synchronize the access to shared resources, the access to the critical section should be exclusive. The exclusive access to a critical section of code is provided through a mutual exclusion mechanism. Let us have a look at how mutual exclusion is important in concurrent access. Consider two processes, Process A and Process B, running on a multitasking system. Process A is currently running and it enters its critical section. Before Process A completes its operation in the critical section, the scheduler preempts Process A and schedules Process B for execution (Process B is of higher priority compared to Process A). Process B also contains access to the critical section which is already in use by Process A. If Process B continues its execution and enters the critical section which is already in use by Process A, a race condition will result. A mutual exclusion policy enforces mutually exclusive access to critical sections. Mutual exclusion can be enforced in different ways. Mutual exclusion blocks a process; based on the behaviour of the blocked process, mutual exclusion methods can be classified into two categories. In the following section we will discuss them in detail.

3.3.2.1 Mutual Exclusion through Busy Waiting/Spin Lock: 'Busy waiting' is the simplest
method for enforcing mutual exclusion. The following code snippet illustrates how 'Busy
waiting' enforces mutual exclusion.
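The snippet itself is not reproduced in this excerpt. The following is a minimal reconstruction consistent with the description below; the variable name bFlag is quoted later in the text, while the exact flag polarity and the function names are our assumptions (this sketch follows the convention described next, where the lock is set when a thread is inside its critical section).

#include <stdbool.h>

/* Shared lock variable: 'true' when some thread is inside its critical section. */
volatile bool bFlag = false;

void enter_critical_section(void)
{
    while (bFlag)
        ;              /* busy wait (spin) till the lock is released            */
    bFlag = true;      /* acquire the lock -- note: NOT atomic with the test above */
}

void exit_critical_section(void)
{
    bFlag = false;     /* release the lock */
}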
The 'Busy waiting' technique uses a lock variable for implementing mutual exclusion. Each process/thread checks this lock variable before entering the critical section. The lock is set to '1' by a process/thread if the process/thread is already in its critical section; otherwise the lock is set to '0'. The major challenge in implementing lock variable based synchronization is the non-availability of a single atomic instruction which combines the reading, comparing and setting of the lock variable. Most often, the three different operations related to the lock, viz. reading the lock variable, checking its present value, and setting it, are achieved with multiple low-level instructions. The low-level implementation of these operations depends on the underlying processor instruction set and the (cross) compiler in use. The low-level implementation of the 'Busy waiting' code snippet discussed above, under the Windows XP operating system running on an Intel Centrino Duo processor, is given below. The code snippet is compiled with the Microsoft Visual Studio 6.0 compiler.
The assembly language instructions reveal that the two high-level instructions (while(bFlag==false); and bFlag=true;), corresponding to reading the lock variable, checking its present value and setting it, are implemented at the processor level using six low-level instructions. Imagine a situation where 'Process 1' reads the lock variable, tests it, finds that the lock is available, and is about to set the lock for acquiring the critical section. But just before 'Process 1' sets the lock variable, 'Process 2' preempts 'Process 1' and starts executing. 'Process 2' contains critical section code and it tests the lock variable for its availability. Since 'Process 1' was unable to set the lock variable, its state is still '0', so 'Process 2' sets it and acquires the critical section. Now the scheduler preempts 'Process 2' and schedules 'Process 1' before 'Process 2' leaves the critical section. Remember, 'Process 1' was preempted at a point just before setting the lock variable ('Process 1' had already tested the lock variable just before it was preempted and found that the lock was available). Now 'Process 1' sets the lock variable and enters the critical section. This violates the mutual exclusion policy and may produce unpredicted results.
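Modern compilers do expose a single atomic read-modify-write operation, for example through C11's atomic_flag. As a sketch of how the same spin lock becomes safe (this addition is ours, not part of the original text):

#include <stdatomic.h>

/* The test and the set now happen as one indivisible operation,
   so a thread can no longer be preempted between them. */
atomic_flag lock = ATOMIC_FLAG_INIT;

void enter_critical_section(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;                       /* spin until the flag was previously clear */
}

void exit_critical_section(void)
{
    atomic_flag_clear(&lock);   /* release the lock */
}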

Device Driver
Device driver is a piece of software that acts as a bridge between the operating system and the hardware. In an operating system based product architecture, the user applications talk to the Operating System kernel for all necessary information exchange, including communication with the hardware peripherals. The architecture of the OS kernel will not allow direct device access from the user application. All the device related access should flow through the OS kernel, and the OS kernel routes it to the concerned hardware peripheral. The OS provides interfaces in the form of Application Programming Interfaces (APIs) for accessing the hardware. The device driver abstracts the hardware from user applications. The topology of user applications and hardware interaction in an RTOS based system is depicted in the figure below.
Device drivers are responsible for initiating and managing the communication with the hardware peripherals. They are responsible for establishing the connectivity, initializing the hardware (setting up the various registers of the hardware device) and transferring data. An embedded product may contain different types of hardware components, like a Wi-Fi module, file systems, storage device interfaces, etc. The initialization of these devices and the protocols required for communicating with them may differ. All these requirements are implemented in drivers, and a single driver will not be able to satisfy all of them. Hence each hardware (more specifically, each class of hardware) requires a unique driver component.

User Level Applications/Tasks (Apps)
            |
Operating System Services
            |
Device Drivers
            |
Hardware

Figure: Role of device driver in Embedded OS based products

Certain drivers come as part of the OS kernel and certain drivers need to be installed on the fly. For example, the program storage memory for an embedded product, say NAND Flash memory, requires a NAND Flash driver to read and write data from/to it. This driver should come as part of the OS kernel image. Certainly the OS will not contain the drivers for all devices and peripherals under the sun. It contains only the necessary drivers to communicate with the onboard devices (hardware devices which are part of the platform) and for a certain set of devices supporting standard protocols and device classes (say, USB mass storage devices or HID devices like mouse/keyboard). If an external device, whose driver software is not available with the OS kernel image, is connected to the embedded device (say, a medical device with a custom USB class implementation is connected to the USB port of the embedded product), the OS prompts the user to install its driver manually. Device drivers which are part of the OS image are known as 'built-in drivers' or 'on-board drivers'. These drivers are loaded by the OS at the time of booting the device and are always kept in RAM. Drivers which need to be installed for accessing a device are known as 'installable drivers'. These drivers are loaded by the OS on a need basis. Whenever the device is connected, the OS loads the corresponding driver to memory; when the device is removed, the driver is unloaded from memory. The operating system maintains a record of the drivers corresponding to each hardware.

The implementation of a driver is OS dependent. There is no universal implementation for a driver. How the driver communicates with the kernel depends on the OS structure and implementation. Different operating systems follow different implementations.

It is very essential to know the hardware interfacing details of on-board peripherals, like the memory address assigned to the device, the interrupt used, etc., for writing a driver for that peripheral. These vary with the hardware design of the product. Some real-time operating systems, like 'Windows CE', support a layered architecture for the driver which separates the low level implementation from the OS specific interface. The low level implementation part is generally known as the Platform Dependent Device (PDD) layer. The OS specific interface part is known as the Model Device Driver (MDD) or Logical Device Driver (LDD). For a standard driver for a specific operating system, the MDD/LDD always remains the same and only the PDD part needs to be modified according to the target hardware for a particular class of devices.

Most of the time, the hardware developer provides the implementation for all on-board devices for a specific OS along with the platform. The drivers are normally shipped in the form of a Board Support Package. The Board Support Package contains low level driver implementations for the onboard peripherals, an OEM Adaptation Layer (OAL) for accessing the various chip level functionalities, and a bootloader for loading the operating system. The OAL facilitates communication between the Operating System (OS) and the target device and includes code to handle interrupts, timers, power management, bus abstraction, generic I/O control codes (IOCTLs), etc. The driver files are usually in the form of a DLL file. Drivers can run in either user space or kernel space. Drivers which run in user space are known as user mode drivers and drivers which run in kernel space are known as kernel mode drivers. User mode drivers are safer than kernel mode drivers. If an error or exception occurs in a user mode driver, it won't affect the services of the kernel. On the other hand, if an exception occurs in a kernel mode driver, it may lead to a kernel crash. The way a device driver is written and the way interrupts are handled in it are operating system and target hardware specific. However, regardless of the OS type, a device driver implements the following:

1. Device (Hardware) initialization and interrupt configuration
2. Interrupt handling and processing
3. Client interfacing (interfacing with user applications)
The device (hardware) initialization part of the driver deals with configuring the different registers of the device (target hardware); for example, configuring an I/O port line of the processor as an input or output line and setting its associated registers for building a General Purpose IO (GPIO) driver. The interrupt configuration part deals with configuring the interrupts that need to be associated with the hardware. In the case of the GPIO driver, if the intention is to generate an interrupt when the input line is asserted, we need to configure the interrupt associated with the I/O port by modifying its associated registers. The basic interrupt configuration involves the following (a hypothetical register-level sketch follows the list).
1. Set the interrupt type (edge triggered (rising/falling) or level triggered (low or high)), enable the interrupts and set the interrupt priorities.
2. Bind the interrupt to an Interrupt Request (IRQ). The processor identifies an interrupt through its IRQ. These IRQs are generated by the interrupt controller. In order to identify an interrupt, the interrupt needs to be bound to an IRQ.
3. Register an Interrupt Service Routine (ISR) with the Interrupt Request (IRQ). The ISR is the handler for an interrupt. In order to service an interrupt, an ISR should be associated with an IRQ. Registering an ISR with an IRQ takes care of this.
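A purely illustrative sketch of these three steps for a GPIO driver follows. The register names, addresses, IRQ number and the bind_irq()/register_isr() services are all hypothetical, since the real interface is processor and OS specific.

#include <stdint.h>

/* Hypothetical memory-mapped GPIO registers; the addresses are
   placeholders, not those of any real processor. */
#define GPIO_DIR       (*(volatile uint32_t *)0x40020000u)
#define GPIO_INT_TYPE  (*(volatile uint32_t *)0x40020004u)
#define GPIO_INT_EN    (*(volatile uint32_t *)0x40020008u)
#define GPIO_PIN       4u
#define GPIO_IRQ       7

typedef void (*isr_t)(void);

/* Hypothetical OS services standing in for steps 2 and 3. */
static void bind_irq(int irq, unsigned pin)      { (void)irq; (void)pin; }
static void register_isr(int irq, isr_t handler) { (void)irq; (void)handler; }

static void gpio_isr(void)
{
    /* minimal handling; heavy processing is deferred to an IST */
}

void gpio_driver_init(void)
{
    GPIO_DIR      &= ~(1u << GPIO_PIN);  /* configure the line as an input      */
    GPIO_INT_TYPE |=  (1u << GPIO_PIN);  /* 1. edge triggered (rising) interrupt */
    GPIO_INT_EN   |=  (1u << GPIO_PIN);  /*    enable the interrupt              */
    bind_irq(GPIO_IRQ, GPIO_PIN);        /* 2. bind the interrupt to an IRQ      */
    register_isr(GPIO_IRQ, gpio_isr);    /* 3. register the ISR for that IRQ     */
}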
With these, the interrupt configuration is complete. If an interrupt occurs, depending on its priority, it is serviced and the corresponding ISR is invoked. The processing part of an interrupt is handled in the ISR. The whole interrupt processing can be done by the ISR itself or by invoking an Interrupt Service Thread (IST). The IST performs the interrupt processing on behalf of the ISR. To keep the ISR compact and short, it is always advisable to use an IST for interrupt processing. The intention of an interrupt is to send or receive commands or data to and from the hardware device and make the received data available to user programs for application specific processing. Since interrupt processing happens at kernel level, user applications may not have direct access to the drivers to pass and receive data. Hence it is the responsibility of the interrupt processing routine or thread to inform the user applications that an interrupt has occurred and that data is available for further processing. The client interfacing part of the device driver takes care of this. The client interfacing implementation makes use of the inter-process communication mechanisms supported by the embedded OS for communicating and synchronising with user applications and drivers. For example, to inform a user application that an interrupt has occurred and the data received from the device is placed in a shared buffer, the client interfacing code can signal (or set) an event. The user application creates the event, registers it and waits for the driver to signal it. The driver can share the received data through shared memory techniques; IOCTLs, shared buffers, etc. can be used for data sharing. The story is incomplete without performing an 'interrupt done' (interrupt processing completed) function in the driver. Whenever an interrupt is asserted, while vectoring to its corresponding ISR, all interrupts of equal and lower priorities are disabled. They are re-enabled only on executing the interrupt done function (same as the Return from Interrupt (RETI) instruction execution for the 8051) by the driver. The interrupt done function can be invoked at the end of the corresponding ISR or IST. A sketch of this ISR/IST pattern is given below.
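A minimal sketch in the style of Windows CE's kernel interrupt services (InterruptInitialize/InterruptDone), since the text names Windows CE; the logical interrupt ID SYSINTR_MYDEV, the event names and the data-sharing details are our assumptions.

#include <windows.h>

#define SYSINTR_MYDEV 17            /* hypothetical logical interrupt id         */

static HANDLE g_hIntrEvent;         /* signaled by the kernel when the IRQ fires */
static HANDLE g_hDataReady;         /* named event the user application waits on */

DWORD WINAPI interrupt_service_thread(LPVOID arg)
{
    while (1)
    {
        /* Sleep until the kernel signals that our interrupt occurred. */
        WaitForSingleObject(g_hIntrEvent, INFINITE);

        /* ... read data from the device into a shared buffer ... */

        SetEvent(g_hDataReady);        /* client interfacing: wake the application  */
        InterruptDone(SYSINTR_MYDEV);  /* re-enable equal/lower priority interrupts */
    }
    return 0;
}

void driver_init(void)
{
    g_hIntrEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    g_hDataReady = CreateEvent(NULL, FALSE, FALSE, TEXT("MYDEV_DATA_READY"));
    InterruptInitialize(SYSINTR_MYDEV, g_hIntrEvent, NULL, 0);
    CreateThread(NULL, 0, interrupt_service_thread, NULL, 0, NULL);
}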
