
Chapter Three

Design Process II

3.1. Introduction

In an operating system, a process is a program that is being executed. During its execution, a
process goes through different states. Understanding these states helps us see how the operating
system manages processes, ensuring that the computer runs efficiently.

A process passes through at least five states during its lifetime. Although a process is always in one of these states during execution, the names of the states are not standardized across operating systems.

3.2. States and state diagrams

The states of a process are as follows:

 New State: The process is about to be created but has not yet been created. It exists as a
program in secondary memory that will be picked up by the OS to create the process.

 Ready State: After creation, the process enters the ready state, i.e. it is loaded into main
memory. The process is now ready to run and is waiting for CPU time. Processes that are
ready for execution by the CPU are maintained in a queue called the ready queue.

 Run State: The process is chosen from the ready queue by the OS for execution and the
instructions within the process are executed by any one of the available processors.

 Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or
needs access to a critical region whose lock is already held, it enters the blocked or wait
state. The process continues to wait in main memory and does not require the CPU. Once
the I/O operation is completed, the process goes back to the ready state.

 Terminated or Completed State: The process is killed and its PCB is deleted. The
resources allocated to the process are released or deallocated.

 Suspend Ready: A process that was initially in the ready state but was swapped out of
main memory (refer to the virtual memory topic) and placed onto external storage by the
scheduler is said to be in the suspend ready state. The process transitions back to the
ready state whenever it is brought into main memory again.

 Suspend Wait or Suspend Blocked: Similar to suspend ready, but for a process that was
performing an I/O operation when a shortage of main memory caused it to be moved to
secondary memory. When its I/O work is finished, it may move to the suspend ready state.

Figure 1: Operating System Process State Diagram

 CPU and I/O Bound Processes: If a process is intensive in terms of CPU operations, it is
called a CPU-bound process. Similarly, if a process is intensive in terms of I/O operations,
it is called an I/O-bound process.

A process can move between different states in an operating system based on its execution status
and resource availability. Here are some examples of how a process can move between different
states:

 New to Ready: When a process is created, it is in a new state. It moves to the ready state
when the operating system has allocated resources to it and it is ready to be executed.

 Ready to Running: When the CPU becomes available, the operating system selects a
process from the ready queue depending on various scheduling algorithms and moves it
to the running state.

 Running to Blocked: When a process needs to wait for an event to occur (I/O operation
or system call), it moves to the blocked state. For example, if a process needs to wait for
user input, it moves to the blocked state until the user provides the input.

 Running to Ready: When a running process is preempted by the operating system, it
moves to the ready state. For example, if a higher-priority process becomes ready, the
operating system may preempt the running process and move it to the ready state.

 Blocked to Ready: When the event a blocked process was waiting for occurs, the
process moves to the ready state. For example, if a process was waiting for user input and
the input is provided, it moves to the ready state.

 Running to Terminated: When a process completes its execution or is terminated by the
operating system, it moves to the terminated state.
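
To make this life cycle concrete, here is a minimal C sketch of the basic states and transitions just described. The enum values, the process_t type, and the transition helper are illustrative names invented for this sketch, not a real OS API.

    #include <stdio.h>

    /* The five basic process states described above. */
    typedef enum {
        STATE_NEW, STATE_READY, STATE_RUNNING, STATE_BLOCKED, STATE_TERMINATED
    } proc_state_t;

    typedef struct {
        int pid;
        proc_state_t state;
    } process_t;

    /* Record a state transition, e.g. READY -> RUNNING when dispatched. */
    static void transition(process_t *p, proc_state_t next) { p->state = next; }

    int main(void) {
        process_t p = { .pid = 1, .state = STATE_NEW };
        transition(&p, STATE_READY);      /* new to ready: admitted and loaded in memory */
        transition(&p, STATE_RUNNING);    /* ready to running: selected by the scheduler */
        transition(&p, STATE_BLOCKED);    /* running to blocked: waits for I/O */
        transition(&p, STATE_READY);      /* blocked to ready: I/O completed */
        transition(&p, STATE_RUNNING);    /* dispatched again */
        transition(&p, STATE_TERMINATED); /* running to terminated: execution complete */
        printf("process %d terminated\n", p.pid);
        return 0;
    }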

Types of Schedulers

 Long-Term Scheduler: Decides how many processes should be made to stay in the
ready state; this determines the degree of multiprogramming. Once a decision is taken, it
lasts for a long time, which also means the scheduler runs infrequently. Hence it is called a
long-term scheduler.

 Short-Term Scheduler: The short-term scheduler decides which process is to be
executed next and then calls the dispatcher. The dispatcher is the software that
moves a process from ready to running and vice versa; in other words, it performs
context switching. It runs frequently. The short-term scheduler is also called the CPU scheduler.
 Medium-Term Scheduler: The suspension decision is taken by the medium-term scheduler.
It is used for swapping, which is moving a process from main memory to secondary
memory and vice versa. Swapping is done to reduce the degree of multiprogramming.

Multiprogramming

We have many processes ready to run. There are two types of multiprogramming:

 Preemption – A process is forcefully removed from the CPU. Preemption is also called time
sharing or multitasking.

 Non-Preemption – Processes are not removed until they complete their execution. Once
control of the CPU is given to a process, it cannot be taken back forcibly until the process
releases it.

Degree of Multiprogramming

The maximum number of processes that can reside in the ready state determines the degree of
multiprogramming; e.g., if the degree of multiprogramming = 100, then at most 100 processes
can reside in the ready state.

Operations on the Process

 Creation: Once the process has been created, it enters the ready queue (in main memory)
and is prepared for execution.

 Scheduling: The operating system picks one process to begin executing from among the
numerous processes that are currently in the ready queue. Scheduling is the process of
choosing the next process to run.
 Execution: The processor begins running the process as soon as it is scheduled. During
execution, a process may become blocked or have to wait, at which point the processor
switches to executing other processes.

 Killing or Deletion: The OS terminates the process once its purpose has been fulfilled,
and the process's context is then destroyed.

 Blocking: When a process is waiting for an event or resource, it is blocked. The
operating system will place it in a blocked state, and it will not be able to execute until
the event or resource becomes available.

 Resumption: When the event or resource that caused a process to block becomes
available, the process is removed from the blocked state and added back to the ready
queue.

 Context Switching: When the operating system switches from executing one process to
another, it must save the current process’s context and load the context of the next
process to execute. This is known as context switching.

 Inter-Process Communication: Processes may need to communicate with each other to
share data or coordinate actions. The operating system provides mechanisms for
inter-process communication, such as shared memory, message passing, and synchronization
primitives.

 Process Synchronization: Multiple processes may need to access a shared resource or
critical section of code simultaneously. The operating system provides synchronization
mechanisms to ensure that only one process can access the resource or critical section at a
time.

 Process States: Processes may be in one of several states, including ready, running,
waiting, and terminated. The operating system manages the process states and transitions
between them.

Features of The Process State


 A process can move from the running state to the waiting state if it needs to wait for a
resource to become available.

 A process can move from the waiting state to the ready state when the resource it was
waiting for becomes available.

 A process can move from the ready state to the running state when it is selected by the
operating system for execution.

 The scheduling algorithm used by the operating system determines which process is
selected to execute from the ready state.

 The operating system may also move a process from the running state to the ready state
to allow other processes to execute.

 A process can move from the running state to the terminated state when it completes its
execution.

 A process can move from the waiting state directly to the terminated state if it is aborted
or killed by the operating system or another process.

 A process can go through the ready, running, and waiting states any number of times in its
life cycle, but the new and terminated states occur only once.

 The process state includes information about the program counter, CPU registers,
memory allocation, and other resources used by the process.

 The operating system maintains a process control block (PCB) for each process, which
contains information about the process state, priority, scheduling information, and other
process-related data.

 The process state diagram is used to represent the transitions between different states of a
process and is an essential concept in process management in operating systems.

3.3. Structures (ready list, process control blocks, and so forth)

While creating a process, the operating system performs several operations. To identify the
processes, it assigns a process identification number (PID) to each process. As the operating
system supports multi-programming, it needs to keep track of all the processes. For this task, the
process control block (PCB) is used to track the process’s execution status. Each block of
memory contains information about the process state, program counter, stack pointer, status of
opened files, scheduling algorithms, etc.

All this information is required and must be saved when the process is switched from one state to
another. When the process makes a transition from one state to another, the operating system
must update information in the process’s PCB. A process control block (PCB) contains
information about the process, i.e. registers, quantum, priority, etc. The process table is an array
of PCBs, which logically contains a PCB for all of the current processes in the system.

Structure of the Process Control Block

A Process Control Block (PCB) is a data structure used by the operating system to manage
information about a process. The process control block keeps track of many important pieces of
information needed to manage processes efficiently; the diagram below illustrates some of these
key data items.

Figure 2: Process Control Block

Process Control Block Components

 Pointer: A stack pointer that must be saved when the process is switched from one state
to another, so that the current position of the process is retained.

 Process state: It stores the respective state of the process.

 Process number: Every process is assigned a unique ID, known as the process ID or PID;
this field stores that identifier.

 Program counter: Stores the address of the next instruction to be executed for the
process.

 Registers: When a process is running and its time slice expires, the current values of the
process-specific registers are stored in the PCB and the process is swapped out. When the
process is next scheduled to run, the register values are read from the PCB and written
back to the CPU registers. This is the main purpose of the registers field in the PCB.

 Memory limits: This field contains information about the memory-management system
used by the operating system, which may include page tables, segment tables, etc.

 List of Open files: This information includes the list of files opened for a process.
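
As a rough illustration of how the fields above fit together, here is a simplified PCB declared as a C struct. The field names and sizes are hypothetical; a real kernel's PCB (for example, Linux's task_struct) is far larger.

    #define MAX_OPEN_FILES 16

    struct pcb {
        struct pcb   *next;                /* pointer linking PCBs into queues */
        int           state;               /* process state (new, ready, running, ...) */
        int           pid;                 /* process number */
        unsigned long program_counter;     /* address of the next instruction */
        unsigned long registers[16];       /* saved CPU registers for this process */
        unsigned long mem_base, mem_limit; /* memory limits */
        int           open_files[MAX_OPEN_FILES]; /* list of open files */
    };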

Figure 3: Process Table and Process Control Block

Additional Points to Consider for Process Control Block (PCB)

 Interrupt Handling: The PCB also contains information about the interrupts that a
process may have generated and how they were handled by the operating system.

 Context Switching: The process of switching from one process to another is called
context switching. The PCB plays a crucial role in context switching by saving the state
of the current process and restoring the state of the next process.

 Real-Time Systems: Real-time operating systems may require additional information in
the PCB, such as deadlines and priorities, to ensure that time-critical processes are
executed in a timely manner.

 Virtual Memory Management: The PCB may contain information about a process’s
virtual memory management, such as page tables and page fault handling.

 Inter-Process Communication: The PCB can be used to facilitate inter-process
communication by storing information about shared resources and communication
channels between processes.

 Fault Tolerance: Some operating systems may use multiple copies of the PCB to
provide fault tolerance in case of hardware failures or software errors.

Location of The Process Control Block

The Process Control Block (PCB) is stored in a special part of memory that normal users can’t
access. This is because it holds important information about the process. Some operating systems
place the PCB at the start of the kernel stack for the process, as this is a safe and secure spot.

Advantages

 Efficient Process Management: The process table and PCB provide an efficient way to
manage processes in an operating system. The process table contains all the information
about each process, while the PCB contains the current state of the process, such as the
program counter and CPU registers.

 Resource Management: The process table and PCB allow the operating system to
manage system resources, such as memory and CPU time, efficiently. By keeping track
of each process’s resource usage, the operating system can ensure that all processes have
access to the resources they need.

 Process Synchronization: The process table and PCB can be used to synchronize
processes in an operating system. The PCB contains information about each process’s
synchronization state, such as its waiting status and the resources it is waiting for.

 Process Scheduling: The process table and PCB can be used to schedule processes for
execution. By keeping track of each process’s state and resource usage, the operating
system can determine which processes should be executed next.

Disadvantages

 Overhead: The process table and PCB can introduce overhead and reduce system
performance. The operating system must maintain the process table and PCB for each
process, which can consume system resources.

 Complexity: The process table and PCB can increase system complexity and make it
more challenging to develop and maintain operating systems. The need to manage and
synchronize multiple processes can make it more difficult to design and implement
system features and ensure system stability.

 Scalability: The process table and PCB may not scale well for large-scale systems with
many processes. As the number of processes increases, the process table and PCB can
become larger and more difficult to manage efficiently.

 Security: The process table and PCB can introduce security risks if they are not
implemented correctly. Malicious programs can potentially access or modify the process
table and PCB to gain unauthorized access to system resources or cause system
instability.

 Miscellaneous Accounting and Status Data – This field includes information about the
amount of CPU time used, time constraints, job or process numbers, etc. The process control
block also stores the register contents, known as the execution context, of the processor from
the moment the process was blocked from running. This execution context enables the operating
system to restore a process's execution context when the process returns to the running
state. When the process makes a transition from one state to another, the operating system
updates the information in the process's PCB. The operating system maintains pointers to
each process's PCB in a process table so that it can access the PCB quickly.

3.4. Dispatching and Context Switching in Operating Systems

Switching the CPU to another process requires performing a state save of the current process and
a state restore of a different process. This task is known as a context switch. When a context
switch occurs, the kernel saves the context of the old process in its PCB and loads the saved
context of the new process scheduled to run. Context-switch time is pure overhead because the
system does no useful work while switching. Switching speed varies from machine to machine,
depending on the memory speed, the number of registers that must be copied, and the existence
of special instructions (such as a single instruction to load or store all registers). A typical speed
is a few milliseconds. Context-switch times are highly dependent on hardware support. For
instance, some processors (such as the Sun Ultra SPARC) provide multiple sets of registers. A
context switch here simply requires changing the pointer to the current register set. Of course, if
there are more active processes than there are register sets, the system resorts to copying register
data to and from memory, as before. Also, the more complex the operating system, the greater
the amount of work that must be done during a context switch.

Need for Context Switching

Context switching enables all processes to share a single CPU to finish their execution while the
system stores the status of each task. When a process is reloaded into the system, its execution
begins at the same point where it was interrupted.

The operating system’s need for context switching is explained by the reasons listed below.

 One process does not directly switch to another within the system. Context switching
makes it easier for the operating system to use the CPU’s resources to carry out its tasks
and store its context while switching between multiple processes.

 Context switching stores the status of each task so that, when a process is reloaded into
the system, its execution resumes at the point where it was interrupted.

 Context switching allows a single CPU to handle multiple process requests in parallel
without the need for any additional processors.

Context Switching Triggers

The three different categories of context-switching triggers are as follows.

 Interrupts

 Multitasking

 User/Kernel switch

Interrupts: When the CPU has requested that data be read from a disk, for example, and an
interrupt occurs, context switching automatically switches to the part of the system that can
handle the interruption more quickly.

Multitasking: The ability for a process to be switched from the CPU so that another process can
run is known as context switching. When a process is switched, the previous state is retained so
that the process can continue running at the same spot in the system.

Kernel/User Switch: This trigger is used when the OS needs to switch between user mode and
kernel mode.

What is a Process Control Block (PCB)?

The Process Control Block (PCB) is also known as a Task Control Block; it represents a
process in the operating system. A PCB is a data structure used by a computer to store all
information about a process, and it is also called the process descriptor. When a process is
created (started or installed), the operating system creates a corresponding PCB.

Figure 4: State Diagram of Context Switching

How Process Context Switching Works

Context switching between two processes, selected from the ready queue of process control
blocks (for example, by priority), proceeds through the following steps.

 The state of the current process must be saved for rescheduling.

 The process state comprises the records, credentials, and operating system-specific
information stored in the PCB.

 The PCB can be stored in kernel memory or in a dedicated, OS-specific structure.

 A handle to the PCB is added to the queue of processes that are ready to run.

 The operating system suspends the execution of the current process and selects the next
process from the ready list by consulting its PCB.

 Load the PCB’s program counter and continue execution in the selected process.

 Process or thread priority values can affect which process is selected from the queue,
which can be important.
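
The save/restore core of these steps can be sketched in C as follows. Real context switches are written in assembly; save_cpu_context and load_cpu_context are hypothetical architecture hooks assumed here purely for illustration.

    struct context { unsigned long pc; unsigned long regs[16]; };
    struct pcb     { int pid; struct context ctx; };

    /* Assumed architecture-specific hooks (not a real API). */
    extern void save_cpu_context(struct context *out);
    extern void load_cpu_context(const struct context *in);

    /* Save the old process's execution context into its PCB, then load the
     * next process's context; execution resumes where `next` left off. */
    void context_switch(struct pcb *old, struct pcb *next) {
        save_cpu_context(&old->ctx);
        load_cpu_context(&next->ctx);
    }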

What is a Scheduler?

Schedulers are a special type of operating system software that manages process scheduling in a
variety of ways. Its main function is to select the jobs that are to be submitted to the system and
decide which process will run.

There are three types of schedulers, which are as follows:

1. Long-Term (Job) Scheduler: Because main memory is small, initially all programs are
stored in secondary memory. Once they are loaded into main memory, they are known as
processes. The long-term scheduler decides how many processes will remain in the ready
queue. So, in simple words, the long-term scheduler decides the degree of
multiprogramming of the system.

2. Medium-Term Scheduler: Often a running process needs an I/O operation that does not
require CPU time. That is why, when a process needs an I/O operation during its execution,
the operating system sends that process to a blocked queue. When the process completes
its I/O operation, it is shifted back to the ready queue. All these decisions are taken
by the medium-term scheduler. Medium-term scheduling is part of swapping.

3. Short-Term (CPU) Scheduler: When there are many processes initially in the main
memory, all are present in the ready queue. Out of all these processes, only one is
selected for execution. This decision is in the hands of the short-term scheduler or CPU
scheduler.

Advantages of a Scheduler

 Optimized CPU Utilization: Helps ensure the CPU is kept as busy as possible by
continually choosing tasks to execute.

 Fair Process Handling: Allocates CPU time to each process via algorithms such as
FCFS, SJF, and RR, offering equal opportunity to the processes in the system.

 Process Management: Oversees the processes in different states (for instance, ready,
blocked, or running).

Disadvantages of a Scheduler

 Complexity: Schedulers may not be well suited to all system designs and can be very
hard to implement and tune correctly.

 Overhead: Maintaining the scheduler introduces some overhead to the system, especially
in a real-time environment where decisions must be made rapidly.

Figure 5: The Scheduler in an Operating System

What is a Dispatcher?

The dispatcher is a special type of program whose work starts after the scheduler. When the
scheduler completes its task of selecting a process, it is the dispatcher that moves the process to
the queue it needs to go to.

The dispatcher is the module that hands over control of the CPU to the process that has been
selected by the short-term scheduler.

The following things happen in this function:

1. Switching context: Saves the state of the current process and restores the state of the
process to be run next.

2. Switching to User Mode: Makes sure the process runs in user mode rather than kernel
mode, which matters for security and privilege separation.

3. Jumping to the proper location: Jumps to the correct location in the user program from
where the program can be restarted.

Advantages of a Dispatcher

 Fast Process Switching: Moves a process from the ready queue into execution with
minimal delay.

 Efficient CPU Time Allocation: Ensures that processes receive CPU time, providing the
support necessary for multitasking.

Disadvantages of a Dispatcher

 Dispatch Latency: Although the time taken is considerably small, delays in dispatching
can slow down the system.

 Dependent on the Scheduler: The dispatcher cannot work on its own, since it relies on
the decisions made by the scheduler.

Difference Between Dispatcher and Scheduler

Consider a situation, where various processes are residing in the ready queue waiting to be
executed. The CPU cannot execute all of these processes simultaneously, so the operating system
has to choose a particular process on the basis of the scheduling algorithm used. So, this
procedure of selecting a process among various processes is done by the scheduler. Once the
scheduler has selected a process from the queue, the dispatcher comes into the picture, and it is
the dispatcher who takes that process from the ready queue and moves it into the running state.
Therefore, the scheduler gives the dispatcher an ordered list of processes which the dispatcher
moves to the CPU over time.

Example – There are 4 processes in the ready queue, P1, P2, P3, and P4; their arrival times are
t0, t1, t2, and t3 respectively. A First In First Out (FIFO) scheduling algorithm is used. Because P1
arrived first, the scheduler will decide it is the first process that should be executed, and the
dispatcher will remove P1 from the ready queue and give it to the CPU. The scheduler will then
determine P2 to be the next process that should be executed, so when the dispatcher returns to
the queue for a new process, it will take P2 and give it to the CPU. This continues in the same
way for P3, and then P4.
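
This P1..P4 example can be sketched as a toy FIFO ready queue in C: the scheduler repeatedly selects the head of the queue, and the dispatcher "gives it the CPU". All names here are illustrative only, not a real OS interface.

    #include <stdio.h>

    #define NPROC 4

    int main(void) {
        const char *ready_queue[NPROC] = { "P1", "P2", "P3", "P4" };
        for (int head = 0; head < NPROC; head++) {
            const char *chosen = ready_queue[head];           /* scheduler: FIFO pick */
            printf("dispatcher: giving CPU to %s\n", chosen); /* dispatcher: run it */
        }
        return 0;
    }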

3.5. The role of interrupts

What is an Interrupt in an OS?

An interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention. It alerts the processor to a high-priority task requiring interruption of the
currently running process. In I/O devices, one of the bus control lines is dedicated to this purpose
and is called an interrupt request (IRQ) line; the routine the processor executes in response is
called the Interrupt Service Routine (ISR).

When a device raises an interrupt, the processor first completes the execution of the current
instruction. Then it loads the Program Counter (PC) with the address of the first instruction of
the ISR. Before loading the program counter with this address, the address of the interrupted
instruction is moved to a temporary location, so that after handling the interrupt the
processor can continue with the interrupted process.

While the processor is handling the interrupt, it must inform the device that its request has been
recognized, so that the device stops sending the interrupt request signal. Also, saving the registers
so that the interrupted process can be restored later increases the delay between the time an
interrupt is received and the start of execution of the ISR. This delay is called interrupt latency.

Figure 6: Interrupts in an Operating System

A computer can perform only one instruction at a time, but because it can be interrupted, it can
manage how programs or sets of instructions are performed. This is known as multitasking: it
allows the user to do many different things simultaneously, with the computer taking turns among
the programs the user starts. Of course, the computer operates at speeds that make it seem as if
all user tasks are being performed simultaneously.

An operating system usually has some code that is called an interrupt handler. The interrupt
handler prioritizes the interrupts and saves them in a queue if more than one is waiting to be
handled. The operating system has another little program called a scheduler that figures out
which program to control next.

Types of Interrupt

Interrupt signals may be issued in response to hardware or software events. These are classified
as hardware interrupts or software interrupts, respectively.

Figure 7: Types of Interrupts

1. Hardware Interrupts

A hardware interrupt is a condition related to the state of the hardware that may be signaled by
an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices
embedded in processor logic to communicate that the device needs attention from the operating
system. For example, pressing a keyboard key or moving a mouse triggers hardware interrupts
that cause the processor to read the keystroke or mouse position.

Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any
time during instruction execution. Consequently, all hardware interrupt signals are conditioned by
synchronizing them to the processor clock and are acted on only at instruction-execution boundaries.

In many systems, each device is associated with a particular IRQ signal. This makes it possible
to quickly determine which hardware device is requesting service and expedite servicing of that
device.

On some older systems, all interrupts went to the same location, and the OS used a specialized
instruction to determine the highest-priority unmasked interrupt outstanding. On contemporary
systems, there is generally a distinct interrupt routine for each type of interrupt or each interrupt
source, often implemented as one or more interrupt vector tables. Hardware interrupts are further
classified into two types:

 Maskable Interrupts: Processors typically have an internal interrupt mask register
which allows selective enabling and disabling of hardware interrupts. Each interrupt
signal is associated with a bit in the mask register; on some systems, the interrupt is
enabled when the bit is set and disabled when the bit is clear, while on others, a set bit
disables the interrupt. When the interrupt is disabled, the associated interrupt signal will
be ignored by the processor. Signals which are affected by the mask are called maskable
interrupts.
The interrupt mask does not affect some interrupt signals and therefore cannot be
disabled; these are called non-maskable interrupts (NMI).
 NMIs indicate high priority events that need to be processed immediately and which
cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer.
To mask an interrupt is to disable it, while to unmask an interrupt is to enable it.
 Spurious Interrupts:
A spurious interrupt is a hardware interrupt for which no source can be found. The term
phantom interrupt or ghost interrupt may also be used to describe this phenomenon. Spurious
interrupts tend to be a problem with a wired-OR interrupt circuit attached to a level-
sensitive processor input. Such interrupts may be difficult to identify when a system
misbehaves.
In a wired-OR circuit, parasitic capacitance charging/discharging through the interrupt
line's bias resistor will cause a small delay before the processor recognizes that the
interrupt source has been cleared. If the interrupting device is cleared too late in the
interrupt service routine (ISR), there won't be enough time for the interrupt circuit to
return to the quiescent state before the current instance of the ISR terminates. The result
is that the processor will think another interrupt is pending, since the voltage at its interrupt
request input will not be high or low enough to establish an unambiguous internal logic 1
or logic 0. The apparent interrupt will have no identifiable source, and hence this is called
spurious.
A spurious interrupt may result in system deadlock or other undefined operation if the
ISR doesn't account for the possibility of such an interrupt occurring. As spurious
interrupts are mostly a problem with wired-OR interrupt circuits, good programming
practice in such systems is for the ISR to check all interrupt sources for activity and take
no action if none of the sources is interrupting.

2. Software Interrupts

The processor requests a software interrupt upon executing particular instructions or when
certain conditions are met. Every software interrupt signal is associated with a particular
interrupt handler.

A software interrupt may be intentionally caused by executing a special instruction that invokes
an interrupt when executed by design. Such instructions function similarly to subroutine calls
and are used for various purposes, such as requesting operating system services and interacting
with device drivers.

Software interrupts may also be unexpectedly triggered by program execution errors. These
interrupts are typically called traps or exceptions.

Handling Multiple Devices

When more than one device raises an interrupt request signal, additional information is needed to
decide which device to service first. The following methods are used to make this decision:

Figure 8: Interrupt-Handling Methods in an Operating System

1. Polling
In polling, the first device encountered with the IRQ bit set is serviced first, and the
appropriate ISR is called to service it. Polling is easy to implement, but a lot of time is
wasted by interrogating the IRQ bit of all devices.
2. Vectored-Interrupts
In vectored interrupts, a device requesting an interrupt identifies itself directly by sending
a special code to the processor over the bus. This enables the processor to identify the
device that generated the interrupt. The special code can be the starting address of the
ISR or where the ISR is located in memory and is called the interrupt vector.
3. Interrupt-Nesting
In this method, the I/O device is organized in a priority structure. Therefore, an interrupt
request from a higher priority device is recognized, whereas a lower priority device is
not. The processor accepts interrupts only from devices or processes whose priority is
higher than its own.
The processor's priority is encoded in a few bits of the processor status (PS) register, and it
can be changed by program instructions that write into the PS. The processor is in supervisor
mode only while executing OS routines, and it switches to user mode before executing
application programs.

Interrupt Handling

We know that the instruction cycle consists of fetch, decode, execute, and read/write phases.
After every instruction cycle, the processor checks for interrupts to be processed. If there is
no pending interrupt, it proceeds to the next instruction cycle, as given by the instruction
register. If an interrupt is present, the interrupt handler is triggered: it stops the instruction
currently being processed, saves its configuration in a register, and loads the program counter
with the handler address from the interrupt vector table.

After the processor finishes handling the interrupt, the interrupt handler loads the saved
instruction and its configuration back from the register, and the process resumes where it left
off. Saving the old execution configuration and loading the new interrupt configuration is also
called context switching. There are different types of interrupt handlers.

1. A First-Level Interrupt Handler (FLIH) is a hard or fast interrupt handler. These handlers
introduce more jitter into process execution, and they mainly service maskable interrupts.
2. A Second-Level Interrupt Handler (SLIH) is a soft, slow interrupt handler. These handlers
introduce less jitter.

The interrupt handler is also called an interrupt service routine (ISR). The main features of an
ISR are:

 Interrupts can occur at any time; they are asynchronous, and ISRs can be called for
asynchronous interrupts.
 An interrupt service mechanism can call ISRs from multiple sources.
 ISRs can handle both maskable and non-maskable interrupts, and an instruction in a program
can disable or enable an interrupt handler call.
 At the beginning of its execution, an ISR disables the interrupt services of other devices;
after it completes execution, it re-enables those interrupt services.
 Nested interrupts are allowed: an ISR may divert to another ISR.

Interrupt Latency

When an interrupt occurs, servicing it by executing the ISR may not start immediately, because a
context switch is required first. The time interval between the occurrence of an interrupt and the
start of execution of its ISR is called interrupt latency.

 Tswitch = the time taken for the context switch

 ΣTexec = the sum of the time intervals for executing the ISR

 Interrupt Latency = Tswitch + ΣTexec
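
For example, with purely hypothetical numbers: if the context switch takes 2 µs and the intervening ISR execution takes 8 µs in total, the interrupt latency is 2 µs + 8 µs = 10 µs.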

How the CPU Responds to Interrupts

A key point in understanding how operating systems work is understanding what the CPU does
when an interrupt occurs. The CPU hardware does the same thing for each interrupt, allowing
operating systems to take control away from the currently running user process. The switching of
running processes to execute code from the OS kernel is called a context switch.

CPUs rely on the data contained in a couple of registers to handle interrupts correctly. One
register holds a pointer to the process control block of the currently running process, and this
register is set each time a process is loaded into memory. The other register holds a pointer to a
table containing pointers to the instructions in the OS kernel for interrupt handlers and system
calls. The value in this register and contents of the table are set when the operating system is
initialized at boot time. In response to an interrupt, the CPU saves the state of the running
process (using the PCB pointer), looks up the handler's address in the interrupt table, and
transfers control to that kernel handler.

3.6. Concurrent execution: advantages and disadvantages

Concurrency in software engineering refers to the simultaneous execution of multiple sequential
instructions. From the operating system's perspective, this arises when multiple process threads
are run in parallel.

Enhanced Efficiency

Concurrency enables the simultaneous execution of multiple applications, leading to increased
efficiency and overall productivity of workstations.

Optimized Resource Usage

It facilitates better utilization of resources by allowing unused assets or data to be accessed by
other applications in an organized manner. This reduces the waiting time between threads and
enhances average response time. Thanks to the efficient utilization of available resources,
applications can continue their operations without waiting for others to complete.

Improved System Performance

Concurrency contributes to the improved performance of the operating system. This is achieved
by enabling various hardware resources to be accessed concurrently by different applications or
threads.

It allows simultaneous use of the same resources and supports the parallel utilization of different
resources. This seamless integration of resources and applications helps accomplish the main
objective quickly and effectively.

Cons of Concurrency

Here are important points to keep in mind regarding the challenges of concurrency when
planning processes:

Minimizing Interference

When multiple applications run concurrently, it’s crucial to safeguard them from causing
disruptions to each other’s operations.

Coordinated Execution

Applications running in parallel need careful coordination, synchronization, and well-organized
scheduling. This involves allocating resources and determining the order of execution.

Coordinating Systems

Designing additional systems becomes necessary to manage the coordination among concurrent
applications effectively.

Increased Complexity

Operating systems encounter greater complexity and performance overheads when switching
between various applications running in parallel.

Performance Impact

Too many simultaneous processes can lead to decreased or degraded overall system
performance.

Keeping these considerations in mind helps us understand the complexities and challenges
concurrency can bring during process planning.

Issues of Concurrency

Understanding Concurrency Challenges: Non-Atomic Operations, Race Conditions, Blocking,
Starvation, and Deadlocks

In the world of software, dealing with multiple processes running at the same time brings its own
set of challenges. Let’s dive into some common issues that can arise:

Non-Atomic Operations

Imagine processes working together like a symphony. When operations aren’t atomic, other
processes can interrupt them, causing issues. An atomic operation happens independently of
other processes or threads. Any operation that relies on another process is non-atomic, which can
lead to problems.

Race Conditions

Think of this as a software traffic jam. It’s when the output of a program depends on the
unpredictable timing or sequence of events. This often happens in software that handles multiple
tasks simultaneously, threads that cooperate, or when sharing resources. It’s like trying to cross a
busy intersection without traffic lights!

Blocking

Imagine a process putting its work on hold while it waits for something else to happen, like a
resource becoming available or an input operation finishing. It’s like waiting for a green light to
move forward. But if a process gets stuck waiting a long time, it’s not pleasant, especially when
regular updates are needed.

Starvation

Picture a process that’s always hungry for resources but keeps getting overlooked. In concurrent
computing, starvation occurs when a process is continuously denied the resources it needs to do
its job. It can be caused by errors in how resources are allocated or managed.

Deadlock

Imagine a group of friends, each waiting for another to make a move, resulting in no one
moving. That’s a deadlock. In the computing world, it’s when processes or threads are stuck
waiting for each other to release a lock or send a message. Deadlocks can occur in systems where
processes share resources, like in parallel computing or distributed systems.

By understanding these challenges and their implications, developers can create better strategies
for managing concurrent processes. Like orchestrating a well-coordinated performance, handling
concurrency requires careful planning and synchronization.

3.7. The “mutual exclusion” problem and some solutions

The problem which mutual exclusion addresses is a problem of resource sharing: how can a
software system control multiple processes' access to a shared resource, when each process needs
exclusive control of that resource while doing its work? The mutual-exclusion solution to this
makes the shared resource available only while the process is in a specific code segment called
the critical section. It controls access to the shared resource by controlling each process's
execution of the part of its program where the resource would be used.

A successful solution to this problem must have at least these two properties:

 It must implement mutual exclusion: only one process can be in the critical section at a
time.
 It must be free of deadlocks: if processes are trying to enter the critical section, one of
them must eventually be able to do so successfully, provided no process stays in the
critical section permanently.
Hardware solutions

On single-processor systems, the simplest solution to achieve mutual exclusion is to disable
interrupts while a process is in a critical section. This prevents any interrupt service
routines (such as the system timer, I/O interrupt requests, etc.) from running, effectively
preventing the process from being interrupted. Although this solution is effective, it leads to many
problems. If a critical section is long, the system clock will drift every time the critical section
is executed, because the timer interrupt (which keeps the system clock in sync) is no longer
serviced, so tracking time is impossible during the critical section. Also, if a process halts during
its critical section, control will never be returned to another process, effectively halting the entire
system. A more elegant method for achieving mutual exclusion is the busy-wait.

Busy-waiting is effective for both single-processor and multiprocessor systems. The use of
shared memory and an atomic (remember - we talked about atomic) test-and-set instruction
provides the mutual exclusion. A process can test-and-set on a variable in a section of shared
memory, and since the operation is atomic, only one process can set the flag at a time. Any
process that is unsuccessful in setting the flag (it is unsuccessful because the process can NOT
gain access to the variable until the other process releases it) can either go on to do other tasks
and try again later, release the processor to another process and try again later, or continue to
loop while checking the flag until it is successful in acquiring it. Preemption is still possible, so
this method allows the system to continue to function—even if a process halts while holding the
lock.
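
A busy-wait lock of this kind can be sketched with C11 atomics, whose atomic_flag_test_and_set performs an atomic test-and-set. This is a minimal illustration of the idea under those assumptions, not production-quality locking.

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;  /* would live in shared memory */

    void acquire(void) {
        /* Atomic test-and-set: only one caller at a time sees "was clear".
         * Everyone else spins until the holder clears the flag. */
        while (atomic_flag_test_and_set(&lock))
            ;  /* busy-wait; could instead yield the CPU and retry later */
    }

    void release(void) {
        atomic_flag_clear(&lock);  /* leave the critical section */
    }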

Software solutions

In addition to hardware-supported solutions, some software solutions exist that use busy waiting
to achieve mutual exclusion.

It is often preferable to use the synchronization facilities provided by an operating system's
multithreading library, which will take advantage of hardware solutions if possible but will use
software solutions if none exist. For example, when the operating system's lock library is used
and a thread tries to acquire an already acquired lock, the operating system can suspend the
thread using a context switch and swap in another thread that is ready to run, or it can put that
processor into a low-power state if there is no other runnable thread.

3.8. Deadlock: causes, conditions, prevention

A deadlock is a situation where a set of processes is blocked because each process is holding a
resource and waiting for another resource acquired by some other process. In this section, we
discuss deadlock, its necessary conditions, and related issues in detail.

What is Deadlock?

Deadlock is a situation in computing where two or more processes are unable to proceed
because each is waiting for the other to release resources. Key concepts include mutual
exclusion, resource holding, circular wait, and no preemption.

Consider an example when two trains are coming toward each other on the same track and there
is only one track, none of the trains can move once they are in front of each other. This is a
practical example of deadlock.

How Does Deadlock occur in the Operating System?

Before going into detail about how deadlock occurs in the Operating System, let’s first discuss
how the Operating System uses the resources present. A process in an operating system uses
resources in the following way.

 Requests a resource

 Uses the resource

 Releases the resource

A situation occurs in operating systems when there are two or more processes that hold some
resources and wait for resources held by other(s). For example, in the diagram below, Process 1
is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2
is waiting for Resource 1.

Figure 9: Deadlock in an Operating System

Examples of Deadlock

There are several examples of deadlock. Some of them are mentioned below.

1. The system has 2 tape drives. P0 and P1 each hold one tape drive and each needs another one.

2. Semaphores A and B, each initialized to 1; P0 and P1 deadlock as follows:

 P0 executes wait(A) and is then preempted.

 P1 executes wait(B).

 P0 now executes wait(B) while P1 executes wait(A); each waits for a semaphore held by
the other, so P0 and P1 are in deadlock.
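
This interleaving can be reproduced with POSIX semaphores, as in the sketch below; if the threads interleave as described above, the program hangs in a deadlock. Thread and semaphore names are illustrative.

    #include <pthread.h>
    #include <semaphore.h>

    sem_t A, B;  /* both initialized to 1 in main */

    void *p0(void *arg) {
        sem_wait(&A);   /* P0 holds A ... */
        sem_wait(&B);   /* ... and blocks here if P1 already holds B */
        sem_post(&B);
        sem_post(&A);
        return arg;
    }

    void *p1(void *arg) {
        sem_wait(&B);   /* P1 holds B ... */
        sem_wait(&A);   /* ... and blocks here if P0 already holds A */
        sem_post(&A);
        sem_post(&B);
        return arg;
    }

    int main(void) {
        pthread_t t0, t1;
        sem_init(&A, 0, 1);
        sem_init(&B, 0, 1);
        pthread_create(&t0, NULL, p0, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t0, NULL);  /* may never return: circular wait */
        pthread_join(t1, NULL);
        return 0;
    }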

3. Assume 200K bytes of space are available for allocation, and two processes each make a pair
of requests (in the standard version of this example, P1 requests 80K bytes and then 60K bytes,
while P2 requests 70K bytes and then 80K bytes).

Deadlock occurs if both processes progress to their second request: with 150K bytes already
allocated, the remaining 50K bytes can satisfy neither second request.

Necessary Conditions for Deadlock in OS

Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)

 Mutual Exclusion: Two or more resources are non-shareable (only one process can use a
resource at a time).

 Hold and Wait: A process is holding at least one resource while waiting for additional
resources.

 No Preemption: A resource cannot be taken from a process unless the process releases
the resource.

 Circular Wait: A set of processes wait for each other in circular form.

What is Deadlock Detection?

Deadlock detection is a process in computing where the system checks if there are any sets of
processes that are stuck waiting for each other indefinitely, preventing them from moving
forward. In simple words, deadlock detection is the process of finding out whether any
processes are stuck in a waiting cycle. There are several algorithms, such as:

 Resource Allocation Graph

 Banker's Algorithm

What are the Methods For Handling Deadlock?

There are three ways to handle deadlock

 Deadlock Prevention or Avoidance

 Deadlock Recovery

 Deadlock Ignorance

Deadlock Prevention or Avoidance

Deadlock prevention and avoidance is one of the methods for handling deadlock. First, we
will discuss deadlock prevention, then deadlock avoidance.

What is Deadlock Prevention?

In deadlock prevention, the aim is to keep at least one of the required conditions for deadlock
from being fulfilled. This can be done by the following methods:

(i) Mutual Exclusion

We use a lock only for non-shareable resources; if a resource is shareable (like a read-only
file), we do not use locks. This ensures that, in the case of a shareable resource, multiple
processes can access it at the same time. Problem: we can only do this for shareable resources;
for non-shareable resources like a printer, we must still use mutual exclusion.

(ii) Hold and Wait

To ensure that hold and wait never occurs in the system, we must guarantee that whenever a
process requests a resource, it does not hold any other resources.

 We can provide all the resources a process requires for its execution before it starts
executing. Problem: if a process requires three resources and we allocate all three before
execution begins, there may be a situation where initially only two are needed and the
third is not used until an hour later; holding it in the meantime starves other processes
that want that resource, which could otherwise have been allocated to them so they could
complete their execution.

 Alternatively, we can ensure that when a process requests any resource, it is not holding
any other resources at that time. Example: suppose there are three resources, a DVD, a
file, and a printer. First, the process requests the DVD and the file to copy data into the
file; suppose this takes one hour. The process then frees both resources, and only
afterwards requests the file and the printer to print that file.

(iii) No Preemption

If a process is holding some resources and requests other resources that are held elsewhere
and not immediately available, then the resources the process is currently holding are
preempted. After some time, the process requests the old resources again, together with the
other required resources, and restarts.

For example – process p1 holds resource r1 and requests r2, which is held by process p2;
p1's resource r1 is then preempted, and after some time p1 tries to restart by requesting both
r1 and r2.

Problem – this can cause the livelock problem.

Livelock: livelock is a situation where two or more processes continuously change their state
in response to each other without making any real progress.

Example:

 Suppose there are two processes p1 and p2 and two resources r1 and r2.

 Now p1 has acquired r1 and needs r2, while p2 has acquired r2 and needs r1.

 According to the method above, both p1 and p2 detect that they cannot acquire the second
resource, so each releases the resource it is holding and then tries again.

 Continuous cycle: p1 again acquires r1 and requests r2, while p2 again acquires r2 and
requests r1. There is no overall progress; the processes keep changing state, preempting
resources and then acquiring them again. This is the situation of livelock.

(iv) Circular Wait:

To remove circular wait from the system, we can impose an ordering on the resources in
which a process must acquire them.

Example: if there are processes p1 and p2 and resources r1 and r2, we can fix the acquisition
order so that a process must first acquire resource r1 and then resource r2. The process that has
acquired r1 is then allowed to acquire r2, while the other process must wait until r1 is free; a
sketch of this lock-ordering discipline follows below. Of these deadlock prevention methods,
practically only this fourth one is used, as removing the other three conditions has significant
disadvantages.
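
Here is a small sketch of that ordering discipline using POSIX mutexes: every thread takes r1 before r2, so a cycle of waits cannot form. The resource names come from the example above; the worker function is hypothetical.

    #include <pthread.h>

    pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        pthread_mutex_lock(&r1);   /* always acquire r1 first ... */
        pthread_mutex_lock(&r2);   /* ... then r2: the fixed global order */
        /* ... use both resources ... */
        pthread_mutex_unlock(&r2); /* release in reverse order */
        pthread_mutex_unlock(&r1);
        return arg;
    }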

What is Deadlock Avoidance?

Avoidance is forward-looking in nature. By using the strategy of "avoidance", we have to make an
assumption: we need to ensure that all information about the resources the process will need is
known to us before the process executes. We use Banker's algorithm (which is, in turn, a
gift from Dijkstra) to avoid deadlock.

In prevention and avoidance, we get the correctness of data but performance decreases.

What is Deadlock Recovery?

If deadlock prevention or avoidance is not applied in the software, then we can handle deadlock
by detection and recovery, which consists of two phases:

1. In the first phase, we examine the state of the processes and check whether there is a
deadlock in the system.

2. If a deadlock is found in the first phase, then we apply an algorithm for recovery from
the deadlock.

In Deadlock detection and recovery, we get the correctness of data but performance decreases.

Methods of Deadlock Recovery

There are several Deadlock Recovery Techniques:

 Manual Intervention

 Automatic Recovery

 Process Termination

 Resource Preemption

1. Manual Intervention

When a deadlock is detected, one option is to inform the operator and let them handle the
situation manually. While this approach allows for human judgment and decision-making, it can
be time-consuming and may not be feasible in large-scale systems.

2. Automatic Recovery

An alternative approach is to enable the system to recover from deadlock automatically. This
method involves breaking the deadlock cycle by either aborting processes or preempting
resources. Let’s delve into these strategies in more detail.

3. Process Termination

 Abort all Deadlocked Processes

o This approach breaks the deadlock cycle, but it comes at a significant cost. The
processes that were aborted may have executed for a considerable amount of time,
resulting in the loss of partial computations. These computations may need to be
recomputed later.

 Abort one process at a time

o Instead of aborting all deadlocked processes simultaneously, this strategy involves
selectively aborting one process at a time until the deadlock cycle is eliminated.
However, this incurs overhead as a deadlock-detection algorithm must be invoked
after each process termination to determine if any processes are still deadlocked.

o Factors for choosing the termination order:

 The process’s priority

 Completion time and the progress made so far

 Resources consumed by the process

 Resources required to complete the process

 Number of processes to be terminated

 Process type (interactive or batch)

4. Resource Preemption

 Selecting a Victim

o Resource preemption involves choosing which resources and processes should be
preempted to break the deadlock. The selection order aims to minimize the overall
cost of recovery. Factors considered for victim selection may include the number
of resources held by a deadlocked process and the amount of time the process has
consumed.

 Rollback

o If a resource is preempted from a process, the process cannot continue its normal
execution as it lacks the required resource. Rolling back the process to a safe state
and restarting it is a common approach. Determining a safe state can be
challenging, leading to the use of total rollback, where the process is aborted and
restarted from scratch.

 Starvation Prevention

o To prevent resource starvation, it is essential to ensure that the same process is not
always chosen as a victim. If victim selection is solely based on cost factors, one
process might repeatedly lose its resources and never complete its designated task.
To address this, it is advisable to limit the number of times a process can be
chosen as a victim, including the number of rollbacks in the cost factor.

What is Deadlock Ignorance?

If deadlocks are very rare, then we let them happen and reboot the system. This is the approach
that both Windows and UNIX take; it is known as the ostrich algorithm.

In Deadlock, ignorance performance is better than the above two methods but the correctness of
data is not there.

Safe State

A safe state is a state in which the system is not deadlocked and can avoid deadlock: there exists some ordering of the processes (a safe sequence) in which each process can obtain its remaining resources, run to completion, and release what it holds.

 If a process needs an unavailable resource, it may wait until that resource is released by the process to which it is currently allocated. If no such sequence exists, the state is unsafe.
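As a hedged illustration of checking for a safe sequence with a single resource type, the C sketch below repeatedly looks for a process whose remaining need fits within the currently free units; if every process can finish this way, the state is safe. The process count and the alloc/need values are assumptions for the example.

#include <stdbool.h>
#include <stdio.h>

#define N 3  /* number of processes -- assumed */

/* Returns true if some order exists in which every process can finish. */
bool is_safe(int available, const int alloc[N], const int need[N]) {
    bool finished[N] = {false};
    int work = available;          /* units currently free */
    for (int done = 0; done < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            /* Process i can run to completion if its remaining need fits. */
            if (!finished[i] && need[i] <= work) {
                work += alloc[i];  /* it finishes and releases its units */
                finished[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress) return false;   /* nobody can proceed: unsafe state */
    }
    return true;
}

int main(void) {
    int alloc[N] = {1, 2, 2};   /* units currently held (example values) */
    int need[N]  = {2, 1, 3};   /* units still required (example values) */
    printf("safe: %s\n", is_safe(2, alloc, need) ? "yes" : "no");
    return 0;
}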

Exercise1:

1. Suppose n processes P1, …, Pn share m identical resource units, which can be reserved and released one at a time. The maximum resource requirement of process Pi is Si, where Si > 0. Which one of the following is a sufficient condition for ensuring that deadlock does not occur? (GATE CS 2005)
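(The answer choices are not reproduced here, but for reference the standard answer to this classic question is the condition Σ Si < m + n. Intuition: in the worst case every process holds Si − 1 units while waiting for one more; if Σ(Si − 1) < m, which is the same inequality rearranged, at least one unit is always free, so some process can acquire its last unit, finish, and release its resources, and deadlock cannot occur.)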

3.9. Models and mechanisms (semaphores, monitors, condition variables, rendezvous)

1. Semaphores

 Definition: A semaphore is a synchronization primitive that can be used to control access to a common resource in a concurrent system.
 Types:
o Counting Semaphore: Allows an arbitrary number of resources to be available.
o Binary Semaphore (or Mutex): A special case that only allows values 0 and 1,
often used for mutual exclusion.
 Usage: Semaphores use wait (decrement) and signal (increment) operations to manage
resource access. If the semaphore is zero, a process requesting access must wait.
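As a minimal sketch of wait and signal in practice, the C snippet below uses POSIX semaphores (sem_wait decrements, sem_post increments); the shared counter being protected is a hypothetical example.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t sem;               /* initialized to 1 -> acts as a binary semaphore */
int shared_counter = 0;

void *worker(void *arg) {
    (void)arg;
    sem_wait(&sem);      /* wait: blocks while the semaphore is 0 */
    shared_counter++;    /* critical section */
    sem_post(&sem);      /* signal: increment, letting another thread in */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    sem_destroy(&sem);
    return 0;
}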

2. Monitors

 Definition: A monitor is a high-level synchronization construct that allows safe access to shared resources by encapsulating them along with the procedures that operate on them.
 Features:
o Only one process can be in the monitor at a time.
o Provides condition variables for synchronization.
 Usage: Monitors ensure mutual exclusion and can allow threads to wait for certain
conditions to be true before proceeding.
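C has no built-in monitor construct, but the idea can be emulated with a mutex plus a condition variable: the lock enforces one-thread-at-a-time entry, and the condition variable lets callers wait for a predicate. A hedged sketch of a tiny counter monitor (names are illustrative):

#include <pthread.h>

/* A tiny "monitor": shared state bundled with its lock and condition. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    int counter;
} counter_monitor;

void monitor_increment(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);      /* enter the monitor */
    m->counter++;
    pthread_cond_signal(&m->nonzero);  /* wake one waiter, if any */
    pthread_mutex_unlock(&m->lock);    /* leave the monitor */
}

void monitor_decrement(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);
    while (m->counter == 0)            /* wait releases the lock atomically */
        pthread_cond_wait(&m->nonzero, &m->lock);
    m->counter--;
    pthread_mutex_unlock(&m->lock);
}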

3. Condition Variables

 Definition: Condition variables are a synchronization mechanism that allows threads to wait for certain conditions to become true.
 Usage: Used in conjunction with monitors, they allow a thread to suspend execution and
release the monitor lock until another thread signals that the condition is true.
 Operations:
o Wait: A thread waits on a condition variable, releasing the monitor lock.
o Signal: Wakes up one waiting thread.
o Broadcast: Wakes up all waiting threads.
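The three operations map directly onto pthread_cond_wait, pthread_cond_signal, and pthread_cond_broadcast. A minimal sketch with a hypothetical ready flag, showing how Wait releases the lock and how Broadcast wakes every waiter:

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
bool ready = false;

void wait_until_ready(void) {
    pthread_mutex_lock(&lock);
    while (!ready)                        /* re-check: wakeups can be spurious */
        pthread_cond_wait(&cond, &lock);  /* releases the lock while blocked */
    pthread_mutex_unlock(&lock);
}

void announce_ready(void) {
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_broadcast(&cond);        /* wake ALL waiting threads */
    pthread_mutex_unlock(&lock);
}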

4. Rendezvous

 Definition: A rendezvous is a synchronization mechanism used in concurrent programming where two or more processes must meet at a certain point before continuing execution.
 Features: Often used in message-passing systems or in programming languages with
built-in support for concurrency (e.g., Ada).
 Usage: Typically involves blocking until all participating threads reach the rendezvous
point, ensuring synchronized progress.
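POSIX barriers provide one simple form of rendezvous: each thread blocks in pthread_barrier_wait until the agreed number of participants has arrived, after which all proceed together. A hedged sketch (the participant count is an assumption):

#include <pthread.h>
#include <stdio.h>

#define PARTICIPANTS 3   /* threads that must meet -- assumed for the sketch */

pthread_barrier_t barrier;

void *phase_worker(void *arg) {
    long id = (long)arg;
    printf("thread %ld: before rendezvous\n", id);
    pthread_barrier_wait(&barrier);   /* block until all participants arrive */
    printf("thread %ld: after rendezvous\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[PARTICIPANTS];
    pthread_barrier_init(&barrier, NULL, PARTICIPANTS);
    for (long i = 0; i < PARTICIPANTS; i++)
        pthread_create(&t[i], NULL, phase_worker, (void *)i);
    for (int i = 0; i < PARTICIPANTS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}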

3.10. Producer-consumer problems and synchronization

The producer-consumer problem is a classic synchronization issue in concurrent programming, where two types of processes, producers and consumers, share a common resource, typically a buffer. Here's a detailed overview:

Producer-Consumer Problem

Definition:

 In this problem, producers generate data and place it into a shared buffer, while
consumers retrieve and process that data. The challenge is to ensure that producers do not
add data to a full buffer and consumers do not remove data from an empty buffer.

Key Concepts

1. Shared Buffer:
o A finite storage area where produced items are held until consumed. This can be
implemented as a queue.
2. Synchronization:
o Essential to ensure that producers and consumers operate correctly without
causing race conditions or inconsistencies.

Synchronization Mechanisms

1. Semaphores:
o Counting Semaphore: Used to count the number of empty slots (for producers)
and the number of full slots (for consumers).
o Binary Semaphore: Can also be used for mutual exclusion to protect the buffer.

Example:

o Let empty be a semaphore initialized to the size of the buffer (number of empty
slots).
o Let full be a semaphore initialized to 0 (number of full slots).
o Use a mutex semaphore to protect access to the buffer.
2. Condition Variables:
o Allow threads to wait for certain conditions to become true.
o Producers wait on a condition variable when the buffer is full, and consumers wait
when it is empty.

Implementation Steps

1. Producer Process:
o Wait for an empty slot (wait(empty)).
o Acquire the mutex lock to ensure exclusive access to the buffer.
o Add an item to the buffer.
o Release the mutex lock.
o Signal that there is a new full slot (signal(full)).
2. Consumer Process:
o Wait for a full slot (wait(full)).
o Acquire the mutex lock.
o Remove an item from the buffer.
o Release the mutex lock.
o Signal that there is a new empty slot (signal(empty)).

Example Code (Pseudocode)

semaphore empty = BUFFER_SIZE;   // counts empty slots
semaphore full  = 0;             // counts filled slots
semaphore mutex = 1;             // protects access to the buffer

function producer() {
    while (true) {
        item = produce_item();
        wait(empty);             // block if there is no empty slot
        wait(mutex);             // enter the critical section
        add_to_buffer(item);
        signal(mutex);           // leave the critical section
        signal(full);            // one more filled slot
    }
}

function consumer() {
    while (true) {
        wait(full);              // block if there is nothing to consume
        wait(mutex);
        item = remove_from_buffer();
        signal(mutex);
        signal(empty);           // one more empty slot
        consume_item(item);
    }
}
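For a runnable counterpart to the pseudocode above, here is a hedged C sketch using POSIX semaphores and a fixed-size circular buffer; the buffer size, item count, and helper output are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
#define NUM_ITEMS   10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;         /* circular-buffer indices */

sem_t empty;                 /* counts empty slots, starts at BUFFER_SIZE */
sem_t full;                  /* counts filled slots, starts at 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < NUM_ITEMS; item++) {
        sem_wait(&empty);                /* wait for an empty slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                 /* announce a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < NUM_ITEMS; i++) {
        sem_wait(&full);                 /* wait for a filled slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                /* announce an empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}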
3.11. Multiprocessor issues (spin-locks, reentrancy)

Multiprocessor systems introduce unique challenges and considerations for synchronization and
resource management. Two key issues are spin-locks and reentrancy. Here’s a breakdown of
each:

Spin-Locks

Definition: A spin-lock is a type of synchronization mechanism that allows a thread to repeatedly check (or "spin") on a lock until it becomes available.

Characteristics:

 Busy-waiting: The thread actively checks the lock status in a loop rather than yielding
control or sleeping.
 Low overhead: Spin-locks can be efficient when the wait time is expected to be very
short, as they avoid the overhead of context switching.

Usage:

 Spin-locks are often used in low-level programming (like operating system kernels)
where threads need to acquire locks quickly without the overhead of more complex
mechanisms.

Drawbacks:

 CPU resource waste: If the lock is held for a long time, the spinning thread consumes
CPU cycles that could be used by other threads.
 Not suitable for blocking operations: Since spin-locks do not relinquish control, they
are inefficient in scenarios with longer wait times.
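A minimal test-and-set spin-lock can be written with C11 atomics; this is a sketch of the busy-wait idea, not a production lock (no backoff or fairness guarantees).

#include <stdatomic.h>

/* A minimal spin-lock built on an atomic test-and-set flag. */
typedef struct {
    atomic_flag locked;
} spinlock_t;

void spin_lock(spinlock_t *l) {
    /* Busy-wait: keep trying to set the flag until this thread succeeds. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;  /* spin */
}

void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

/* Usage: spinlock_t lock = { ATOMIC_FLAG_INIT };
 *        spin_lock(&lock); ... critical section ... spin_unlock(&lock); */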

Reentrancy

Definition: Reentrancy refers to the property of a function or method that allows it to be interrupted in the middle of its execution and safely called again ("re-entered") before its previous invocations have completed.

Characteristics:

 Statelessness: A reentrant function does not rely on shared or static state; it typically uses
local variables.
 Safety in concurrency: Multiple threads can call a reentrant function simultaneously
without corrupting shared data.

Usage:

 Reentrancy is crucial for functions that may be called from multiple threads or interrupt
contexts, such as in signal handlers or during asynchronous events.

Examples:

 A function that performs a calculation using only its parameters and local variables is
reentrant.
 A function that modifies shared data without proper synchronization is not reentrant.
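The contrast can be made concrete in C: the first function below keeps its result in a static buffer shared by all callers and is therefore not reentrant, while the second works only with caller-supplied storage and is. (The function names are illustrative.)

#include <stdio.h>

/* NOT reentrant: the static buffer is shared by every caller, so a second
 * (concurrent or interrupting) call overwrites the first call's result. */
char *format_id_unsafe(int id) {
    static char buf[32];
    snprintf(buf, sizeof buf, "id-%d", id);
    return buf;
}

/* Reentrant: all state lives in parameters and locals supplied by the
 * caller, so simultaneous calls cannot interfere with one another. */
char *format_id_safe(int id, char *buf, size_t len) {
    snprintf(buf, len, "id-%d", id);
    return buf;
}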

Considerations in Multiprocessor Systems

1. Contention:
o In multiprocessor systems, multiple threads or processes may contend for the
same locks or resources. Choosing appropriate synchronization mechanisms (like
spin-locks for short waits) is critical to performance.
2. Cache Coherency:

o Spin-locks can lead to issues with cache coherence, as different processors might
cache the same data. Frequent updates can cause unnecessary cache invalidation
and degrade performance.
3. Deadlocks and Starvation:
o Properly managing locks and ensuring reentrancy can help avoid deadlocks and
starvation in multiprocessor environments.

