Understanding Processes and Scheduling

A process is a program in execution that can be divided into four sections: stack, heap, text, and data. The process life cycle includes states such as start, ready, running, waiting, and terminated, while the Process Control Block (PCB) stores essential information about each process. Process scheduling is crucial for managing multiple processes, with different types of schedulers (long-term, short-term, medium-term) and various scheduling algorithms like FCFS, SJN, and Round Robin.


Process

A process is basically a program in execution. The execution of a process must progress in a sequential
fashion. To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into memory and becomes a process, it can be divided into four sections
─ stack, heap, text and data, described below.

S.N. Component & Description

1. Stack
The process stack contains temporary data such as method/function parameters, return address and local variables.

2. Heap
This is dynamically allocated memory given to a process during its run time.

3. Text
This contains the compiled program code; the current activity is represented by the value of the Program Counter and the contents of the processor's registers.

4. Data
This section contains the global and static variables.

Program
A program is a piece of code which may be a single line or millions of lines. A computer program is
usually written by a computer programmer in a programming language. For example, here is a simple
program written in C programming language −

#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}

A computer program is a collection of instructions that performs a specific task when executed by a
computer. When we compare a program with a process, we can conclude that a process is a dynamic
instance of a computer program.

A part of a computer program that performs a well-defined task is known as an algorithm. A collection
of computer programs, libraries and related data is referred to as software.

Process Life Cycle

When a process executes, it passes through different states. These stages may differ in different operating
systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

S.N. State & Description

1. Start
This is the initial state when a process is first started/created.

2. Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor
allocated to them by the operating system so that they can run. A process may come into this state
after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some
other process.

3. Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.

4. Waiting
The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or
waiting for a file to become available.

5. Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved to the
terminated state where it waits to be removed from main memory.
Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process. The
PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track
of a process as listed below in the table −

S.N. Information & Description

1. Process State
The current state of the process, i.e. whether it is ready, running, waiting, or whatever.

2. Process Privileges
This is required to allow/disallow access to system resources.

3. Process ID
Unique identification for each process in the operating system.

4. Pointer
A pointer to the parent process.

5. Program Counter
The Program Counter is a pointer to the address of the next instruction to be executed for this process.

6. CPU Registers
The various CPU registers whose contents must be saved for the process when it leaves the running state, so that execution can resume later.

7. CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.

8. Memory Management Information
This includes information about the page table, memory limits and segment table, depending on the memory management scheme used by the operating system.

9. Accounting Information
This includes the amount of CPU used for process execution, time limits, execution ID, etc.

10. I/O Status Information
This includes the list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain different
information in different operating systems. The PCB is maintained for a process throughout its lifetime,
and is deleted once the process terminates.
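As a rough illustration of the idea, a PCB can be pictured as a C structure. The field names and sizes below are hypothetical, not taken from any particular kernel:

/* A simplified, hypothetical PCB layout; real operating systems store far
   more information and organize it differently. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int             pid;                 /* process ID                      */
    proc_state_t    state;               /* current process state           */
    int             priority;            /* CPU scheduling information      */
    unsigned long   program_counter;     /* address of next instruction     */
    unsigned long   registers[16];       /* saved CPU registers             */
    struct pcb     *parent;              /* pointer to parent process       */
    unsigned long   mem_base, mem_limit; /* memory management information   */
    unsigned long   cpu_time_used;       /* accounting information          */
    int             open_files[16];      /* I/O status information          */
};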

PROCESS SCHEDULING

Process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems
allow more than one process to be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the resource can’t be taken from a process until the process completes
execution. The switching of resources occurs when the running process terminates and moves to a
waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During
resource allocation, a process may switch from the running state to the ready state or from the waiting
state to the ready state. This switching occurs because the CPU may give priority to other processes
and replace the running process with a higher-priority process.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the same execution state are
placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS
scheduler determines how to move processes between the ready and run queues; the run queue can only
have one entry per processor core on the system.

Two-State Process Model

Two-state process model refers to running and non-running states which are described below −

S.N. State & Description

1. Running
When a new process is created, it enters the system in the running state.

2. Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the
queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher
works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has
completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue
to execute.

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the job queue and loads them into memory for execution;
the process is loaded into memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.

On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems, for example, have no long-term scheduler. When a process changes state from new to ready, the
long-term scheduler is used.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance
with the chosen set of criteria. It performs the change of a process from the ready state to the running
state. The CPU scheduler selects a process among the processes that are ready to execute and allocates
the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and thus reduces the
degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out
processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make
any progress towards completion. In this condition, to remove the process from memory and make space
for other processes, the suspended process is moved to the secondary storage. This process is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.

Comparison among Scheduler

S.N. Long-Term Scheduler / Short-Term Scheduler / Medium-Term Scheduler

1. It is a job scheduler. / It is a CPU scheduler. / It is a process swapping scheduler.

2. Its speed is less than that of the short-term scheduler. / Its speed is the fastest among the three. / Its speed is in between that of the short-term and long-term schedulers.

3. It controls the degree of multiprogramming. / It provides lesser control over the degree of multiprogramming. / It reduces the degree of multiprogramming.

4. It is almost absent or minimal in time-sharing systems. / It is also minimal in time-sharing systems. / It is a part of time-sharing systems.

5. It selects processes from the pool and loads them into memory for execution. / It selects those processes which are ready to execute. / It can re-introduce a process into memory so that its execution can be continued.

Context Switching

Context switching is the mechanism of storing and restoring the state (context) of a CPU in the Process
Control Block so that process execution can be resumed from the same point at a later time. Using this
technique, a context switcher enables multiple processes to share a single CPU. Context switching is an
essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the state of the
current running process is stored into the process control block. After this, the state for the process to run
next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process
can start executing.

Context switches are computationally intensive, since register and memory state must be saved and
restored. To reduce context-switching time, some hardware systems employ two or more
sets of processor registers. When the process is switched, the following information is stored for later use.

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information

A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms which we are going to discuss
in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state, it cannot be preempted until it completes its allotted time,
whereas preemptive scheduling is based on priority, where the scheduler may preempt a low-priority
running process at any time when a high-priority process enters the ready state.

First Come First Serve (FCFS)


 Jobs are executed on first come, first serve basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Using the process set given in the table of the next section (arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6), the wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 0-0=0

P1 5-1=4

P2 8-2=6

P3 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75
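A minimal C sketch of this calculation, assuming the same process set (arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6, listed in arrival order):

#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};   /* assumed arrival times     */
    int burst[N]   = {5, 3, 8, 6};   /* assumed execution times   */
    int service = 0;                 /* time the CPU becomes free */
    double total_wait = 0;

    for (int i = 0; i < N; i++) {            /* processes are in arrival order */
        if (service < arrival[i])
            service = arrival[i];            /* CPU idles until the process arrives */
        int wait = service - arrival[i];     /* Wait = Service Time - Arrival Time  */
        printf("P%d wait = %d\n", i, wait);
        total_wait += wait;
        service += burst[i];                 /* next process starts when this ends  */
    }
    printf("Average wait = %.2f\n", total_wait / N);   /* prints 5.75 */
    return 0;
}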

Shortest Job Next (SJN)

 This is also known as shortest job first, or SJF


 This is a non-preemptive scheduling algorithm (its preemptive variant is Shortest Remaining Time).
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 The processor should know in advance how much time the process will take.

Given: Table of processes, and their Arrival time, Execution time


Process Arrival Time Execution Time Service Time

P0 0 5 0

P1 1 3 5

P2 2 8 14

P3 3 6 8

Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 5-1=4

P2 14 - 2 = 12

P3 8-3=5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
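A minimal non-preemptive SJN sketch in C for the same process set, which reproduces the waiting times above; replacing the burst-time comparison with a priority comparison yields the priority scheduler of the next section.

#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {5, 3, 8, 6};
    int done[N] = {0}, finished = 0, time = 0;
    double total_wait = 0;

    while (finished < N) {
        int best = -1;
        /* among processes that have already arrived, pick the shortest burst */
        for (int i = 0; i < N; i++)
            if (!done[i] && arrival[i] <= time &&
                (best == -1 || burst[i] < burst[best]))
                best = i;
        if (best == -1) { time++; continue; }     /* CPU idle */

        int wait = time - arrival[best];          /* service time - arrival time */
        printf("P%d wait = %d\n", best, wait);
        total_wait += wait;
        time += burst[best];                      /* run the job to completion */
        done[best] = 1;
        finished++;
    }
    printf("Average wait = %.2f\n", total_wait / N);   /* prints 5.25 */
    return 0;
}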

Priority Based Scheduling

 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource
requirement.

Given: Table of processes and their arrival time, execution time, and priority. Here we consider 1
to be the lowest priority.

Process Arrival Time Execution Time Priority Service Time

P0 0 5 1 0

P1 1 3 2 11

P2 2 8 1 14

P3 3 6 3 5

Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 11 - 1 = 10

P2 14 - 2 = 12

P3 5-3=2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6

Shortest Remaining Time


 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted by a newer
ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is not known.
 It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.


 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for a given time period, it is preempted and another process executes for
its time period.
 Context switching is used to save states of preempted processes.

Assuming the same process set (arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6) and a time quantum of 3, the wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 (0 - 0) + (12 - 3) = 9

P1 (3 - 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12

P3 (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
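A small C simulation of Round Robin, assuming the same four processes (arrival times 0, 1, 2, 3; execution times 5, 3, 8, 6) and a time quantum of 3, reproduces the waiting times above:

#include <stdio.h>

#define N 4
#define QUANTUM 3

int main(void) {
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {5, 3, 8, 6};
    int remaining[N], finish[N], in_queue[N] = {0};
    int queue[64], head = 0, tail = 0;
    int time = 0, done = 0;

    for (int i = 0; i < N; i++)
        remaining[i] = burst[i];

    while (done < N) {
        /* admit every process that has arrived by the current time */
        for (int i = 0; i < N; i++)
            if (!in_queue[i] && arrival[i] <= time) {
                queue[tail++] = i;
                in_queue[i] = 1;
            }
        if (head == tail) { time++; continue; }        /* CPU idle */

        int p = queue[head++];                         /* dispatch next process */
        int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        time += run;
        remaining[p] -= run;

        /* admit anything that arrived while p was running */
        for (int i = 0; i < N; i++)
            if (!in_queue[i] && arrival[i] <= time) {
                queue[tail++] = i;
                in_queue[i] = 1;
            }

        if (remaining[p] > 0)
            queue[tail++] = p;                         /* preempted: back of the queue */
        else {
            finish[p] = time;                          /* completed */
            done++;
        }
    }

    double total = 0;
    for (int i = 0; i < N; i++) {
        int wait = finish[i] - arrival[i] - burst[i];  /* turnaround minus burst */
        printf("P%d wait = %d\n", i, wait);
        total += wait;
    }
    printf("Average wait = %.2f\n", total / N);        /* prints 8.50 */
    return 0;
}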

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.

 Multiple queues are maintained for processes with common characteristics.


 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue.
The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based
on the algorithm assigned to the queue.

THREAD

A thread is a flow of execution through the process code, with its own program counter that keeps track of
which instruction to execute next, system registers which hold its current working variables, and a stack
which contains the execution history.

A thread shares with its peer threads information such as the code segment, data segment and open files.
When one thread alters a code-segment memory item, all other threads see the change.

A thread is also called a lightweight process. Threads provide a way to improve application performance
through parallelism. Threads represent a software approach to improving operating system performance
by reducing overhead; in this respect, a thread is equivalent to a classical process.

Each thread belongs to exactly one process and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been used successfully in implementing network
servers and web servers. They also provide a suitable foundation for parallel execution of applications on
shared-memory multiprocessors.

Thread Library
A thread library provides the programmer with an application programming interface (API) for creating and
managing threads.
Ways of implementing a thread library
There are two primary ways of implementing a thread library, which are as follows −
 The first approach is to provide a library entirely in user space with no kernel support. All code and
data structures for the library exist in user space; invoking a function in the library results in a local
function call in user space, not a system call.
 The second approach is to implement a kernel-level library supported directly by the operating
system. In this case the code and data structures for the library exist in kernel space, and invoking a
function in the API for the library typically results in a system call to the kernel.
The main thread libraries which are used are given below −
 POSIX threads − Pthreads, the threads extension of the POSIX standard, may be provided as
either a user level or a kernel level library.
 Win32 threads − The Windows thread library is a kernel-level library available on Windows
systems.
 Java threads − The Java thread API allows threads to be created and managed directly in
Java programs.
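A minimal Pthreads sketch in C (compile with -pthread) showing how the POSIX API creates and joins a thread:

#include <pthread.h>
#include <stdio.h>

/* function executed by the new thread */
void *runner(void *param) {
    printf("Hello from a POSIX thread\n");
    return NULL;
}

int main(void) {
    pthread_t tid;

    pthread_create(&tid, NULL, runner, NULL);   /* create the thread        */
    pthread_join(tid, NULL);                    /* wait for it to terminate */
    return 0;
}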

Implicit Threading
One way to address the difficulties and better support the design of multithreaded applications is to
transfer the creation and management of threading from application developers to compilers and run-time
libraries. This, termed implicit threading, is a popular trend today. Implicit threading is mainly the use of
libraries or other language support to hide the management of threads. The most common implicit
threading framework in the context of C is OpenMP.
OpenMP is a set of compiler directives as well as an API for programs written in C, C++, or FORTRAN
that provides support for parallel programming in shared-memory environments. OpenMP identifies
parallel regions as blocks of code that may run in parallel. Application developers insert compiler
directives into their code at parallel regions, and these directives instruct the OpenMP run-time library to
execute the region in parallel. The following C program illustrates a compiler directive above the parallel
region containing the printf() statement:
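(The program itself is missing from this copy; a minimal version of the usual example, compiled with gcc -fopenmp, is reconstructed below.)

#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    /* sequential code */

    #pragma omp parallel
    {
        printf("I am a parallel region.\n");
    }

    /* sequential code */
    return 0;
}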
It creates as many threads as there are processing cores in the system. Thus, for a dual-core system, two
threads are created; for a quad-core system, four are created; and so forth. All the threads then
simultaneously execute the parallel region. When each thread exits the parallel region, it is terminated.
OpenMP provides several additional directives for running code regions in parallel, including
parallelizing loops.
In addition to providing directives for parallelization, OpenMP allows developers to choose among
several levels of parallelism. For example, they can set the number of threads manually. It also allows developers
to identify whether data are shared between threads or are private to a thread. OpenMP is available on
several open-source and commercial compilers for Linux, Windows, and Mac OS X systems.

Difference between Process and Thread

S.N. Process / Thread

1. A process is heavy weight or resource intensive. / A thread is light weight, taking fewer resources than a process.

2. Process switching needs interaction with the operating system. / Thread switching does not need to interact with the operating system.

3. In multiple processing environments, each process executes the same code but has its own memory and file resources. / All threads can share the same set of open files and child processes.

4. If one process is blocked, then no other process can execute until the first process is unblocked. / While one thread is blocked and waiting, a second thread in the same task can run.

5. Multiple processes without using threads use more resources. / Multiple threaded processes use fewer resources.

6. In multiple processes, each process operates independently of the others. / One thread can read, write or change another thread's data.

Advantages of Thread

 Threads minimize the context switching time.


 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

Types of Thread

Threads are implemented in following two ways −

 User Level Threads − User managed threads.


 Kernel Level Threads − Operating System managed threads acting on kernel, an operating
system core.

User Level Threads

In this case, the thread management kernel is not aware of the existence of threads. The thread library
contains code for creating and destroying threads, for passing message and data between threads, for
scheduling thread execution and for saving and restoring thread contexts. The application starts with a
single thread.
Advantages
 Thread switching does not require Kernel mode privileges.
 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.

Disadvantages
 In a typical operating system, most system calls are blocking.
 A multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads

In this case, thread management is done by the Kernel. There is no thread management code in the
application area. Kernel threads are supported directly by the operating system. Any application can be
programmed to be multithreaded. All of the threads within an application are supported within a single
process.

The Kernel maintains context information for the process as a whole and for individual threads within
the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation,
scheduling and management in Kernel space. Kernel threads are generally slower to create and manage
than the user threads.
Advantages
 The Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
 Kernel routines themselves can be multithreaded.

Disadvantages
 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a mode switch to
the Kernel.

Multithreading Models

Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a
good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system call need not block the entire
process. There are three multithreading models −

 Many to many relationship.


 Many to one relationship.
 One to one relationship.

Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or smaller number of
kernel threads.

In the many-to-many threading model, a number of user-level threads (say six) are multiplexed onto the
same or a smaller number of kernel-level threads. In this model, developers can create as many user threads as
necessary and the corresponding kernel threads can run in parallel on a multiprocessor machine. This
model provides the best accuracy on concurrency, and when a thread performs a blocking system call, the
kernel can schedule another thread for execution.

Many to One Model


The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is
done in user space by the thread library. When a thread makes a blocking system call, the entire process
is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in
parallel on multiprocessors.

If user-level thread libraries are implemented on an operating system whose kernel does not support
kernel threads, the many-to-one model is used.

One to One Model

In the one-to-one model there is a one-to-one relationship between each user-level thread and a kernel-level
thread. This model provides more concurrency than the many-to-one model. It also allows another thread
to run when a thread makes a blocking system call, and it allows multiple threads to execute in parallel on
multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel
thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.
Difference between User-Level & Kernel-Level Thread

S.N. User-Level Threads / Kernel-Level Threads

1. User-level threads are faster to create and manage. / Kernel-level threads are slower to create and manage.

2. Implementation is by a thread library at the user level. / The operating system supports creation of kernel threads.

3. A user-level thread is generic and can run on any operating system. / A kernel-level thread is specific to the operating system.

4. Multi-threaded applications cannot take advantage of multiprocessing. / Kernel routines themselves can be multithreaded.

Process Synchronization

When two or more processes cooperate with each other, their order of execution must be preserved;
otherwise there can be conflicts in their execution and incorrect outputs can be produced. A
cooperative process is one which can affect the execution of other processes or can be affected by the
execution of other processes. Such processes need to be synchronized so that their order of execution can be
guaranteed. The procedure involved in preserving the appropriate order of execution of cooperative
processes is known as process synchronization. There are various synchronization mechanisms used
to synchronize processes.
Critical Section Problem

A critical section is the part of a program that accesses shared resources. The resource may be any
resource in a computer, such as a memory location, a data structure, the CPU or any I/O device.

The critical section must not be executed by more than one process at the same time; the operating system
faces difficulty in allowing and disallowing processes to enter the critical section.

The critical section problem is to design a set of protocols which ensure that a race condition
among the processes will never arise.

In order to synchronize the cooperative processes, our main task is to solve the critical section problem.
We need to provide a solution that satisfies the following conditions.

Requirements of Synchronization Mechanisms

Primary

1. Mutual Exclusion

Our solution must provide mutual exclusion. By Mutual Exclusion, we mean that if one process is
executing inside critical section then the other process must not enter in the critical section.

2. Progress

Progress means that if one process does not need to execute in the critical section, then it should not
stop other processes from getting into the critical section.

Secondary

1. Bounded Waiting

We should be able to predict the waiting time for every process to get into the critical section.
The process must not wait endlessly to get into the critical section.
2. Architectural Neutrality

Our mechanism must be architecturally neutral. This means that if our solution works fine on one
architecture then it should also run on the other ones as well.

Peterson Solution

Peterson’s solution provides a good algorithmic description of solving the critical-section


problem and illustrates some of the complexities involved in designing software that addresses
the requirements of mutual exclusion, progress, and bounded waiting.

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
The structure of process Pi in Peterson’s solution is shown above. This solution is restricted to two processes that
alternate execution between their critical sections and remainder sections. The processes are numbered
P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j equals 1 −
i. Peterson’s solution requires the two processes to share two data items −
int turn;
boolean flag[2];
The variable turn denotes whose turn it is to enter its critical section; i.e., if turn == i, then process Pi is
allowed to execute in its critical section. If a process is ready to enter its critical section, the flag array is
used to indicate that. For example, if flag[i] is true, this value indicates that Pi is ready to enter its critical
section. With the explanation of these data structures complete, we are now ready to describe the
algorithm shown above. To enter the critical section, process Pi first sets flag[i] to true and then sets
turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do
so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same
time. Only one of these assignments will last; the other will occur but will be overwritten
immediately. The final value of turn determines which of the two processes is allowed to enter its critical
section first. We now prove that this solution is correct. We need to show that −

 Mutual exclusion is preserved.


 The progress requirement is satisfied.
 The bounded-waiting requirement is met.
To prove 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also
note that, if both processes can be executing in their critical sections at the same time, then flag[0] ==
flag[1] == true. These two observations indicate that P0 and P1 could not have successfully executed
their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be
both. Hence, one of the processes — say, Pj — must have successfully executed the while statement,
whereas Pi had to execute at least one additional statement (“turn == j”). However, at that time, flag[j]
== true and turn == j, and this condition will persist as long as Pj is in its critical section; as a result,
mutual exclusion is preserved.
To prove properties 2 and 3, we note that process Pi can be prevented from entering the critical section
only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the
only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its
critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either turn
== i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the
critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its
critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the
value of the variable turn while executing the while statement, Pi will enter the critical section (progress)
after at most one entry by Pj (bounded waiting).
Disadvantage
 Peterson’s solution works only for two processes, but it is one of the best user-mode schemes for the
critical section problem.
 This solution is also a busy-waiting solution, so CPU time is wasted; this is the “spin lock”
problem, and it can arise in any busy-waiting solution.
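For illustration only, Peterson's algorithm can be exercised with two POSIX threads incrementing a shared counter, as sketched below. Note that on modern processors, compiler and hardware reordering mean that plain (even volatile) variables do not guarantee the ordering the algorithm relies on, so a production implementation would need memory barriers or atomic operations.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 100000

volatile int flag[2] = {0, 0};
volatile int turn = 0;
int counter = 0;                       /* shared resource */

void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < ITERATIONS; k++) {
        flag[i] = 1;                   /* entry section    */
        turn = j;
        while (flag[j] && turn == j)
            ;                          /* busy wait        */
        counter++;                     /* critical section */
        flag[i] = 0;                   /* exit section     */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;

    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected %d)\n", counter, 2 * ITERATIONS);
    return 0;
}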

SYNCHRONIZATION HARDWARE

Hardware locks are used to solve the problem of process synchronization. The process synchronization
problem occurs when more than one process tries to access the same resource or variable. If more than
one process tries to update a variable at the same time, a data inconsistency problem can occur. This
approach to process synchronization is also called synchronization hardware in the operating system.

Hardware Synchronization Algorithms

The process synchronization problem can be solved by software as well as hardware solutions. Peterson's
solution is one of the software solutions to the process synchronization problem; Peterson's algorithm
allows two processes to share a single-use resource without conflict. Here, we discuss the hardware
solutions to the problem, which are as follows:

1. Test and Set

2. Swap

3. Unlock and Lock

Test and Set

The test-and-set algorithm uses a boolean variable 'lock', which is initialized to false. This lock
variable determines whether a process may enter the critical section of the code.
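The textbook formulation can be sketched in C-like pseudocode; the hardware is assumed to execute test_and_set atomically, as one uninterruptible instruction:

boolean test_and_set(boolean *target) {
    boolean rv = *target;      /* read the old value of the lock        */
    *target = true;            /* unconditionally set the lock          */
    return rv;                 /* the whole function is atomic          */
}

/* mutual exclusion using test_and_set; lock is shared, initially false */
do {
    while (test_and_set(&lock))
        ;                      /* busy wait until lock was false        */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);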

Swap
The swap function uses two boolean variables, lock and key, both initialized to false. The swap
algorithm is similar to the test-and-set algorithm: it uses a temporary variable to set the lock to true
when a process enters the critical section of the program.
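In the same C-like pseudocode (swap is assumed to execute atomically; lock is shared and initially false, key is local to each process):

void swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;                 /* the whole function is atomic          */
}

/* mutual exclusion using swap */
do {
    key = true;
    while (key == true)
        swap(&lock, &key);     /* busy wait until lock was false        */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);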

Unlock and lock

The unlock-and-lock algorithm uses the TestAndSet method to control the value of lock. It also uses a
variable waiting[i] for each process i, where i is a positive integer (1, 2, 3, ...) corresponding to processes
P1, P2, P3, and so on. waiting[i] indicates whether process i is waiting to enter the critical section.

All the processes are maintained in a ready queue before entering the critical section. The processes
are added to the queue with respect to their process number. The queue is a circular queue.

 Process Synchronization problem is solved by Hardware locks.


 The three Hardware lock algorithms are Test and Set, Swap, Unlock and lock Algorithm.
 Test and set algorithm uses a boolean variable lock to regulate mutual exclusion of processes.
 Swap algorithm uses two boolean variable lock and key to regulate mutual exclusion of
processes.
 Unlock and lock algorithm uses three boolean variable lock, key, and waiting[i] for each process
to regulate mutual exclusion.
 Unlock and lock algorithm maintains a ready queue for the waiting and incoming processes and
the algorithm checks the queue for the next process j when the first process i comes out of the
critical section.
 Unlock and lock algorithm ensures bounded waiting.
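A sketch of the entry and exit sections of this bounded-waiting algorithm for process i, with n processes sharing the boolean variable lock and the array waiting[n] (all initially false), and test_and_set the atomic instruction shown earlier:

/* entry section for process i */
waiting[i] = true;
key = true;
while (waiting[i] && key)
    key = test_and_set(&lock);    /* busy wait */
waiting[i] = false;

/* critical section */

/* exit section: hand the lock to the next waiting process, if any */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
    j = (j + 1) % n;
if (j == i)
    lock = false;                 /* no process is waiting    */
else
    waiting[j] = false;           /* let process j enter next */

/* remainder section */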

Mutex Lock

Mutex locks, also referred to as mutual exclusion locks, are synchronization primitives used to
prevent simultaneous access to resources that are shared by multiple threads or processes. The
word "mutex" is short for "mutual exclusion."

A mutex lock implements mutual exclusion by allowing only one thread or process at a time to hold the
lock. A thread or process must first acquire the mutex lock protecting a shared resource before it can
access that resource. If the lock is currently held by another thread or process, the requesting thread or
process is blocked and placed in a waiting state until the lock becomes available. After acquiring the lock,
the thread or process can use the shared resource; when finished, it releases the lock so that other threads
or processes can take possession of it.
Components of Mutex Locks
Below we discuss some of the main components of Mutex Locks.
Mutex Variable − A mutex variable is used to represent the lock. It is a data structure that maintains the
state of the lock and allows threads or processes to acquire and release it.
Lock Acquisition − Threads or processes can attempt to acquire the lock by requesting it. If the lock is
available, the requesting thread or process gains ownership of the lock. Otherwise, it enters a waiting
state until the lock becomes available.
Lock Release − Once a thread or process has finished using the shared resource, it releases the lock,
allowing other threads or processes to acquire it.
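These components map directly onto the POSIX threads mutex API; a minimal C sketch (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* mutex variable  */
long counter = 0;                                   /* shared resource */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* lock acquisition */
        counter++;                      /* critical section */
        pthread_mutex_unlock(&lock);    /* lock release     */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the lock */
    return 0;
}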
Types of Mutex Locks
Mutex locks come in a variety of forms that offer different capabilities and behavior. Below are a
few kinds of mutex locks that are frequently used −
Recursive Mutex
A recursive mutex allows the same thread or process to acquire the lock multiple times without blocking
itself. It keeps track of how many times it has been acquired and requires the same number of releases
before it is fully unlocked.
Example
Consider a file system in which the directory structure is represented as a tree. Each node in the tree
stands for a directory, and several threads simultaneously traverse the tree to carry out different
operations. A recursive mutex can be used to prevent conflicts: a thread visiting a directory node acquires
the lock, performs its operations, and then re-acquires the same lock recursively as it descends into
subdirectories. The recursive mutex permits multiple acquisitions of the same lock without blocking the
thread, preserving correct traversal and synchronization.
Error-Checking Mutex
An error-checking mutex performs additional error checking during lock operations. By preventing
recursive lock acquisition, it guarantees that a thread or process does not re-acquire a mutex lock it
already holds.
Example
Consider a multi-threaded program in which multiple threads update a shared counter variable.
The counter is protected by a mutex lock to avoid conflicts and race conditions. An error is reported if a
thread unintentionally attempts to acquire the mutex lock it currently holds. The error-checking mutex
lets programmers find and quickly fix such coding errors.
Timed Mutex
A timed mutex lets a thread or process try to acquire a lock for a predetermined amount of time. If the
lock does not become available within the allotted time, the acquisition fails and the thread/process can
respond appropriately.
Example
Consider a real-time system with a number of tasks or threads that need access to a limited set of fixed
resources. Each task requires a particular resource to complete its work, but a task may need to take an
alternative course of action or report an error if it cannot get the needed resource within an appropriate
period of time. Using a timed mutex, each task can attempt to obtain the resource for a set amount of
time. If the resource is not available within the allotted time, the task can fall back to an alternative
method or take whatever action the timeout condition requires.
Priority Inheritance Mutex
A priority inheritance mutex (also known as a priority ceiling mutex) helps reduce the priority inversion
problem. When a low-priority thread or process holds the lock and a higher-priority thread or process
needs it, the priority inheritance mutex temporarily raises the priority of the low-priority holder to the
level of the highest-priority thread or process waiting for the lock to be released.
Example
Consider a real-time system with several threads running at different priorities that access shared
resources protected by mutex locks. A priority inheritance mutex can be used to avoid priority inversion,
which happens when a lower-priority thread blocks a higher-priority thread by holding the lock. To
ensure that the highest-priority thread gets access to the resource promptly, the mutex temporarily raises
the lower-priority thread's priority to match that of the highest-priority thread waiting for the lock.
Read-Write Mutex
A read-write lock is a synchronization mechanism that permits several threads or processes to read the
same resource concurrently while enforcing exclusive access during write operations; strictly speaking, it
is not a plain mutex lock.
Example
In a real-time online video application, a single writer thread places new frames into a shared buffer
while several reader threads read frames from it. A read-write lock can be used to permit concurrent
reading but exclusive writing: any number of reader threads can hold the read lock at the same time,
whereas the writer thread takes the write lock only when it needs to modify the buffer, ensuring that no
other threads can read or write during the update.
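Several of these variants map onto real POSIX threads facilities. A brief C sketch: recursive and error-checking behaviour are selected through a mutex attribute object, timed acquisition uses pthread_mutex_timedlock, and read-write locking uses pthread_rwlock_t.

#include <pthread.h>
#include <time.h>

pthread_mutex_t  recursive_lock, checked_lock;
pthread_rwlock_t rw_lock = PTHREAD_RWLOCK_INITIALIZER;   /* read-write lock */

void init_locks(void) {
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&recursive_lock, &attr);   /* owner may re-lock it          */

    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&checked_lock, &attr);     /* owner re-locking gets EDEADLK */

    pthread_mutexattr_destroy(&attr);
}

int try_for_two_seconds(pthread_mutex_t *m) {
    struct timespec deadline;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 2;                         /* timed mutex: give up after 2 s */
    return pthread_mutex_timedlock(m, &deadline);
}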
Use Cases of Mutex Locks
Shared Resource Protection − Mutex locks are commonly used to protect shared resources in multi-
threaded or multi-process environments. They ensure that only one thread or process can access the
shared resource at a time, preventing data corruption and race conditions.
Critical Sections − Mutex locks are used to define critical sections in a program, where only one thread
at a time can execute the code inside the critical section. This ensures the integrity of shared data and
prevents concurrent access issues.
Synchronization − Mutex locks enable synchronization between threads or processes, allowing them to
coordinate their actions and access shared resources in a controlled manner. They ensure that certain
operations are performed atomically, avoiding conflicts and ensuring consistency.
Deadlock Avoidance − Mutex locks can be used to prevent deadlock situations where multiple threads
or processes are waiting indefinitely for resources held by each other. By following proper locking
protocols and avoiding circular dependencies, deadlock situations can be avoided.
Semaphores
Semaphores are integer variables that are used to solve the critical section problem by using two atomic
operations, wait and signal that are used for process synchronization.
The definitions of wait and signal are as follows −

 Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative,
the operation loops (busy waits) until S becomes positive and can be decremented.
wait(S)
{
    while (S <= 0);
    S--;
}

 Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about
these are given as follows −

 Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores are
used to coordinate the resource access, where the semaphore count is the number of available
resources. If resources are added, the semaphore count is automatically incremented, and if
resources are removed, the count is decremented.
 Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted to 0 and 1. The
wait operation only works when the semaphore is 1 and the signal operation succeeds when
semaphore is 0. It is sometimes easier to implement binary semaphores than counting semaphores.
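A counting-semaphore sketch using the POSIX semaphore API in C (compile with -pthread); initializing the semaphore to 1 instead of 3 would make it behave as a binary semaphore:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t slots;   /* counting semaphore: number of available resources */

void *worker(void *arg) {
    sem_wait(&slots);                 /* wait(): acquire one resource   */
    printf("thread %ld is using a resource\n", (long)arg);
    /* ... use the resource ... */
    sem_post(&slots);                 /* signal(): release the resource */
    return NULL;
}

int main(void) {
    pthread_t t[5];

    sem_init(&slots, 0, 3);           /* three identical resources      */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}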
Advantages of Semaphores
Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
 There is no resource wastage because of busy waiting in semaphores as processor time is not
wasted unnecessarily to check if a condition is fulfilled to allow a process to access the critical
section.
 Semaphores are implemented in the machine independent code of the microkernel. So they are
machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated so the wait and signal operations must be implemented in the correct
order to prevent deadlocks.
 Semaphores are impractical for large scale use as their use leads to loss of modularity. This happens
because the wait and signal operations prevent the creation of a structured layout for the system.
 Semaphores may lead to a priority inversion where low priority processes may access the critical
section first and high priority processes later.
Monitors

 Monitor in OS (operating system) is a synchronization construct that enables multiple processes or


threads to coordinate actions and ensures that they are not interfering with each other or producing
unexpected results. Also, it ensures that only one thread at a time executes in a critical code section.
 The monitors’ concept was introduced in the programming language Concurrent Pascal by Per
Brinch Hansen in 1972. Since then, they have been implemented in various programming
languages. Monitors are dynamic tools that help to manage concurrent access to shared resources
in the operating system. Concurrent access means allowing more than one user to access a
computer simultaneously.
 Why are Monitors Used?
 Monitors in operating systems are used to manage access to shared resources, like files or data,
among multiple processes. They ensure that only one process can use a resource simultaneously,
preventing conflicts and data corruption. Monitors simplify synchronization and protect data
integrity, making it easier for programmers to create reliable software.
 They serve as "guards" for critical code sections, ensuring that no two processes can enter them
simultaneously. Monitors are like traffic lights that control access to resources, preventing crashes
and ensuring a smooth flow of data and tasks in an operating system.
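Languages such as Java provide monitors directly (synchronized methods and wait/notify); in C there is no built-in monitor, but one can be approximated with a mutex plus a condition variable. A rough, illustrative sketch of a monitor-style counter (names are made up for the example):

#include <pthread.h>

/* A tiny monitor-like object: all access to 'value' goes through
   procedures that acquire the monitor lock first. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_zero;
    int             value;
} counter_monitor;

void monitor_increment(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);          /* enter the monitor */
    m->value++;
    pthread_cond_signal(&m->not_zero);     /* wake one waiter   */
    pthread_mutex_unlock(&m->lock);        /* leave the monitor */
}

void monitor_decrement(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);
    while (m->value == 0)                  /* condition wait    */
        pthread_cond_wait(&m->not_zero, &m->lock);
    m->value--;
    pthread_mutex_unlock(&m->lock);
}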
