Osy Unit 4 Solved Assignment PDF

Uploaded by Soham Bijwar

Unit 4: CPU Scheduling and Algorithms

CPU scheduling is the process of determining which process will own the CPU for execution
while another process is on hold. The main task of CPU scheduling is to ensure that whenever
the CPU becomes idle, the OS selects one of the processes in the ready queue for execution.
This selection is carried out by the CPU scheduler, which picks one of the processes in
memory that are ready to execute.
Scheduling Objectives :
1. Be fair: A scheduler makes sure that each process gets a fair share of the CPU time.
2. Maximize resource utilization: The scheduler should keep the system's resources busy.
3. Minimize response time: A scheduler should minimize the response time.
4. Maximize throughput: A scheduler should maximize the number of jobs processed
per unit time.
5. Minimize turnaround: A scheduler should minimize the turnaround time.
6. Minimize overhead: A scheduler should minimize overhead that results in wasted
resources.
7. Efficiency: A scheduler should keep the system busy and get more jobs done per
second.

CPU scheduling Terminologies

 Burst Time/Execution Time: the time required by a process to complete its
execution. It is also called running time.
 Arrival Time: the time at which a process enters the ready state.
 Finish Time: the time at which a process completes and exits the system.
 Multiprogramming: the number of programs that can be present in memory at the
same time.
 Jobs: a type of program that runs without any user interaction.
 User: a type of program that involves user interaction.
 Process: the term used to refer to both jobs and users.
 CPU/I/O burst cycle: characterizes process execution, which alternates between CPU
activity and I/O activity. CPU bursts are usually shorter than I/O times.

CPU/I/O burst cycles:

 Almost all programs have some alternating cycle of CPU number crunching and
waiting for I/O of some kind. ( Even a simple fetch from memory takes a long time
relative to CPU speeds. )
 In a simple system running a single process, the time spent waiting for I/O is wasted,
and those CPU cycles are lost forever.
 A scheduling system allows one process to use the CPU while another is waiting for
I/O, thereby making full use of otherwise lost CPU cycles.
 The challenge is to make the overall system as "efficient" and "fair" as possible,
subject to varying and often dynamic conditions, and where "efficient" and "fair" are
somewhat subjective terms, often subject to shifting priority policies.

 Almost all processes alternate between two states in a continuing cycle, as
shown in the figure below:
o A CPU burst of performing calculations, and
o An I/O burst, waiting for data transfer in or out of the system.
Types of CPU Scheduling

Here are two kinds of Scheduling methods:

1. Preemptive Scheduling

In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to
run a task with a higher priority before another task, even though that other task is currently
running. The running task is therefore interrupted for some time and resumed later, once the
higher-priority task has finished its execution.

2. Non-Preemptive Scheduling

Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the waiting
state.

This scheduling method was used by Microsoft Windows 3.1 and by the Apple Macintosh
operating systems.

It is the only method that can be used on certain hardware platforms, because it does not
require the special hardware (for example, a timer) needed for preemptive scheduling.
CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state(for I/O request
or invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state(for example,
completion of I/O).
4. When a process terminates.

In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one
exists in the ready queue) must be selected for execution. There is a choice, however, in
circumstances 2 and 3.

When Scheduling takes place only under circumstances 1 and 4, we say the scheduling
scheme is non-preemptive; otherwise the scheduling scheme is preemptive.

CPU Scheduling Criteria :


A CPU scheduling algorithm tries to maximize and minimize the following:

Maximize:

CPU utilization: The operating system needs to make sure that the CPU remains as busy as
possible. Utilization can range from 0 to 100 percent. For a real-time system, it may range
from around 40 percent under light load to around 90 percent under heavy load.

Throughput: The number of processes that finish their execution per unit time is known as
throughput. While the CPU is busy executing processes, work is being done; the work
completed per unit time is the throughput.

Minimize:

Waiting time: the total amount of time a specific process spends waiting in the ready
queue.
Response time: the amount of time from the submission of a request until the first
response is produced.

Turnaround Time: the amount of time taken to execute a specific process. It is the total of
the time spent waiting to get into memory, waiting in the ready queue, and executing on the
CPU. The period from process submission to completion is the turnaround time.
Dispatcher :

It is the module that gives control of the CPU to the process selected by the short-term
scheduler. The dispatcher should be fast, since it is invoked on every context switch.
Dispatch latency is the amount of time the dispatcher needs to stop one process and start
another.

Functions performed by Dispatcher:

 Context Switching
 Switching to user mode
 Moving to the correct location in the newly loaded program.
Types of CPU scheduling Algorithm
There are mainly six types of process scheduling algorithms

1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling

First Come First Serve


FCFS stands for First Come First Serve. It is the easiest and simplest CPU scheduling
algorithm: the process that requests the CPU first gets the CPU first. This scheduling
method can be managed with a FIFO queue.

As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail
of the queue. When the CPU becomes free, it is assigned to the process at the head of the
queue.
Characteristics of FCFS method:

 It is a non-preemptive scheduling algorithm.
 Jobs are always executed on a first-come, first-served basis.
 It is easy to implement and use.
 However, its performance is poor, and the average wait time is quite high.

Example:

Process  CPU burst time
P1       8
P2       4
P3       9
P4       5

Gantt chart:

P1  P2  P3  P4
0   8   12  21  26

Waiting time of Process P1 = 0

Waiting time of Process P2=8

Waiting time of Process P3=12

Waiting time of Process P4=21

Average waiting time= (0+8+12+21)/4 =10.25

Turnaround time of process P1=8

Turnaround time of process P2=12

Turnaround time of process P3=21

Turnaround time of process P4=26

Average Turnaround time =(8+12+21+26)/4= 16.75


Process  Arrival Time  CPU burst time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

Gantt chart:

P1  P2  P3  P4
0   8   12  21  26

Waiting time = starting time – arrival time

Waiting time of Process P1 = 0-0=0

Waiting time of Process P2=8-1=7

Waiting time of Process P3=12-2=10

Waiting time of Process P4=21-3=18

Average waiting time= (0+7+10+18)/4 =8.75

Turnaround time= ending time – arrival time

Turnaround time of process P1=8-0=8

Turnaround time of process P2=12-1=11

Turnaround time of process P3=21-2=19

Turnaround time of process P4=26-3=23

Average Turnaround time =(8+11+19+23)/4= 15.25
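The FCFS arithmetic above can be reproduced with a short sketch (the process names and times are taken from the example; the helper names are my own):

```python
# FCFS: processes are served strictly in arrival order.
# (name, arrival time, CPU burst) -- values from the example above.
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]

time = 0
waiting, turnaround = {}, {}
for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
    start = max(time, arrival)         # CPU may idle until the process arrives
    waiting[name] = start - arrival    # waiting time = starting time - arrival time
    time = start + burst               # runs to completion (non-preemptive)
    turnaround[name] = time - arrival  # turnaround = completion time - arrival time

print(waiting)                        # {'P1': 0, 'P2': 7, 'P3': 10, 'P4': 18}
print(sum(waiting.values()) / 4)      # 8.75
print(sum(turnaround.values()) / 4)   # 15.25
```

The printed averages match the hand computation above.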


Shortest Job First
SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest
execution time is selected for execution next. This scheduling method can be preemptive or
non-preemptive. It significantly reduces the average waiting time for processes awaiting
execution. Processes are executed in ascending order of their burst time.

1. Non-preemptive SJF: once the CPU starts executing a job, the job is not stopped until
its completion.

2. Preemptive SJF: if the CPU is executing one job and a new job with a smaller burst
time arrives, the current job is preempted and the new job is executed first.

Characteristics of SJF Scheduling

 Each job has an associated unit of time to complete.
 When the CPU is available, the process or job with the shortest completion time is
executed first.
 It can be implemented with a non-preemptive policy.
 This method is useful for batch-type processing, where waiting for jobs to
complete is not critical.
 It improves throughput by running shorter jobs first, which mostly have a shorter
turnaround time.

Example :

Process  Arrival Time  CPU burst time
P1       0             4
P2       1             1
P3       2             2
P4       3             1

By non-preemptive SJF:

P1  P2  P4  P3
0   4   5   6   8

Waiting time of Process P1 = 0-0=0

Waiting time of Process P2=4-1=3

Waiting time of Process P3=6-2=4

Waiting time of Process P4=5-3=2

Average waiting time= (0+3+4+2)/4 =2.25

Turnaround time= ending time – arrival time

Turnaround time of process P1=4-0=4

Turnaround time of process P2=5-1=4

Turnaround time of process P3=8-2=6

Turnaround time of process P4=6-3=3

Average Turnaround time =(4+4+6+3)/4= 4.25
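A minimal sketch of non-preemptive SJF using the example's values (the helper names are my own):

```python
# Non-preemptive SJF: when the CPU is free, pick the arrived process with
# the shortest burst; ties are broken by earlier arrival.
procs = {"P1": (0, 4), "P2": (1, 1), "P3": (2, 2), "P4": (3, 1)}  # name: (arrival, burst)

time, done = 0, set()
waiting, turnaround = {}, {}
while len(done) < len(procs):
    ready = [(b, a, n) for n, (a, b) in procs.items() if a <= time and n not in done]
    if not ready:                      # CPU idle until the next arrival
        time = min(a for n, (a, b) in procs.items() if n not in done)
        continue
    burst, arrival, name = min(ready)  # shortest burst first, then earliest arrival
    waiting[name] = time - arrival
    time += burst                      # runs to completion once started
    turnaround[name] = time - arrival
    done.add(name)

print(sum(waiting.values()) / 4)      # 2.25
print(sum(turnaround.values()) / 4)   # 4.25
```

The averages agree with the hand-computed 2.25 and 4.25 above.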


By preemptive SJF:

P1 P2 P3 P3 P4 P1
0 1 2 3 4 5 8

Waiting time of Process P1 = 0-0+(5-1)=4

Waiting time of Process P2=1-1=0

Waiting time of Process P3=2-2=0

Waiting time of Process P4=4-3=1

Average waiting time= (4+0+0+1)/4 =1.25

Turnaround time= ending time – arrival time

Turnaround time of process P1=8-0=8

Turnaround time of process P2=2-1=1

Turnaround time of process P3=4-2=2

Turnaround time of process P4=5-3=2

Average Turnaround time =(8+1+2+2)/4= 3.25

Shortest Remaining Time - SRT (SRTN)
The full form of SRT is Shortest Remaining Time. It is also known as preemptive SJF
scheduling. In this method, the CPU is allocated to the task that has the smallest
amount of time remaining until completion.

In this algorithm, the process with the smallest estimated run time to completion is run next.

In SJF, once a process begins its execution, it runs until completion. In SRT, a running
process may be preempted by a newly arriving process with a smaller estimated run time.
This way a short new process in the ready state does not have to wait behind an older,
longer process.
Characteristics of SRT scheduling method:

 This method is mostly applied in batch environments where short jobs need to be
given preference.
 It is not an ideal method for a shared system where the required CPU time is
unknown.
 Each process is associated with the length of its next CPU burst, and the operating
system uses these lengths to schedule the process with the shortest possible time.

Process  Arrival Time  CPU burst time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

Gantt chart:

P1  P2  P4  P1  P3
0   1   5   10  17  26
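The SRT Gantt chart above can be reproduced by simulating one time unit at a time (a sketch; the variable names are my own):

```python
# SRT / preemptive SJF: at every time unit, run the arrived process with
# the least remaining burst time.
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # name: (arrival, burst)
remaining = {n: b for n, (a, b) in procs.items()}

schedule, time = [], 0
while any(remaining.values()):
    ready = [n for n, (a, b) in procs.items() if a <= time and remaining[n] > 0]
    current = min(ready, key=lambda n: remaining[n])  # least remaining time
    if not schedule or schedule[-1][0] != current:
        schedule.append([current, time, time + 1])    # open a new Gantt segment
    else:
        schedule[-1][2] = time + 1                    # extend the current segment
    remaining[current] -= 1
    time += 1

print(schedule)
# [['P1', 0, 1], ['P2', 1, 5], ['P4', 5, 10], ['P1', 10, 17], ['P3', 17, 26]]
```

The printed segments match the chart: P1 runs for one unit, is preempted by P2, and so on.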

Priority Based Scheduling


Priority scheduling is a method of scheduling processes based on priority. In this method, the
scheduler selects the tasks to work as per the priority.

Priority scheduling also helps OS to involve priority assignments. The processes with higher
priority should be carried out first, whereas jobs with equal priorities are carried out on a
round-robin or FCFS basis. Priority can be decided based on memory requirements, time
requirements, etc.

Priorities are categorised into:

Internal priorities: based on measurable quantities such as CPU burst time, memory
requirements, etc.
External priorities: set by humans, e.g. seniority or the influence of a person.

When a process enters the ready queue, its priority is compared with the priority of the
running process.

It can be pre-emptive or non-pre-emptive.

Example :

Process  Priority  CPU burst time
P1       3         10
P2       1         1
P3       3         2
P4       4         1
P5       2         5

Gantt chart:

P2  P5  P1  P3  P4
0   1   6   16  18  19
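Since all processes in this example are assumed available at time 0, non-preemptive priority scheduling reduces to a stable sort by priority (lower number = higher priority here). A sketch using the example's values (helper names are my own):

```python
# Non-preemptive priority scheduling, all processes available at time 0.
# (name, priority, CPU burst) -- values from the example above.
procs = [("P1", 3, 10), ("P2", 1, 1), ("P3", 3, 2), ("P4", 4, 1), ("P5", 2, 5)]

order = sorted(procs, key=lambda p: p[1])  # by priority; Python's sort is stable,
                                           # so equal priorities keep FCFS order
gantt, time = [], 0
for name, priority, burst in order:
    gantt.append((name, time, time + burst))
    time += burst

print(gantt)
# [('P2', 0, 1), ('P5', 1, 6), ('P1', 6, 16), ('P3', 16, 18), ('P4', 18, 19)]
```

Note how the stable sort keeps P1 ahead of P3 (both priority 3), matching the round-robin/FCFS tie-breaking rule described above.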

Round-Robin Scheduling
Round robin is a scheduling algorithm designed especially for time-sharing systems. The
name of this algorithm comes from the round-robin principle, where each person gets an
equal share of something in turn. It is widely used for scheduling in multitasking systems,
and it guarantees starvation-free execution of processes.

It is a pre-emptive version of FCFS algorithm.

Processes are dispatched in First in First Out manner but are allowed to run for only a limited
amount of time.

To allow preemption, a small amount of time called the time quantum or time slice is
defined. If a process does not complete its execution before its time slice expires, it is
preempted.
Example :

Consider a time slice of 3.

Process  Arrival time  Service time
P1       0             5
P2       1             3
P3       2             8
P4       3             6

Gantt chart:

P1  P2  P3  P4  P1  P3  P4  P3
0   3   6   9   12  14  17  20  22
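A sketch of round-robin dispatch with a time slice of 3, assuming that processes arriving during a slice join the queue before the preempted process is re-queued (a common convention; the helper names are my own):

```python
from collections import deque

# Round robin with time quantum 3, values from the example above.
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)]  # (name, arrival, service)
quantum = 3

remaining = {n: b for n, a, b in procs}
arrivals = deque(sorted(procs, key=lambda p: p[1]))  # pending arrivals, by time
queue, gantt, time = deque(), [], 0

while arrivals or queue:
    while arrivals and arrivals[0][1] <= time:       # admit processes that arrived
        queue.append(arrivals.popleft()[0])
    if not queue:                                    # CPU idle until next arrival
        time = arrivals[0][1]
        continue
    name = queue.popleft()
    run = min(quantum, remaining[name])
    gantt.append((name, time, time + run))
    time += run
    remaining[name] -= run
    while arrivals and arrivals[0][1] <= time:       # arrivals during this slice
        queue.append(arrivals.popleft()[0])
    if remaining[name] > 0:
        queue.append(name)                           # preempted: back of the queue

print([g[0] for g in gantt])
# ['P1', 'P2', 'P3', 'P4', 'P1', 'P3', 'P4', 'P3']
```

The final segment ends at time 22, matching the Gantt chart above.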

Multilevel Queue Scheduling Algorithm :

It may happen that the processes in the ready queue can be divided into different classes,
where each class has its own scheduling needs. For example, a common division is between
foreground (interactive) processes and background (batch) processes. These two classes have
different scheduling needs. Multilevel queue scheduling is used for this kind of situation.

The ready queue is divided into separate queues, one for each class of processes. For
example, take three different types of processes: system processes, interactive processes
and batch processes. Each of the three has its own queue, and each queue has its own
scheduling algorithm. For example, queues 1 and 2 may use round robin while queue 3 uses
FCFS to schedule its processes.

Scheduling among the queues is also necessary. There are two ways to do so –

1. Fixed priority pre-emptive scheduling method – Each queue has absolute priority
over the lower-priority queues. Consider the priority order queue 1 > queue 2 >
queue 3. Under this scheme, no process in the batch queue (queue 3) can run
unless queues 1 and 2 are empty. If a batch process (queue 3) is running and a
system (queue 1) or interactive (queue 2) process enters the ready queue, the batch
process is preempted.
2. Time slicing – Each queue gets a certain portion of the CPU time and can use it to
schedule its own processes. For instance, queue 1 gets 50 percent of the CPU time,
queue 2 gets 30 percent and queue 3 gets 20 percent.
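The fixed-priority method can be sketched as a scan of the queues in priority order (the queue contents and job names below are hypothetical):

```python
# Fixed-priority multilevel queue (a sketch): queue 1 (system) has absolute
# priority over queue 2 (interactive), which has priority over queue 3 (batch).
queues = {1: ["sysA"], 2: ["editor"], 3: ["reportJob", "backup"]}  # hypothetical jobs

def pick_next(queues):
    """Return the next process to run: scan the queues in priority order."""
    for level in sorted(queues):
        if queues[level]:
            return queues[level].pop(0)
    return None

print(pick_next(queues))  # 'sysA'   -- queue 1 is served first
print(pick_next(queues))  # 'editor' -- queue 1 is now empty
queues[1].append("sysB")  # a new system process arrives
print(pick_next(queues))  # 'sysB'   -- batch jobs must keep waiting
```

A real scheduler would also preempt a running batch job when a higher-priority process arrives; this sketch only shows the selection order.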

Deadlock :

Deadlock is a situation where a set of processes is blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Consider two trains approaching each other on the same single track: once they are in
front of each other, neither can move. A similar situation occurs in operating systems
when two or more processes hold some resources and wait for resources held by the
other(s). For example, Process 1 is holding Resource 1 and waiting for Resource 2, which
is acquired by Process 2, while Process 2 is waiting for Resource 1.
System Models :

A system consists of a finite number of resources (memory, files, I/O devices, CPU) that
are shared among a number of processes. A process may request as many resources as it
requires to carry out its designated task.

A process in an operating system uses a resource in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource

The operating system checks to make sure that the process has requested, and has been
allocated, the resource. A system table records whether each resource is allocated or free.

Necessary conditions for Deadlocks


1. Mutual Exclusion

A resource can only be held in a mutually exclusive manner; that is, two processes
cannot use the same resource at the same time.

2. Hold and Wait

A process waits for some resources while holding another resource at the same time.

3. No preemption

A resource cannot be forcibly taken away from the process holding it; it is released
only voluntarily, after that process has finished using it.
4. Circular Wait

All the processes must be waiting for the resources in a cyclic manner so that the last
process is waiting for the resource which is being held by the first process.

Difference between Starvation and Deadlock

Sr.  Deadlock                                        Starvation
1    Deadlock is a situation where processes are     Starvation is a situation where low-priority
     blocked and no process proceeds.                processes stay blocked while high-priority
                                                     processes proceed.
2    Deadlock is an infinite waiting.                Starvation is a long waiting, but not
                                                     infinite.
3    Every deadlock is also a starvation.            Every starvation need not be a deadlock.
4    The requested resource is blocked by the        The requested resource is continuously used
     other process.                                  by the higher-priority processes.
5    Deadlock happens when mutual exclusion, hold    Starvation occurs due to uncontrolled
     and wait, no preemption and circular wait       priority and resource management.
     occur simultaneously.

Methods for handling deadlock


There are three ways to handle deadlock:
1) Deadlock prevention or avoidance: The idea is to never let the system enter a deadlock
state. Prevention is done by negating one of the four necessary conditions for deadlock
mentioned above.
Avoidance is forward-looking in nature. To use an avoidance strategy, we must make an
assumption: all information about the resources a process WILL need must be known to us
prior to the execution of the process. We use the Banker's algorithm (which is in turn a
gift from Dijkstra) in order to avoid deadlock.

2) Deadlock detection and recovery: Let deadlock occur, then use preemption to handle it
once it has occurred.
3) Ignore the problem altogether: If deadlock is very rare, let it happen and reboot the
system. This is the approach that both Windows and UNIX take.

Deadlock Prevention
 Deadlocks can be prevented by preventing at least one of the four required conditions:

Mutual Exclusion

 Shared resources such as read-only files do not lead to deadlocks.
 Unfortunately some resources, such as printers and tape drives, require exclusive
access by a single process.

Hold and Wait

 To prevent this condition processes must be prevented from holding one or more
resources while simultaneously waiting for one or more others. There are several
possibilities for this:
o Require that all processes request all resources at one time. This can be
wasteful of system resources if a process needs one resource early in its
execution and doesn't need some other resource until much later.
o Require that processes holding resources must release them before requesting
new resources, and then re-acquire the released resources along with the new
ones in a single new request. This can be a problem if a process has partially
completed an operation using a resource and then fails to get it re-allocated
after releasing it.
o Either of the methods described above can lead to starvation if a process
requires one or more popular resources.
No Preemption

 Preemption of process resource allocations can prevent this condition of deadlocks,
when it is possible.
o One approach is that if a process is forced to wait when requesting a new
resource, then all other resources previously held by this process are implicitly
released, ( preempted ), forcing this process to re-acquire the old resources
along with the new resources in a single request, similar to the previous
discussion.
o Another approach is that when a resource is requested and not available, then
the system looks to see what other processes currently have those resources
and are themselves blocked waiting for some other resource. If such a process
is found, then some of their resources may get preempted and added to the list
of resources for which the process is waiting.
o Either of these approaches may be applicable for resources whose states are
easily saved and restored, such as registers and memory, but are generally not
applicable to other devices such as printers and tape drives.

Circular Wait

 One way to avoid circular wait is to number all resources, and to require that
processes request resources only in strictly increasing ( or decreasing ) order.
 In other words, in order to request resource Rj, a process must first release all Ri such
that i >= j.
 One big challenge in this scheme is determining the relative ordering of the different
resources
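The ordering rule can be illustrated with locks (a sketch, not tied to any particular OS API; helper names are my own): if every thread acquires locks in ascending index order, no cycle of waits can form.

```python
import threading

# Breaking circular wait by total ordering: number the resources and
# require every thread to acquire them in increasing index order.
locks = [threading.Lock() for _ in range(3)]   # resources R0 < R1 < R2

def acquire_in_order(*indices):
    """Acquire the requested locks in ascending index order."""
    ordered = sorted(indices)
    for i in ordered:
        locks[i].acquire()
    return ordered

def release(indices):
    for i in reversed(indices):
        locks[i].release()

# Two callers may both need R2 and R0, but each takes R0 first, so neither
# can ever hold R2 while waiting on R0 -- the wait cycle is impossible.
held = acquire_in_order(2, 0)
print(held)  # [0, 2]
release(held)
```

The cost, as noted above, is that someone must decide (and maintain) the global ordering of all resources.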

Deadlock Avoidance

 The general idea behind deadlock avoidance is to prevent deadlocks from ever
happening, by preventing at least one of the aforementioned conditions.
 This requires more information about each process, AND tends to lead to low device
utilization. ( I.e. it is a conservative approach. )
 In some algorithms the scheduler only needs to know the maximum number of each
resource that a process might potentially use. In more complex algorithms the
scheduler can also take advantage of the schedule of exactly what resources may be
needed in what order.
 When a scheduler sees that starting a process or granting resource requests may lead
to future deadlocks, then that process is just not started or the request is not granted.
 A resource allocation state is defined by the number of available and allocated
resources, and the maximum requirements of all processes in the system.

Safe State

 A state is safe if the system can allocate all resources requested by all processes ( up
to their stated maximums ) without entering a deadlock state.
 More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2,
..., PN } such that all of the resource requests for Pi can be granted using the resources
currently allocated to Pi and all processes Pj where j < i. ( I.e. if all the processes prior
to Pi finish and free up their resources, then Pi will be able to finish also, using the
resources that they have freed up. )
 If a safe sequence does not exist, then the system is in an unsafe state, which MAY
lead to deadlock. ( All safe states are deadlock free, but not all unsafe states lead to
deadlocks. )

Figure - Safe, unsafe, and deadlocked state spaces.

 For example, consider a system with 12 tape drives, allocated as follows. Is this a safe
state? What is the safe sequence?
Maximum Needs Current Allocation

P0 10 5

P1 4 2

P2 9 2

 What happens to the above table if process P2 requests and is granted one more tape
drive?
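Both questions can be checked with a small safe-sequence search over the tape-drive example (the helper names are my own):

```python
# Safe-state check for the 12-tape-drive example above.
max_needs  = {"P0": 10, "P1": 4, "P2": 9}
allocation = {"P0": 5,  "P1": 2, "P2": 2}

def safe_sequence(total=12):
    """Return a safe sequence of process names, or None if the state is unsafe."""
    work = total - sum(allocation.values())   # free drives (12 - 9 = 3 here)
    finished, sequence = set(), []
    while len(finished) < len(allocation):
        for p in allocation:
            if p not in finished and max_needs[p] - allocation[p] <= work:
                work += allocation[p]         # p runs to its maximum, then frees all
                finished.add(p)
                sequence.append(p)
                break
        else:
            return None                       # no process can finish: unsafe
    return sequence

print(safe_sequence())  # ['P1', 'P0', 'P2'] -- the state is safe

allocation["P2"] = 3    # grant P2 one more tape drive...
print(safe_sequence())  # None -- now no safe sequence exists
```

So the original state is safe with safe sequence < P1, P0, P2 >, but granting P2 one more drive leaves only 2 free, after which only P1 can ever finish: the state becomes unsafe.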
 Key to the safe state approach is that when a request is made for resources, the request
is granted only if the resulting allocation state is still safe.
Resource-Allocation Graph Algorithm

 If resource categories have only single instances of their resources, then deadlock
states can be detected by cycles in the resource-allocation graphs.
 In this case, unsafe states can be recognized and avoided by augmenting the resource-
allocation graph with claim edges, noted by dashed lines, which point from a process
to a resource that it may request in the future.
 In order for this technique to work, all claim edges must be added to the graph for any
particular process before that process is allowed to request any resources.
( Alternatively, processes may only make requests for resources for which they have
already established claim edges, and claim edges cannot be added to any process that
is currently holding resources. )
 When a process makes a request, the claim edge Pi->Rj is converted to a request edge.
Similarly when a resource is released, the assignment reverts back to a claim edge.
 This approach works by denying requests that would produce cycles in the resource-
allocation graph, taking claim edges into effect.
 Consider for example what happens when process P2 requests resource R2:
Figure - Resource allocation graph for deadlock avoidance

 The resulting resource-allocation graph would have a cycle in it, and so the request
cannot be granted.

Figure - An unsafe state in a resource allocation graph

Banker's Algorithm

 For resource categories that contain more than one instance the resource-allocation
graph method does not work, and more complex ( and less efficient ) methods must be
chosen.
 The Banker's Algorithm gets its name because it is a method that bankers could use to
assure that when they lend out resources they will still be able to satisfy all their
clients. ( A banker won't loan out a little money to start building a house unless they
are assured that they will later be able to loan out the rest of the money to finish the
house. )
 When a process starts up, it must state in advance the maximum allocation of
resources it may request, up to the amount available on the system.
 When a request is made, the scheduler determines whether granting the request would
leave the system in a safe state. If not, then the process must wait until the request can
be granted safely.
 The banker's algorithm relies on several key data structures: ( where n is the number
of processes and m is the number of resource categories. )
o Available[ m ] indicates how many resources are currently available of each
type.
o Max[ n ][ m ] indicates the maximum demand of each process of each
resource.
o Allocation[ n ][ m ] indicates the number of each resource category allocated
to each process.
o Need[ n ][ m ] indicates the remaining resources needed of each type for each
process. ( Note that Need[ i ][ j ] = Max[ i ][ j ] - Allocation[ i ][ j ] for all i, j. )
 For simplification of discussions, we make the following notations / observations:
o One row of the Need vector, Need[ i ], can be treated as a vector
corresponding to the needs of process i, and similarly for Allocation and Max.
o A vector X is considered to be <= a vector Y if X[ i ] <= Y[ i ] for all i.

Safety Algorithm

 In order to apply the Banker's algorithm, we first need an algorithm for determining
whether or not a particular state is safe.
 This algorithm determines if the current state of a system is safe, according to the
following steps:
1. Let Work and Finish be vectors of length m and n respectively.
 Work is a working copy of the available resources, which will be
modified during the analysis.
 Finish is a vector of booleans indicating whether a particular process
can finish. ( or has finished so far in the analysis. )
 Initialize Work to Available, and Finish to false for all elements.
2. Find an i such that both (A) Finish[ i ] == false, and (B) Need[ i ] <= Work.
This process has not finished, but could with the given available working set.
If no such i exists, go to step 4.
3. Set Work = Work + Allocation[ i ], and set Finish[ i ] to true. This corresponds
to process i finishing up and releasing its resources back into the work pool.
Then loop back to step 2.
4. If finish[ i ] == true for all i, then the state is a safe state, because a safe
sequence has been found.
 ( JTB's Modification:
1. In step 1. instead of making Finish an array of booleans initialized to false,
make it an array of ints initialized to 0. Also initialize an int s = 0 as a step
counter.
2. In step 2, look for Finish[ i ] == 0.
3. In step 3, set Finish[ i ] to ++s. S is counting the number of finished processes.
4. For step 4, the test can be either Finish[ i ] > 0 for all i, or s >= n. The benefit
of this method is that if a safe state exists, then Finish[ ] indicates one safe
sequence ( of possibly many. ) )
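A sketch of the safety algorithm for m resource types, following the numbered steps above (the example state is the classic five-process, three-resource one; helper names are my own):

```python
# Safety algorithm for n processes and m resource types. Returns whether
# the state is safe and, if so, one safe sequence of process indices.
def is_safe(available, max_demand, allocation):
    n, m = len(allocation), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)            # Step 1: working copy of Available
    finish = [False] * n
    sequence = []
    while True:
        for i in range(n):            # Step 2: an unfinished process whose
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):    # Step 3: i finishes, releases resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                break
        else:
            break                     # Step 4: no candidate found
    return all(finish), sequence

# The classic five-process, three-resource example (types A, B, C):
available  = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_demand, allocation))
# (True, [1, 3, 0, 2, 4])
```

Because `sequence` records the order in which processes finish, this version also realizes the JTB modification's goal of reporting one safe sequence out of possibly many.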

Resource-Request Algorithm ( The Bankers Algorithm )

 Now that we have a tool for determining if a particular state is safe or not, we are now
ready to look at the Banker's algorithm itself.
 This algorithm determines if a new request is safe, and grants it only if it is safe to do
so.
 When a request is made ( that does not exceed currently available resources ), pretend
it has been granted, and then see if the resulting state is a safe one. If so, grant the
request, and if not, deny the request, as follows:
1. Let Request[ n ][ m ] indicate the number of resources of each type currently
requested by processes. If Request[ i ] > Need[ i ] for any process i, raise an
error condition.
2. If Request[ i ] > Available for any process i, then that process must wait for
resources to become available. Otherwise the process can continue to step 3.
3. Check to see if the request can be granted safely, by pretending it has been
granted and then seeing if the resulting state is safe. If so, grant the request,
and if not, then the process must wait until its request can be granted
safely.The procedure for granting a request ( or pretending to for testing
purposes ) is:
 Available = Available - Request
 Allocation = Allocation + Request
 Need = Need - Request

An Illustrative Example

 Consider the following situation (three resource types A, B, C with 10, 5 and 7 total
instances; these are the values of the classic textbook example):

          Allocation   Max      Available
          A B C        A B C    A B C
    P0    0 1 0        7 5 3    3 3 2
    P1    2 0 0        3 2 2
    P2    3 0 2        9 0 2
    P3    2 1 1        2 2 2
    P4    0 0 2        4 3 3

 And now consider what happens if process P1 requests 1 instance of A and 2
instances of C. ( Request[ 1 ] = ( 1, 0, 2 ) )

 What about requests of ( 3, 3, 0 ) by P4? or ( 0, 2, 0 ) by P0? Can these be safely
granted? Why or why not?
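These questions can be checked with a sketch of the resource-request algorithm (the state values below follow the classic textbook example; the helper names are my own):

```python
# Resource-request (Banker's) algorithm: pretend to grant the request,
# then grant it for real only if the resulting state is safe.
def is_safe(available, need, allocation):
    work, finish = list(available), [False] * len(allocation)
    while True:
        for i, f in enumerate(finish):
            if not f and all(nj <= wj for nj, wj in zip(need[i], work)):
                work = [wj + aj for wj, aj in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return all(finish)

def request_resources(i, request, available, max_demand, allocation):
    need = [[mj - aj for mj, aj in zip(mrow, arow)]
            for mrow, arow in zip(max_demand, allocation)]
    if any(rj > nj for rj, nj in zip(request, need[i])):
        raise ValueError("process exceeded its stated maximum")  # step 1
    if any(rj > aj for rj, aj in zip(request, available)):
        return False                                             # step 2: must wait
    # Step 3: pretend the request is granted, then test safety.
    available = [aj - rj for aj, rj in zip(available, request)]
    allocation = [row[:] for row in allocation]                  # local copy
    allocation[i] = [aj + rj for aj, rj in zip(allocation[i], request)]
    need[i] = [nj - rj for nj, rj in zip(need[i], request)]
    return is_safe(available, need, allocation)

available  = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]

print(request_resources(1, [1, 0, 2], available, max_demand, allocation))  # True

# Grant P1's request for real, then test the two follow-up requests:
available = [2, 3, 0]
allocation[1] = [3, 0, 2]
print(request_resources(4, [3, 3, 0], available, max_demand, allocation))  # False: must wait
print(request_resources(0, [0, 2, 0], available, max_demand, allocation))  # False: unsafe
```

P1's request leads to a safe state and can be granted. After that, P4's request exceeds what is available (only 2 instances of A remain), and P0's request, although the resources are available, would leave the system in an unsafe state.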
