Osy Unit 4 Solved Assignment PDF
CPU scheduling is the process of determining which process will own the CPU for execution
while another process is on hold. The main task of CPU scheduling is to make sure that
whenever the CPU becomes idle, the OS selects one of the processes available in the ready
queue for execution. The selection is carried out by the CPU scheduler, which selects one of
the processes in memory that are ready for execution.
Scheduling Objectives :
1. Be Fair: A scheduler makes sure that each process gets a fair share of the CPU time.
2. Maximize Resource Utilization: Resources of the system should be kept busy by the scheduler.
3. Response Time: A scheduler should minimize the response time.
4. Maximize Throughput: A scheduler should maximize the number of jobs processed per unit time.
5. Turnaround: A scheduler should minimize the turnaround time.
6. Minimize Overhead: A scheduler should minimize the overhead that results in wasted resources.
7. Efficiency: A scheduler should keep the system busy and get more jobs done per second.
Almost all programs have some alternating cycle of CPU number crunching and
waiting for I/O of some kind. ( Even a simple fetch from memory takes a long time
relative to CPU speeds. )
In a simple system running a single process, the time spent waiting for I/O is wasted,
and those CPU cycles are lost forever.
A scheduling system allows one process to use the CPU while another is waiting for
I/O, thereby making full use of otherwise lost CPU cycles.
The challenge is to make the overall system as "efficient" and "fair" as possible,
subject to varying and often dynamic conditions, and where "efficient" and "fair" are
somewhat subjective terms, often subject to shifting priority policies.
1. Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run
a task with a higher priority before another task, even though that other task is currently
running. The running task is therefore interrupted for some time and resumed later, once the
higher-priority task has finished its execution.
2. Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the waiting
state.
This scheduling method was used by Microsoft Windows 3.1 and by the Apple Macintosh
operating systems.
It is the only method that can be used on certain hardware platforms, because it does not
require the special hardware (for example, a timer) needed for preemptive scheduling.
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, an I/O
request or an invocation of wait for the termination of a child process).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on
completion of I/O).
4. When a process terminates.
When Scheduling takes place only under circumstances 1 and 4, we say the scheduling
scheme is non-preemptive; otherwise the scheduling scheme is preemptive.
Maximize:
CPU utilization: CPU utilization is the main task in which the operating system needs to
make sure that the CPU remains as busy as possible. It can range from 0 to 100 percent. In a
real system, it should range from about 40 percent (for a lightly loaded system) to 90 percent
(for a heavily loaded system).
Throughput: The number of processes that finish their execution per unit time is known as
throughput. So, when the CPU is busy executing processes, work is being done, and the work
completed per unit time is called throughput.
Minimize:
Waiting time: Waiting time is the amount of time a specific process spends waiting in the
ready queue.
Response time: Response time is the amount of time from when a request is submitted until
the first response is produced.
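For the simple examples that follow (one CPU burst per process, no I/O), these measures are
related as follows: turnaround time = completion time - arrival time, waiting time =
turnaround time - CPU burst time, and response time = time the process first gets the CPU -
arrival time.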
Dispatcher :
The dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. It should be fast, since it is invoked on every context switch. Dispatch
latency is the amount of time it takes the dispatcher to stop one process and start another.
The function of the dispatcher involves:
Context Switching
Switching to user mode
Moving to the correct location in the newly loaded program.
Types of CPU Scheduling Algorithms
There are mainly six types of process scheduling algorithms.
First Come First Serve (FCFS)
As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of
the queue. When the CPU becomes free, it is assigned to the process at the head of the queue.
Characteristics of the FCFS method:
Jobs are executed in the order in which they arrive; the method is non-preemptive and easy
to implement, but the average waiting time can be high when long jobs arrive ahead of short ones.
Example:
Gantt chart:
| P1 | P2 | P3 | P4 |
0    8    12   21   26
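The chart above can be reproduced with a short sketch. The original process table is not
shown, so the burst times (8, 4, 9, 5) are read off the chart boundaries and all processes are
assumed to arrive at time 0 in the order P1 to P4:

# Minimal FCFS sketch (assumed data: bursts read off the chart, arrival at t = 0).
processes = [("P1", 8), ("P2", 4), ("P3", 9), ("P4", 5)]

def fcfs(procs):
    time = 0
    for name, burst in procs:
        start = time                     # waiting time = start - arrival (arrival = 0 here)
        time += burst                    # each process runs to completion
        print(name, "start =", start, "finish =", time)

fcfs(processes)                          # prints the boundaries 0, 8, 12, 21, 26

Under these assumed arrival times, the average waiting time is (0 + 8 + 12 + 21) / 4 = 10.25
time units.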
Shortest Job First (SJF)
1. Non-preemptive SJF: If the CPU is executing one job, it is not stopped until its completion.
2. Preemptive SJF: In this method, if the CPU is executing one job and a new job with a
smaller burst time arrives, the current job is preempted and the new job is executed first.
Example :
Process Arrival Time CPU burst time
P1 0 4
P2 1 1
P3 2 2
P4 3 1
By non-preemptive SJF:
| P1 | P2 | P4 | P3 |
0    4    5    6    8
By preemptive SJF:
| P1 | P2 | P3 | P4 | P1 |
0    1    2    4    5    8
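The preemptive (SRTF) chart can be checked with a small simulation of the example table
above; this is a minimal sketch that breaks ties in favour of the process that arrived earlier:

# Preemptive SJF (shortest remaining time first) simulation of the example:
# (name, arrival time, CPU burst time)
procs = [("P1", 0, 4), ("P2", 1, 1), ("P3", 2, 2), ("P4", 3, 1)]

remaining = {name: burst for name, _, burst in procs}
finish = {}
t = 0
while remaining:
    # processes that have arrived and are not yet finished
    ready = [(remaining[n], a, n) for n, a, _ in procs if a <= t and n in remaining]
    if not ready:
        t += 1
        continue
    _, _, cur = min(ready)               # smallest remaining time; earlier arrival breaks ties
    remaining[cur] -= 1
    t += 1
    if remaining[cur] == 0:
        del remaining[cur]
        finish[cur] = t

print(finish)                            # {'P2': 2, 'P3': 4, 'P4': 5, 'P1': 8}

The completion times match the Gantt chart above: P1 starts at time 0, is preempted by P2,
and finishes last at time 8.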
Shortest Remaining Time (SRT)
In this algorithm, the process with the smallest estimated run time to completion is run next.
In SJF, once a process begins its execution, it runs till completion. In SRT, a running process
may be preempted by a newly arrived process with a smaller estimated run time. This method
prevents a newer ready-state process from holding up the completion of an older process.
Characteristics of the SRT scheduling method:
This method is mostly applied in batch environments where short jobs need to be given
preference.
It is not an ideal method for a shared system where the required CPU time is unknown.
The operating system associates with each process the length of its next CPU burst and uses
these lengths to schedule the process with the shortest possible time.
Example Gantt chart:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
Priority Scheduling
Priority scheduling allows the OS to assign priorities to processes. The process with the
highest priority is carried out first, whereas jobs with equal priorities are carried out on a
round-robin or FCFS basis. Priority can be decided based on memory requirements, time
requirements, etc.
When a process enters the ready queue, its priority is compared with the priority of the
running process.
Example:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
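A minimal sketch of non-preemptive priority scheduling. The original process table is not
shown, so the burst times below are read off the chart boundaries, all processes are assumed
to arrive at time 0, and the priority values (lower number = higher priority) are assumptions
chosen to match the order in the chart:

# Non-preemptive priority scheduling sketch (assumed data; lower value = higher priority).
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]  # (name, burst, priority)

t = 0
for name, burst, priority in sorted(procs, key=lambda p: p[2]):
    print(name, "runs from", t, "to", t + burst)
    t += burst                           # highest-priority ready process runs to completion

With these assumed values, the sketch reproduces the boundaries 0, 1, 6, 16, 18, 19 shown above.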
Round-Robin Scheduling
Round robin is a scheduling algorithm designed especially for time-sharing systems. The
name of this algorithm comes from the round-robin principle, where each person gets an
equal share of something in turn. It is widely used for scheduling in multitasking systems and
provides starvation-free execution of processes.
Processes are dispatched in a first-in, first-out manner but are allowed to run for only a
limited amount of time.
To allow this preemption, a small amount of time called the time quantum or time slice is
defined. If a process does not complete its execution before its time slice expires, it is preempted.
Example:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P4 | P3 |
0    3    6    9    12   14   17   20   22
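A minimal round-robin sketch with a time quantum of 3. The process table for the chart above
is not shown, so the burst times here are hypothetical and all processes are assumed to arrive
at time 0:

from collections import deque

# Round-robin sketch: each process runs for at most one time quantum per turn.
procs = deque([("P1", 5), ("P2", 3), ("P3", 8), ("P4", 6)])   # (name, remaining burst) - hypothetical
quantum = 3
t = 0
while procs:
    name, rem = procs.popleft()
    run = min(quantum, rem)
    print(name, "runs from", t, "to", t + run)
    t += run
    if rem > run:                        # not finished: rejoin the tail of the ready queue
        procs.append((name, rem - run))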
Multilevel Queue Scheduling
It may happen that the processes in the ready queue can be divided into different classes,
where each class has its own scheduling needs. For example, a common division is between
foreground (interactive) processes and background (batch) processes. These two classes have
different scheduling needs. For this kind of situation, multilevel queue scheduling is used.
The ready queue is divided into separate queues for each class of processes. For example,
consider three different types of processes: system processes, interactive processes and batch
processes. Each of the three types of processes has its own queue, and each queue has its own
scheduling algorithm. For example, queue 1 and queue 2 may use round robin while queue 3
uses FCFS to schedule their processes.
Scheduling among the queues is also necessary. There are two ways to do so:
1. Fixed-priority preemptive scheduling: Each queue has absolute priority over lower-priority
queues. Consider the priority order queue 1 > queue 2 > queue 3. According to this method,
no process in the batch queue (queue 3) can run unless queue 1 and queue 2 are empty. If a
batch process (queue 3) is running and a system process (queue 1) or an interactive process
(queue 2) enters the ready queue, the batch process is preempted (a small sketch of this
dispatch rule follows the list).
2. Time slicing: Each queue gets a certain portion of CPU time and can use it to schedule its
own processes. For instance, queue 1 takes 50 percent of the CPU time, queue 2 takes 30
percent, and queue 3 gets 20 percent.
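As mentioned in item 1 above, here is a minimal sketch of the fixed-priority dispatch rule
among the three queues (the queue contents are hypothetical):

from collections import deque

# Fixed-priority dispatch: a lower-priority queue is served only when every
# higher-priority queue is empty.
system_q      = deque()                  # queue 1 (highest priority), currently empty
interactive_q = deque(["I1", "I2"])      # queue 2
batch_q       = deque(["B1"])            # queue 3 (lowest priority)

def pick_next():
    for q in (system_q, interactive_q, batch_q):
        if q:
            return q.popleft()
    return None                          # nothing is ready to run

print(pick_next())                       # 'I1' - queue 1 is empty, so queue 2 is served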
Deadlock :
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track and
there is only one track: neither train can move once they are in front of each other. A similar
situation occurs in operating systems when two or more processes hold some resources and
wait for resources held by the other(s). For example, Process 1 is holding Resource 1 and
waiting for Resource 2, which has been acquired by Process 2, while Process 2 is waiting for
Resource 1.
System Models :
A system consists of a finite number of resources (memory, files, I/O devices, CPU) that are
shared among a number of processes. A process may request as many resources as it requires
to carry out its designated task.
A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource
The operating system checks to make sure that the process has requested and has been
allocated the resource. A system table records whether each resource is allocated or free.
Conditions for Deadlock :
A deadlock can arise only if the following four conditions hold simultaneously.
1. Mutual Exclusion
A resource can only be used in a mutually exclusive manner; two processes cannot use the
same resource at the same time.
2. Hold and Wait
A process waits for some resources while holding another resource at the same time.
3. No Preemption
A resource cannot be forcibly taken away from a process; it is released only voluntarily by
the process holding it, after that process has completed its task.
4. Circular Wait
All the processes must be waiting for resources in a cyclic manner, so that the last process is
waiting for a resource held by the first process.
Methods for handling deadlocks:
1) Deadlock prevention or avoidance: Do not allow the system to enter a deadlocked state in
the first place.
2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it once
it has occurred.
3) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the
system. This is the approach that both Windows and UNIX take.
Deadlock Prevention
Deadlocks can be prevented by preventing at least one of the four required conditions:
Mutual Exclusion
Shared resources such as read-only files do not lead to deadlocks, but some resources (such
as printers and tape drives) require exclusive access by a single process, so this condition
generally cannot be disallowed.
Hold and Wait
To prevent this condition, processes must be prevented from holding one or more
resources while simultaneously waiting for one or more others. There are several
possibilities for this:
o Require that all processes request all resources at one time. This can be
wasteful of system resources if a process needs one resource early in its
execution and doesn't need some other resource until much later.
o Require that processes holding resources must release them before requesting
new resources, and then re-acquire the released resources along with the new
ones in a single new request. This can be a problem if a process has partially
completed an operation using a resource and then fails to get it re-allocated
after releasing it.
o Either of the methods described above can lead to starvation if a process
requires one or more popular resources.
No Preemption
One way to prevent this condition is to require that if a process requests a resource that
cannot be immediately allocated to it, then all resources it is currently holding are preempted
(released), and the process is restarted only when it can regain both its old resources and the
newly requested ones.
Circular Wait
One way to avoid circular wait is to number all resources, and to require that
processes request resources only in strictly increasing ( or decreasing ) order.
In other words, in order to request resource Rj, a process must first release all Ri such
that i >= j.
One big challenge in this scheme is determining the relative ordering of the different
resources.
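A minimal sketch of the resource-ordering idea using two hypothetical locks: every thread
acquires locks in increasing order of their assigned numbers, so a circular wait cannot form.

import threading

# Number every resource and always acquire in increasing numerical order.
locks = {1: threading.Lock(), 2: threading.Lock()}

def acquire_in_order(*resource_ids):
    for rid in sorted(resource_ids):     # sorting enforces one global acquisition order
        locks[rid].acquire()

def release_all(*resource_ids):
    for rid in sorted(resource_ids, reverse=True):
        locks[rid].release()

acquire_in_order(2, 1)                   # actually takes lock 1 first, then lock 2
release_all(2, 1)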
Deadlock Avoidance
The general idea behind deadlock avoidance is to prevent deadlocks from ever
happening, by preventing at least one of the aforementioned conditions.
This requires more information about each process, AND tends to lead to low device
utilization. ( I.e. it is a conservative approach. )
In some algorithms the scheduler only needs to know the maximum number of each
resource that a process might potentially use. In more complex algorithms the
scheduler can also take advantage of the schedule of exactly what resources may be
needed in what order.
When a scheduler sees that starting a process or granting resource requests may lead
to future deadlocks, then that process is just not started or the request is not granted.
A resource allocation state is defined by the number of available and allocated
resources, and the maximum requirements of all processes in the system.
Safe State
A state is safe if the system can allocate all resources requested by all processes ( up
to their stated maximums ) without entering a deadlock state.
More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2,
..., PN } such that all of the resource requests for Pi can be granted using the resources
currently allocated to Pi and all processes Pj where j < i. ( I.e. if all the processes prior
to Pi finish and free up their resources, then Pi will be able to finish also, using the
resources that they have freed up. )
If a safe sequence does not exist, then the system is in an unsafe state, which MAY
lead to deadlock. ( All safe states are deadlock free, but not all unsafe states lead to
deadlocks. )
For example, consider a system with 12 tape drives, allocated as follows. Is this a safe
state? What is the safe sequence?
Process   Maximum Needs   Current Allocation
P0             10                 5
P1              4                 2
P2              9                 2
What happens to the above table if process P2 requests and is granted one more tape
drive?
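A quick check of this example: 9 of the 12 drives are allocated, leaving 3 free. P1 needs at
most 4 - 2 = 2 more drives, which the 3 free drives can cover; when P1 finishes it returns 4
drives, leaving 5 free. P0 then needs at most 10 - 5 = 5 more, which can be covered; when it
finishes, 10 drives are free, which covers P2's remaining need of 9 - 2 = 7. So <P1, P0, P2> is
a safe sequence and the state is safe. If P2 is granted one more drive, only 2 drives remain
free: P1 can still finish and return its 4 drives, but 4 free drives cover neither P0's remaining
need of 5 nor P2's remaining need of 6, so the resulting state is unsafe.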
Key to the safe state approach is that when a request is made for resources, the request is
granted only if the resulting allocation state is a safe one.
Resource-Allocation Graph Algorithm
If resource categories have only single instances of their resources, then deadlock
states can be detected by cycles in the resource-allocation graphs.
In this case, unsafe states can be recognized and avoided by augmenting the resource-
allocation graph with claim edges, noted by dashed lines, which point from a process
to a resource that it may request in the future.
In order for this technique to work, all claim edges must be added to the graph for any
particular process before that process is allowed to request any resources.
( Alternatively, processes may only make requests for resources for which they have
already established claim edges, and claim edges cannot be added to any process that
is currently holding resources. )
When a process makes a request, the claim edge Pi->Rj is converted to a request edge.
Similarly when a resource is released, the assignment reverts back to a claim edge.
This approach works by denying requests that would produce cycles in the resource-
allocation graph, taking claim edges into effect.
Consider for example what happens when process P2 requests resource R2:
Figure - Resource allocation graph for deadlock avoidance
The resulting resource-allocation graph would have a cycle in it, and so the request
cannot be granted.
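A minimal sketch of the cycle check this approach relies on, treating the resource-allocation
graph (with the claim edge under consideration converted to an assignment edge) as a plain
directed graph; the node names and edges below are hypothetical:

# Detect a cycle in a directed graph given as {node: [successors]}.
def has_cycle(graph):
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True                  # back edge found: there is a cycle
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(graph))

# Hypothetical state: R1 assigned to P1, P2 waiting for R1, P1 holds a claim edge to R2.
graph = {"R1": ["P1"], "P1": ["R2"], "P2": ["R1"]}
print(has_cycle(graph))                  # False - the current state has no cycle

graph["R2"] = ["P2"]                     # pretend P2's request for R2 has been granted
print(has_cycle(graph))                  # True - granting it would close a cycle, so deny it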
Banker's Algorithm
For resource categories that contain more than one instance the resource-allocation
graph method does not work, and more complex ( and less efficient ) methods must be
chosen.
The Banker's Algorithm gets its name because it is a method that bankers could use to
assure that when they lend out resources they will still be able to satisfy all their
clients. ( A banker won't loan out a little money to start building a house unless they
are assured that they will later be able to loan out the rest of the money to finish the
house. )
When a process starts up, it must state in advance the maximum allocation of
resources it may request, up to the amount available on the system.
When a request is made, the scheduler determines whether granting the request would
leave the system in a safe state. If not, then the process must wait until the request can
be granted safely.
The banker's algorithm relies on several key data structures: ( where n is the number
of processes and m is the number of resource categories. )
o Available[ m ] indicates how many resources are currently available of each
type.
o Max[ n ][ m ] indicates the maximum demand of each process of each
resource.
o Allocation[ n ][ m ] indicates the number of each resource category allocated
to each process.
o Need[ n ][ m ] indicates the remaining resources needed of each type for each
process. ( Note that Need[ i ][ j ] = Max[ i ][ j ] - Allocation[ i ][ j ] for all i, j. )
For simplification of discussions, we make the following notations / observations:
o One row of the Need vector, Need[ i ], can be treated as a vector
corresponding to the needs of process i, and similarly for Allocation and Max.
o A vector X is considered to be <= a vector Y if X[ i ] <= Y[ i ] for all i.
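A minimal sketch of these data structures for a hypothetical system with n = 3 processes and
m = 2 resource categories; Need is derived from Max and Allocation exactly as stated above:

# Hypothetical snapshot with n = 3 processes and m = 2 resource categories.
available  = [3, 2]                          # Available[m]
max_demand = [[7, 4], [3, 2], [9, 2]]        # Max[n][m]
allocation = [[2, 1], [2, 0], [3, 1]]        # Allocation[n][m]

# Need[i][j] = Max[i][j] - Allocation[i][j]
need = [[mx - al for mx, al in zip(max_row, alloc_row)]
        for max_row, alloc_row in zip(max_demand, allocation)]
print(need)                                  # [[5, 3], [1, 2], [6, 1]]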
Safety Algorithm
In order to apply the Banker's algorithm, we first need an algorithm for determining
whether or not a particular state is safe.
This algorithm determines if the current state of a system is safe, according to the
following steps:
1. Let Work and Finish be vectors of length m and n respectively.
Work is a working copy of the available resources, which will be
modified during the analysis.
Finish is a vector of booleans indicating whether a particular process
can finish. ( or has finished so far in the analysis. )
Initialize Work to Available, and Finish to false for all elements.
2. Find an i such that both (A) Finish[ i ] == false, and (B) Need[ i ] <= Work.
This process has not finished, but could with the given available working set.
If no such i exists, go to step 4.
3. Set Work = Work + Allocation[ i ], and set Finish[ i ] to true. This corresponds
to process i finishing up and releasing its resources back into the work pool.
Then loop back to step 2.
4. If Finish[ i ] == true for all i, then the state is a safe state, because a safe
sequence has been found.
( JTB's Modification:
1. In step 1. instead of making Finish an array of booleans initialized to false,
make it an array of ints initialized to 0. Also initialize an int s = 0 as a step
counter.
2. In step 2, look for Finish[ i ] == 0.
3. In step 3, set Finish[ i ] to ++s. S is counting the number of finished processes.
4. For step 4, the test can be either Finish[ i ] > 0 for all i, or s >= n. The benefit
of this method is that if a safe state exists, then Finish[ ] indicates one safe
sequence ( of possibly many. ) )
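A minimal sketch of the safety algorithm above, applied to the 12-tape-drive example from
the Safe State discussion (one resource category, so each vector has a single component); it
returns whether the state is safe and, if so, one safe sequence:

# Safety algorithm sketch: Work starts as Available; a process can "finish" when
# Need[i] <= Work component-wise, and it then returns its allocation to Work.
def is_safe(available, max_demand, allocation):
    n, m = len(max_demand), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finish, sequence = list(available), [False] * n, []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]    # process i finishes, releasing its resources
                finish[i] = True
                sequence.append("P%d" % i)
                progress = True
    return all(finish), sequence

# 12 tape drives: 9 allocated, 3 available; Max = 10, 4, 9 and Allocation = 5, 2, 2.
print(is_safe([3], [[10], [4], [9]], [[5], [2], [2]]))
# (True, ['P1', 'P0', 'P2'])  -> safe, with safe sequence <P1, P0, P2>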
Resource-Request Algorithm (The Banker's Algorithm)
Now that we have a tool for determining whether a particular state is safe or not, we are
ready to look at the Banker's algorithm itself.
This algorithm determines if a new request is safe, and grants it only if it is safe to do
so.
When a request is made ( that does not exceed currently available resources ), pretend
it has been granted, and then see if the resulting state is a safe one. If so, grant the
request, and if not, deny the request, as follows:
1. Let Request[ n ][ m ] indicate the number of resources of each type currently
requested by processes. If Request[ i ] > Need[ i ] for any process i, raise an
error condition.
2. If Request[ i ] > Available for any process i, then that process must wait for
resources to become available. Otherwise the process can continue to step 3.
3. Check to see if the request can be granted safely, by pretending it has been
granted and then seeing if the resulting state is safe. If so, grant the request,
and if not, then the process must wait until its request can be granted
safely. The procedure for granting a request ( or pretending to for testing
purposes ) is:
Available = Available - Request
Allocation = Allocation + Request
Need = Need - Request
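A minimal sketch of this request procedure: it validates the request against Need and
Available, tentatively applies it, keeps it if the resulting state passes the safety check, and
rolls it back otherwise. The data at the bottom is the 12-tape-drive example, with P2 asking
for one more drive:

# Resource-request (Banker's) algorithm sketch: pretend to grant, then test safety.
def is_safe(available, allocation, need):
    work, finish = list(available), [False] * len(need)
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(need):
            if not finish[i] and all(r <= w for r, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                changed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    if any(r > nd for r, nd in zip(request, need[i])):
        raise ValueError("process exceeded its stated maximum claim")    # step 1
    if any(r > av for r, av in zip(request, available)):
        return False                                                     # step 2: must wait
    # Step 3: tentatively grant the request ...
    available[:]  = [av - r for av, r in zip(available, request)]
    allocation[i] = [al + r for al, r in zip(allocation[i], request)]
    need[i]       = [nd - r for nd, r in zip(need[i], request)]
    if is_safe(available, allocation, need):
        return True                                                      # ... and keep it if safe
    # ... otherwise roll the pretend-grant back and make the process wait.
    available[:]  = [av + r for av, r in zip(available, request)]
    allocation[i] = [al - r for al, r in zip(allocation[i], request)]
    need[i]       = [nd + r for nd, r in zip(need[i], request)]
    return False

# Tape-drive example: P2 requests one more drive; the resulting state is unsafe, so deny it.
available, allocation, need = [3], [[5], [2], [2]], [[5], [2], [7]]
print(request_resources(2, [1], available, allocation, need))            # False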
An Illustrative Example