OS unit 2

The document discusses various CPU scheduling algorithms, including priority scheduling, multilevel queue scheduling, and multilevel feedback queue scheduling, highlighting their characteristics and performance implications. It also covers multiprocessor scheduling approaches such as asymmetric and symmetric multiprocessing, along with concepts like processor affinity and load balancing. The importance of managing process priorities and preventing starvation through techniques like aging is emphasized throughout the text.

low numbers to represent low priority; others use low numbers for high priority.
 Priorities can be defined either internally or externally. Internally defined priorities use some
measurable quantity or quantities to compute the priority of a process. For example, time limits,
memory requirements, the number of open files, and the ratio of average I/O burst to average CPU
burst have been used in computing priorities. External priorities are set by criteria outside the
operating system, such as the importance of the process, the department sponsoring the work, and
other, often political, factors.
 Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready
queue, its priority is compared with the priority of the currently running process. A preemptive
priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is
higher than the priority of the currently running process. A nonpreemptive priority scheduling
algorithm will simply put the new process at the head of the ready queue.
 A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process
that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling
algorithm can leave some low-priority processes waiting indefinitely. In a heavily loaded computer
system, a steady stream of higher-priority processes can prevent a low-priority process from ever
getting the CPU.
 A solution to the problem of indefinite blockage of low-priority processes is aging. Aging involves
gradually increasing the priority of processes that wait in the system for a long time. For example, if
priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1
every 15 minutes.

Scheduling Criteria
Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may
favour one class of processes over another. In choosing which algorithm to use in a particular situation, we must
consider the properties of the various algorithms.
Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are
used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include
the following:
 CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from
0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent
(for a heavily loaded system).
 Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the
number of processes that are completed per time unit, called throughput. For long processes, this rate may be
one process per hour; for short transactions, it may be ten processes per second.
 Turnaround time. The interval from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the
ready queue, executing on the CPU, and doing I/O.
 Waiting time. The CPU-scheduling algorithm does not affect the amount of time during which a process
executes or does I/O. It affects only the amount of time that a process spends waiting in the ready queue.
Waiting time is the sum of the periods spent waiting in the ready queue.
 Response time. In an interactive system, turnaround time may not be the best criterion. Thus, another measure
is the time from the submission of a request until the first response is produced. This measure, called response
time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is
generally limited by the speed of the output device.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting
time, and response time.

Scheduling algorithms
First-Come, First-Served Scheduling
 The First-Come, First-Served (FCFS) scheduling algorithm is the simplest CPU-scheduling algorithm.
 With this scheme, the process that requests the CPU first is allocated the CPU first.
 The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready
queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the
head of the queue. The running process is then removed from the queue.
 There is a convoy effect as all the other processes wait for the one big process to get off the CPU. This effect
results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go
first.
 The FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a process, that process
keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
 The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that each
user get a share of the CPU at regular intervals.

Round-Robin Scheduling
 The round-robin (RR) scheduling algorithm is designed especially for timesharing systems. It is similar to
FCFS scheduling, but preemption is added to enable the system to switch between processes.
 A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10
to 100 milliseconds in length.
 The ready queue is treated as a circular queue. To implement RR scheduling, we again treat the ready
queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and
dispatches the process.
 One of two things will then happen. The process may have a CPU burst of less than 1 time quantum. In
this case, the process itself will release the CPU voluntarily. The scheduler will then proceed to the next
process in the ready queue. If the CPU burst of the currently running process is longer than 1 time
quantum, the timer will go off and will cause an interrupt to the operating system. A context switch will be
executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the
next process in the ready queue.
 The average waiting time under the RR policy is often long.
 The performance of the RR algorithm depends heavily on the size of the time quantum. At one extreme, if
the time quantum is extremely large, the RR policy is the same as the FCFS policy. In contrast, if the time
quantum is extremely small (say, 1 millisecond), the RR approach can result in a large number of context
switches. It creates processor sharing and creates an appearance that each of n processes has its own
processor running at 1/n the speed of the real processor.

Multilevel Queue Scheduling
 Another class of scheduling algorithms has been created for situations in which processes are easily
classified into different groups.
 A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The
processes are permanently assigned to one queue, generally based on some property of the process, such as
memory size, process priority, or process type.
 Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground
and background processes. The foreground queue might be scheduled by an RR algorithm, while the
background queue is scheduled by an FCFS algorithm.
 In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority
preemptive scheduling. For example, the foreground queue may have absolute priority over the
background queue.
Example:
 Consider the example of a multilevel queue scheduling algorithm with five queues, listed below in this
order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
 Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example,
could run unless the queues for system processes, interactive processes, and interactive editing processes
were all empty.
 If an interactive editing process entered the ready queue while a batch process was running, the batch
process would be preempted.
 Another possibility is to time-slice among the queues. Here, each queue gets a certain portion of the CPU
time, which it can then schedule among its various processes. For instance, in the foreground–background
queue example, the foreground queue can be given 80 percent of the CPU time for RR scheduling among
its processes, while the background queue receives 20 percent of the CPU to give to its processes on an
FCFS basis.
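The round-robin mechanics described earlier can be sketched as a short simulation. This is an illustrative sketch, not code from the text; it assumes all processes arrive at time 0 and reports the average waiting time.

```python
from collections import deque

def rr_schedule(bursts, quantum):
    """Simulate round-robin scheduling for processes that all arrive
    at time 0. bursts is a list of CPU-burst lengths; the function
    returns the average waiting time."""
    n = len(bursts)
    remaining = list(bursts)
    ready = deque(range(n))          # circular FIFO ready queue
    time = 0
    finish = [0] * n
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)          # quantum expired: back to the tail
        else:
            finish[i] = time
    # waiting time = completion time - burst length (all arrive at t = 0)
    return sum(finish[i] - bursts[i] for i in range(n)) / n
```

For bursts of 24, 3, and 3 ms with a 4 ms quantum, this yields an average waiting time of 17/3 ≈ 5.67 ms.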
Multilevel Feedback Queue Scheduling
 Normally, when the multilevel queue scheduling algorithm is used, processes are permanently assigned
to a queue when they enter the system. If there are separate queues for foreground and background
processes, for example, processes do not move from one queue to the other, since processes do not
change their foreground or background nature.
 This setup has the advantage of low scheduling overhead, but it is inflexible.
 The multilevel feedback queue scheduling algorithm, in contrast, allows a process to move between
queues.
 The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses
too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and
interactive processes in the higher-priority queues. In addition, a process that waits too long in a lower-
priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
 A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of
8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is
empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not
complete, it is preempted and is put into queue 2. Processes in queue 2 are run on an FCFS basis but
are run only when queues 0 and 1 are empty.
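The queue-0/queue-1/queue-2 behaviour just described can be sketched as a small simulation. This is a simplified model under stated assumptions: all processes arrive at time 0, quanta of 8 and 16 ms, FCFS in queue 2, and no upgrades (aging) are modelled.

```python
from collections import deque

def mlfq_run(bursts):
    """Trace the three-queue MLFQ described above: queue 0 (quantum 8 ms),
    queue 1 (quantum 16 ms), queue 2 (FCFS). Returns a dict mapping each
    process index to its completion time."""
    quanta = [8, 16, None]                       # None = FCFS, run to completion
    queues = [deque(enumerate(bursts)), deque(), deque()]
    time, finish = 0, {}
    while any(queues):
        level = next(l for l in range(3) if queues[l])   # highest nonempty queue
        pid, rem = queues[level].popleft()
        run = rem if quanta[level] is None else min(quanta[level], rem)
        time += run
        rem -= run
        if rem == 0:
            finish[pid] = time
        else:
            queues[level + 1].append((pid, rem))  # demote on quantum expiry
    return finish
```

A 20 ms job runs 8 ms in queue 0, is demoted, and finishes its remaining 12 ms in queue 1 only after queue 0 has drained.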

 A multilevel feedback queue scheduler is defined by the following parameters:


 The number of queues.
 The scheduling algorithm for each queue.
 The method used to determine when to upgrade a process to a higher-priority queue.
 The method used to determine when to demote a process to a lower-priority queue.
 The method used to determine which queue a process will enter when that process needs service.

Multiple-Processor Scheduling


The following are several concerns in multiprocessor scheduling.

Approaches to Multiple-Processor Scheduling
1. Asymmetric Multiprocessing
 All scheduling decisions, I/O processing, and other system activities are handled by a single
processor—the master server. The other processors execute only user code.
 This asymmetric multiprocessing is simple because only one processor accesses the system
data structures, reducing the need for data sharing.

2. Symmetric Multiprocessing (SMP)
 A second approach uses symmetric multiprocessing (SMP), where each processor is self-scheduling. All
processes may be in a common ready queue, or each processor may have its own private queue of ready
processes.
 Regardless, scheduling proceeds by having the scheduler for each processor examine the ready queue and
select a process to execute.
 If we have multiple processors trying to access and update a common data structure, the scheduler must be
programmed carefully.
 We must ensure that two separate processors do not choose to schedule the same process and that processes
are not lost from the queue.
 Virtually all modern operating systems support SMP, including Windows, Linux, and Mac OS X.

Processor Affinity
Most SMP systems try to avoid migration of processes from one processor to another and instead attempt to
keep a process running on the same processor. This is known as processor affinity—that is, a process has an affinity
for the processor on which it is currently running.
Processor affinity takes several forms:

 Soft Affinity: The operating system will attempt to keep a process on a single processor, but it is possible for
a process to migrate between processors.
 Hard Affinity: It allows a process to specify a subset of processors on which it may run.

The main-memory architecture of a system can affect processor-affinity issues. Consider non-uniform
memory access (NUMA): the CPUs on a board can access the memory on that board faster than they can
access memory on other boards in the system.

Load Balancing
Load balancing attempts to keep the workload evenly distributed across all processors in an SMP
system. It is necessary only on systems where each processor has its own private queue of eligible processes to
execute.
There are two general approaches to load balancing:
1. Push Migration
 With push migration, a specific task periodically checks the load on each processor and—if it finds an
imbalance—evenly distributes the load by moving (or pushing) processes from overloaded to idle or less-
busy processors.

2. Pull Migration
 Pull migration occurs when an idle processor pulls a waiting task from a busy processor. Push and pull
migration need not be mutually exclusive and are in fact often implemented in parallel on load-balancing
systems.
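Push migration as just described can be sketched minimally. The run queues are modelled as plain Python lists, and the imbalance threshold of one task is an assumption for illustration.

```python
def push_migrate(run_queues):
    """Push-migration sketch: move one task from the most-loaded per-CPU
    run queue to the least-loaded one when the imbalance exceeds one task.
    Returns True if a task was migrated."""
    busiest = max(run_queues, key=len)
    idlest = min(run_queues, key=len)
    if len(busiest) - len(idlest) > 1:
        idlest.append(busiest.pop())   # push a task to the less busy CPU
        return True
    return False                       # load already balanced
```

A real scheduler would weigh task priorities and processor affinity before migrating; this sketch only balances queue lengths.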

Deadlocks - System Model, Deadlocks Characterization, Methods for Handling Deadlocks,


Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, and Recovery from Deadlock
Process Management and Synchronization - The Critical Section Problem, Synchronization
Hardware, Semaphores, and Classical Problems of Synchronization, Critical Regions, Monitors

DEADLOCKS
System model:
A system consists of a finite number of resources to be distributed among a number of competing
processes. The resources are partitioned into several types, each consisting of some number of
identical instances. Memory space, CPU cycles, files, I/O devices are examples of resource types. If a
system has 2 CPUs, then the resource type CPU has 2 instances.
A process must request a resource before using it and must release the resource after using it. A
process may request as many resources as it requires to carry out its task. The number of resources
requested may not exceed the total number of resources available in the system. A process cannot
request 3 printers if the system has only two.
A process may utilize a resource in the following sequence:
1.REQUEST: The process requests the resource. If the request cannot be granted immediately (if
the resource is being used by another process), then the requesting process must wait until it can
acquire the resource.
2.USE: The process can operate on the resource. If the resource is a printer, the process can print
on the printer.
3.RELEASE: The process releases the resource.
For each use of a kernel-managed resource by a process, the operating system checks that the process has
requested and has been allocated the resource. A system table records whether each resource is free
or allocated. For each resource that is allocated, the table also records the process to which it is allocated.
If a process requests a resource that is currently allocated to another process, it can be added to a queue
of processes waiting for this resource.
To illustrate a deadlocked state, consider a system with 3 CDRW drives. Suppose each of 3 processes holds one of these
CDRW drives.
If each process now requests another drive, the 3 processes will be in a deadlocked state. Each is waiting for the event
“CDRW is released” which can be caused only by one of the other waiting processes. This example illustrates a deadlock
involving the same resource type.
Deadlocks may also involve different resource types. Consider a system with one printer and one
DVD drive. The process Pi is holding the DVD and process Pj is holding the printer. If Pi
requests the printer and Pj requests the DVD drive, a deadlock occurs.
DEADLOCK CHARACTERIZATION:
In a deadlock, processes never finish executing, and system resources are tied up, preventing other jobs
from starting.
NECESSARY CONDITIONS:
A deadlock situation can arise if the following 4 conditions hold simultaneously in a system:
1. MUTUAL EXCLUSION: Only one process at a time can use the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been released.
2. HOLD AND WAIT: A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.
3. NO PREEMPTION: Resources cannot be preempted. A resource can be released only voluntarily by the
process holding it, after that process has completed its task.
4. CIRCULAR WAIT: A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a
resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is waiting for a resource held by Pn,
and Pn is waiting for a resource held by P0.

RESOURCE ALLOCATION GRAPH


Deadlocks can be described more precisely in terms of a directed graph called a system resource-
allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V
is partitioned into 2 different types of nodes:
P = {P1, P2, …, Pn}, the set consisting of all the active processes in the system.
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi ->Rj. It signifies that process Pi
has requested an instance of resource type Rj and is currently waiting for that resource.
A directed edge from resource type Rj to process Pi is denoted by Rj ->Pi, it signifies that an
instance of resource type Rj has been allocated to process Pi.
A directed edge Pi ->Rj is called a request edge. A directed edge Rj ->Pi is called an
assignment edge.
We represent each process Pi as a circle and each resource type Rj as a rectangle. Since resource type Rj
may have more than one instance, we represent each such instance as a dot within the rectangle.
A request edge points only to the rectangle Rj. An assignment edge must also
designate one of the dots in the rectangle.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource
allocation graph. When this request can be fulfilled, the request edge is instantaneously
transformed to an assignment edge. When the process no longer needs access to the resource, it
releases the resource; as a result, the assignment edge is deleted. The sets P, R, and E:
P= {P1, P2, P3}
R= {R1, R2, R3, R4}
E= {P1 ->R1, P2 ->R3, R1 ->P2, R2 ->P2, R2 ->P1, R3 ->P3}

One instance of resource type R1
Two instances of resource type R2
One instance of resource type R3
Three instances of resource type R4
PROCESS STATES:
Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type
R1.
Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
Process P3 is holding an instance of R3.
If the graph contains no cycles, then no process in the system is deadlocked. If the graph does contain
a cycle, then a deadlock may exist.
Suppose that process P3 requests an instance of resource type R2. Since no resource instance is
currently available, a request edge P3 ->R2 is added to the graph.
2 cycles:
P1 ->R1 ->P2 ->R3 ->P3 ->R2 ->P1
P2 ->R3 ->P3 ->R2 ->P2

Processes P1, P2, P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by
process P3. Process P3 is waiting for either process P1 or P2 to release resource R2. In
addition, process P1 is waiting for process P2 to release resource R1.
We also have a cycle: P1 ->R1 ->P3 ->R2 ->P1
However, there is no deadlock. Process P4 may release its instance of resource type R2. That resource
can then be allocated to P3, breaking the cycle.
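As a sketch, the edge set E from the example above can be checked for cycles programmatically. The node names follow the example; the depth-first search itself is a standard cycle-detection technique, not an algorithm from the text.

```python
from collections import defaultdict

# The example resource-allocation graph: edges from the set E above.
edges = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
         ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]

def find_cycle(edges):
    """Return True if the directed graph contains a cycle (DFS with
    white/gray/black colouring)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    state = {}                     # absent = white, "gray" = on stack
    def dfs(u):
        state[u] = "gray"
        for v in adj[u]:
            if state.get(v) == "gray" or (v not in state and dfs(v)):
                return True        # back edge found: cycle exists
        state[u] = "black"
        return False
    nodes = {u for e in edges for u in e}
    return any(n not in state and dfs(n) for n in nodes)
```

With the original E there is no cycle; adding the request edge P3 -> R2, as in the deadlocked example, creates one.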
DEADLOCK PREVENTION
For a deadlock to occur, each of the 4 necessary conditions must hold. By ensuring that at least one of
these conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion – not required for shareable resources; must hold for nonshareable
resources.
Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any
other resources
- Require a process to request and be allocated all its resources before it begins execution, or allow a
process to request resources only when the process has none.
- Low resource utilization; starvation is possible.

No Preemption –
-- If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released.
-- Preempted resources are added to the list of resources for which the process is waiting.
-- The process will be restarted only when it can regain its old resources, as well as the new
ones that it is requesting.

Circular Wait – impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration.

Deadlock Avoidance
Requires that the system has some additional a priori information available.
-- The simplest and most useful model requires that each process declare the maximum number
of resources of each type that it may need.
-- The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition.
 The resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes.
Safe State
 When a process requests an available resource, the system must decide if
immediate allocation leaves the system in a safe state.
The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes
in the system such that, for each Pi, the resources that Pi can still request can be satisfied
by the currently available resources plus the resources held by all the Pj, with j < i.
That is:
o If the resource needs of Pi are not immediately available, then Pi can wait until all
Pj have finished.
o When Pj is finished, Pi can obtain the needed resources, execute, return its allocated
resources, and terminate.
o When Pi terminates, Pi+1 can obtain its needed resources, and so on.
If a system is in a safe state, there are no deadlocks.
If a system is in an unsafe state, there is a possibility of deadlock. Avoidance ensures
that a system will never enter an unsafe state.

Avoidance algorithms
Single instance of a resource type:
o Use a resource-allocation graph.
Multiple instances of a resource type:
o Use the banker’s algorithm.
Resource-Allocation Graph Scheme
A claim edge Pi -> Rj indicates that process Pi may request resource Rj; it is represented by a
dashed line.
A claim edge converts to a request edge when the process requests the resource. A request edge is
converted to an assignment edge when the resource is allocated to the process. When a
resource is released by a process, the assignment edge reconverts to a claim edge. Resources
must be claimed a priori in the system.

Unsafe State In Resource-Allocation Graph


Banker’s Algorithm
Multiple instances of each resource type.
Each process must a priori claim its maximum use.
When a process requests a resource, it may have to wait.
When a process gets all its resources, it must return them in a finite amount of time.
Let n = number of processes, and m = number of resource types.
Available: Vector of length m. If Available[j] = k, there are k instances of resource type
Rj available.
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k
instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of
Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of
Rj to complete its task.
Need[i,j] = Max[i,j] – Allocation[i,j]
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
2. Initialize: Work = Available
Finish[i] = false for i = 0, 1, …, n-1
3. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 5
4. Work = Work + Allocationi; Finish[i] = true
go to step 3
5. If Finish[i] == true for all i, then the system is in a safe state
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants
k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2.
Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3.
Otherwise, Pi must wait, since the resources are not
available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
o If safe, the resources are allocated to Pi.
o If unsafe, Pi must wait, and the old resource-allocation state is restored.

Example of Banker’s Algorithm (REFER CLASS NOTES)


Consider 5 processes P0 through P4 and 3 resource types:
A (10 instances), B (5 instances), and C (7 instances).

Snapshot at time T0:

            Allocation   Max     Available
            A B C        A B C   A B C
      P0    0 1 0        7 5 3   3 3 2
      P1    2 0 0        3 2 2
      P2    3 0 2        9 0 2
      P3    2 1 1        2 2 2
      P4    0 0 2        4 3 3

The content of the matrix Need is defined to be Max – Allocation:

            Need
            A B C
      P0    7 4 3
      P1    1 2 2
      P2    6 0 0
      P3    0 1 1
      P4    4 3 1

The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.

P1 requests (1,0,2):
Check that Requesti ≤ Available (that is, (1,0,2) ≤ (3,3,2)): true.

            Allocation   Need    Available
            A B C        A B C   A B C
      P0    0 1 0        7 4 3   2 3 0
      P1    3 0 2        0 2 0
      P2    3 0 2        6 0 0
      P3    2 1 1        0 1 1
      P4    0 0 2        4 3 1

Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety
requirement.
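A sketch of the safety algorithm applied to the T0 snapshot above (the matrix layout and function name are my own, not from the notes). Note that the scan order used here discovers the safe sequence <P1, P3, P0, P2, P4>, which also satisfies the criteria: safe sequences need not be unique.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety-algorithm sketch. Matrices are lists of lists;
    returns a safe sequence of process indices, or None if unsafe."""
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)                    # Work = Available
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):                    # find i with Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi runs, then releases
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None                       # no candidate process: unsafe
    return sequence
```

Running it on the snapshot (Available = (3,3,2)) confirms the system is safe.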
Deadlock Detection
Allow the system to enter a deadlock state, run a detection algorithm, and apply a recovery scheme.
Single Instance of Each Resource Type
Maintain a wait-for graph. Nodes are processes.
Pi -> Pj if Pi is waiting for Pj.
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a
cycle, there exists a deadlock.
An algorithm to detect a cycle in a graph requires an order of n² operations, where n is
the number of vertices in the graph.

Resource-Allocation Graph and Wait-for Graph

Resource-Allocation Graph Corresponding wait- for graph

Several Instances of a Resource Type:


Available: A vector of length m indicates the number of available resources of each type.

Allocation: An n x m matrix defines the number of resources of each type


currently allocated to each process.
Request: An n x m matrix indicates the current request of each process.
If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
A) Work = Available
B) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise,
Finish[i] = true
2. Find an index i such that both:
A) Finish[i] == false
B) Requesti ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi; Finish[i] = true
go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if
Finish[i] == false, then Pi is deadlocked
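The detection algorithm above can be sketched in code. The data layout (lists of lists) and function name are assumptions; the function returns the indices of deadlocked processes, with an empty list meaning no deadlock.

```python
def detect_deadlock(available, allocation, request):
    """Matrix-based deadlock detection sketch. Returns the list of
    deadlocked process indices (empty list = no deadlock)."""
    n, m = len(allocation), len(available)
    work = list(available)                       # Work = Available
    # Step 1B: processes holding no resources can be marked finished
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):                       # find i with Request_i <= Work
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]  # optimistically let Pi finish
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]
```

With one resource type of two instances, two processes each holding one instance and requesting a second are mutually deadlocked; if one of them requests nothing, it can finish and free the other.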
Recovery from Deadlock:
Process Termination
Abort all deadlocked processes, or
abort one process at a time until the deadlock cycle is eliminated.
In which order should we choose to abort?
1. Priority of the process
2. How long the process has computed, and how much longer until completion
3. Resources the process has used
4. Resources the process needs to complete
5. How many processes will need to be terminated
6. Is the process interactive or batch?

Resource Preemption
Selecting a victim – minimize cost.
Rollback – return to some safe state, restart the process from that state.
Starvation – the same process may always be picked as the victim; include the number of rollbacks in the
cost factor.

System Calls
 fork()
 Most operating systems (including UNIX, Linux, and Windows) identify processes
according to a unique process identifier (or pid), which is typically an integer
number.
 A new process is created by the fork() system call. The new process consists of a
copy of the address space of the original process.
 This mechanism allows the parent process to communicate easily with its child
process. Both processes (the parent and the child) continue execution at the
instruction after the fork(), with one difference: the return code for the fork() is
zero for the new (child) process, whereas the (nonzero) process identifier of the child
is returned to the parent.
 exec()
 After a fork() system call, one of the two processes typically uses the exec() system
call to replace the process’s memory space with a new program.
 The exec() system call loads a binary file into memory and starts its execution. In this
manner, the two processes are able to communicate and then go their separate ways.

 wait()
 The parent can then create more children; or, if it has nothing else to do while the
child runs, it can issue a wait() system call to move itself off the ready queue until the
termination of the child. Because the call to exec() overlays the process’s address
space with a new program, the call to exec() does not return control unless an error
occurs.
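The fork()/wait() interplay described above can be demonstrated with a short sketch using Python's POSIX wrappers (POSIX systems only; the exit status 7 is an arbitrary choice for illustration).

```python
import os

pid = os.fork()                         # duplicate the calling process
if pid == 0:
    # child: fork() returned 0
    os._exit(7)                         # terminate the child with status 7
else:
    # parent: fork() returned the child's pid
    _, status = os.waitpid(pid, 0)      # block until the child terminates
    child_code = os.WEXITSTATUS(status) # recover the child's exit status
```

In a real program the child would typically call one of the exec() family (e.g. os.execvp) instead of exiting immediately.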
System calls allow user programs to interact with the operating system. A computer program makes a
system call when it requests a service from the operating system’s kernel.
SYSTEM CALL INTERFACE FOR PROCESS MANAGEMENT

A system call is initiated by the program executing a specific instruction, which triggers a
switch to kernel mode, allowing the program to request a service from the OS. The OS then
handles the request, performs the necessary operations, and returns the result back to the
program.
System calls are essential for the proper functioning of an operating system, as they provide a
standardized way for programs to access system resources. Without system calls, each
program would need to implement its methods for accessing hardware and system services,
leading to inconsistent and error-prone behavior.

Fork(): The fork() system call is used by processes to create copies of themselves. It is one of
the methods used most frequently in operating systems to create processes. After fork(), both the
parent and the child continue execution; if the parent issues wait(), its execution is suspended until
the child process has finished, after which the parent regains control.

Exit(): A system call called exit() is used to terminate a program. In environments with
multiple threads, this call indicates that the thread execution is finished. After using the exit()
system function, the operating system recovers the resources used by the process.

Wait(): In some systems, a process might need to hold off until another process has finished
running before continuing. When a parent process creates a child process, the execution of the
parent process is halted until the child process is complete. The parent process is stopped
using the wait() system call. The parent process regains control once the child process has
finished running.

A system call provides an interface between a user program and the operating system. The
structure of a system call is as follows −

When the user wants to give an instruction to the OS then it will do it through system
calls. Or a user program can access the kernel which is a part of the OS through
system calls.

It is a programmatic way in which a computer program requests a service from the


kernel of the operating system.
Types of system calls

The different system calls are as follows −

System calls for Process management

System calls for File management

System calls for Directory management

System calls for Process management

A system call is used to create a new process or a duplicate process, called a fork.

The duplicate process shares the parent’s file descriptors and register contents.
The original process is also called the parent process and the duplicate is called the
child process.

The fork call returns a value, which is zero in the child and equal to the child’s PID
(Process Identifier) in the parent. A system call like exit requests the service of
terminating a process.

Loading a program, or replacing the original memory image with a new one, needs
execution of exec. The pid helps to distinguish between the child and parent processes.

Example

Process management system calls in Linux.

fork − For creating a duplicate process from the parent process.

wait − Processes are supposed to wait for other processes to complete their work.

exec − Loads the selected program into the memory.

exit − Terminates the process.

The pictorial representation of process management system calls is as follows −


fork() − A parent process always uses a fork for creating a new child process. The
child process is generally called a copy of the parent. After execution of fork, both
parent and child execute the same program in separate processes.

exec() − This function is used to replace the program executed by a process. The child
sometimes may use exec after a fork for replacing the process memory space with a
new program executable making the child execute a different program than the parent.

exit() − This function is used to terminate the process.

wait() − The parent uses a wait function to suspend execution till a child terminates.
Using wait the parent can obtain the exit status of a terminated child.
