OS_Unit-2: Process Management
Uploaded by ankita shah

Operating System (OS)
GTU # 3140702

UNIT 2
Process and Threads Management
Process concept
● A process is a program under execution.
● It is an instance of an executing program, including the current values of the program counter, registers and variables.
● A process is an abstraction of a running program.
● Each process has its own address space:
1) Text region: stores the code of the process
2) Data region: stores variables and dynamically allocated memory
3) Stack region: stores local variables and return addresses for active procedure calls
What is a Process?

● A process can run to completion only after it has obtained all the requested hardware and software resources.
● A process may be independent of other processes in the system.
● It has its own private memory area and a virtual CPU on which it runs.
● A process is an abstraction of a running program.
Difference between process and program
● A program is a passive entity: a set of instructions stored on disk. A process is an active entity: an instance of a program in execution, with its own program counter, stack and allocated resources.
What is a Process Control Block (PCB)?
● A Process Control Block (PCB) is a data structure maintained by the operating system for every process.
● The PCB stores the collection of information about a process.
● The PCB is identified by an integer process ID (PID).
● A PCB keeps all the information needed to keep track of a process.
● The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
● The layout of a PCB is completely dependent on the operating system and may contain different information in different operating systems.
● The PCB lies in kernel memory space.
Fields of Process Control Block (PCB)
● Process ID - Unique identification for each process in the operating system.
● Process State - The current state of the process, i.e. whether it is ready, running or waiting.
● Pointer - A pointer to the parent process.
● Priority - Priority of the process.
● Program Counter - A pointer to the address of the next instruction to be executed for this process.
● CPU registers - The CPU registers whose contents must be saved when the process leaves the running state.
● I/O status information - A list of I/O devices allocated to the process.
● Accounting information - The amount of CPU time used by the process, time limits, etc.
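As an illustrative sketch only (the field names below are chosen for this example, not taken from any real kernel), the fields above can be modeled as a small Python data structure. A real PCB, such as Linux's task_struct, holds many more fields.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class PCB:
    """Toy Process Control Block holding the fields listed above."""
    pid: int                           # unique process ID
    state: str = "new"                 # new / ready / running / waiting / terminated
    parent_pid: Optional[int] = None   # pointer to the parent process
    priority: int = 0
    program_counter: int = 0           # address of the next instruction
    registers: Dict[str, int] = field(default_factory=dict)  # saved CPU registers
    io_devices: List[str] = field(default_factory=list)      # I/O status information
    cpu_time_used: float = 0.0         # accounting information

pcb = PCB(pid=42, parent_pid=1, priority=5)
pcb.state = "ready"                    # the OS moves the new process to the ready state
```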
Process states
● The state of a process is defined by the current activity of that process.
● During execution, a process changes its state.
1. New: The OS creates the process; resources are not yet allocated.
2. Ready: The process waits for the CPU to be assigned. A newly created process enters the ready state once it is ready for execution.
3. Running: The process is currently being executed and all its resources are allocated.
4. Waiting: The process waits until some event occurs, such as the completion of an input-output operation.
5. Completion or termination: When a process finishes its execution, it enters the terminated state. Its context (the Process Control Block) is deleted and the process is removed by the operating system.
Process state transitions

Sr No | State transition   | Remarks
1     | Ready to Running   | Process is dispatched
2     | Running to Ready   | Process's time slice expires
3     | Running to Blocked | Process blocks for I/O or an event
4     | Blocked to Ready   | The event the process has been waiting for occurs
5     | Ready to Exit      | Parent process may terminate it
6     | Running to Exit    | Currently running process completes


● When and how do these transitions occur (process moves from one state to another)?
1. The process blocks for input or waits for an event (e.g. the printer is not available): Running → Blocked
2. The scheduler picks another process (end of time slice, or pre-emption): Running → Ready
3. The scheduler picks this process: Ready → Running
4. Input becomes available or the event arrives (e.g. the printer becomes available): Blocked → Ready

[State diagram: the three states Running, Ready and Blocked, connected by transitions 1-4 above.]

● Processes are always either executing (running), waiting to execute (ready) or waiting for an event to occur (blocked).
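The legal transitions in the table above can be encoded as a small lookup table; this is a hypothetical sketch (the state names and helper function are illustrative, not an OS API):

```python
# Each state maps to the set of states it may legally move to,
# following transitions 1-6 in the table above.
LEGAL_TRANSITIONS = {
    "ready":   {"running", "exit"},            # dispatched; parent terminates it
    "running": {"ready", "blocked", "exit"},   # time slice expires; blocks; completes
    "blocked": {"ready"},                      # awaited event occurs
}

def can_transition(src, dst):
    return dst in LEGAL_TRANSITIONS.get(src, set())

print(can_transition("running", "blocked"))   # True
print(can_transition("blocked", "running"))   # False: must pass through ready first
```

Note that there is no "blocked to running" entry: a process whose event occurs always goes back to the ready queue and competes for the CPU again.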
Queue Diagram

[Queue diagram: a new process is admitted from disk into the ready queue in RAM; the dispatcher moves it to the CPU (running); on time-out it returns to the ready queue; if it must wait for an event it enters the blocked (event wait) queue and returns to the ready queue when the event occurs; when the process completes, it is released and exits.]
Process Creation

● The operating system creates processes in the following situations:
● Starting a new job
● A user request to create a new job
● To provide a new service of the OS
● A system call from a currently running process
The OS creates the process with the specified attributes and identifiers; the new process may itself create further sub-processes.
Process Creation
1. System initialization
⮩ At the time of system (OS) booting, various processes are created.
⮩ Foreground and background processes are created.
⮩ Background process - does not interact with the user, e.g. a process to accept mail.
⮩ Foreground process - interacts with the user.

2. Execution of a process-creation system call (fork) by a running process
⮩ A running process issues a system call (fork) to create one or more new processes to help it.
⮩ For example, a process that fetches a large amount of data and executes it may create two different processes: one to fetch the data and another to execute it.
Process Creation
3. A user request to create a new process
⮩ Start a process by clicking an icon (e.g. opening a Word file with a double click) or by typing a command.

4. Initiation of a batch job
⮩ Applicable only to batch systems found on large mainframes.
Process Termination

❏ When a process finishes its normal execution, it is deleted using exit().
❏ The process's memory space becomes free.
❏ The OS passes the child's exit status to the parent process.
❏ The OS deallocates the resources held by the process.
❏ Reasons for termination:
❏ Normal completion of operation
❏ Memory is not available
❏ Time slice expired
❏ Parent termination
❏ Failure of I/O
❏ Request from the parent process
Process Termination

1. Normal exit (voluntary)
⮩ Terminated because the process has done its work.

2. Error exit (voluntary)
⮩ The process discovers a fatal error, e.g. the user types the command cc foo.c to compile the program foo.c and no such file exists; the compiler simply exits.
Process Termination

3. Fatal error (involuntary)
⮩ An error caused by the process, often due to a program bug, e.g. executing an illegal instruction, referencing nonexistent memory or dividing by zero.

4. Killed by another process (involuntary)
⮩ A process executes a system call telling the OS to kill some other process, using the kill system call.
Process Hierarchies
● A parent process can create child processes; a child process can create its own child processes.
● UNIX has a hierarchy concept known as a process group.
● Windows has no concept of hierarchy: all processes are treated equally (it uses the handle concept instead).

[Figure: parent process P1 with child processes P2, P3 and P4; P3 is in turn the parent of child processes P5 and P6.]
Handle
● When a process is created, the parent process is given a special token called a handle.
● This handle is used to control the child process.
● A process is free to pass this token to some other process.

[Figure: parent process P1 holding handles to child processes P2, P3 and P4.]
Multiprogramming
● The real CPU switches back and forth from process to process.
● This rapid switching back and forth is called multiprogramming.
● The number of processes loaded simultaneously in memory is called the degree of multiprogramming.
Multiprogramming execution

[Figure: three processes P1, P2 and P3 in memory, each with its own logical program counter; the processor holds a single physical program counter.]

● There are three processes, one processor (CPU), three logical program counters (one per process) in memory, and one physical program counter in the processor.
● Initially the CPU is free (no process is running) and there is no data in the physical program counter.

● The CPU is allocated to process P1 (P1 is running): the data of P1 is copied from its logical program counter to the physical program counter.

● The CPU switches from P1 to P2 (P2 is running): the data of P1 is copied back to its logical program counter, and the data of P2 is copied from its logical program counter to the physical program counter.

● The CPU switches from P2 to P3 (P3 is running): the data of P2 is copied back to its logical program counter, and the data of P3 is copied from its logical program counter to the physical program counter.
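The program-counter copying described above can be sketched as a toy model (the class and field names here are invented for illustration, not real OS structures):

```python
class ToyCPU:
    """One physical program counter; processes keep logical ones in memory."""
    def __init__(self):
        self.physical_pc = None          # CPU is free: no data in the physical PC
        self.running = None

    def dispatch(self, process):
        self.physical_pc = process["logical_pc"]   # copy logical -> physical
        self.running = process

    def switch(self, next_process):
        self.running["logical_pc"] = self.physical_pc  # save physical -> logical
        self.dispatch(next_process)

p1 = {"name": "P1", "logical_pc": 100}
p2 = {"name": "P2", "logical_pc": 200}
cpu = ToyCPU()
cpu.dispatch(p1)        # P1 running
cpu.physical_pc += 4    # P1 advances one instruction
cpu.switch(p2)          # P1's progress saved back; P2 loaded
print(p1["logical_pc"], cpu.physical_pc)   # 104 200
```

The save-on-switch-out, load-on-switch-in pair is exactly the context switch discussed in the next slides, reduced to a single register.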
Context switching
● A context switch means stopping one process and restarting another process.
● When an event occurs, the OS saves the state of the active process (into its PCB) and restores the state of the new process (from its PCB).
● Context switching is pure overhead, because the system does no useful work while switching.
● Sequence of actions:
1. The OS takes control (through an interrupt).
2. It saves the context of the running process in that process's PCB.
3. It reloads the context of the new process from that process's PCB and resumes it.
Context switching

● A context switch can occur only in kernel mode.
● It is highly dependent on hardware support.
● It is among the most costly operations in an operating system.
● Situations in which a context switch is needed: multitasking, interrupt handling, and user/kernel mode switching.
THREADS
● A thread is a lightweight process created by a process.
● Processes are used to execute large, 'heavyweight' jobs such as working in Word, while threads are used to carry out smaller, 'lightweight' jobs such as auto-saving a Word document.
What is a Thread?
● A thread is a lightweight process created by a process.
● A thread is a single sequential stream of execution within a process.
● A thread has its own:
● program counter, which keeps track of which instruction to execute next;
● system registers, which hold its current working variables;
● stack, which contains the execution history.
● Every program has at least one thread.
● Each thread can execute a set of instructions independently of other threads and processes.
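A minimal Python illustration of the idea: two threads in one process share the process's data, while each keeps its own stack and program counter (the worker function and counter variable are invented for this example):

```python
import threading

counter = 0                 # data shared by all threads of the process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # shared data must be updated under a lock
            counter += 1

t1 = threading.Thread(target=worker, args=(1000,))
t2 = threading.Thread(target=worker, args=(1000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)              # 2000: both threads updated the same variable
```

Without the lock, the two threads could interleave their read-modify-write sequences and lose updates, which is exactly why shared data is the defining feature (and hazard) of threads.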
Difference between thread and process

THREAD | PROCESS
Lightweight process | Heavyweight process
The operating system is not required for thread switching | The operating system is required for process switching
One thread can read, write or even completely clean another thread's stack | Each process operates independently
If one thread is blocked and waiting, another thread in the same task can run | If a process is blocked, no further work in that process can proceed until it is unblocked
Uses fewer resources | Uses more resources
Similarities between Process & Thread
● Like processes, threads share the CPU, and only one thread is running at a time.
● Like processes, threads within a process execute sequentially.
● Like processes, threads can create children.
● Like a traditional process, a thread can be in any one of several states: running, blocked, ready or terminated.
● Like processes, threads have a program counter, stack, registers and state.
Benefits/Advantages of Threads
● Threads minimize context switching time.
● Threads provide concurrency within a process.
● Efficient communication.
● It is easier to create and context switch threads than processes.
● Threads can execute in parallel on multiprocessors.
● With threads, an application can avoid per-process overheads: thread creation, deletion and switching are cheaper than for processes.
● Threads have full access to the process's address space (easy sharing).
Thread Lifecycle
1. New: The thread is created.
2. Ready: The OS puts the runnable thread into the ready queue.
3. Running: The highest-priority ready thread enters the running state.
4. Blocked: The thread is waiting for a lock to access an object.
5. Waiting: The thread is waiting for another thread to perform an action.
6. Sleeping: The thread sleeps for a specified time; after the time expires it enters the ready state.
7. Dead: The thread completes its task or operation.
Single Threaded Process VS Multiple Threaded Process
▪ A single-threaded process is a process with a single thread.
▪ A multi-threaded process is a process with multiple threads.
▪ Each of the multiple threads has its own registers, stack and program counter, but they share the code and data segments.
Types of Threads
1. Kernel Level Threads
2. User Level Threads

[Figure: user-level threads managed in user space, layered above kernel-level threads managed by the kernel.]
User level thread
● User-level threads are small and much faster than kernel-level threads.
● They are represented by a program counter (PC), stack and registers.
● There is no kernel involvement in synchronization for user-level threads.
● They are created by runtime libraries and are transparent to the OS.
● They do not invoke the kernel for scheduling decisions.
● This is also called many-to-one mapping.
Advantages
● User-level threads are easier and faster to create.
● They can also be more easily managed.
● User-level threads can be run on any operating system.
● No kernel-mode privileges are required for thread switching.
Disadvantages
● Multithreaded applications using user-level threads cannot take advantage of multiprocessing.
Kernel level thread
● Handled by the operating system directly; thread management is done by the kernel.
● Slower than user-level threads.
● Threads are created and controlled by system calls.

Advantages
● Multiple threads of the same process can be scheduled on different processors.
● The kernel routines themselves can be multithreaded.
● If one thread is blocked, another thread of the same process can be scheduled by the kernel.

Disadvantages
● Slower than user-level threads.
Difference between user level and kernel level thread

USER LEVEL THREAD | KERNEL LEVEL THREAD
User threads are implemented by users. | Kernel threads are implemented by the OS.
The OS does not recognize user-level threads. | Kernel threads are recognized by the OS.
Implementation of user threads is easy. | Implementation of kernel threads is complex.
Context switch time is less. | Context switch time is more.
If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread within the same process can continue execution.
Multithreaded applications cannot take advantage of multiprocessing. | Multithreaded applications can take advantage of multiprocessing.
Hybrid Threads
● Combine the advantages of user-level and kernel-level threads.
● The system uses kernel-level threads and then multiplexes user-level threads onto some or all of the kernel threads.
● This gives the programmer the flexibility to choose how many kernel-level threads to use and how many user-level threads to multiplex onto each one.
● The kernel is aware of only the kernel-level threads and schedules those.
Multi Threading Models
1) ONE TO ONE

● Each user thread is mapped to one kernel thread.
● Creates more concurrency.
● Multiple threads can run in parallel on a multiprocessor system.
● As the number of threads increases, memory consumption also increases.
● System performance can degrade because of the per-thread kernel overhead.
● The problem with this model is that creating a user thread requires creating the corresponding kernel thread.
Multi Threading Models
2) MANY TO ONE

● Multiple user threads are mapped to one kernel thread.
● System performance can be improved by customizing the thread library's scheduling.
● The problem with this model is that one user thread can block the entire process, because there is only one kernel thread.
● It does not allow an individual process to be split across multiple CPUs.
● Multiple threads cannot run in parallel, as only one thread can access the kernel at a time.
Multi Threading Models
3) MANY TO MANY

● Multiple user threads are multiplexed onto more than one kernel thread.
● The advantage of this model is that a user thread cannot block the entire process, because there are multiple kernel threads.
● There can be as many user threads as required, and their corresponding kernel threads can run in parallel on a multiprocessor.
● One limitation is that the OS design becomes complicated.
System calls
● A system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on.
● A system call is a way for programs to interact with the operating system.
● A computer program makes a system call when it makes a request to the operating system's kernel.
● System calls provide the services of the operating system to user programs via the Application Program Interface (API).
● They provide an interface between a process and the operating system, allowing user-level processes to request services of the operating system.
● System calls are the only entry points into the kernel.
● All programs needing resources must use system calls.
System calls
● ps (process status): The ps command (a user-level utility built on system calls) provides information about the currently running processes, including their process identification numbers (PIDs).
● fork: The fork system call creates a new process, called the child process, which runs concurrently with the process that made the fork() call (the parent process).
● wait: The wait system call blocks the calling process until one of its child processes exits or a signal is received. After the child process terminates, the parent continues its execution at the instruction after the wait call.
● exit: The exit system call terminates the running process normally.
● exec family: The exec family of functions replaces the currently running process image with a new program.
System calls
● ps (process status): Provides information about the currently running processes, including their process identification numbers (PIDs).
Syntax: ps [option]
If invoked without any option, it displays at least the two processes currently on the terminal: the shell and ps itself.

● PID - the unique process ID
● TTY - the type of terminal that the user is logged in to
● TIME - the time, in minutes and seconds, that the process has been running
System calls
● fork: Creates a new process (the child), which runs concurrently with the process that makes the fork() call (the parent).
Syntax: pid = fork();
What the fork system call does:
1. Creates a child process that is a clone of the parent.
2. The child runs the same program as the parent.
3. The child inherits open file descriptors from the parent.
4. The child begins life with the same register values as the parent.
What the kernel does:
1. Allocates a slot in the process table for the new process.
2. Assigns a unique ID number to the child.
3. Makes a logical copy of the parent process.
4. Increments the file table counters.
System calls
● Uses of fork:
A process forks when it wants to duplicate itself so that the parent and child can execute different sections of code at the same time.
Example: in networking, a parent waits for service requests from clients; when a request arrives, the parent calls fork and lets the child handle the request, then goes back to waiting for the next request.
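A Unix-only sketch of the fork/wait/exit pattern described in this section (the child's exit status 7 is an arbitrary value chosen for the example):

```python
import os

pid = os.fork()                  # clone the calling process
if pid == 0:
    # Child: runs the same program; it could call an exec function here
    # to replace itself with a different program.
    os._exit(7)                  # exit status is passed back to the parent
else:
    # Parent: blocks until the child terminates.
    _, status = os.waitpid(pid, 0)
    print(os.waitstatus_to_exitcode(status))   # prints the child's status, 7
```

os.waitstatus_to_exitcode requires Python 3.9+; on older versions, os.WEXITSTATUS(status) extracts the same value. os.fork is unavailable on Windows.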
SCHEDULING
What is scheduling?
● Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
● It is an essential part of a multiprogramming operating system.

Objectives
● Share time fairly
● Prevent starvation of a process
● Use the processor efficiently
● Have low overhead
SCHEDULING
Characteristics of a good scheduling algorithm:

CPU utilization - the algorithm should keep the CPU as busy as possible.
Throughput - the amount of work completed per unit of time. The scheduling algorithm should maximize the number of jobs processed per time unit.
Response time - the time taken to start responding to a request. A scheduler must aim to minimize response time for interactive users.
Turnaround time - the time between the submission of a job and its completion.
Turnaround Time = Waiting Time + Burst Time (Execution Time)
Waiting time - the time a job waits for a resource. The aim is to minimize waiting time.
Scheduling Queues
● All processes, upon entering the system, are stored in the job queue.
● Processes in the ready state are placed in the ready queue.
● Processes waiting for a device to become available are placed in device queues. There is a unique device queue for each I/O device.

A new process is initially put in the ready queue, where it waits until it is selected for execution (dispatched). Once the process is assigned to the CPU and is executing, one of the following events can occur:
● The process issues an I/O request and is placed in an I/O queue.
● The process creates a new subprocess and waits for its termination.
● The process is removed forcibly from the CPU, as a result of an interrupt, and is put back in the ready queue.
Types of Scheduler
1. Long-term scheduler
2. Medium-term scheduler
3. Short-term scheduler
Types of Scheduler
1) LONG TERM SCHEDULER
● Decides which programs are admitted to the job queue. From the job queue, it selects processes and loads them into memory for execution.
● Its primary aim is to maintain a good degree of multiprogramming. An optimal degree of multiprogramming means the average rate of process creation equals the average departure rate of processes from memory.
● Mainly used in batch systems.

2) SHORT TERM SCHEDULER
● Also known as the CPU scheduler; it works together with the dispatcher.
Types of Scheduler
3) MEDIUM TERM SCHEDULER
● Removes (swaps out) processes from memory.
● Reduces the degree of multiprogramming.
● This is also called suspending and resuming processes.
● It is also useful for improving the mix of I/O-bound and CPU-bound processes in memory.
Difference between types of scheduler
● Long-term scheduler: controls the degree of multiprogramming; runs least frequently.
● Short-term scheduler: selects the next process for the CPU; runs most frequently.
● Medium-term scheduler: swaps processes in and out of memory; runs at an intermediate frequency.
Scheduling Algorithm
● Scheduling is the process of determining which process will own the CPU for execution while another process is on hold.
● The main task of CPU scheduling is to make sure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue for execution.

Preemptive Scheduling
In preemptive scheduling, tasks are usually assigned priorities. Sometimes it is necessary to run a higher-priority task before a lower-priority task, even if the lower-priority task is still running; the lower-priority task resumes when the higher-priority task finishes its execution.
Difference between Preemptive and Non-Preemptive

Preemptive | Non-Preemptive
Resources are allocated to a process for a limited time. | Once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
A process can be interrupted in between. | A process cannot be interrupted until it terminates.
CPU utilization is high. | CPU utilization is lower.
Incurs the cost associated with accessing shared data. | Does not incur that cost.
More complex. | Simple but can be inefficient.
Examples: Round Robin, Shortest Remaining Time First. | Examples: FCFS, non-preemptive SJF, non-preemptive Priority.
Types of Scheduling Algorithm
1) FCFS (FIRST COME FIRST SERVE)

EXAMPLE 1 (burst times P1 = 21, P2 = 3, P3 = 6, P4 = 2, all arriving at time 0)
● Execution order: P1, P2, P3, P4
● Average waiting time: (0 + 21 + 24 + 30) / 4 = 18.75 ms
● Turnaround Time = Waiting Time + Burst Time
P1 = 0 + 21 = 21
P2 = 21 + 3 = 24
P3 = 24 + 6 = 30
P4 = 30 + 2 = 32
1) FCFS (FIRST COME FIRST SERVE)

● A non-preemptive scheduling algorithm that is easy to understand and use.
● The process that arrives first is executed first.
● Example of FCFS: buying tickets at a ticket counter.
● Simple to use and implement.
● Poor in performance due to high waiting times.
● Resource utilization is poor in FCFS.
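The arithmetic of Example 1 can be checked with a short FCFS calculator (a sketch; the burst times are the ones inferred from the example, and the function name is illustrative):

```python
def fcfs(bursts):
    """Waiting and turnaround times for FCFS with all arrivals at time 0."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)      # a process waits for everything before it
        clock += b
    tats = [w + b for w, b in zip(waits, bursts)]
    return waits, tats

waits, tats = fcfs([21, 3, 6, 2])            # P1..P4 from Example 1
print(waits, sum(waits) / len(waits))        # [0, 21, 24, 30] 18.75
print(tats)                                  # [21, 24, 30, 32]
```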
EXAMPLE 2
PROCESS | BURST TIME
P1 | 4
P2 | 7
P3 | 3
P4 | 3
P5 | 5

EXAMPLE 3
PROCESS | BURST TIME
P1 | 10
P2 | 6
P3 | 12
P4 | 15
SOLUTION OF EXAMPLE 2 (FCFS)

PROCESS | BURST TIME | WAITING TIME
P1 | 4 | 0
P2 | 7 | 4
P3 | 3 | 11
P4 | 3 | 14
P5 | 5 | 17

● Average waiting time: (0 + 4 + 11 + 14 + 17) / 5 = 9.2 ms

SOLUTION OF EXAMPLE 3 (FCFS)

PROCESS | BURST TIME | WAITING TIME
P1 | 10 | 0
P2 | 6 | 10
P3 | 12 | 16
P4 | 15 | 28

● Average waiting time: (0 + 10 + 16 + 28) / 4 = 13.5 ms
2) SJF (SHORTEST JOB FIRST)
● Based on the length of each process's CPU burst time.
● Reduces average waiting time compared with FCFS.
● Non-preemptive algorithm.
ALGORITHM:
1) Sort all the processes according to arrival time.
2) Select the process with the minimum arrival time and minimum burst time.
3) After a process completes, form a pool of the processes that arrived before its completion, and select from that pool the process with the minimum burst time.

EXAMPLE (all arriving at time 0):

PROCESS | BURST TIME | WAITING TIME | TURNAROUND TIME
P1 | 5 | 6 | 6 + 5 = 11
P2 | 2 | 0 | 0 + 2 = 2
P3 | 6 | 11 | 11 + 6 = 17
P4 | 4 | 2 | 2 + 4 = 6

● Average waiting time: (6 + 0 + 11 + 2) / 4 = 4.75 ms
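With all arrivals at time 0, the SJF example above reduces to sorting by burst time; a minimal sketch (function name illustrative):

```python
def sjf(bursts):
    """Non-preemptive SJF waiting times, all processes arriving at time 0."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:              # shortest job among the waiting ones runs next
        waits[i] = clock
        clock += bursts[i]
    return waits

waits = sjf([5, 2, 6, 4])                    # P1..P4 from the example
print(waits)                                 # [6, 0, 11, 2]
print(sum(waits) / len(waits))               # 4.75
```

With staggered arrival times, step 3 of the algorithm above applies instead: only processes that have already arrived are eligible for the sort at each completion.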
EXAMPLE 1.
PROCESS | BURST TIME | PRIORITY
P1 | 10 | 2
P2 | 6 | 1
P3 | 12 | 4
P4 | 15 | 3

EXAMPLE 2.
PROCESS | BURST TIME | PRIORITY
P1 | 4 | 2
P2 | 7 | 1
P3 | 3 | 3
P4 | 2 | 4
SOLUTION OF EXAMPLE 1 (SJF)
PROCESS | BURST TIME | WAITING TIME | TURNAROUND TIME
P1 | 10 | 6 | 6 + 10 = 16
P2 | 6 | 0 | 0 + 6 = 6
P3 | 12 | 16 | 16 + 12 = 28
P4 | 15 | 28 | 28 + 15 = 43

● Average waiting time: (0 + 6 + 16 + 28) / 4 = 12.5 ms

SOLUTION OF EXAMPLE 2 (SJF)
PROCESS | BURST TIME | WAITING TIME | TURNAROUND TIME
P1 | 4 | 5 | 5 + 4 = 9
P2 | 7 | 9 | 9 + 7 = 16
P3 | 3 | 2 | 2 + 3 = 5
P4 | 2 | 0 | 0 + 2 = 2

● Average waiting time: (5 + 9 + 2 + 0) / 4 = 4 ms

3) PRIORITY SCHEDULING
● Each process is assigned a priority. The process with the highest priority is executed first, and so on.
● Processes with the same priority are executed on a first come, first served basis.

PROBLEMS: Waiting time is high for lower-priority processes; starvation is possible.

EXAMPLE (lower number = higher priority, all arriving at time 0):

PROCESS | BURST TIME | PRIORITY | WAITING TIME | TURNAROUND TIME
P1 | 5 | 3 | 10 | 10 + 5 = 15
P2 | 2 | 4 | 15 | 15 + 2 = 17
P3 | 6 | 1 | 0 | 0 + 6 = 6
P4 | 4 | 2 | 6 | 6 + 4 = 10

● Average waiting time: (10 + 15 + 0 + 6) / 4 = 7.75 ms
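Non-preemptive priority scheduling with simultaneous arrivals is the same computation as SJF, keyed on priority instead of burst time; a sketch reproducing the example (lower number = higher priority):

```python
def priority_schedule(bursts, priorities):
    """Waiting times for non-preemptive priority scheduling, arrivals at 0."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:              # highest priority (lowest number) runs first
        waits[i] = clock
        clock += bursts[i]
    return waits

waits = priority_schedule([5, 2, 6, 4], [3, 4, 1, 2])   # P1..P4
print(waits)                       # [10, 15, 0, 6]
print(sum(waits) / len(waits))     # 7.75
```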
4) ROUND ROBIN SCHEDULING
● Round Robin is a preemptive process scheduling algorithm.
● Each process is given a fixed time to execute, called a quantum.
● Once a process has executed for the given time period, it is preempted and another process executes for its time period.
● Context switching is used to save the states of preempted processes.
● QUANTUM SLICE: 2 UNITS (burst times: P1 = 5, P2 = 3, P3 = 1, P4 = 2, P5 = 3)

PROCESS | WAITING TIME | TURNAROUND TIME
P1 | 0 + (9 - 2) + (13 - 11) = 9 | 9 + 5 = 14
P2 | 2 + (11 - 4) = 9 | 9 + 3 = 12
P3 | 4 | 4 + 1 = 5
P4 | 5 | 5 + 2 = 7
P5 | 7 + (12 - 9) = 10 | 10 + 3 = 13

● Average waiting time: (9 + 9 + 4 + 5 + 10) / 5 = 7.4 ms

Gantt chart:
| P1 | P2 | P3 | P4 | P5 | P1 | P2 | P5 | P1 |
0    2    4    5    7    9    11   12   13   14
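The Gantt chart above can be reproduced with a small round-robin simulation (the burst times P1=5, P2=3, P3=1, P4=2, P5=3 are inferred from the chart; the function name is illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Waiting times under round robin, all processes arriving at time 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue, clock = deque(range(len(bursts))), 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)      # preempted: rejoin the back of the ready queue
        else:
            finish[i] = clock    # waiting time = finish time - burst time
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = round_robin([5, 3, 1, 2, 3], 2)      # quantum = 2
print(waits, sum(waits) / len(waits))        # [9, 9, 4, 5, 10] 7.4
```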
ADVANTAGES
● In terms of average response time, this algorithm gives good performance.
● All the jobs get a fair allocation of CPU.
● There are no issues of starvation or convoy effect.
● The algorithm is cyclic in nature.

DISADVANTAGES
● It spends more time on context switches.
● For a small quantum, scheduling is time-consuming.
● It can give larger average waiting and response times than SJF.
● Throughput is low.
● If the time quantum is small, the Gantt chart becomes very large.
EXAMPLE

QUANTUM SLICE: 1 UNIT
PROCESS | BURST TIME
A | 4
B | 1
C | 8
D | 1

QUANTUM SLICE: 2 UNITS
PROCESS | BURST TIME
P1 | 6
P2 | 5
P3 | 2
P4 | 3
P5 | 7
SOLUTION OF EXAMPLE (quantum = 1)
PROCESS | BURST TIME | WAITING TIME | TURNAROUND TIME
A | 4 | 0 + 3 + 1 + 1 = 5 | 5 + 4 = 9
B | 1 | 1 | 1 + 1 = 2
C | 8 | 2 + 2 + 1 + 1 = 6 | 6 + 8 = 14
D | 1 | 3 | 3 + 1 = 4

● Average waiting time: (5 + 1 + 6 + 3) / 4 = 3.75 ms

SOLUTION OF EXAMPLE (quantum = 2)
PROCESS | BURST TIME | WAITING TIME | TURNAROUND TIME
P1 | 6 | 0 + 8 + 5 = 13 | 13 + 6 = 19
P2 | 5 | 2 + 8 + 5 = 15 | 15 + 5 = 20
P3 | 2 | 4 | 4 + 2 = 6
P4 | 3 | 6 + 6 = 12 | 12 + 3 = 15
P5 | 7 | 8 + 5 + 3 = 16 | 16 + 7 = 23

● Average waiting time: (13 + 15 + 4 + 12 + 16) / 5 = 12 ms
COMPARISON BETWEEN FCFS AND ROUND ROBIN

FCFS | Round Robin
Non-preemptive in nature. | Preemptive in nature.
Response time is high. | Provides good response time.
Inconvenient to use in a time-sharing system. | Designed for time-sharing systems and hence convenient to use.
Average waiting time is generally not minimal. | Average waiting time is generally lower.
Processes are simply processed in the order of their arrival. | Similar to FCFS in processing, but uses a time quantum.
Example of non-preemptive scheduling. | Example of preemptive scheduling.
COMPARISON OF CPU SCHEDULING ALGORITHMS

              | FCFS | Round Robin | PRIORITY | SJF
POLICY        | Non-preemptive | Preemptive | Non-preemptive | Non-preemptive
ADVANTAGES    | Easy to implement; minimum overhead | Good response time; fair share of CPU time | Ensures fast completion of important jobs | Minimizes average waiting time
DISADVANTAGES | Average waiting time is high | Requires selection of a good time slice | Indefinite postponement of some jobs | Starvation problem
THREAD SCHEDULING
LOAD SHARING
● Threads are not assigned to a particular processor; an idle processor selects a thread from a global queue that serves all processors.
● Advantages: no centralized scheduler is required; load is distributed evenly.
● Disadvantage: a high degree of coordination is required.
Variants:
● FCFS: on arrival, a thread is placed in the global queue; the next thread is selected when a processor becomes idle.
● Smallest number of threads first: the queue is organized by priority; the job with the fewest unscheduled threads is given the highest priority and selected first.
● Preemptive smallest number of threads first: as above, but an arriving job with fewer threads may preempt currently running threads.
THREAD SCHEDULING
GANG SCHEDULING
● A group of related threads is scheduled as a unit.
● All members of the gang run simultaneously on different time-shared CPUs.

DEDICATED PROCESSOR ASSIGNMENT
● Provides implicit scheduling for the duration of program execution.
● Processors are chosen from the available pool.

DYNAMIC SCHEDULING
● The number of threads in a process is altered dynamically by the application.
● The operating system is involved in making scheduling decisions.
● It adjusts the load to improve processor utilization.
REAL TIME SCHEDULING
● Hard real-time system: tasks must be completed within their required deadlines. Failure to meet a deadline leads to a critical, catastrophic system failure such as physical damage or loss of life.
● Firm real-time system: a few missed deadlines will not lead to total failure, but missing more than a few may lead to complete and catastrophic failure.
● Soft real-time system: gives real-time tasks priority over non-real-time tasks. Performance degradation from missing several deadlines is tolerated, with decreased quality but no critical consequences.
REAL TIME SCHEDULING

CHARACTERISTICS
Determinism: operations are performed at fixed times or within fixed intervals.
Responsiveness: the time required to service an interrupt.
User control: the user can specify paging or process swapping, decide which processes must reside in main memory, select the algorithm for disk scheduling, establish the rights of processes, and control task priorities.
Reliability: the system must control process failures; the mean time between failures should be very high.
CLASSES OF ALGORITHMS

Static table-driven approaches:
● These algorithms perform a static analysis of the scheduling problem and capture the schedules that are feasible.
● The analysis produces a schedule that determines, at run time, when each task must begin execution.
● Input required: periodic arrival time, execution time, periodic ending deadline and relative priority of each task.

Static priority-driven preemptive approaches:
● Static analysis is used to assign priorities among the various tasks for preemptive scheduling.
● Priority is related to the time constraints of each task.
CLASSES OF ALGORITHMS
Dynamic planning-based approaches:
Feasible schedules are identified dynamically (at run time). An arriving task is accepted for execution only if it is feasible to meet its time constraints within the fixed scheduling interval.

Dynamic best-effort approaches:
These approaches consider deadlines rather than feasible schedules: the system tries to meet all deadlines and aborts any task whose deadline has passed. This approach is widely used in most real-time systems.
RATE MONOTONIC PRIORITY ASSIGNMENT (RM)
● Static priority scheduling
● The process with the shortest period gets the highest priority
● Processor utilization:
U = C1/T1 + C2/T2 + ... + Cn/Tn
where n is the number of processes in the process set, Ci is the computation time of the process, Ti is the time period for the process to run, and U is the processor utilization.
ADVANTAGES: Simple to understand
Easy to implement
Stable algorithm
DISADVANTAGES: Lower CPU utilization
Processes involved should not share resources with other processes
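As a sketch, the utilization sum above can be checked against the classic Liu & Layland bound n(2^(1/n) − 1), which is a sufficient (but not necessary) condition for RM schedulability; sets with U between the bound and 1 need a more exact test. The helper name `rm_test` is illustrative, not a standard API:

```python
def rm_test(tasks):
    """tasks: list of (C, T) pairs -- computation time and period.

    Returns the utilization U = sum(Ci/Ti) and a verdict based on the
    Liu & Layland bound n*(2^(1/n) - 1), which is sufficient but not
    necessary for rate-monotonic schedulability.
    """
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    if u <= bound:
        return u, "schedulable (passes the sufficient bound)"
    if u <= 1:
        return u, "inconclusive (fails the bound but U <= 1)"
    return u, "not schedulable (U > 1)"

# The slide's first example: T1=(0.5, 3), T2=(1, 4), T3=(2, 6)
print(rm_test([(0.5, 3), (1, 4), (2, 6)]))
```

For three tasks the bound is about 0.780, so the example set with U = 0.75 passes outright.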
RATE MONOTONIC PRIORITY ASSIGNMENT (RM)
U = 0.5/3 + 1/4 + 2/6
  = 0.167 + 0.25 + 0.333
  = 0.75 (LESS THAN 1, SO THE SET CAN BE EXECUTED)
NOTE:
T1 executes 0.5 units in every 3 time slices
T2 executes 1 unit in every 4 time slices
T3 executes 2 units in every 6 time slices
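The RM schedule for this task set can be reproduced with a small tick-based simulation over one hyperperiod (LCM of the periods). This is a sketch: `rm_simulate` is an illustrative helper, the 0.5-unit tick matches T1's execution time above, and exact `Fraction` arithmetic avoids floating-point drift:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

# Task set from the slide: (name, execution time C, period T).
# Deadline is assumed equal to period (implicit-deadline model).
TASKS = [("T1", Fraction(1, 2), 3), ("T2", 1, 4), ("T3", 2, 6)]

def rm_simulate(tasks, dt=Fraction(1, 2)):
    """Tick-based RM simulation; returns who runs in each dt-sized slot."""
    hyper = reduce(lambda a, b: a * b // gcd(a, b), (t[2] for t in tasks))
    remaining = {name: 0 for name, _, _ in tasks}
    schedule = []
    t = Fraction(0)
    while t < hyper:
        for name, c, period in tasks:
            if t % period == 0:          # new job released
                remaining[name] = c
        # RM rule: the ready task with the shortest period wins
        ready = [(period, name) for name, _, period in tasks if remaining[name] > 0]
        if ready:
            _, name = min(ready)
            remaining[name] -= dt
            schedule.append(name)
        else:
            schedule.append(None)        # CPU idle
        t += dt
    return schedule

sched = rm_simulate(TASKS)
print(sched[:8])
```

Reading the output in 0.5-unit slots reproduces the timeline walked through on the next slide: T1 first, then T2 until t=1.5, then T3 until t=3, and so on.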
RATE MONOTONIC PRIORITY ASSIGNMENT (RM)
● The task with the shorter period has higher priority, so T1 has the highest priority, T2 has intermediate priority, and T3 has the lowest priority. At t=0 all the tasks are released. T1 has the highest priority, so it executes first until t=0.5.
● At t=0.5 task T2 has higher priority than T3, so it executes first for one time unit until t=1.5. After its completion only T3 remains, so it starts its execution and executes until t=3.
● At t=3 T1 is released; as it has higher priority than T3, it preempts T3 and executes until t=3.5. After that the remaining part of T3 executes.
● At t=4 T2 is released and completes its execution, as there is no other task running in the system at this time.
● At t=6 both T1 and T3 are released at the same time, but T1 has higher priority due to its shorter period, so it preempts T3 and executes until t=6.5; after that T3 starts running and executes until t=8.
● At t=8 T2, with higher priority than T3, is released, so it preempts T3 and starts its execution.
RATE MONOTONIC PRIORITY ASSIGNMENT (RM)

TASK   CPU BURST TIME/EXECUTION TIME   PERIOD/TIME
T1     6                               9
T2     5                               15
T3     1                               5

U = 6/9 + 5/15 + 1/5 = 1.2
It is more than 1, so the test fails: the set cannot be scheduled.

TASK   CPU BURST TIME/EXECUTION TIME   PERIOD/TIME
T1     50                              100
T2     30                              200
T3     100                             500

U = 50/100 + 30/200 + 100/500 = 0.85
It is less than 1, so the test passes: the set can be scheduled.
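The arithmetic for both tables can be verified in a couple of lines (a sketch; the set names are illustrative):

```python
# Utilization U = sum(Ci / Ti) for the two task sets tabulated above.
set_a = [(6, 9), (5, 15), (1, 5)]           # U > 1: cannot be scheduled
set_b = [(50, 100), (30, 200), (100, 500)]  # U <= 1: passes the test

for name, tasks in (("set A", set_a), ("set B", set_b)):
    u = sum(c / t for c, t in tasks)
    verdict = "U <= 1, can be scheduled" if u <= 1 else "U > 1, cannot be scheduled"
    print(f"{name}: U = {u:.3f} -> {verdict}")
```

Note that 6/9 + 5/15 + 1/5 is exactly 1.2; the slide's 1.199 comes from rounding each term to three decimals before summing.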
EARLIEST DEADLINE FIRST SCHEDULING (EDF)
● An optimal dynamic priority scheduling algorithm used in real-time systems.
● It can be used for both static and dynamic real-time scheduling.
● EDF assigns priorities to tasks according to their absolute deadlines: the task whose deadline is closest gets the highest priority.
● In EDF, an executing task can be preempted whenever another periodic instance with an earlier deadline becomes ready for execution.
ADVANTAGES: It is optimal
Gives the best CPU utilization
DISADVANTAGES: Needs dynamic priorities
Performance degrades under overload
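Unlike RM's sufficient bound, EDF for periodic tasks whose deadlines equal their periods has an exact utilization test: the set is feasible if and only if U ≤ 1. A minimal sketch (the helper name `edf_schedulable` is illustrative):

```python
# Exact EDF test for implicit-deadline periodic tasks (deadline == period):
# the task set is schedulable if and only if U = sum(Ci/Ti) <= 1.
def edf_schedulable(tasks):
    """tasks: list of (C, T) pairs; True iff utilization does not exceed 1."""
    return sum(c / t for c, t in tasks) <= 1

print(edf_schedulable([(1, 4), (2, 6), (3, 8)]))  # the example set on the next slide
```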
EARLIEST DEADLINE FIRST SCHEDULING (EDF)
● U = 1/4 + 2/6 + 3/8 = 0.25 + 0.333 + 0.375 = 0.958 (LESS THAN 1, SO THE SET CAN BE SCHEDULED)
EARLIEST DEADLINE FIRST SCHEDULING (EDF)
● At t=0 all the tasks are released. T1 has the highest priority as its deadline (t=4) is earlier than T2's (t=6) and T3's (t=8).
● At t=1 the absolute deadlines are compared again; T2 has the shorter deadline, so it executes, and after that T3 starts execution. At t=4 T1 is released; at this instant both T1 and T3 have the same deadline, so the tie is broken randomly and we continue to execute T3.
● At t=6 T2 is released; T1's deadline is earlier than T2's, so T1 starts execution, and after that T2 begins to execute. At t=8 T1 and T2 again have the same deadline (t=12), so the tie is broken randomly: T2 continues its execution and then T1 completes. At t=12 T1 and T3 have the same deadline (t=16), therefore the tie is broken randomly and we continue to execute T3.
● At t=13 T1 begins its execution and ends at t=14. Now T2 is the only task in the system, so it completes its execution.
● At t=16 T1 and T3 are released together; priorities are decided according to absolute deadlines, so T1 executes first as its deadline is t=20 and T3's deadline is t=24. After T1's completion T3 starts. At t=18 T2 is released with deadline t=24, equal to T3's, so the tie is again broken randomly and we continue to execute T3.
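The EDF trace above can be reproduced with a tick-based simulation over one hyperperiod. This is a sketch: `edf_simulate` is an illustrative helper, and where the slide breaks deadline ties "randomly", the code resolves them by keeping the currently running task, which avoids a needless preemption and matches the walkthrough:

```python
from functools import reduce
from math import gcd

# Task set from the slide: (name, execution time C, period T); deadline = period.
TASKS = [("T1", 1, 4), ("T2", 2, 6), ("T3", 3, 8)]

def edf_simulate(tasks):
    """Tick-based EDF over one hyperperiod; returns who runs in each unit slot."""
    hyper = reduce(lambda a, b: a * b // gcd(a, b), (t[2] for t in tasks))
    rem = {name: 0 for name, _, _ in tasks}   # remaining execution budget
    dl = {name: 0 for name, _, _ in tasks}    # current absolute deadline
    schedule, prev = [], None
    for t in range(hyper):
        for name, c, period in tasks:
            if t % period == 0:               # new job released
                rem[name], dl[name] = c, t + period
        ready = [name for name, _, _ in tasks if rem[name] > 0]
        if not ready:
            schedule.append(None)             # CPU idle
            prev = None
            continue
        earliest = min(dl[name] for name in ready)
        tied = [name for name in ready if dl[name] == earliest]
        # Tie-break: keep the currently running task if it is among the
        # earliest-deadline candidates; otherwise pick deterministically.
        run = prev if prev in tied else min(tied)
        rem[run] -= 1
        schedule.append(run)
        prev = run
    return schedule

sched = edf_simulate(TASKS)
print(sched[:10])
```

With U = 23/24, the schedule has exactly one idle slot per 24-unit hyperperiod.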
