updated Process management Chapter 2

The document provides an overview of processes in operating systems, detailing process management, memory layout, and the five-state model of processes. It discusses various process scheduling techniques, including First-Come, First-Served, Shortest Job First, Round Robin, and Priority Scheduling, along with their advantages and disadvantages. Additionally, it covers interprocess communication methods and the creation and termination of processes, highlighting the importance of resource management and scheduling in efficient operating system performance.

Chapter 2

Process
What is a process?
A process is a program in execution.

What is process management?
Process management refers to the activities involved in managing the execution of multiple processes in an operating system.
Layout of a process in memory:
Stack: Contains temporary data, such as function parameters, return addresses, and local variables.
Heap: Memory that is dynamically allocated to the process during its run time.
Data Section: Contains global variables.
Text Section: The executable code.
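As a concrete illustration (a minimal C sketch of my own, not from the slides), the comments below mark which section of the process each object lives in:

#include <stdlib.h>

int counter = 0;                         /* data section: global variable          */

int square(int x)                        /* text section: executable code          */
{
    int result = x * x;                  /* stack: local variable; the parameter x
                                            and the return address also live here  */
    return result;
}

int main(void)
{
    int *buf = malloc(16 * sizeof(int)); /* heap: memory allocated at run time     */
    buf[0] = square(counter);
    free(buf);
    return 0;
}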
Five-state model:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an I/O completion or the reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
What is Process Scheduling?

Process Scheduling is an OS task that schedules processes in their different states, such as ready, waiting, and running.

Types of Process Schedulers

There are mainly three types of process schedulers:
Long-Term Scheduler: moves processes from New to Ready.
Short-Term Scheduler: moves processes from Ready to Running.
Medium-Term Scheduler: responsible for suspending and resuming processes. It mainly does swapping (moving processes from main memory to disk and vice versa).
Context Switch:
Switching the CPU to another process requires saving the state of the current process and loading the saved state of the new process.
Interprocess Communication
1. Independent processes
2. Cooperating processes
• One way of communicating using shared memory can be imagined like this: suppose process1 and process2 are executing simultaneously and share some resources or information. Process1 generates information about certain computations or resources being used and keeps it as a record in shared memory. When process2 needs this shared information, it checks the record stored in shared memory, notes the information generated by process1, and acts accordingly. Processes can use shared memory both for extracting information recorded by another process and for delivering specific information to other processes.
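A minimal C sketch of this idea, using POSIX shared memory as one possible mechanism (the segment name "/chapter2_demo" and the record text are illustrative assumptions, not from the slides; compile with -lrt on Linux):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/chapter2_demo";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);      /* create the segment  */
    ftruncate(fd, 4096);                                   /* set its size        */
    char *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);                   /* map it into memory  */
    sprintf(ptr, "record produced by process1");           /* write the record    */
    /* process2 would shm_open() the same name, mmap() it, and read the record. */
    return 0;
}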
IPC in Message-Passing Systems
 Naming
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.
 Synchronization
• Blocking send. The sending process is blocked until the message is
received by the receiving process or by the mailbox.
• Nonblocking send. The sending process sends the message and
resumes operation.
• Blocking receive. The receiver blocks until a message is available.
• Nonblocking receive. The receiver retrieves either a valid message or a
null.
• In message passing, each process has a unique identifier,
known as a process ID, and messages are sent from one
process to another using this identifier. When a process
sends a message, it specifies the recipient process ID and
the contents of the message, and the operating system is
responsible for delivering the message to the recipient
process. The recipient process can then retrieve the
contents of the message and respond, if necessary.
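A minimal C sketch of the send()/receive() primitives, using POSIX message queues as one possible realization (the queue name "/chapter2_mq" and the message text are illustrative assumptions; compile with -lrt on Linux). By default mq_send() blocks when the queue is full and mq_receive() blocks when it is empty, matching the blocking variants above:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t q = mq_open("/chapter2_mq", O_CREAT | O_RDWR, 0666, &attr);

    mq_send(q, "hello", strlen("hello") + 1, 0);   /* send(P, message)    */

    char buf[128];
    mq_receive(q, buf, sizeof(buf), NULL);         /* receive(Q, message) */
    printf("got: %s\n", buf);

    mq_close(q);
    mq_unlink("/chapter2_mq");
    return 0;
}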
Buffering
Zero capacity: The queue has a maximum length of zero; the link cannot hold any waiting messages, so the sender must block until the recipient receives the message.
Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it.
Unbounded capacity: The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.
Operations on Processes

Process Creation
A process can create several new processes through create-process system calls during its execution. The creating process is called the parent process, and the new process is its child process.
Each new process may in turn create other processes, forming a tree of processes. A process is identified by a unique process identifier (pid), which is typically an integer.
Every process needs resources such as CPU time, memory, files, and I/O devices to accomplish its task.
Whenever a process creates a new process, there are two possibilities in terms of execution:
• The parent continues to execute concurrently with its children.
• The parent waits until some or all of its children have terminated.
There are two more possibilities in terms of the address space of the new process (see the sketch below):
• The child process is a duplicate of the parent process.
• The child process has a new program loaded into it.
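A minimal sketch of both possibilities on a UNIX-like system (my own illustration; the program "ls" run by the child is an arbitrary choice):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* create a child: a duplicate of the parent */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);   /* child loads a new program        */
    } else if (pid > 0) {
        wait(NULL);                      /* parent waits until the child terminates   */
        printf("child %d terminated\n", (int)pid);
    }
    return 0;
}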
Process Termination
A parent may terminate one of its children for reasons such as:
• The child has exceeded its usage of some of the resources that it has been allocated. (To determine whether this has occurred, the parent must have a mechanism to inspect the state of its children.)
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.

/* exit with status 1 */
exit(1);
A process terminates when it finishes executing its final statement and asks the operating system to delete it using the exit() system call.
At that point, the process may return a status value to its parent process, which collects it with the wait() system call.
All the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.
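A minimal sketch (my own illustration) of a parent collecting the child's status value with wait(), matching the exit(1) call shown earlier:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        exit(1);                          /* child: exit with status 1             */

    int status;
    wait(&status);                        /* parent: blocks until the child ends   */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}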
Reasons for process termination
The reasons that a process may terminate the execution of one of its children are as follows:
• The child exceeds its usage of the resources that it has been allocated.
• The task that was assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
Causes for termination
Other causes for termination are as follows:
• Time slot expired − When the process does not complete execution within its time quantum, it is removed from the running state and the CPU picks the next job in the ready queue.
• Memory bound violation − The process needs more memory than is available.
• I/O failure − When the operating system does not provide an I/O device, the process enters a waiting state.
• Process request − The parent process requests termination of the child process.
• Invalid instruction
Communication in Client–Server Systems
 Sockets
 Remote Procedure Calls
 Pipes
 Remote Method Invocation (Java)
Sockets
A socket is defined as an endpoint for communication.
Communication using sockets
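A minimal sketch of a TCP client socket in C (the address 127.0.0.1 and port 8080 are illustrative assumptions, not from the slides): create the endpoint, connect to the server, send a request, and read the reply.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);      /* create the communication endpoint */

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(8080);                 /* assumed server port               */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(fd, (struct sockaddr *)&server, sizeof(server)) == 0) {
        write(fd, "hello\n", 6);                   /* send a request                    */
        char buf[128];
        ssize_t n = read(fd, buf, sizeof(buf));    /* read the reply                    */
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);
    }
    close(fd);
    return 0;
}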
Remote Procedure Calls

These are interprocess communication techniques that are used for client-server
based applications. A remote procedure call is also known as a subroutine call or a
function call.
A client has a request that the RPC translates and sends to the server. This request
may be a procedure or a function call to a remote server. When the server receives
the request, it sends the required response back to the client.
(Diagram: a remote procedure call between a client and a server.)
Pipes
A pipe acts as a conduit allowing two processes to communicate. Pipes were
one of the first IPC mechanisms in early UNIX systems.
Ordinary Pipes:
Ordinary pipes allow two processes to communicate in standard producer–
consumer fashion: the producer writes to one end of the pipe (the write end)
and the consumer reads from the other end (the read end).
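A minimal sketch of an ordinary pipe (my own illustration): the parent acts as the producer writing to the write end, and the child acts as the consumer reading from the read end.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                             /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                    /* child: the consumer                 */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        close(fd[0]);
    } else {                              /* parent: the producer                */
        close(fd[0]);
        write(fd[1], "Greetings", strlen("Greetings"));
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}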
Named Pipes:
Named pipes (FIFOs on UNIX systems) appear as entries in the file system, so they can be used by processes that do not have a parent–child relationship, and they continue to exist after the communicating processes have finished.
First-Come, First-Served Scheduling

The process that arrives first in the ready queue is assigned the CPU first.
In case of a tie, the process with the smaller process id is executed first.
It is always non-preemptive in nature.
First-Come, First-Served Scheduling
Process Id Arrival time Burst time
P1 3 4
P2 5 3
P3 0 2
P4 5 1
P5 4 3

Gantt chart: P3 [0–2], idle [2–3], P1 [3–7], P5 [7–10], P2 [10–13], P4 [13–14]
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Process Id   Exit time   Turn Around time   Waiting time
P1           7           7 – 3 = 4          4 – 4 = 0
P2           13          13 – 5 = 8         8 – 3 = 5
P3           2           2 – 0 = 2          2 – 2 = 0
P4           14          14 – 5 = 9         9 – 1 = 8
P5           10          10 – 4 = 6         6 – 3 = 3
Advantages
It is simple and easy to understand.
It can be easily implemented using queue data structure.
It does not lead to starvation.

Disadvantages:
It does not consider the priority or burst time of the
processes.
It suffers from convoy effect
AT : Arrival Time
BT : Burst Time or CPU Time
CT : Completion Time
TAT : Turn Around Time
WT : Waiting Time

Scenario 1: Processes with the Same Arrival Time

Processes   AT   BT   CT   TAT           WT
P1          0    5    5    5 - 0 = 5     5 - 5 = 0
P2          0    3    8    8 - 0 = 8     8 - 3 = 5
P3          0    8    16   16 - 0 = 16   16 - 8 = 8

•Average Turn around time = (5 + 8 + 16)/3 = 29/3 = 9.67 ms

•Average waiting time = (0 + 5 + 8)/3 = 13/3 = 4.33 ms


Scenario 2: Processes with Different Arrival Times

Consider the following table of arrival time and burst time for three
processes P1, P2 and P3

Process   Burst Time (BT)   Arrival Time (AT)
P1        5 ms              2 ms
P2        3 ms              0 ms
P3        4 ms              4 ms

Process   Completion Time (CT)   Turnaround Time (TAT = CT – AT)   Waiting Time (WT = TAT – BT)
P2        3 ms                   3 ms                              0 ms
P1        8 ms                   6 ms                              1 ms
P3        12 ms                  8 ms                              4 ms

•Average Turnaround time = (3 + 6 + 8)/3 = 17/3 = 5.67 ms

•Average waiting time = (0 + 1 + 4)/3 = 5/3 = 1.67 ms

Code Implementation
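A minimal C sketch for this slide (my own illustration, using the Scenario 2 data above): processes are served in arrival order, and completion, turnaround, and waiting times follow TAT = CT − AT and WT = TAT − BT.

#include <stdio.h>

struct Proc { int id, at, bt, ct, tat, wt; };

int main(void)
{
    /* already sorted by arrival time: P2, P1, P3 */
    struct Proc p[] = { {2, 0, 3}, {1, 2, 5}, {3, 4, 4} };
    int n = 3, clock = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (clock < p[i].at)
            clock = p[i].at;              /* CPU sits idle until the next arrival */
        clock += p[i].bt;                 /* non-preemptive: run to completion    */
        p[i].ct  = clock;
        p[i].tat = p[i].ct - p[i].at;     /* TAT = CT - AT                        */
        p[i].wt  = p[i].tat - p[i].bt;    /* WT  = TAT - BT                       */
        total_tat += p[i].tat;
        total_wt  += p[i].wt;
        printf("P%d: CT=%d TAT=%d WT=%d\n", p[i].id, p[i].ct, p[i].tat, p[i].wt);
    }
    printf("Average TAT = %.2f ms, Average WT = %.2f ms\n", total_tat / n, total_wt / n);
    return 0;
}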
Shortest Job First:
• SJF Scheduling can be used in both preemptive and non-preemptive mode.
• The preemptive mode of Shortest Job First is called Shortest Remaining Time First (SRTF).
• For n processes, time complexity = O(n log n).
• Criterion: burst time

Example of Shortest-remaining-time-first
 Now we add the concepts of varying arrival times and preemption to the analysis.

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

 Preemptive SJF Gantt chart: P1 [0–1], P2 [1–5], P4 [5–10], P1 [10–17], P3 [17–26]

 Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec


Shortest Job First:
The preemptive mode of Shortest Job First is called Shortest Remaining Time First (SRTF).

SRTF Gantt chart: P4 [0–1], P2 [1–3], P1 [3–4], P2 [4–6], P3 [6–8], P5 [8–11], P4 [11–16]
•Turnaround Time (TAT) = Completion Time - Arrival Time

•Waiting Time (WT) = TAT - Burst Time

A second SRTF example (a different process set):

Process   Arrival Time   Burst Time   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        0              10           16                16 - 0 = 16             16 - 10 = 6
P2        2              4            8                 8 - 2 = 6               6 - 4 = 2
P3        4              2            6                 6 - 4 = 2               2 - 2 = 0
P4        10             8            24                24 - 10 = 14            14 - 8 = 6
Shortest Job First:
Advantages
 SRTF is optimal and guarantees the minimum average waiting time.
 It provides a standard for other algorithms, since no other algorithm performs better than it.

Disadvantages
 It cannot be implemented practically, since the burst times of the processes cannot be known in advance.
 It leads to starvation for processes with larger burst times.
 Priorities cannot be set for the processes.
 Processes with larger burst times have poor response time.
Round Robin Scheduling:

 The CPU is assigned to processes on an FCFS basis, but only for a fixed amount of time.
 This fixed amount of time is called the time quantum or time slice.
 After the time quantum expires, the running process is preempted and sent to the ready queue.
 Then, the processor is assigned to the next process in the ready queue.
 A larger time quantum means fewer context switches.
 A smaller time quantum gives better response time.
Round-Robin Scheduling:
Process Id Arrival time Burst time
p1 0 4
p2 1 5
p3 2 2
p4 3 1
p5 4 6

p6 6 3
time quantum = 2 unit

Ready Queue:
P1, P2, P3, P1, P4, P5, P2, P6, P5, P2, P6, P5

Gantt chart: P1 [0–2], P2 [2–4], P3 [4–6], P1 [6–8], P4 [8–9], P5 [9–11], P2 [11–13], P6 [13–15], P5 [15–17], P2 [17–18], P6 [18–19], P5 [19–21]
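A minimal C sketch of this round-robin example (my own illustration, using the table and time quantum above): a FIFO ready queue in which each process runs for at most one quantum and, if unfinished, rejoins the queue behind any processes that arrived in the meantime. Running it reproduces the Gantt chart above (completion times 8, 18, 6, 9, 21, 19 for P1..P6).

#include <stdio.h>

#define N 6

int main(void)
{
    int at[N]  = {0, 1, 2, 3, 4, 6};     /* arrival times of P1..P6 */
    int bt[N]  = {4, 5, 2, 1, 6, 3};     /* burst times             */
    int rem[N], ct[N], quantum = 2;
    int queue[64], head = 0, tail = 0;   /* FIFO ready queue        */
    int clock = 0, done = 0, next = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];
    queue[tail++] = next++;              /* P1 arrives at time 0    */

    while (done < N) {
        if (head == tail) {              /* CPU idle: jump to the next arrival      */
            clock = at[next];
            queue[tail++] = next++;
            continue;
        }
        int i = queue[head++];
        int run = rem[i] < quantum ? rem[i] : quantum;
        clock += run;
        rem[i] -= run;
        while (next < N && at[next] <= clock)    /* enqueue arrivals during slice   */
            queue[tail++] = next++;
        if (rem[i] > 0)
            queue[tail++] = i;           /* preempted: back of the queue            */
        else {
            ct[i] = clock;
            done++;
        }
    }
    for (int i = 0; i < N; i++)
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct[i], ct[i] - at[i], ct[i] - at[i] - bt[i]);
    return 0;
}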
Round Robin Scheduling:

Advantages
It gives the best performance in terms of average response time.
It is best suited for time-sharing systems, client–server architectures, and interactive systems.

Disadvantages
Processes with larger burst times finish late, as they have to repeat the cycle many times.
Its performance heavily depends on the time quantum.
Priorities cannot be set for the processes.
Priority Scheduling:
 Out of all the available processes, the CPU is assigned to the process having the highest priority.
 In case of a tie, it is broken by FCFS scheduling.
 Priority Scheduling can be used in both preemptive and non-preemptive mode.
 Priority scheduling in preemptive mode is best suited for real-time operating systems.
Important Note:

The waiting time for the process having the highest priority will always be zero in preemptive mode.

The waiting time for the process having the highest priority may not be zero in non-preemptive mode.

Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following condition: the arrival time of all the processes is the same, i.e., all the processes become available at the same time.
Priority Scheduling:

Higher number represents higher priority


Process Id Arrival time Burst time Priority

p1 0 4 2
p2 1 3 3
p3 2 1 4
p4 3 5 5
p5 4 2 5

Gantt chart (non-preemptive): P1 [0–4], P4 [4–9], P5 [9–11], P3 [11–12], P2 [12–15]
Priority Scheduling:
Advantages

It considers the priority of the processes and allows the important processes to run first.
Priority scheduling in preemptive mode is best suited for real-time operating systems.

Disadvantages

Processes with lower priority may starve for the CPU.
The waiting time and response time of lower-priority processes cannot be guaranteed.
Deterministic Evaluation:
 For each algorithm, calculate the minimum average waiting time.
 Simple and fast, but it requires exact numbers for input and applies only to those inputs.

 FCFS is 28 ms:
Average waiting time: (0 + 10 + 39 + 42 + 49)/5 = 28 ms

 Non-preemptive SJF is 13 ms:
Average waiting time: (10 + 32 + 0 + 3 + 20)/5 = 13 ms

 RR (preemptive) is 23 ms:
Average waiting time: (0 + 32 + 20 + 23 + 40)/5 = 23 ms

Multilevel Queue Scheduling:
A problem with multilevel queue scheduling is starvation: processes in lower-priority queues may never get to run.
Multilevel Feedback Queue Scheduling
Real-Time CPU Scheduling:
Two types of latencies affect the performance of real-time systems:
1. Interrupt latency: the time delay between when an interrupt is received by the system and when the system starts processing that interrupt.
2. Dispatch latency: the time delay between when a task is ready to run and when the system starts executing that task.
Rate-Monotonic Scheduling
A priority is assigned to each task based on the inverse of its period:
Shorter periods = higher priority;
Longer periods = lower priority.
Here P1 is assigned a higher priority than P2.
Example:
Periods: p1 = 50 and p2 = 100.
Processing times: t1 = 20 for P1 and t2 = 35 for P2.
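A quick check (a standard rate-monotonic schedulability test, not stated on the slide): CPU utilization is 20/50 + 35/100 = 0.40 + 0.35 = 0.75, which is below the two-task rate-monotonic bound 2(2^(1/2) − 1) ≈ 0.83, so rate-monotonic scheduling is guaranteed to meet both deadlines in this example.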
Missed Deadlines with Rate-Monotonic Scheduling
Example:
Periods: p1 = 50 and p2 = 80.
Processing times: t1 = 25 for P1 and t2 = 35 for P2.
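As a check (worked from the slide's data, not stated on it): utilization is 25/50 + 35/80 ≈ 0.94, which exceeds the two-task rate-monotonic bound of about 0.83, so the guarantee no longer applies. Indeed, with P1 given the higher priority, P1 runs 0–25 and 50–75, P2 runs 25–50 and 75–85, and P2 misses its deadline at time 80.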
Earliest Deadline First Scheduling (EDF)
Priorities are assigned according to deadlines:
The earlier the deadline, the higher the priority;
the later the deadline, the lower the priority.
Example:
Periods: p1 = 50 and p2 = 80.
Processing times: t1 = 25 for P1 and t2 = 35 for P2.
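As a check (worked from the slide's data, not stated on it): total utilization is 25/50 + 35/80 ≈ 0.94 ≤ 1, and EDF is optimal for preemptive periodic tasks on a single processor, so every deadline is met. For example, at time 50 the new instance of P1 has deadline 100 while P2's deadline is 80, so P2 keeps the CPU and completes at time 60.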
Proportional Share Scheduling

POSIX Real-Time Scheduling
