Unit-Ii-R23 PPT Os

A process is defined as a program in execution, transitioning from a passive program stored on disk to an active entity in memory. It consists of various sections including text, data, BSS, heap, and stack, and can exist in five states: new, ready, running, waiting, and terminated. Process scheduling is crucial for managing multiple processes in an operating system, involving long-term, short-term, and medium-term schedulers, along with context switching and interprocess communication mechanisms.


UNIT-II

PROCESSES
Process Concept

A program loaded into memory and executing is called a process.

In simple terms, a process is a program in execution.
When a program is created, it is just a sequence of bytes stored on the hard disk as a passive entity.
The program becomes an active entity, called a process, when it is loaded into memory, e.g. when it is double-clicked in Windows or when the name of the executable file is entered on the command line (i.e. a.out or prog.exe).
A program becomes a process when its executable file is loaded into memory.
So a process is a program in execution.
A program is a passive entity; a process is an active entity.
One program can be several processes:
 Consider multiple users executing the same program.
What is a Process?

A process is a program under execution.
A process is an abstraction of a running program.
A process is an instance of an executing program, including the current values of the program counter, registers, and variables.
Process in Memory
A program loaded into memory and executing is called a process
Let us look at what a process looks like in memory.
Process in Memory
TEXT SECTION
This section of memory contains the executable instructions of a program.
It also contains constants and macros, and it is a read-only segment to prevent accidental modification of an instruction.
It is also sharable, so that another process can use it whenever required.

DATA SECTION
The data segment of memory contains the global and static variables that are initialized by the programmer prior to the execution of the program.
This segment is not read-only, as the values of the variables can be changed at run time.

Example C program:

#include <stdio.h>

int b = 20;   // initialized global: stored in the data section
int K = 0;    // zero-initialized global: stored in the BSS section
int m;        // uninitialized global: stored in the BSS section

int main()
{
    static int a = 10;  // initialized static: stored in the data section
    static int n;       // uninitialized static: stored in the BSS section
    return 0;
}
BSS SECTION
The BSS segment, also known as uninitialized data, is usually adjacent to the
data segment. The BSS segment contains all global variables and static variables
that are initialized to zero or do not have explicit initialization in source code.
For instance, a variable defined as static int i; would be contained in the BSS
segment.

HEAP SECTION
The heap segment is used for dynamic memory allocation: it holds memory for variables whose size cannot be determined by the compiler before program execution, requested by the programmer at run time.
The size of such an allocation can be determined only at run time. The heap is managed via calls such as malloc, calloc and free in C, or new and delete in C++.

STACK SECTION
A process generally also includes the process stack, which contains temporary
data i.e. function parameters, return addresses, and local variables.
Process States
The current activity of a process is referred to as its process state.
Throughout its execution, a process transitions from one state to another; these states are called process states.
The following are the 5 states of a process.
• new: The process is being created
• ready: The process is waiting to be assigned to a processor
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• terminated: The process has finished execution
Five State Process Model and Transitions

New --(Admit)--> Ready
Ready --(Dispatch)--> Running
Running --(Exit)--> Exit (Terminated)
Running --(Time-out)--> Ready
Running --(Event Wait)--> Waiting
Waiting --(Event Occurs)--> Ready

• New – process is being created


• Ready – process is waiting to run (runnable), temporarily stopped to
let another process run
• Running – process is actually using the CPU
• Waiting – unable to run until some external event happens
• Exit (Terminated) – process has finished the execution
Diagram of Process State
Process Control Block (PCB)
Each process is represented in the operating system by a process control block, which holds the information associated with the process: process state, program counter, CPU registers, CPU-scheduling information, memory-management information, accounting information, and I/O status information.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
CPU Switch From Process to Process
PROCESS SCHEDULING
The process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of a Multiprogramming and Time
sharing operating systems.
The objective of multiprogramming is to have some process running
at all times, to maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is running.
To meet these objectives, the process scheduler selects an available process
(possibly from a set of several available processes) for program execution on
the CPU.
PROCESS SCHEDULING
For a single-processor system, there will never be more than one running
process.
If there are more processes , the rest will have to wait until the CPU is free
and can be rescheduled.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of
all processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.
PROCESS SCHEDULING
The Operating System maintains the following important process scheduling
queues
Job queue: This queue keeps all the processes in the system.
Ready queue: This queue keeps a set of all processes residing in main
memory, ready and waiting to execute.
Device queues: The processes which are blocked due to unavailability of an
I/O device constitute this queue.
PROCESS SCHEDULERS
Schedulers are special system software which handle process scheduling in
various ways.
Their main task is to select the jobs to be submitted into the system and to
decide which process to run.

Schedulers are of three types −

• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-term scheduler : (or job scheduler) – A long-term scheduler determines
which programs are admitted to the system for processing.
It selects processes from the job queue and loads them into memory (Ready
queue) for execution.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound.
• I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts
• CPU-bound process – spends more time doing computations; few very long
CPU bursts
It also controls the degree of multiprogramming.
Short-term scheduler (or CPU scheduler) – Its main objective is to increase system performance in accordance with the chosen set of criteria.
It performs the transition of a process from the ready state to the running state.
CPU scheduler selects a process among the processes that are ready to
execute(present in ready queue) and allocates CPU to one of them.
Short-term schedulers, also known as dispatchers, make the decision of
which process to execute next.
 Short-term schedulers are faster than long-term schedulers.
Sometimes the only scheduler in a system
Representation of Process Scheduling
Medium Term Scheduling
Medium-Term Scheduler: it removes (swaps out) processes from memory to reduce the degree of multiprogramming; later the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler.
Comparison among Schedulers

1. Long-term: it is a job scheduler. | Short-term: it is a CPU scheduler. | Medium-term: it is a process swapping scheduler.
2. Long-term: speed is lesser than the short-term scheduler. | Short-term: speed is fastest among the three. | Medium-term: speed is in between the short-term and long-term schedulers.
3. Long-term: it controls the degree of multiprogramming. | Short-term: it provides lesser control over the degree of multiprogramming. | Medium-term: it reduces the degree of multiprogramming.
4. Long-term: it is almost absent or minimal in time-sharing systems. | Short-term: it is also minimal in time-sharing systems. | Medium-term: it is a part of time-sharing systems.
5. Long-term: it selects processes from the pool and loads them into memory for execution. | Short-term: it selects those processes which are ready to execute. | Medium-term: it can re-introduce the process into memory so that execution can be continued.
Context Switching
Context switch means stopping one process and restarting another process.
• When an event occurs, the OS saves the state of the active process and restores the state of the new process.
• Context switching is pure overhead, because the system does not perform any useful work during the switch.
Sequence of steps performed by the OS during a context switch:
1. OS takes control (through an interrupt)
2. Saves the context of the running process in that process's PCB
3. Reloads the context of the new process from the new process's PCB
4. Returns control to the new process
Operations on Processes
1)Process Creation
• Parent process create children processes, which, in turn create other
processes, forming a tree of processes
• Generally, process identified and managed via a process identifier
(pid)
• Resource sharing
• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
• Execution
• Parent and children execute concurrently
• Parent waits until children terminate
• Address space
• Child duplicate of parent(it has same program and data as parent)
• Child has a new program loaded into it
2) Process Termination
• Process executes last statement and asks the operating system to delete it (exit)
• Output data from child to parent (via wait)
• Process’ resources are deallocated by operating system
• Parent may terminate execution of children processes (abort)
• Child has exceeded allocated resources
• Task assigned to child is no longer required
• If parent is exiting
• Some operating systems do not allow a child to continue if its parent terminates
• All children terminated - cascading termination
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other processes, including sharing data
• Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
• Cooperating processes need interprocess communication (IPC)
• Two models of IPC
• Shared memory
• Message passing
Communications Models
i) Shared Memory Systems
Producer-Consumer problem
Paradigm for cooperating processes, producer process
produces information that is consumed by a consumer
process
unbounded-buffer places no practical limit on the size
of the buffer
bounded-buffer assumes that there is a fixed buffer size
Message Passing Systems
• Mechanism for processes to communicate and to synchronize their
actions
• Message system – processes communicate with each other without
resorting to shared variables
• IPC facility provides two operations:
• send(message) – message size fixed or variable
• receive(message)
• If P and Q wish to communicate, they need to:
• establish a communication link between them
• exchange messages via send/receive
The communication link can be
implemented by
• Direct or Indirect communication
• Synchronous and asynchronous communication
• Automatic or explicit buffering
Direct Communication
• Processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q

• Properties of communication link


• Links are established automatically
• A link is associated with exactly one pair of communicating processes
• Between each pair there exists exactly one link
• The link may be unidirectional, but is usually bi-directional
Indirect Communication
• Messages are directed and received from mailboxes (also referred to as ports)
• Each mailbox has a unique id
• Processes can communicate only if they share a mailbox

• Properties of communication link


• Link established only if processes share a common mailbox
• A link may be associated with many processes
• Each pair of processes may share several communication links
• Link may be unidirectional or bi-directional
Synchronization
• Message passing may be either blocking or non-blocking

• Blocking is considered synchronous


• Blocking send has the sender block until the message is received
• Blocking receive has the receiver block until a message is available

• Non-blocking is considered asynchronous


• Non-blocking send has the sender send the message and continue
• Non-blocking receive has the receiver receive a valid message or null
Buffering
• Queue of messages attached to the link; implemented in one of three ways
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
Scheduling Criteria (1/2)
► Different CPU-scheduling algorithms have different properties.

► The choice of a particular algorithm may favor one class of processes over
another.

► CPU utilization: keep the CPU as busy as possible (Max).

► Throughput: # of processes that complete their execution per time unit (Max).

CPU Scheduling
Scheduling Criteria (2/2)
► Turnaround time: amount of time to execute a particular process (Min).

► Waiting time: amount of time a process has been waiting in the ready
queue (Min).

► Response time: amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment) (Min).

Scheduling Algorithms

► First-Come, First-Served Scheduling

► Shortest-Job-First Scheduling

► Priority Scheduling

► Round-Robin Scheduling

► Multilevel Queue Scheduling

► Multilevel Feedback Queue Scheduling

First Come First Served (FCFS)
• Selection criteria:
• The process that request first is served first.
• It means that processes are served in the exact order of their arrival
• Decision Mode:
• Non preemptive: Once a process is selected, it runs until either it is blocked
for an I/O or some other event or it is terminated.
• Implementation:
• This strategy can be easily implemented by using FIFO (First In First Out)
queue.
• The processes are placed in a queue based on their arrival .
• When CPU becomes free, a process from the first position in a queue is
selected to run.
First Come First Served (FCFS)
• Example

Process | Arrival Time (T0) | CPU Burst Time (∆T)
P0      | 0                 | 10
P1      | 1                 | 6
P2      | 3                 | 2
P3      | 5                 | 4

• Gantt Chart

| P0 | P1 | P2 | P3 |
0    10   16   18   22
First Come First Served (FCFS)
• Advantages
• Simple and fair.
• Easy to understand and implement.
• Every process will get a chance to run, so starvation
doesn't occur.
• Disadvantages
• Not efficient because average waiting time is too high.
• Convoy effect is possible. All small I/O bound processes
wait for one big CPU bound process to acquire CPU.
• CPU utilization may be less efficient especially when a
CPU bound process is running with many I/O bound
processes.
Shortest Job First (SJF)
• Selection criteria
• The process, that requires shortest time to complete execution, is
served first.
• Decision Mode
• Non preemptive: Once a process is selected, it runs until either it is
blocked for an I/O or some other event or it is terminated.
• Implementation:
• This strategy can be implemented by using a sorted FIFO queue.
• All processes in the queue are sorted in ascending order of their required CPU bursts.
• When CPU becomes free, a process from the first position in a queue
is selected to run.
Shortest Job First (SJF)
• Advantages:
• Less waiting time.
• Good response for short processes.
• Disadvantages :
• It is difficult to estimate time required to complete execution.
• Starvation is possible for long process. Long process may wait
forever.
Shortest Remaining Time Next (SRTN)
• Selection criteria :
• The process, whose remaining run time is shortest, is served first. This is a
preemptive version of SJF scheduling.
• Decision Mode:
• Preemptive: When a new process arrives, its total time is compared to the current
process remaining run time.
• If the new process needs less time to finish than the current process, the current process
is suspended and the new job is started.
• Implementation :
• This strategy can also be implemented by using sorted FIFO queue.
• All processes in a queue are sorted in ascending order on their remaining run time.
• When CPU becomes free, a process from the first position in a queue is selected to run.
Multilevel Queue Scheduling

► Ready queue is partitioned into separate queues, e.g.:


• foreground (interactive)
• background (batch)

► Processes are permanently assigned to a given queue.


► Each queue has its own scheduling algorithm:
• foreground: RR
• background: FCFS

Multilevel Feedback Queue Scheduling
Multilevel Feedback Queue - Example

► For example, three queues:


• Q0: RR with time quantum 8 milliseconds
• Q1: RR time quantum 16 milliseconds
• Q2: FCFS

► A new job enters queue Q0 which is served FCFS:


• When it gains CPU, job receives 8 milliseconds.
• If it does not finish in 8 milliseconds, job is moved to queue Q1.

► At Q1 the job is again served FCFS and receives 16 additional milliseconds.


• If it still does not complete, it is preempted and moved to queue Q2.
Multiple-Processor Scheduling

► CPU scheduling is more complex when multiple CPUs are available.

► Asymmetric multiprocessing
• Only one processor does all scheduling decisions, I/O processing, and other
system activities.
• The other processors execute only user code.

► Symmetric multiprocessing (SMP)


• Each processor is self-scheduling
• All processes in common ready queue, or each has its own private queue of
ready processes.
• Currently, the most common.
Processor Affinity

► Processor affinity: keep a process running on the same processor.


• Soft affinity: the OS attempts to keep a process on a single processor,
but it is possible for a process to migrate between processors.
• Hard affinity: allowing a process to specify a subset of processors on which it
may run.

Load Balancing

► If SMP, need to keep all CPUs loaded for efficiency.

► Load balancing attempts to keep workload evenly distributed.

► Push migration: periodic task checks load on each processor, and if found
pushes task from overloaded CPU to other CPUs.

► Pull migration: an idle processor pulls a waiting task from a busy processor.

Multicore Processors

► Place multiple processor cores on same physical chip.

► Faster and consumes less power.

► Memory stall: when a processor accesses memory, it spends a significant amount of time waiting for the data to become available.

Multithreaded Multicore System

► Multiple threads per core also growing.


• Takes advantage of memory stall to make progress on another thread
while memory retrieve happens.

Thread
A basic unit of CPU utilization.

Threads
• A traditional process has a single thread of control.
Thread Libraries

• A thread library provides the programmer with an API for creating and managing threads.
• Two primary ways of implementing:
  • Library entirely in user space.
  • Kernel-level library supported by the OS.
Thread Libraries

• Pthreads
  • Either a user-level or a kernel-level library.
• Windows threads
  • Kernel-level library.
• Java threads
  • Uses a thread library available on the host system.
Pthreads

• Pthreads is a POSIX API for thread creation and synchronization.
• The API specifies the behavior of the thread library; the implementation is up to the developers of the library.
Thread ID
• The thread ID (TID) is the thread analogue of the process ID (PID).
• The PID is assigned by the Linux kernel, and the TID is assigned in the Pthreads library.
• Represented by pthread_t.
• Obtaining a TID at runtime:

#include <pthread.h>

pthread_t pthread_self(void);
Creating Threads
• pthread_create() defines and launches a new thread.

#include <pthread.h>

int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);

• start_routine has the following signature:

void *start_thread(void *arg);
Terminating Threads
• Terminating yourself by calling pthread_exit():

#include <pthread.h>

void pthread_exit(void *retval);

• Terminating others by calling pthread_cancel():

#include <pthread.h>

int pthread_cancel(pthread_t thread);
Joining and Detaching Threads
• Joining allows one thread to block while waiting for the termination of another.
• You use join if you care about what value the thread returns when it is done, and use detach if you do not.

#include <pthread.h>

int pthread_join(pthread_t thread, void **retval);
int pthread_detach(pthread_t thread);

[https://computing.llnl.gov/tutorials/pthreads/#Joining]
A Threading Example

#include <pthread.h>
#include <stdio.h>

void *start_thread(void *message)
{
    printf("%s\n", (const char *)message);
    return message;
}

int main(void)
{
    pthread_t thread1, thread2;
    const char *message1 = "Thread 1";
    const char *message2 = "Thread 2";

    // Create two threads, each with a different message.
    pthread_create(&thread1, NULL, start_thread, (void *)message1);
    pthread_create(&thread2, NULL, start_thread, (void *)message2);

    // Wait for the threads to exit.
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
    return 0;
}
Multithreaded server architecture
Multi-threading Models
User Threads and Kernel Threads

• User threads: management is done by a user-level threads library.
  • Three primary thread libraries: POSIX threads, Windows threads, Java threads.
• Kernel threads: supported by the kernel.
  • Used by all general-purpose operating systems, including Windows, Solaris, Linux, Tru64 UNIX, Mac OS X.
Multi-Threading Models

• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One Model
• Many user-level threads are mapped to a single kernel thread.
One-to-One Model
• Each user-level thread maps to a kernel thread.
Many-to-Many Model
• Allows many user-level threads to be mapped to many kernel threads.
Two-level Model
• It multiplexes many user-level threads to a smaller or equal number of kernel threads, but also allows a user-level thread to be bound to a kernel thread.
• Example: versions of Solaris older than Solaris 9.
Threading Issues
Some of the issues to consider in designing multithreaded programs
are following.
• The fork() and exec() System Calls
• Signal Handling
• Thread Cancellation
• Thread-Pools
• Scheduler Activations
The fork() and exec() System Calls

• Does fork() duplicate only the calling thread, or all threads?
  • Some UNIX systems have two versions of fork().
• exec() usually works as normal: it replaces the running process, including all threads.
Thread Cancellation
• 1. Asynchronous cancellation: One thread immediately terminates the
target thread.
• 2. Deferred cancellation: The target thread periodically checks
whether it should terminate, allowing it an opportunity to terminate
itself in an orderly fashion.
Signal Handling
• A signal is used in UNIX systems to notify a process that a particular
event has occurred.
• A signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler
In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process
Thread-Pools
• A thread pool in an operating system (OS) is a collection
of worker threads that are used to execute tasks in a
program.
Scheduler Activations

One scheme for communication between the user-thread library and the kernel is known as scheduler activation.