Unit-II R23 PPT OS
PROCESSES
Process Concept
DATA SECTION
The data section of memory contains the global and static variables that are
initialized by the programmer before the program executes. This segment is
not read-only, since the values of these variables can be changed at runtime.
Example C program:

#include <stdio.h>

int b = 20;   // will be stored in the data section
int K = 0;    // will be stored in the BSS section
int m;        // will be stored in the BSS section

int main()
{
    static int a = 10;  // will be stored in the data section
    static int n;       // will be stored in the BSS section
    return 0;
}
BSS SECTION
The BSS segment, also known as uninitialized data, is usually adjacent to the
data segment. The BSS segment contains all global variables and static variables
that are initialized to zero or do not have explicit initialization in source code.
For instance, a variable defined as static int i; would be contained in the BSS
segment.
HEAP SECTION
The heap segment holds dynamically allocated memory: memory for variables
whose size cannot be determined by the compiler before program execution and
is only known at run time. The programmer requests and releases this memory
via library calls such as malloc(), calloc(), realloc() and free() in C
(or new and delete in C++).
STACK SECTION
A process generally also includes the process stack, which contains temporary
data such as function parameters, return addresses, and local variables.
Process States
The current activity of a process is referred to as its process state.
Over the life of its execution, a process transitions from one state to
another; these are called process states.
The following are the five states of a process.
• new: The process is being created
• ready: The process is waiting to be assigned to a processor
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• terminated: The process has finished execution
Five State Process Model and Transitions
[State-transition diagram] The transitions between the five states are:
• New → Ready (admit)
• Ready → Running (dispatch)
• Running → Ready (time-out)
• Running → Waiting (event wait)
• Waiting → Ready (event occurs)
• Running → Exit (exit)
Process Schedulers
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-term scheduler (or job scheduler): determines which programs are
admitted to the system for processing.
It selects processes from the job queue and loads them into memory (Ready
queue) for execution.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound.
• I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts
• CPU-bound process – spends more time doing computations; few very long
CPU bursts
It also controls the degree of multiprogramming.
Short-term scheduler (or CPU scheduler): its main objective is to increase
system performance in accordance with the chosen set of criteria.
It performs the transition of a process from the ready state to the running state.
The CPU scheduler selects one process from among those that are ready to
execute (present in the ready queue) and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which process to
execute next.
Short-term schedulers are faster than long-term schedulers, and the
short-term scheduler is sometimes the only scheduler in a system.
Representation of Process Scheduling
Medium-Term Scheduler
The medium-term scheduler can remove a process from memory (swap it out)
and later reintroduce it (swap it in), so that its execution can be
continued where it left off. This scheme is called swapping.
Comparison among Schedulers

S.No | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is less than that of the short-term scheduler. | Speed is the fastest of the three. | Speed is in between the other two.
3 | It controls the degree of multiprogramming. | It provides less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the job pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | It can reintroduce a process into memory so that execution can be continued.
Context Switching
A context switch means stopping one process and restarting another.
When an event occurs, the OS saves the state of the active process and
restores the state of the new process.
Context switching is pure overhead, because the system performs no useful
work during the switch.
Steps performed by the OS during a context switch:
1. The OS takes control (through an interrupt).
2. It saves the context of the running process in that process's PCB.
3. It reloads the context of the new process from that process's PCB.
4. It returns control to the new process.
Operations on Processes
1)Process Creation
• A parent process creates child processes, which, in turn, create other
processes, forming a tree of processes
• Generally, a process is identified and managed via a process identifier
(pid)
• Resource sharing
• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
• Execution
• Parent and children execute concurrently
• Parent waits until children terminate
• Address space
• Child is a duplicate of the parent (it has the same program and data as the parent)
• Child has a new program loaded into it
2) Process Termination
• Process executes last statement and asks the operating system to delete it (exit)
• Output data from child to parent (via wait)
• Process’ resources are deallocated by operating system
• Parent may terminate execution of children processes (abort)
• Child has exceeded allocated resources
• Task assigned to child is no longer required
• If parent is exiting
• Some operating systems do not allow a child to continue if its parent terminates
• All children terminated - cascading termination
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other processes, including sharing data
• Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
• Cooperating processes need interprocess communication (IPC)
• Two models of IPC
• Shared memory
• Message passing
Communications Models
i) Shared Memory Systems
Producer-Consumer Problem
• A paradigm for cooperating processes: a producer process produces
information that is consumed by a consumer process.
• An unbounded buffer places no practical limit on the size of the buffer.
• A bounded buffer assumes that there is a fixed buffer size.
Message Passing Systems
• Mechanism for processes to communicate and to synchronize their
actions
• Message system – processes communicate with each other without
resorting to shared variables
• IPC facility provides two operations:
• send(message) – message size fixed or variable
• receive(message)
• If P and Q wish to communicate, they need to:
• establish a communication link between them
• exchange messages via send/receive
The communication link can be implemented by:
• Direct or Indirect communication
• Synchronous and asynchronous communication
• Automatic or explicit buffering
Direct Communication
• Processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
CPU Scheduling
Scheduling Criteria
► Turnaround time: the amount of time to execute a particular process (minimize).
► Waiting time: the amount of time a process has been waiting in the ready
queue (minimize).
► The choice of a particular scheduling algorithm may favor one class of
processes over another.
Scheduling Algorithms
► First-Come, First-Served Scheduling
► Shortest-Job-First Scheduling
► Priority Scheduling
► Round-Robin Scheduling

First-Come, First-Served (FCFS) Scheduling
• Selection criteria:
• The process that requests first is served first: processes are served in
the exact order of their arrival.
• Decision mode:
• Non-preemptive: once a process is selected, it runs until it is blocked
for I/O or some other event, or it terminates.
• Implementation:
• This strategy can easily be implemented using a FIFO (First In, First Out)
queue.
• Processes are placed in the queue in order of their arrival.
• When the CPU becomes free, the process at the front of the queue is
selected to run.
First Come First Served (FCFS)
• Example

Process | Arrival Time (T0) | CPU Burst Time (ΔT)
P0      | 0                 | 10
P1      | 1                 | 6
P2      | 3                 | 2
P3      | 5                 | 4

• Gantt Chart

| P0 | P1 | P2 | P3 |
0    10   16   18   22
First Come First Served (FCFS)
• Advantages
• Simple and fair.
• Easy to understand and implement.
• Every process will get a chance to run, so starvation
doesn't occur.
• Disadvantages
• Not efficient because average waiting time is too high.
• Convoy effect is possible: all the short I/O-bound processes wait for one
long CPU-bound process to release the CPU.
• CPU utilization may be less efficient especially when a
CPU bound process is running with many I/O bound
processes.
Shortest-Job-First (SJF) Scheduling
• Selection criteria:
• The process that requires the shortest time to complete execution is
served first.
• Decision mode:
• Non-preemptive: once a process is selected, it runs until it is blocked
for I/O or some other event, or it terminates.
• Implementation:
• This strategy can be implemented using a sorted queue.
• All processes in the queue are sorted in ascending order of their
required CPU burst.
• When the CPU becomes free, the process at the front of the queue is
selected to run.
Shortest Job First (SJF)
• Advantages:
• Less waiting time.
• Good response for short processes.
• Disadvantages:
• It is difficult to estimate the time required to complete execution.
• Starvation is possible for long processes: a long process may wait
forever.
Shortest Remaining Time Next (SRTN)
• Selection criteria :
• The process, whose remaining run time is shortest, is served first. This is a
preemptive version of SJF scheduling.
• Decision mode:
• Preemptive: when a new process arrives, its total run time is compared
to the remaining run time of the current process.
• If the new process needs less time to finish than the current process, the
current process is suspended and the new job is started.
• Implementation:
• This strategy can be implemented using a sorted queue.
• All processes in the queue are sorted in ascending order of their
remaining run time.
• When the CPU becomes free, the process at the front of the queue is
selected to run.
Multilevel Queue Scheduling

Multilevel Feedback Queue Scheduling
► Asymmetric multiprocessing
• Only one processor does all scheduling decisions, I/O processing, and other
system activities.
• The other processors execute only user code.
Load Balancing
► Push migration: a periodic task checks the load on each processor and, if
it finds an imbalance, pushes tasks from the overloaded CPU to other CPUs.
► Pull migration: an idle processor pulls a waiting task from a busy
processor.
Multicore Processors

Multithreaded Multicore System
Threads
• A thread is a basic unit of CPU utilization.
• A traditional process has a single thread.
Thread Libraries
• Pthreads
• Either a user-level or a kernel-level library.
• Windows threads
• A kernel-level library.
• Java threads
• Uses a thread library available on the host system.
Pthreads

Thread ID
• The thread ID (TID) is the thread analogue to the process ID (PID).
• Represented by pthread_t.

pthread_t pthread_self(void);
Creating Threads
• pthread_create() defines and launches a new thread.

#include <pthread.h>
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);
Terminating Threads
• A thread terminates itself by calling pthread_exit().
• One thread can request termination of another with pthread_cancel().

#include <pthread.h>
void pthread_exit(void *retval);
int pthread_cancel(pthread_t thread);
Joining and Detaching Threads
• Joining allows one thread to block while waiting for the termination of
another.
• Use join if you care about the value the thread returns when it is done;
use detach if you do not.

#include <pthread.h>
int pthread_join(pthread_t thread, void **retval);
int pthread_detach(pthread_t thread);

[https://computing.llnl.gov/tutorials/pthreads/#Joining]
A Threading Example

#include <pthread.h>
#include <stdio.h>

void *start_thread(void *message)
{
    printf("%s\n", (const char *)message);
    return message;
}

int main(void)
{
    pthread_t thread1, thread2;
    const char *message1 = "Thread 1";
    const char *message2 = "Thread 2";

    pthread_create(&thread1, NULL, start_thread, (void *)message1);
    pthread_create(&thread2, NULL, start_thread, (void *)message2);

    // Wait for the threads to exit.
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
    return 0;
}
Multithreaded Server Architecture

Multi-threading Models

User Threads and Kernel Threads
Multi-Threading Models
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One Model
• Many user-level threads are mapped to a single kernel thread.
One-to-One Model
• Each user-level thread is mapped to one kernel thread.
Many-to-Many Model
• Allows many user-level threads to be mapped to many kernel threads.
Two-Level Model
• Multiplexes many user-level threads to a smaller or equal number of
kernel threads, but also allows a user-level thread to be bound to a
kernel thread.
• Example: versions of Solaris older than Solaris 9.
Threading Issues
Some of the issues to consider in designing multithreaded programs are the
following:
• The fork() and exec() System Calls
• Signal Handling
• Thread Cancellation
• Thread-Pools
• Scheduler Activations
The fork() and exec() System Calls
• If one thread in a program calls fork(), does the new process duplicate
all threads, or is it single-threaded? Some UNIX systems provide two
versions of fork() for this reason.
• If exec() is called immediately after forking, duplicating only the
calling thread is sufficient, since exec() replaces the entire process.
Thread Cancellation
1. Asynchronous cancellation: one thread immediately terminates the target
thread.
2. Deferred cancellation: the target thread periodically checks whether it
should terminate, allowing it an opportunity to terminate itself in an
orderly fashion.
Signal Handling
• A signal is used in UNIX systems to notify a process that a particular
event has occurred.
• A signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler
In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process
Thread-Pools
• A thread pool in an operating system (OS) is a collection
of worker threads that are used to execute tasks in a
program.
Scheduler Activations
• Scheduler activations provide upcalls, a communication mechanism from the
kernel to the thread library, so that an application can maintain the
correct number of kernel threads.