Operating Systems
10EC65
Unit 3
Process Management
Reference Book:
Operating Systems - A Concept based Approach
D. M. Dhamdhere, TMH, 3rd Edition, 2010
Shrishail Bhat, AITM Bhatkal
Introduction: Process Concept
A process is an execution of a program
A programmer uses processes to achieve execution of programs in
a sequential or concurrent manner as desired
An OS uses processes to organize execution of programs
In the programmer view, we discuss how concurrent processes are created and how they interact with one another to meet a common goal
In the OS view, we discuss how an OS creates processes, how it keeps track of process states, and how it uses the process state information to organize execution of programs
A thread is an execution of a program that uses the environment of
a process, i.e., its code, data and resources
Processes and Programs
A program is a passive entity that does not perform any action
by itself; it has to be executed to realize the actions specified in
it
A process is an execution of a program; it actually performs the
actions specified in a program
Processes and Programs (continued)
Relationships between Processes and Programs
The program consists of a main program and a set of functions
The OS is not aware of the existence of functions. Hence
execution of the program constitutes a single process
Child Processes
The OS initiates an execution of a program by creating a process for it, called the main process
The main process may make system calls to create other
processes, which become its child processes
A child process may itself create other processes, and so on.
Child processes and their parents form a process tree
Typically, a process creates one or more child processes and
delegates some of its work to each
Multitasking within an application
Child Processes (continued)
Example: Child Processes in a Real-Time Application
Programmer View of Processes
Processes are a means to achieve concurrent execution of a
program
The main process of a concurrent program creates child
processes
It assigns appropriate priorities to them
The main process and the child processes interact to achieve
their common goal
This interaction may involve exchange of data or may require
the processes to coordinate their activities with one another
Programmer View of Processes (continued)
An OS provides the following operations to implement
programmer view of processes:
Creating child processes and assigning priorities to them
Terminating child processes
Determining the status of child processes
Sharing, communication and synchronization between processes
Sharing, Communication and Synchronization Between Processes
Processes of an application need to interact with one another
because they work towards a common goal
Concurrency and Parallelism
Parallelism: quality of occurring at the same time
Two tasks are parallel if they are performed at the same time
Obtained by using multiple CPUs
As in a multiprocessor system
Concurrency is an illusion of parallelism
Two tasks are concurrent if there is an illusion that they are being
performed in parallel whereas only one of them may be performed at any
time
In an OS, obtained by interleaving operation of processes on the CPU
Both concurrency and parallelism can provide better throughput
OS View of Processes
In the OS view, a process is an execution of a program
The OS creates processes, schedules them for use of the CPU,
and terminates them
To realize this, the OS uses the process state to keep track of what a process is doing at any moment
Whether executing on the CPU
Waiting for the CPU to be allocated to it
Waiting for an I/O operation to complete
Waiting to be swapped into memory
Background Execution of Programs
When a process is scheduled, the CPU executes instructions in
the program
When CPU is to be switched to some other process, the kernel
saves the CPU state
This state is loaded back into the CPU when the process is
scheduled again
Process Definition
Accordingly, a process comprises six components:
(id, code, data, stack, resources, CPU state)
where
id is the unique id assigned by the OS
code is the code of the program (it is also called the text of a program)
data is the data used in the execution of the program, including data from files
stack contains parameters of functions and procedures called during execution of
the program, and their return addresses
resources is the set of resources allocated by the OS
CPU state is composed of contents of the PSW and the general-purpose registers
(GPRs) of the CPU (we assume that the stack pointer is maintained in a GPR)
Controlling Processes
The OS view of a process consists of two parts:
Code and data of the process, including its stack and resources
allocated to it
Information concerning program execution
Process Environment
Also called process context
It contains the address space of a process and all the
information necessary for accessing and controlling resources.
The OS creates a process environment by
allocating memory to the process
loading the process code in the allocated memory
setting up its data space
Process Environment (continued)
Process Control Block (PCB)
A PCB is a kernel data structure that contains information
concerning process id and CPU state
Controlling Processes (continued)
The kernel uses four fundamental functions to
control processes:
1. Context save: Saving CPU state and information
concerning resources of the process whose
operation is interrupted
2. Event handling: Analyzing the condition that led
to an interrupt, or the request by a process that
led to a system call, and taking appropriate
actions
3. Scheduling: Selecting the process to be executed
next on the CPU
4. Dispatching: Setting up access to resources of
the scheduled process and loading its saved CPU
state in the CPU to begin or resume its operation
Process States and Transitions
Process States and Transitions (continued)
A state transition for a process is a change in its state
Caused by the occurrence of some event such as the start or end of
an I/O operation
Process States and Transitions (continued)
Process States and Transitions (continued)
Example: State Transition in a Time Sharing System
A system contains two processes P1 and P2
Suspended Processes
OS needs additional states to describe processes suspended
due to swapping
The process in suspended state is not considered for scheduling
Two typical causes of suspension
A process is moved out of memory, i.e., it is swapped out
The user who initiates a process specifies that a process should not
be scheduled until some condition is satisfied
Suspended Processes (continued)
Process Control Block (PCB) (continued)
The PCB contains all information pertaining to a process that is
used in controlling its operation, accessing resources and
implementing communication with other processes
Process Control Block (PCB) (continued)
Context Save, Scheduling and Dispatching
Context save function:
Saves CPU state in PCB, and saves information concerning process
environment
Changes process state from running to ready
Scheduling function:
Uses process state information from PCBs to select a ready process
for execution and passes its id to dispatching function
Dispatching function:
Sets up environment of the selected process, changes its state to
running, and loads saved CPU state from PCB into CPU
Example - Context Save, Scheduling and Dispatching
An OS contains two processes P1 and P2, with P2 having a
higher priority than P1. Let P2 be blocked on an I/O operation
and let P1 be running. The following actions take place when
the I/O completion event occurs for the I/O operation of P2:
1. The context save function is performed for P1 and its state is changed
to ready
2. Using the event information field of PCBs, the event handler finds
that the I/O operation was initiated by P2, so it changes the state of
P2 from blocked to ready
3. Scheduling is performed: P2 is selected because it is the highest-priority ready process
4. P2's state is changed to running and it is dispatched
Process Switching
Functions 1, 3, and 4 in this example collectively perform
switching between processes P1 and P2
Switching between processes also occurs when a running
process becomes blocked as a result of a request or gets
preempted at the end of a time slice
Switching between processes involves more than saving the
CPU state of one process and loading the CPU state of another
process. The process environment needs to be switched as well.
State information of a process - all the information that needs
to be saved and restored during process switching
Events Pertaining to a Process
The following events occur during the operation of an OS:
1. Process creation event: A new process is created
2. Process termination event: A process finishes its execution
3. Timer event: The timer interrupt occurs
4. Resource request event: A process makes a resource request
5. Resource release event: A resource is released
6. I/O initiation request event: A process wishes to initiate an I/O operation
Events Pertaining to a Process (continued)
7. I/O completion event: An I/O operation completes
8. Message send event: A message is sent by one process to another
9. Message receive event: A message is received by a process
10. Signal send event: A signal is sent by one process to another
11. Signal receive event: A signal is received by a process
12. A program interrupt event: An instruction executed in the running process malfunctions
13. A hardware malfunction event: A unit in the computer's hardware malfunctions
Event Control Block (ECB)
When an event occurs, the kernel must find the process whose
state is affected by it
OSs use various schemes to speed this up
E.g., Event Control Blocks (ECBs)
Example: Use of ECB for Handling I/O Completion
The actions of the kernel when process Pi requests an I/O operation
on some device d, and when the I/O operation completes, are as
follows:
1. The kernel creates an ECB, and initializes it as follows:
a) Event description := end of I/O on device d
b) Process awaiting the event := Pi
2. The newly created ECB (let us call it ECBj) is added to a list of ECBs
3. The state of Pi is changed to blocked and the address of ECBj is put into the Event information field of Pi's PCB
4. When the interrupt "End of I/O on device d" occurs, ECBj is located by searching for an ECB with a matching event description field
5. The id of the affected process, i.e., Pi, is extracted from ECBj. The PCB of Pi is located and its state is changed to ready.
Example: Use of ECB for Handling I/O Completion (continued)
P1 initiates I/O operation on d
Summary of Event Handling
Threads
A thread is an alternative model of program execution
A process creates a thread through a system call
Thread operates within process context
Since a thread is a program execution, it has its own stack and
CPU state
Switching between threads incurs less overhead than switching
between processes
Threads of the same process share code, data and resources
with one another
Threads (continued)
Threads (continued)
Process Pi has three threads represented by wavy lines
The kernel allocates a stack and a thread control block (TCB) to
each thread
The threads execute within the environment of Pi
The OS saves only the CPU state and stack pointer while
switching between threads of the same process
Use of threads effectively splits the process state into two parts
Resource state remains with process
CPU state (Execution state) is associated with thread
Thread Control Block
A TCB is analogous to the PCB and stores the following
information:
Thread scheduling information: thread id, priority and state
CPU state, i.e., contents of the PSW and GPRs
Pointer to PCB of parent process
TCB pointer, which is used to make lists of TCBs for scheduling
Thread States and State Transitions
When a thread is created, it is put in the ready state because
its parent process already has the necessary resources
allocated to it
It enters the running state when it is scheduled
It does not enter the blocked state because of resource
requests, as it does not make any resource requests
However, it can enter the blocked state because of process
synchronization requirement
Advantages of Threads
Implementation of Threads
Kernel-Level Threads
Threads are managed by the kernel
User-Level Threads
Threads are managed by thread library
Hybrid Threads
Combination of kernel-level and user-level threads
Kernel-Level Threads
A kernel-level thread is
implemented by the kernel
Creation, termination and
checking their status is
performed through system
calls
Kernel-Level Threads (continued)
When a process makes a create_thread system call, the kernel
creates a thread, assigns an id to it, and allocates a TCB
The TCB contains a pointer to the PCB of the process
When an event occurs, the kernel saves the CPU state of the
interrupted thread in its TCB
After event handling, the scheduler selects one ready thread
Process context save and load are performed only if the selected thread belongs to a different process than the interrupted thread
If both threads belong to the same process, actions to save and load the process context are unnecessary, which reduces switching overhead
Kernel-Level Threads (continued)
Advantages
A kernel-level thread is like a process except that it has a smaller
amount of state information. This similarity is convenient for
programmers
In a multiprocessor system, kernel-level threads provide parallelism
Better computation speed-up
Disadvantages
Switching between threads is performed as a result of event
handling. Hence it incurs the overhead of event handling even if
the interrupted thread and the selected thread belong to the same
process
User-Level Threads
Implemented by a thread library which is linked
to the code of a process
User-Level Threads (continued)
The library sets up the thread implementation arrangement
without involving the kernel, and interleaves operation of
threads in the process
The kernel is not aware of presence of user-level threads; it
sees only the process
The scheduler considers PCBs and selects a ready process; the
dispatcher dispatches it
User-Level Threads (continued)
Advantages
Thread synchronization and scheduling is implemented by the
thread library. So thread switching overhead is smaller than in
kernel-level threads.
This also enables each process to use a scheduling policy that best
suits its nature
Disadvantages
Blocking a thread would block its parent process; in effect, all threads of the process would get blocked
User-level threads cannot provide parallelism, and concurrency suffers because a blocked thread blocks the entire process
Hybrid Thread Models
It has both user-level threads and kernel-level threads
Can provide a combination of low switching overhead of user-level threads
and high concurrency and parallelism of kernel-level threads
Hybrid Thread Models (continued)
The thread library creates user-level threads in a process and
associates a TCB with each user-level thread
The kernel creates kernel-level threads in a process and
associates a kernel thread control block (KTCB) with each
kernel-level thread
Three methods of association
Many-to-one
One-to-one
Many-to-many
Hybrid Thread Models (continued)
Many-to-one association
A single kernel-level thread is created in a process by the kernel and all
user level threads created in a process by the thread library are associated
with this kernel-level thread
Provides an effect similar to mere user-level threads
One-to-one association
Each user-level thread is permanently mapped into a kernel-level thread
Provides an effect similar to mere kernel-level threads
Many-to-many association
Permits a user-level thread to be mapped into different kernel-level
threads at different times
It provides parallelism and low overhead of switching
Case Studies of Processes and Threads
Processes in Unix
Threads in Solaris
Processes in Unix
Data Structures
Unix uses two data structures to hold control data about
processes
proc structure
Holds scheduling related data
Always held in memory
u area (user area)
Holds data related to resource allocation and signal handling
Needs to be in memory only when the process is executing
Processes in Unix (continued)
Types of Processes
User processes
Execute user computations
Associated with the user's terminal
Daemon processes
Detached from the user's terminal
Run in the background and perform functions on a system-wide basis
Vital in controlling the computational environment of the system
Kernel processes
Execute the code of the kernel
Concerned with background activities of the kernel like swapping
Process Creation and Termination
The system call fork creates a child process and sets up its
context
It allocates a proc structure for the newly created process and
marks its state as ready, and also allocates a u area for the
process
The kernel keeps track of the parent-child relationships using the proc structure
fork returns the id of the child process
Process Creation and Termination (continued)
After booting, the system creates a process init
This process creates a child process for every terminal
connected to the system
After a sequence of exec calls, each child process starts running
the login shell
When a user enters the name of a file at the command line, the shell creates a new process that executes an exec call for the named file, in effect becoming the primary process of the program
Thus the primary process is a child of the shell process
Process Creation and Termination (continued)
The shell process now executes the wait system call to wait for
end of the primary process of the program
Thus it becomes blocked until the program completes, and
becomes active again to accept the next user command
If a shell process performs an exit call to terminate itself, init
creates a new process for the terminal to run the login shell
Process States
Unix permits a user process to create child processes and to synchronize its activities with respect to its child processes
Unix uses two distinct running states.
User running and Kernel running states
A user process executes user code while in the user running state,
and kernel code while in the kernel running state
A process Pi can wait for the termination of a child process
through the system call wait
Process States (continued)
A process Pi can terminate itself through the exit system call
exit (status_code)
The kernel saves the status code in the proc structure of Pi, closes all open files, releases the memory allocated to the process, and destroys its u area
The proc structure is retained until the parent of Pi destroys it
The terminated process is dead but it still exists; hence it is called a zombie process
Processes in Unix (continued)
Threads in Solaris
Solaris is an OS based on Unix System V Release 4 (SVR4)
Uses three kinds of entities to govern concurrency and
parallelism within a process:
User threads
Light weight processes (LWPs)
Provides parallelism within a process
User threads are mapped into LWPs by thread libraries
Kernel threads
Threads in Solaris (continued)
Threads in Solaris (continued)
Solaris originally provided a hybrid thread model that actually
supported all three association methods of hybrid threads
This model has been called the M:N model
Solaris 8 continued to support this model and also provided an alternative 1:1 implementation, which is equivalent to kernel-level threads
Use of the 1:1 model led to simpler signal handling
It also eliminated the need for scheduler activations, and provided better scalability
Hence the M:N model was discontinued in Solaris 9
Interacting Processes: An Advanced Programmer View of Processes
The processes of an application interact with one another to
coordinate their activities or to share some data.
Interacting Processes
Processes that do not interact are independent processes
Process synchronization is a generic term for the techniques
used to delay and resume processes to implement process
interactions
Race Condition
Uncoordinated accesses to shared data may affect consistency
of data
Consider processes Pi and Pj that update the value of ds
through operations ai and aj , respectively:
Operation ai: ds := ds + 10; let fi(ds) represent its result
Operation aj: ds := ds + 5; let fj(ds) represent its result
What situation could arise if they execute concurrently?
An Example of Race Condition
Race conditions
in cases 2 and 3
An Example of Race Condition (continued)
Race Condition (continued)
Race conditions are prevented by ensuring that operations ai
and aj do not execute concurrently
This is called mutual exclusion
Data access synchronization is coordination of processes to
implement mutual exclusion over shared data
Operating Systems, by Dhananjay Dhamdhere
6.73 Copyright 2008
Critical Sections
Mutual exclusion is implemented by using critical sections (CS)
of code
Using CSs causes delays in operation of processes
A process must not execute for too long inside a CS
A process must not make system calls inside a CS that might put it in the blocked state
The kernel must not preempt a process that is engaged in executing a CS
We use a dashed box in the code to mark a CS
Critical Sections (continued)
Properties of a Critical Section Implementation
The progress and bounded wait properties together prevent
starvation
Control Synchronization
Interacting Processes (continued)
Message passing is implemented through system calls
The signals mechanism is implemented along the same lines as
interrupts