
PROCESS

3.1 Concept of processes:

A process is basically a program in execution. The execution of a process must progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.
To put it in simple terms, we write our computer programs in a text file, and when we execute this program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data. The following image shows a simplified layout of a process inside main memory −

FIGURE 3.1

S.N. Component & Description

1 Stack
The process Stack contains the temporary data such as method/function
parameters, return address and local variables.

2 Heap
This is dynamically allocated memory to a process during its run time.

3 Text
This section contains the compiled program code (the executable instructions). The current activity within it is represented by the value of the Program Counter and the contents of the processor's registers.

4 Data
This section contains the global and static variables.
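As a rough illustration (a sketch only, not a definitive layout), the following small C fragment marks which section each kind of object typically occupies:

#include <stdio.h>
#include <stdlib.h>

int counter = 0;                 /* data section: initialized global variable */
static int limit = 10;           /* data section: static variable             */

int add(int a, int b)            /* the compiled instructions of add() and    */
{                                /* main() live in the text section           */
    int sum = a + b;             /* stack: parameters and local variables     */
    return sum;
}

int main(void)
{
    int *buf = malloc(100 * sizeof *buf);   /* heap: allocated at run time    */
    if (buf == NULL)
        return 1;
    buf[0] = add(counter, limit);
    printf("%d\n", buf[0]);
    free(buf);
    return 0;
}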
Program
A program is a piece of code, which may be a single line or millions of lines. A computer program is usually written by a computer programmer in a programming language. For example, here is a simple program written in the C programming language −

#include <stdio.h>
int main() {
    printf("Hello, World!\n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when executed
by a computer. When we compare a program with a process, we can conclude that a process is a
dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm. A collection of computer programs, libraries and related data is referred to as software.

3.2 Process Life Cycle

FIGURE 3.2

Start

This is the initial state when a process is first started/created.

Ready

The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU can be assigned to some other process.
Running

Once the process has been assigned to a processor by the OS scheduler, the process state is set
to running and the processor executes its instructions.

Waiting

Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.

Terminated or Exit

Once the process finishes its execution, or it is terminated by the operating system, it is moved
to the terminated state where it waits to be removed from main memory.

3.3 Process Control Block

A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information
needed to keep track of a process as listed below in the table −

S.N. Information & Description

1 Process State
The current state of the process, i.e., whether it is new, ready, running, waiting, or terminated.

2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each process in the operating system.

4 Pointer
A pointer to the parent process.

5 Program Counter
Program Counter is a pointer to the address of the next instruction to be
executed for this process.

6 CPU registers
The various CPU registers whose contents must be saved for the process so that it can be resumed correctly in the running state.

7 CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.

8 Memory management information
This includes information such as the page table, memory limits and segment tables, depending on the memory management scheme used by the operating system.

9 Accounting information
This includes the amount of CPU time used for process execution, time limits, execution ID, etc.

10 IO status information
This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems. Here is a simplified diagram of a PCB −

FIGURE 3.3

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
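As a rough sketch of how such a structure might be declared in C (the field names and sizes here are illustrative only; a real kernel, such as Linux with its task_struct, keeps far more information), the entries numbered 1 to 10 in the table above could be grouped as follows:

/* Hypothetical PCB layout mirroring the table above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;        /* 1. process state                    */
    unsigned int    privileges;   /* 2. process privileges / mode bits   */
    int             pid;          /* 3. process ID                       */
    struct pcb     *parent;       /* 4. pointer to the parent process    */
    unsigned long   pc;           /* 5. saved program counter            */
    unsigned long   regs[16];     /* 6. saved CPU registers              */
    int             priority;     /* 7. CPU-scheduling information       */
    void           *page_table;   /* 8. memory-management information    */
    unsigned long   cpu_time;     /* 9. accounting information           */
    int             open_fds[16]; /* 10. I/O status information          */
    struct pcb     *next;         /* link used by the scheduling queues  */
};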

3.4 Process Scheduling


Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Process Scheduling Queues


The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue
for each of the process states and PCBs of all processes in the same execution state are placed in
the same queue. When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.

FIGURE 3.4

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
The OS scheduler determines how to move processes between the ready and run queues which
can only have one entry per processor core on the system; in the above diagram, it has been
merged with the CPU.
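As a minimal sketch of how such queues might be kept, assuming the illustrative struct pcb from Section 3.3 with its next link, a FIFO queue of PCBs can be maintained like this (illustrative only, not how any particular OS implements it):

/* Hypothetical FIFO queue of PCBs, reusing the struct pcb sketch above. */
struct queue { struct pcb *head, *tail; };

void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;        /* append behind the current tail      */
    else
        q->head = p;              /* queue was empty                     */
    q->tail = p;
}

struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;        /* unlink from the front               */
        if (q->head == NULL)
            q->tail = NULL;
        p->next = NULL;
    }
    return p;
}

When a running process blocks on I/O, its PCB would be enqueued on the appropriate device queue; when the I/O completes, it would be moved back to the ready queue with the same two operations.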

Two-State Process Model


Two-state process model refers to running and non-running states which are described below −
S.N. State & Description

1 Running
When a new process is created, it enters the system in the running state.

2 Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if it has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
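A schematic dispatcher loop for this two-state model might look as follows. It reuses the illustrative queue helpers sketched above; run_until_interrupt() and has_terminated() are hypothetical stand-ins for the real hardware and kernel mechanisms:

/* Illustrative dispatcher for the two-state model (not real kernel code). */
struct queue not_running = { 0 };                /* the Not Running queue    */

void run_until_interrupt(struct pcb *p);         /* hypothetical: runs p     */
int  has_terminated(const struct pcb *p);        /* hypothetical: p finished? */

void dispatcher(void)
{
    for (;;) {
        struct pcb *p = dequeue(&not_running);   /* pick the next process    */
        if (p == NULL)
            continue;                            /* nothing ready to run     */
        p->state = RUNNING;
        run_until_interrupt(p);                  /* Running until interrupted */
        if (has_terminated(p))
            continue;                            /* completed/aborted: discard */
        p->state = READY;
        enqueue(&not_running, p);                /* back to Not Running       */
    }
}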

Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this situation, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Schedulers

S.N. Long-Term Scheduler / Short-Term Scheduler / Medium-Term Scheduler

1 Long-term: It is a job scheduler.
Short-term: It is a CPU scheduler.
Medium-term: It is a process-swapping scheduler.

2 Long-term: Speed is lesser than the short-term scheduler.
Short-term: Speed is the fastest among the three.
Medium-term: Speed is in between the short-term and long-term schedulers.

3 Long-term: It controls the degree of multiprogramming.
Short-term: It provides less control over the degree of multiprogramming.
Medium-term: It reduces the degree of multiprogramming.

4 Long-term: It is almost absent or minimal in time-sharing systems.
Short-term: It is also minimal in time-sharing systems.
Medium-term: It is a part of time-sharing systems.

5 Long-term: It selects processes from the pool and loads them into memory for execution.
Short-term: It selects those processes which are ready to execute.
Medium-term: It can re-introduce a process into memory so that its execution can be continued.

3.5 Operations on processes

Below we have discussed the two major operations: Process Creation and Process Termination.
Process Creation

Through appropriate system calls, such as fork or spawn, processes may create other processes. The process which creates another process is termed the parent, while the created sub-process is termed its child.
Each process is given an integer identifier, termed the process identifier, or PID. The parent's PID (PPID) is also stored for each process.
On a typical UNIX system the process scheduler is termed sched and is given PID 0. The first thing it does at system start-up time is to launch init, which is given PID 1. init then launches all the system daemons and user logins, and becomes the ultimate parent of all other processes.

FIGURE 3.5

A child process may share some resources with its parent, depending on the system implementation. To prevent runaway children from consuming all of a certain system resource, child processes may or may not be limited to a subset of the resources originally allocated to the parent.

There are two options for the parent process after creating the child:

 Wait for the child process to terminate before proceeding. The parent makes a wait() system call, for either a specific child process or for any child process, which causes the parent process to block until the wait() returns. UNIX shells normally wait for their children to complete before issuing a new prompt.
 Run concurrently with the child, continuing to process without waiting. When a UNIX shell
runs a process as a background task, this is the operation seen. It is also possible for the
parent to run for a while, and then wait for the child later, which might occur in a sort of parallel processing operation.
There are also two possibilities in terms of the address space of the new process:

1. The child process is a duplicate of the parent process.


2. The child process has a program loaded into it.

To illustrate these different implementations, let us consider the UNIX operating system. In
UNIX, each process is identified by its process identifier, which is a unique integer. A new
process is created by the fork system call. The new process consists of a copy of the address
space of the original process. This mechanism allows the parent process to communicate easily
with its child process. Both processes (the parent and the child) continue execution at the
instruction after the fork system call, with one difference: the return code for the fork system call is zero for the new (child) process, whereas the (non-zero) process identifier of the child is returned to the parent.
Typically, the execlp system call is used after the fork system call by one of the two processes to replace the process memory space with a new program. The execlp system call loads a binary file into memory (destroying the memory image of the program containing the execlp system call) and starts its execution. In this manner the two processes are able to communicate, and then go their separate ways.
Below is a C program to illustrate forking a separate process on UNIX (written and tested on Ubuntu):

#include <stdio.h>
#include <stdlib.h>     /* exit()           */
#include <unistd.h>     /* fork(), execlp() */
#include <sys/types.h>  /* pid_t            */
#include <sys/wait.h>   /* wait()           */

int main(void)
{
    pid_t pid;

    /* Fork another process */
    pid = fork();

    if (pid < 0) {
        /* Error occurred */
        fprintf(stderr, "Fork Failed\n");
        exit(1);
    }
    else if (pid == 0) {
        /* Child process: replace its image with /bin/ls */
        execlp("/bin/ls", "ls", NULL);
        /* Reached only if execlp fails */
        fprintf(stderr, "exec failed\n");
        exit(1);
    }
    else {
        /* Parent process: wait for the child to complete */
        wait(NULL);
        printf("Child complete\n");
        exit(0);
    }
}
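Assuming the source is saved as, say, fork_ls.c (an illustrative file name), it can be compiled and run on a Linux system with gcc fork_ls.c -o fork_ls followed by ./fork_ls: the child prints a directory listing produced by /bin/ls, after which the parent prints "Child complete".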

Process Termination

By making the exit() system call, typically returning an int, processes may request their own termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on successful completion and some non-zero code in the event of a problem.
Processes may also be terminated by the system for a variety of reasons, including:

 The inability of the system to deliver the necessary system resources.
 In response to a KILL command or other unhandled process interrupts.
 A parent may kill its children if the task assigned to them is no longer needed, i.e. if the need for having a child has ended.
 If the parent exits, the system may or may not allow the child to continue without a parent (in UNIX systems, orphaned processes are generally inherited by init, which then proceeds to kill them).
When a process ends, all of its system resources are freed up, open files flushed and closed, etc.
The process termination status and execution times are returned to the parent if the parent is
waiting for the child to terminate, or eventually returned to init if the process already became an
orphan.
Processes which have finished execution but whose exit status has not yet been collected by their parent with wait() are termed zombies. If the parent itself exits, these are inherited by init as orphans and reaped (killed off).
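As a small illustration of this status handshake, a parent can retrieve the value a child passed to exit() using waitpid() and the WEXITSTATUS macro; the sketch below is illustrative only:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        exit(42);                     /* child reports termination status 42 */
    } else {
        int status;
        waitpid(pid, &status, 0);     /* block until this child terminates   */
        if (WIFEXITED(status))
            printf("child exited with %d\n", WEXITSTATUS(status));
    }
    return 0;
}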

3.6 Inter Process Communication (IPC)


A process can be of two types:
 Independent process.
 Co-operating process.

An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. Although one might think that processes running independently will execute very efficiently, in practice there are many situations where their co-operative nature can be utilized to increase computational speed, convenience and modularity. Inter-process communication (IPC) is a mechanism which allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other in two ways:
1. Shared Memory
2. Message passing

FIGURE 3.6

Figure 3.6 shows a basic structure of communication between processes via the shared memory method and via message passing.
An operating system can implement both methods of communication. First, we will discuss the shared memory method of communication and then message passing. Communication between processes using shared memory requires the processes to share some variables, and it completely depends on how the programmer implements it. One way of communication using shared memory can be imagined like this: suppose process1 and process2 are executing simultaneously and they share some resources or use some information from each other. process1 generates information about certain computations or resources being used and keeps it as a record in shared memory. When process2 needs to use the shared information, it checks the record stored in shared memory, takes note of the information generated by process1, and acts accordingly. Processes can use shared memory both for extracting information recorded by another process and for delivering specific information to other processes.
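One way this record-keeping can be realized with standard interfaces is POSIX shared memory; the sketch below is illustrative only (the object name /demo_record is made up, System V shmget/shmat would work equally well, and some older systems need the program linked with -lrt):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <sys/mman.h>   /* shm_open, mmap  */
#include <unistd.h>     /* ftruncate, close */

#define SHM_NAME "/demo_record"   /* hypothetical shared-memory object name */
#define SHM_SIZE 4096

int main(void)
{
    /* process1 side: create the shared region and write a record into it */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "result=42");   /* the "record" another process can read */

    /* process2 would shm_open the same name, mmap it, and read the record;
       real code also needs synchronization (e.g. a semaphore).             */
    munmap(region, SHM_SIZE);
    close(fd);
    /* shm_unlink(SHM_NAME) would remove the object when no longer needed.  */
    return 0;
}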

3.7 Context Switch


A context switch is the mechanism used to store and restore the state, or context, of a CPU in the Process Control Block so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.
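Schematically, and reusing the illustrative pcb structure from Section 3.3, the switch amounts to a save followed by a restore; save_registers() and load_registers() below are hypothetical placeholders for the architecture-specific code a real kernel would use:

/* Illustrative only: real context switching is done in architecture-
   specific assembly inside the kernel.                                */
void save_registers(unsigned long *regs, unsigned long *pc);   /* hypothetical */
void load_registers(const unsigned long *regs, unsigned long pc); /* hypothetical */

void context_switch(struct pcb *current, struct pcb *next)
{
    save_registers(current->regs, &current->pc);   /* state -> current PCB */
    current->state = READY;                        /* or WAITING           */

    next->state = RUNNING;
    load_registers(next->regs, next->pc);          /* state <- next PCB    */
}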

FIGURE 3.7

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce the amount of context-switching time, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use:
 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
