Process Scheduling in Operating Systems
Unit 2
Process Scheduling
Topics: Process Concept, Process Control Block, Types of Scheduler, Context Switch,
threads, multithreading model, goals of scheduling and different scheduling algorithms,
examples from WINDOWS 2000 & LINUX
Process:
Heap
This is the memory that is dynamically allocated to a process during its execution.
Text
This comprises the compiled program code, together with the current activity as represented by the value of the program counter and the contents of the processor's registers.
Data
The global as well as static variables are included in this section.
Attributes of a process
The attributes of a process are used by the operating system to create the process control block (PCB) for that process. This collection of attributes is also called the context of the process. The attributes stored in the PCB are described below.
1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.
2. Program counter
The program counter stores the address of the instruction at which the process was suspended. The CPU uses this address to resume execution of the process from the same point.
3. Process State
The Process, from its creation to the completion, goes through various states which are
new, ready, running and waiting. We will discuss about them later in detail.
4. Priority
Every process has its own priority. The process with the highest priority among the ready processes gets the CPU first. The priority is also stored in the process control block.
5. General Purpose Registers
Every process has its own set of registers which are used to hold the data generated during the execution of the process.
6. List of Open Files
During execution, every process uses some files which need to be present in main memory. The OS maintains a list of open files in the PCB.
7. List of Open Devices
The OS also maintains a list of all open devices which are used during the execution of the process.
When the process is created by the operating system it creates a data structure to
store the information of that process. This is known as Process Control Block
(PCB).
Process Control block (PCB) is a data structure that stores information of a process.
An Operating System helps in process creation, scheduling, and termination with
the help of Process Control Block.
The Process Control Block (PCB), which is part of the Operating System, aids in
managing how processes operate.
Every OS process has a Process Control Block related to it. By keeping data on
different things including their state, I/O status, and CPU Scheduling, a PCB
maintains track of processes.
The process control block contains many attributes such as process ID, process state, process priority, accounting information, program counter, CPU registers, etc. for each process.
OR
1. Process ID:
When a new process is created by the user, the operating system assigns a unique ID, i.e. a process ID, to that process.
This ID helps the process to be distinguished from other processes existing in the system.
2. Process State:
A process, from its creation to completion goes through different states. A process can be
new, ready, running, waiting, etc.
3. Program Counter:
The program counter is a pointer that points to the next instruction in the program to be
executed. This attribute of PCB contains the address of the next instruction to be
executed in the process.
5. Process Priority:
Process priority is a numeric value that represents the priority of each process.
The lower the value, the higher the priority of that process.
This priority is assigned at the time of the creation of the PCB and may depend on many
factors like the age of that process, the resources consumed, and so on. The user can also
externally assign a priority to the process.
6. PCB pointer:
This field contains the address of the next PCB that is in the ready state. It helps the operating system link PCBs together and maintain the control flow between parent processes and child processes.
7. List of open files:
As the name suggests, it contains information on all the files that are used by that process. This field is important as it helps the operating system close all the opened files when the process terminates.
8. Process I/O information:
In this field, the list of all the input/output devices which are required by that process
during its execution is mentioned.
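To make these attributes concrete, the following is a minimal sketch of how a PCB might be laid out as a C structure. The field names, sizes, and types are purely illustrative assumptions and do not correspond to any particular operating system's PCB.

/* Illustrative sketch of a Process Control Block; field names and sizes
 * are hypothetical and not taken from any real operating system. */

#define MAX_OPEN_FILES 16

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;                        /* 1. unique process ID                   */
    enum proc_state state;                      /* 2. current process state               */
    unsigned long   program_counter;            /* 3. address of the next instruction     */
    unsigned long   registers[16];              /* saved general purpose registers        */
    int             priority;                   /* 5. lower value = higher priority       */
    struct pcb     *next;                       /* 6. pointer to the next ready PCB       */
    int             open_files[MAX_OPEN_FILES]; /* 7. descriptors of the open files       */
    int             io_device;                  /* 8. device the process is waiting on    */
};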
Process States:
A process passes through several states from its creation to its completion; there are at least five such states. The names of these states are not standardized across operating systems, but every process goes through them during its life cycle.
The states of a process are as follows:
1. New
2. Ready
3. Run
4. Wait/Block
5. Completed/ Terminated
New State
This is the first state of the process life cycle. When process creation is taking place, the
process is in a new state.
Ready State
When the process creation gets completed, the process comes into a ready state. During
this state, the process is loaded into the main memory and will be placed in the queue of
processes which are waiting for the CPU allocation.
In short, while the process is being created it is in the new state, and once its creation is complete it moves to the ready state.
Running State
Whenever the CPU is allocated to the process from the ready queue, the process state
changes to Running.
While executing its instructions, the process might need to carry out a few tasks that do not require the CPU.
Block or Wait State
If the process needs to perform an input-output task, or needs a resource that is already held by another process, the CPU is taken away from it and its state is changed to Block or Wait.
The process is then placed in the queue of processes that are in the waiting or block state in main memory.
Terminated or Completed
When the entire set of instructions has been executed, the process is complete and its state is changed to terminated or completed.
During this state the PCB of the process is also deleted.
It is possible that there are multiple processes present in the main memory at the same
time.
Suspend Ready
Whenever main memory is full, a process that is in the ready state is swapped out from main memory to secondary memory. When a ready process goes through this transition from main memory to secondary memory, its state is changed to Suspend Ready.
Suspend Wait or Suspend Blocked
It is also possible for a process that is in the waiting or blocked state to be swapped out to secondary memory. Whenever a process in the waiting or blocked state in main memory is swapped out to secondary memory because main memory is completely full, its state is changed to Suspend Wait or Suspend Blocked.
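As a rough illustration of these states and the transitions between them, the following C sketch encodes the life cycle described above. The enum values and the valid_transition helper are hypothetical names chosen for this example, not part of any real operating system interface.

#include <stdio.h>

/* Hypothetical sketch of the process life cycle states discussed above. */
enum state { NEW_S, READY_S, RUN_S, WAIT_S, TERMINATED_S, SUSPEND_READY_S, SUSPEND_WAIT_S };

/* Returns 1 if the transition from one state to another is allowed. */
static int valid_transition(enum state from, enum state to)
{
    switch (from) {
    case NEW_S:           return to == READY_S;
    case READY_S:         return to == RUN_S || to == SUSPEND_READY_S;
    case RUN_S:           return to == WAIT_S || to == READY_S || to == TERMINATED_S;
    case WAIT_S:          return to == READY_S || to == SUSPEND_WAIT_S;
    case SUSPEND_READY_S: return to == READY_S;
    case SUSPEND_WAIT_S:  return to == SUSPEND_READY_S || to == WAIT_S;
    default:              return 0;
    }
}

int main(void)
{
    printf("RUN -> WAIT allowed? %d\n", valid_transition(RUN_S, WAIT_S));   /* prints 1 */
    printf("WAIT -> RUN allowed? %d\n", valid_transition(WAIT_S, RUN_S));   /* prints 0 */
    return 0;
}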
Schedulers in OS:
A scheduler is a special type of system software that handles process scheduling in
numerous ways. It mainly selects the jobs that are to be submitted into the system and decides whether the currently running process should keep running or not; if not, it decides which process should run next. A scheduler makes a decision in the following situations:
When the state of the current process changes from running to waiting due to an I/O request or some other unsatisfied resource request.
If the current process terminates.
When the scheduler needs to move a process from running to ready state because it has already run for its allotted interval of time.
Process Scheduling:
Categories of Scheduling:
There are two categories of scheduling:
1. Non-preemptive: Here a resource cannot be taken from a process until the process completes execution. The switching of resources occurs only when the running process terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During resource allocation, the process can switch from the running state to the ready state or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes and replace the currently running process with a higher-priority process.
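To illustrate the non-preemptive case, the following small C program sketches First Come First Served scheduling: each process keeps the CPU until its burst finishes, and the waiting and turnaround times are computed. The burst times are made-up sample values used only for this example.

#include <stdio.h>

/* A minimal sketch of non-preemptive scheduling: First Come First Served.
 * The CPU is never taken away from a process until it finishes, which is
 * what "non-preemptive" means. */
int main(void)
{
    int burst[] = {24, 3, 3};           /* CPU bursts of P1, P2, P3 in ms (sample values) */
    int n = 3;
    int waiting = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i];   /* completion time, assuming arrival at 0 */
        printf("P%d: waiting = %2d ms, turnaround = %2d ms\n",
               i + 1, waiting, turnaround);
        total_wait += waiting;
        total_tat  += turnaround;
        waiting    += burst[i];                /* the next process starts after this one */
    }
    printf("Average waiting time    = %.2f ms\n", (double)total_wait / n);
    printf("Average turnaround time = %.2f ms\n", (double)total_tat / n);
    return 0;
}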
o Job queue − this queue keeps all the processes in the system.
The OS scheduler determines how to move processes between the ready queue and the run queue, which can have only one entry per processor core on the system; in diagrams the run queue is often merged with the CPU itself.
Long Term Scheduler (Job Scheduler)
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may not be available or may be minimal. Time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready.
Short Term Scheduler (CPU Scheduler)
It handles the change of a process from the ready state to the running state. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
The three schedulers also differ in speed: the short-term scheduler is the fastest of the three, the long-term scheduler is slower than the short-term scheduler, and the medium-term scheduler lies in between the two.
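As a rough sketch of what the short-term scheduler does, the following C fragment picks the highest-priority process from a ready queue represented as a linked list of PCBs. The struct layout and the select_next function are illustrative assumptions, not code from a real kernel.

#include <stddef.h>

/* Hypothetical sketch of a short-term scheduler's selection step:
 * walk the ready queue (a linked list of PCBs, as described earlier)
 * and pick the process with the highest priority (lowest value). */
struct pcb {
    int pid;
    int priority;          /* lower value = higher priority */
    struct pcb *next;      /* next PCB in the ready queue   */
};

struct pcb *select_next(struct pcb *ready_queue)
{
    struct pcb *best = ready_queue;
    for (struct pcb *p = ready_queue; p != NULL; p = p->next)
        if (p->priority < best->priority)
            best = p;
    return best;           /* the process the CPU is allocated to next */
}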
Context Switching
Context switching is a technique or method used by the operating system to switch the CPU from one process to another so that each process can carry out its work using the CPUs in the system.
When a switch is performed, the system stores the status of the old running process in the form of registers and assigns the CPU to a new process to execute its tasks.
There are several steps involved in context switching between processes. The following steps describe the context switching of two processes, P1 and P2, when an interrupt occurs, I/O is needed, or a higher-priority process arrives in the ready queue.
Initially, the P1 process is running on the CPU to execute its task, and at the same time another process, P2, is in the ready state. If an error or interruption occurs, or the process requires input/output, the P1 process switches from the running state to the waiting state. The context switch then proceeds as follows:
1. First, the context switch needs to save the state of process P1, which is in the running state, in the form of the program counter and the registers to its PCB (Process Control Block).
2. Now the OS updates PCB1 of process P1 and moves the process to the appropriate queue, such as the ready queue, I/O queue, or waiting queue.
3. After that, a new process is selected from the ready state to be executed next, for example because it has a high priority.
4. Now, we have to update the PCB (Process Control Block) of the selected process P2. This includes switching its state from ready to running, or from another state like blocked, exit, or suspended.
5. If process P2 was executing on the CPU earlier, its saved status is restored so that it resumes execution at the exact point where it was interrupted.
Similarly, process P2 is later switched off from the CPU so that process P1 can resume execution. The P1 process is reloaded from PCB1 into the running state and resumes its task at the same point. If the context were not saved, the information would be lost, and when the process ran again it would have to start execution from the beginning.
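The following C sketch mirrors steps 1 and 5 above: the running process's context is copied into its PCB, and the selected process's saved context is copied back onto the CPU. The cpu_context structure and the save_context/restore_context functions are hypothetical names for this illustration; real context switching is performed by architecture-specific kernel code.

/* Illustrative sketch of the save/restore performed during a context switch. */
struct cpu_context {
    unsigned long program_counter;
    unsigned long registers[16];
};

struct pcb {
    int pid;
    struct cpu_context context;      /* the saved context lives in the PCB */
};

/* Step 1: save the running process's context into its PCB. */
void save_context(struct pcb *p, const struct cpu_context *cpu)
{
    p->context = *cpu;
}

/* Step 5: reload the selected process's saved context onto the CPU so it
 * resumes at exactly the point where it was interrupted. */
void restore_context(const struct pcb *p, struct cpu_context *cpu)
{
    *cpu = p->context;
}

void context_switch(struct pcb *old, struct pcb *new_proc, struct cpu_context *cpu)
{
    save_context(old, cpu);          /* P1's state goes into PCB1         */
    restore_context(new_proc, cpu);  /* P2's state is loaded onto the CPU */
}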
Threads:
A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, its own system registers which hold its current working variables, and its own stack which contains its execution history.
A thread shares with its peer threads information such as the code segment, the data segment and open files. When one thread alters a shared memory item, all other threads see that change.
Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been
successfully used in implementing network servers and web servers. They also
provide a suitable foundation for parallel execution of applications on shared
memory multiprocessors. The following figure shows the working of a single-
threaded and a multithreaded process.
In multiple processes, each process operates independently of the others; with threads, one thread can read, write or change another thread's data.
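The following POSIX threads example (compiled with gcc and the -pthread flag) shows two threads of one process sharing and updating the same global variable, which illustrates that one thread can read, write, or change data that another thread sees. The variable and thread names are chosen only for this example.

#include <pthread.h>
#include <stdio.h>

/* Both threads run inside the same process and share the global variable,
 * so a change made by one thread is visible to the other. */
int shared_counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock);      /* shared data needs synchronization */
        shared_counter++;
        printf("%s sees counter = %d\n", name, shared_counter);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final counter = %d\n", shared_counter);   /* prints 6 */
    return 0;
}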
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Types of Thread
Threads are implemented in the following two ways −
User Level Threads − User managed threads.
Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
Advantages of User Level Threads
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages of User Level Threads
In a typical operating system, most system calls are blocking, so when one user level thread makes a blocking call the entire process is blocked.
A multithreaded application cannot take advantage of multiprocessing.
Advantages of Kernel Level Threads
Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Kernel routines themselves can be multithreaded.
Disadvantages of Kernel Level Threads
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user level thread and kernel level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models are of three types: many-to-many, many-to-one, and one-to-one.
The many-to-many model multiplexes many user level threads onto an equal or smaller number of kernel level threads. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.
The many-to-one model maps many user level threads to a single kernel level thread. If the user level thread library is implemented on an operating system whose kernel does not support kernel threads, the many-to-one model is used.
Difference between User Level Threads and Kernel Level Threads
User level threads are faster to create and manage, whereas kernel level threads are slower to create and manage.
User level threads are generic and can run on any operating system, whereas kernel level threads are specific to the operating system.