Cse Os Unit1 Notes

OPERATING SYSTEM
B.Tech (CSE)
UNIT-I

What is an Operating System?


An operating system is a program that acts as an intermediary between a user of a computer and the computer hardware. An operating system is a collection of system programs that together control the operation of a computer system.

Some examples of operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago, OS/2, MacOS, VMS, MVS, and VM.

Operating system goals:


● Execute user programs and make solving user problems easier.
● Make the computer system convenient to use.
● Use the computer hardware in an efficient manner.

Computer System Components


● Hardware – provides basic computing resources (CPU, memory, I/O devices).
● Operating system – controls and coordinates the use of the hardware
among the various application programs for the various users.
● Application programs – define the ways in which the system resources are used to solve the computing problems of the users (compilers, database systems, video games, business programs).
● Users (people, machines, other computers).

Placement of Operating System in a computer system.

Operating System Definitions


Resource allocator – manages and allocates resources.
Control program – controls the execution of user programs and the operation of I/O devices.
Kernel – the one program running at all times (all else being application programs).
Components of OS: an OS has two parts:
(1) Kernel
(2) Shell

(1) The kernel is the active part of an OS, i.e., it is the part of the OS running at all times. It is the program that interacts directly with the hardware. Examples: device drivers, DLL files, system files, etc.

(2) The shell is called the command interpreter. It is a set of programs used to interact with the application programs, and it is responsible for executing the instructions given to the OS (called commands).

Operating systems can be explored from two viewpoints: the user and the system.
User View: From the user's point of view, the OS is designed for one user to monopolize its resources, to maximize the work the user is performing, and for ease of use.
System View: From the computer's point of view, an operating system is a control program that manages the execution of user programs to prevent errors and improper use of the computer. It is also concerned with the operation and control of I/O devices.

Abstract View of System Components


Functions of Operating System:


Process Management
A process is a program in execution. A process needs certain resources,
including CPU time, memory, files, and I/O devices, to accomplish its task.
The operating system is responsible for the following activities in
connection with process management.
✦ Process creation and deletion.
✦ Process suspension and resumption.
✦ Provision of mechanisms for:
• process synchronization
• process communication

Main-Memory Management
Memory is a large array of words or bytes, each with its own address. It is a
repository of quickly accessible data shared by the CPU and I/O devices.
Main memory is a volatile storage device. It loses its contents in the case of
system failure.
The operating system is responsible for the following activities in connection with memory management:
♦ Keep track of which parts of memory are currently being used and by whom.
♦ Decide which processes to load when memory space becomes available.
♦ Allocate and de-allocate memory space as needed.

File Management

A file is a collection of related information defined by its creator. Commonly,


files represent programs (both source and object forms) and data.
The operating system is responsible for the following activities in connection with file management:

✦ File creation and deletion.


✦ Directory creation and deletion.
✦ Support of primitives for manipulating files and directories.
✦ Mapping files onto secondary storage.
✦ File backup on stable (nonvolatile) storage media.
I/O System Management
The I/O system consists of:
✦ A buffer-caching system
✦ A general device-driver interface
✦ Drivers for specific hardware devices

Secondary-Storage Management
Since main memory (primary storage) is volatile and too small to accommodate
all data and programs permanently, the computer system must provide
secondary storage to back up main memory. Most modern computer systems
use disks as the principal on-line storage medium, for both programs and data.
The operating system is responsible for the following activities in connection
with disk management:
✦ Free space management
✦ Storage allocation
✦ Disk scheduling

Networking (Distributed Systems)


● A distributed system is a collection of processors that do not share
memory or a clock. Each processor has its own local memory.
● The processors in the system are connected through a communication network.
● Communication takes place using a protocol.
● A distributed system provides user access to various system resources.
Access to a shared resource allows:
✦ Computation speed-up
✦ Increased data availability
✦ Enhanced reliability

Protection System
♦ Protection refers to a mechanism for controlling access by programs,
processes, or users to both system and user resources.
The protection mechanism must:
✦ distinguish between authorized and unauthorized usage.
✦ specify the controls to be imposed.
✦ provide a means of enforcement.

Command-Interpreter System
● Many commands are given to the operating system by control
statements which deal with:

✦ process creation and management


✦ I/O handling
✦ secondary-storage management
✦ main-memory management
✦ file-system access
✦ protection
✦ networking
• The program that reads and interprets control statements is called variously:
✦ command-line interpreter
✦ shell (in UNIX)
• Its function is to get and execute the next command statement.

Operating-System Structures
• System Components
• Operating System Services
• System Calls
• System Programs
• System Structure
• Virtual Machines
• System Design and Implementation
• System Generation
Common System Components
• Process Management
• Main Memory Management
• File Management
• I/O System Management
• Secondary-Storage Management
• Networking
• Protection System
• Command-Interpreter System

1. Mainframe Systems
Early mainframe systems reduced setup time by batching similar jobs. Automatic job sequencing – automatically transferring control from one job to another – was the first rudimentary operating system, implemented by a resident monitor:
● initial control is in the monitor
● control transfers to the job
● when the job completes, control transfers back to the monitor
2. Batch Processing Operating System:
This type of OS accepts more than one job, and these jobs are batched/grouped together according to their similar requirements. This is done by the computer operator. Whenever the computer becomes available, the batched jobs are sent for execution, and gradually the output is sent back to the user.
It allowed only one program at a time.
This OS is responsible for scheduling the jobs according to priority and the
resource required.
3. Multiprogramming Operating System:
● This type of OS is used to execute more than one job simultaneously on a single processor. It increases CPU utilization by organizing jobs so that the CPU always has one job to execute.
● The concept of multiprogramming is described as follows:
○ All the jobs that enter the system are stored in the job pool (on disk). The operating system loads a set of jobs from the job pool into main memory and begins to execute them.
○ During execution, the job may have to wait for some task, such as
an I/O operation, to complete. In a multiprogramming system, the
operating system simply switches to another job and executes.
When that job needs to wait, the CPU is switched to another job,
and so on.
○ When the first job finishes waiting, it gets the CPU back.
○ As long as at least one job needs to execute, the CPU is never idle.
Multiprogramming operating systems use the mechanism of job
scheduling and CPU scheduling.
4. Time-Sharing/Multitasking Operating Systems
Time sharing (or multitasking) OS is a logical extension of multiprogramming.
It provides extra facilities such as:
● Faster switching between multiple jobs to make processing faster.
● Allows multiple users to share computer system simultaneously.
● The users can interact with each job while it is running.

These systems use a concept of virtual memory for effective utilization of


memory space. Hence, in this OS, no jobs are discarded. Each one is executed
using virtual memory concept. It uses CPU scheduling, memory management,
disc management and security management. Examples: CTSS, MULTICS, CAL,
UNIX etc.
5. Multiprocessor Operating Systems
Multiprocessor operating systems are also known as parallel OS or tightly coupled OS. Such operating systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices. They execute multiple jobs at the same time and make processing faster. Multiprocessor systems have three main advantages:
● Increased throughput: By increasing the number of processors, the system performs more work in less time. The speed-up ratio with N processors is less than N.
● Economy of scale: Multiprocessor systems can save more money than multiple single-processor systems, because they can share peripherals, mass storage, and power supplies.
● Increased reliability: If one processor fails, each of the remaining processors must pick up a share of the work of the failed processor. The failure of one processor will not halt the system, only slow it down.

The ability to continue providing service proportional to the level of


surviving hardware is called graceful degradation. Systems designed for
graceful degradation are called fault tolerant.

The multiprocessor operating systems are classified into two categories:


1. Symmetric multiprocessing system
2. Asymmetric multiprocessing system
In symmetric multiprocessing system, each processor runs an identical copy of
the operating system, and these copies communicate with one another as
needed.
In an asymmetric multiprocessing system, a master processor controls the other processors, called slave processors, establishing a master-slave relationship. The master processor schedules the jobs and manages the memory for the entire system.
6. Distributed Operating Systems
In distributed system, the different machines are connected in a network
and each machine has its own processor and own local memory.
In this system, the operating systems on all the machines work together to
manage the collective network resource.
It can be classified into two categories:
1. Client-Server systems
2. Peer-to-Peer systems
Advantages of distributed systems:
● Resource sharing
● Computation speed-up (load sharing)
● Reliability
● Communication
Distributed systems require a networking infrastructure: local area networks (LAN) or wide area networks (WAN).
7. Desktop Systems/Personal Computer Systems
The PC operating system is designed for maximizing user convenience and
responsiveness. This system is neither multi-user nor multitasking.
These systems include PCs running Microsoft Windows and the Apple
Macintosh.
The MS-DOS operating system from Microsoft has been superseded by
multiple flavors of Microsoft Windows and IBM has upgraded MS-DOS to
the OS/2 multitasking system.
The Apple Macintosh operating system has been ported to more advanced
hardware, and now includes new features such as virtual memory and
multitasking.

8. Real-Time Operating Systems (RTOS)


A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems. Real-time operating systems can be classified into two categories:
1. hard real-time systems, and
2. soft real-time systems.
A hard real-time system guarantees that critical tasks be completed on time.
This goal requires that all delays in the system be bounded, from the retrieval
of stored data to the time that it takes the operating system to finish any
request made of it. Such time constraints dictate the facilities that are
available in hard real-time systems.
A soft real-time system is a less restrictive type of real-time system. Here, a
critical real-time task gets priority over other tasks and retains that priority until
it completes. Soft real time system can be mixed with other types of systems.
Due to less restriction, they are risky to use for industrial control and robotics.
Operating System Services
Following are five services provided by operating systems for the convenience of users.
1. Program Execution
The purpose of computer systems is to allow the user to
execute programs. So the operating system provides an
environment where the user can conveniently run programs.
Running a program involves allocating and deallocating
memory, and CPU scheduling in the case of multiprogramming.
2. I/O Operations
Each program requires input and produces output. This involves the use of I/O. The operating system provides I/O services, making it convenient for users to run programs.
3. File System Manipulation
The output of a program may need to be written into new files or input taken from
some files. The operating system provides this service.
4. Communications
The processes need to communicate with each other to exchange information during
execution. It may be between processes running on the same computer or running on
the different computers. Communication can occur in two ways: (i) shared
memory or (ii) message passing.
5. Error Detection
An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions.

Following are the three services provided by operating systems for


ensuring the efficient operation of the system itself.
1. Resource allocation
When multiple users are logged on the system or multiple jobs are running at the
same time, resources must be allocated to each of them. Many different types of
resources are managed by the operating system.
2. Accounting
The operating systems keep track of which users use how many and which kinds
of computer resources. This record keeping may be used for accounting (so that
users can be billed) or simply for accumulating usage statistics.

3. Protection
When several disjoint processes execute concurrently, it should not be possible for one process to interfere with the others, or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important. Such security starts with each user having to authenticate himself or herself to the system, usually by means of a password, to be allowed access to the resources.

System Call:

● System calls provide an interface between the process and the operating
system.
● System calls allow user-level processes to request some services from the
operating system which process itself is not allowed to do.
● For example, for I/O a process makes a system call telling the operating system to read from or write to a particular area, and this request is satisfied by the operating system.
● The following are some of the types of system calls provided by an operating system:
• create process, terminate process
• get process attributes, set process attributes
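As an illustration, several of these calls are exposed almost directly on POSIX systems and can be invoked from Python's os module. The sketch below (a minimal example, not part of the original notes) requests two kernel services a user-level program cannot perform by itself: reading a process attribute and performing raw I/O.

```python
import os

# Get a process attribute: os.getpid() wraps the getpid(2) system call,
# asking the kernel for this process's identifier.
pid = os.getpid()

# Perform I/O: os.write() wraps the write(2) system call, asking the
# kernel to send bytes to file descriptor 1 (standard output).
msg = b"hello from a system call\n"
n = os.write(1, msg)

# The kernel reports back how many bytes it actually wrote.
assert n == len(msg)
```

Both calls trap into the kernel; the user process merely supplies the request and receives the result.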

Process Concept
• Informally, a process is a program in execution. A process is more than the program
code, which is sometimes known as the text section. It also includes the current
activity, as represented by the value of the program counter and the contents of the
processor's registers. In addition, a process generally includes the process stack,
which contains temporary data (such as method parameters, return addresses, and
local variables), and a data section, which contains global variables.
• An operating system executes a variety of programs:
✦ Batch system – jobs
✦ Time-shared systems – user programs or tasks
• Process – a program in execution; process execution must progress in sequential fashion.
• A process includes: program counter, stack, data section.
Process State
As a process executes, it changes state
New State: The process is being created.
Running State: A process is said to be running if it has the CPU, that is, the process is actually using the CPU at that particular instant.
Blocked (or Waiting) State: A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. (Note that a blocked process is unable to run until some external event happens.)
Ready State: A process is said to be ready if it needs the CPU to execute. A ready-state process is runnable but temporarily stopped to let another process run.
Terminated state: The process has finished Execution.
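The legal moves between these states form a small transition diagram. The sketch below (hypothetical helper names, not from the notes) encodes the transitions described above and rejects impossible ones, such as a waiting process jumping straight back onto the CPU.

```python
# Allowed process-state transitions, as described above.
TRANSITIONS = {
    "new":        {"ready"},                           # admitted by the OS
    "ready":      {"running"},                         # dispatched to the CPU
    "running":    {"ready", "waiting", "terminated"},  # preempted / blocks on I/O / exits
    "waiting":    {"ready"},                           # awaited event (e.g. I/O) completes
    "terminated": set(),                               # no way back
}

def can_transition(src: str, dst: str) -> bool:
    """Return True if a process may move from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())
```

For example, can_transition("running", "waiting") is True, while can_transition("waiting", "running") is False: a process whose I/O completes must re-enter the ready queue and be dispatched again before it runs.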

What is the difference between process and program?


1) Both are the same beast with different names: when this beast is sleeping (not executing) it is called a program, and when it is executing it becomes a process.
2) Program is a static object whereas a process is a dynamic object.
3) A program resides in secondary storage whereas a process resides in main memory.
4) The life span of a program is unlimited, but the life span of a process is limited.
5) A process is an 'active' entity whereas a program is a 'passive' entity.
6) A program is an algorithm expressed in a programming language, whereas a process is expressed in assembly or machine language.

Diagram of Process State


Process Control Block (PCB)
Information associated with each process.

• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information

Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be
executed for this process.
CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information: This information includes a process priority, pointers
to scheduling queues, and any other scheduling parameters.
Memory-management information: This information may include such information
as the value of the base and limit registers, the page tables, or the segment tables,
depending on the memory system used by the operating system.
Accounting information: This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
I/O status information: The information includes the list of I/O devices allocated to this process, a list of open files, and so on.
The PCB simply serves as the repository for any information that may vary from
process to process.
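The fields listed above can be grouped into a single record per process. The dataclass below is a simplified, hypothetical rendering of a PCB (the field names are illustrative, not any real kernel's layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified Process Control Block holding the per-process
    information listed above (illustrative field names)."""
    pid: int
    state: str = "new"           # new, ready, running, waiting, terminated
    program_counter: int = 0     # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    priority: int = 0            # CPU-scheduling information
    base_register: int = 0       # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0   # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

# A newly created process starts in the "new" state with empty I/O status.
pcb = PCB(pid=42, priority=1)
```

On a context switch, the kernel would save the running process's registers and program counter into its PCB and restore them from the PCB of the process being dispatched.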

Process Scheduling Queues


Job Queue: This queue consists of all processes in the system; processes entering the system are added to it as new processes.
Ready Queue: This queue consists of the processes that are residing in main memory and are ready and waiting to be executed by the CPU. This queue is generally stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.
Device Queue: This queue consists of the processes that are waiting for a particular I/O device. Each device has its own device queue.

Representation of Process Scheduling

Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted until the
first response is
produced, not output (for time-sharing environment)
Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time

Schedulers
A scheduler is a decision maker that selects processes from one scheduling queue to another or allocates the CPU for execution. The operating system has three types of schedulers:
1. Long-term scheduler or job scheduler
2. Short-term scheduler or CPU scheduler
3. Medium-term scheduler

Long-term scheduler or Job scheduler


The long-term scheduler or job scheduler selects processes from disk and loads them into main memory for execution. It executes much less frequently. It controls the degree of multiprogramming (i.e., the number of processes in memory). Because of the longer interval between executions, the long-term scheduler can afford to take more time to select a process for execution.
Short-term scheduler or CPU scheduler
The short-term scheduler or CPU scheduler selects a process from among the
processes that are ready to
execute and allocates the CPU.
The short-term scheduler must select a new process for the CPU frequently. A process
may execute for only a
few milliseconds before waiting for an I/O request.
Medium-term scheduler
The medium-term scheduler schedules the processes as intermediate level of
scheduling Processes can be described as either:
✦ I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts.
✦ CPU-bound process – spends more time doing computations; few very long CPU
bursts.

Context Switch
• When CPU switches to another process, the system must save the state of the old
process and load the
saved state for the new process.
• Context-switch time is overhead; the system does no useful work while switching.
• Time dependent on hardware support.
Process Creation
A parent process creates child processes, which, in turn, create other processes, forming a tree of processes.
Resource sharing:
✦ Parent and children share all resources.
✦ Children share subset of parent’s resources.
✦ Parent and child share no resources.
Execution:
✦ Parent and children execute concurrently.
✦ Parent waits until children terminate.
Address space:
✦ Child duplicate of parent.
✦Child has a program loaded into it.
UNIX examples
✦ fork system call creates new process
✦ exec system call used after a fork to replace the process’ memory space with a new
program.
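The fork/exec pattern can be demonstrated on any POSIX system. The sketch below (POSIX-only: os.fork is unavailable on Windows; the helper name is illustrative) forks a child, replaces the child's memory space with a new program via exec, and has the parent wait for the child to terminate.

```python
import os
import sys

def run_in_child(argv):
    """Fork a child, exec a new program in it, and return its exit status."""
    pid = os.fork()               # fork(2): create a duplicate of this process
    if pid == 0:
        # Child: replace this process's memory space with a new program.
        os.execvp(argv[0], argv)  # execvp(3) only returns if the exec failed
        os._exit(127)             # exec failed: exit without cleanup
    # Parent: wait until the child terminates, cf. wait(2).
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

# Run a trivial program (the Python interpreter itself) in a child process.
code = run_in_child([sys.executable, "-c", "pass"])
```

Note how the parent and child share no address space after the exec: the child becomes an entirely new program, and the only information flowing back to the parent is the exit status collected by wait.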
Process Termination
• Process executes last statement and asks the operating system to delete it (exit).
✦ Output data from child to parent (via wait).
✦ Process’ resources are deallocated by operating system.
• Parent may terminate execution of children processes (abort).
✦ Child has exceeded allocated resources.
✦ Task assigned to child is no longer required.
✦ Parent is exiting.
• Some operating systems do not allow a child to continue if its parent terminates; terminating all children of a terminating parent is called cascading termination.

Cooperating Processes
The concurrent processes executing in the operating system may be either independent
processes or
cooperating processes. A process is independent if it cannot affect or be affected by
the other processes executing
in the system. Clearly, any process that does not share any data (temporary or
persistent) with any other process
is independent. On the other hand, a process is cooperating if it can affect or be
affected by the other processes
executing in the system. Clearly, any process that shares data with other processes is a
cooperating process.
Advantages of process cooperation
Information sharing: Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an environment to allow
concurrent access to these types of resources.
Computation speedup: If we want a particular task to run faster, we must break it
into subtasks, each of which will be executing in parallel with the others. Such a
speedup can be achieved only if the computer has multiple processing elements (such
as CPUs or I/O channels).
Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
Convenience: Even an individual user may have many tasks on which to work at one
time. For instance, a user may be editing, printing, and compiling in parallel.
Concurrent execution of cooperating processes requires mechanisms that allow
processes to communicate with one another and to synchronize their actions.

CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects for execution one of the processes waiting in the ready queue.

Objectives of Process Scheduling Algorithm


● Utilization of CPU at maximum level.
● Keep CPU as busy as possible.
● Allocation of CPU should be fair.
● Throughput should be Maximum. i.e. Number of processes that complete
their execution per time unit should be maximized.
● Minimum turnaround time, i.e. time taken by a process to finish
execution should be the least.
● There should be a minimum waiting time and the process should not
starve in the ready queue.
● Minimum response time, i.e. the time at which a process produces its first response should be as low as possible.

Terminologies Used in CPU Scheduling


● Arrival Time: Time at which the process arrives in the ready queue.
● Completion Time: Time at which process completes its execution.
● Burst Time: Time required by a process for CPU execution.
● Turn Around Time: Time Difference between completion time and arrival
time.
○ Turn Around Time = Completion Time – Arrival Time
● Waiting Time(W.T): Time Difference between turn around time and burst
time.
○ Waiting Time = Turn Around Time – Burst Time
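The two formulas above compose directly. The helper functions below (illustrative names, not from the notes) compute them for a single process:

```python
def turnaround_time(completion, arrival):
    # Turn Around Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(completion, arrival, burst):
    # Waiting Time = Turn Around Time - Burst Time
    return turnaround_time(completion, arrival) - burst

# Example: a process arriving at t=2 with burst time 4 that completes at t=10
tat = turnaround_time(10, 2)   # 10 - 2 = 8
wt = waiting_time(10, 2, 4)    # 8 - 4 = 4
```

In a non-preemptive schedule the waiting time equals the time the process sits in the ready queue, i.e. its start time minus its arrival time.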

The Different Types of CPU Scheduling Algorithms


There are mainly two types of scheduling methods:

● Preemptive Scheduling: Preemptive scheduling is used when a process


switches from running state to ready state or from the waiting state to the
ready state.
● Non-Preemptive Scheduling: Non-Preemptive scheduling is used when a
process terminates , or when a process switches from running state to
waiting state.
1. First Come First Serve
FCFS is considered to be the simplest of all operating system scheduling algorithms. The first come first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue.

Characteristics of FCFS

● FCFS is a non-preemptive CPU scheduling algorithm.


● Tasks are always executed on a First-come, First-serve concept.
● FCFS is easy to implement and use.
● This algorithm is not very efficient in performance, and the wait time is
quite high.

Advantages of FCFS

● Easy to implement
● First come, first serve method

Disadvantages of FCFS

● FCFS suffers from the Convoy effect.


● The average waiting time is much higher than the other algorithms.
● FCFS is very simple and easy to implement, but this simplicity comes at the cost of efficiency.

Convoy Effect

The Convoy Effect is a phenomenon in CPU scheduling where short processes get
stuck waiting behind a long process in the ready queue, causing inefficient CPU
utilization and increased waiting times for all processes. This effect is common in
non-preemptive scheduling algorithms like First-Come, First-Served (FCFS),
where once a process starts running, it holds the CPU until completion, regardless of
the length of other processes in the queue.
Why Does the Convoy Effect Happen?

The convoy effect occurs because non-preemptive scheduling algorithms do not allow
the CPU to switch from a long-running process to a shorter one. This leads to
situations where a long-running process creates a "convoy" of short processes that
have to wait, even though they could be executed much more quickly.

Example of Convoy Effect

Consider three processes with the following arrival and burst times:

Process   Arrival Time   Burst Time
P1        0              15
P2        1              3
P3        2              2
In a First-Come, First-Served (FCFS) scheduling algorithm, P1 starts execution first since it arrives at time 0. P2 and P3 must wait for P1 to finish, even though they have shorter burst times. The Gantt chart would look like this:

| P1 | P2 | P3 |
0    15   18   20

● Completion Time (CT):


○ P1 = 15, P2 = 18, P3 = 20
● Turnaround Time (TAT) = Completion Time - Arrival Time:
○ P1 = 15 - 0 = 15
○ P2 = 18 - 1 = 17
○ P3 = 20 - 2 = 18
● Waiting Time (WT) = Turnaround Time - Burst Time:
○ P1 = 15 - 15 = 0
○ P2 = 17 - 3 = 14
○ P3 = 18 - 2 = 16
The convoy effect is evident here because P2 and P3, which have much shorter burst
times, suffer from long waiting times due to P1's execution.
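The numbers above can be reproduced mechanically. The short simulation below (an illustrative sketch, not part of the original notes) runs FCFS over the three processes and recomputes the completion, turnaround, and waiting times:

```python
def fcfs(processes):
    """FCFS schedule. processes: list of (name, arrival, burst),
    already ordered by arrival time.
    Returns {name: (completion, turnaround, waiting)}."""
    t = 0
    results = {}
    for name, arrival, burst in processes:
        # The CPU is held until completion: start when free AND arrived.
        t = max(t, arrival) + burst
        results[name] = (t, t - arrival, t - arrival - burst)
    return results

result = fcfs([("P1", 0, 15), ("P2", 1, 3), ("P3", 2, 2)])
# result == {"P1": (15, 15, 0), "P2": (18, 17, 14), "P3": (20, 18, 16)}
```

The recomputed waiting times (0, 14, 16) match the worked example: the short jobs P2 and P3 inherit almost all of P1's burst time as waiting time.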

Effects of the Convoy Effect

● Increased Waiting Time: Short processes are forced to wait behind long
processes, increasing their waiting time significantly.
● Poor CPU Utilization: The CPU might spend unnecessary time processing
long tasks while several short tasks could have been completed in that time.
● Decreased Throughput: Fewer processes are completed per unit of time due
to long waits.

How to Mitigate the Convoy Effect

Scheduling algorithms like Shortest Job First (SJF) and Round Robin (RR) help to
reduce or eliminate the convoy effect by allowing shorter processes to run before
longer ones (in the case of SJF) or by time-slicing the CPU among processes (in the
case of RR).

The convoy effect demonstrates the limitations of simple scheduling policies like
FCFS and highlights the importance of balancing fairness and efficiency in CPU
scheduling.
2. Shortest Job First (SJF)

Shortest Job First (SJF) is a scheduling algorithm that selects the waiting process with
the smallest execution time to execute next. This scheduling method may or may not
be preemptive. It significantly reduces the average waiting time of the processes
waiting to be executed.

Characteristics of SJF

Shortest Job first has the advantage of having a minimum average waiting time among
all operating system scheduling algorithms.

Each task is associated with the amount of time it needs to complete.

It may cause starvation if shorter processes keep coming. This problem can be solved
using the concept of ageing.
Advantages of SJF

As SJF reduces the average waiting time, it is better than the first-come, first-served
scheduling algorithm.

SJF is generally used for long-term scheduling.

Disadvantages of SJF

One of the demerits of SJF is starvation.

It is often difficult to predict the length of the upcoming CPU request.
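A minimal sketch of non-preemptive SJF: at each scheduling decision, the shortest burst among the processes that have already arrived is chosen. The workload is hypothetical:

```python
def sjf(processes):
    """Non-preemptive SJF: of the processes that have already arrived,
    run the one with the smallest burst time to completion.
    processes: list of (name, arrival, burst)."""
    pending = sorted(processes, key=lambda p: p[1])   # sort by arrival
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                                 # CPU idle: jump ahead
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])          # shortest burst wins
        pending.remove(job)
        time += job[2]
        order.append(job[0])
    return order

print(sjf([("P1", 0, 15), ("P2", 1, 3), ("P3", 2, 2)]))
# ['P1', 'P3', 'P2']
```

Note that P1 still runs first because it is the only process present at time 0; non-preemptive SJF only reorders the jobs that are already waiting.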

3. Longest Job First (LJF)

The Longest Job First (LJF) scheduling process is just the opposite of Shortest Job
First (SJF): as the name suggests, this algorithm processes the process with the
largest burst time first. In its basic form, Longest Job First is non-preemptive.

Characteristics of LJF

● Among all the processes waiting in the ready queue, the CPU is always
assigned to the process with the largest burst time.
● If two processes have the same burst time, the tie is broken using FCFS,
i.e. the process that arrived first is processed first.
● LJF also has a preemptive variant, known as Longest Remaining Time First (LRTF).

Advantages of LJF

● No other task can be scheduled until the longest job or process executes
completely.
● All the jobs or processes finish at approximately the same time.
Disadvantages of LJF

● Generally, the LJF algorithm gives a very high average waiting time and
average turnaround time for a given set of processes.
● It may lead to the convoy effect.

4. Priority Scheduling
The Preemptive Priority CPU Scheduling Algorithm is a preemptive method of CPU
scheduling that works based on the priority of a process. In this algorithm, the
scheduler treats the highest-priority process as the most important, and that process
must be executed first. In the case of a conflict, that is, when there is more than
one process with equal priority, the scheduler falls back on the FCFS (First Come
First Serve) order.

Characteristics of Priority Scheduling

● Schedules tasks based on priority.


● When a higher-priority process arrives while a lower-priority process is
executing, the higher-priority process takes the place of the lower-priority
one, and the latter is suspended until the higher-priority process completes.
● The lower the number assigned, the higher the priority level of a process.

Advantages of Priority Scheduling

● The average waiting time is less than in FCFS.


● It is less complex.

Disadvantages of Priority Scheduling

● One of the most common demerits of the preemptive priority CPU


scheduling algorithm is the starvation problem: a low-priority process may
have to wait a very long time before it is scheduled onto the CPU.
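A tick-by-tick sketch of preemptive priority scheduling, using a hypothetical two-process workload (a lower number means a higher priority, and ties are broken FCFS, as described above):

```python
def preemptive_priority(processes):
    """Preemptive priority scheduling simulated one time unit at a time.
    processes: list of (name, arrival, burst, priority); a lower priority
    number means a higher priority, ties broken by arrival order (FCFS)."""
    remaining = {name: burst for name, _, burst, _ in processes}
    time, timeline = 0, []
    while any(remaining.values()):
        ready = [p for p in processes if p[1] <= time and remaining[p[0]] > 0]
        if not ready:                        # CPU idle until something arrives
            time += 1
            continue
        name = min(ready, key=lambda p: (p[3], p[1]))[0]
        remaining[name] -= 1                 # run the chosen process one tick
        timeline.append(name)
        time += 1
    return timeline

print(preemptive_priority([("P1", 0, 4, 2), ("P2", 1, 2, 1)]))
# ['P1', 'P2', 'P2', 'P1', 'P1', 'P1'] -- P2 preempts P1 on arrival at t=1
```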

5. Round Robin
Round Robin is a CPU scheduling algorithm where each process is cyclically
assigned a fixed time slot. It is the preemptive version of First come First Serve CPU
Scheduling algorithm. Round Robin CPU Algorithm generally focuses on Time
Sharing technique.

Characteristics of Round robin

● It’s simple, easy to use, and starvation-free, as all processes get a balanced
share of the CPU.
● It is one of the most widely used CPU scheduling algorithms.
● It is considered preemptive, as each process is given the CPU for only a
limited time.

Advantages of Round robin

● Round robin seems to be fair as every process gets an equal share of CPU.
● The newly created process is added to the end of the ready queue.
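The time-slicing behaviour can be sketched with a queue; the workload and quantum are hypothetical, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin with a fixed time quantum.
    processes: list of (name, burst_time), all arriving at time 0."""
    queue = deque(processes)
    slices = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # one time slice on the CPU
        slices.append((name, run))
        if remaining > run:                  # unfinished: back of the queue
            queue.append((name, remaining - run))
    return slices

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```

Each unfinished process rejoins the end of the ready queue, which is exactly the cyclic fairness described above.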

6. Multiple Queue Scheduling


Processes in the ready queue can be divided into different classes where each class has
its own scheduling needs. For example, a common division is a foreground
(interactive) process and a background (batch) process. These two classes have
different scheduling needs. For this kind of situation Multilevel Queue Scheduling is
used.
The common classes of processes are described as follows:

● System Processes: Processes run by the operating system itself to perform
its own work are generally termed system processes.
● Interactive Processes: An interactive process is one that interacts
frequently with the user and therefore needs quick response times.
● Batch Processes: Batch processing is a technique in the operating system
that collects the programs and data together in the form of a batch before
the processing starts.

Advantages of Multilevel Queue Scheduling

The main merit of the multilevel queue is that it has a low scheduling overhead.

Disadvantages of Multilevel Queue Scheduling

● Starvation problem.

● It is inflexible in nature.
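The fixed-priority relationship between the queues can be sketched as follows (the process names are hypothetical); it also shows why background jobs can starve:

```python
from collections import deque

def multilevel_pick(foreground, background):
    """Pick the next process to run: the foreground (interactive) queue has
    absolute priority over the background (batch) queue. A batch job runs
    only when no interactive job is waiting, which is why batch jobs can
    starve under this policy."""
    if foreground:
        return foreground.popleft()
    if background:
        return background.popleft()
    return None

fg = deque(["editor"])                 # interactive processes
bg = deque(["payroll", "backup"])      # batch processes
print(multilevel_pick(fg, bg))  # editor
print(multilevel_pick(fg, bg))  # payroll (foreground queue is now empty)
```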
Starvation in Operating Systems

Starvation in operating systems refers to a situation where a process or a thread is
unable to make progress or execute its task due to a lack of essential resources, such
as CPU time, memory, or I/O operations. It occurs when other processes or threads
with higher priority continuously occupy or monopolize the necessary resources,
preventing the starved process from getting its fair share of resources.

The occurrence of starvation can be attributed to various factors, such as improper
resource allocation policies, priority inversion, or scheduling algorithms that favor
specific processes over others. In multi-process environments, the challenge lies in
maintaining a balance between fair resource allocation and meeting the needs of
high-priority tasks without neglecting lower-priority ones.

Thread

A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU
utilization; it comprises a thread ID, a program counter, a register set, and a stack. It
shares with other threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals. A traditional (or
heavyweight) process has a single thread of control. If the process has multiple threads
of control, it can do more than one task at a time.

Why Do We Need Thread?


● Threads run in parallel improving the application performance. Each such
thread has its own CPU state and stack, but they share the address space of
the process and the environment.
● Threads can share common data so they do not need to use inter-process
communication. Like the processes, threads also have states like ready,
executing, blocked, etc.
● Priority can be assigned to the threads just like the process, and the highest
priority thread is scheduled first.
● Each thread has its own Thread Control Block (TCB). Like the process, a
context switch occurs for the thread, and register contents are saved in
(TCB). As threads share the same address space and resources,
synchronization is also required for the various activities of the thread.
Components of Threads
These are the basic components of a thread:
● Stack Space
● Register Set
● Program Counter

Types of Thread in Operating System


Threads are of two types. These are described below.
● User Level Thread
● Kernel Level Thread

1. User Level Threads

A user-level thread is a type of thread that is not created using system calls. The
kernel takes no part in the management of user-level threads, so they can be easily
implemented in user space by a thread library. To the kernel, a process with many
user-level threads looks like a single-threaded process. Let’s look at the advantages
and disadvantages of user-level threads.
Advantages of User-Level Threads

● Implementation of the User-Level Thread is easier than Kernel-Level
Thread.
● Context Switch Time is less in User Level Thread.
● User-Level Thread is more efficient than Kernel-Level Thread.
● Because of the presence of only Program Counter, Register Set, and Stack
Space, it has a simple representation.

Disadvantages of User-Level Threads

● There is a lack of coordination between Thread and Kernel.


● In case of a page fault, the whole process can be blocked.

2. Kernel Level Threads

A kernel-level thread is a type of thread that is recognized and managed directly by
the operating system. The kernel maintains its own thread table, where it keeps track
of all the threads in the system. Kernel-level threads have a somewhat longer context
switching time.

Advantages of Kernel-Level Threads

● It has up-to-date information on all threads.


● Applications that block frequently are handled well by kernel-level
threads.
● Whenever any process requires more time to process, the kernel can
provide more time to it.

Disadvantages of Kernel-Level threads

● Kernel-Level Thread is slower than User-Level Thread.


● Implementation of this type of thread is a little more complex than a
user-level thread.

What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by
dividing a process into multiple threads. For example, in a browser, multiple tabs can
be different threads. MS Word uses multiple threads: one thread to format the text,
another thread to process inputs, etc. More advantages of multithreading are discussed
below.

Multithreading is a technique used in operating systems to improve the performance


and responsiveness of computer systems. Multithreading allows multiple threads (i.e.,
lightweight processes) to share the same resources of a single process, such as the
CPU, memory, and I/O devices.
Benefits of Thread in Operating System
● Responsiveness: If the process is divided into multiple threads, if one
thread completes its execution, then its output can be immediately returned.
● Faster context switch: Context switch time between threads is lower
compared to the process context switch. Process context switching requires
more overhead from the CPU.
● Effective utilization of multiprocessor system: If we have multiple
threads in a single process, then we can schedule multiple threads on
multiple processors. This will make process execution faster.
● Resource sharing: Resources like code, data, and files can be shared
among all threads within a process. Note: Stacks and registers can’t be
shared among the threads. Each thread has its own stack and registers.
● Communication: Communication between multiple threads is easier, as the
threads share a common address space. while in the process we have to
follow some specific communication techniques for communication
between the two processes.
● Enhanced throughput of the system: If a process is divided into multiple
threads, and each thread function is considered as one job, then the number
of jobs completed per unit of time is increased, thus increasing the
throughput of the system.
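The resource-sharing and synchronization points above can be illustrated with Python's `threading` module; `counter`, `worker`, and the thread count are illustrative choices:

```python
import threading

counter = 0                     # data shared by all threads of the process
lock = threading.Lock()         # synchronization for the shared variable

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # threads share the address space, so updates
            counter += 1        # to `counter` must be serialized

# Each Thread has its own stack and registers but sees the same `counter`
threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000: both threads updated the same shared variable
```

Because the threads communicate through ordinary shared variables rather than inter-process communication, the lock is essential; without it the two increments can interleave and lose updates.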
Inter-process Communication (IPC)

• Mechanism for processes to communicate and to synchronize their actions.

• Message system – processes communicate with each other without resorting to
shared variables.

• IPC facility provides two operations:

1. send(message) – message size fixed or variable

2. receive(message)

• If P and Q wish to communicate, they need to:

1. establish a communication link between them

2. exchange messages via send/receive

• Implementation of the communication link:

1. physical (e.g., shared memory, hardware bus)

2. logical (e.g., logical properties)
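The send/receive operations can be modelled in miniature with a mailbox queue; for simplicity this sketch uses two threads in one program, whereas a real OS link between processes P and Q would be a pipe, socket, message queue, or shared-memory segment:

```python
import threading
import queue

# The queue models the communication link: sender and receiver share no
# variables other than the mailbox itself.
link = queue.Queue()

def sender():
    link.put("ping")            # send(message)

def receiver(out):
    out.append(link.get())      # receive(message): blocks until one arrives

received = []
t_recv = threading.Thread(target=receiver, args=(received,))
t_send = threading.Thread(target=sender)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
print(received)  # ['ping']
```

The blocking `get` is what synchronizes the two parties: the receiver cannot proceed until a message has actually been sent.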

Preemptive Vs Nonpreemptive Scheduling

The Scheduling algorithms can be divided into two categories with respect to how
they deal with clock interrupts.

Nonpreemptive Scheduling: A scheduling discipline is nonpreemptive if, once a
process has been given the CPU, the CPU cannot be taken away from that process.

Preemptive Scheduling: A scheduling discipline is preemptive if, once a process has
been given the CPU, the CPU can be taken away from it.
