Lecture Note Operating System

Uploaded by Sweety Chinky

Operating System Module -1

An operating system acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner. An operating system is software that manages the computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with the proper operation of the system. Internally, operating systems vary greatly in their makeup, since they are organized along many different lines. The design of a new operating system is a major task. It is important that the goals of the system be well defined before the design begins. These goals form the basis for choices among various algorithms and strategies. Because an operating system is large and complex, it must be created piece by piece. Each of these pieces should be a well-delineated portion of the system, with carefully defined inputs, outputs, and functions.

What is operating system?

1. A computer is a machine that combines Hardware and Software.
2. It performs tasks according to the user's instructions.
3. Those instructions are nothing but the software that runs the computer to perform any task.
4. Software is a collection of instructions and is divided into different categories:
 System software
 Application software
5. The operating system comes under system software because it runs the system.
6. Operating systems are everywhere, from cars and home appliances that include “Internet of
Things” devices, to smart phones, personal computers, enterprise computers, and cloud
computing environments.

Purpose of operating system:

The purpose of an operating system is to provide an environment in which a user can execute
programs in a convenient and efficient manner. An operating system acts as an intermediary between the
user of a computer and the computer hardware.

An operating system is a program on which application programs are executed and acts as a
communication bridge (interface) between the user and the computer hardware. The main task an
operating system carries out is the allocation of resources and services, such as the allocation of
memory, devices, processors, and information. The operating system also includes programs to
manage these resources, such as a traffic controller, a scheduler, a memory management module, I/O
programs, and a file system.

Uses of Operating System:

The Operating System is used as a communication channel between the computer hardware and the user. It works as an intermediary between the system hardware and the end user. The Operating System handles the following responsibilities:

 It controls all the computer resources.


 It provides valuable services to user programs.
 It coordinates the execution of user programs.
 It provides resources for user programs.
 It provides an interface (virtual machine) to the user.
 It hides the complexity of software.
 It supports multiple execution modes.
 It monitors the execution of user programs to prevent errors.
Abstract view of OS:

A computer system can be divided into four components:

 Hardware – provides the basic computing resources: CPU, memory, I/O devices.

 Operating system – controls and coordinates the use of the hardware among the various applications and users.

 Application programs – define the ways in which the system resources are used to solve the computing problems of the users: word processors, compilers, web browsers, database systems, video games.

 Users – people, machines, other computers.

Functions of an Operating System:

Memory Management

The operating system manages the Primary Memory or Main Memory. Main memory is made up of a large array of bytes or words, where each byte or word is assigned a certain address. Main memory is fast storage and can be accessed directly by the CPU. For a program to be executed, it must first be loaded into main memory. The operating system manages the allocation and de-allocation of memory to various processes and ensures that one process does not consume the memory allocated to another. An Operating System performs the following activities for Memory Management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program, which memory addresses have already been allocated, and which have not yet been used.

In multiprogramming, the OS decides the order in which processes are granted memory access, and for how long.

It allocates the memory to a process when the process requests it and de-allocates the memory when
the process has terminated or is performing an I/O operation.
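The allocation bookkeeping described above can be sketched as a minimal first-fit allocator in Python. This is an illustrative model only, not how any real kernel implements memory management; the class and method names are invented for the example.

```python
# Illustrative sketch: first-fit memory allocation bookkeeping.
# The OS-like manager tracks free holes and which process owns which region.

class MemoryManager:
    def __init__(self, size):
        self.size = size
        self.free = [(0, size)]      # list of (start, length) free holes
        self.allocated = {}          # pid -> (start, length)

    def allocate(self, pid, length):
        """Find the first hole large enough; return its start address."""
        for i, (start, hole) in enumerate(self.free):
            if hole >= length:
                self.allocated[pid] = (start, length)
                if hole == length:
                    del self.free[i]                     # hole fully used
                else:
                    self.free[i] = (start + length, hole - length)
                return start
        return None                  # no hole large enough

    def deallocate(self, pid):
        """Return a process's region to the free list."""
        start, length = self.allocated.pop(pid)
        self.free.append((start, length))
        self.free.sort()             # keep holes ordered by address
```

A real memory manager would also coalesce adjacent free holes and handle protection; this sketch only shows the allocate/de-allocate bookkeeping.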

Processor Management

In a multi-programming environment, the OS decides the order in which processes have access to the
processor, and how much processing time each process has. This function of OS is called Process
Scheduling. An Operating System performs the following activities for Processor Management.

An operating system manages the processor's work by allocating various jobs to it and ensuring that each process receives enough time from the processor to function properly.

It keeps track of the status of processes; the program that performs this task is known as the traffic controller. It allocates the CPU (the processor) to a process, and de-allocates the processor when a process no longer requires it.
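The time-slicing idea behind process scheduling can be illustrated with a small round-robin simulation in Python. This is a sketch under simplifying assumptions: real schedulers also handle priorities, I/O waits, and interrupt-driven preemption.

```python
# Illustrative round-robin scheduler: each process gets one time quantum,
# then is re-queued if it still needs CPU time.
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling; returns the order of completion.
    bursts: mapping of process name -> remaining CPU time needed."""
    ready = deque(bursts.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining > quantum:
            ready.append((name, remaining - quantum))  # preempted, re-queued
        else:
            finished.append(name)   # completes within this time slice
    return finished
```

For example, with bursts A=3, B=5, C=2 and a quantum of 2, C finishes first, then A, then B.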

Device Management

An OS manages device communication via the respective device drivers. It performs the following activities for device management: it keeps track of all devices connected to the system; it designates a program responsible for every device, known as the Input/Output controller; it decides which process gets access to a certain device and for how long; it allocates devices effectively and efficiently; and it de-allocates devices when they are no longer required. There are various input and output devices, and the OS controls the working of these input-output devices. It receives requests from these devices, performs the specific task, and communicates back to the requesting process.

File Management

A file system is organized into directories for efficient or easy navigation and usage. These directories
may contain other directories and other files. An Operating System carries out the following file
management activities. It keeps track of where information is stored, user access settings, the status of
every file, and more. These facilities are collectively known as the file system. An OS keeps track of
information regarding the creation, deletion, transfer, copy, and storage of files in an organized way. It
also maintains the integrity of the data stored in these files, including the file directory structure, by
protecting against unauthorized access.
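The file-management activities above can be observed through Python's os module, which wraps the underlying file system calls. This is a minimal sketch; the file name and contents are made up for illustration.

```python
# Sketch: file creation, metadata tracking, and deletion via the os module,
# which wraps the underlying system calls (open, write, stat, unlink).
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # create with rw-r--r--
os.write(fd, b"hello")
os.close(fd)

info = os.stat(path)          # metadata the file system keeps per file
size = info.st_size           # number of bytes stored

os.unlink(path)               # delete the file
os.rmdir(os.path.dirname(path))  # clean up the temporary directory
```

The 0o644 mode is an example of the access settings the OS tracks: the owner may read and write, while others may only read.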

User Interface or Command Interpreter

The user interacts with the computer system through the operating system; hence the OS acts as an interface between the user and the computer hardware. This user interface is offered through a set of commands or a graphical user interface (GUI). Through this interface, the user interacts with the applications and the machine hardware.
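At its core, a command interpreter is a loop that reads a command, dispatches it to a handler, and reports the result. Below is a toy dispatcher in Python; it is purely illustrative (the touch/rm/ls commands mimic shell built-ins over an in-memory set, not a real file system).

```python
# Toy command interpreter: maps command names to handlers, the way a shell
# dispatches user-typed commands. Illustrative only; state is an in-memory set.

def make_interpreter():
    files = set()

    def run(line):
        cmd, *args = line.split()
        if cmd == "touch":
            files.add(args[0])
            return "ok"
        if cmd == "rm":
            files.discard(args[0])
            return "ok"
        if cmd == "ls":
            return " ".join(sorted(files))
        return f"{cmd}: command not found"

    return run
```

A real shell would additionally fork a process and invoke a program for non-built-in commands; here unknown commands simply produce the familiar error message.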

Booting the Computer


The process of starting or restarting the computer is known as booting. If the computer is switched off completely and then turned on, it is called cold booting. Warm booting is the process of using the operating system itself to restart the computer.

Security

The operating system uses password protection and similar techniques to protect user data. It also prevents unauthorized access to programs and user data, and provides various techniques that assure the integrity and confidentiality of user data. The following security measures are used to protect user data:

 Protection against unauthorized access through login.
 Protection against intrusion by keeping the firewall active.
 Protecting the system memory against malicious access.
 Displaying messages related to system vulnerabilities.

Control over System Performance

The OS monitors overall system health to help improve performance. It records the response time between service requests and system responses in order to have a complete view of the system's health. This can help improve performance by providing the information needed to troubleshoot problems.

Job Accounting

The operating system keeps track of the time and resources used by various tasks and users; this information can be used to track resource usage for a particular user or group of users. In a multitasking OS, where multiple programs run simultaneously, the OS determines which applications should run, in which order, and how much time should be allocated to each application.

Error-Detecting Aids

The operating system constantly monitors the system to detect errors and avoid malfunctioning
computer systems. From time to time, the operating system checks the system for any external threat
or malicious software activity. It also checks the hardware for any type of damage. This process
displays several alerts to the user so that the appropriate action can be taken against any damage
caused to the system.

Coordination between Other Software and Users

Operating systems also coordinate and assign interpreters, compilers, assemblers, and other software
to the various users of the computer systems.

Performs Basic Computer Tasks

The management of various peripheral devices such as the mouse, keyboard, and printer is carried out
by the operating system. Today most operating systems are plug-and-play. These operating systems
automatically recognize and configure the devices with no user interference.

Network Management

Network Communication: Think of them as traffic cops for your internet traffic. Operating systems
help computers talk to each other and the internet. They manage how data is packaged and sent over
the network, making sure it arrives safely and in the right order.
Settings and Monitoring: Think of them as the settings and security guard for your internet
connection. They also let you set up your network connections, like Wi-Fi or Ethernet, and keep an
eye on how your network is doing. They make sure your computer is using the network efficiently and
securely, like adjusting the speed of your internet or protecting your computer from online threats.
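The kernel's socket abstraction hides the packaging and ordered delivery described above. The following Python sketch uses a connected socket pair, which stands in for two network endpoints on one machine; it is illustrative, not a real network setup.

```python
# Sketch of OS-mediated communication: the kernel provides the socket
# abstraction; a connected pair stands in for two network endpoints.
import socket

parent, child = socket.socketpair()

parent.sendall(b"ping")          # data is packaged and queued by the kernel
data = child.recv(4)             # delivered in order to the other endpoint

child.sendall(b"pong")
reply = parent.recv(4)

parent.close()
child.close()
```

Over a real network the same send/receive calls would go through the OS's protocol stack (e.g. TCP/IP), which handles packet ordering and retransmission.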

Operating System Evolution

Operating system is divided into four generations, which are explained as follows −

First Generation (1945-1955)

This generation marked the beginning of electronic computing systems as substitutes for mechanical computing systems, which had clear drawbacks: the speed at which humans can calculate is limited, and humans easily make mistakes. In this generation there was no operating system, so instructions had to be given to the computer system directly.

Example − The type of operating system/device used was plug boards.

Second Generation (1955-1965)

The batch processing system was introduced in the second generation: jobs or tasks that could be done in series were collected and then executed sequentially. In this generation the computer system was still not equipped with an operating system, but several operating system functions existed, such as FMS and IBSYS.

Example − The type of operating system used was batch systems.

Third Generation (1965-1980)

In the third generation, operating systems were developed to serve multiple users at once. Interactive users could communicate with a computer through an online terminal, so the operating system became multi-user and multiprogramming.

Example − The type of operating system used was multiprogramming systems.

Fourth Generation (1980-Now)

In this generation the operating system is used for computer networks, where users are aware of the existence of computers that are connected to one another.

In this generation users are also provided with the Graphical User Interface (GUI), an extremely comfortable graphical computer interface, and the era of distributed computing has also begun.

With the arrival of new wearable devices such as smart watches, smart glasses, VR gear, and others, the demand for unconventional operating systems is also rising.

Example − Type of operating system and devices used is personal computers.

Operating System Structure


It is easier to create an operating system in pieces, much as we break down larger problems into smaller, more manageable subproblems. Every segment is also a part of the operating system. Operating
system structure can be thought of as the strategy for connecting and incorporating various operating
system components within the kernel. Operating systems are implemented using many types of
structures:

Different types of structures of operating system are

1. Simple Structure
2. Monolithic Structure
3. Layered Approach Structure
4. Micro-Kernel Structure
5. Exo-Kernel Structure
6. Virtual Machines

SIMPLE STRUCTURE

 It is the most straightforward operating system structure.


 It is only appropriate for use with small and restricted systems.
 Since the interfaces and levels of functionality in this structure are not well separated,
application programs are able to access basic I/O routines, which may result in unauthorized access to I/O procedures.

EX: MS-DOS operating system

 There are four layers that make up the MS-DOS operating system, and each has its own set of
features.
 These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
 The MS-DOS operating system benefits from layering because each level can be defined
independently and, when necessary, can interact with one another.
 If the system is built in layers, it will be simpler to design, manage, and update. Because of
this, simple structures can be used to build constrained systems that are less complex.
 When a user program fails, the operating system as a whole crashes.
 Because MS-DOS systems have a low level of abstraction, programs and I/O procedures are
visible to end users, giving them the potential for unwanted access.

Advantages of Simple Structure:


 Because there are only a few interfaces and levels, it is simple to develop.
 Because there are fewer layers between the hardware and the applications, it offers superior
performance.

Disadvantages of Simple Structure:

 The entire operating system breaks if just one user program malfunctions.
 Since the layers are interconnected, and in communication with one another, there is no
abstraction or data hiding.
 The operating system's operations are accessible to layers, which can result in data tampering
and system failure.

MONOLITHIC STRUCTURE

The monolithic operating system controls all aspects of the operating system's operation, including file management, memory management, device management, and process management.

The core of a computer operating system is called the kernel. The kernel provides fundamental services to all other system components, and serves as the main interface between the operating system and the hardware. Because a monolithic operating system is built as a single piece, its kernel can directly access all of the system's resources.

The monolithic operating system is often referred to as the monolithic kernel. Multiple programming
techniques such as batch processing and time-sharing increase a processor's usability. Working on top
of the operating system and under complete command of all hardware, the monolithic kernel performs
the role of a virtual computer. This is an old operating system that was used in banks to carry out
simple tasks like batch processing and time-sharing, which allows numerous users at different
terminals to access the Operating System.

Advantages of Monolithic Structure:

 Because layering is unnecessary and the kernel alone is responsible for managing all
operations, it is easy to design and execute.
 Because functions like memory management, file management, process
scheduling, etc., are implemented in the same address space, the monolithic kernel runs rather
quickly compared to other systems. Utilizing the same address space speeds up and reduces
the time required for address allocation for new processes.

Disadvantages of Monolithic Structure:

 The monolithic kernel's services are interconnected in address space and have an impact on
one another, so if any of them malfunctions, the entire system does as well.
 It is not adaptable. Therefore, launching a new service is difficult.

LAYERED STRUCTURE

The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer) contains the hardware, and layer N (the highest layer) contains the user interface. These layers are organized hierarchically, with the top-level layers making use of the capabilities of the lower-level ones.

The functionalities of each layer are separated in this method, and abstraction is also an option.
Because layered structures are hierarchical, debugging is simpler, therefore all lower-level layers are
debugged before the upper layer is examined. As a result, the present layer alone has to be reviewed
since all the lower layers have already been examined.

Advantages of Layered Structure:

 Work duties are separated since each layer has its own functionality, and there is some amount
of abstraction.
 Debugging is simpler because the lower layers are examined first, followed by the top layers.

Disadvantages of Layered Structure:

 Performance is compromised in layered structures due to layering.


 Construction of the layers requires careful design because upper layers only make use of
lower layers' capabilities.

MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel of any unnecessary components. These optional components are implemented as system and user-level programs. Systems developed this way are called micro-kernels.

Each Micro-Kernel is created separately and is kept apart from the others. As a result, the system is
now more trustworthy and secure. If one Micro-Kernel malfunctions, the remaining operating system
is unaffected and continues to function normally.

Advantages of Micro-Kernel Structure:

 It enables portability of the operating system across platforms.


 Due to the isolation of each Micro-Kernel, it is reliable and secure.
 The reduced size of Micro-Kernels allows for successful testing.
 The remaining operating system remains unaffected and keeps running properly even if a
component or Micro-Kernel fails.

Disadvantages of Micro-Kernel Structure:

 The performance of the system is decreased by increased inter-module communication.


 The construction of a system is complicated.

EXOKERNEL

An operating system called Exokernel was created at MIT with the goal of offering application-level
management of hardware resources. The exokernel architecture's goal is to enable application-specific
customization by separating resource management from protection. Exokernel size tends to be
minimal due to its limited operability.

Because the OS sits between the programs and the actual hardware, it will always have an effect on
the functionality, performance, and breadth of the apps that are developed on it. By rejecting the idea
that an operating system must offer abstractions upon which to base applications, the exokernel
operating system makes an effort to solve this issue. The goal is to place as few restrictions as possible on developers' use of abstractions while still allowing them the freedom to use abstractions when necessary. In the exokernel architecture, a single tiny kernel moves all hardware abstractions into untrusted libraries known as library operating systems. Exokernels differ from micro- and monolithic kernels in that their primary objective is to avoid forced abstraction.

Exokernel operating systems have a number of features, including:

 Enhanced application control support.


 Separates resource management from protection.
 Abstractions are safely exported to untrusted library operating systems.
 Exposes a low-level hardware interface.
 Library operating systems provide compatibility and portability.

Advantages of Exokernel Structure:

 Application performance is enhanced by it.


 Accurate resource allocation and revocation enable more effective utilisation of hardware
resources.
 New operating systems can be tested and developed more easily.
 Every user-space program is permitted to utilise its own customised memory management.

Disadvantages of Exokernel Structure:

 A decline in consistency.
 Exokernel interfaces have a complex architecture.

VIRTUAL MACHINES (VMs)

A virtual machine abstracts the hardware of our personal computer, including the CPU, disk drives, RAM, and NIC (Network Interface Card), into a variety of different execution contexts based on our needs, giving us the impression that each execution environment is a separate computer. VirtualBox is an example of this.

Using CPU scheduling and virtual memory techniques, an operating system allows us to execute
multiple processes simultaneously while giving the impression that each one is using a separate
processor and virtual memory. System calls and a file system are examples of extra functionalities that
a process can have that the hardware is unable to give. Instead of offering these extra features, the
virtual machine method just offers an interface that is similar to that of the most fundamental
hardware. A virtual duplicate of the computer system underneath is made available to each process.

We can develop a virtual machine for a variety of reasons, all of which are fundamentally connected
to the capacity to share the same underlying hardware while concurrently supporting various
execution environments, i.e., various operating systems.

Disk systems are the fundamental problem with the virtual machine technique. Imagine that the actual machine has only three disk drives but needs to host seven virtual machines. It is clearly impossible to assign a disk drive to every virtual machine, and the software that creates the virtual machines itself requires a sizable amount of disk space to provide virtual memory and spooling. The solution is to provide virtual disks.

The result is that users get their own virtual machines. They can then use any of the operating systems
or software programs that are installed on the machine below. Virtual machine software is concerned with running numerous virtual machines simultaneously on one physical machine; it does not need to take into account any user-support software. With this configuration, the challenge of building an interactive system for several users can be broken into two manageable parts.

Advantages of Virtual Machines:

 Due to total isolation between each virtual machine and every other virtual machine, there are
no issues with security.
 A virtual machine may offer architecture for the instruction set that is different from that of
actual computers.
 Simple availability, accessibility, and recovery convenience.

Disadvantages of Virtual Machines:

 Depending on the workload, operating numerous virtual machines simultaneously on a host


computer may have an adverse effect on one of them.
 When it comes to hardware access, virtual computers are less effective than physical ones.

Services of Operating System

 Program execution

 Input Output Operations

 Communication between Process

 File Management

 Memory Management

 Process Management

 Security and Privacy

 Resource Management

 User Interface

 Networking

 Error handling

 Time Management

Program Execution

It is the Operating System that manages how a program is executed. It loads the
program into memory, after which the program is executed. The order in which programs are executed
depends on the CPU Scheduling Algorithm used; a few examples are FCFS and SJF. While programs are
executing, the Operating System also handles deadlocks, i.e., situations in which processes
block one another while waiting for resources. The Operating System is responsible for the smooth execution of
both user and system programs, and it utilizes the various resources available for
the efficient running of all types of functionality.
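The effect of a scheduling algorithm can be made concrete by computing waiting times. The sketch below compares FCFS with non-preemptive SJF for a set of CPU bursts; the burst values are illustrative, and all processes are assumed to arrive at time 0.

```python
# Illustrative waiting-time computation for two scheduling algorithms.

def fcfs_waiting_times(bursts):
    """First-Come-First-Served: each process waits for all earlier bursts."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # waiting time = sum of previous bursts
        elapsed += burst
    return waits

def sjf_waiting_times(bursts):
    """Shortest-Job-First (non-preemptive): run the shortest bursts first."""
    return fcfs_waiting_times(sorted(bursts))
```

With bursts of 24, 3, and 3 time units, FCFS gives waiting times [0, 24, 27] (average 17), while SJF gives [0, 3, 6] (average 3), showing why the scheduling order matters.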

Input Output Operations

The Operating System manages input-output operations and establishes communication between the user and the device drivers. Device drivers are software associated with hardware and managed by the OS so that synchronization between the devices works properly. The OS also gives a program access to input-output devices when needed.

Communication Between Processes

The Operating system manages the communication between processes. Communication


between processes includes data transfer among them. If the processes are not on the same
computer but are connected through a computer network, their communication is also
managed by the Operating System itself.

File Management

The operating system also helps in managing files. If a program needs access to a file, it is the
operating system that grants access; these permissions include read-only, read-write, etc. It
also provides a platform for the user to create and delete files. The Operating System is
responsible for decisions regarding the storage of all types of data or files, i.e., on floppy
disk, hard disk, pen drive, etc., and it decides how the data should be manipulated and stored.

Memory Management

Let’s understand memory management by the OS in a simple way. Imagine a cricket team with a
limited number of players. The team manager (OS) decides whether an upcoming player will
be in the playing 11, the playing 15, or not included in the team, based on his performance. In
the same way, the OS first checks whether an upcoming program fulfils all the requirements to get
memory space; if so, it checks how much memory space will be sufficient
for the program and then loads the program into memory at a certain location. Thus it prevents
programs from using unnecessary memory.

Process Management

Let’s understand process management in a similar way. Imagine our kitchen stove as the
CPU, where all cooking (execution) really happens, and the chef as the OS, who uses the kitchen
stove (CPU) to cook different dishes (programs). The chef (OS) has to cook many
dishes (programs), so he ensures that no particular dish (program) takes an unnecessarily long
time and that all dishes (programs) get a chance to be cooked (executed). The
chef (OS) schedules time for all dishes (programs) so that the kitchen (the whole system) runs
smoothly, and thus all the different dishes (programs) are cooked (executed) efficiently.

Security and Privacy

 Security: The OS keeps our computer safe from unauthorized users by adding a security
layer to it. Security is a layer of protection that guards the
computer against threats such as viruses and hackers. The OS provides defenses like
firewalls and anti-virus software and ensures the safety of the computer and personal
information.

 Privacy: The OS gives us the ability to keep essential information hidden, like having a
lock on our door where only we can enter and others are not allowed. It
respects our secrets and provides the means to keep them safe.

Resource Management

System resources are shared between various processes. It is the Operating system that
manages resource sharing. It also manages the CPU time among processes using CPU
Scheduling Algorithms. It also helps in the memory management of the system. It also
controls input-output devices. The OS also ensures the proper use of all the resources
available by deciding which resource to be used by whom.

User Interface

A user interface is essential, and all operating systems provide one. Users interact with the
operating system through either a command-line interface or a graphical user interface (GUI). The
command interpreter executes the next user-specified command.

A GUI offers the user a mouse-based window and menu system as an interface.

Networking

This service enables communication between devices on a network, such as connecting to the
internet, sending and receiving data packets, and managing network connections.

Error Handling

The Operating System also handles errors occurring in the CPU, in input-output devices,
etc. It ensures that errors do not occur frequently and fixes them when they do. It also prevents
processes from coming to a deadlock, and it looks for any type of error or bug that can
occur while a task runs. A well-secured OS can also act as a countermeasure against
any sort of breach of the computer system from an external source and handle it.
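One concrete form of error handling that a program sees is the error code returned when an operation fails. A small Python sketch using OSError and errno follows; the file path is a deliberately nonexistent example.

```python
# Sketch: failed operations surface OS error codes; the program checks
# for the expected error (ENOENT, "No such file or directory") and handles it.
import errno

def read_file(path):
    try:
        with open(path, "rb") as f:
            return f.read()
    except OSError as e:
        if e.errno == errno.ENOENT:   # expected: file does not exist
            return None               # handle the error gracefully
        raise                         # propagate unexpected errors
```

Checking the specific errno, rather than swallowing every exception, mirrors how well-behaved programs distinguish expected failures from genuine faults.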

Time Management

Imagine a traffic light as the OS, which signals all the cars (programs) whether they should
stop (red ⇒ waiting queue), get ready (yellow ⇒ ready queue), or move (green ⇒ under execution).
The light (control) changes after a certain interval of time on each side of the
road (computer system) so that the cars (programs) from all sides of the road move smoothly
without congestion.
What is a System Call?

A system call is a mechanism used by programs to request services from the operating system
(OS). In simpler terms, it is a way for a program to interact with the underlying system, such
as accessing hardware resources or performing privileged operations.

A user program interacts with the operating system using system calls. The program requests a number of
services, and the OS responds by invoking a number of
system calls to fulfil the request. A system call can be written in a high-level language like C
or Pascal, or in assembly language. When a high-level language is used, the program may
invoke system calls directly as predefined functions.

A system call is initiated by the program executing a specific instruction, which triggers a
switch to kernel mode, allowing the program to request a service from the OS. The OS then
handles the request, performs the necessary operations, and returns the result back to the
program.

System calls are essential for the proper functioning of an operating system, as they provide a
standardized way for programs to access system resources. Without system calls, each
program would need to implement its own methods for accessing hardware and system services,
leading to inconsistent and error-prone behaviour.
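The request/response pattern of a system call can be observed from a user program. Python's os module functions are thin wrappers over system calls such as getpid(), pipe(), write(), and read(); the sketch below is illustrative and assumes nothing beyond the standard library.

```python
# Sketch: a user program never touches hardware directly; it asks the kernel.
# Each os.* call below corresponds to an underlying system call.
import os

pid = os.getpid()                 # getpid() system call: ask the OS who we are
r, w = os.pipe()                  # pipe() system call: kernel-managed fds
os.write(w, b"via syscall")       # write() on the kernel-managed descriptor
msg = os.read(r, 32)              # read() transfers the data back to us
os.close(r)
os.close(w)
```

In each case the program makes the request, the kernel performs the privileged operation (allocating the pipe, moving bytes between descriptors), and the result is returned to user mode.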

Services Provided by System Calls

 Process Creation and Management

 Main Memory Management

 File Access, Directory, and File System Management

 Device Handling(I/O)

 Protection

 Networking, etc.

o Process Control: end, abort, create, terminate, allocate and free memory.

o File Management: create, open, close, delete, read files, etc.

o Device Management

o Information Maintenance

o Communication

Features of System Calls


 Interface: System calls provide a well-defined interface between user programs and
the operating system. Programs make requests by calling specific functions, and the
operating system responds by executing the requested service and returning a result.

 Protection: System calls are used to access privileged operations that are not
available to normal user programs. The operating system uses this privilege to protect
the system from malicious or unauthorized access.

 Kernel Mode: When a system call is made, the program is temporarily switched from
user mode to kernel mode. In kernel mode, the program has access to all system
resources, including hardware, memory, and other processes.

 Context Switching: A system call requires a context switch, which involves saving
the state of the current process and switching to the kernel mode to execute the
requested service. This can introduce overhead, which can impact system
performance.

 Error Handling: System calls can return error codes to indicate problems with the
requested service. Programs must check for these errors and handle them
appropriately.

 Synchronization: System calls can be used to synchronize access to shared resources, such as files or network connections. The operating system provides synchronization mechanisms, such as locks or semaphores, to ensure that multiple programs can access these resources safely.

How does System Call Work?

Here is a step-by-step explanation of how system calls work:

 The program needs special resources: Sometimes a program needs to do things that cannot be done without the permission of the OS, such as reading from or writing to a file, getting information from the hardware, or requesting space in memory.

 The program makes a system call request: There are special predefined instructions for making a request to the operating system. These instructions are the system calls, and the program uses them in its code whenever the corresponding service is needed.

 The operating system sees the system call: When the OS sees the system call, it recognizes that the program needs help, temporarily stops the program's execution, and gives control to a special part of itself called the kernel. The kernel then services the program's request.

 The operating system performs the operation: The operating system now performs the operation requested by the program, for example reading the contents of a file.

 The operating system gives control back to the program: After performing the requested operation, the OS returns control to the program so that its execution can continue.
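The steps above can be seen in miniature from a user program. The sketch below uses Python's os module, which is a thin wrapper over the Unix system calls: each call traps into the kernel, the kernel does the work, and control returns to the program with a result.

```python
import os

# The program asks the OS for a service it cannot perform itself:
# os.getpid() traps into the kernel, which reads the process ID
# from the process's control block and returns it.
pid = os.getpid()

# os.write() is the write() system call: the kernel copies the bytes
# from the user buffer to the object behind file descriptor 1
# (standard output), then hands control back with the byte count.
written = os.write(1, b"hello via a system call\n")
```

From the program's point of view these look like ordinary function calls; the mode switch into the kernel and back is invisible except for the small time cost.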
Examples of a System Call in Windows and Unix

System calls for Windows and Unix come in many different forms. These are listed in the
table below as follows:

Category                    Windows                          Unix

Process Control             CreateProcess()                  fork()
                            ExitProcess()                    exit()
                            WaitForSingleObject()            wait()

File Manipulation           CreateFile()                     open()
                            ReadFile()                       read()
                            WriteFile()                      write()
                                                             close()

Device Management           SetConsoleMode()                 ioctl()
                            ReadConsole()                    read()
                            WriteConsole()                   write()

Information Maintenance     GetCurrentProcessID()            getpid()
                            SetTimer()                       alarm()
                            Sleep()                          sleep()

Communication               CreatePipe()                     pipe()
                            CreateFileMapping()              shmget()
                            MapViewOfFile()                  mmap()

Protection                  SetFileSecurity()                chmod()
                            InitializeSecurityDescriptor()   umask()
                            SetSecurityDescriptorGroup()     chown()

open(): The open() system call makes a file on the file system available to a process. It allocates the resources the file needs and returns a handle (file descriptor) that the process can use. A file can be opened by multiple processes simultaneously or by just one, depending on the file system and the file's structure.

read(): This call retrieves data from a file on the file system. In general, it accepts three arguments:

 A file descriptor.
 A buffer in which to store the data read.
 The number of bytes to read from the file.

Before reading, the file must have been opened with open(), which returns the file descriptor that identifies it.

wait(): In some systems, a process might need to hold off until another process has finished running before continuing. When a parent process creates a child process and needs its result, the parent suspends itself using the wait() system call; the parent regains control once the child process has finished running.

write(): This call writes data from a user buffer to a device such as a file; it is one of the ways a program produces output. In general, it takes three arguments:

 A file descriptor.
 A reference to the buffer where the data is stored.
 The number of bytes to write from the buffer.
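The open(), write(), read(), and close() calls combine as follows. This is a hedged sketch in Python, whose os module exposes these Unix calls almost directly; the file path is arbitrary and chosen only for the demonstration.

```python
import os
import tempfile

# A scratch file for the demonstration (the name is arbitrary).
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# open() returns a file descriptor; the flags request write access,
# creating the file if it does not exist, with permission bits 0644.
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
written = os.write(fd, b"operating systems")  # write(fd, buffer)
os.close(fd)                                  # release the descriptor

# Reopen for reading: read(fd, nbytes) pulls the data back out.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
```

Note that the descriptor returned by open() is the handle every later call uses to identify the file, exactly as described above.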

fork(): The fork() system call is used by a process to create a copy of itself, and it is one of the most frequently used process-creation mechanisms in operating systems. The newly created child process runs concurrently with the parent; if the parent calls wait(), its execution is suspended until the child finishes, after which the parent regains control.

exit(): The exit() system call is used to terminate a program. In environments with multiple threads, this call indicates that all thread execution is finished. After an exit() call, the operating system reclaims the resources used by the process.
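The fork()/wait()/exit() trio can be sketched in a few lines. This example (Unix-only, via Python's os module) creates a child, terminates it with a chosen status, and lets the parent collect that status with wait():

```python
import os

# fork() duplicates the calling process: it returns 0 in the child
# and the child's PID in the parent.
pid = os.fork()

if pid == 0:
    # Child: finish immediately with exit status 7; the kernel then
    # reclaims the child's resources, as exit() promises.
    os._exit(7)
else:
    # Parent: wait() suspends us until the child terminates, then
    # returns the child's PID and its encoded exit status.
    child_pid, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
```

The parent's suspension inside waitpid() is exactly the blocking behaviour the wait() description above refers to.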

Advantages of System Calls

 Access to Hardware Resources: System calls allow programs to access hardware resources such as disk drives, printers, and network devices.

 Memory Management: System calls provide a way for programs to allocate and deallocate memory, as well as access memory-mapped hardware devices.

 Process Management: System calls allow programs to create and terminate processes, as well as manage inter-process communication.

 Security: System calls provide a way for programs to access privileged resources, such as the ability to modify system settings or perform operations that require administrative permissions.

 Standardization: System calls provide a standardized interface for programs to interact with the operating system, ensuring consistency and compatibility across different hardware platforms and operating system versions.
Disadvantages of System Calls

 Performance Overhead: System calls involve switching between user mode and kernel mode, which can slow down program execution.

 Security Risks: Improper use of, or vulnerabilities in, system calls can lead to security breaches or unauthorized access to system resources.

 Error Handling Complexity: Handling errors in system calls, such as resource allocation failures or timeouts, can be complex and requires careful programming.

 Compatibility Challenges: System calls may vary between operating systems, requiring developers to write code that works across multiple platforms.

 Resource Consumption: System calls can consume significant system resources, especially in environments with many concurrent processes making frequent calls.

System Programs in Operating System

System programming can be defined as the act of building systems software using system programming languages. In the computer hierarchy, hardware comes first, then the operating system, then system programs, and finally application programs. System programs provide a convenient environment for program development and execution. Some system programs are simply user interfaces, while others are more complex; traditionally, they sit between the user interface and the system calls.
In the context of an operating system, system programs are special software that provide the facility to manage and control the computer's hardware and resources. Here are examples of system programs:

1. File Management: A file is a collection of specific information stored in the memory of a computer system. File management is defined as the process of manipulating files in the computer system; it includes creating, modifying, and deleting files.

2. Command-Line Interfaces (CLIs): The CLI is an essential tool that lets the user type commands directly to the system to perform operations. It is a text-based way of interacting with the operating system and can perform many tasks, such as file manipulation and system configuration.

3. Device Drivers: Device drivers act as simple translators between the OS and devices. A driver is an intermediary that lets the OS and a device understand each other's language, so that they can work together efficiently and without interruption.

4. Status Information: Some users ask for simple information such as the date, the time, or the amount of available memory or disk space; others need more complex, detailed performance, logging, and debugging information. All this information is formatted and displayed on output devices or printed; a terminal, another output device, a file, or a GUI window is used to show the output of these programs.
5. File Modification: These programs are used for modifying the contents of files stored on disks or other storage devices; different types of editors exist for this purpose. Special commands are used to search the contents of files or to perform transformations on them.

6. Programming-Language Support: Compilers, assemblers, debuggers, and interpreters for common programming languages are provided to users, giving full support for running programs in all important languages.

7. Program Loading and Execution : When the program is ready after Assembling and
compilation, it must be loaded into memory for execution. A loader is part of an
operating system that is responsible for loading programs and libraries. It is one of the
essential stages for starting a program. Loaders, relocatable loaders, linkage editors,
and Overlay loaders are provided by the system.

8. Communications: These programs provide connections among processes, users, and computer systems. Users can send messages to another user's screen, send e-mail, browse web pages, log in remotely, and transfer files from one user to another.

Module-2
Process

A process is basically a program in execution. The execution of a process must progress in a sequential fashion.

Ex: we write our computer programs in a text file and when we execute this program, it becomes a
process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data.

Stack

The process Stack contains the temporary data such as method/function parameters, return address
and local variables.
Heap

This is dynamically allocated memory to a process during its run time.

Text

This contains the compiled program code, along with the current activity represented by the value of the program counter and the contents of the processor's registers.

Data

This section contains the global and static variables.

Process Life Cycle

When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.

Process States in Operating System

The states of a process are as follows:

 New State: In this step, the process is about to be created but not yet created. It is the
program that is present in secondary memory that will be picked up by the OS to create the
process.

 Ready State: New -> Ready to run. After the creation of a process, the process enters the
ready state i.e. the process is loaded into the main memory. The process here is ready to run
and is waiting to get the CPU time for its execution. Processes that are ready for execution by
the CPU are maintained in a queue called a ready queue for ready processes.
 Run State: The process is chosen from the ready queue by the OS for execution and the
instructions within the process are executed by any one of the available CPU cores.

 Blocked or Wait State: Whenever the process requests access to I/O or needs input from the
user or needs access to a critical region(the lock for which is already acquired) it enters the
blocked or waits state. The process continues to wait in the main memory and does not
require CPU. Once the I/O operation is completed the process goes to the ready state.

 Terminated or Completed State: Process is killed as well as PCB is deleted. The resources
allocated to the process will be released or de-allocated.
 Suspend Ready: When the ready queue becomes full, some processes are moved to a
suspended ready state.
 Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to processes that were performing I/O and were moved to secondary memory because of a lack of main memory. When the work is finished, such a process may move to the suspend ready state.

How Does a Process Move from One State to Other State?

A process can move between different states in an operating system based on its execution status
and resource availability. Here are some examples of how a process can move between different
states:

 New to Ready: When a process is created, it is in a new state. It moves to the ready state
when the operating system has allocated resources to it and it is ready to be executed.

 Ready to Running: When the CPU becomes available, the operating system selects a process
from the ready queue depending on various scheduling algorithms and moves it to the
running state.

 Running to Blocked: When a process needs to wait for an event to occur (I/O operation or
system call), it moves to the blocked state. For example, if a process needs to wait for user
input, it moves to the blocked state until the user provides the input.

 Running to Ready: When a running process is preempted by the operating system, it moves
to the ready state. For example, if a higher-priority process becomes ready, the operating
system may preempt the running process and move it to the ready state.

 Blocked to Ready: When the event a blocked process was waiting for occurs, the process
moves to the ready state. For example, if a process was waiting for user input and the input
is provided, it moves to the ready state.

 Running to Terminated: When a process completes its execution or is terminated by the operating system, it moves to the terminated state.
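The legal moves listed above form a small state machine. As a sketch (the state names are chosen here for illustration), they can be captured as a transition table that rejects moves the life cycle does not allow:

```python
# Allowed process-state transitions from the life cycle above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "blocked", "terminated"},
    "blocked": {"ready"},
    "terminated": set(),        # a terminated process goes nowhere
}

def move(state, new_state):
    """Return the new state, or raise ValueError on an illegal transition."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# One legal walk through the life cycle: created, scheduled, blocked on
# I/O, resumed, and finally terminated.
path = ["new", "ready", "running", "blocked", "ready", "running", "terminated"]
state = path[0]
for nxt in path[1:]:
    state = move(state, nxt)
```

A move such as `move("new", "running")` would raise, mirroring the fact that a new process must pass through the ready state before it can run.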

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process.
The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep
track of a process as listed below in the table –
Process State
The current state of the process, i.e., whether it is new, ready, running, waiting, or terminated.
Process privileges
This is required to allow/disallow access to system resources.
Process ID
Unique identification for each process in the operating system.
Pointer
A pointer to parent process.
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this process.

CPU registers

The various CPU registers whose contents must be saved so that the process can resume execution in the running state.

CPU Scheduling Information

Process priority and other scheduling information which is required to schedule the process.

Memory management information

This includes the information of page table, memory limits, and Segment table depending on
memory used by the operating system.

Accounting information

This includes the amount of CPU used for process execution, time limits, execution ID etc.
IO status information

This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
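As a rough sketch, the PCB fields listed above can be modelled as a record. The field names below are illustrative only, not the layout of any particular operating system:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block holding the fields described above."""
    pid: int                                        # unique process ID
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0                      # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"   # the OS updates the PCB as the process changes state
```

The OS keeps one such record per process and updates it on every state change and context switch, then discards it when the process terminates.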

Threads in Operating System (OS)


A thread is a single sequential flow of execution of the tasks of a process, so it is also known as a thread of execution or thread of control. A process in any operating system can contain more than one thread, and each thread of the same process uses a separate program counter, a stack of activation records, and control blocks. A thread is often referred to as a lightweight process.

The process can be split down into so many threads. For example, in a browser, many tabs can be
viewed as threads. MS Word uses many threads - formatting text from one thread, processing input
from another thread, etc.

Need of Thread:

o It takes far less time to create a new thread in an existing process than to create a new
process.

o Because threads can share common data, they do not need to use inter-process communication.

o Context switching is faster when working with threads.

o It takes less time to terminate a thread than a process.


Types of Threads

In the operating system, there are two types of threads.

1. Kernel level thread.

2. User-level thread.

User-level thread

The operating system does not recognize user-level threads; they are implemented entirely in user space and can be implemented easily. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages the process as if it were single-threaded. Examples: Java threads, POSIX threads.

Advantages of User-level threads

1. User threads can be implemented more easily than kernel threads.

2. User-level threads can be used on operating systems that do not support threads at the kernel level.

3. They are faster and more efficient.

4. Context-switch time is shorter than for kernel-level threads.

5. They do not require modifications to the operating system.

6. The representation of user-level threads is very simple: the registers, PC, stack, and mini thread control blocks are stored in the address space of the user-level process.

7. It is simple to create, switch, and synchronize threads without kernel intervention.

Disadvantages of User-level threads

1. User-level threads lack coordination between the thread and the kernel.

2. If a thread causes a page fault, the entire process is blocked.


Kernel level thread

Kernel-level threads are recognized by the operating system. There is a thread control block and a process control block in the system for each thread and process. Kernel-level threads are implemented by the operating system: the kernel knows about all the threads and manages them, and it offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads, and context-switch time is longer. However, if a kernel thread performs a blocking operation, another thread in the same process can continue execution. Examples: Windows, Solaris.

Advantages of Kernel-level threads

1. The kernel is fully aware of all threads.

2. The scheduler may decide to give more CPU time to a process that has a large number of threads.

3. Kernel-level threads are good for applications that block frequently.

Disadvantages of Kernel-level threads

1. The kernel must manage and schedule all threads, which adds overhead.

2. The implementation of kernel threads is more difficult than that of user threads.

3. Kernel-level threads are slower than user-level threads.

Components of Threads

Any thread has the following components.

1. Program counter

2. Register set

3. Stack space

Benefits of Threads

o Enhanced throughput of the system: When the process is split into many threads, and each
thread is treated as a job, the number of jobs done in the unit time increases. That is why the
throughput of the system also increases.

o Effective Utilization of Multiprocessor system: When you have more than one thread in one
process, you can schedule more than one thread in more than one processor.

o Faster context switch: The context switching period between threads is less than the process
context switching. The process context switch means more overhead for the CPU.

o Responsiveness: When a process is split into several threads, the process can respond as soon as any one of its threads completes its execution.

o Communication: Communication between multiple threads is simple because the threads share the same address space, whereas between two processes we must adopt special, exclusive communication strategies.

o Resource sharing: Resources can be shared between all threads within a process, such as
code, data, and files. Note: The stack and register cannot be shared between threads. There
is a stack and register for each thread.

Difference Between Process and Thread


The primary difference is that threads within the same process run in a shared memory space, while
processes run in separate memory spaces. Threads are not independent of one another like
processes are, and as a result, threads share with other threads their code section, data section, and
OS resources (like open files and signals). But, like a process, a thread has its own program counter
(PC), register set, and stack space.

What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS
Word uses multiple threads: one thread to format the text, another thread to process inputs, etc.
More advantages of multithreading are discussed below.

Multithreading is a technique used in operating systems to improve the performance and responsiveness of computer systems. It allows multiple threads (i.e., lightweight processes) to share the resources of a single process, such as the CPU, memory, and I/O devices.
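A minimal multithreading sketch using Python's threading module: four threads share the same address space (here, the variable `counter`), so access to the shared data is guarded with a lock, as in the resource-sharing point above.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Each thread updates the shared counter; the lock guards the
    read-modify-write so that no updates are lost."""
    global counter
    for _ in range(increments):
        with lock:
            counter += 1

# Four threads of one process, all sharing `counter`.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for every thread to finish
```

Without the lock, the final count could fall short of 40,000, because two threads may read the same old value before either writes back; this is the synchronization problem threads inherit by sharing memory.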

Process Schedulers in Operating System

Process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process based on a particular strategy.

Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling in OS

There are two categories of scheduling:

1. Non-preemptive: In non-preemptive, the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process terminates and
moves to a waiting state.

2. Preemptive: In preemptive scheduling, the OS allocates resources to a process for a fixed amount of time. During execution, a process may switch from the running state to the ready state, or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes: a running process can be replaced by a process with higher priority.

There are three types of process schedulers.

Long Term or Job Scheduler

A long-term scheduler is a scheduler that is responsible for bringing processes from the JOB queue
(or secondary memory) into the READY queue (or main memory). In other words, a long-term
scheduler determines which programs will enter into the RAM for processing by the CPU.

Long-term schedulers are also called Job Schedulers. Long-term schedulers have a long-term effect
on the CPU performance. They are responsible for the degree of multi programming, i.e., managing
the total processes present in the READY queue. For example, time-sharing operating systems like
Windows and UNIX usually don't have a long term scheduler. These systems put all the processes in
the main memory for the short term scheduler.
Short-Term or CPU Scheduler

1. The short-term scheduler, also known as the CPU scheduler, selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
2. A scheduling algorithm is used to select which job will be dispatched. The short-term scheduler's job can be very critical: if it selects a job whose CPU burst time is very high, then all the jobs after it will have to wait in the ready queue for a very long time. This problem is called starvation, and it may arise if the short-term scheduler makes poor choices while selecting the job.
3. Dispatching the selected process involves switching context, switching to user mode, and jumping to the proper location in the newly loaded program.

Medium-Term Scheduler

It is responsible for suspending and resuming the process. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve the
process mix or because a change in memory requirements has over committed available memory,
requiring memory to be freed up. It is helpful in maintaining a perfect balance between the I/O
bound and the CPU bound. It reduces the degree of multi programming.

Some Other Schedulers

I/O schedulers: I/O schedulers are in charge of managing the execution of I/O operations such as
reading and writing to discs or networks. They can use various algorithms to determine the order in
which I/O operations are executed, such as FCFS (First-Come, First-Served) or RR (Round Robin).

Real-time schedulers: In real-time systems, real-time schedulers ensure that critical tasks are
completed within a specified time frame. They can prioritize and schedule tasks using various
algorithms such as EDF (Earliest Deadline First) or RM (Rate Monotonic).

Context Switching

For a process execution to be continued from the same point at a later time, context switching is a
mechanism to store and restore the state or context of a CPU in the Process Control block. A context
switcher makes it possible for multiple processes to share a single CPU using this method. A
multitasking operating system must include context switching among its features.
The state of the currently running process is saved into the process control block when the scheduler
switches the CPU from executing one process to another. The state used to set the PC, registers, etc.
for the process that will run next is then loaded from its own PCB. After that, the second can start
processing.


During a context switch, the following information is saved in the PCB:

 Program counter
 Scheduling information
 The base and limit register values
 The currently used registers
 The changed process state
 I/O state information
 Accounting information

What is the need for CPU scheduling algorithm?


CPU scheduling is the process of deciding which process will own the CPU while another
process is suspended. The main function of CPU scheduling is to ensure that whenever the
CPU would otherwise remain idle, the OS selects at least one of the processes available
in the ready queue.
In multiprogramming, if the long-term scheduler selects many I/O-bound processes, then
most of the time the CPU remains idle; the function of effective scheduling is to
improve resource utilization.
If most processes keep switching from running to waiting, there may always be a chance
of failure in the system. So, to minimize this, the OS needs to schedule tasks so as to
make full use of the CPU and avoid the possibility of deadlock.
Objectives of Process Scheduling Algorithm:
 Utilization of CPU at maximum level. Keep CPU as busy as possible.
 Allocation of CPU should be fair.
 Throughput should be Maximum. i.e. Number of processes that complete their
execution per time unit should be maximized.
 Minimum turnaround time, i.e. time taken by a process to finish execution
should be the least.
 There should be a minimum waiting time and the process should not starve in
the ready queue.
 Minimum response time, i.e. the time until a process produces its first
response should be as short as possible.

What are the different terminologies to take care of in any CPU Scheduling algorithm?
 Arrival Time: Time at which the process arrives in the ready queue.
 Completion Time: Time at which process completes its execution.
 Burst Time: Time required by a process for CPU execution.
 Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
 Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
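The two formulas above can be checked directly. The process values here (arrival 2, burst 4, completion 10) are a made-up example:

```python
def turnaround_time(completion, arrival):
    # Turn Around Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(completion, arrival, burst):
    # Waiting Time = Turn Around Time - Burst Time
    return turnaround_time(completion, arrival) - burst

# Hypothetical process: arrives at 2, needs 4 units of CPU, finishes at 10.
tat = turnaround_time(10, 2)   # 8
wt = waiting_time(10, 2, 4)    # 4
```

So the process spent 8 units in the system overall, of which 4 were spent waiting rather than executing.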

A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms, which we are going to discuss in this chapter −

1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-Next (SJN) Scheduling
3. Priority Scheduling
4. Shortest Remaining Time
5. Round Robin (RR) Scheduling
6. Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
First Come First Serve (FCFS)

The first come first serve scheduling algorithm states that the process that requests the CPU
first is allocated the CPU first. It is implemented by using the FIFO queue. When a process
enters the ready queue, its PCB is linked to the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue. The running process is then removed
from the queue. FCFS is a non-preemptive scheduling algorithm.
Characteristics of FCFS
 FCFS is a non-preemptive CPU scheduling algorithm.
 Tasks are always executed on a First-come, First-serve concept.
 FCFS is easy to implement and use.
 This algorithm is not very efficient in performance, and the wait time is quite
high.
Algorithm for FCFS Scheduling
 The waiting time for the first process is 0 as it is executed first.
 The waiting time for the upcoming process can be calculated by:
wt[i] = ( at[i – 1] + bt[i – 1] + wt[i – 1] ) – at[i]
where
wt[i] = waiting time of current process
at[i-1] = arrival time of previous process
bt[i-1] = burst time of previous process
wt[i-1] = waiting time of previous process
at[i] = arrival time of current process

Consider four processes P0–P3 arriving at times 0, 1, 2, and 3, where P0, P1, and P2 have burst times 5, 3, and 8; service therefore begins at times 0, 5, 8, and 16. The wait time of each process is as follows −

Process   Wait Time = Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
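The waiting-time recurrence given above can be run directly. The sketch below assumes arrival times 0, 1, 2, 3 and burst times 5, 3, 8 for P0–P2 as in the table; P3's burst is taken as 6 here, but the waits do not depend on it:

```python
def fcfs_waiting_times(arrival, burst):
    """wt[i] = (at[i-1] + bt[i-1] + wt[i-1]) - at[i]; the first process
    waits 0. Assumes the lists are ordered by arrival time."""
    wt = [0]
    for i in range(1, len(arrival)):
        w = arrival[i - 1] + burst[i - 1] + wt[i - 1] - arrival[i]
        wt.append(max(w, 0))   # waiting time can never be negative
    return wt

waits = fcfs_waiting_times([0, 1, 2, 3], [5, 3, 8, 6])
average = sum(waits) / len(waits)   # 5.75
```

Running this reproduces the waits 0, 4, 6, 13 and the 5.75 average computed above.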

Advantages of FCFS
 The simplest and basic form of CPU Scheduling algorithm
 Easy to implement
 First come first serve method
 It is well suited for batch systems where the longer time periods for each process
are often acceptable.
Disadvantages of FCFS
 As it is a Non-preemptive CPU Scheduling Algorithm, hence it will run till it
finishes the execution.
 The average waiting time in the FCFS is much higher than in the others
 It suffers from the Convoy effect.
 Not very efficient due to its simplicity
 Processes that are at the end of the queue, have to wait longer to finish.
 It is not suitable for time-sharing operating systems where each process should
get the same amount of CPU time.

Shortest Job Next (SJN)

 This is also known as shortest job first, or SJF


 This is a non-preemptive scheduling algorithm (a preemptive variant, shortest remaining time first, also exists).
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not
known.
 The processer should know in advance how much time process will take.
For an example process set, Average Waiting Time = (3 + 16 + 9 + 0) / 4 = 28/4 = 7 ms
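A non-preemptive SJF scheduler can be sketched as follows. This is an illustration, not part of the original notes; the four processes (arrival/burst values) are assumed example data.

```python
# Non-preemptive SJF (shortest job next) sketch.
# procs: list of (pid, arrival, burst) tuples.
def sjf_schedule(procs):
    remaining = sorted(procs, key=lambda p: p[1])  # order by arrival time
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:              # CPU idle: jump to the next arrival
            time = remaining[0][1]
            continue
        pid, at, bt = min(ready, key=lambda p: p[2])  # pick shortest burst
        waits[pid] = time - at     # waited from arrival until dispatch
        time += bt                 # run to completion (non-preemptive)
        remaining.remove((pid, at, bt))
    return waits

waits = sjf_schedule([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)])
print(waits)                             # {'P0': 0, 'P1': 4, 'P3': 5, 'P2': 12}
print(sum(waits.values()) / len(waits))  # 5.25
```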
Advantages of SJF:
 SJF is better than the first-come first-served (FCFS) algorithm as it reduces the
average waiting time.
 SJF is generally used for long-term scheduling.
 It is suitable for jobs run in batches, where run times are known in advance.
 SJF is provably optimal in terms of average waiting time (and hence average
turnaround time) when all jobs are available at once.
Disadvantages of SJF:
 SJF may cause very long turnaround times or starvation: a long job can wait
indefinitely if shorter jobs keep arriving.
 Job run times must be known in advance, which is often hard to predict.
 It is complicated to estimate the length of the upcoming CPU burst accurately.

Priority Scheduling Algorithm


In priority scheduling, a priority number is assigned to each process. In
some systems the lower the number, the higher the priority; in others, the
higher the number, the higher the priority. The process with the highest
priority among the available processes is given the CPU. There are two types
of priority scheduling: preemptive and non-preemptive.
The priority number assigned to a process may or may not change. If the
priority does not change throughout the process's lifetime, it is called static
priority; if it is recomputed at regular intervals, it is called dynamic priority.

Non Preemptive Priority Scheduling


In non-preemptive priority scheduling, processes are scheduled according to
the priority number assigned to them. Once a process is scheduled, it runs
until completion. In this example, the lower the priority number, the higher
the priority of the process. Because the convention varies, exam questions
(for example in GATE) state explicitly which number denotes the highest
priority and which the lowest.

Example
In the Example, there are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their
priorities, Arrival Time and burst time are given in the table.

Process ID   Priority   Arrival Time   Burst Time
1            2          0              3
2            6          2              5
3            3          1              4
4            5          4              2
5            7          6              9
6            4          5              4
7            10         7              10

We can prepare the Gantt chart according to the Non Preemptive priority
scheduling.

The process P1 arrives at time 0 with a burst time of 3 units and priority
number 2. Since no other process has arrived yet, the OS schedules it
immediately.

During the execution of P1, two more processes, P2 and P3, arrive. Since the
priority number of P3 is 3 (lower, hence higher priority, than P2's 6), the CPU
will execute P3 before P2.

During the execution of P3, all the remaining processes become available in the
ready queue. The process with the lowest priority number is given the CPU next;
since P6 has priority number 4, it is executed just after P3.

After P6, P4 has the lowest priority number among the available processes, so it
is executed for its whole burst time.

Since all the jobs are now in the ready queue, they execute according to their
priorities. If two jobs have the same priority number, the one with the earlier
arrival time is executed first.

From the Gantt chart, we can determine the completion time of every process;
the turnaround time, waiting time and response time then follow.

1. Turn Around Time = Completion Time - Arrival Time
2. Waiting Time = Turn Around Time - Burst Time

Process Id   Priority   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time   Response Time

1            2          0              3            3                 3                 0              0
2            6          2              5            18                16                11             11
3            3          1              4            7                 6                 2              2
4            5          4              2            13                9                 7              7
5            7          6              9            27                21                12             12
6            4          5              4            11                6                 2              2
7            10         7              10           37                30                20             20

(For a non-preemptive algorithm, the response time, i.e. the time from arrival
until the process first gets the CPU, equals the waiting time.)

Avg Waiting Time = (0 + 11 + 2 + 7 + 12 + 2 + 20) / 7 = 54/7 ≈ 7.71 units
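The worked example above can be checked with a short simulation. This sketch is not part of the original notes; it assumes, as in the example, that a lower priority number means a higher priority and that ties are broken by arrival time.

```python
# Non-preemptive priority scheduling sketch (lower number = higher priority).
# procs: list of (pid, priority, arrival, burst) -> {pid: completion time}
def priority_np(procs):
    remaining, time, completion = list(procs), 0, {}
    while remaining:
        ready = [p for p in remaining if p[2] <= time]
        if not ready:
            time = min(p[2] for p in remaining)  # CPU idle: jump ahead
            continue
        # highest priority = smallest number; ties broken by arrival time
        job = min(ready, key=lambda p: (p[1], p[2]))
        time += job[3]                            # run to completion
        completion[job[0]] = time
        remaining.remove(job)
    return completion

procs = [(1, 2, 0, 3), (2, 6, 2, 5), (3, 3, 1, 4), (4, 5, 4, 2),
         (5, 7, 6, 9), (6, 4, 5, 4), (7, 10, 7, 10)]
print(priority_np(procs))
# {1: 3, 3: 7, 6: 11, 4: 13, 2: 18, 5: 27, 7: 37}
```

The completion times match the table's completion column, confirming the execution order P1, P3, P6, P4, P2, P5, P7.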

Preemptive Priority Scheduling


In preemptive priority scheduling, when a process arrives in the ready queue,
its priority is compared with the priorities of the other processes in the
ready queue as well as with that of the process currently being executed by
the CPU. The one with the highest priority among all available processes is
given the CPU next.

The difference between preemptive and non-preemptive priority scheduling is
that, in the preemptive version, the job being executed can be stopped at the
arrival of a higher-priority job.

Once all the jobs get available in the ready queue, the algorithm will behave as
non-preemptive priority scheduling, which means the job scheduled will run till
the completion and no preemption will be done.

Example
There are 7 processes P1, P2, P3, P4, P5, P6 and P7 given. Their respective
priorities, Arrival Times and Burst times are given in the table below.

Process Id   Priority   Arrival Time   Burst Time

1            2          0              1
2            6          1              7
3            3          2              3
4            5          3              6
5            4          4              5
6            10         5              15
7            9          6              8

(In this example, the lower the priority number, the higher the priority; P6,
with priority number 10, therefore has the lowest priority.)

GANTT chart Preparation


At time 0, P1 arrives with a burst time of 1 unit and priority 2. Since no other
process is available, it is scheduled until the next job arrives or it
completes, whichever comes first.

At time 1, P2 arrives. P1 has completed its execution and no other process is
available at this time, hence the operating system schedules P2 regardless of
the priority assigned to it.

The next process, P3, arrives at time 2. The priority of P3 is higher than that
of P2, hence the execution of P2 is stopped and P3 is scheduled on the CPU.

During the execution of P3, three more processes, P4, P5 and P6, become
available. Since all three have lower priority than the process in execution,
the OS cannot preempt it. P3 completes its execution, and then P5, having the
highest priority among the available processes, is scheduled.

During the execution of P5, all the remaining processes become available in the
ready queue. From this point the algorithm behaves like non-preemptive priority
scheduling: the OS simply takes the available process with the highest priority
and executes it until completion. In this case, P4 is scheduled next and
executed until completion.

Once P4 is completed, the process with the highest priority in the ready queue
is P2, so P2 is scheduled next and given the CPU until completion; its
remaining burst time is 6 units. P7 is scheduled after it.
The only remaining process is P6, with the lowest priority; the operating
system has no choice but to execute it last.

The completion time of each process is determined with the help of the Gantt
chart. The turnaround time and the waiting time are calculated with the
following formulas.

1. Turnaround Time = Completion Time - Arrival Time
2. Waiting Time = Turnaround Time - Burst Time

Process Id   Priority   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time

1            2          0              1            1                 1                 0
2            6          1              7            22                21                14
3            3          2              3            5                 3                 0
4            5          3              6            16                13                7
5            4          4              5            10                6                 1
6            10         5              15           45                40                25
7            9          6              8            30                24                16

Avg Waiting Time = (0+14+0+7+1+25+16)/7 = 63/7 = 9 units
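The preemptive schedule above can also be checked by simulation. This sketch is not part of the original notes: it steps one time unit at a time (an assumed granularity), takes a lower priority number to mean higher priority as in the worked example, and uses P7's arrival time of 6 from the results table.

```python
# Preemptive priority scheduling sketch (lower number = higher priority),
# simulated one time unit at a time.
# procs: list of (pid, priority, arrival, burst) -> {pid: completion time}
def priority_preemptive(procs):
    remaining = {pid: bt for pid, _, _, bt in procs}  # burst time left
    completion, time = {}, 0
    while remaining:
        ready = [p for p in procs if p[0] in remaining and p[2] <= time]
        if not ready:
            time += 1  # CPU idle this unit
            continue
        pid = min(ready, key=lambda p: p[1])[0]  # highest priority right now
        remaining[pid] -= 1                       # run it for one unit
        time += 1
        if remaining[pid] == 0:
            del remaining[pid]
            completion[pid] = time
    return completion

procs = [(1, 2, 0, 1), (2, 6, 1, 7), (3, 3, 2, 3), (4, 5, 3, 6),
         (5, 4, 4, 5), (6, 10, 5, 15), (7, 9, 6, 8)]
print(priority_preemptive(procs))
# {1: 1, 3: 5, 5: 10, 4: 16, 2: 22, 7: 30, 6: 45}
```

Re-evaluating the ready set every unit is what makes the schedule preemptive: at time 2, P3 displaces the running P2, which resumes only at time 16.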
