Operating Systems

Preface

ABOUT THE SUBJECT


An Operating System (OS) is software that manages computer hardware and software resources and provides
common services for computer programs. It is an essential component of the system software in a computer
system. Examples of popular modern operating systems include Android, BlackBerry, Linux and Microsoft Windows.
All of these examples except Microsoft Windows share roots in UNIX.

The process is the fundamental concept of operating system structure. A program under execution is referred to as a
process. It can also be defined as an active entity that can be assigned to a processor for execution. A process is a
dynamic object that resides in main memory. A process includes the current values of the program counter and the
processor's registers. Each process possesses its own virtual CPU. A file is a grouping of similar records or related
information which is stored in secondary memory. A collection of files is called a directory. Files and
directories are the basic mechanisms of a file system; directories are used to organize files. Protection is a
mechanism for controlling access to computer resources by users or processes. A protection-enabled system can
distinguish between authorized and unauthorized access or usage and can take measures to defend the
system against misuse. If protection is not employed, errors may spread among the subcomponents of a system:
this usually happens when a defective subsystem interacts with a healthy subsystem through its interface,
corrupting the healthy subsystem.

ABOUT THE BOOK


This book provides theoretical and practical knowledge to the student about "Operating Systems". It covers the
complete syllabus of the subject prescribed by O.U. The content of this book is presented in a consistently
readable and student-friendly format so that students can prepare well for both of their exams. This book is beneficial
because it gives complete, up-to-date information about each topic and the questions likely to be asked in the exams. This
helps the student get a clear idea about the important questions in each topic. This book has been prepared
keeping students' views, ideas and suggestions in mind. The main motivation behind the publication of this book
is to help the student gain good marks and knowledge in the subject.

According to the examination pattern of B.Sc students, this book provides the following features:

 A list of definitions is provided before the units for easy reference.

 Every unit is structured into two main sections, viz. Short Questions and Essay Questions with solutions,
along with Learning Objectives and an Introduction.

 Very short answers are also given.

 Three model papers are provided in order to help students understand the paper pattern of the end examination.

 Important questions are included to help the students prepare for Internal and External Assessment.
The table below gives a complete idea about the subject, which will help students plan and score good marks in their
exams.

Unit No. | Unit Name | Description

1. | Introduction, OS Structures, Process Management and Synchronization | This unit includes topics like Introduction: Computer System Architecture, Computing Environments; Operating System Structures: Operating System Services, User Interface for Operating System, System Calls, Types of System Calls, Operating System Structure; Process Management: Process Concept, Process Scheduling, Operations on Processes, Inter Process Communication, Producer-Consumer Problem; Process Synchronization: Critical Section Problem, Peterson's Solution, Synchronization, Semaphores, Monitors.

2. | CPU Scheduling and Deadlocks | This unit includes topics like CPU Scheduling: Concepts, Scheduling Criteria, Scheduling Algorithms; Deadlocks: System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.

3. | Main and Virtual Memory, Mass-Storage Structure, File Systems and Implementation | This unit includes topics like Main Memory: Introduction, Swapping, Contiguous Memory Allocation, Segmentation, Paging; Virtual Memory: Introduction, Demand Paging, Page Replacement, Allocation of Frames, Thrashing; Mass-Storage Structure: Overview, Disk Scheduling, RAID Structure; File Systems: File Concept, Access Methods, Directory and Disk Structure, File System Mounting, Protection, File System Implementation, Directory Implementation, Allocation Methods, Free Space Management.

It is sincerely hoped that this book will satisfy the expectations of students and at the same time help them score
maximum marks in their exams.

Suggestions for improvement of the book from our esteemed readers will be highly appreciated and incorporated
in our forthcoming editions.
Model Question Papers with Solutions

Faculty of science
Model Paper 1
B.Sc. (CBCS) V-Semester Examinations
Subject: Computer Science
DSE-1E: Operating Systems
Paper-V
Time: 2 Hours Max. Marks: 60

Section - A ( 5 × 3 = 15 Marks )
Answer any Five of the following Eight questions. Each carries Three marks.
1. What do you mean by multiprocessor systems? (Unit-I, Page No. 2, Q1)
2. Define system call. (Unit-I, Page No. 3, Q6)
3. What are short-term, long-term and medium-term schedulings? (Unit-II, Page No. 50, Q1)
4. List three overall strategies in handling deadlocks. (Unit-II, Page No. 51, Q7)
5. Write the differences between logical and physical address space. (Unit-III, Page No. 74, Q1)
6. List the file operations performed by operating systems. (Unit-III, Page No. 75, Q5)
7. What advantages are there to the two-level directory? (Unit-III, Page No. 76, Q8)
8. Define a process. (Unit-I, Page No. 4, Q9)

Section - B ( 3 × 15 = 45 Marks )
Answer all of the following Three questions. Each carries Fifteen marks.
9. (a) Discuss briefly about,
(i) Single processor systems
(ii) Multiple processor systems
(iii) Clustered systems. (Unit-I, Page No. 5, Q13)
OR
(b) What is inter-process communication? What are the models of IPC? (Unit-I, Page No. 24, Q33)
10. (a) Explain various scheduling concepts. (Unit-II, Page No. 53, Q12)
OR
(b) Briefly explain about deadlock prevention methods with examples of each. (Unit-II, Page No. 62, Q22)
11. (a) Write short notes on,
(i) Dynamic loading
(ii) Dynamic linking
(iii) Shared libraries. (Unit-III, Page No. 78, Q12)
OR
(b) What is a file? Discuss its attributes. (Unit-III, Page No. 102, Q36)


Faculty of science
Model Paper 2
B.Sc. (CBCS) V-Semester Examinations
Subject: Computer Science
DSE-1E: Operating Systems
Paper-V
Time: 2 Hours Max. Marks: 60

Section - A ( 5 × 3 = 15 Marks )
Answer any Five of the following Eight questions. Each carries Three marks.
1. Define operating system. Give two examples. (Unit-I, Page No. 2, Q2)

2. List the features of system call. (Unit-I, Page No. 3, Q8)

3. List any three scheduling algorithms. (Unit-II, Page No. 50, Q2)

4. What is safe state in deadlocks? (Unit-II, Page No. 52, Q9)

5. Define a page and a frame. (Unit-III, Page No. 74, Q3)

6. List the differences among the file access methods. (Unit-III, Page No. 75, Q6)

7. What does OPEN do in file operations? (Unit-III, Page No. 76, Q9)

8. Write short notes on semaphore. (Unit-I, Page No. 4, Q12)

Section - B ( 3 × 15 = 45 Marks )
Answer all of the following Three questions. Each carries Fifteen marks.

9. (a) Discuss various approaches of designing an operating system. (Unit-I, Page No. 16, Q24)

OR

(b) Define the following,


(i) Process
(ii) Process control block

(iii) Process state diagram. (Unit-I, Page No. 18, Q26)

10. (a) Explain FCFS, SJF, Priority, Round robin scheduling algorithms. (Unit-II, Page No. 55, Q14)

OR

(b) Write about deadlock avoidance. (Unit-II, Page No. 64, Q23)

11. (a) Explain about page replacement algorithms. (Unit-III, Page No. 90, Q24)

OR

(b) What are the structures and operations that are used to implement file
system operations? (Unit-III, Page No. 114, Q49)

Faculty of science
Model Paper 3
B.Sc. (CBCS) V-Semester Examinations
Subject: Computer Science
DSE-1E: Operating Systems
Paper-V
Time: 2 Hours Max. Marks: 60

Section - A ( 5 × 3 = 15 Marks )
Answer any Five of the following Eight questions. Each carries Three marks.

1. List various types of operating system. (Unit-I, Page No. 2, Q3)

2. What is Inter Process Communication (IPC)? List the models of IPC in operating systems. (Unit-I, Page No. 4, Q10)

3. Define deadlock. (Unit-II, Page No. 51, Q5)

4. Draw a resource allocation graph to show a deadlock. (Unit-II, Page No. 52, Q10)

5. Define file management. (Unit-III, Page No. 74, Q4)

6. List the operations to be performed on directories. (Unit-III, Page No. 75, Q7)

7. What are tree structured directories? (Unit-III, Page No. 76, Q10)

8. Write the services of operating system. (Unit-I, Page No. 3, Q5)

Section - B ( 3 × 15 = 45 Marks )
Answer all of the following Three questions. Each carries Fifteen marks.

9. (a) Define operating system. What are the services of an operating system?
Explain. (Unit-I, Page No. 10, Q19)

OR

(b) Describe about semaphores and their usage and implementation. (Unit-I, Page No. 35, Q43)

10. (a) Define deadlock. Explain necessary conditions for arising deadlocks. (Unit-II, Page No. 61, Q19)

OR

(b) Explain all the strategies involved in deadlock detection. (Unit-II, Page No. 66, Q24)

11. (a) Explain various disk scheduling algorithms with an example. (Unit-III, Page No. 97, Q32)

OR

(b) Describe various file allocation methods briefly. (Unit-III, Page No. 119, Q53)



Operating Systems
B.Sc. III-Year V-Semester (OU)

CONTENTS
Syllabus (As per 2016-17 Curriculum)
List of Important Definitions L.1 – L.3

Model Question Papers with Solutions (As per OU Curriculum)


Model Paper-I MP.1 – MP.1
Model Paper-II MP.2 – MP.2
Model Paper-III MP.3 – MP.3

Unit-wise Short & Essay Type Questions with Solutions

Unit No. Unit Name Question Nos. Page Nos.


Topic No. Topic Name

Unit - I Introduction, OS Structures, Process Management and Synchronization Q1 - Q49 1 - 48

Part-A Short Questions with Solutions Q1 - Q12 2 - 4

Part-B Essay Questions with Solutions Q13 - Q49 5 - 44
1.1 Introduction 5
1.1.1 Computer System Architecture Q13 - Q14 5
1.1.2 Computing Environments Q15 - Q18 7
1.2 Operating System Structures 10
1.2.1 Operating System Services Q19 10
1.2.2 User Interface for Operating System Q20 11
1.2.3 System Calls, Types of System Calls Q21 - Q22 13
1.2.4 Operating System Structure Q23 - Q25 14
1.3 Process Management 18
1.3.1 Process Concept Q26 - Q28 18
1.3.2 Process Scheduling Q29 - Q30 21
1.3.3 Operations on Processes Q31 - Q32 23
1.3.4 Inter Process Communication, Examples:
Producer-Consumer Problem Q33 - Q36 24
1.4 Process Synchronization 28

1.4.1 Critical-section Problem, Peterson’s Solution Q37 - Q40 28

1.4.2 Synchronization Q41 31

1.4.3 Semaphores Q42 - Q45 33

1.4.4 Monitors Q46 - Q49 39

Internal Assessment 45 - 48

Unit - II CPU Scheduling and Deadlocks Q1 - Q27 49 - 72

Part-A Short Questions with Solutions Q1 - Q10 50 - 52

Part-B Essay Questions with Solutions Q11 - Q27 53 - 68

2.1 CPU Scheduling 53

2.1.1 Concepts Q11 - Q12 53

2.1.2 Scheduling Criteria Q13 54

2.1.3 Scheduling Algorithms Q14 - Q16 55

2.2 Deadlocks 60

2.2.1 System Model Q17 - Q18 60

2.2.2 Deadlock Characterization Q19 - Q20 61

2.2.3 Methods for Handling Deadlocks Q21 62

2.2.4 Deadlock Prevention Q22 62

2.2.5 Deadlock Avoidance Q23 64

2.2.6 Deadlock Detection Q24 - Q26 66

2.2.7 Recovery from Deadlock Q27 68

Internal Assessment 69 - 72

Unit - III Main and Virtual Memory, Mass-Storage Structure, File Systems and Implementation Q1 - Q55 73 - 128

Part-A Short Questions with Solutions Q1 - Q10 74 - 76

Part-B Essay Questions with Solutions Q11 - Q55 77 - 124

3.1 Main Memory 77

3.1.1 Introduction Q11 - Q12 77

3.1.2 Swapping Q13 78

3.1.3 Contiguous Memory Allocation Q14 - Q17 79

3.1.4 Segmentation, Paging Q18 - Q21 84


3.2 Virtual Memory 88

3.2.1 Introduction Q22 88

3.2.2 Demand Paging, Page Replacement Q23 - Q24 88


3.2.3 Allocation of Frames Q25 - Q28 94
3.2.4 Thrashing Q29 - Q30 95

3.3 Mass-Storage Structure 96

3.3.1 Overview Q31 96


3.3.2 Disk Scheduling Q32 97

3.3.3 RAID Structure Q33 - Q35 100

3.4 File Systems 102

3.4.1 File Concept, Access Methods Q36 - Q40 102


3.4.2 Directory and Disk Structure Q41 - Q43 105
3.4.3 File System Mounting, Protection Q44 - Q46 110

3.4.4 File System Implementation, Directory


Implementation Q47 - Q52 113
3.4.5 Allocation Methods Q53 119
3.4.6 Free Space Management Q54 - Q55 123

Internal Assessment 125 - 128

Important Questions IQ.1 – IQ.4


Syllabus
UNIT-I

Introduction: Computer System Architecture, Computing Environments.

Operating System Structures: Operating System Services, User Interface for Operating System, System

Calls, Types of System Calls, Operating System Structure.

Process Management: Process Concept, Process Scheduling, Operations on Processes, Inter Process

Communication, Examples – Producer-Consumer Problem.

Process Synchronization: Critical-section Problem, Peterson’s Solution, Synchronization, Semaphores,

Monitors.

UNIT-II

CPU Scheduling: Concepts, Scheduling Criteria, Scheduling Algorithms.

Deadlocks: System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock

Prevention, Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.

UNIT-III

Main Memory: Introduction, Swapping, Contiguous Memory Allocation, Segmentation, Paging.

Virtual Memory: Introduction, Demand Paging, Page Replacement, Allocation of Frames, Thrashing.

Mass-Storage Structure: Overview, Disk Scheduling, RAID Structure.

File Systems: File Concept, Access Methods, Directory and Disk Structure, File System Mounting,

Protection. File System Implementation, Directory Implementation, Allocation Methods, Free-space

Management.
LIST OF IMPORTANT DEFINITIONS

UNIT - I
1. Operating System
An operating system is a program or a collection of programs that controls the computer hardware and acts as an
intermediate between the user and hardware.
2. Process
Process is the fundamental concept of operating systems structure. A program under execution is referred to as a process.
3. Inter Process Communication (IPC)
Inter Process Communication (IPC) is defined as the communication between processes.
4. Command Line Interface
Command line interface, which is popularly known as the command interpreter, makes use of various commands with
which a user can directly interact with the operating system.
5. Thread
A thread can be thought of as a basic unit of CPU utilization.
6. Schedulers
A scheduler is defined as a program which selects a user program from disk and allocates the CPU to that program.
7. Short Term Scheduler (STS)
The short term scheduler is defined as a program (part of the operating system) that selects among the processes that are ready
to execute and allocates the CPU to one of them.
8. Context Switching
Context switching refers to the process of switching the CPU to some other process thereby saving the state of the old
process and loading the saved state for the new process.
9. Critical Resource
A resource that cannot be shared between two or more processes at the same time is called a critical resource.
10. Critical Section
A critical section is a segment of code present in a process in which the process may be modifying or accessing common
variables or shared data items.
11. Starvation
Two or more processes are said to be in starvation, if they are waiting perpetually for a resource which is occupied by
another process.
12. Preemptive Kernel
A Kernel that permits a process to be preempted or interrupted during its execution is called preemptive kernel.
13. Non-preemptive Kernel
A Kernel that does not permit a process to be preempted or interrupted during its execution is called non-preemptive
kernel.
14. Peterson’s Solution
Peterson's solution is a software based solution to the critical section problem that satisfies all the requirements like mutual
exclusion, progress and bounded waiting.
15. Semaphore
A semaphore is an integer variable which can be accessed using two operations wait and signal.
16. Monitor
A monitor is a construct in a programming language which consists of procedures, variables and data structures.

UNIT - II
1. Deadlock
A deadlock is a situation in which a process waits indefinitely for requested resources, while those resources are held by other
processes which are themselves in a waiting state.
2. Program
'Program' refers to the collection of instructions given to the system in any programming language. Alternatively, a
program is a static object residing in a file.
3. Scheduling
Scheduling is defined as the activity of deciding when processes will receive the resources they request.
4. Jobs
A job is a sequence of programs used to perform a particular task. Typically a job is carried out in various steps where
each step depends on the successful execution of its preceding step. It is usually used in a non-interactive environment.
5. Job Scheduling
Job scheduling is also called long-term scheduling; it is responsible for selecting a job from disk and transferring
it into main memory for execution.
6. CPU Utilization
The amount of time that the CPU is kept busy executing processes.
7. Throughput
The number of processes that are completed per unit time.
8. Turnaround Time
The interval from the time of submission to the time of completion.
9. Waiting Time
The sum of periods spent waiting in the ready queue.
10. Response Time
The time from the submission of a request until the first response is produced in an interactive system.

UNIT - III
1. Page
A page refers to the logical memory location which is divided into fixed-sized blocks.
2. Frame
A frame refers to the physical memory location which is divided into fixed-sized blocks.
3. File
A file is a grouping of similar records or related information which is stored in secondary memory.
4. File Management
The process of managing files and the operations performed on files is referred to as file management.
5. Logical Address Space
Logical address is defined as the address which is generated by the CPU.
6. Physical Address Space
Physical address is defined as the actual memory address where the data or instruction is present.
7. Fragmentation
Fragmentation is defined as a wastage of memory space.
8. Virtual Memory
Virtual memory is a concept of giving programmers an illusion that they have a large memory at their disposal even
though they have very small physical memory.
9. Demand Paging
Pure demand paging is a technique where a process starts execution without a single page in memory.
10. Thrashing
Thrashing refers to a situation wherein the operating system wastes most of its crucial time in accessing the secondary
storage, looking out for referenced pages that are unavailable in memory.
11. Working Set Model
The working set can be defined as the set of pages that a program is currently using (or) has most recently used.
12. Seek Time
The time required to move the head to the desired cylinder or track is called as seek time or random access time or
positioning time.
13. Rotational Latency
The time required to move the head to the desired sector by spinning the platter is called as rotational latency.
14. Security
The term security refers to a state of being protected from harm or from those that cause negative effects.
15. Protection
Protection refers to keeping the system safe physically as well as from unauthorized access.
16. Allocation Methods
An allocation method is considered to be efficient if it allocates space such that no disk space is wasted and accessing
of the files takes less time.



UNIT 1: INTRODUCTION, OS STRUCTURES, PROCESS MANAGEMENT AND SYNCHRONIZATION

Learning Objectives

After studying this unit, a student will have thorough knowledge about the following key concepts,
 Computer System Architecture and Computing Environments.
 Various Operating System Services and User Interfaces for Operating System.
 System Call and various types of it.
 Process concept, Scheduling, Operations and Inter Process Communication.
 Critical Section Problem, Peterson's Solution, Synchronization, Semaphores and Monitors.

Introduction
An operating system is a program or collection of programs that controls the computer hardware and acts as
an interface between the user and the hardware. The first operating system was developed in the early 1950s by General
Motors Research Laboratories. Later, different operating systems were developed, such as batch processing,
multiprogramming, time sharing, distributed and real time systems. The major components of a typical OS are
process management, memory management, file management, storage management and I/O system management.
The primary functionalities of an operating system are that it acts as a resource manager and as a user/computer interface.
There are many services provided by the OS, which are accessed using different types of system calls.

A process refers to a program which is under execution. The information about each process that is in
execution mode is made available in the process control block. There are two basic operations that can be
performed on a process i.e., process creation and deletion.

PART-A
SHORT QUESTIONS WITH SOLUTIONS
Q1. What do you mean by multiprocessor systems?
Answer : Model Paper-I, Q1

Computer systems that carry more than one general-purpose processor are known as multiprocessor (or) parallel (or)
tightly coupled systems. These processors share the computer bus, memory, clock and various hardware components. These types
of systems are used because of the following advantages.
(a) Increased reliability
(b) Increased throughput
(c) Economy of scale.
Q2. Define operating system. Give two examples.
Answer : Model Paper-II, Q1

Operating System
An operating system is a program or a collection of programs that controls the computer hardware and acts as an intermediary between the user and the hardware. It provides a platform for application programs to run on. It has the following objectives,
(i) Efficiency
An operating system must be capable of managing all the resources present in the system.
(ii) Convenience
An operating system should provide an environment that is simple and easy to use.
(iii) Ability to Evolve
An operating system should be developed in such a way that it provides flexibility and maintainability. Hence, changes can be done easily.

User <-> Operating system <-> Computer hardware
Figure: Operating System as an Interface


Examples
Windows, Unix, MS-DOS.
Q3. List various types of operating system.
Answer : Model Paper-III, Q1

The following are different types of Operating Systems (OS),


1. Batch Processing
A batch processing operating system reads a set of separate jobs, each with its own control card. The control card contains information about the task to be performed. Once a job is completed, its output is printed.
Example
MS DOS
2. Multiprogramming
In multiprogramming, the OS picks one of the jobs from the job pool and sends it to the CPU. When an I/O operation is encountered in that job, the OS allocates I/O devices to that job and allocates the CPU to the next job in the job pool.
Example
Windows
3. Time Sharing
In time sharing systems, each program is given a certain time slot i.e., the CPU is allocated to the program for a certain period of time called a "time quantum" (or) "time slice".
Example
Unix
4. Real Time System
Real time systems are time bounded systems, wherein the system must respond to perform a specific task within a predefined boundary.
Example
Airport traffic control, space flights
5. Distributed System
The primary focus of a distributed system is to provide transparency while accessing a shared resource i.e., a user should not worry about the location of the data.
Example
Novell network.
Q4. Define multi-user operating system and give two examples.
Answer :
Multi-user Operating System
In contrast to a single user operating system, this operating system enables multiple users to operate the computer simultaneously. This operating system performs efficient utilization of the CPU by assigning an equal amount of time slice to every individual user (connected through different terminals).
Examples
The following are examples of multi-user operating systems,
(i) Structure of operating system which includes six layers.
(ii) Structure of MULTICS system which includes several concentric layers.
Q5. Write the services of operating system.
Answer : Model Paper-III, Q8
The following are the services provided by an operating system,
(i) Program creation and execution
(ii) User interface
(iii) I/O device support
(iv) File system management
(v) Interprocess communication
(vi) Resource allocation
(vii) Error detection
(viii) Accounting
(ix) Protection and security.
Q6. Define system call.
Answer : Model Paper-I, Q2
The operating system provides a wide range of system services and functionalities. These services can be accessed by making use of system calls. The system calls act as the interface between user applications and the operating system (services). They are available as built-in functions or routines in almost all high-level languages such as C, C++, etc.

Figure: User Application Invoking System Call (a call such as fork( ) in the user area crosses the system call interface into the OS kernel area, where the implementation of the fork system call executes and returns)

Q7. What are the types of system calls?
Answer :
System calls are categorized into five groups depending upon the functionality offered by them. They are,
(i) Process control system calls
(ii) File management system calls
(iii) System information management system calls
(iv) Device management system calls
(v) Communication system calls.
Q8. List the features of system call.
Answer : Model Paper-II, Q2
The following are the features of system calls,
1. They allow a process to be created, loaded, executed and terminated.
2. They allow file operations to be performed, such as open, close, read, write, get file attributes and set file attributes.
3. They allow management of system information like the system date and time, operating system version etc.
4. They allow access to system resources like main memory, disk drives etc.
5. They allow processes to exchange information by message passing or shared memory.
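To make the system call discussion in Q6-Q8 concrete, the following is a minimal illustrative C sketch for a UNIX-like system (the program is an assumption of this edition, not from the original text) of a user program invoking the fork( ), wait( ) and exit( ) process control system calls:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* process control system call: create a child */

    if (pid < 0) {               /* fork failed, no child was created */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {       /* child process: fork() returned 0 */
        printf("Child: my pid is %d\n", (int)getpid());
        exit(EXIT_SUCCESS);      /* process control system call: terminate */
    } else {                     /* parent process: fork() returned the child's pid */
        wait(NULL);              /* suspend until the child terminates */
        printf("Parent: child %d has finished\n", (int)pid);
    }
    return 0;
}

Note that fork( ) returns twice, 0 in the child and the child's pid in the parent, which is how a single system call results in two executing processes.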
Q9. Define a process.
Answer : Model Paper-I, Q8

A process is the fundamental concept of operating system structure. A program under execution is referred to as a process.
It can also be defined as an active entity that can be assigned to a processor for execution. A process is a dynamic object that
resides in main memory. A process includes the current values of the program counter and the processor's registers. Each process
possesses its own virtual CPU. A process contains the following two elements,
(a) Program code
(b) A set of data.
Q10. What is Inter Process Communication (IPC)? List the models of IPC in operating systems.
Answer : Model Paper-III, Q2

Inter Process Communication


Inter Process Communication (IPC) is defined as the communication between processes. It provides a mechanism
to allow processes to communicate and to synchronize their actions.
Models in IPC
Inter process communication takes place in two ways,
(i) Shared memory system
(ii) Message passing system.
Q11. What is meant by shared memory in inter-process communication?
Answer :
A shared memory system requires the communicating processes to share some variables. The processes are expected to exchange
information through the use of these shared variables. Here, the operating system needs to provide only the shared memory; the
responsibility for providing communication is undertaken by the application programmers, and the operating system does not
interfere in the communication.
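As a sketch of the shared memory model described in Q10 and Q11, the following C program uses the standard POSIX calls shm_open( ), ftruncate( ) and mmap( ). The object name /demo_shm is an arbitrary illustrative choice; a second cooperating process that maps the same name would see the same bytes. On Linux, linking with -lrt may be required.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";   /* illustrative shared object name */
    const size_t size = 4096;

    /* Create (or open) the shared memory object and set its size. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* Map the object into this process's address space. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* The application, not the OS, decides what the shared bytes mean. */
    strcpy(region, "hello via shared memory");
    printf("wrote: %s\n", region);

    munmap(region, size);
    close(fd);
    shm_unlink(name);                 /* remove the object when done */
    return 0;
}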
Q12. Write short notes on semaphore.
Answer : Model Paper-II, Q8

Semaphore
Signals provide a simple means of cooperation between two or more processes, in such a way that a process can be forcefully
stopped at some specified point till it receives a signal. For signalling between the processes, a special variable called a semaphore
(or counting semaphore) is used. For a semaphore 'S', a process can execute two primitives as follows,
(i) semSignal(S)
This semaphore primitive is used to transmit a signal through semaphore 'S'.
(ii) semWait(S)
This semaphore or counting semaphore primitive is used to receive a signal through semaphore 'S'. If the corresponding
transmit signal has not yet been sent, then the process is suspended till a signal is received.
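The semWait and semSignal primitives above map directly onto the sem_wait( ) and sem_post( ) calls of the POSIX semaphore API. The following is a minimal illustrative sketch (an assumption of this edition, not from the original text) in which the main thread suspends on the semaphore until a worker thread transmits the signal; on Linux, compile with -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t ready;                  /* counting semaphore, initial value 0 */

static void *worker(void *arg)
{
    (void)arg;
    printf("worker: transmitting the signal\n");
    sem_post(&ready);                /* semSignal(S): transmit a signal */
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init(&ready, 0, 0);          /* middle 0 = shared between threads only */
    pthread_create(&t, NULL, worker, NULL);

    sem_wait(&ready);                /* semWait(S): suspend until signalled */
    printf("main: signal received\n");

    pthread_join(t, NULL);
    sem_destroy(&ready);
    return 0;
}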


PART-B
ESSAY QUESTIONS WITH SOLUTIONS
1.1 Introduction

1.1.1 Computer System Architecture


Q13. Discuss briefly about,
(i) Single processor systems
(ii) Multiple processor systems
(iii) Clustered systems.
Answer : Model Paper-I, Q9(a)

(i) Single Processor Systems

As the name suggests, a single-processor system carries only one processor, and most computer systems are based
on this type of system. Typically, these systems carry a main Central Processing Unit (CPU) whose major responsibility
is to execute the general purpose instruction set, apart from handling user processes. They also carry certain special purpose
processors that are device specific, for devices which include keyboards, graphics controllers etc., while in mainframe computers some of these
processors are used for handling general purpose tasks like transferring data quickly between various components.
These special purpose processors perform limited tasks and are managed and monitored by the operating system. For instance, consider a disk-controller
microprocessor, which is a special purpose processor used for implementing its own disk queue and scheduling algorithm by
receiving instructions from the main CPU. Moreover, modern computer peripherals now carry microprocessors, say, in a
keyboard which sends the converted codes of keystrokes, thereby relieving the CPU from that overhead.
In some systems, these special purpose processors do not involve the operating system in performing their tasks; as a result,
the operating system cannot communicate with these processors.
(ii) Multiprocessor Systems
Computer systems that carry more than one general-purpose processor are known as multiprocessor (or) parallel (or)
tightly coupled systems. These processors share the computer bus, memory, clock and various hardware components. These types
of systems are used because of the following advantages.
(a) Increased reliability
(b) Increased throughput
(c) Economy of scale.
(a) Increased Reliability
When multiple processors are used and tasks are thoroughly distributed among them, then even if a processor fails, the
overall system will not be halted. This is because the tasks of the failed processor can be shared among the remaining active
processors. This distribution slows down the performance of the system but prevents it from failing.
(b) Increased Throughput
With the use of multiple processors and the distribution of system functions among them, more work can be done in less
time. This is similar to multiple programmers working on a single project, which results in completing the task quickly.
However, some overhead is involved when more than one processor works on a single task. This also results
in decreasing the expected gain from additional processors, due to sharing of resources.
(c) Economy of Scale
The overall cost of a multiprocessor system is less when compared with multiple single processor systems. This is because a
single set of peripherals such as disk drives, power supplies etc. is shared among these processors instead of maintaining multiple
computers with their own disks and power supplies.
In these types of systems, a fault tolerance mechanism is used with which faults present in the system are detected, diagnosed
and corrected.
(iii) Clustered Systems
These types of systems use the concept of multiprocessor systems i.e., usage of multiple processors. The difference is
that these systems use physically separated systems which are interconnected (or) closely linked using a LAN (or) InfiniBand and
share common storage, as shown below.

Figure: The General Structure of a Clustered System (interconnected computers sharing common storage through a Storage Area Network)

These systems provide increased availability by providing continuity of services in case of failure of a few systems and by
providing a certain level of redundancy.
Each cluster node is responsible for monitoring a few other nodes, and in case of failure of one node, the node which is
monitoring the failed node takes over the storage related responsibilities of the failed node and restarts all its applications.
Similar to multiprocessor systems, clustering is also of two types, symmetric and asymmetric. An asymmetric
clustered system carries two types of machines, one which performs all the tasks/runs the applications and the other which is responsible
just for monitoring the active machine. In case of failure of the active machine, the hot-standby (or) the one which is monitoring it
becomes active.
In symmetric clustered systems, both machines are responsible for running applications and monitoring each other. As
there exist multiple machines, more than one application can be run.
These systems provide a high performance computing environment, as they can run a single application on multiple computers
concurrently. However, such an application needs to be written using a parallelization technique that allows division of a single
application into individual components that can be executed on individual computers. When each of the computers has performed its
task on the application, the results of all the computers are combined to get the final result.
Q14. Explain briefly about different types of multiprocessor systems. Also differentiate them.
Answer :
Types of Multiprocessor Systems
There are two types of multiprocessor systems. They are,
(i) Asymmetric multiprocessor systems
(ii) Symmetric multiprocessor systems.
(i) Asymmetric Multiprocessor Systems
An asymmetric multiprocessor system is a typical master-slave system in which one processor acts as the master and
all the others as slaves. Every slave processor performs its specific tasks, which are either assigned by the master processor
(or) are predefined. The master processor is responsible for controlling the system and assigning tasks to the other processors.
(ii) Symmetric Multiprocessor Systems
In a Symmetric Multiprocessor System (SMP), all the processors act as peers and hence there exist no master (or) slave
processors. Every processor is involved in performing tasks associated with the operating system. Apart from sharing physical
memory, every processor in a symmetric multiprocessor system carries its own cache memory and a set of registers.

Figure: Architecture of Symmetric Multiprocessing (CPUs A, B and C, each with its own cache memory and set of registers, share a common main memory)
A major example of a symmetric multiprocessor system is the Solaris system, which was developed by Sun Microsystems as a
commercial version of UNIX. It can support the connectivity of several dozens of processors, each running the identical operating system
i.e., Solaris.
One of the major advantages of SMP is that it can run multiple processes (equal to the number of processors) simultaneously
without any performance degradation. In order to ensure that the data is sent to the appropriate processor, the Input-Output (I/O)
must be controlled carefully. In addition to this, inefficiencies like overloading of one processor while others are idle can be
avoided by sharing certain data structures.
Difference between Asymmetric and Symmetric Multiprocessor Systems
Asymmetric Multiprocessing | Symmetric Multiprocessing
1. It is a typical master-slave system. | 1. There is no master-slave relationship among the processors.
2. The master processor assigns tasks to all the other processors. | 2. Every processor acts as a peer and hence performs all the tasks.
3. They are less complex than SMP. | 3. They are more complex than ASMP.
4. Nodes/processors communicate with each other through the master processor. | 4. Processors communicate with each other through shared memory.
5. They might carry different applications and roles on each processor. | 5. All the processors carry identical operating systems.

1.1.2 Computing Environments


Q15. Explain about traditional computing and mobile computing.
Answer :
Traditional Computing
Over the past few years, traditional computing environments have changed a lot. This is because of major improvements
in computing technologies, which include increased WAN bandwidth, thin clients, wireless and cellular networks, firewalls
and many more. Considering the computing environment of an office, the use of portals providing internet access to
internal servers is becoming common, instead of PCs connected through a network. Workstations are getting replaced with thin
clients or network computers that are easy to maintain. Portability is provided by wireless networks and cellular data networks,
with which devices can connect to private networks wirelessly.
In the computing environment of a single user (home network), slow modems are replaced with high speed network connections
with support for connecting printers, clients and servers. Security is provided by firewalls, which protect the network from
security attacks.
For some period of time, batch processing, interactive and time sharing systems were used, where a bulk of jobs arranged
in sequence with input from various data sources gets processed in batch systems, inputs from the user are processed in interactive
systems, and time-sliced processing occurs in time sharing systems. In time sharing systems, scheduling algorithms were used, which
are now applied at both user level and system level.
Mobile Computing
In the world of emerging technologies, where technology has touched new heights, the use of mobile and handheld devices has become a boom, with demand increasing day by day. These handheld devices are smaller in size, compact, and contain many application programs, called apps, for communicating with other devices. Many smart phones are a combination of wireless technology and the best features of mobile devices. Hence, these things are making mobile devices popular and wiping out the use of laptops. A mobile device supports many applications like playing media, playing online games and making conference calls. Connecting to the internet wirelessly using Wi-Fi is one of the most important features of mobile and handheld devices.

An advanced trend in these devices is the use of the Global Positioning System (GPS), which can be used to locate the current position of the device with the use of satellite signals. It also provides navigation i.e., providing ways to reach a particular target location in any part of the world, for example, locating ATMs, restaurants, popular places etc.

These devices also provide the features of an accelerometer, with which orientation, shaking, tilting etc. can be detected. This feature is mostly used for playing games without a mouse, keyboard (or) joystick. The operating systems used nowadays in these devices are Apple iOS and Google Android, which are designed for iPhones and iPads, and for smartphones and tablets, respectively.

One limitation of these devices is that they are slower in terms of speed and have less memory capacity when compared with traditional PCs.

Q16. Discuss about distributed systems, virtualization and cloud computing.

Answer :

Distributed Systems
It is a collection of independent, heterogeneous computer systems which are physically separated but are connected together via a network to share resources like files, data, devices, etc. The primary focus of a distributed system is to provide transparency while accessing a shared resource i.e., a user should not worry about the location of the data. There are various advantages of distributed systems: they help in increasing computation speed, functionality, data availability and reliability.

Some operating systems provide generic functions or methods for accessing files over a network. They manipulate the networking details present in the device driver of the network interface, whereas some other operating systems have separate functions for network access and for local access. There are various types of networks available for connecting computers together. These networks are distinguished by the protocols they use. Various protocols like ATM, TCP/IP, etc., are available, but TCP/IP is the most popular network protocol and nearly all operating systems, including Unix, Linux and Windows, support the TCP/IP protocol.

A network operating system is one which has the features of accessing files over a network and communicating with other computers by sending messages. The network operating system provides an autonomous environment for each computer, whereas a distributed operating system provides a less autonomous environment; in addition to this, it gives a generic view of remote resources. Its primary focus is on transparency, which hides the location details of a resource.

Virtualization
Virtualization is a mechanism in which multiple independent virtual operating systems are made to run on a single physical computer. It is useful for maintaining the return on investment in the computers.

The term virtualization was coined during the 1960s with reference to a virtual machine, which is sometimes referred to as a pseudo-machine. Virtual machines were created and managed; this process is often referred to as 'platform virtualization'.

A software component called a control program is used to perform the platform virtualization. This program generates a simulated environment called a virtual computer, which makes the device utilize the hosted software of a particular virtual environment, sometimes known as guest software.

The guest software often behaves as a complete operating system and runs as if it had been installed on a stand-alone computer.

Often, multiple virtual machines can be simulated on one physical computer. The number of virtual machines to be used depends on the physical hardware resources of the host device. Because the guest software may need to perform its function by accessing particular peripheral devices, the virtualized platform supports this access i.e., it supports guest interfaces to those devices. Examples of these devices may include disk drives, DVD and CD-ROM drives and network interface cards.

The technology of virtualization is an approach that can be used to reduce hardware acquisition costs as well as maintenance costs.
Cloud Computing
Cloud computing can be viewed as a model for distributing information technology resources, in order to gain access to resources
from the Internet without depending on a direct connection to a server. The model can easily retrieve resources via web-based
tools and applications. Here, the information which is to be accessed is stored in clouds, and it gives the privilege to the user
to access the information whenever and from wherever they want, thereby allowing the users to work remotely. In general,
cloud computing is nothing but the use of computing resources such as hardware and software which are distributed as a service
across the network. It centralises the data storage, processing and bandwidth, which in turn provides efficient computing
to the users.
An example of cloud computing is Amazon EC2 (Elastic Compute Cloud), which offers tremendous facilities by providing
huge storage capacity (in petabytes) along with many virtual machines and servers to users through the Internet. These
services are provided to different users based on the type of cloud as follows,
 Services that are available to all the users over the internet are provided through a public cloud.
 Services that are available to an organization are provided through a private cloud.
 A cloud including both public and private cloud services is provided through a hybrid cloud.
Some other cloud models, including SaaS, PaaS and IaaS, are also used based on requirements.
(a) SaaS (Software-as-a-Service)
It is one of the forms of cloud computing that supports a multiuser architecture in order to deliver an application via the browser to
thousands of users. In contrast to other managed services, SaaS emphasizes mostly the end users, in order to fulfil their requirements.
Moreover, in SaaS computing the customers do not have to invest in any servers or in software licensing, as these are all taken
care of by the service providers. These service providers experience low cost, with just one product relative to the traditional hosting
model.
(b) PaaS (Platform-as-a-Service)
PaaS is a web service which is closely connected to SaaS and is considered as a distinct form of SaaS. Unlike SaaS, it
provides the user only a platform for work but not applications to work with. In order to use the functionality over the internet, these
services provide only application program interfaces rather than a large number of applications.
(c) IaaS (Infrastructure-as-a-Service)
IaaS provides infrastructure i.e., servers and storage to be available over the internet.
Q17. Distinguish between the client-server and peer-to-peer models of distributed systems.
Answer :
Client-Server Model | Peer-to-Peer Model
1. It uses a centralized form of networking architecture. | It uses a decentralized form of networking architecture.
2. In this model, certain nodes in the network are dedicated to serving some specific services. The node that sends a request is called the 'client' and the node that provides services for the request is called the 'server'. | In this model, all the nodes in the network are considered as peers, and therefore any node can act as a client or a server depending on whether it requests a service or provides a service to a request.
3. The specific nodes perform their respective tasks i.e., clients request resources and servers provide resources. | The tasks and workload of the network are divided and shared between the various nodes (peers) of the network.
4. Its work is based on a "make a request and the request will be granted" type of relationship. | Its work is based on an "everyone pulls their own weight" type of relationship.
5. Clients are the respective nodes that do not share their information or data but work on their own and access data or resources by sending a request to the server. | Peers have the same privileges and rights on various data sources and devices, and they can communicate with each other directly without the need of any intermediary.
6. The client-server model is mostly used in big corporations or organizations with high security data, e-mail, banking services etc. | The peer-to-peer model is mostly used in small businesses, by home users and in peer-to-peer file sharing programs like Napster, BitTorrent etc.
7. Consider an example of a client-server network to which computers P, Q, R and S are connected. P is the server and Q, R and S are the clients. Suppose that a printer is attached to P. If Q needs to print a file, it will send a request to P; thus P will respond to Q by printing the file. If R sends a request asking for a file to access, P will check R's authentication for the data access; if it finds R unauthorized, it will reject the request and respond to R by turning down its request. | Consider an example of a peer-to-peer network to which computers P, Q, R and S are connected. If P needs a file from R, it sends a request to R. R accepts the request and sends the file to P if it finds it. During this process Q and S are ignored, but they function normally. Suppose all the computers are connected to a network printer and P and Q each send a request to print; then the request that reached first will be granted first, and the printer later serves the next request.
8. It is highly expensive to set up and maintain. | It is cheaper than the client-server model.
9. The workload of the server increases with the addition of more clients, thus causing low network speed. | Its efficiency increases with the addition of new members to the system.
10. It is not a robust model. | It is a very robust model.
11. It provides security to the network. | It does not provide security.

Q18. Write short notes on Real-time embedded systems.


Answer :
Embedded systems are small computers having a limited set of hardware: a small processor capable of processing a
limited set of instructions (often called an Application Specific Integrated Circuit (ASIC)), a small memory (RAM or EPROM)
and I/O devices. These systems usually do specific tasks. Examples are microwave ovens, robots in a manufacturing unit, the latest
automotive engines, etc.
A variety of embedded systems exists, of which some are computers with standard operating systems, some have
dedicated programs embedded in their limited memories, and some don't even have any software, using hardware (an ASIC) to do
the processing. Nearly all embedded systems use a real-time operating system, because they are used as control devices and have rigid
time requirements. Sensors are used to input data such as temperature, air pressure, etc., from the environment to the embedded
system, where that data is analyzed and several controls are adjusted by the embedded system itself to control the situation of
the system. A few examples are home appliance controllers, weapon controllers, boiler temperature controllers, fuel injection systems
in automobile engines, etc. A real-time system requires that processing be done within fixed time constraints.

1.2 Operating System Structures

1.2.1 Operating System Services
Q19. Define operating system. What are the services of an operating system? Explain.
Answer : Model Paper-III, Q9(a)

Operating System
An operating system is a program or a collection of programs that controls the computer hardware and acts as an intermediary between the user and the hardware. It provides a platform for application programs to run on. It has the following objectives,
(i) Efficiency
An operating system must be capable of managing all the resources present in the system.
(ii) Convenience
An operating system should provide an environment that is simple and easy to use.
(iii) Ability to Evolve
An operating system should be developed in such a way that it provides flexibility and maintainability. Hence, changes can be done easily.

User <-> Operating system <-> Computer hardware
Figure: Operating System as an Interface

Examples
Windows, Unix, MS-DOS.
Services of Operating System
The following are the services provided by an operating system,
(i) Program Creation and Execution
The operating system should support various utilities such as editors, compilers, debuggers etc., in order to give programmers the facility to write and execute their programs.
(ii) User Interface
The operating system should provide an interface through which a user can interact. Most of the earlier operating systems provided a Command Line Interface (CLI), which uses text commands; all the users are supposed to type their commands through the keyboard. Some systems support a batch interface, which accepts a file containing a set of commands and executes them. Nowadays a Graphical User Interface (GUI) is used, where a window displays a list of commands to be chosen by the user through an input or pointing device.
(iii) I/O Device Support
There are numerous I/O devices. Each of them has its own set of instructions and control signals which are used during its operation. The operating system should take care of all these internal device details and should provide users with simple read( ) and write( ) functions for utilizing those devices.
(iv) File System Management
User data is usually stored in files. An operating system should manage all these files and should provide functions to perform various operations on them, such as create, open, read, write, close, search (by name), delete etc. Additionally, it should protect files pertaining to different users from any unauthorized access.
(v) Interprocess Communication
There are several instances when a process may require to communicate with other processes, often by exchanging data among themselves. This interprocess communication is employed by the operating system using techniques like message passing and shared memory.
(vi) Resource Allocation
In a system, multiple programs may be executed concurrently. It is the responsibility of the operating system to allocate resources (such as CPU time, main memory, files, etc.) to them. For example, various scheduling algorithms are used for allocating CPU time and resources to processes.
(vii) Error Detection
The operating system is responsible for keeping track of various errors that may occur in the CPU, memory, I/O devices, user programs, etc. Whenever errors occur, the operating system takes appropriate steps and may provide debugging facilities.
(viii) Accounting
It is the process of monitoring user activities, to keep track of which user has accessed which resources and the number of times the system is being accessed. This recorded statistical information can be used to improve system performance by tracing out which resources are in demand and by increasing the instances of those resources.
(ix) Protection and Security
Modern computer systems allow multiple users to execute their multiple processes concurrently in the system. These multiple processes may access data simultaneously, which has to be regulated so that only valid users are given access to the data. It is the job of the operating system to apply protection and security mechanisms to the system.

1.2.2 User Interface for Operating System
Q20. Write about user interface for operating system.
Answer :
There are many ways through which a user can interact with the operating system. The two fundamental ways among these are,
(i) Command line interface
(ii) Graphical user interface.
(i) Command Line Interface
Command line interface, which is popularly known as the command interpreter, makes use of various commands with which a user can directly interact with the operating system. It can be present as an in-built program in the kernel of the operating system (or) it can be present as a special program in operating systems like Windows, UNIX etc. There can be more than one interpreter present in a single system, in the form of shells such as the C shell, Korn shell, Bourne shell etc. Apart from these, there can be other shells, like ones bought from a third party.
The major responsibility of the command interpreter is to execute the commands, which can include copy, print, create, delete etc., with respect to file management. The implementation of commands depends on the following two approaches,
• The command can be implemented directly if the command interpreter solely carries the code of its execution.
• The command interpreter does not carry any code and hence it does not know how to execute the command. In this case, the implementation is carried out with the help of system programs.
(ii) Graphical User Interface (GUI)
1. Advanced Visual Presentation
The visual presentation gives an idea about the content to be seen on the interface by the users. The graphical system is advanced by adding the following features,
• Possibility of displaying more than 16 million colours
• Animation and the presentation of photographs and motion videos.
The graphical system provides to its user several useful, simple, meaningful, obvious visual elements listed below,
(i) Windows (primary, secondary or dialog boxes)
(ii) Menus (menu bar, pull-down, pop-up, cascading)
(iii) Icons, which denote files or programs.
(iv) Assorted screen-based controls (text boxes, list boxes, combination boxes, settings, scroll bars and buttons).
(v) A mouse pointer and cursor.
2. Interaction using Pick and Click
'Pick' defines the motor activity of a user to pick out an element of a graphical screen on which an action is to be taken. 'Click' represents the signal to carry out an action.
• The pick-and-click technique is carried out with the help of the mouse and its buttons.
• The mouse pointer is taken to a specific element by the user, which accounts for the pick, and the action is signaled by a click.
• The keyboard is another technique for carrying out selection actions.
3. Limited Interface Options
The user has a restricted group of choices from the screen content, or from information obtained as a result of the screen content. Nothing less and nothing more is available. WYSIWYG (What You See Is What You Get) is the term associated with limited interface options.
4. Visualization
The term visualization refers to a learning method which permits users to understand complex content, either of a voluminous or too abstract type. The system functions are depicted by modifying the representation of entities. Visualization is enhanced by displaying specialized graphical images. The aim is not necessarily to generate a real graphical image but to give an image that expresses the most useful information.
Therefore, we can increase production, work with rapid and exact data, and grow knowledge with the help of proper visualizations.
5. Behaviour of Objects
Objects and actions constitute a graphical system. Objects are visible elements on the screen viewed by the users. Objects are manipulated as a single unit. The focus of users must be kept on objects rather than actions in case of a well-designed system. Objects are made up of 'sub-objects'.
Example
A document is an object, whereas paragraph, sentence, word and letter are its sub-objects. The objects are divided into three classes by IBM's System Application Architecture Common User Access Advanced Interface Design Reference (SAA CUA),
(a) Data objects
(b) Container objects
(c) Device objects.
(a) Data Objects
These objects present information i.e., text or graphics that appears in the body of the screen. A data object is a screen-based control.
(b) Container Objects
These objects hold other objects. Two or more related objects are grouped by container objects for simple access and retrieval.
Types of Container Objects
• Workplace
The desktop is the workplace. All objects are stored on the desktop.
• Folders
These kinds of container objects provide storage of objects for a longer time.
• Workareas
Multiple objects presently being operated upon are stored in these temporary storage folders.
(c) Device Objects
Printers or trash baskets denote physical objects in the real world. Device objects consist of other objects for acting upon.
Example
A file's contents are printed by placing it in a printer. On the basis of the relationships that exist between objects, object features can be observed or obscured.
Objects (files (or) directories) are represented in terms of various images and icons, which are used with the mouse pointer for performing various operations. This is considered a user friendly method of interfacing with the operating system.
This type of interface was first developed by Xerox PARC in the form of the Xerox Alto computer in the year 1973. Such interfaces became popular with the arrival of Apple's Macintosh computers in the year 1984 and later got enhanced functionality with the advent of Windows operating systems.
Modern versions of the UNIX operating system include a number of GUI features such as CDE (Common Desktop Environment) and X-Windows along with the traditional command line interface.
1.2.3 system calls, types of system calls
Q21. With a neat diagram explain the system call sequence.
Answer :
System Calls
The operating system provides a wide range of system services and functionalities. These services can be accessed using system calls. The system calls act as the interface between user applications and the operating system (services). They are available as built-in functions or routines in almost all high-level languages such as C, C++, etc.
(A user application running in the user area invokes fork( ); the call passes through the system call interface into the OS kernel area, where the implementation of the fork system call executes and returns to the application.)
Figure: User Application Invoking System Call
Consider an example of copying the contents of one file to another. To handle this task, several system calls are used, from the time of prompting users to enter the file names of the source and destination files, to copying the contents and closing these files.
• In GUI systems, users have the facility of selecting the source and destination files with a mouse. This can be done using I/O system calls.
• The next step is to open the specified files. It is done using an open( ) system call. Errors may arise if the specified files are not present or fail to open due to access restrictions (security). In that case, the system call is aborted by the system.
• When both the files are opened, a read( ) system call is used to read the contents of the source file and a write( ) system call is used to write those contents to the destination file. This is repeated till the end of file (eof) is reached.
• Finally, both the files are closed using another system call, close( ), and probably the user will be notified by displaying a message on screen using another system call.
However, developers of application programs need not concern themselves with all these system calls. They can make use of an Application Program Interface (API), which gives them a set of built-in functions along with their parameters and return values. Examples of such APIs are the Win32 API for Windows based systems, the POSIX API for Linux, Unix and Mac OS based systems, and the Java API for JVM (Java Virtual Machine) compatible programs. Different APIs, though performing similar operations, will have different function names.
The programming languages provide a system call interface which acts as a link between application programs and the system calls (of the operating system). This system call interface maps the function calls of the API to the necessary system call(s) of the operating system. It uses a table for this mapping, which maintains the numbers and addresses associated with system calls. After mapping, it invokes the respective system call. Hence, API programmers are unaware of the complexity of OS system calls.
Parameters are passed to the operating system using the following three methods,
1. Registers
The processor registers can be used to store the parameters; the system calls then receive those parameters by reading the register values.
2. Block of Memory
If the number of parameters is more than the available processor registers, then they are stored in a block of memory and the address of that block is stored in a processor register.
3. Stack
Parameters can also be pushed on a stack, and system calls can receive them by popping them from the stack. The advantage of this approach is that it doesn't limit the number of parameters being passed.
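To make the copy example above concrete, the following is a minimal sketch of such a copy program written directly against the POSIX open( )/read( )/write( )/close( ) system calls. The file names are hard-coded purely for illustration (a real program would prompt the user as described above), and error handling is kept to a minimum,
/* copy.c : minimal sketch of file copying using POSIX system calls */
#include <fcntl.h>    /* open( ) and the O_* flags */
#include <unistd.h>   /* read( ), write( ), close( ) */
#include <stdio.h>    /* perror( ) */
int main(void)
{
    char buf[4096];
    ssize_t n;
    /* open the source file for reading; the call fails if the file
       does not exist or access is restricted, as described above */
    int src = open("source.txt", O_RDONLY);
    if (src < 0) { perror("open source"); return 1; }
    /* create (or truncate) the destination file for writing */
    int dst = open("dest.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (dst < 0) { perror("open dest"); close(src); return 1; }
    /* read from the source and write to the destination until eof */
    while ((n = read(src, buf, sizeof buf)) > 0)
        write(dst, buf, (size_t)n);
    /* finally, close both files */
    close(src);
    close(dst);
    return 0;
}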
Q22. Explain different types of system calls in operating systems.
Answer :
System calls are categorized into the following five groups based on their functionality. They are as follows,
1. Process Control System Calls
A process refers to a program under execution. Any operation in the computer system is executed in the form of a process. The system calls in this group are used to create processes, load, execute and terminate them, and also to end or abort processes if a runtime error occurs. The following are a few process control system calls,
• Create process, terminate process
• Wait for event, wait for time, signal
• Load process, execute process
• Allocate memory and free memory
• Get attributes and set attributes
• Abort, end.
2. File Management System Calls
This category contains system calls to create, manage, read and write files. Some of the system calls in this group are,
• Create and Delete
Before the file is used for read or write, we must be able to create the file. This system call usually takes the name of the file as a parameter. The 'delete' system call removes the file from the memory.
• Open file, close file
• Read file, write file
• Get file attributes and set file attributes.
3. System Information Management System Calls
This category contains system calls used to manage system information like the system date, time, number of current users, operating system version, amount of free memory available etc. Some of these calls are,
• Get system information and set system information.
• Get current date/time and set current date/time.
• Get device attributes and set device attributes.
• Get process attributes and set process attributes.
4. Device Management System Calls
A computer system comprises several devices like main memory, disk drives etc. These are called resources of the system and are under the control of the operating system. A process that needs resources should use system calls to access these resources from the operating system. A few device management system calls are,
• Request device from operating system and release device.
• Read from device, write to device
• Get and set device attributes
• Attach device logically and detach device logically.
5. Communications System Calls
These system calls enable processes to exchange information. Communication can be performed in two ways,
(i) Message Passing
In this method, messages are exchanged between processes either through a mailbox or directly between processes. But before sending, a message needs to be created.
(ii) Shared Memory
The communicating processes use a memory area called shared memory. A process that wants to send a message writes into the shared memory, and a process that wants to receive a message reads from the shared memory. Shared memory is created using "shared memory create" and "shared memory attach" system calls.
Some of the communication system calls are,
• Create communication connection and delete communication connection.
• Shared memory create and shared memory attach.
• Send message, receive message.
• Send status information.
1.2.4 operating system structure
Q23. Give a brief note on operating system structure.
Answer :
An operating system provides a platform for application programs to run on. This platform is provided by considering some important internal aspects of the operating system, which include,
1. Multiprogramming
2. Time sharing
3. CPU scheduling
4. Virtual memory.
1. Multiprogramming
In mono-programming, memory contains only one program at any point of time, whereas in multiprogramming, memory contains more than one user program.
In case of mono-programming, when the CPU is executing the program and an I/O operation is encountered, the program goes to the I/O devices; during that time the CPU sits idle. Thus, in mono-programming the CPU is not effectively used i.e., CPU utilization is poor.
However, in multiprogramming, when one user program performs I/O operations, the CPU switches to the next user program. Thus, the CPU is kept busy at all times.
A single user cannot keep CPU busy at all times. Hence, multiprogramming increases CPU utilization by organizing jobs
(programs), so that CPU is busy at all times by executing some user program or the other.
The idea in multiprogramming is as follows,
The OS picks one of the jobs from the job pool and sends that job to the CPU. When an I/O operation is encountered in that job, the OS allocates I/O devices for that job and allocates the CPU to the next job in the job pool.
However, in mono-programming, CPU sits idle while I/O operation is being performed.
In multiprogramming most of the time CPU is busy. Advantages of multiprogramming are,
1. CPU utilization is high and
2. Higher job throughput.
Throughput is the amount of work done in a given time interval,
Throughput = Number of jobs completed / Time interval
2. Time-sharing
Time sharing is considered a logical extension of multiprogramming systems. In a time sharing system, each user has a separate program in memory. Each program in a time sharing system is given a certain time slot i.e., the operating system allocates the CPU to a process for a given time period. This time period is known as the "time quantum" or "time slice". In the Unix OS, the time slice is 1 sec i.e., the CPU is allocated to every program for one second. Once the time quantum is completed, the CPU is taken away from the program and given to the next waiting program in the job pool. Suppose a program starts an I/O operation before its 1 sec time quantum expires; then the program on its own releases the CPU and performs the I/O operation. Thus, when a program starts executing, it executes only for the time quantum period before it finishes or needs to perform an I/O operation. In this way, in time sharing, many users share the CPU simultaneously. The CPU switches rapidly from one user to another, giving each user the impression of having his own CPU, whereas actually only one CPU is shared among many users. The CPU is distributed among all programs.
Example
Job    CPU burst
1      5 sec
2      1 sec
3      0.5 sec
4      3 sec
Let the time quantum be 1 sec; then the CPU is allocated to the jobs as follows.
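The resulting allocation can be worked out as follows (a worked sketch, assuming the ready queue is serviced in the order 1, 2, 3, 4 and that a job releases the CPU early when it completes or begins I/O):
0 to 1 sec: Job 1 (4 sec of its burst remaining)
1 to 2 sec: Job 2 (completes)
2 to 2.5 sec: Job 3 (completes, releasing the CPU after only 0.5 sec)
2.5 to 3.5 sec: Job 4 (2 sec remaining)
3.5 to 4.5 sec: Job 1 (3 sec remaining)
4.5 to 5.5 sec: Job 4 (1 sec remaining)
5.5 to 6.5 sec: Job 1 (2 sec remaining)
6.5 to 7.5 sec: Job 4 (completes)
7.5 to 8.5 sec: Job 1 (1 sec remaining)
8.5 to 9.5 sec: Job 1 (completes)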
When more than one program exists, multiprogramming and time sharing, along with CPU scheduling, make the CPU available to every single user for a portion of time.
As the CPU is kept busy all the time, several tasks must be kept ready in memory. In a situation where more jobs are ready to be brought into memory than the memory can hold, a decision must be made to select among them. This decision making is known as "job scheduling".
3. CPU Scheduling
When there are multiple jobs in memory ready to be executed, a decision is made to select the appropriate job among them. This decision making is known as CPU scheduling.
4. Virtual Memory
Virtual memory extends the concept of the swapping technique. Swapping is responsible for moving whole processes in and out of memory from the disk, whereas with virtual memory a process which is not entirely present in memory can also be executed.
Q24. Discuss various approaches of designing an operating system.
Answer : Model Paper-II, Q9(a)
An operating system must be designed and organized carefully for better performance and proper functionality, in such a way that it can be modified easily in the future. Usually, it is preferable to have several small components of a system instead of having a single or monolithic system. Each component should have a well-defined job, and all the components are interconnected to form a single operating system. The following are the various approaches of operating system design,
1. Simple structure
2. Layered approach
3. Microkernels
4. Modules-based approach.
1. Simple Structure
There are several commercial operating systems which have simple but not well-defined structures. Usually, these systems were developed as small, simple systems having limited functionalities, but their popularity grew beyond their original scope. One such operating system is MS-DOS, which was developed keeping in mind that it should give more functionality within the limited space. Its structure does not carry a careful division of its modules.
MS-DOS has always experienced vulnerability towards threats and malicious programs, which can cause damage to the entire system because of the improper separation between the interfaces and their functionalities. Due to this lack of security and protection, any application can easily gain access to the I/O operations and hardware of the system without any restrictions. In fact, the hardware (i.e., the 8088 processor) of that period also provides no hardware protection.
The earlier versions of Unix also fall in this category. Unix divides the system into two parts, the kernel (which is also called the heart of Unix) and the system programs. The kernel contains several device drivers which interact with the system directly. Later, problems occurred in developing the kernel, because it became larger and harder to implement as it took on much more functionality.
(Application/user programs sit above the resident system programs, with the MS-DOS device drivers and the BIOS device drivers beneath them.)
Figure (1): Structure of MS-DOS
(Users and applications sit above the compilers, shells and libraries; beneath these lies the system call interface; the kernel provides signals and terminal handling, the file system and block I/O, CPU scheduling, swapping, paging and virtual memory; below the kernel is its interface to the hardware, with drivers for terminals, disks and physical memory.)
Figure (2): Structure of UNIX
2. Layered Approach
In the layered approach, an operating system is divided into multiple layers or levels. The highest layer (layer N) corresponds to users or application programs and the lowest or bottom layer (i.e., layer 0) corresponds to hardware.
(The layers are stacked from layer 0, the hardware, at the bottom, up through layer 1 to layer N at the top.)
Figure (3): A Layered Operating System
Each layer consists of data structures and operations which are invoked by its upper layers. A lower layer provides some services to the upper layer. The advantage of this approach is that construction and debugging become simple. As we know, the first layer (i.e., layer 0) is nothing but hardware, and if we assume that the hardware is running correctly, then its services can be used by layer 1. Now layer 1 is debugged, and if any bug is found it is fixed. The advantage is that the errors can be fixed easily, as they lie in that particular layer. Each higher layer simply uses the services of its lower layer without worrying about how these services are implemented (by the lower layer).
The limitation of this approach is that careful pre-planning is needed, because a particular layer can use only lower-level layers. For example, the device driver of the hard disk should be at a lower layer than the memory management layer, because memory management has to utilize the services provided by the device driver of the hard disk.
Another major problem in the layered approach is its inefficiency, as it increases the overall burden on an operating system. Consider an example where a user executes an I/O system call: initially it is caught in the I/O layer, which calls the memory management layer, which ultimately calls the CPU scheduler layer. Each of these calls adds to the number of internal calls and to the processing time.
3. Microkernels
The microkernel approach is used to overcome the limitations of the traditional Unix kernel. The kernel in Unix was so large that it itself became a monolithic structure which is difficult to maintain. The microkernel approach removes all the unnecessary and non-essential components from the kernel, thereby decreasing its size. These non-essential components can be implemented outside the kernel as application-level or system programs. The microkernel retains only minimal process and memory management. Its primary function is to provide communication between user programs and the various services running inside user space. For example, a client program and a file server can interact indirectly by sending and receiving messages via the microkernel.
The advantage is that it provides flexibility and extensibility. Any new service can be added to user space without modifying the kernel. It also increases portability of the operating system from one machine to another. It provides security and protection, because most of the programs run at user level instead of as kernel processes. Examples of microkernel operating systems are Tru64 UNIX, QNX, etc.
4. Module-based Approach
It is the best approach of operating system design, which uses object-oriented programming techniques. In this approach, various components of the operating system are implemented as dynamically loadable modules. The kernel consists of core components and dynamic links to load those modules either at runtime or at compile time.
Operating systems such as newer versions of Unix, Solaris, Linux and Mac OS X all use the module-based approach. The Solaris operating system has seven types of loadable kernel modules, as shown in figure (4).
(The core kernel is surrounded by seven types of loadable modules: device and bus drivers, CPU scheduling classes, file systems, loadable system calls, executable formats, STREAMS modules and miscellaneous modules.)
Figure (4): Loadable Kernel Modules of Solaris Operating System
The kernel consists of the core and other services, like device drivers for certain hardware, support for various file systems etc. These can be added to the kernel dynamically when needed. The approach is similar to the layered approach but more flexible when compared to it. The core services include the code for communication among the modules and for loading modules.
Q25. In what ways is the modular kernel approach similar to the layered approach? In what ways does it
differ from the layered approach?
Answer :
Similarities between Modular-kernel Approach and Layered Approach
The basic similarity between the modular kernel approach and the layered approach is that in both approaches the subsystems interact with each other using interfaces that are typically narrow. Moreover, in both approaches, modifications done on one part do not have any effect on other parts. In other words, it can be said that the parts are loosely coupled.
Differences between a Modular-kernel Approach and Layered Approach
(a) Layered approach: the operating system is divided into different layers. Modular-kernel approach: the operating system is divided into system and user-level programs.
(b) Layered approach: imposes a strict ordering of sub-systems, such that each sub-system must perform its operation independently, without the upper-layer sub-system. Modular-kernel approach: there is no such restriction; a lower-layer sub-system can invoke its operations by freely interacting with an upper-layer sub-system.
(c) Layered approach: there is relatively more overhead in invoking a method present in the lower part of the kernel. Modular-kernel approach: there is less overhead in invoking a method present in the lower part of the kernel.
(d) Layered approach: it is not capable of handling lower-level communication and hardware interrupts. Modular-kernel approach: it is capable of handling lower-level communication and hardware interrupts.
(e) Layered approach: it does not provide services for message passing and process scheduling. Modular-kernel approach: it provides services for message passing and process scheduling.
1.3 process management
1.3.1 process concept
Q26. Define the following,
(a) Process
(b) Process control block
(c) Process state diagram.
Answer : Model Paper-II, Q9(b)
(a) Process
Process is the fundamental concept of operating system structure; it is defined as a program under execution. Alternatively, it can also be defined as an active entity that can be assigned to a processor for execution. A process is a dynamic object that resides in main memory, and it includes the current values of the program counter and the processor's registers. Generally, every process contains the following two elements,
(i) Program code
(ii) Set of data.
(b) Process Control Block
In a multiprogramming system, it is necessary to get the information about each process that is being executed. All this
information is available in the process control block.
The process control block information is classified into the following three groups,
(i) Process identification
(ii) Process state information
(iii) Process control information.
(The process control block consists of three parts: process identification, process state information and process control information.)
Figure: Structure of PCB
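In C-like terms, such a control block might be sketched as the following structure. This is a simplified illustration only; every field and type name here is invented for the example and is not taken from any real kernel,
/* Illustrative sketch of a process control block (field names invented) */
struct pcb {
    /* process identification */
    int pid;                      /* process identifier */
    int parent_pid;               /* parent process identifier */
    int uid;                      /* user identifier */
    /* process state information */
    unsigned long registers[16];  /* user-visible registers */
    unsigned long pc;             /* program counter */
    unsigned long status;         /* condition codes, interrupt flags */
    unsigned long stack_ptr;      /* stack pointer */
    /* process control information */
    int state;                    /* new, ready, running, blocked or exit */
    int priority;                 /* scheduling priority */
    struct pcb *next;             /* link used by the scheduling queues */
    void *page_table;             /* memory-management information */
    int open_files[16];           /* resources owned and utilized */
};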
The typical elements of process control block are,
1. Identifiers
(Process identification)
2. User-visible registers
3. Control and status registers
(Process state information)
4. Stack pointers
5. Scheduling and state information
6. Data structuring
7. Interprocess communication (Process control information)
8. Process privileges
9. Memory management
10. Resource ownership and utilization.
1. Identifiers
The following identifiers are stored in the process control block,
(i) Process identifier
(ii) Parent process identifier
(iii) User identifier.
2. User-visible Registers
These registers are used in performing mathematical and other operations.
3. Control and Status Registers
These registers control the operation of the processor.
(i) Program Counter
It holds the address of the next instruction needed to be fetched for execution.
(ii) Condition Codes
It holds the output of the most recent arithmetic or logical operation.
(iii) Status Information
It holds the interrupt enabled/disabled flags.
4. Stack Pointers
These pointers point to the top of the stack, where parameters and calling addresses for various procedures and system calls are stored.
5. Scheduling and State Information
It maintains the process state information. The operating system is responsible for gathering this information.
Following items comprise the state information,
(i) Process state
(ii) Priority
(iii) Scheduling related information
(iv) Event.
6. Data Structuring
It specifies the relationship among the processes.
7. Interprocess Communication
It allows two different processes to communicate with each other using flags, signals and messages, and this information is maintained in the PCB.
8. Process Privileges
Processes are granted privileges in terms of the memory they may access and the types of instructions they may execute; system utilities and services make use of these privileges.
9. Memory Management
It refers to the pointers and page tables that are assigned to the processes.
10. Resource Ownership and Utilization
This information specifies the resources which are being controlled and utilized by the processes. For example, opened files.
(c) Process State Diagram
For answer refer Unit-I, Page No. 20, Q.No. 27.
Q27. Explain the process state diagram.
Answer :
Process State
A process being executed undergoes many states as per the demand, and the execution of the process is controlled by the operating system. The operating system is also responsible for allocating resources to the processes. In order to explain the behaviour of processes during their execution, process state transition models are used. The figure described below depicts a five-state process model: a new process is admitted to the Ready state; a ready process is dispatched to the Running state; a running process is released to the Exit state on completion; a running process returns to Ready on a time-out; a running process moves to Blocked while waiting for an event; and a blocked process returns to Ready when the event occurs.
Figure: Five State Process Transition Model
When the number of processes to be executed is large, those processes are moved to a queue, and the queue is operated in a Round Robin fashion. A process in the queue can be in any one of the following states,
1. Running
A process is said to be in the running state if it is being executed by the processor. In a uniprocessor system, only one process is executed by the CPU at a time. In case of multiprocessor systems, many processes can exist in a running state, and the operating system has to keep track of all of them.
2. Ready
A process in the ready state is waiting for an opportunity to be executed. All the ready processes are placed in the ready queue.
3. Blocked
In the blocked state, the process waits for the occurrence of an event in order to be executed. Until that event is completed, it cannot proceed further.
4. New
A newly created process is one which has not even been loaded in the main memory, though its associated PCB has been created.
5. Exit
A process is said to be in the exit state if it is aborted or halted due to some reason. An exit process must be freed from the pool of executable processes by the operating system.
As shown in the diagram, processes can change their states according to the situations detailed below,
(i) New-Ready
When the operating system becomes capable of taking an additional process, it moves a process from the new to the ready state.
(ii) Ready-Running
When a new process has to be selected for running, the operating system selects one of the processes in the ready state. This is done either by using the 'scheduler' or the 'dispatcher' (operating system processes).
(iii) Running-Exit
If the process which is currently running either completes or aborts, that running process must be moved to the exit state by the operating system.
(iv) Running-Ready
This transition occurs as per the pre-emption rules of the operating system. This is allowed in order to maintain time discipline without any interruption of execution.
(v) Running-Blocked
If a running process needs some other event to occur so that it can proceed with its execution in the running state, it waits for some time. As the process is currently waiting, it is pushed from the running state to the blocked state by the operating system. As soon as the event occurs for which the process is waiting, it is again moved back from blocked to ready state.
(vi) Blocked-Ready
As soon as the event occurs for which the process is waiting, the process is pushed to the ready state from the blocked state.
(vii) Ready-Exit
Some processes stay in the queue just because their parent process has not terminated, and such a process terminates when its parent process gets terminated; thus, a process can move from the ready state to the exit state. This is similar to the transition of a process from the blocked to the exit state.
Q28. Define thread. What are the advantages of threads?
Answer :
Thread
A thread can be thought of as a basic unit of CPU utilization. It is also called a light-weight process. Multiple threads can be created by a particular process. Each thread shares the code, data segments and open files of that process with the other threads. However, each of them has its own separate register-set values and stacks.
Consider an example of a pagemaker or any word processing application. It has two important threads executing in it, one to read the keystrokes from the keyboard and the other a spelling and grammar checking thread running in the background. The figure referred to below shows a process having two threads in it: both threads share the process's code segment, data segment and open files, while each thread has its own register set and stack.
Figure: Word Processor Application's Process
Advantages
1. Creating a thread is ten times faster than creating a process i.e., it takes less time to create a new thread within an already existing process than to create a new process.
2. Thread termination is faster than process termination.
3. Switching between the threads of a single process is faster, as no memory mapping has to be set up and the memory and address translation caches need not be invalidated.
4. Threads are more efficient than processes, as they provide the feature of shared memory, thereby not requiring any system calls for inter-thread communication. So, threads are more suitable for parallel activities that are tightly coupled and use the same data structures.
5. Thread creation and destruction are cheaper, as no allocation and deallocation of new address spaces or other process resources are required.
1.3.2 process scheduling
Q29. What is process scheduling? Explain different types of process scheduler.
Answer :
Process Scheduling
In a multiprogramming system, there are several processes running in the system. These processes are not simultaneously executed by the CPU; instead, scheduling is done to choose a particular process for execution. It is done using a program called the process scheduler.
Scheduling Queues
A process goes through several queues throughout its life cycle. Whenever a process enters a system, it is put into a "job queue". From here, if all the resources required by the process are available, it is put into the "ready queue", where processes compete for obtaining the CPU.
Whenever the executing process is interrupted by an I/O request, that process is placed into a "device queue". Each device like hard disk, floppy disk etc., has its own device queue. Devices take processes from their queue and serve them.
There are a number of reasons why a process might stop executing and get placed in the ready queue or in some device queue. Some events that interrupt a process are,
• An I/O request occurs in the process; the process puts itself in the I/O queue.
• If the time slice of that process has expired, then it is put in the ready queue.
• If a process creates sub processes, then those sub processes are first executed and, after they have terminated, the parent process is again placed in the ready queue.
• If any I/O device interrupts the CPU, then the CPU puts the current process in the ready queue and serves that interrupt by executing its respective Interrupt Service Routine (ISR).
Figure: Various Scheduling Queues
Schedulers
Scheduling is defined as the activity of deciding about the resources to be given to the processes on their request.
A scheduler is defined as a program which selects a user program from disk and allocates the CPU to that program. A process migrates between the various scheduling queues throughout its lifetime. The operating system must select (for scheduling purposes) processes from the queue in some fashion. The selection process is carried out by the appropriate scheduler. There are three types of schedulers. They are as follows,
(i) Long Term Scheduler (LTS)
(ii) Short Term Scheduler (STS)
(iii) Medium Term Scheduler (MTS).
(i) Long Term Scheduler (LTS)
The LTS is used to decide which processes are to be selected for processing. The long term scheduler is defined as a program (part of the operating system) which selects a job from the disk and transfers it into main memory. If the number of ready processes in the ready queue becomes very high, the overhead on the operating system for maintaining long lists, context switching and dispatching crosses the limit. Therefore, it is useful to let in only a limited number of processes into the ready queue to compete for the CPU. The long term scheduler manages this.
(ii) Short Term Scheduler (STS)
The short term scheduler is defined as a program (part of the operating system) that selects among the processes that are ready to execute and allocates the CPU to one of them. It decides which of the ready processes is to be scheduled or dispatched next.
The difference between the LTS and the STS is that the LTS is called less frequently, whereas the STS is called more frequently. The LTS must select a program from disk into main memory only once i.e., when the program is executed. However, the STS must select a job from the ready queue quite often (every 1 second in the Unix operating system) i.e., every second the STS is called, it selects one PCB from the ready queue and gives the CPU to that job. After the second is completed, the STS is called again to select one more job from the ready queue. This process repeats. Thus, because of the short duration between executions, the STS must be very fast in selecting a job, otherwise the CPU will sit idle. The LTS, however, is called less frequently, so because of the long durations between executions, the LTS can afford to take some time in selecting a good job from disk. A good job is defined as one which is a mix of CPU burst and I/O burst.
(iii) Medium Term Scheduler (MTS)
The MTS is used during swapping, where a process is temporarily removed from memory, often to decrease the overhead on the CPU, and is later resumed.
As the degree of multiprogramming increases, CPU utilization also increases. At one stage the CPU utilization is maximum for a specific number of user programs in memory. At this stage, if the degree of multiprogramming is further increased, CPU utilization drops. Immediately, the operating system observes the decrease in CPU utilization and calls the MTS. The MTS will swap out excess programs from memory and put them on disk. With this, the CPU utilization increases. After some time, when some programs leave memory, the MTS will swap those programs which were swapped out back into memory, and their execution resumes. This scheme, which is known as swapping, is performed by the MTS. Thus, swap out and swap in should be done at appropriate times by the MTS.
Differences between Long-term and Short-term Schedulers
1. Long-term scheduler: selects processes from the job queue and loads them into the ready queue for execution. Short-term scheduler: selects processes from the ready queue and allocates the CPU to them for execution.
2. Long-term scheduler: known as the job scheduler. Short-term scheduler: known as the CPU scheduler.
3. Long-term scheduler: chooses a process less frequently. Short-term scheduler: chooses a process more frequently.
4. Long-term scheduler: slower in operation. Short-term scheduler: faster in operation.
5. Long-term scheduler: usually takes a long time during the selection process. Short-term scheduler: takes less time during the selection process.
6. Long-term scheduler: has more control over the degree of multiprogramming. Short-term scheduler: has less control over the degree of multiprogramming.
7. Long-term scheduler: is almost absent or minimal in a time sharing system. Short-term scheduler: is also present only minimally in a time sharing system.
Q30. Explain in brief about context switching.
Answer :
Context switching refers to the process of switching the CPU to some other process, thereby saving the state of the old process and loading the saved state of the new process.
Context switching is pure overhead, which means that while switching, no other useful work is performed. Machines differ in switching speed based on factors such as memory speed, the number of registers that need to be copied and the presence of special instructions (for example, a single instruction that can be used to load or store all registers). The speed usually ranges from 1 to 1000 µsec.
The time required to do a context switch typically depends on hardware support. For example, a processor may provide more than one set of registers; a context switch then requires only changing the pointer to the active register set. The amount of work to be done during context switching is greater in the case of complex operating systems.
A context switch may occur without changing the state of the process being executed; hence it involves less overhead than the situation in which the process state changes from running to ready or blocked. In case of a change in the process state, the operating system has to make certain changes in its environment, which are described below,
1. The context associated with the processor, along with the program counter and other registers, is saved.
2. The PCB associated with the process being executed is updated. This involves changing the state of the process to one of the available process states. Updation of other fields is also required.
3. The PCB of this process is moved to some appropriate queue.
4. Execution is transferred by selecting some other process.
5. The PCB of the chosen process is updated, which includes the change in its state (to running).
6. The data structures associated with memory management are updated, which may require the management of the address translation process.
7. The context of the suspended process is restored by loading the previous values of the PC and other CPU registers.
Thus, a process switch, which involves a state change, requires considerably more effort than a context switch.
1.3.3 operations on processes
Q31. Explain the process creation and termination.
Answer :
There are two basic operations that can be performed on a process. They are,
1. Process creation
2. Process deletion/termination.
1. Process Creation
(i) When a new process is created, the operating system assigns a unique Process Identifier (pid) to it and inserts a new entry in the primary process table.
(ii) Then the required memory space for all the elements of the process, such as program, data and stack, is allocated, including space for its Process Control Block (PCB).
(iii) Next, the various values in the PCB are initialized, such as,
(a) The process identification part is filled with the PID assigned to it in step (i) and also its parent's PID.
(b) The processor register values are mostly filled with zeroes, except for the stack pointer and program counter. The stack pointer is filled with the address of the stack allocated to it in step (ii) and the program counter is filled with the address of its program entry point.
(c) The process state information would be set to 'New'.
(d) Priority would be lowest by default, but the user can specify any priority during creation.
(e) In the beginning, the process is not allocated any I/O devices or files. The user has to request them, or, if this is a child process, it may inherit some resources from its parent.
(iv) Then the operating system will link this process to the scheduling queue, and the process state would be changed from 'New' to 'Ready'. Now the process is competing for the CPU.
(v) Additionally, the operating system will create some other data structures, such as log files or accounting files, to keep track of process activity.
2. Process Deletion/Termination
Processes terminate themselves when they finish executing their last statement; the operating system then uses the exit( ) system call to delete the process's context. All the resources held by that process, like physical and virtual memory, I/O buffers, open files etc., are then taken back by the operating system. A process P can be terminated either by the operating system or by the parent process of P. A parent may terminate a process due to one of the following reasons,
(i) When the task given to the child is no longer required.
(ii) When the child has taken more resources than its limit.
(iii) The parent of the process is exiting; as a result all its children are deleted. This is called cascaded termination.
Q32. Explain the reasons for process termination.
Answer :
Reasons for Process Termination
A process in an operating system can be terminated when certain errors or fault conditions occur. Following are some of the reasons that lead to process termination,
1. Normal Completion
A process can complete its execution in a normal manner by executing an operating system service call.
2. Unavailability of the Required Memory
A process is terminated when the system is unable to provide the memory it requires, as it is more than the memory actually available in the system.
3. Exceeding the Execution Time Limit
Process termination also occurs when its execution time is much longer than the specified time limit i.e., it takes too long to execute. This is measured in terms of the following possibilities,
(i) Total elapsed time
(ii) Time to execute
(iii) The time interval since the last input was provided by the user. This usually occurs in case of interactive processes.
4. Violating Memory Access Limits
A process can even be terminated when it is attempting to access a memory location to which access is not permitted.
5. Protection Error
A protection error occurs when a process is trying to use a resource (e.g. a file) to which access is not granted, or is using it in an inappropriate manner, such as writing to a read-only file.
6. Arithmetic Error
Arithmetic errors, such as division-by-zero or storing a number greater than the hardware capacity, also lead to process termination.
7. Input/Output Failure
It refers to an error that results from some input/output operation, such as the inability to find a file, or failure of a read or write operation even after trying a certain number of times.
8. Misuse of Data
Misuse of data i.e., using data of the wrong type or uninitialized data, also terminates the process.
9. Exceeding the Waiting Time Limit
Exceeding the waiting time for the occurrence of an event also terminates the process.
10. Invalid Instruction Execution
When a process is trying to execute an instruction that actually does not exist, the process gets terminated.
11. Using a Privileged Instruction
An attempt to use an operating system instruction by a process stops its execution.
12. Interference by an Operating System or an Operator
An operator or an operating system sometimes interferes with process execution and leads to its termination. One such example is the occurrence of deadlocks.
13. Parent Process Termination
When a parent process terminates, it causes all its child processes to stop their execution.
14. Request from a Parent Process
A parent process has the right to terminate any of its child processes at any time during their execution.
1.3.4 inter process communication, examples: producer-consumer problem
Q33. What is inter-process communication? What are the models of IPC?
Answer : Model Paper-I, Q9(b)
Interprocess Communication (IPC) is defined as communication between processes. It provides a mechanism to allow processes to communicate and to synchronize their actions.
The needs for interprocess communication are,
(i) Sharing of information
(ii) High computation speed
(iii) System's modularity
(iv) Convenience to users.
(i) Sharing of Information
A specific type of information may be useful to many users. So, in order to fulfil this, a cooperative environment must be created wherein the users can gain access to all resources concurrently.
(ii) High Computation Speed
The execution of a particular task can be speeded up by dividing the task into various subtasks, wherein each subtask can be executed in parallel along with the others. However, high computation speed can be obtained only through multiple processing elements such as CPUs and I/O channels.
(iii) System's Modularity
The systems can be constructed in a modular way i.e., breaking the system's functions into various separate processes or threads.
(iv) Convenience to Users
The cooperating environment facilitates convenience to users. Many users can perform multitasking i.e., they can work on more than one task.
Example
The user can handle printing, editing and compiling simultaneously.
Models of IPC
Inter process communication has two different models; they are as follows,
(i) Shared memory system
(ii) Message passing system.
(i) Shared Memory System
The shared memory system requires communicating processes to share some variables. The processes are expected to exchange information through the use of shared variables. Here the operating system needs to provide only the shared memory; the responsibility for providing communication rests with the application programmers, and the operating system does not interfere in the communication.
(Two processes communicate by reading and writing a common shared-memory region; the kernel only sets up the shared region.)
Figure: Shared Memory System
(ii) Message Passing System
In the message passing system, the job of the operating system is to perform both tasks i.e., providing memory space and performing the communication. The main function of a message passing system is to allow processes to communicate with each other without the need to resort to shared variables. The message sent by a process can be of either fixed size or variable size. If two processes want to communicate, then they must send and receive messages from each other; thus, a communication link must exist between them, which can be implemented in a variety of ways.
(Two processes exchange messages m0, m1, m2, m3, ..., mn through a message queue maintained in the kernel.)
Figure: Message Passing System
In a message system, if two processes wish to communicate with each other, then they can communicate in the following ways,
(a) Direct and indirect communication.
(b) Symmetric and asymmetric communication.
(c) Automatic and explicit buffering.
(d) Send by reference and send by copy.
Q34. Explain about POSIX API and Windows OS with respect to Inter Process Communication (IPC).
Answer :
POSIX Shared Memory
POSIX employs both message passing and shared memory for inter process communication. Shared memory in this type of system refers to a certain region which carries memory-mapped files. By contrast, under the message-passing model a consumer simply loops on receive( ),
message next_consumed;
while (true)
{
receive(next_consumed);
}
A new process creates an object in the shared memory with the help of the shm_open( ) system call. The syntax of the shm_open( ) system call is as follows,
shm_fd = shm_open(name, O_CREAT);
The parameters associated with this system call include the name of the object; the O_CREAT parameter is used to avoid multiple entries of objects with identical names. There exist some additional flags, such as O_RDWR, which is used to access the shared memory object with read and write permissions. When an object is created successfully, the call returns an integer file descriptor.
The size of the object can be set using the ftruncate( ) system call as,
ftruncate(shm_fd, 2048);
where 2048 is the size of the object in bytes.
The code for the producer process depicting the POSIX shared memory API is as follows,
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<fcntl.h>
#include<unistd.h>
#include<sys/mman.h>
#include<sys/stat.h>
int main( )
{
const int SIZE = 4096;
const char *name = "OS";
const char *msg0 = "SIA";
const char *msg1 = "GROUP";
int shm_fd;
char *ptr;
/* create the shared memory object and set its size */
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
ftruncate(shm_fd, SIZE);
/* map the object and write both strings into it */
ptr = mmap(0, SIZE, PROT_WRITE, MAP_SHARED, shm_fd, 0);
sprintf(ptr, "%s", msg0);
ptr += strlen(msg0);
sprintf(ptr, "%s", msg1);
ptr += strlen(msg1);
return 0;
}
The code for the consumer process depicting the POSIX shared memory API is as follows,
#include<stdio.h>
#include<stdlib.h>
#include<fcntl.h>
#include<unistd.h>
#include<sys/mman.h>
#include<sys/stat.h>
int main( )
{
const int SIZE = 4096;
const char *name = "OS";
int shm_fd;
char *ptr;
/* open the existing object read-only and map it */
shm_fd = shm_open(name, O_RDONLY, 0666);
ptr = mmap(0, SIZE, PROT_READ, MAP_SHARED, shm_fd, 0);
printf("%s", ptr);
/* remove the shared memory object */
shm_unlink(name);
return 0;
}
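On a typical Linux system, these two sketches would be compiled and run along the following lines (the -lrt flag links the real-time library that provides shm_open( ); this detail is platform-dependent and is an assumption here),
gcc producer.c -o producer -lrt
gcc consumer.c -o consumer -lrt
./producer
./consumer
The producer writes the strings into the shared-memory object named "OS"; the consumer then maps the same object, prints its contents and removes it with shm_unlink( ).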
The newly created shared-memory object is saved in a memory-mapped file using the mmap( ) function and, for quick access to it, a pointer is used. In addition to this, the flag MAP_SHARED is used, which is responsible for making the modifications done on the object visible to all the processes sharing it. Additionally, writes to the object can be applied using the sprintf( ) function.
Windows
The Windows operating system employs a message passing mechanism for inter process communication. It can also support subsystems, that is, different operating environments. It is a typical client-server communication, where a subsystem acts as a server and all the applications act as clients.
A specific version of Remote Procedure Call (RPC) specially designed for the Windows operating system, called Advanced Local Procedure Call (ALPC), is used here as the message passing mechanism. As the name suggests, it is used for communication among local processes i.e., processes present on a single system. This communication is made available through a special port object.
There are two types of ports used in Windows OS. They are,
(i) Connection ports
(ii) Communication ports.
(i) Connection Ports
The server carries certain ports which are visible to all the clients. These ports are used by the clients whenever they require support from the server. When a request is received by the server, it creates a channel carrying a pair of communication ports.
(ii) Communication Ports
The communication ports are private between client and server. The reason for using a pair of communication ports is that one port is employed for the exchange of messages from client to server and the other is used for the exchange of messages from server to client.
ALPC performs the following actions based on message size,
• If the size of the message is less than 256 bytes, the message is stored in the message queue itself for quick transfer.
• If the size of the message is larger than 256 bytes, the message is mapped through a region in shared memory called a section object.
• In case of a large amount of data, an application programming interface is used that can modify the client address space directly from the server side.
It is the client's responsibility to decide the size of the message it sends and to create the section object. At the server side, the server decides the size of the section object based on the size of the replies.
Q35. Write short notes on message passing in Mach operating system.
Answer :
In the Mach operating system, communication among processes and the kernel is made possible using messages. What ports are in Windows OS are mailboxes in this OS, through which messages are transferred in and out. Moreover, the system calls are also based on messages. To support this, two mailboxes are created and associated with each task: one, known as the kernel mailbox, is responsible for providing a platform for the exchange of messages between the kernel and the task, whereas the other, known as the notify mailbox, is responsible for notifying the task of the occurrence of events.
Some of the system calls used in the Mach operating system are,
• msg_send( )
This system call is used to send messages to a mailbox.
• msg_receive( )
This system call is used to receive messages from a mailbox.
• msg_rpc( )
This system call is used to employ RPCs (Remote Procedure Calls), with which it both sends a message and receives one in reply. In this case, only one return message is received.
• port_allocate( )
This system call allows a process to create its own mailbox, in which a maximum of eight messages can be stored. The owner of this mailbox will be the one who creates it, and the owner can hand over this responsibility to some other task.
The Mach operating system stores messages in the mailbox as soon as it receives them. If the mailbox carries messages from a single process, the messages are arranged in FIFO sequence, but this ordering is not guaranteed when messages come from multiple senders.
Typically, a message carries a fixed-length part and a variable-length part. The fixed-length part is used as a header and the variable-length part carries the data.
When a message is sent to a mailbox that is already full, the sender can do one of the following,
• Wait for the mailbox to get enough free space
• Wait for a certain period of time
• Return to the source immediately
• Have a single message temporarily cached; when the mailbox gets enough free space, the system informs the sender about it.
Mach is designed to be effectively used in distributed systems. The newer versions of it employs virtual memory management
techniques with which double-copy of messages can be avoided.
Q36. Write in short about the following,
(i) Synchronization
(ii) Buffering.
Answer :
(i) Synchronization
Any two processes interact with each other by using the system calls send( ) and receive( ). Message passing can be either blocking or nonblocking, also known as synchronous or asynchronous. Blocking refers to halting the sending or receiving process until the operation completes, whereas nonblocking allows free flow of the sending or receiving of messages. Various combinations of send and receive are allowed. A situation called rendezvous occurs between the sender and receiver when both send( ) and receive( ) are blocking.
An example of synchronization in message passing is the producer-consumer problem.
Producer Process
message producednext;
while(true)
{
    /* produce an item in producednext */
    send(producednext);
}
Consumer Process
message consumednext;
while(true)
{
    receive(consumednext);
    /* consume the item in consumednext */
}
(ii) Buffering
The messages that are sent and received by the communicating processes are stored in a temporary queue. Such queues can be implemented in the following ways.
(a) Bounded Capacity
The queue has a finite length, so it can store at most a finite number of messages. Messages are stored in the queue as long as it is not full, and the sender need not wait. But the capacity of the link is fixed; when the link is full, the sender needs to wait to send further messages through it.
(b) Unbounded Capacity
The queue's size is infinite and any number of messages can be stored in it. The sender never needs to wait.
(c) Zero Capacity
The queue's maximum size is zero and therefore the link cannot store any messages in it. The sender needs to block until the receiver receives the message. This implementation is sometimes called a message system without buffering. The other two are called systems with automatic buffering.
1.4 Process Synchronization

1.4.1 Critical-Section Problem, Peterson's Solution
Q37. Explain the following,
(a) Need for mutual exclusion
(b) Critical resource
(c) Critical section
(d) Starvation.
Answer :
(a) Need for Mutual Exclusion
Consider a situation in which two or more processes need access to a single non-sharable resource (example: a printer). During execution, each process sends commands to the I/O device, sends and receives data, or receives status information. Such an I/O device is said to be a critical resource, and the portion of a program that uses it is called a critical section. An important point to be considered here is that only one program is permitted to enter into the critical section at any time.
(b) Critical Resource
A resource that cannot be shared between two or more processes at the same time is called a critical resource. There may be a situation where more than one process requires access to the critical resource. Then, during the execution of these processes, they can send data to the critical resource, receive data from the critical resource, or just get information about the status of the critical resource by sending related commands to it. An example of a critical or non-sharable resource is a 'printer'. A critical resource can be accessed only from the critical section of a program.
(c) Critical Section
A critical section is a segment of code present in a process in which the process may be modifying or accessing common variables or shared data items. The most important thing that a system should control is that, when one process is executing in its critical section, it should not allow other processes to execute in their critical sections.
Before executing the critical section, the process should get permission from the system to enter its critical section; this is called the entry section. After that, the process executes its critical section and comes out of it; this is called the exit section. Then, it executes the remaining code, called the remainder section.
while(1)
{
    /* entry section */
    /* critical section */
    /* exit section */
    /* remainder section */
}
Figure: Structure of a Process
(d) Starvation
Two or more processes are said to be in starvation if they are waiting perpetually for a resource which is occupied by another process. The process that has occupied the resource may or may not be present in the list of processes that are starved.
Let P1, P2 and P3 be three processes, each of which requires periodic access to resource R. If access to resource 'R' is granted to process P1, then the other two processes P2 and P3 are delayed as they are waiting for the resource 'R'. Now, let the access be granted to P3, and suppose P1 again needs 'R' prior to the completion of its critical section. If the OS permits P1 to use 'R' after P3 has completed its execution, then these alternate access permissions provided to P1 and P3 cause P2 to be blocked.
Here, it is required to illustrate whether starvation is possible or not in algorithms like FCFS, SPN, SRT and priority.
Consider the FCFS (First Come First Served) algorithm; in this, starvation is not possible. The reason is that the CPU picks processes according to their arrival times and runs each process till its completion.
Now, consider the SPN (Shortest Processing Next) algorithm; in this, starvation is possible for processes that have long burst times. The reason is that the CPU always picks the process that has the shortest next burst time. Here, the starvation problem can be reduced by using the preemptive variant of SPN, which preempts the currently running process.
Next, consider the SRT (Shortest Remaining Time) algorithm; in this, starvation is possible for processes that have long remaining times. The reason is that the CPU always picks the process that has the shortest remaining time. Here, the problem of starvation can be overcome by giving a chance to processes that have been waiting for a long period of time. Finally, consider the priority algorithm; in this, starvation is possible for low priority processes. The reason is that the CPU picks the process with the highest priority.
The starvation problem can be overcome by a technique called aging. This technique increases the priority of processes that have been waiting for a long period of time.
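For instance (a standard illustration, not from the text): if priorities range from 127 (low) to 0 (high) and the system increments the priority of a waiting process by 1 every 15 minutes, then even a process starting at priority 127 would climb to priority 0 after 127 × 15 minutes, i.e., in no more than about 32 hours, and would then be guaranteed to be scheduled.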
Q38. What are the three requirements that a solution to the critical section problem must satisfy?

Answer :

The following are the important properties that should be satisfied by a critical section implementation or solution,

1. Mutual Exclusion
When a process P1 is in its critical section, then no other process can be executing in its critical section.

2. Progress
When no process is executing in its critical section and some processes are requesting entry, only those processes that are not executing in their remainder sections can participate in deciding which process will enter its critical section next.

3. Bounded Waiting
A limit or bound is fixed on the number of times other processes may enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

There are two general ways of handling critical sections in operating systems. They are,

(i) Preemptive Kernel
It allows a kernel-mode process to be preempted (i.e., interrupted) during execution.

(ii) Non-preemptive Kernel
It doesn't allow a kernel-mode process to be preempted during execution; the process will execute until it exits kernel mode or voluntarily leaves control of the CPU. This approach is helpful in avoiding race conditions.

The preemptive kernel is used in real-time systems, where a process executing in kernel mode can also be preempted, which makes the kernel more responsive. Windows XP, Windows 2000 and traditional UNIX are non-preemptive kernels, whereas Linux from kernel version 2.6 is a preemptive kernel.

Q39. Explain preemptive kernels and non-preemptive kernels. Also explain why would any one favour a preemptive kernel over a non-preemptive one.

Answer :

Preemptive Kernel

A kernel that permits a process to be preempted or interrupted during its execution is called a preemptive kernel. In a preemptive kernel, every task is designed as an independent entity that has total control over the CPU. However, the task that is ready to run and has the highest priority is executed first by the kernel.

The figure below depicts the flow of a preemptive kernel with three tasks A, B and C.

[Figure: Preemptive Kernel's Program Flow]

Furthermore, in any process execution, a task can be in either of the following three states,
1. Running and waiting
2. Waiting
3. Idle.

1. Running and Waiting
A task will be in a running, waiting state when it is not ready to run.

2. Waiting
A task will be in a waiting state when it is ready to run but cannot do so due to the execution of a higher priority task.

3. Idle
A task is considered to be idle when no process has a task that is ready to be executed. The idle task is a special purpose entity which has the lowest priority and is usually incorporated in all kernel programs.

Operation of Preemptive Kernel

The following figure shows the program context for a preemptive kernel,

[Figure: Preemptive Kernel Program Context — tasks A, B, C and the idle task plotted against time, with TC, TA and TB marking the instants at which tasks C, A and B become ready; the bars distinguish 'Running', 'Ready waiting' and 'Not ready waiting']
Here, there are three tasks A, B and C with priorities A > B > C. Hence, when task C is ready to run, the kernel interrupts its idle task and begins the execution of task C. And, when task A is ready to run, the kernel interrupts task C and starts executing task A. However, when task B is ready to run, the kernel does not halt or preempt task A, due to A's higher priority.

At this stage all the three tasks are in the ready state, but B and C wait for A to finish. When this is done, A goes into a waiting state until it is invoked again. The control is then transferred to B as it holds a higher priority than C. Therefore, when B finishes, the control is transferred to C to complete its execution.

Advantage and Disadvantage

The major advantage with a preemptive kernel is that it allows the tasks to be designed independently. However, this makes it very complex and consumes more memory resources.

Non-preemptive Kernel

A kernel that does not permit a process to be preempted or interrupted during its execution is called a non-preemptive kernel. In a non-preemptive kernel, every task has to assist another task in its completion by giving up the CPU in an appropriate manner.

[Figure: Non-preemptive Task]

The following figure depicts a non-preemptive kernel along with its three cooperative tasks A, B and C.

[Figure: Non-preemptive Program Flow]

Here, the non-preemptive kernel acts as a periodic scheduler which serially executes every task. However, every task must assist the others by running just once and then returning to the scheduler loop. This is because, if any task gets implemented as an endless loop, then the scheduler will never get to the other tasks.

Operation of Non-preemptive Kernel

The figure below shows the program context of a non-preemptive kernel,

[Figure: Non-preemptive Kernel Program Context — tasks A, B, C and the idle task plotted against time, with TC, TA and TB marking the instants at which tasks become ready and 'rts' marking each return to the scheduler; the bars distinguish 'Running', 'Ready waiting' and 'Not ready waiting']

Here, the task with the highest priority waits for the completion of the lowest-priority task, followed by a normal execution (according to priority). Therefore, control is passed to task A only after the completion of task C followed by task B.

However, this scheme is adopted by only a few schedulers. This is because a round-robin method is widely used in non-preemptive schedulers, which produces the same response times for all the tasks.

Advantage and Disadvantage

Non-preemptive kernels are very easy to design and consume less memory resources. However, they have a relatively slower response time for higher priority tasks and are complex to write.

Favouring a Preemptive Kernel over a Non-preemptive One

The reasons for favouring a preemptive kernel over a non-preemptive one are,

(a) A preemptive kernel can permit a real-time process to interrupt a process running in the kernel. Hence, it is much more appropriate for real-time programming.

(b) A preemptive kernel does not allow a kernel-mode process to run for a long period of time without assigning the processor to another ready process. Hence, it is more responsive than a non-preemptive kernel.
Q40. Explain the Peterson's solution of critical section problem.

Answer :

Peterson's Solution

Peterson's solution is a software based solution to the critical section problem that satisfies all the requirements, i.e., mutual exclusion, progress and bounded waiting. It provides alternate execution of the critical sections of two processes named P0 and P1. We use the notation Pi for P0 and Pj for P1, where i = 0 and j = 1 – i (i.e., 1 – 0 = 1). Here, two data structures are used, as follows,

int turn;
boolean flag[2];

The variable 'turn' indicates the process whose turn it is to execute its critical section. For example, if turn == j, then process Pj is allowed to enter its critical section and execute. The 'flag' array indicates whether a process is ready to enter its critical section or not. For example, if flag[j] is true, then it means process Pj is ready to enter its critical section.

The algorithm of Peterson's solution is as follows,

[Figure: The Structure of Process 'Pi' in Peterson's Solution]
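The figure above is an image in the original; a minimal sketch of process Pi's loop, reconstructed from the well-known two-process algorithm in the same pseudocode style, is,

do
{
    flag[i] = true;              /* Pi declares it is ready to enter its CS */
    turn = j;                    /* politely give the other process the turn */
    while(flag[j] && turn == j)
        ;                        /* busy wait (no-op) */
    /* critical section */
    flag[i] = false;             /* exit section */
    /* remainder section */
} while(true);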
Process Pi sets flag[i] to 'true' to indicate that it is ready to enter its critical section. Then it allows Pj to enter its critical section (if it wishes) by assigning the value of variable 'turn' as 'j' (turn := j).

The while loop waits, doing no-operation (no-op), as long as flag[j] is true and the value of 'turn' is equal to 'j'. Only when this condition fails does Pi come out of the while loop and enter its critical section; after executing it, Pi exits the critical section by disabling its flag (flag[i] := false). Later it executes the remainder section.

Mutual exclusion is preserved, because Pi can enter its Critical Section (CS) only if either flag[j] == false or turn == i. If neither of the cases holds, then control will be blocked in the while loop. Also, the value of 'turn' can be either 0 or 1 but not both, which implies that either P0(Pi) or P1(Pj) will execute its CS at a particular time, but not both at the same instance.

Progress and bounded waiting requirements are satisfied by the condition in the blocking while loop. It consists of two conditions, flag[j] == true and turn == j. If process Pj is not ready to enter its CS, then the value of flag[j] will be false and Pi can execute its CS. If that is not the case, i.e., Pj is ready, then execution of the CS depends on the value of 'turn': if it is equal to i, then Pi executes, or else if it is equal to j, then Pj executes. That is, there is always progress, and the waiting time is also bounded.

1.4.2 Synchronization

Q41. Explain the solution to critical-section problem using locks and hardware instructions.

Answer :

Synchronization Hardware Using Locks

To solve critical section problems and avoid race conditions, locks can be used. A process should acquire a lock before entering its Critical Section (CS) and should release the same after exiting from the CS.

Using Hardware Instructions

The task of synchronization can be made easier and system efficiency can be improved by using hardware features like instructions and interrupts. In a uniprocessor system, the critical-section problem can be solved by blocking all interrupts of the processor when a shared variable is being accessed or modified. This blocking guarantees that no other code is executing and hence the consistency of the shared variable can be preserved and synchronization can be achieved. This approach is often used in non-preemptive kernels.

This solution cannot be applied in multiprocessor environments because blocking all interrupts will be time consuming and system efficiency decreases. Also, other interrupt-specific applications like the "interrupt-updatable system clock" can be affected.
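As an illustrative sketch (not from the text), the acquire/release lock pattern described above maps directly onto the C11 standard atomic_flag type, whose test-and-set and clear operations behave exactly like the atomic hardware instructions discussed next,

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* initially clear (unlocked) */

void acquire(void)
{
    /* atomically set the flag and return its previous value;
       keep spinning while the previous value was 'set' (locked) */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy wait */
}

void release(void)
{
    atomic_flag_clear(&lock);   /* set the flag back to clear (unlocked) */
}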
Today’s computers provide certain hardware instructions
that atomically test and modify the contents i.e., the operation
is performed as a single uninterruptible unit. These instructions
can be used to solve critical section problems.
Test and Set Instruction
The test and set instruction is defined as follows,
boolean test_set(boolean *trgt)
{
    boolean res;
    res = *trgt;
    *trgt = true;
    return res;
}
The test_set( ) instruction is an atomic instruction, i.e., it is executed uninterruptibly. Even if two test_set( ) instructions are executed simultaneously on different processors, internally they will be executed sequentially. Any processor that supports the test_set( ) instruction can implement mutual exclusion by declaring a global variable lock, initializing it to false, and structuring each process as follows,

boolean lock;
while(1)
{
    while(test_set(&lock))
        ;   /* busy wait */
    /* critical section */
    lock = false;
    /* remainder section */
}

Swap Instruction

Another type of instruction is "swap", which is also executed atomically. It operates on two variables as shown below,

void swap(boolean *x, boolean *y)
{
    boolean var;
    var = *x;
    *x = *y;
    *y = var;
}

In this method, mutual exclusion can be provided by declaring a global boolean "lock" initialized to false. Each process also has a local variable "key". The code for a process is shown below,

while(1)
{
    key = true;
    while(key == true)
        swap(&lock, &key);
    /* critical section */
    lock = false;
    /* remainder section */
}

Test and Set Instruction with Bounded-waiting

The limitation of the above algorithms is that they provide only mutual exclusion, not the bounded-waiting requirement. An alternative algorithm that satisfies all the critical-section requirements uses the test_set( ) instruction with two common data structures, both initialized to false,

boolean waiting_flag[n];
boolean lock;

The code for process Pi is,

while(1)
{
    waiting_flag[i] = true;
    key = true;
    while(waiting_flag[i] && key)
        key = test_set(&lock);
    waiting_flag[i] = false;
    /* critical section */
    j = (i + 1) % n;
    while((j != i) && !waiting_flag[j])
        j = (j + 1) % n;
    if(j == i)
        lock = false;
    else
        waiting_flag[j] = false;
    /* remainder section */
}

In the above algorithm, the critical section requirements are satisfied as follows,
Mutual Exclusion

It is achieved with the help of the following while loop,

while(waiting_flag[i] && key)

It means a process can enter its critical section only if either "waiting_flag[i]" or "key" is "false". The "key" value can be set to "false" only if test_set( ) is executed, and whichever process executes test_set( ) first gets the critical section while all others have to wait. Hence, only one process will be in its critical section at any time.

Progress

The progress requirement is met, since after executing its critical section a process sets either "lock = false" or "waiting_flag[j] = false". Either of the two ways allows other waiting processes to proceed.

Bounded-waiting

This requirement is achieved as follows. After a process exits its critical section, the following while loop is executed,

while((j != i) && !waiting_flag[j])
    j = (j + 1) % n;

The above loop scans the "waiting_flag" array in cyclic order and allows the first waiting process to enter the critical section.

There are some advantages and disadvantages of using special machine instructions to implement mutual exclusion.

Advantages
1. This approach can be used for any number of processes executing not only on a uniprocessor but also on multiprocessors sharing main memory.
2. It is very simple to verify.
3. It can be used to provide support for multiple critical sections, each of which can be defined by its own variable.

Disadvantages
1. Busy waiting is exercised while a process is waiting for access to the critical section. During this, a considerable amount of processor time is consumed.
2. There is a possibility of starvation. When one process exits a critical section and many processes are waiting, the selection of a waiting process can be done in an arbitrary manner. Hence, some processes may be denied access.
3. Deadlock may occur. Consider, for example, a process P1 which is executing a special instruction after entering into the critical section. During its execution, if it is interrupted so as to assign the processor to some other process P2, which has a higher priority than P1, and if P2 further attempts to use the same resource as P1, its access request is denied due to the mutual exclusion mechanism. Hence, P2 enters into the busy-waiting loop, while process P1 will never be dispatched as it has the lower priority.

1.4.3 Semaphores

Q42. What is Semaphore?

Answer :

Semaphore

Signals provide a simple means of cooperation between two or more processes in such a way that a process can be forcefully stopped at some specified point till it receives a signal. For signalling between the processes a special variable called semaphore (or counting semaphore) is used. For a semaphore 'S', a process can execute two primitives, as follows,

(i) semSignal(S)
This semaphore primitive is used to transmit a signal through semaphore 'S'.

(ii) semWait(S)
This semaphore primitive is used to receive a signal using semaphore 'S'. If the corresponding transmit signal has not yet been sent, then the process is suspended till a signal is received.

Hence, a semaphore (or counting semaphore) is actually an integer variable with three operations, defined as follows,

1. A non-negative value can be used to initialize the semaphore.
2. Each semWait operation decrements the semaphore value, and when the value becomes negative the process gets blocked, else the process execution proceeds in a regular manner.
3. Each semSignal operation increments the semaphore value, and when the resulting value is less than or equal to zero, a process which was blocked by a semWait operation is unblocked.

The two semaphore primitives semWait and semSignal can be defined as follows,

struct semaphore
{
    int C;
    queueType que;
};
void semWait(semaphore S)
{
    S.C = S.C - 1;
    if(S.C < 0)
    {
        keep the process in S.que;
        block the process;
    }
}
void semSignal(semaphore S)
{
    S.C = S.C + 1;
    if(S.C <= 0)
    {
        remove a process from S.que;
        place the process on the ready list;
    }
}

The two semaphore primitive operations defined above are atomic.

Example

Consider an example of semaphore operation consisting of three processes P1, P2 and P3. These processes depend on the result generated by process P4. The execution steps are shown in the figures.

1. In the beginning, process P1 is running and processes P2, P3 and P4 are in the ready state. The semaphore count value is 1, specifying that one of the results produced by process P4 is now available.

[Figure (i): P1 running; ready queue: P4, P3, P2; blocked queue: empty; semaphore S = 1]

2. Now, process P1 issues a semWait instruction on semaphore S which decrements its value to '0', thereby allowing process P2 to run. Process P1 now rejoins the ready queue as shown in figure (ii).

[Figure (ii): P2 running; ready queue: P4, P3, P1; blocked queue: empty; semaphore S = 0]

3. Process P2 now sends out a semWait instruction and gets blocked, thereby permitting process P4 to run.

[Figure (iii): P4 running; blocked queue: P2; ready queue: P1, P3; semaphore S = –1]

4. After the completion of process P4, a semSignal instruction is issued which allows process P2 to shift to the ready queue.

[Figure (iv): P4 running; blocked queue: empty; ready queue: P2, P1, P3; semaphore S = 0]

5. Process P4 is again placed in the ready queue and P3 starts running; this is shown in figure (v).

[Figure (v): P3 running; ready queue: P4, P2, P1; semaphore S = 0]

6. Process P3 gets blocked on issuing a semWait instruction. Processes P1 and P2 run in a similar manner and are blocked, allowing process P4 to resume its execution.
[Figure (vi): P4 running; blocked queue: P3, P1, P2; semaphore S = –3]

7. When process P4 produces a result, a semSignal instruction is issued and the process P3 is transferred to the ready queue. This process is repeated till the processes P1 and P2 become unblocked.

[Figure (vii): blocked queue: P1, P2; ready queue: P3; semaphore S = –2]

Q43. Describe about semaphores and their usage and implementation.

Answer : Model Paper-III, Q9(b)

Semaphores

For answer refer Unit-I, Page No. 33, Q.No. 42.

Semaphore Usage

Semaphore is used in two situations,

(i) To solve the critical-section problem
(ii) To gain access control for a given resource.

(i) During Critical-section Problem

Semaphore is used to deal with the critical-section problem among multiple processes by setting its value either to 0 or 1, and hence it is called a 'Binary Semaphore'. The value 1 directs a process to enter into the critical section, whereas the value 0 prevents processes from entering into the critical section (since one of the processes is still in the critical section). The methods wait(s) and signal(s) executed by the processes set the semaphore value to 0 and 1 respectively.

(ii) During Access Control

Semaphore is also used to control access to a particular resource with a finite number of instances. It is first initialized to the total number of resources available. It counts the number of remaining resources when a process uses or releases the resource (by decrementing or incrementing its value) and is hence called a counting semaphore. The wait( ) operation is performed on the semaphore to use the resource and the signal( ) operation is performed to release the resource.

Semaphore is not only used in the situations discussed above, but is also used to solve various synchronization problems. For example, if two processes want to run concurrently then they can use a semaphore 'synch' which synchronizes the processes. Consider two processes p1 and p2 containing the statements s1 and s2 respectively.

To execute the statement s2 before the statement s1, the semaphore is used in the following manner,

synch = 0;

s2;
signal(synch);    /* in process p2 */

wait(synch);      /* in process p1 */
s1;

Implementation of a Semaphore

A semaphore is an integer variable which can be accessed using two operations, wait and signal. In order to implement these operations, a semaphore maintains an integer value and a list of processes that wait on the semaphore.

In this sense a semaphore can be defined as follows,

typedef struct
{
    int val;
    struct process *pList;
} semaphore;

When a process executes the wait operation it decrements the value of the semaphore and checks whether the value is positive or negative. If the value is negative then the process is blocked using the block( ) operation and placed in the waiting queue (i.e., the process list) maintained by the semaphore. This changes the state of the process to the waiting state.

The blocked process can be resumed only when some other process executes the signal operation. The signal operation increments the value of the semaphore and checks whether it is less than or equal to zero. If the condition is satisfied then it removes a blocked process from the waiting queue and resumes its execution using the wakeup( ) operation.
The wait and signal operations of a semaphore can be defined as follows,
void wait(semaphore sem)
{
    sem.val--;
    if(sem.val < 0)
    {
        /* add this process to sem.pList */
        block( );
    }
}
void signal(semaphore sem)
{
    sem.val++;
    if(sem.val <= 0)
    {
        /* remove a process p from sem.pList */
        wakeup(p);
    }
}
It is to be noted that the execution of semaphore operations should be atomic, i.e., no two processes should execute the wait and signal operations on the same semaphore simultaneously. This is itself a critical-section problem, which must be addressed in both uniprocessor and multiprocessor environments. In a uniprocessor system it can be handled by inhibiting interrupts during the execution of the wait and signal operations.
In a multiprocessor system it can be prevented by employing special software solutions.
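As a concrete illustration of these primitives (a sketch using the real POSIX semaphore API, not something from the text), the semWait/semSignal pair corresponds to sem_wait( )/sem_post( ),

#include <semaphore.h>

sem_t s;

void worker(void)
{
    sem_wait(&s);    /* semWait: decrement; blocks while the count is 0 */
    /* critical section */
    sem_post(&s);    /* semSignal: increment; wakes one blocked waiter, if any */
}

int main(void)
{
    sem_init(&s, 0, 1);   /* initial value 1: binary-semaphore usage */
    worker();
    sem_destroy(&s);
    return 0;
}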
Q44. Write short notes on,
(i) Priority inversion
(ii) Priority inheritance.
Answer :
(i) Priority Inversion
The priority inversion problem arises when a resource is shared by two or more tasks. In this case, a situation arises where a higher priority task has to wait till a lower priority task is executed; the low and high priority tasks are inverted.
The priority inversion problem can be better explained with the help of the following example.
Consider three tasks in a system with task 1 having the highest priority, task 2 medium priority and task 3 the least priority. Initially, assume task 3 is in the running state and tasks 2 and 1 are waiting. Consider the figure below,

[Figure (1): Priority Inversion Problem — Task 1 has priority 1 (highest), Task 2 priority 2, Task 3 priority 3 (lowest)]
When task 3 is being executed, it acquires a semaphore, but its operation is not completed. Next, task 1 becomes ready to run and its execution is started. Task 1 needs the semaphore, so it goes to the waiting state and task 3 executes. After task 3, task 2, which was ready to run, gets executed. Next, task 3 resumes running, and when it releases the semaphore, task 1 finally gets executed. So, in this way task 1 has to wait for a long time, although it has the highest priority. Here, the priorities of tasks 1 and 3 are inverted. This is thus called the priority inversion problem.
(ii) Priority Inheritance
Priority inheritance is a mechanism used for handling the priority inversion problem. It is a lock-based process synchronization technique in which a shared resource currently in use by a process cannot be accessed by another process. In this mechanism, the priority of a low priority task which is currently holding a resource requested by a high priority task is raised to that of the high priority task; it inherits the priority temporarily, hence the name. Raising the priority of the low priority task to the priority of the task requesting the shared resource eliminates the preemption of the low priority task by intermediate tasks, and hence the delay encountered in waiting for the resource requested by the high priority task is reduced. When the low priority task releases the shared resource, its boosted priority is brought back to the original value.
Figure (2) below shows the change in the execution sequence when the priority inheritance mechanism is used to handle the priority inversion problem described for Task 1, Task 2 and Task 3.

[Figure (2): Handling Priority Inversion Problem with Priority Inheritance]
Though priority inheritance reduces the delay, it cannot completely eliminate the delay the high priority task experiences in waiting to get the resource from the low priority task; it can only help the low priority task to continue its execution and release the resource with minimum delay. All the overheads in priority inheritance occur at run time. It also incurs the overhead of checking the priorities of all the tasks which attempt to access the shared resources and of changing the priorities dynamically.
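As a practical aside (an illustrative sketch using the real POSIX threads API, not part of the text), priority inheritance can be requested for a mutex via the PTHREAD_PRIO_INHERIT protocol,

#include <pthread.h>

pthread_mutex_t res_lock;   /* guards the shared resource */

void init_resource_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* while a low-priority thread holds res_lock, it temporarily
       inherits the priority of the highest-priority thread
       blocked on res_lock */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&res_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}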

Q45. Discuss dining philosopher's problem and a solution for it.

Answer :
Consider the following situation: there are five philosophers who have only two jobs in this world, thinking and eating. Each philosopher sits on a chair laid around a circular table. There is a plate of noodles placed in the center, and five single chopsticks are placed on the table, as shown in the figure below.

[Figure: Dining Philosophers Problem — philosophers P1 to P5 seated around a circular table with a bowl of noodles in the center and one chopstick between each pair of neighbours]
Each philosopher thinks independently of the others, and when he gets hungry, he eats. But, in order to eat, he needs two chopsticks. There is a restriction that a philosopher may take only the chopsticks of his left and right neighbours, and he cannot pick up a chopstick which is currently in the hand of a neighbour. If a philosopher gets both chopsticks, he eats, and after that puts the chopsticks down on the table and starts thinking again.

The problem is to ensure that all philosophers peacefully think and eat, that no philosopher starves of hunger (i.e., no starvation), and that the chopsticks are acquired mutually exclusively.
For solving this problem using semaphores, each chopstick is represented as a semaphore. Hence, we have an array of five chopstick semaphores. To take a particular chopstick, the philosopher has to execute a wait( ) operation on that semaphore, and while putting down the chopstick, he executes a signal( ) operation. Consider the following pseudocode for a philosopher 'X',

semaphore chopstk[5];
while(1)
{
    wait(chopstk[x]);
    wait(chopstk[(x + 1)%5]);
    /* critical section: perform eating */
    signal(chopstk[x]);
    signal(chopstk[(x + 1)%5]);
    /* remainder section: perform thinking */
}

Figure: Structure of Philosopher 'X'
The above solution may lead to deadlock: consider a situation where all philosophers have grabbed their left chopsticks; now everybody would try to grab the right chopstick but will be delayed forever. There are several other solutions for the dining philosopher's problem which are deadlock free. One technique is to eliminate the hold-and-wait condition: no philosopher should be allowed to hold a chopstick and wait for another; he should grab chopsticks only if both are available. Another solution is to use an asymmetric order, in which an even philosopher (P2 or P4) is allowed to pick his right chopstick first and then the left chopstick, while each odd philosopher (P1, P3, P5) takes the left chopstick first and then the right chopstick; a sketch of this ordering follows.
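A sketch of that asymmetric ordering in the same wait( )/signal( ) notation (a reconstruction for illustration, not code from the text) is,

/* entry protocol for philosopher x */
if(x % 2 == 0)
{
    wait(chopstk[(x + 1)%5]);   /* even philosopher: right chopstick first */
    wait(chopstk[x]);           /* then left */
}
else
{
    wait(chopstk[x]);           /* odd philosopher: left chopstick first */
    wait(chopstk[(x + 1)%5]);   /* then right */
}
/* eat, then signal( ) both chopsticks as before */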
1.4.4 Monitors

Q46. What is meant by a monitor? How is it different from a semaphore? Also explain the various operations used in a monitor.

Answer :
Monitor

A monitor is a construct in a programming language which consists of procedures, variables and data structures. Any process can call the monitor procedures, but access to the internal data structures is restricted. At any time, a monitor contains only one active procedure.

Condition variables are used within a monitor for blocking and unblocking processes.
or

A monitor refers to a software module which consists of one or more procedures, an initialization sequence and local data.

Characteristics
1. Access to the local variables can be granted only to the monitor's procedures but not to any external procedure.

2. Any process can be allowed to enter into the monitor by invoking one of its procedures.

3. At any time only one process can execute inside the monitor, and during its execution if any other process invokes a monitor procedure, it is blocked till the monitor becomes available.

A monitor makes use of condition variables for providing synchronization. They can be operated upon using two functions. They are,

1. cwait(c)

Upon executing this function, the calling process is suspended and the monitor becomes available for use by any other process.

2. csignal(c)

One of the processes blocked on the condition resumes its execution upon execution of this function.
Comparison between a Semaphore and a Monitor

1. Semaphores can be used anywhere within a program, but cannot be used inside a monitor; a monitor makes use of condition variables anywhere inside it.
2. With a semaphore, the caller is not always blocked when the wait( ) operation is executed; in a monitor, the wait( ) function always blocks the caller.
3. A semaphore's signal( ) releases a blocked thread, if one exists, or else increases the semaphore count value; a monitor's signal( ) releases a blocked thread, if one exists, or else is lost as if it never occurred.
4. With a semaphore, upon release of a blocked thread by the signal( ) function, the caller and the released thread can both continue their executions; in a monitor, only one among the caller and the released thread can continue, but not both.
Monitors with Notify and Broadcast

According to Hoare's definition, a process (if any exists) in a condition queue must be executed immediately when some other process issues a csignal for that condition. Some demerits associated with this approach are,
1. If the process issuing the csignal has not yet completed using the monitor, two process switches may occur for,
(i) Blocking this process and
(ii) Resuming some other process when the monitor becomes available.
2. Scheduling of the processes must be done in a perfectly reliable manner.
Once a csignal is issued, a process in the condition queue must immediately be set to the ready state and the scheduler is assigned the task of ensuring that no other process can enter into the monitor. In MESA, csignal is replaced with cnotify, with the interpretation that a process executing in a monitor may execute cnotify(x), which causes the condition queue for 'x' to be notified.
Example
void append_elem(char i)
{
    while(count == n)
        cwait(empty);
    buf[next_inelem] = i;
    next_inelem = (next_inelem + 1) % n;
    count = count + 1;
    cnotify(full);
}
void take_elem(char i)
{
    while(count == 0)
        cwait(full);
    i = buf[next_outelem];
    next_outelem = (next_outelem + 1) % n;
    count = count - 1;
    cnotify(empty);
}
The broadcast signal (cbroadcast) causes all the processes waiting on a condition to be placed in the ready state.
Q47. Discuss briefly the solution for dining-philosophers problem using monitors.
Answer :
The monitor can be used to solve the dining philosopher's problem. It is assumed that a philosopher grabs the left and right chopsticks only if both of them are available (this is to avoid the hold-and-wait condition leading to deadlock). There are three states of a philosopher, i.e., thinking, hungry and eating. These states are implemented as an enumerated data type,
enum{think, hungry, eat} state[5];
The neighbours (left and right) of a philosopher 'i' can be accessed as (i + 4)%5 and (i + 1)%5. Now, according to the restriction, the philosopher can grab chopsticks only if state[(i + 4)%5] != eat and state[(i + 1)%5] != eat, i.e., when both neighbours are not eating.
An array of data type 'condition' is declared, which is used when a philosopher is hungry and wants to pick up chopsticks. A monitor named "diningphilo" is created, which consists of three procedures or operations, namely grab_cpstk( ), leave_cpstk( ) and test_state( ), and also an initialization code. A philosopher 'X' must follow the sequence below,
grab_cpstk( );
------
------
------
eat;
------
------
------
leave_cpstk( );
The pseudocode for monitor “diningphilo” is given as follows,
monitor diningphilo
{
enum{think, hungry, eat} state[5];
condition self[5];
void grab_cpstk(int x)
{
state[x] = hungry;
test_state(x);
if(state[x]!= eat)
self[x].wait( );
}
void leave_cpstk(int x)
{
state[x] = think;
test_state((x + 4)%5);
test_state((x + 1)%5);
}
void test_state(int x)
{
if((state[(x + 4)%5] != eat) && (state[(x + 1)%5] != eat) && (state[x] == hungry))
{
state[x] = eat;
self[x].signal( );
}
}
initialization( )
{
for(x = 0; x < 5; x ++)
state[x] = think;
}
}
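As a brief usage sketch (illustrative, matching the monitor above), philosopher x's life cycle would be,

while(1)
{
    /* think */
    diningphilo.grab_cpstk(x);    /* blocks via self[x].wait( ) until it is safe to eat */
    /* eat */
    diningphilo.leave_cpstk(x);   /* re-tests both neighbours so they may proceed */
}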
Q48. Discuss briefly the procedure for implementing a monitor using semaphores. Also write the advantages of monitors over semaphores.
Answer :
Implementation of a Monitor using Semaphores
Semaphores can be used to produce the same effect as that of monitors. A semaphore 'sem_mutex' is created for each monitor. On entering and on leaving the monitor, every process should execute wait(sem_mutex) and signal(sem_mutex) respectively.
Another semaphore 'sem_nxt' is declared with initial value zero. This is used by the signalling processes to suspend themselves. An integer variable named nxt_count is provided to count the number of processes suspended on semaphore 'sem_nxt'. However, each procedure's code needs slight modification to include synchronization code. Any external procedure now contains,
wait(sem_mutex);
/* body of the procedure */
if(nxt_count > 0)
    signal(sem_nxt);
else
    signal(sem_mutex);
Thus, mutual exclusion within a monitor is achieved.
To implement the wait( ) and signal( ) operation for a condition ‘cn’, create a semaphore ‘sem_cn’ and an integer variable
named ‘cn_count’ both with initial value as zero. The operation cn.wait( ) is implemented as,
wait( )
{
    cn_count++;
    if(nxt_count > 0)
        signal(sem_nxt);
    else
        signal(sem_mutex);
    wait(sem_cn);
    cn_count--;
}
The operation cn.signal( ) has the following code,

signal( )
{
    if(cn_count > 0)
    {
        nxt_count++;
        signal(sem_cn);
        wait(sem_nxt);
        nxt_count--;
    }
}
Advantages of Monitors Over Semaphores

• A monitor is a construct that provides a simple mechanism to implement mutual exclusion, whereas semaphores provide a very complex way of implementing it. This is because the semWait( ) and semSignal( ) operations of semaphores are usually scattered throughout the program, which makes implementation difficult.

• Monitors are high-level constructs (usually programming language constructs) that provide flexibility in writing correct programs. On the other hand, semaphores demand strict sequencing.

• Monitors are shared objects that have many entry points (condition variables), but only one process can be inside the monitor at a time, hence maintaining mutual exclusion.
Q49. How do you resume a process within a monitor?
Answer :
When multiple processes are suspended on a single condition (cn) and a cn.signal( ) operation is executed, confusion arises in selecting which of these suspended processes to resume. The simplest solution available for this problem is to use the FIFO approach but, in many situations, this solution is not effective. Therefore a new approach is designed which uses the 'conditional-wait' construct, which is of the form cn.wait(x);
In this format, the integer expression 'x' is evaluated when the wait( ) operation is executed and is referred to as a priority number. It is stored along with the name of its associated process, which is in the suspended state. The process with the smallest priority number is resumed first when the next cn.signal( ) operation is executed.
For instance, consider a monitor for allocating a single resource using a resource allocator 'ResAlloc'. It allocates the resource based on the maximum time a process needs to use the resource, and hence the process with the shortest time is allocated first. The corresponding pseudocode for the above process is,
Monitor ResAlloc
{
    boolean busy;
    condition cn;
    void grab(int t)
    {
        if(busy)
            cn.wait(t);
        busy = true;
    }
    void release( )
    {
        busy = false;
        cn.signal( );
    }
    initialize_code( )
    {
        busy = false;
    }
}
In the above code, 't' represents the time.
However, there exist certain problems in using this method. These include,
• A resource might be accessed without permission being obtained.
• A resource might be acquired and then never released.
• A process might try to release a resource which it never requested.
• An already acquired resource might be requested again by the same process.
Internal Assessment

Objective Type
I. Multiple Choice Questions
1. The memory that is placed between CPU and main memory is ________. [ ]
(a) Cache (b) Random-access
(c) Read-only (d) Secondary

2. A _________ acts as an interface between a process and the operating system. [ ]
(a) System call (b) I/O device
(c) Memory (d) Interrupt service routine

3. Interactive computing capabilities can be obtained through, [ ]
(a) Batch-processing system (b) Time-sharing system
(c) Multiprogramming system (d) None of the above

4. _________ lies on top of the memory hierarchy. [ ]
(a) Cache (b) Registers
(c) Main memory (d) Auxiliary memory

5. Which of the following is not a type of system call? [ ]
(a) Process control (b) File manipulation
(c) Device manipulation (d) Resource allocation

6. Time sharing is a logical extension of _________. [ ]
(a) Single-processor systems (b) Multiprogramming
(c) Blade servers (d) SAN

7. A program in execution is known as _________. [ ]
(a) Thread (b) State
(c) Process (d) Status

8. In UNIX, a process is created using the _________ system call. [ ]
(a) create( ) (b) join( )
(c) fork( ) (d) init( )

9. _________ requires that a process should remain in its critical section only for a finite amount of time. [ ]
(a) Deadlock (b) Live lock
(c) Mutual exclusion (d) Starvation

10. _________ is not a requirement for the critical section problem. [ ]
(a) Mutual exclusion (b) Deadlock
(c) Progress (d) Bounded waiting
II. Fill in the Blanks

1. The computer resources like CPUs, memory and peripheral devices are controlled by a program known as _________.
2. A _________ operating system is one where rigid time requirements are placed on the processor.
3. _________ system refers to small portable devices that can be carried along and are usually battery powered.
4. The most common and user friendly user interface is _________.
5. _______ was the first system that was not written in assembly language.
6. ________ was a kernel modularized using the microkernel approach.
7. Each process is identified by a unique number called _________.
8. A user process can enter into kernel mode by issuing a _________.
9. _________ refers to a situation wherein processes wait indefinitely for being scheduled.
10. A semaphore whose value can be either '0' or '1' is known as _________.
Key

I. Multiple Choice Questions

1. (a)  2. (a)  3. (b)  4. (b)  5. (d)
6. (b)  7. (c)  8. (c)  9. (c)  10. (b)

II. Fill in the Blanks

1. Operating system
2. Real-time
3. Hand held
4. Graphical User Interface (GUI)
5. Master Control Program (MCP)
6. Mach
7. Process Identifier (PID)
8. System call
9. Starvation
10. Binary semaphore
III. Very Short Questions and Answers
Q1. Define operating system.
Answer :
An operating system is a program or a collection of programs that controls the computer hardware and acts as an intermediary between the user and the hardware.
Q2. What is a system call?
Answer :
The operating system provides a wide range of system services and functionalities. These services can be accessed by making use of system calls. The system calls act as the interface between user applications and the operating system (services).
Q3. Define process.
Answer :
Process is the fundamental concept of operating systems structure. A program under execution is referred to as a process.
Q4. What is IPC?
Answer :
Inter Process Communication (IPC) is defined as communication between processes. It provides a mechanism to allow processes to communicate and to synchronize their actions.
Q5. What is a semaphore?
Answer :
Signals provide a simple means of cooperation between two or more processes in such a way that a process can be forcefully stopped at some specified point till it receives a signal. For signalling between the processes a special variable called semaphore (or counting semaphore) is used.
Unit 2

CPU Scheduling and Deadlocks
Learning Objectives

After studying this unit, a student will have thorough knowledge about the following key concepts,

• CPU scheduling and various scheduling algorithms.
• Deadlock and its system model.
• Various methods of handling deadlocks.
• Deadlock prevention, deadlock avoidance and deadlock detection.
• Process of recovering from deadlock.
Introduction

In multiprogramming, there are several processes running simultaneously in various queues. These queues are managed using schedulers, i.e., the long-term, short-term and medium-term schedulers (LTS, STS and MTS). The decision with respect to the allocation of resources is made by scheduling algorithms, including FIFO, SJF, round robin, multilevel queue and multilevel feedback queue scheduling algorithms. During this allocation, a situation might arise in which the requested resources are held by other waiting processes. This is called a deadlock. There are various techniques for preventing, avoiding, detecting and recovering from deadlock.
Part-A
Short Questions with Solutions
Q1. What are short-term, long-term and medium term schedulings?
Answer : Model Paper-I, Q3

Long-term (Job) Scheduling

In an operating system, the number of processes submitted for execution is more than the number of processes that can be executed instantly. Therefore, some processes are spooled (stored) on a storage device and executed later. The processes from the job pool are selected by the long-term scheduler and loaded into memory.

• This scheduler executes less frequently (in minutes) as it can afford to take time for selecting processes from the job pool.
• It is invoked only when a process exits the system.
• It also controls the level of multiprogramming.
Medium-term Scheduler
Medium-term scheduler reduces the degree of multiprogramming. This is done by temporarily removing some processes from the memory. These processes are later reintroduced into the memory and their execution resumes from the state at which they were taken out. This temporary removal and reintroduction is characteristic of the medium-term scheduler.
This scheduler utilizes the swapping mechanism that is adopted in time-sharing systems.
Short-term (CPU) Scheduling
Short-term scheduling chooses the processes which are ready to be executed and assigns them to the CPU. This scheduler selects and submits a new process more frequently (in milliseconds).
Q2. List any three scheduling algorithms.
Answer : Model Paper-II, Q3

The following are three common scheduling algorithms,

(i) First Come First Served (FCFS) Scheduling
This algorithm allots the CPU to the process that requests it first from the ready queue. It is considered the simplest algorithm as it works on the FIFO (First In First Out) approach. In the ready queue, when a new process requests the CPU, it is attached to the tail of the queue, and when the CPU is free, it is allotted to the process located at the head of the queue (a worked example follows this list).
(ii) Shortest Job First (SJF) Scheduling
This algorithm schedules the processes by their CPU burst times, which means the process with the smallest CPU burst time is processed first, before other processes. If two processes have the same burst time then they are scheduled using FCFS scheduling. This is also called "shortest next CPU burst" scheduling.
(iii) Priority Scheduling
This algorithm associates each process with a priority, and the process with the highest priority gets the CPU first. If there are two processes with the same priority, FCFS scheduling is used to break the tie. Priorities generally span some fixed range of numbers, such as 0 to 7 or 0 to 4095; here, 0 may represent the highest priority.
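As a worked illustration of FCFS (a standard textbook example, not part of this answer): consider processes P1, P2 and P3 with CPU burst times of 24, 3 and 3 ms arriving in that order. Under FCFS, P1 waits 0 ms, P2 waits 24 ms and P3 waits 27 ms, giving an average waiting time of (0 + 24 + 27)/3 = 17 ms. If the arrival order were P2, P3, P1 instead, the waits would be 0, 3 and 6 ms, and the average would drop to 3 ms, showing how sensitive FCFS is to arrival order.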
Q3. What is the starvation problem with respect to CPU scheduling?
Answer :
Two or more processes are said to be in starvation if they are waiting perpetually for a resource which is occupied by another process. The process that has occupied the resource may or may not be present in the list of processes in the starving state.
Let P1, P2 and P3 be three processes, each of which requires periodic access to resource R. If access to resource 'R' is granted to process P1, then the other two processes P2 and P3 are delayed as they wait for the resource 'R'. Now, let the access be granted to P3, and suppose P1 again needs 'R' prior to the completion of its critical section. The OS then permits P1 to use 'R' after P3 has completed its execution. These alternate access permissions provided to P1 and P3 cause P2 to be blocked.

Q4. What is FCFS?

Answer :

FCFS stands for First Come First Served. The typical use of FCFS in an OS is to serve the processes waiting in a ready queue to be allocated the CPU. Using FCFS, a newly arrived process is placed at the end of the queue, and the process at the head of the queue is allocated the CPU first.

Q5. Define deadlock.

Answer : Model Paper-III, Q3

Deadlock

A deadlock is a situation in which a process waits indefinitely for requested resources while those resources are held by other processes which are themselves in a waiting state. This situation prevents the process from ever changing its state, and is called a deadlock situation.

Example

Two trains travelling in opposite directions towards each other on a single railway track.

Q6. What is a circular wait in deadlocks?

Answer :

There exists a chain of waiting processes (P0, P1, ..., Pn) such that process P0 is waiting for a resource currently in use by process P1, P1 is waiting for a resource that is held by P2, P2 is waiting for a resource that is held by P3, and so on; finally, process Pn is waiting for a resource held by P0.

In other words, each process in the chain holds a resource while waiting for unavailable units of resource types held by another process. This type of waiting is referred to as circular wait.

Q7. List three overall strategies in handling deadlocks.

Answer : Model Paper-I, Q4

A deadlock can be handled in the following ways,

(i) Avoiding the occurrence of a deadlock by using various deadlock prevention and avoidance techniques.

(ii) In case a deadlock occurs in a system, different detection and recovery techniques can be implemented.

(iii) Using no method at all to detect, recover from, prevent or avoid the deadlock; the deadlock is simply ignored.

Q8. List three examples of deadlocks that are not related to a computer system environment.

Answer :

The following are the real world examples of deadlocks that are not related to a computer system environment.

(i) Two cars moving in opposite directions crossing a bridge that has the capacity for only one car to cross at a time.

(ii) Two persons on a single ladder among whom one is climbing up and another is going down.

(iii) On a single railway track, two trains travelling in opposite directions towards each other.
SIA PUBLISHERS and DISTRIBUTORS PVT. LTD. 51
Computer Science Paper-V Operating systems
Q9. What is safe state in deadlocks?
Answer : Model Paper-II, Q4

Consider a system consisting of several processes like < P1, P2, P3,....., Pn>. Each of them requires certain resources
immediately and also specifies the maximum number of resources that they may need in their life time. Using this information, a
“safe sequence” is constructed. A safe sequence is a sequence of processes where their resource request can be satisfied without
having deadlock to occur. If there exists any such safe sequence, then system is said to be in “safe state” during which deadlock
cannot occur. An unsafe state may lead to deadlock but not always. There are some sequences in unsafe state which can lead to
deadlock.
[Figure: States in a System — the deadlock region lies within the unsafe region; the safe region is disjoint from both]
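As a worked illustration of a safe sequence (a standard textbook example, not part of this answer): suppose a system has 12 instances of a resource and three processes P0, P1 and P2 with maximum needs of 10, 4 and 9, currently holding 5, 2 and 2 instances respectively, leaving 3 free. P1 can finish with the 3 free instances (it needs at most 2 more), releasing 4; then P0 can finish (it needs 5 more, and 5 are now free), releasing 10; finally P2 can finish (it needs 7 more, and 10 are free). Hence <P1, P0, P2> is a safe sequence and the state is safe.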


Q10. Draw a resource allocation graph to show a deadlock.
Answer : Model Paper-III, Q4

Consider the following example graph consisting of two processes (P1 and P2) and two resources (R1 and R2) such that P2
has resource R2 and requests for R1 and P1 has resource R1 and may claim for R2 in future. This action can create a cycle in the
graph which means deadlock is possible and system is in unsafe state. Hence, allocation should not be done.
Figure: Resource Allocation Graph Representing Unsafe State (P2 holds R2 and requests R1; P1 holds R1 and may claim R2)


Part-B
Essay Questions with Solutions
2.1 CPU Scheduling

2.1.1 Concepts
Q11. Write a short note on,
(i) Program
(ii) Jobs
(iii) Job scheduling.
Answer :
(i) Program
‘Program’ refers to the collection of instructions given to the system in any programming language. Alternatively, a
program is a static object residing in a file. The spanning time of a program is unlimited. A program can exist at a single place
in space. In contrast to a process, a program is a passive entity. It consists of different types of instructions such as arithmetic
instructions, memory instructions and input/output instructions, etc.
(ii) Jobs
A job is a sequence of programs used to perform a particular task. Typically a job is carried out in various steps where
each step depends on the successful execution of its preceding step. It is usually used in a non-interactive environment.
Example
In a job of executing a C program, a sequence of tasks is involved: compiling, linking and executing the
program. Here, linking depends on the successful completion of compiling, and executing depends on the successful completion
of linking.
(iii) Job Scheduling
Job scheduling is also called as long-term scheduling which is responsible for selecting a job from disk and transferring
it into main memory for execution. It is also responsible for deciding which process is to be selected for processing. When
compared with short-term scheduler, its execution is less frequent.
One of the major functions of the job scheduler is to control multiprogramming. This is because, if the number of processes in
the ready queue (or) memory becomes high, it imposes an overhead on the operating system, which finds it difficult to maintain
long lists, perform context switching and dispatch beyond a limit. Therefore, the job scheduler allows only a limited number of
processes into the memory. The process of selecting the processes for execution in job scheduling is independent of time.
Some operating systems such as UNIX and Windows do not use a long-term scheduler. These systems simply insert
each new process in the ready queue and use a short-term scheduler for selecting the process for execution. This approach is
mainly used in time sharing systems.
Q12. Explain various scheduling concepts.
Answer : Model Paper-I, Q10(a)

The various CPU scheduling concepts are as follows,


CPU I/O Burst Cycle
A typical process execution consists of a cycle of CPU execution (CPU burst) and I/O wait (I/O burst). This implies that a process
executes a CPU burst followed by an I/O burst, and this cycle repeats continuously till the termination of that process. A CPU burst
carries read and write operations and its duration varies depending on the process. An I/O-bound program typically has many short
CPU bursts, while a CPU-bound program has a few long CPU bursts.
CPU Scheduler (or) Short-term Scheduler
The short-term scheduler chooses the processes which are ready to be executed and assigns them to the CPU. This scheduler
selects and submits a new process more frequently (in milliseconds).

Preemptive Scheduling
Scheduling is defined as the activity of deciding when processes will receive the requested resources. These decisions are
made when,
1. An active process switches to waiting state.
2. An interrupt occurs, whereupon an active process switches to ready state.
3. A process in waiting state jumps to ready state.
4. A process terminates.
Scheduling is said to be non-preemptive if scheduling decisions are taken only under circumstances 1 and 4 above. This
means that a process under non-preemptive scheduling goes on executing until it terminates (or) switches to waiting state.
Unlike non-preemptive scheduling, in preemptive scheduling the processor can be allotted to a different process before the
completion (or) termination of the active process. It was first used by Windows 95 and later on by all the advanced versions of
Windows. Use of preemptive scheduling enhances system performance but increases the cost associated with shared data.
Usually, this problem occurs when two processes are sharing data and one of them is updating it. At this stage, if the second
process interrupts the updations and tries to gain access to the data which is under updation, then it leads to inconsistencies.
Another issue related with preemption is that it affects the design of the kernel. This is the case when the modification of
certain important data associated with the kernel is interrupted. One solution for this problem is to avoid interruption during
modification of kernel data and make the interrupting process wait until the completion of the kernel activity. This solution
is used by certain versions of UNIX but it is not considered effective in the case of real-time systems.

Dispatcher
Dispatcher is a module associated with CPU scheduling whose responsibility is to allot the CPU to the process which is
selected from the ready queue by the short-term scheduler. Apart from this, it performs certain additional functions which
include switching among kernel and user modes, context switching and pointing out the exact location from which the
program needs to be restarted.
The time taken by the dispatcher to switch the CPU from one process to another is called dispatch latency. This time should
be as small as possible because it is incurred at every process switch.

2.1.2 Scheduling Criteria

Q13. Explain the scheduling criteria used for short-term scheduling.
Answer :

Short-term Scheduling
Short-term scheduling takes place whenever an event occurs that may lead to the interruption of the current process in
favour of another. Examples of such events include,
(a) Clock interrupts
(b) I/O interrupts
(c) Operating system calls
(d) Signals.
This is the actual decision of which ready process to execute next.

Figure: Levels of Scheduling (short-term scheduling moves processes among the Running, Ready and Blocked states;
medium-term scheduling handles the Ready-suspend and Blocked-suspend states; long-term scheduling controls admission
and exit)

A set of criteria is established against which various scheduling policies may be evaluated and is categorized into
user-oriented and system-oriented criteria. User-oriented criteria relate to the behaviour of the system as perceived by the
individual user or process. Example: Response time.
The other criteria are system-oriented i.e., the focus is on effective and efficient use of the processor. Example: Throughput.
Criteria for Short Term Scheduling
Many criteria have been suggested for comparing CPU scheduling algorithms. The criteria that are commonly used are as
follows,
CPU utilization : The amount of time that the CPU is kept busy executing processes.
Throughput : The number of processes that are completed per unit time.
Turnaround time : The interval from the time of submission to the time of completion.
Waiting time : The sum of the periods spent waiting in the ready queue.
Response time : The time from the submission of a request until the first response is produced in an interactive
system.
It is desirable to keep the CPU busy all the time, which is done by maximizing CPU utilization and throughput while
minimizing turnaround time, waiting time and response time. CPU scheduling basically decides which of the processes in
the ready queue the CPU is allocated to next.

2.1.3 Scheduling Algorithms


Q14. Explain FCFS, SJF, Priority, Round robin scheduling algorithms.
Answer : Model Paper-II, Q10(a)

Scheduling
Scheduling is defined as the activity of deciding, when processes will receive the resources they request. There exist
several scheduling algorithms among which some are as follows,
1. First Come First Served (FCFS) Scheduling
This algorithm allots the CPU to the process that requests it first from the ready queue. It is considered the simplest algorithm
as it works on the FIFO (First In First Out) approach. When a new process requests the CPU, it is attached to the tail of the
ready queue and when the CPU is free, it is allotted to the process located at the head of the queue.
One of the difficulties associated with FCFS is that the average waiting time is quite long. For instance, consider a set of
three processes P1, P2, P3 whose CPU burst times are given below,

Process Burst Time (ms)


P1 24
P2 3
P3 3

If the sequence of arrival is P1, P2, P3 then we get the following result.

P1 P2 P3
0 24 27 30
So,
Waiting time for process P1 = 0 ms
Waiting time for process P2 = 24 ms
Waiting time for process P3 = 27 ms
Average waiting time = (0 + 24 + 27)/3 = 51/3 = 17 ms
If the sequence of arrival is P2, P3, P1, then we get the following Gantt chart.

P2 P3 P1
0 3 6 30
Waiting times for P1, P2, P3 are now 6 ms, 0 ms, 3 ms respectively and the average waiting time is (6 + 0 + 3)/3 = 3 ms. So, the
average waiting time varies with the variation in process CPU-burst times.
Another difficulty with FCFS is that it tends to favour CPU-bound processes over I/O-bound processes. Consider a collection
of processes, one of which mostly uses the CPU and a number of which mostly use I/O devices.
When the CPU-bound process is running, all the I/O-bound processes must wait, which causes the I/O devices to be idle.
After finishing its CPU operation, the CPU-bound process moves to an I/O device. Now, all the I/O-bound processes, having
very short CPU bursts, execute quickly and move back to the I/O queues, causing the CPU to sit idle. In this way FCFS may
result in inefficient use of both the processor and the I/O devices.
Once the CPU has been allocated to a process, the process will not release the CPU until it terminates or switches to the
waiting state. So, this algorithm is non-preemptive, which makes it troublesome for time-sharing systems in which each user
gets the CPU on a time-shared basis.
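The FCFS arithmetic above is easy to verify programmatically. The following minimal Python sketch (the burst values are
taken from the example above; all processes are assumed to arrive at time 0) computes the waiting times for any arrival order:

    # FCFS: a process waits exactly as long as the total burst time of
    # everything that entered the queue before it (all arrivals at time 0).
    def fcfs_waiting_times(bursts):
        waits, elapsed = [], 0
        for burst in bursts:
            waits.append(elapsed)   # wait until the CPU becomes free
            elapsed += burst        # the CPU is then busy for this burst
        return waits

    print(fcfs_waiting_times([24, 3, 3]))  # order P1, P2, P3 -> [0, 24, 27], average 17 ms
    print(fcfs_waiting_times([3, 3, 24]))  # order P2, P3, P1 -> [0, 3, 6],   average 3 ms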
2. Shortest Job First (SJF) Scheduling
This algorithm schedules the processes by their CPU burst times, which means the process with the smallest CPU burst time
is processed first, before other processes. If two processes have the same burst time, they are scheduled using FCFS
scheduling. This is also called the "shortest next CPU burst" algorithm.
Consider the following example,
Process Burst Time (ms)
P1 6
P2 8
P3 7
P4 3

Using SJF scheduling, the following result is obtained.

P4 P1 P3 P2
0 3 9 16 24

Waiting time for process P1 = 3 ms


Waiting time for process P2 = 16 ms
Waiting time for process P3 = 9 ms
Waiting time for process P4 = 0 ms
So, average waiting time = (3 + 16 + 9 + 0)/4 = 28/4 = 7 ms
This algorithm gives the minimum average waiting time: moving a short process before a long one decreases the waiting time
of the short process more than it increases the waiting time of the long process. The difficulty is that the CPU needs to know
the length of the next CPU request, which is hard to compute. Therefore SJF cannot be implemented at the level of short-term
CPU scheduling, though it is used frequently in long-term scheduling.
One way to overcome this difficulty is to predict the length of the next CPU burst rather than determining it exactly. This
can be done by computing an exponential average of the lengths of the previous CPU bursts associated with the process. It can
be computed using the formula,

τn+1 = α tn + (1 – α) τn

Where,
tn = length of the most recent (nth) CPU burst
τn = past history of CPU bursts (the previous prediction)
α = parameter (0 ≤ α ≤ 1) that is used to control the relative weight of recent and past information.
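As a rough sketch of this prediction in code (the α value, the initial guess τ0 and the burst history below are invented purely
for illustration), each new prediction blends the most recent observed burst with the running history:

    # Exponential averaging: tau(n+1) = alpha*t(n) + (1 - alpha)*tau(n)
    def predict_next_burst(observed_bursts, alpha=0.5, tau0=10.0):
        tau = tau0                        # initial guess for the first burst
        for t in observed_bursts:         # t = most recently observed CPU burst
            tau = alpha * t + (1 - alpha) * tau
        return tau                        # prediction for the next burst

    print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))  # drifts towards 13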
The SJF algorithm can be either preemptive or non-preemptive. In preemptive SJF, when a process is running and a new
process arrives whose CPU burst time is shorter than the remaining time of the active process, the new process preempts the
active one. This variant is also called shortest-remaining-time-first scheduling.
In non-preemptive SJF, the process keeps the CPU till the completion of its CPU burst. This variant is also called the Shortest
Process Next (SPN) algorithm.
3. Priority Scheduling
This algorithm associates each process with a priority and the process with the highest priority gets the CPU first. If there are
two processes with the same priority, FCFS scheduling is used to break the tie. Priorities generally come from some fixed
range of numbers, such as 0 to 7 or 0 to 4095. SJF can be viewed as a special case of priority scheduling in which the priority
is the inverse of the (predicted) next CPU burst: the process with the lowest CPU burst gets 0, the highest priority here.
Depending on the system, high priority can be represented by either the lowest or the highest number. In the following example,
we assume that low numbers represent high priority. Consider a set of processes, arrived at time 0, in the sequence P1, P2, ..., P5,
with the burst times and priorities as follows,

Process Burst time (ms) Priority


P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2

Using priority scheduling, the following Gantt chart is obtained,

P2 P5 P1 P3 P4
0 1 6 16 18 19

Waiting time for process P1 = 6


Waiting time for process P2 = 0
Waiting time for process P3 = 16
Waiting time for process P4 = 18
Waiting time for process P5 = 1
The average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 ms
The allotment of priorities can be carried out internally as well as externally. With internally defined priorities, priorities are
allotted based on the computation of certain measurable quantities such as CPU burst time, time limits etc. With externally
defined priorities, priorities are assigned based on certain external factors associated with the process, for example assigning
priorities based on the importance of the process.
Similar to SJF, priority scheduling can be either preemptive or non-preemptive. A major drawback associated with this
algorithm is indefinite blocking, which is also called starvation. In this case, a process with low priority may never get the
CPU because the CPU keeps being allotted to higher-priority processes. To avoid this, a technique called 'aging' is employed,
which gradually increases the priority of processes that have been waiting in the ready queue for a long time.
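A minimal sketch of aging follows (the boost of one priority level per scheduling pass is an arbitrary choice, not part of the
algorithm's definition; lower numbers mean higher priority, as in the example above):

    # Priority scheduling with aging: on every pass, waiting processes creep
    # towards higher priority so a low-priority process cannot starve.
    def pick_next(ready):                  # ready: list of [name, priority]
        ready.sort(key=lambda p: p[1])     # lowest number = highest priority
        chosen = ready.pop(0)
        for p in ready:
            p[1] = max(0, p[1] - 1)        # aging: boost everyone left waiting
        return chosen

    ready = [["P1", 3], ["P2", 1], ["P3", 3], ["P4", 4], ["P5", 2]]
    while ready:
        print(pick_next(ready)[0])         # P2, P5, P1, P3, P4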
4. Round Robin Scheduling
This algorithm is considered a preemptive version of the FCFS algorithm which is especially designed to be used in time
sharing systems. Preemption, i.e., switching between various processes, is carried out by creating certain time intervals called
time slices (or) time quanta, whose typical value lies between 10 and 100 ms in length. Based on these time slices, the CPU is
allotted to the processes present in the ready queue, making the ready queue act as a circular queue. Each process uses the
CPU for one quantum of time and then the CPU is allotted to the next process.
In the ready queue of RR scheduling, new processes are added in FIFO order. Starting from the head of the ready queue,
each process is allotted a certain time interval (time slice) and dispatched.
During the allocation of CPU to the process, either of the two situations can arise,
(a) The process completes within the time slice and the scheduler simply allocates the CPU to the next process present in a
queue.
(b) The process does not complete its execution within the time slice. In this case, an interrupt occurs and the process is
moved to the tail of the ready queue. The CPU is then allocated to the next process at the head of the queue.

The average waiting time under the RR scheduling algorithm is often long. For example, consider a set of processes whose
CPU burst times are as follows,
Process CPU Burst Time (ms)
P1 25
P2 4
P3 4

Let us assume the time quantum is 5 ms. In this case, P1 is first allocated the CPU for 5 ms and then it is sent to the tail
of the queue; P1 requires another 20 ms to complete its execution. Now, the CPU is allocated to P2, which releases it after
4 ms because it needed only 4 ms to complete, and hence it quits before the expiration of the time slice. Next, the CPU is
allocated to P3, which also requires only 4 ms and hence also quits before expiration. Now the Gantt chart will be as follows,

P1 P2 P3 P1 P1 P1 P1
0 5 9 13 18 23 28 33

Waiting time for process P1 = (13 – 5) = 8 ms
Waiting time for process P2 = 5 ms
Waiting time for process P3 = 9 ms
The average waiting time associated with the above set of processes is (8 + 5 + 9)/3 = 22/3 = 7.33 ms
No process is allowed to use the CPU for more than one quantum at a stretch. For this reason, this algorithm is dependent on
the size of the time slice. It acts as a typical FCFS algorithm in the case of large time slices, whereas in the case of very small
time slices the algorithm appears as processor sharing, in which each of the n processes seems to have its own processor
running at 1/n the speed of the real processor.
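The Gantt chart above can be reproduced with a simple queue-based simulation. This sketch assumes all three processes
arrive at time 0 and uses the 5 ms quantum from the example:

    from collections import deque

    # Round robin: run each process for at most one quantum, then move it
    # to the tail of the ready queue if it still has work left.
    def round_robin(bursts, quantum):
        queue, time, chart = deque(bursts.items()), 0, []
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            chart.append((name, time, time + run))
            time += run
            if remaining > run:                  # unfinished: back to the tail
                queue.append((name, remaining - run))
        return chart

    print(round_robin({"P1": 25, "P2": 4, "P3": 4}, quantum=5))
    # [('P1',0,5), ('P2',5,9), ('P3',9,13), ('P1',13,18), ..., ('P1',28,33)]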
Q15. Explain multilevel queue scheduling.
Answer :
Multilevel queue scheduling categorizes processes into different groups by maintaining a separate ready queue for each
group. Foreground and background processes are popular examples belonging to different categories. Both of them have
different response-time and CPU-utilization requirements, hence different scheduling algorithms are applied to them. Round
Robin is applied to foreground processes because they need to be interactive, and FCFS can be applied to background processes.
There may be several other groups.
Example
1. System processes (Highest priority)
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. User processes (Lowest priority).
Figure: Multilevel Queue Scheduling (separate queues from system processes at the highest priority down to user processes
at the lowest priority)
If a higher priority queue has some processes waiting for the CPU, then lower priority processes cannot execute until all
higher queues are empty. Suppose that at time instant t0 all higher queues are empty and a user process starts executing; if at
t1 a system process enters its queue, then the user process would be preempted in order to execute the higher priority system
process.

Another scheme is to use time slicing among the queues. For example, 80% of CPU time can be given to foreground
processes and time slicing can be applied within the queue again. The remaining 20% of CPU time can be given to background
processes and FCFS can be applied within its queue.

Q16. Discuss about multilevel feedback queue scheduling.

Answer :

Multilevel Feedback Queue Scheduling

In a multilevel feedback queue scheduling algorithm, processes can be moved between the various queues. This moving
of processes is performed by considering various factors such as,

 Moving the processes from a higher priority queue to a lower priority queue if they are time consuming.

 Moving the processes from a lower priority queue to a higher priority queue if they have been waiting to be executed for
a long period of time.

This second rule prevents the starvation problem.

Example

Consider four queues that are maintained using multi level feedback queue scheduler as shown in the figure.

Figure: Multilevel Feedback Queue Scheduling (Queue-0 at the highest priority down to Queue-3, which is served FCFS)

Here, execution starts at queue-0, which carries the highest priority processes, and if it becomes empty, execution moves to
queue-1 and so on. In case a higher priority process arrives while a lower priority process is executing, the lower priority
process is preempted and control is allotted to the higher priority process. The processes of queue-0 are given a time slice of
8 ms; if they fail to complete within 8 ms, they are placed at the end of the next queue i.e., queue-1. Similarly, for queue-1
the time slice for each process is 16 ms and if they fail to complete, they are placed at the end of queue-2, and so on. The last
queue in this algorithm works on a First Come First Served (FCFS) basis. (A small sketch of this demotion rule follows the
parameter list below.)

Processes are placed in queues based on their CPU burst times. Various parameters associated with this scheduler include
the following,

 Number of queues

 Scheduling algorithm associated with every queue

 Approach used to identify the correct time for moving a process from a lower to a higher priority queue

 Approach used to identify the correct time for moving a process from a higher priority to a lower priority queue

 Approach used to identify the correct queue for executing a particular process.
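The following minimal sketch illustrates only the demotion rule described above (the queue-2 quantum of 32 ms and the
sample processes are invented, since the text does not fix them; new arrivals and preemption are ignored for brevity):

    from collections import deque

    quanta = [8, 16, 32]                        # quanta for queue-0..queue-2 (ms)
    queues = [deque() for _ in range(4)]        # queue-3 is served FCFS
    queues[0].extend([("P1", 30), ("P2", 6)])   # hypothetical (name, burst) pairs

    time = 0
    for level in range(4):                      # start at queue-0, drain downwards
        while queues[level]:
            name, remaining = queues[level].popleft()
            run = remaining if level == 3 else min(quanta[level], remaining)
            time += run
            if remaining > run:                 # time slice exhausted: demote
                queues[level + 1].append((name, remaining - run))
            else:
                print(name, "finished at", time, "ms")   # P2 at 14, P1 at 36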


2.2 Deadlocks

2.2.1 System Model


Q17. Discuss briefly about the system model.
Answer :
System Model
In any system, there are a number of processes which are competing for the resources available in the system. Memory,
CPU, files, input/output devices (like printer, scanner, etc.) are called resource types. Each resource type may have many
instances i.e., a number of resources of the same type. For example, printer is a particular resource type and if there are two
printers in the system, then two instances of the resource type printer are available. Resources belong to the same class of
resource type only if any instance of that type can satisfy a process's request for that resource.
However, there may be a situation where resources of the same type have to be declared as belonging to different classes.
For example, if there are two printers, one in building-1 and the other in building-2, then from the user's view both are not
the same printer; any user will prefer to print documents on the printer nearest to him. Even though both resources are of the
same type (printers), they need to be defined as separate types. So, the declaration of resource types should be done carefully.
During execution, a process has to “request” a resource before using it and “release” the same after usage. A process
should not request more resources than available.
The sequence of normal operation is as follows,
(i) Request
A process first requests the resource it needs and then waits for its allocation if the resource is busy with some other process.
Requests are made using system calls such as allocate( ) for memory, open( ) for files, request( ) for devices etc.
(ii) Usage
After getting the resource, the process can use it, like if it is a printer, process will print something. If it is a disk, process
will read or write on it etc.
(iii) Release
After usage, processes should release the resource so that others can use it. It is done by using system calls like free( ) for
memory, close( ) for files, release( ) for devices etc.
The operating system maintains a table of records to store information about resources i.e., whether they are free or
allocated to some other process. Another table of allocated resources is maintained which stores process IDs to which they are
allocated. A queue is created, which carries processes whose requested resources are busy.
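The request–use–release discipline can be seen in ordinary file handling, where open(), write() and close() play the roles of
request, usage and release (a generic Python illustration; the system-call names mentioned above vary between operating
systems):

    # Request -> Usage -> Release, illustrated with a file resource.
    f = open("report.txt", "w")    # request: ask the OS for the resource
    f.write("resource in use\n")   # usage: work with the resource
    f.close()                      # release: give the resource back so that
                                   # other processes can acquire it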
Q18. Discuss the two categories of resources.
Answer :
The two categories of resources are,
1. Reusable resources
2. Consumable resources.
1. Reusable Resources
A reusable resource is one that can be used by only one process at any time without causing any damage to it. Also it is
not depleted by the repeated usage. Processes can obtain the resources and after using them they are released, so that the other
processes can reuse them. Processors, input/output channels, main and secondary memory, data structures and semaphores are
all examples of reusable resources.
Example
Consider a situation in which two processes A and B are using a reusable resource i.e., memory, a deadlock occurs when
the requests are made in the following order. Assume that the total memory space available for allocation is 100 kB.

Process-A Process-B

Request 50 kB; Request 30 kB;

Request 35 kB; Request 40 kB;


Deadlock occurs if both processes move ahead to their second requests: after the first requests, only 20 kB of the 100 kB
remain, which satisfies neither second request (35 kB and 40 kB). This problem can be eliminated by using virtual memory.
2. Consumable Resources
A resource that can be easily created and destroyed is referred to as a ‘consumable resource’. A process may obtain any
number of such resources i.e., there is no restriction on the number of consumable resources used by any process.
Interrupts, signals, messages, information contained in input/output buffers are all examples of consumable resources.
Example
Consider a situation in which two processes A and B are using a consumable resource (messages), where each process tries
to receive a message from the other process before sending its own message to that process.
Process-A Process-B

Receive (process-B); Receive (process-A);

Send (process-B, msg-A); Send (process-A, msg-B);


Hence, if the receiving process is blocked till a message is received, both processes wait forever; a deadlock arises, and such
errors are difficult to detect.
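This blocking-receive deadlock can be demonstrated with two threads that each wait on an empty mailbox before sending (a
deliberately broken sketch: both threads block forever on get(), exactly as described above):

    import threading, queue

    box_a, box_b = queue.Queue(), queue.Queue()   # mailboxes for A and B

    def process_a():
        box_a.get()                # Receive(process-B): blocks, box_a is empty
        box_b.put("msg-A")         # Send(process-B): never reached

    def process_b():
        box_b.get()                # Receive(process-A): blocks, box_b is empty
        box_a.put("msg-B")         # Send(process-A): never reached

    threading.Thread(target=process_a, daemon=True).start()
    threading.Thread(target=process_b, daemon=True).start()
    # Neither thread ever sends, so both wait forever: a deadlock on
    # consumable resources (messages).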

2.2.2 Deadlock Characterization


Q19. Define deadlock. Explain necessary conditions for arising deadlocks.
Answer : Model Paper-III, Q10(a)
Deadlock
A situation in which a process waits indefinitely for requested resources while those resources are held by other processes in a
waiting state. This situation prevents the process from changing its state and is called a deadlock situation.
Conditions for the Deadlock
A deadlock can occur if the following four conditions hold simultaneously in a system,
1. Mutual Exclusion
Mutual exclusion refers to a non-sharable environment in which not more than a single process can be allocated a particular
resource at a time. In such an environment, if a resource which is already in use by one process is requested by some other
process, the request is kept on hold until the release of that resource.
2. Hold and Wait
A process that is already holding at least one resource requests additional resources which are currently held by other
processes. This situation is known as hold and wait.
3. No Preemption
A resource allocated to one process can be allocated to another only when the process holding it releases it voluntarily after
completing its task. This means that resources cannot be preempted.
4. Circular Wait
There exists a list of waiting processes (P0, P1, ... , Pn) such that process P0 is waiting for a resource currently under the usage
by process P1, P1 is waiting for a resource that is held by P2, P2 is waiting for a resource that is held by P3 and so on. Finally, a
process Pn is waiting for the resource held by P0.
Out of the four conditions described above, the first three conditions are necessary but not sufficient for the existence of a
deadlock. The fourth condition actually results from the first three i.e., the first three conditions give rise to a sequence of
events that finally leads to an unresolvable circular wait, which is the immediate cause of deadlock.
Q20. Discuss about resource allocation graph.
Answer :
A resource allocation graph is a directed graph, given as G = (V, E), containing a set of vertices V and edges E. The vertices
are divided into two types, i.e., a set of processes, P = {P1, P2, P3, ..., Pn}, and a set of resources, R = {R1, R2, R3, ..., Rn}.
An edge from a process Pi to a resource Rj (Pi → Rj) indicates that Pi has requested resource Rj; it is called a 'request edge'.
An edge from a resource Rj to a process Pi (Rj → Pi) indicates that Rj is allocated to process Pi; it is called an 'assignment
edge'. When a request is fulfilled, the request edge is transformed into an assignment edge. Processes are represented as
circles and resources as rectangles. The following is an example of a graph where process P1 has R1 and requests R2.

Figure: Resource Allocation Graph (P1 holds R1 and requests R2)

For avoiding deadlocks using the resource allocation graph, it is modified slightly by introducing a new edge called a 'claim
edge'. A claim edge from process Pi to resource Rj indicates that Pi may request Rj in the future. The direction of the arrow
is the same as a request edge but it is drawn with a dashed line.

Figure: Claim Edge (dashed arrow from Pi to Rj)

To describe the usage of this graph in deadlock avoidance, let us consider the following example graph consisting of two
processes (P1 and P2) and two resources (R1 and R2) such that P2 holds resource R2 and requests R1, and P1 holds resource
R1 and may claim R2 in future. This can create a cycle in the graph, which means deadlock is possible and the system is in an
unsafe state. Hence, the allocation should not be done.

Figure: Resource Allocation Graph Representing Unsafe State

2.2.3 Methods for Handling Deadlocks

Q21. Discuss the methods for handling deadlock.
Answer :
A deadlock can be handled in the following ways,
(i) Avoiding the entry of a deadlock by using various deadlock prevention and avoidance techniques.
(ii) In case a deadlock has entered the system, various detection and recovery techniques can be implemented.
(iii) No method is used to detect, recover, prevent (or) avoid, and the deadlock is simply ignored.
Deadlock Prevention
For answer refer Unit-II, Page No. 62, Q.No. 22.
Deadlock Avoidance
For answer refer Unit-II, Page No. 64, Q.No. 23.
Deadlock Recovery
For answer refer Unit-II, Page No. 68, Q.No. 27.

2.2.4 Deadlock Prevention

Q22. Briefly explain about deadlock prevention methods with examples of each.
Answer : Model Paper-I, Q10(b)
Deadlock Prevention
Deadlock prevention means placing restrictions on resource requests so that deadlock cannot occur. Deadlock can be
prevented by denying at least one of the conditions of deadlock, as follows,
(i) Mutual Exclusion
The concept of mutual exclusion can be applied on resources that are non-sharable.
Example
A printer is a non-sharable resource. If one program is using the printer then other programs must wait for it, and this
waiting may become indefinite. To overcome this, non-sharable resources would have to be made sharable. However,
making the printer sharable would garble its output. In general, it is not possible to prevent deadlock by denying the
mutual exclusion principle, since some resources must remain non-sharable i.e., when one program is accessing such a
resource, other programs cannot access it.
(ii) Preventing Hold and Wait
The hold and wait condition precludes a process from holding some resources while requesting others. There are two
protocols to prevent hold and wait.
(a) Request all the resources before starting execution. A program must request all the resources before starting
execution, acquire them, use them and release them once execution is completed. In this way, no process will wait
for resources during execution. Thus, hold and wait is prevented.
(b) A process may request new resources only when it holds none i.e., if a process holds two resources and requires
three more, then it can request these three resources only after releasing the two resources it is holding. After
releasing the resources, it requests the new ones. If they are busy, the process waits without holding any resources.
If the requested resources are free, then they can be allocated to the process.
Example
Consider a process that needs to copy data from a DVD drive to a disk file, sort the file and then send the data to the
printer. The process either requests all the resources at the beginning of its execution, or it acquires each resource only
after releasing the ones it holds. Both cases prevent the hold and wait condition.
(iii) Preventing No Preemption
If a system follows no preemption, then resources which are once allocated are not taken back from a process involuntarily.
A system with the above behaviour can lead to the hold and wait condition and thereby to deadlock. Hence, the
no-preemption condition should be denied in order to prevent deadlock: if a process P1 requests a resource R1 which is
currently held by some other process P2, then P2 can be preempted and R1 given to P1.
Example
Consider a car occupying some part of the street which cannot be taken by other car until and unless the first car has
been moved. This situation is known as No preemption. An occurrence of this condition can be prevented only if the
first car has been moved forcibly giving the other car a chance to place itself and finish its task.
(iv) Preventing Circular Wait
Circular wait occurs, when there are a set of n processes {Pj}, that hold units of a set of n different resources {Rj} such
that, Pj holds Rj while it requests units of a different resources in the set. In other words, each of the n resources are
held by the n processes, but each process then requests unavailable units of one of the resource types held by another
process. A circular wait is reflected by the resource process relationships (represented wholly within a system state), so
the state-transition model does not help in the study of this problem.
To prevent circular wait, an ordering is imposed on all resource types i.e., a unique integer number is assigned to each
resource. A process may request resources only in increasing order of this numbering; before it can request a resource with
a lower number than one it currently holds, the higher-numbered resource must be released.
If a process acquires all of the resources it needs at one time, then it will never be in a situation where it is holding a
resource and waiting for another resource. This prevents deadlock.
Example
Consider that the resources have been assigned positive integers as follows,
Printer → 1
Tape drive → 2
Card punch → 3
Card reader → 4
Plotter → 5
Now, a process must request the resources in numerical order. For example, a process can request the printer first,
followed by the card punch and card reader (1, 3, 4). It cannot request the card reader first and then the printer.
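A minimal sketch of this numbering rule, using locks to stand in for the resources of the table above: because every
process acquires resources strictly in increasing numeric order, no circular chain of waits can form.

    import threading

    # 1=printer, 2=tape drive, 3=card punch, 4=card reader, 5=plotter
    resources = {i: threading.Lock() for i in range(1, 6)}

    def acquire_in_order(ids):
        for rid in sorted(ids):          # requests only ever go "upward",
            resources[rid].acquire()     # so a circular wait cannot arise
        return ids

    def release_all(ids):
        for rid in sorted(ids, reverse=True):
            resources[rid].release()

    held = acquire_in_order([4, 1, 3])   # actually acquired as 1, 3, 4
    release_all(held)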

2.2.5 Deadlock Avoidance


Q23. Write about deadlock avoidance.
Answer : Model Paper-II, Q10(b)

Deadlock avoidance does not impose any rules; instead, each resource request is carefully analyzed to check whether it can
be safely fulfilled without causing deadlock. The drawback of this scheme is that it requires information about resource
requests in advance. Different algorithms require different types and amounts of information; some require the maximum
number of resources that each process may need, etc.
The following are the various deadlock avoidance algorithms,
1. Safe State
Consider a system consisting of several processes like <P1, P2, P3, ....., Pn>. Each of them requires certain resources
immediately and also specifies the maximum number of resources that it may need in its lifetime. Using this information,
a "safe sequence" is constructed. A safe sequence is a sequence of processes in which every process's resource requests can
be satisfied without a deadlock occurring. If there exists any such safe sequence, then the system is said to be in a "safe state",
during which deadlock cannot occur. An unsafe state may lead to deadlock but not always; only some sequences in an unsafe
state lead to deadlock.
Figure: States in a System (the deadlock region lies within the unsafe state; the safe state is disjoint from both)


For example, consider a system with 24 printers and three processes i.e., P1, P2 and P3 requiring 14, 8 and 13 printers
respectively. This is the maximum need they require, it is not always the case that they require all of them at once. At a particular
time t0 , P1 needs only 9 printers, P2 and P3 needs 6 each. The table (1) shows the same.

Process Current Need Maximum Need


P1 9 14
P2 6 8
P3 6 13

Table (1): Current Need and Maximum Need for a Process


If the processes are executed in the sequence i.e., <P2, P1, P3> then the safety condition can be satisfied. The table (2)
shows the sequence of resource allocation and release.
Action                                              P2        P1         P3        Printers Remaining (Total 24)
At t0, resources allocated according to needs       6         9          6         3
P2 is allocated its maximum requirement             6+2=8     9          6         1
P2 completes and returns all resources              –         9          6         9
P1 is allocated its maximum requirement             –         9+5=14     6         4
P1 completes and returns all resources              –         –          6         18
P3 is allocated its maximum requirement             –         –          6+7=13    11
P3 completes and releases all its resources         –         –          –         24

Table (2): Sequence of Resource Allocation and Release


The above table (2) shows one of the safe sequences; there may be many safe sequences for the same example. In the
beginning, the system is in a safe state and processes are allocated resources according to their current need. Thereafter,
whenever a process makes a request, the algorithm must decide whether the allocation leaves the system safe or unsafe, and
accordingly the action should be taken.

2. Resource Allocation Graph
For answer refer Unit-II, Page No. 62, Q.No. 20.

3. Banker's Algorithm
It is used to avoid deadlocks when multiple instances of each resource type are present. It is similar to a banking system
where a bank never allocates more cash than it has available, even when it cannot satisfy all the needs of its customers.
Here, customers are analogous to processes, cash to resources and the bank to the operating system.

A process must specify in the beginning the maximum number of instances of each resource type it may require; this number
should not be more than the total available. When a process requests resources, the system decides whether the allocation
will leave the system in a safe state. If it does, the resources are allocated; otherwise the process has to wait.

The following are the various data structures which have to be created to implement the Banker's algorithm, where
n = number of processes and m = number of resource types.
(a) Max
An n × m matrix indicating the maximum resources required by each process.
(b) Allocation
An n × m matrix indicating the number of resources already allocated to each process.
(c) Need
An n × m matrix indicating the number of resources still required by each process.
(d) Available
A vector of size m which indicates the resources that are still available (not allocated to any process).
(e) Request
A vector of size m which indicates the resources that process Pi has requested.
Each row of the matrices "Allocation" and "Need" can be referred to as a vector. Then "Allocationi" indicates the resources
currently allocated to process Pi and "Needi" refers to the resources still required by Pi.

Resource Request Algorithm
The following algorithm is used to determine whether a request can be safely granted or not.
Step 1
If Requesti ≤ Needi, then proceed to step 2; otherwise raise an exception saying the process has exceeded its maximum claim.
Step 2
If Requesti ≤ Available, then proceed to step 3; otherwise block Pi because the resources are not available.
Step 3
Allocate resources to Pi as follows,
Available := Available – Requesti
Allocationi := Allocationi + Requesti
Needi := Needi – Requesti

Safety Algorithm
The job of the banker's algorithm is to perform the allocation without considering whether it has resulted in a safe or unsafe
state. It is the safety algorithm, called immediately after the banker's algorithm, which checks the system state after
allocation. The following safety algorithm requires m × n² operations to determine the system state.
Step 1
Assume Work and Finish are vectors of length m and n respectively.
Work := Available
Finish[i] := false for all i.
Step 2
Find an i such that
Finish[i] = false
Needi ≤ Work
If no such i is found, jump to step 4.
Step 3
Work := Work + Allocationi
Finish[i] := true
Jump to step 2.
Step 4
If Finish[i] = true for all i, then the system is in a safe state.

2.2.6 Deadlock Detection

Q24. Explain all the strategies involved in deadlock detection.
Answer : Model Paper-III, Q10(b)

Deadlock Detection Strategies
Deadlocks can be prevented by employing two techniques as follows,
1. Deadlock prevention
2. Deadlock avoidance.
If neither of the above two techniques is applied, then a deadlock may occur and the system must provide,
(i) An algorithm that can monitor the state of the system to detect the occurrence of a deadlock.
(ii) A recovery algorithm to recover from the deadlock state.

Deadlock Detection in a System Containing a Single Instance of each Resource Type
A deadlock detection algorithm that makes use of a variant of the resource allocation graph (called the wait-for graph) is
defined for a system containing only a single instance of each resource.
An edge from a node Pi to a node Pj exists in a wait-for graph if and only if the corresponding RAG contains two edges, one
from node Pi to some resource node Ra and the other from the resource Ra to node Pj. The presence of a cycle in the wait-for
graph indicates the existence of a deadlock.
An algorithm that is used to detect a cycle in the graph requires a total of n² operations, where n is the number of vertices in
the graph.
Example
Consider five processes P1, P2, P3, P4 and P5 and five resources R1 to R5. The Resource Allocation Graph (RAG) for such
a system and its corresponding wait-for graph are shown in the following figures,
Figure (a): Resource Allocation Graph
Figure (b): Wait-For Graph for the Given RAG

Deadlock Detection in a System Containing Multiple Instances of a Resource Type
The wait-for graph can't be used for a resource allocation system containing multiple instances of each resource type.
Hence, a different algorithm is employed which carries certain data structures.
The data structures in the algorithm are,
Available
A vector of length m that specifies the number of resources of each type that are available.
Allocation
An n × m matrix that defines the number of resources of each type presently assigned to each process in the system.
Request
An n × m matrix which specifies the current request made by each process. If Request[i, j] = k, then process i is currently
requesting k additional instances of resource j.
The deadlock detection algorithm, which investigates every possible resource allocation sequence, is given below. Consider
two vectors, Work and Finish, whose lengths are m and n respectively.
1. Initialize Work = Available.
2. For each i, where i = 1, 2, ..., n: if Allocationi ≠ 0 then set Finish[i] to false; else, Finish[i] is assigned the value true.
3. Determine an index i for which both Finish[i] = false and Requesti ≤ Work. If no such i is available, jump directly to step 5.
4. Set Work = Work + Allocationi and Finish[i] = true, and iterate through step 3.
5. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state and the process Pi is deadlocked.
Hence, m × n² operations are required to determine whether the system is in a deadlocked state.
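For the single-instance case, deadlock detection reduces to cycle detection on the wait-for graph. A standard
depth-first-search sketch follows (the edge list below is a hypothetical example, not the graph of figure (b)):

    # DFS cycle detection: an edge Pi -> Pj means "Pi waits for a resource
    # held by Pj"; any cycle in this graph is a deadlock.
    def has_cycle(graph):
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {node: WHITE for node in graph}

        def visit(node):
            colour[node] = GREY                  # node is on the current path
            for nxt in graph.get(node, []):
                if colour[nxt] == GREY:          # back edge: cycle found
                    return True
                if colour[nxt] == WHITE and visit(nxt):
                    return True
            colour[node] = BLACK                 # fully explored, no cycle here
            return False

        return any(colour[n] == WHITE and visit(n) for n in graph)

    # Hypothetical wait-for graph: P1 -> P2 -> P3 -> P1 is a deadlock cycle.
    print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"], "P4": ["P1"]}))  # True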
Q25. Write the deadlock detection algorithm. Illustrate through an example snap shot of a systems.

Answer :

Deadlock Detection Algorithm

An algorithm for the deadlock detection make use of the ‘allocation matrix’ which describes the current resource allocation
and the ‘available vector’ that describes the total amount of each resource not allocated to any process. In addition to the allocation
matrix and the available vector, a request matrix Q is defined in such a way that Qij specifies the amount of resources of type j
requested by a process i.

The processes that are not under the deadlocked state are marked. All the processes are unmarked at the beginning of an
algorithm. The execution proceeds as follows,

1. Each process, having a row of all zeroes in the allocation matrix is marked.

2. A temporary vector W is initialized with the available vector.

3. Determine an index i for which a process i is currently unmarked and the ith row of Q ≤ W i.e., Qik ≤ Wk for 1 ≤ k ≤ m.
Stop the algorithm if no such row is found.

4. After finding such a row, process i is marked and the associated row in the allocation matrix is added to W i.e., set Wk =
Wk + Aik, for 1 ≤ k ≤ m. Go back to step3.

After executing the algorithm, if any unmarked processes are present then a deadlock exists. This algorithm finds a process
whose requests for resources can be satisfied with the available resources; it is then assumed that those resources are allocated
to it and the process completes its execution, thereby releasing all its resources. Another such process is then looked up by the
algorithm and the whole procedure is repeated. This algorithm does not assure deadlock prevention; instead it determines the
existence of deadlock.

Example

Consider the given allocation and request matrices along with resource available vectors.
Request Matrix, Q                        Allocation Matrix, A
      R1 R2 R3 R4 R5                           R1 R2 R3 R4 R5
P1     0  1  0  0  1                     P1     1  0  1  1  0
P2     0  0  1  0  1                     P2     1  1  0  0  0
P3     0  0  0  0  1                     P3     0  0  0  1  0
P4     1  0  1  0  1                     P4     0  0  0  0  0

Resource vector: (2 1 1 2 1)             Available vector: (0 0 0 0 1)

When the deadlock detection algorithm is applied, it proceeds as follows,

1. Mark the process P4 in the allocation matrix as it has no allocated resources.

2. Set W = (0 0 0 0 1).

3. As the request made by process P3 is less than or equal to W, P3 is marked and W is set to,

W = W + (0 0 0 1 0)

W = (0 0 0 1 1).

4. As no other unmarked process in Q contains a row which is less than or equal to W, the algorithm terminates.

The algorithm execution ends, leaving the processes P1 and P2 in an unmarked state. Hence, these processes are
deadlocked.
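The marking procedure can be checked mechanically. This sketch encodes the matrices of the example above and reports the
unmarked (deadlocked) processes:

    # Deadlock detection by marking: a process whose whole request row fits
    # in W is marked, and its allocation row is added back into W.
    def detect(request, allocation, available):
        n = len(request)
        marked = [all(a == 0 for a in row) for row in allocation]   # step 1
        w = available[:]                                            # step 2
        progress = True
        while progress:                                             # steps 3-4
            progress = False
            for i in range(n):
                if not marked[i] and all(q <= x for q, x in zip(request[i], w)):
                    w = [x + a for x, a in zip(w, allocation[i])]
                    marked[i] = progress = True
        return [f"P{i + 1}" for i in range(n) if not marked[i]]

    request    = [[0,1,0,0,1], [0,0,1,0,1], [0,0,0,0,1], [1,0,1,0,1]]
    allocation = [[1,0,1,1,0], [1,1,0,0,0], [0,0,0,1,0], [0,0,0,0,0]]
    print(detect(request, allocation, [0,0,0,0,1]))   # ['P1', 'P2'] are deadlocked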
Q26. Discuss the usage of deadlock detection algorithm.
Answer :
The usage of the deadlock detection algorithm depends on the following factors,
(i) How frequently deadlocks occur
(ii) The effects of a deadlock on other processes.
Invocation of the deadlock detection algorithm highly depends on how frequently deadlocks occur. If deadlocks occur
frequently, the algorithm must be invoked as frequently as they occur. This is done because when a process is affected by a
deadlock, none of its allocated resources can be released and hence they cannot be used by other processes. In addition, the
number of processes in the ready queue might increase.
In the second case, the detection algorithm is invoked every time a process requests a resource that cannot be allocated
immediately. By implementing this, it is possible to identify the process with which a deadlock occurred and the set of
processes associated with it. However, using the algorithm on every resource request adds considerable computing overhead.
One method to overcome the above said drawback is to use the detection algorithm at certain regular (custom) time intervals
(for example, every 30 minutes). However, with this method it is difficult to identify the process with which a deadlock
occurred. This is because, within that time interval, the resource graph might carry many deadlock cycles.

2.2.7 Recovery From Deadlock

Q27. Write about recovery from deadlock.
Answer :
Recovery from Deadlock
The deadlocks detected in the system by making use of deadlock detection algorithms need to be recovered from by using
some recovery mechanism. The brute force approach is to reboot the computer, but it is inefficient because it may lose data
and waste computing time. Hence, other techniques are used to recover from deadlock. They are broadly classified into two
types,
1. Process termination
2. Resource preemption.

1. Process Termination
In this method, one or more processes are terminated to eliminate the deadlock. It has two variants as follows,
(i) Terminate all deadlocked processes, which will break the deadlock immediately, but it is expensive because there may
be some processes which have been executing for a long time, consuming considerable CPU time, and their termination
will result in wasting those CPU cycles.
(ii) In order to overcome the drawback of the above method, this method terminates one process at a time until the deadlock
is broken. However, it has some overhead since, after terminating each process, a detection algorithm has to be executed
to examine whether any processes are still deadlocked. This method is slower than the first one.

2. Resource Preemption
In this method, resources are deallocated or preempted from some processes and allocated to others until the deadlock is
resolved. The three important issues in implementing this scheme are as follows,
(i) Selection of Victim Process
Initially, it is necessary to decide which process or which resources are to be preempted. The decision is based on a cost
factor which includes the number of resources a deadlocked process is holding, the CPU time consumed by it, etc.
(ii) Rollback
The process which was preempted cannot continue normal execution, because its resources are taken back. Hence, we
need to roll it back to some previous checkpoint, or perform a total rollback and start it from the beginning.
(iii) Starvation
It is necessary to ensure that a particular process is not selected as the victim every time preemption is done, otherwise
it may starve.

Internal Assessment

Objective type
I. Multiple Choice
1. Mutual exclusion can be applied to _________. [ ]

(a) Sharable resources (b) Non-sharable resources

(c) Both (a) and (b) (d) None of the above

2. Which of the following is not related to resource preemption? [ ]

(a) Roll back (b) Starvation

(c) Selection of victim process (d) Process termination

3. Process termination is related to _________. [ ]

(a) Deadlock avoidance (b) Deadlock prevention

(c) Deadlock recovery (d) None of the above

4. The technique in which I/O device addresses are part of the memory address space is _________. [ ]

(a) I/O mapped (b) I/O addressing

(c) Memory-mapped I/O (d) None of the above

5. The concept of spooling is related to _______. [ ]

(a) Key board (b) Tape drives

(c) Printers (d) Mouse

6. _________ refers to the number of processes that are completed per unit time. [ ]

(a) CPU utilization (b) Throughput

(c) Response time (d) Waiting time

7. _________ is the amount of time the CPU is kept busy executing processes. [ ]

(a) CPU utilization (b) Throughput

(c) Response time (d) Waiting time

8. In _________, processes can be moved in different queues. [ ]

(a) FCFS (b) SJF

(c) Multilevel queue scheduling (d) Multilevel feedback queue scheduling

9. A resource that can be easily created and destroyed is referred to as a _________. [ ]

(a) Reusable resource (b) Consumable resources

(c) Resources (d) Blocked resources

10. Deadlock cannot occur when the processes are in _________ state. [ ]

(a) Wait (b) Blocked

(c) Running (d) Safe


II. Fill in the Blanks
1. A _________ resource is used by only one process at any time without causing damage.

2. _________ refers to the use of a resource by only one process at any time.

3. If there exists at least one resource allocation sequence that does not lead to deadlock then a system is in _________
state.

4. To prevent _________ ordering is imposed on all resource type, i.e., a unique positive integer is assigned to each
resource.

5. In the figure Pi ⇢ Rj, the dashed arrow denotes a _________ edge.

6. Process termination and resource preemption are techniques for _________.

7. The __________ can't be used for a resource allocation system containing multiple instances of each resource type.

8. __________ reduces the degree of multiprogramming.

9. __________ allots the CPU to process that requests first from the ready queue.

10. The time taken by dispatcher to allot the CPU from one process to another is called __________.


KEY
I. Multiple Choice
1. (b) 2. (d) 3. (c) 4. (c) 5. (c)

6. (b) 7. (a) 8. (d) 9. (b) 10. (d)

II. Fill in the Blanks


1. Reusable
2. Mutual-exclusion
3. Safe
4. Circular wait
5. Claim
6. Recovery from deadlock
7. Wait-for graph
8. Medium-term scheduler
9. FCFS
10. Dispatch latency

III. Very Short Questions and Answers
Q1. Define program.
Answer :
‘Program’ refers to the collection of instructions given to the system in any programming language. Alternatively a program
is a static object residing in a file. The spanning time of a program is unlimited. A program can exist at a single place in space.
Q2. What is job scheduling?
Answer :
Job scheduling is also called as long-term scheduling which is responsible for selecting a job from disk and transferring
it into main memory for execution. It is also responsible for deciding which process is to be selected for processing.
Q3. What is FCFS?
Answer :
FCFS stands for First Come First Served. The typical use of FCFS in an OS is to serve the processes waiting in a ready
queue to be allocated the CPU.
Q4. Define deadlock.
Answer :
A situation in which a process waits indefinitely for requested resources while those resources are held by other processes in a
waiting state. This situation prevents the process from changing its state and is called a deadlock situation.
Q5. What is circular wait in deadlock?
Answer :
There exists a list of waiting processes (P0, P1, ... , Pn) such that process P0 is waiting for a resource currently under the usage
by process P1, P1 is waiting for a resource that is held by P2, P2 is waiting for a resource that is held by P3 and so on. Finally, a process
Pn is waiting for the resource held by P0.


UNIT-3: Main and Virtual Memory, Mass-Storage Structure, File Systems and Implementation

Learning Objectives

After studying this unit, a student will have thorough knowledge about the following key concepts,

 Swapping, Contiguous Memory Allocation, Segmentation and Paging.

 Page Replacement, Allocation of Frames and Thrashing.

 File concept, Access methods, Directory and Disk structure.

 File System Implementation, Directory Implementation, Allocation Methods and Free Space Management.

Introduction

A popular non-contiguous allocation scheme is paging, with which memory can be divided into fixed sized
blocks. To divide the memory into unequal sized blocks, segmentation techniques should be used. There are
certain page replacement algorithms with which page frames can be swapped in and out.

A file can be defined as a group of similar records or related information which is stored in secondary
memory. Both the data as well as programs of all users are stored in files. The operations that can be performed on
files are creating a file, writing to a file, reading from a file, repositioning within a file, deleting and truncating a file.
A file can be accessed in many ways. The most common access methods are sequential access, direct access and
indexed access. Unauthorized access can be prevented using protection mechanisms. A collection of files is known
as a directory. The common schemes to define its structure are single-level, two-level, tree-structured, acyclic-graph
and general graph directories.


Part-A
Short Questions with Solutions
Q1. Write the differences between logical and physical address space.
Answer : Model Paper-I, Q5

1. Logical address is also called virtual address or relative address; it is used in virtual memory. Physical address is also called absolute address; it is used in main memory.
2. Logical address space is divided into small parts called 'pages'. Physical address space is divided into small parts called 'frames'.
3. A logical address refers to a memory location relative to the program. A physical address refers to the absolute location of data in main memory.
4. The set of all logical addresses generated by a program is the logical address space. The set of all physical addresses mapping to those logical addresses is the physical address space.
5. A logical address is represented as a (page number, page offset) pair, while a physical address is represented as a (frame number, frame offset) pair.

Q2. Define swapping.


Answer :
In a multiprogramming environment, there are several processes that are executed concurrently. A process needs to be present in main memory for execution, but the main memory's capacity is not enough to hold all active processes. Hence, sometimes processes are swapped out and stored on disk to make space for others, and later they are swapped in to resume execution. This process of swapping in and swapping out is called swapping.
Q3. Define a page and a frame.
Answer : Model Paper-II, Q5

Page
A page refers to a fixed-sized block of logical memory.
Frame
A frame refers to a fixed-sized block of physical memory.
Paging divides the physical memory into fixed-sized blocks called frames and the logical memory into pages. A page and a frame are of the same size, so one logical page fits exactly into one frame (or block). The execution of a program with 'n' pages requires 'n' free frames to be available in the physical memory, where each page is loaded into a free frame. The information about the allocation of frames to pages is tracked by maintaining a table called the page table. A logical address carries a page number and a page offset, which the CPU uses to translate pages into frames.
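As a worked illustration (with numbers chosen here for concreteness, not taken from the text): with a page size of 1 KB (2^10 bytes), logical address 3075 lies on page 3 at offset 3, since 3075 = 3 × 1024 + 3. If the page table maps page 3 to frame 6, the corresponding physical address is 6 × 1024 + 3 = 6147.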
Q4. Define file management.
Answer : Model Paper-III, Q5

The process of managing files and the operations performed on files is referred to as file management. It is responsible for allocating space to a file on disk and providing a data structure to define the information saved on disk so as to provide quick access to it. Typically, the operating system is responsible for managing the files in a system. It uses a file management system for this purpose.
Q5. List the file operations performed by operating systems.
Answer : Model Paper-I, Q6

The various operations that can be performed on files are as follows,

1. File Creation
This operation is used to create a file by using system calls.
2. Write to a File
This operation is used to write data into a file by specifying its filename.
3. Read from a File
This operation is used to read a file by specifying its filename.
4. Seeking in a File
This operation is used to reposition the current-file-position pointer to a specified location within a file.
5. File Deletion
This operation is used to delete a file by specifying its filename.
6. Truncate the File
This operation is used to erase some of the contents of a file while keeping all of its attributes untouched except the file length.
Q6. List the differences among the file access methods.
Answer : Model Paper-II, Q6

1. Sequential file access method: accesses records one after another in sequential order. It is the slowest file access method. It is used in editors and compilers.
2. Direct file access method: accesses a particular record randomly. It is faster than the sequential file access method. It is used in databases.
3. Indexed file access method: accesses a particular record by browsing through its index. It is faster than the direct file access method. It is used in airline reservation and inventory control systems.
4. Indexed sequential file access method: accesses a particular record by performing binary search on the index. It is the fastest file access method. It is mostly used by IBM.
Q7. List the operations to be performed on directories.
Answer : Model Paper-III, Q6

The following operations can be performed on directories,

v Searching Files
By reading the directory table, a particular file, or files whose names match a specified pattern or criteria, can be found.
v File Creation
When a new file is created, an entry is inserted in the directory table.
v File Deletion
When a particular file is deleted, its entry is deleted from the directory table.
v Listing
The user can list the files and other sub-directories present in their directory.
v Renaming Files
The name of an existing file can be changed by modifying its entry in the directory table.

Q8. What advantages are there to the two-level directory?

Answer : Model Paper-I, Q7

The following are the advantages of a two-level directory,

v It resolves the problem of collisions among file names.

v It provides an effective way in which users can be isolated from each other.

v It efficiently improves searching by employing a Master File Directory (MFD).

Q9. What does OPEN do in file operations?

Answer : Model Paper-II, Q7

Before using a file, it needs to be opened using the 'open' system call in most systems. When this is done, the operating system creates an entry in the open file table, which is browsed every time a file operation is requested. The responsibility of the open system call is to find the directory which carries the file on which the operations are to be performed. This can be done by browsing all the directories. Once the file is found, an entry is made in the open file table. It also considers the file access permissions such as read-only, read-write etc., and access is granted based on these permissions.

The use of the open system call eliminates the need for searching for files again and again and simplifies the file operations.

Q10. What are tree structured directories?

Answer : Model Paper-III, Q7

The tree-structured directory scheme allows a user to create any number of directories within their User File Directory (UFD). It has a variable number of levels. It gives better flexibility in managing files.

A sub-directory is treated as a file. A special bit is used which defines whether an entry is a file (0) or a sub-directory (1). The current directory is normally the directory from which the process is executing, and it carries almost all the files associated with the currently executing process. When a process tries to access a particular file, it is searched for in the current directory. If it is not present, then the user has to specify the path name of that file or change the current directory to that path, which can be done using a system call. This system call takes the path name as a parameter and redefines the current directory.


PART-B
ESSAY QUESTIONS WITH SOLUTIONS
3.1 MAIN MEMORY

3.1.1 Introduction
Q11. Write in brief about background of memory management strategies.
Answer :
Memory is a central component of the computer system. It consists of a huge array of bytes, each with its own address. The CPU is responsible for fetching instructions from memory based on the contents of the program counter. Additional operations, such as loading from memory and storing to memory, are performed as the instructions require.

An instruction-execution cycle initially fetches the instruction from memory. It decodes the instruction, fetches any operands from memory and stores the result back in memory after the operation is executed on the operands. The memory unit contains a set of sequential memory addresses.

Memory can be managed in a number of ways by using memory management strategies such as paging, segmentation etc. Selecting a particular technique for a system is based on various factors, specifically the system design.

Memory management has several issues, such as the basic hardware, the binding of symbolic memory addresses to actual physical addresses, and the difference between logical and physical addresses.

Address Binding

A program that is placed on a disk must be in the form of a binary executable file. For execution, the program must be placed within a process in memory. Based on the usage of memory, the process may be allowed to move between the memory and the disk. An input queue is maintained for those processes that are waiting to be brought into memory for execution. Normally, only one process can be executed at a time. During execution, a process fetches instructions and data from memory, and when the process terminates, its space in memory becomes free so that the next process can be brought into memory for execution.

Generally, a user process can be placed in any part of the physical memory, with addresses assigned to it accordingly. Though the starting address of the physical address space is 00000, the first address of the user process need not be stored at this location. Hence, the addresses used by the user program are affected by where the program is loaded.

Logical Versus Physical Address Space

A logical address is defined as the address generated by the CPU, and a physical address is defined as the actual memory address where the data or instruction is present. Both addresses are identical in certain address-binding methods, including the compile-time and load-time methods, whereas for execution-time binding they differ.

When the logical and physical addresses are different, the logical address is commonly called a virtual address. The term logical address space refers to the group of logical addresses generated by a program, whereas the physical address space refers to the group of physical addresses corresponding to those logical addresses.

Mapping can be done using various methods, but the run-time mapping of addresses from logical to physical is carried out through the MMU (Memory Management Unit). The base register used in this case is referred to as the relocation register because its value is added to every logical address at the time the address is located in memory. A typical MS-DOS operating system carries four registers of this kind.

As the user program always uses logical addresses, it never sees the physical addresses. Any location it refers to, for example through a pointer, is expressed relative to the program and is mapped by adding the relocation register's value before memory is accessed.

In memory mapping, the conversion of logical addresses to physical addresses is done through hardware. User programs deal with logical addresses only, but these must be mapped to physical addresses before the memory is accessed.
Q12. Write short notes on,
(i) Dynamic loading
(ii) Dynamic linking
(iii) Shared libraries.
Answer : Model Paper-I, Q11(a)
(i) Dynamic Loading
Dynamic loading is a method used to utilize memory space efficiently. Since the size of a process is limited by the size of physical memory, there are situations wherein the data required for executing a process needs more space than is available in the physical memory. To overcome this space issue, dynamic loading is used. It stores the data associated with a process in main memory in such a way that it can be relocated. With this approach, a program routine is loaded and executed only when it is called. In case an executing routine wants to call another routine, it first verifies whether the desired routine already exists among the loaded routines. If it already exists, it is executed directly; if not, the relocatable linking loader comes into action and the desired routine is loaded into memory; then the control is passed to the newly loaded routine.
Advantages of Dynamic Loading
v The routines that are not required are not loaded into memory.
v It is useful while handling error routines.
v It does not require any special support from the operating system.
(ii) Dynamic Linking
The concept of dynamic linking is similar to the concept of dynamic loading. The difference is that instead of postponing the loading of routines, it postpones the linking of libraries until they are called. This feature eliminates the requirement of including the language library associated with the program in the executable image.
With the use of dynamic linking, unnecessary wastage of memory and disk space can be avoided. This is done by placing a small piece of code called a stub in every library routine reference. The stub is responsible for pointing out the location of the memory-resident library associated with the called routine. In addition, it is responsible for checking the existence of the routine in memory and loading it if necessary. When the routine is to be executed, its address is placed at the location of the stub, so that from the next execution onward the routine is executed directly.
Dynamic loading is independent of operating system support, whereas dynamic linking requires help from the operating system for checking the availability of the needed routine in the memory space of other processes. This is the case where each process is protected from every other.
(iii) Shared Libraries
While fixing bugs in libraries, there can be two types of modifications, i.e., major and minor. Major modifications, such as changes in the program addresses, typically change (increment) the version number of the library, whereas minor bug fixes do not change it.
When dynamic linking is used, the latest installed version is simply referenced, whereas in the absence of dynamic linking the programs need to be relinked. Multiple versions of a library may exist, as there can be programs that use older versions of the library (those that were installed before updating the library). This system, where multiple versions of shared libraries exist, is known as shared libraries.
3.1.2 Swapping
Q13. Explain about swapping in memory management.
Answer :
Swapping
In a multiprogramming environment, there are several processes that are executed concurrently. A process needs to be present in main memory for execution, but the main memory's capacity is not enough to hold all active processes. Hence, sometimes processes are swapped out and stored on disk to make space for others, and later they are swapped in to resume execution. This process of swapping in and swapping out is called swapping.
There are several reasons to perform swapping. They are given as follows,
v The time quantum of a particular process has expired.
v Some high-priority process pre-empts a particular process.
v An interrupt occurs and makes the process wait.
v The process is put in a wait state for performing some input/output operations.
The processes that are swapped out are kept in a backing store (a disk). The swap space stores the images of all processes. A ready queue is maintained to store pointers to these images. Whenever the dispatcher is free, it takes one process from the ready queue and swaps it into main memory for execution.

Figure: Swapping (a process is swapped out of main memory to the backing store on disk, and another process's image is swapped in from the ready queue)

If a system follows static or load-time binding, then the process is swapped back into the same memory space it occupied earlier. Otherwise, if it follows dynamic or execution-time binding, then the process can be swapped back into any memory space, because the physical addresses are calculated during run-time.

The limitation of the swapping scheme is that context switching is expensive, i.e., the time required to save all the information regarding a process, such as its PCB, data, code and stack segments etc., on the disk is quite high.

Swapping on Mobiles

Mobile systems do not support the concept of swapping. They make use of flash memory instead of hard disks. The reasons behind not supporting swapping are the space constraint, the limited number of writes that flash memory can accept before it becomes unreliable, and the poor throughput between main memory and flash memory in such devices.

Apple iOS requests applications to relinquish allocated memory when free memory falls below a threshold. Read-only data can be removed and reloaded later from flash memory if required. Applications that cannot release memory are terminated by the operating system.

Android also does not support the concept of swapping, but it uses an approach similar to that of iOS. It can delete a process if there is no free memory. Due to such restrictions, the developers of mobile systems need to allocate and release memory carefully so that their applications do not use excessive memory or have memory leaks.

3.1.3 Contiguous Memory Allocation


Q14. Explain briefly about contiguous memory allocation.

Answer :
The main memory of a computer is divided into two major sections: one contains the Operating System (OS) and the other is left for user processes. The OS is usually placed in the starting locations, or low memory area, and an Interrupt Vector Table (IVT) is stored before the OS.

Figure (1): Main Memory Structure (the IVT and OS occupy the low addresses starting at 000×0000; the user space occupies the rest, up to FFF×FFFF)

Memory allocation means bringing the waiting processes from the ready queue into the user space of main memory. When each process is allocated a single contiguous section of memory, the scheme is called contiguous memory allocation. There are two variations of this scheme.

For remaining answer refer Unit-III, Page No. 82, Q.No. 16.

Memory Mapping and Protection
This is one of the important issues that arise during contiguous memory allocation. It refers to protecting a process from addressing memory locations outside its allocated range, which can be done using a limit register and a relocation register. The limit register contains the range of valid logical addresses and the relocation register contains the starting physical address.
If the logical address generated by the user process is less than the value of the limit register, then it is a valid address and it is added to the relocation register to map a physical location. If it is greater than the limit, an addressing error is raised. Figure (2) shows memory mapping and protection.
Figure (2): Memory Mapping and Protection (the CPU's logical address is compared with the limit register; if valid, it is added to the relocation register to form the physical address, otherwise an addressing error is raised)


Q15. Explain the first fit, best fit and worst fit allocation algorithms. Which one is better?
Answer :
(a) First Fit Algorithm
In the first fit algorithm, the memory manager scans along the linked list until it finds a hole that is large enough to store the program. Searching starts at the beginning of the set of holes and stops as soon as a free hole that is large enough is found. This algorithm is the fastest because it searches as little as possible.

Figure (1): Example of Memory Configuration Before and After Allocation of 11 KB and 9 KB Blocks (Based on the First Fit Algorithm)
Let us consider a linked list containing the following holes in the order specified below,

10 20 15 9 13 8 11

Assume that there are two programs waiting to enter memory, of sizes 11 and 9 respectively. The memory manager allocates hole 20 to the program of size 11 and hole 10 to the program of size 9. Although there are holes of sizes 11 and 9 in the linked list, the memory manager will not scan the entire linked list. It starts searching from the beginning of the linked list until it finds a hole whose size is greater than or equal to the program size. Once such a hole is found, the rest of the linked list is not scanned.

(b) Best Fit Algorithm

In the best fit algorithm, the memory manager searches the entire linked list and takes the smallest hole that is adequate to store the program.

For the program of size 11, it scans the entire linked list and allocates the hole of size 11. For the program of size 9, it searches the entire linked list and allocates the hole of size 9.

Figure (2): Example of Memory Configuration Before and After Allocation of 11 KB and 9 KB Blocks (Based on the Best Fit Algorithm)

The best fit algorithm is slow, because every time the algorithm is called it scans the entire linked list. However, it results in minimal fragmentation because it allocates the most suitable hole for the program. First fit, by contrast, may result in more fragmentation because it does not scan the entire linked list.

(c) Worst Fit Algorithm

In the worst fit algorithm, the memory manager scans the entire linked list and allocates the largest hole to the program. For the program of size 11, it allocates the hole of size 20, which is the maximum, and for the program of size 9, it allocates the hole of size 15, which is the next largest. Thus, the entire linked list is scanned twice. After allocation is performed, the remaining part of the hole can be used to allocate another program.

Figure (3): Worst Fit Algorithm (memory configuration before and after allocating the 11 KB and 9 KB programs)

When best fit searches a list of holes sorted from smallest to largest, as soon as it finds a hole that fits, it knows that the hole is the smallest one that will do the job, so no further searching is needed. With a hole list sorted by size, first fit and best fit behave identically, and next fit is pointless.

First fit and best fit allocation algorithms are better than the worst fit algorithm by the criteria of storage utilization and search time. First fit and best fit are comparable in terms of storage utilization; however, first fit performs faster than the other algorithms.
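The three policies are easy to compare in a few lines of code. The following is a minimal sketch (not from the text) of the three placement strategies over the hole list used in the example above; the function names and the convention of shrinking the chosen hole in place are assumptions made for illustration.

def first_fit(holes, size):
    # Scan from the beginning; take the first hole large enough.
    for i, h in enumerate(holes):
        if h >= size:
            holes[i] -= size           # the leftover stays as a smaller hole
            return i
    return None                        # no hole fits

def best_fit(holes, size):
    # Scan the whole list; take the smallest hole that is large enough.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    _, i = min(candidates)
    holes[i] -= size
    return i

def worst_fit(holes, size):
    # Scan the whole list; take the largest hole.
    h, i = max((h, i) for i, h in enumerate(holes))
    if h < size:
        return None
    holes[i] -= size
    return i

holes = [10, 20, 15, 9, 13, 8, 11]     # the hole list from the example (KB)
print(first_fit(list(holes), 11))      # -> 1 (the 20 KB hole)
print(best_fit(list(holes), 11))       # -> 6 (the 11 KB hole)
print(worst_fit(list(holes), 11))      # -> 1 (the 20 KB hole)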

Q16. Write in brief on the following memory management techniques, comparing their relative strengths and weaknesses,

(i) Fixed partitioning

(ii) Dynamic partitioning.

Answer :
(i) Fixed Partitioning

In fixed partitioning, main memory is divided into a number of fixed-sized blocks called partitions. When a process has to be loaded into main memory, it is allocated a free partition, which it releases on terminating; that partition then becomes free to be used by some other process. The major drawback of this scheme is internal fragmentation. It arises when the memory allocated to a process is not fully utilized. For example, consider a system having fixed partitions of size 100 KB; if a process of 50 KB is allocated to one such partition, then it uses only 50 KB and the remaining 50 KB is unnecessarily wasted. Figure (1) shows the problem of internal fragmentation.
Figure (1): Internal Fragmentation (each 100 KB partition holds one process; a 50 KB process in a 100 KB partition leaves 50 KB unused)

Strengths
1. Fixed partitioning is simple and easy to implement.
2. It helps in efficient utilization of the processor.
3. It supports multiprogramming.

Weaknesses
1. Inefficient use of memory due to internal fragmentation.
2. The number of partitions, specified at the time of system generation, limits the number of active processes.

(ii) Dynamic Partitioning

In dynamic partitioning, partitions are created dynamically depending on the size of the process being loaded. The partition size is exactly equal to the size of the process being loaded. For example, initially the memory is empty; then a process P1 of 200 KB is loaded into memory and is allocated exactly 200 KB. Then another process P2 of 500 KB is loaded, which is allocated 500 KB. Likewise, two more processes P3 and P4 of sizes 300 KB and 100 KB are loaded. Figure (2) shows the allocation sequence.

Figure (2): Dynamic Partitioning (P1 of 200 KB, P2 of 500 KB, P3 of 300 KB and P4 of 100 KB are allocated one after another; the remainder of memory stays free)

Consider that processes P1 and P3 complete and terminate, releasing the memory occupied by them; the memory status would then be as shown in figure (3). Two holes, or free memory sections, of 200 KB and 300 KB are created.

Figure (3): External Fragmentation (after P1 and P3 terminate, free holes of 200 KB and 300 KB are left between P2 and P4)

Now, suppose that a new process P5 of size 500 KB is to be loaded. This cannot be done because there is no single contiguous 500 KB free space. Although there is 500 KB of free space in total, it is fragmented and not contiguous, and hence cannot be allocated. This problem is called external fragmentation, and it is a major drawback of dynamic partitioning.

One solution to external fragmentation is to apply memory compaction, where the kernel shuffles the memory contents in order to place all free memory partitions together to form a single large block. Compaction is performed periodically and requires dynamic relocation of programs. If compaction is applied to the previous example, the memory status will be as shown in figure (4).

Figure (4): Memory Before and After Compaction (P2 and P4 are moved together, leaving a single contiguous 500 KB free block)

Strengths
1. There is no internal fragmentation in dynamic partitioning.
2. Main memory can be used more efficiently.

Weakness
Due to external fragmentation and the use of memory compaction, the processor cannot be used efficiently.

Q17. What is fragmentation? Explain in detail about internal and external fragmentation.

Answer :

Fragmentation

Fragmentation is defined as wastage of memory space. It is a problem that occurs in dynamic memory allocation systems when some blocks are too small to satisfy a request.

The problem of memory fragmentation when allocating (or freeing up) disk space is much like the problem of variable partitioning that exists in multiprogramming systems while allocating primary memory. Continuous allocation and cleanup of memory fragments can create a huge number of fragments in memory, which can result in various performance problems.
(i) Internal Fragmentation

Internal fragmentation means wastage of memory space when a partition is allocated to a process whose size is less than the partition. For example, in multiprogramming with a fixed number of partitions, when a partition of 400 KB becomes free, the operating system scans through the queue and selects the largest job that fits, say a 385 KB job, and loads it into the 400 KB partition. In this case 15 KB of memory is wasted. This is called internal fragmentation. In general, if there is a partition of 'm' bytes and a program of size 'n' bytes where m > n, and the program is loaded into the partition, then the internal fragmentation is equal to (m – n) bytes.

(ii) External Fragmentation

When dynamic partitioning is used for the allocation of processes, some of the memory space can be left over after each allocation. For instance, consider the following example,

Figure: Allocating (or) Freeing up Disk Space (processes of 200 kB, 250 kB and 150 kB placed in partitions of 250 kB, 300 kB and 200 kB respectively)


In the above example, a 200 kB process is allotted to a partition of 250 kB, a 250 kB process is allocated to a 300 kB partition and a 150 kB process is allocated to a 200 kB partition. These three processes waste 50 kB each, which within each partition is internal fragmentation. These 50 kB fragments cannot be used by any process larger than 50 kB, irrespective of the overall free space of 150 kB. This problem is referred to as external fragmentation.

If there is a partition of size 'm' bytes and there are no programs in the queue of size ≤ m bytes, then the entire partition is left empty until a job of size ≤ m bytes arrives in the queue. Thus, the external fragmentation is equal to 'm' bytes, i.e., the entire partition.

For example, if a 100 KB partition becomes free and there are no programs in the queue of size ≤ 100 KB, then the entire 100 KB partition is left empty. This is called external fragmentation. One solution to the problem of external fragmentation is compaction.

3.1.4 Segmentation, Paging


Q18. Write a brief note on segmentation.

Answer :
Through the concept of segmentation, the programmer is allowed to view memory as consisting of multiple address spaces or segments. Segments may be of unequal size. Memory references consist of an address of the form (segment number, offset).

This organization has a number of advantages. They are as follows,

(i) It simplifies the handling of growing data structures.

(ii) It allows programs to be altered and recompiled independently, without requiring the entire set of programs to be relinked and reloaded.

(iii) It lends itself to sharing among processes.

(iv) It lends itself to protection.

For a virtual memory scheme based on segmentation, there is a unique segment table for each process. Because only some of the segments of a process may be in main memory, a bit is needed to indicate whether the corresponding segment is present in main memory; if it is present, the entry also includes the starting address and length of the segment. There is also a modify bit indicating whether the contents of the segment have been altered since the segment was last loaded into main memory. If protection or sharing is managed at the segment level, there are other control bits for them.

The segment table is of variable length and so cannot be held in registers but must be held in main memory. When a particular process is running, the starting address of the segment table for that process is held in a register. The segment number of a virtual address is used to index the table and look up the main memory address of the start of the segment. This is added to the offset portion of the virtual address to produce the desired real address.
Figure: Address Translation in a Segmentation System (the segment number of the virtual address indexes the per-process segment table; the segment's base is added to the offset d to produce the main memory address)
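The translation just described can be sketched in a few lines. The following is a minimal illustration (the table contents and function name are assumptions, not the book's code): each segment-table entry holds a (base, length) pair, the offset is checked against the length, and valid offsets are added to the base.

# segment number -> (base address, segment length); example values assumed
segment_table = {
    0: (1400, 1000),
    1: (6300, 400),
}

def translate(seg, offset):
    if seg not in segment_table:
        raise ValueError("invalid segment number")
    base, length = segment_table[seg]
    if offset >= length:                  # offset beyond the segment: trap
        raise ValueError("addressing error")
    return base + offset                  # physical (real) address

print(translate(1, 53))                   # -> 6353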


Q19. Explain the paging concepts.
Answer :
Paging
Paging is a non-contiguous memory allocation scheme. It divides the physical memory into fixed-sized blocks called frames and the logical memory into pages. The page and block are of the same size. Hence, one logical page fits exactly in one physical block.
Each process 'Pi' residing on disk is composed of several pages. Whenever 'Pi' has to be executed, its pages are brought into main memory frames. There is no restriction that the pages be contiguous; they can be scattered here and there in main memory. Each process maintains a table which maps its page numbers to the frame numbers in which they reside.
Figure (1): Mapping Pages to Frames (process Pi's page table maps pages 1, 2, 3 and 4 to main memory frames 5, 3, 6 and 1 respectively; the remaining frames hold other pages)
Paging Implementation

In the basic implementation of paging, the physical memory is divided into fixed-sized blocks called frames and the logical memory into pages.

A page table is maintained which stores the base address of each page available in main memory, and the offset acts as a displacement within the page. The base address is combined with the offset to get the address of a physical memory location.

The system makes use of the page table to implement paging. When a process is to be executed, its pages are loaded into free frames in the physical memory. The information about the frame number where each page is stored is entered in the page table. During process execution, the CPU generates a logical address that comprises a page number (p) and an offset within the page (d). The page number p is used to index into the page table and fetch the corresponding frame number. The physical address is obtained by combining the frame number with the offset. A logical address thus consists of a page number and a page offset:
Page number (p) = n – m bits (22 bits)    Page offset (d) = m bits (10 bits)

p = index into the page table
d = displacement within the page

Size of the logical address (n) = 32 bits
Number of bits to represent the page offset (m) = 10
Number of bits to represent the page number (n – m) = 22
Page size = 2^m = 2^10 = 1024 bytes

The lower-order bits of a logical address represent the page offset and the higher-order bits represent the page number. The maximum size of the logical address space is 2^32 bytes, i.e., 4 GB.

So the maximum length of a page table of a process = 2^22 = 4M entries; with each entry being 4 bytes, such a page table would occupy 16 MB in RAM.
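A minimal sketch of this 22-bit/10-bit split using shifts and masks (the function names are illustrative, not from the text):

PAGE_OFFSET_BITS = 10
PAGE_SIZE = 1 << PAGE_OFFSET_BITS          # 2**10 = 1024 bytes

def split(logical):
    page = logical >> PAGE_OFFSET_BITS     # high-order 22 bits
    offset = logical & (PAGE_SIZE - 1)     # low-order 10 bits
    return page, offset

def physical(frame, offset):
    return (frame << PAGE_OFFSET_BITS) | offset

print(split(5000))                         # -> (4, 904), since 5000 = 4*1024 + 904
print(physical(9, 904))                    # -> 10120, if page 4 maps to frame 9
                                           #    (an assumed mapping)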

There is a page table available which stores the base address of each page available in main memory and the offset acts
as descriptor within the page. The base address is combined with offset to get address of a physical memory location. The igure (2)
shows the hardware requirement of paging scheme.
Figure (2): Paging Implementation (the CPU's logical address is split into page number and offset; the page table supplies the frame number, which is combined with the offset to form the physical address in main memory)
Q20. Explain paging hardware with translation look-aside buffer.
Answer :
The page table is implemented in several ways by various operating systems. The simplest way is to use a set of registers with high-speed logic to translate addresses easily and efficiently. But such an implementation cannot store more than about 256 page-table entries, whereas today's computers require nearly a million page-table entries, which is infeasible to implement in hardware registers. Some systems store the page table in memory and store its address in a special register called the Page Table Base Register (PTBR).

A feasible solution to this problem is to use a fast, small, associative cache memory called a Translation Look-aside Buffer (TLB) to look up and translate addresses. A TLB entry is divided into two parts, a key and a value. When a key is provided to the TLB, it is looked up simultaneously in all entries (a typical property of associative memory) and the corresponding value field is returned if found (a TLB hit). Otherwise, if it is not found (a TLB miss), the page table present in main memory is used to map the logical address to a physical address. The following figure shows the implementation of paging using a TLB.
Figure: Implementation of Paging using a TLB (on a TLB hit the frame number comes directly from the TLB; on a TLB miss the page table in main memory is consulted and the translation is added to the TLB)


Q21. Explain how sharing of pages is accomplished in a paged environment.
Answer :
Sharing is one of the important advantages of the paging scheme. Reentrant code (or pure code) is code that does not modify itself. For example, in a compiler application, the user submits a source program and the compiler generates equivalent object code, but the compiler itself is not modified. Such reentrant code can be shared among several processes; other applications that can be shared include compilers, run-time libraries, database systems etc.

Consider an example where two processes are executing a compiler application of size three pages. Since the compiler code is reentrant, it is shared between them.
In addition to this, each process has a non-shareable page to store its own source program. The following figure shows the sharing of pages by two different processes. Here, five pages are required instead of eight, thereby saving three pages.

Figure: Sharing Pages among Processes (both page tables map the compiler pages Com1, Com2 and Com3 to frames 2, 4 and 8, while each process keeps its own source page, Src-1 in frame 5 and Src-2 in frame 0)

3.2 VIRTUAL MEMORY

3.2.1 Introduction

Q22. Explain about virtual memory and its techniques.

Answer :

Virtual Memory

Virtual memory is a concept that gives programmers the illusion that they have a large memory at their disposal even though the available physical memory is very small. Programmers can write programs that take more memory than the available physical memory (RAM). This is achieved by storing their big programs in secondary memory or disk storage; portions of these programs are then brought into main memory whenever needed for execution. Virtual memory also allows processes to share files, libraries etc.

The virtual address space is the set of logical addresses used by a process. The physical address space is the set of effective memory addresses where instructions or data bytes are actually located. The Memory Management Unit (MMU) maps the virtual addresses to their respective physical addresses.

Consider the following figure, which shows the organization of the virtual address space.

Figure: Organization of Virtual Address Space (the code and data sections sit at the low addresses starting at 0×0000, the heap lies above them, and the stack sits at the top near F×FFFF)

The above organization allows the heap to grow upwards and the stack to grow downwards. The empty space (hole) between the stack and the heap is part of the virtual address space; an address space with such holes is known as a sparse address space. An advantage of using virtual memory is the sharing of pages, which leads to the following benefits,

v Several processes can share system libraries by mapping them into their virtual address spaces. These libraries are stored as pages in physical memory.

v Inter-process communication can be done by sharing virtual memory among several processes.

v Process creation can be sped up by sharing pages with the fork() system call.

Virtual Memory Techniques

The two fundamental techniques for implementing virtual memory are paging and segmentation.

1. Paging

For answer refer Unit-III, Page No. 85, Q.No. 19.

2. Segmentation

For answer refer Unit-III, Page No. 84, Q.No. 18.

3.2.2 Demand Paging, Page Replacement

Q23. Explain about demand paging and its implementation with suitable example.

Answer :

Demand Paging

Whenever a program has to be executed, its code, which is stored in the form of pages in secondary memory, has to be fetched into the main memory. The traditional fetching scheme is called prepaging; it fetches all the pages of a process irrespective of their need.

Another technique is called demand paging. Here, pages are fetched into main memory only when they are needed by processes. Instead of fetching or swapping in all the pages of a process, the lazy swapper swaps in only the selected or required pages. It never swaps in a page unless it is asked for by some process. As pages, rather than whole processes, are swapped in and out, the word pager is used here instead of swapper.
Steps to be Taken When a Page Fault Occurs

When a process tries to access a page which has not been fetched into memory, the condition is called a 'page fault' trap, which causes the operating system to load the desired page into memory. The following is the sequence of steps taken when a page fault occurs,

1. The operating system notices the page fault and verifies whether the reference is valid or not.

2. Then it searches for a free frame.

3. It schedules a disk read operation to fetch the desired page into the free frame found in step (2).

4. After a successful read or fetch, the page table is updated.

5. The process is restarted and it accesses the page.


Figure: Demand Paging (a reference to page X traps to the operating system, which finds the page on the backing store, fetches it into a free frame, updates the page table and restarts the program)

Pure demand paging is a technique where a process starts execution without a single page in memory. As it proceeds with execution, it gradually causes page faults and those pages are fetched into memory. A page is never brought in until it is required by some process.

The hardware necessary to implement demand paging is a page table for performing paging and swap space for performing swapping.

Example

Consider a program containing 4 pages, i.e., P0 to P3. At the beginning, P0 is loaded into the main memory. As soon as a page is transferred into the main memory, the page table entry related to that page is updated. A page table entry consists of two fields: the page frame and a validity bit.

If the bit is set to valid (v), then the page is available in the main memory at the specified frame. The format of the page table is shown below,
Page    Frame    Validity Bit
P0      3        v
P1      5        v
P2      -        i
P3      7        v

Figure: Format of the Page Table (pages P0, P1 and P3 are in main memory at frames 3, 5 and 7; P2 resides only on the storage device)


In the above page table, the valid bit is set for three pages, i.e., P0, P1 and P3, which are in main memory at frames 3, 5 and 7 respectively. The invalid bit (i) is set for page P2, which means that P2 is not present in the main memory; the page is present only on the disk. The execution of a P0 instruction results in some address, which is known as the logical address. The CPU finds whether the address falls in page 0. If it falls in page 0, then the CPU continues the execution smoothly. Otherwise, a "page fault" occurs, which means that main memory does not contain the required page. Then the required page is fetched into the main memory.
Implementation of Demand Paging
The implementation of demand paging can be done by keeping a valid bit in the page table. This bit signifies that the related page is either present in memory or on disk. If the valid bit is 0, this indicates that the page is not present in memory. It means that the page might be available on disk or it might be an invalid address. A page that has been requested must have its valid bit set to 1. If it is not, then the OS must check whether the address is valid or not. If the page is not present in memory, then the OS performs the following steps,
(a) It selects a page to replace using a page replacement algorithm
(b) It invalidates the old page's entry in the page table
(c) It loads the new page from the disk into memory
(d) It performs a context switch to another process while the I/O is in progress
(e) It is interrupted when the page has been loaded completely
(f) It updates the page table entry
(g) It performs a context switch back to the faulting process.
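A minimal sketch of steps (a), (b), (c) and (f) above (the data structures and the FIFO victim choice are assumptions made for illustration; the context switching of steps (d), (e) and (g) and the actual disk I/O are only stubbed):

from collections import deque

NUM_FRAMES = 3
page_table = {}              # page -> frame; absence means valid bit = 0
resident = deque()           # resident pages in arrival (FIFO) order

def read_from_disk(page):
    # stub for the disk read of step (c); returns the frame used
    return len(resident)

def handle_page_fault(page):
    if len(resident) >= NUM_FRAMES:
        victim = resident.popleft()     # (a) select a page to replace
        del page_table[victim]          # (b) invalidate its page-table entry
    frame = read_from_disk(page)        # (c) load the new page from disk
    page_table[page] = frame            # (f) update the page table
    resident.append(page)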
Q24. Explain about page replacement algorithms.
Answer : Model Paper-II, Q11(a)

Page Replacement
The operating system finds a free frame in main memory when a page fault occurs, so as to load the desired page. If there are no free frames available at that particular instant, then the operating system uses a page replacement algorithm to select a frame, swap it out and bring the new page into this frame. The frame which is selected for removal is called the victim. Figure (1) shows an example of replacing a page X with a page Y.
When demand paging is used, the page replacement technique automatically comes into the picture.
Figure (1): Page Replacement (replacing page X with page Y: the victim page X is swapped out to the backing store and its page-table entry is marked invalid; page Y is swapped in and its entry is marked valid)


Page Replacement Algorithms
There are four types of page replacement algorithms. They are as follows,
1. First In First Out (FIFO)
2. Optimal Page Replacement (OPT)
3. LRU page replacement
4. LRU-approximation or clock page replacement.
1. First In First Out (FIFO) Algorithm
It is the simplest algorithm, which maintains a First In First Out (FIFO) queue. Whenever a page is brought into main memory, it is attached at the tail of this queue and the victim is selected from the head. An alternative is to maintain the entry time of each page and replace the oldest page among them. Consider the following reference string, or page request sequence,
3, 4, 5, 6, 4, 7, 4, 0, 6, 7, 4, 7, 6, 5, 6, 4, 5, 3, 4, 5
Consider a memory with three page frames. The first three references (3, 4, 5) cause page faults and the pages are fetched into memory one after the other. The next reference is 6; as it is not present in the frames, a page fault occurs (represented by * at the bottom of the figure). Now the algorithm replaces 3 because it was the oldest page. The next reference is 4; as it is already available, no page fault occurs, and the algorithm proceeds in a similar manner. Figure (2) shows the page-replacement sequence.
3 4 5 6 4 7
3 3 3 6 6 6
4 4 4 4 7
5 5 5 5
* * * * *

4 0 6 7 4 7 6
6 0 0 0 4 4 4
7 7 6 6 6 6 6
4 4 4 7 7 7 7
* * * * *

5 6 4 5 3 4 5
4 4 4 4 3 3 3
5 5 5 5 5 4 4
Number of
7 6 6 6 6 6 5 page faults = 15
* * * * *
Figure (2): FIFO Page Replacement
The limitation of this algorithm is that it sometimes replaces a page which is about to be used again immediately. Hence, execution becomes slow.
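The trace in figure (2) can be checked mechanically. The following is a minimal sketch (not from the text) that counts FIFO page faults for the reference string above with three frames; it reproduces the 15 faults shown in the figure:

from collections import deque

def fifo_faults(refs, num_frames):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()         # evict the oldest page
            frames.append(page)
    return faults

refs = [3, 4, 5, 6, 4, 7, 4, 0, 6, 7, 4, 7, 6, 5, 6, 4, 5, 3, 4, 5]
print(fifo_faults(refs, 3))              # -> 15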
SIA PUBLISHERS and dISTRIBUTORS PVT. LTd. 91
Computer SCienCe paper-V operating SyStemS
2. Optimal Page Replacement (OPT) Algorithm
This algorithm has the lowest page fault rate and does not suffer from Belady's anomaly. Here the victim is selected such that it is the page that is not going to be used for the longest period of time. Consider the following reference string,
3, 4, 5, 6, 4, 7, 4, 0, 6, 7, 4, 7, 6, 5, 6, 4, 5, 3, 4, 5
Initially, the three empty frames are filled by three page faults for references 3, 4, 5. The next reference, to 6, causes a page fault and the algorithm replaces 3, because among 3, 4 and 5 it is the one that is not going to be used for the longest period of time (until the 18th reference). Similarly, the algorithm proceeds, causing nine (9) page faults instead of the 15 of the FIFO algorithm. Figure (3) shows the same.
The igure (3) shows the same.
3 4 5 6 4 7
3 3 3 6 6 6
4 4 4 4 4
5 5 5 7
* * * * *

4 0 6 7 4 7 6
6 6 6 6 6 6 6
4 0 0 0 4 4 4
7 7 7 7 7 7 7
* *

5 6 4 5 3 4 5
6 6 6 6 3 3 3
4 4 4 4 4 4 4
5 5 5 5 5 5 5
* *

Figure (3): Optimal Page Replacement


The limitation of this algorithm is that it requires future knowledge of references or page request.
3. Least Recently Used (LRU) Page Replacement Algorithm
This algorithm replaces the page which has not been used for the longest period of time. It is similar to OPT algorithm
except it looks backward in time. Hence, it does not need future knowledge of page references. However, it is dificult to
implement because we need to store the history of page references and may require some hardware assistance. Consider the
following reference string,
3, 4, 5, 6, 4, 7, 4, 0, 6, 7, 4, 7, 6, 5, 6, 4, 5, 3, 4, 5
3 4 5 6 4 7
3 3 3 6 6 6
4 4 4 4 4
5 5 5 7
* * * * *

4 0 6 7 4 7 6
6 0 0 0 4 4 4
4 4 4 7 7 7 7
7 7 6 6 6 6 6
* * * *

5 6 4 5 3 4 5
5 5 5 5 5 5 5
7 7 4 4 4 4 4
6 6 6 6 3 3 3 Total page
* * * faults = 12
Figure (4): Least Recently Used (LRU)
92 SIA PUBLISHERS and dISTRIBUTORS PVT. LTd.
UNIT-3 MaIN aNd vIrTUal MeMory, Mass-sTorage sTrUcTUre, fIle sysTeMs aNd IMpleMeNTaTIoN Computer SCienCe paper-V
The following are two techniques to implement the LRU algorithm.
(i) Using Counters
Associate each page-table entry with a field to store the time of use, and maintain a counter whose value is incremented on every memory reference. Every time a page is referenced, the contents of the counter are copied to the time-of-use field of its page-table entry. The replacement algorithm selects the page with the smallest time-of-use value.
(ii) Using a Stack
In this method a stack is used to keep the page numbers. Every time a page is referenced, it is removed from the stack and placed on top of the stack. Hence, the least recently used page will always be at the bottom of the stack.
Example
Reference string: 4, 7, 0, 7, 0, 1, 2, 7, 1, 2

Figure (5): LRU using a Stack (before the reference to 7, the stack holds, from top to bottom, 2, 1, 0, 7, 4; after the reference, 7 is moved to the top, giving 7, 2, 1, 0, 4)
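The stack technique translates directly into code. The following is a minimal sketch (not from the text) that counts LRU page faults for the reference string of figure (4) with three frames, keeping the most recently used page at the end of a list; it reproduces the 12 faults shown there:

def lru_faults(refs, num_frames):
    stack, faults = [], 0                # most recently used page at the end
    for page in refs:
        if page in stack:
            stack.remove(page)           # referenced again: move to the top
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.pop(0)             # evict the least recently used page
        stack.append(page)
    return faults

refs = [3, 4, 5, 6, 4, 7, 4, 0, 6, 7, 4, 7, 6, 5, 6, 4, 5, 3, 4, 5]
print(lru_faults(refs, 3))               # -> 12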

4. LRU-Approximation or Clock Page Replacement Algorithm

This algorithm requires an additional bit known as the use bit. The use bit of a frame is set to one when a page is first loaded into it and set to one again whenever that page is subsequently referenced.

In this policy, pages are replaced by maintaining a circular buffer with which a pointer is associated. When a page is replaced, the pointer is set to indicate the next frame in the buffer. When a page is to be replaced, the operating system scans the buffer to find a frame with use bit zero. Each time it encounters a frame with use bit one, it resets that bit to zero. If one of the frames in the buffer has use bit zero at the beginning of this scan, the page in the first such frame is replaced. If all the frames have use bit one, then the pointer makes one complete cycle through the buffer, setting all the bits to zero, stops at its original position and replaces the page in that frame. Consider the following reference string: 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2.
Figure (6): LRU Approximation (clock policy trace for the reference string 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 with three frames; an asterisk marks use bit 1, the arrow marks the pointer position, and F marks the five references that cause a page replacement)

In the above figure, the presence of an asterisk indicates use bit 1 and the arrow indicates the position of the pointer. A frame with an asterisk will not be used for replacement. The number of page replacements in this example is five. In all processors that support paging, a modify bit is associated with every page and every frame of main memory. It is needed so that, when a page has been modified, it is not replaced until it has been written back into secondary memory. If the use bit and the modify bit are taken into account, each frame falls into one of four categories,

(i) Not accessed recently, not modified (u=0; m=0)

(ii) Accessed recently, not modified (u=1; m=0)

(iii) Not accessed recently, modified (u=0; m=1)

(iv) Accessed recently, modified (u=1; m=1)

3.2.3 Allocation of Frames

Q25. Explain about allocation of a minimum number of frames to a process.

Answer :

There are various strategies for allocating frames to a process. Allocate frames to a process from the available total number of frames and not more than that (unless page sharing is provided). Allocate at least a minimum number of frames to each process. If the number of frames allocated is less than this minimum, there is an increase in the page fault rate, which in turn decreases the speed of process execution. Hence, we must allocate the minimum number of frames in order to avoid this undesirable performance.

If a page fault occurs before the completion of the currently executing instruction, then that instruction must be executed again from the beginning. As a result, we must provide enough frames to hold all the different pages that a single instruction can reference. Consider the example of a machine in which all memory-reference instructions have only one memory address. The minimum number of frames required here is two, i.e., one frame for the memory reference and another for the instruction. As another example, consider a computer architecture that allows one-level indirect addressing. In this case the minimum number of frames required is three.

If an instruction, say a load, on page 20 points to an address on page 16, which in turn points to an address on page 10, then this instruction is said to employ one-level indirect addressing, and hence there is a requirement of three frames for the process.

Generally, computer architectures that allow multiple levels of indirection face a worst-case scenario. Suppose each 16-bit word consists of a 15-bit address and a 1-bit indirect flag. Theoretically, a simple load instruction (which allows indirect addressing) could then, through a chain of indirections, reference every page in the virtual memory, which would require the entire virtual memory to be in physical memory at once.

To avoid this worst-case scenario, the number of levels of indirection in an instruction is limited. For example, assume that the limit on the number of levels of indirection is 12. Initially, a counter is set to 12; for each successive indirection it is decremented. When the counter reaches zero, a trap occurs, indicating that the limit has been exceeded. Limiting the levels of indirection in this way reduces the maximum number of memory references per instruction to 13. Therefore, the minimum number of frames per process will be 13.

The computer architecture defines the minimum number of frames for each process, and the amount of available physical memory defines the maximum number of frames.

Q26. Write briefly about the following,

(a) Equal allocation

(b) Proportional allocation.

Answer :

(a) Equal Allocation

The equal allocation algorithm allocates m frames among n processes such that each process gets an equal number of frames; that is, it allocates m/n frames to each process. The remaining frames, if any, are treated as a free-frame buffer pool. For example, if there are 65 frames and 4 processes, then equal allocation gives 65/4 = 16 frames to each process. The remaining one frame is used as a free-frame buffer pool.

The drawback of equal allocation is that it wastes memory when the processes need differing amounts of memory. For example, assume that there are 40 free frames, each of size 1 KB, and only two processes are running in the system. One process is small and needs only 10 KB while the other process is very large and needs 127 KB of memory. In this case, equal allocation gives 40/2 = 20 frames to each process. Since the small process needs only 10 frames, the extra frames allocated to it are a waste of memory.

(b) Proportional Allocation

The proportional allocation algorithm overcomes the problem of equal allocation by allocating the available frames to each process according to the requirements of that process. Assume that the size of the virtual memory for process Pi is si and the total number of available frames is m. Then, using the proportional allocation algorithm, each process is allocated ai frames, which is defined as,

ai = si / S × m, where S = Σsi

The algorithm needs to adjust the value of ai so that it is not less than the minimum number of frames required by the instruction set, and so that the total number of frames allocated among the n processes is not more than m.

For example, assume there are 40 frames and two processes, one of 100 pages and another of 10 pages. Then proportional allocation gives 100/110 × 40 ≈ 36 frames to the first process and 10/110 × 40 ≈ 3 frames to the second.

If the system allows multiprogramming, then the allocation of frames among processes, with either equal or proportional allocation, depends on the level of multiprogramming involved. If the multiprogramming level increases, then each process needs to free some frames so that they can be allocated to the new process. If the multiprogramming level decreases, then the frames of the completed processes are split among the remaining processes.
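The formula in (b) is easy to compute. A minimal sketch (not from the text), using integer division and the 40-frame example above:

def proportional_allocation(sizes, m):
    # a_i = s_i / S * m, truncated to whole frames
    S = sum(sizes)
    return [s * m // S for s in sizes]

print(proportional_allocation([100, 10], 40))   # -> [36, 3]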
Q27. Compare global and local allocation.

Answer :

Page replacement can be categorized as global (or) local. A local replacement policy chooses only among the resident pages (frames) of the process that generates the page fault, whereas a global replacement policy considers all the frames present in memory for replacement, regardless of which process currently owns a particular page frame.

When a local replacement strategy is used, the number of frames allocated to a particular process remains the same, even when frames are being selected on behalf of higher-priority processes. In case of global replacement, the number of allocated frames can change (or increase), because a process may select frames associated with other, typically lower-priority, processes.

In global replacement, a single process might perform differently at different stages, because its paging behaviour depends on the paging behaviour of the other processes. For this reason, handling of page faults is difficult to reason about in case of global replacement. In local replacement, the paging behaviour of a process is not affected by other processes, as it considers only its own frames.

Among these two methods, the global replacement method is generally preferred, because it provides higher system throughput by making unused frames available to all processes.
Q28. Discuss in detail about non-uniform memory access.

Answer :

A processor that takes different (unequal) amounts of time to access different regions of main memory is said to have Non-Uniform Memory Access (NUMA). This difference arises because, in multiprocessing systems, a CPU present on one board can access the memory present on the same board faster than memory present on a different board. Even with the existence of high-speed interconnects such as InfiniBand, typical NUMA systems are slower than systems that carry their CPUs on a single board.

To improve performance, memory-management techniques are employed that take the location of frames into account instead of treating all frames as equal: frames are allocated close to the CPU that uses them, with respect to latency. This NUMA-aware method saves much time when compared with the traditional method of treating the memory as uniform.

The modifications with respect to the algorithms include a scheduler that tracks the CPU on which each process most recently executed. With these two concepts (frame placement and scheduler affinity), cache hits can be improved and memory access times decreased. However, certain complications arise when threads are considered; these can be solved using the 'lgroup' entity included in the kernel of Solaris. These groups carry information about all the processors and memory that are close to each other. Multiple lgroups are created with respect to latency, and each lgroup is responsible for scheduling its associated threads and allocating their memory within the group.

3.2.4 Thrashing

Q29. What is thrashing? Explain the cause for thrashing with appropriate sketch.

Answer :

Thrashing

Thrashing refers to a situation wherein the operating system wastes most of its crucial time in accessing secondary storage, looking for referenced pages that are unavailable in memory. This situation can arise in a demand-paging system as well as in a circular job stream.

In this situation, the OS swaps in the referenced page from secondary storage to primary storage while swapping out certain pages from memory. In other words, thrashing refers to a situation where the processor spends most of its time swapping pages instead of executing instructions.

Reason for Thrashing

One of the causes of thrashing is decreased CPU utilization. In this case, the OS increases the degree of multiprogramming by introducing a new process, which requires extra page frames. The newly joined process then takes frames from active processes. This results in the occurrence of page faults in these active processes as well. As a result, CPU utilization decreases even further, and the system gets stuck in a cycle of page faults. This situation is expressed in the form of a graph as follows,

Figure: Effect of Thrashing (CPU utilization plotted against the degree of multiprogramming)

As can be observed from the graph, CPU utilization increases with the degree of multiprogramming, but at a certain point, after reaching maximum CPU utilization, it decreases sharply. That particular point marks the onset of thrashing.

With the use of a local replacement algorithm, which prevents a faulting process from taking pages associated with other active processes, the effects of thrashing can be limited to a certain extent. To eliminate thrashing completely, a process must be allotted as many page frames as it requires.

One effective method of estimating this requirement is the locality model, which observes that a process, as it executes, moves from one locality to another, where a locality is a set of pages that are actively used together.
Q30. Explain working set model and page fault frequency.

Answer :

Working Set Model

The working set can be defined as the set of pages that a program is currently using (or has most recently used). It is determined by a window parameter 'D' covering the most recent page references. If a page referenced within 'D' is not used again for a certain period of time, it drops out of the working set. For this reason, the working set is said to be an approximation of the locality of the program. This model is used to prevent thrashing.

The size of the 'D' window is important for the accuracy of this model: if it covers very few page references, it will not cover the whole locality; if it covers a large number of references, it may span several overlapping localities; and if it covers an infinite number of references, the working set becomes the set of all pages used during execution. For these reasons, the size of the window is considered one of the most important properties of the model.

The total demand for page frames, D, over all processes (for each process i) is given by the formula,

D = Σ WSSi

where WSSi is the size of the working set of process i. Thrashing occurs if the value of D exceeds the total number of frames, since some of the processes will then not get the frames they need.

Once D is computed, the responsibility of allocating frames rests with the operating system. Based on the availability of frames, the OS may initiate a new process, and if there are not enough frames, one (or) more processes will be terminated (or) suspended.

In case of suspending a process, it is swapped out and its associated frames are allocated to other processes; the suspended process is swapped back in when there are enough available frames. This strategy keeps the degree of multiprogramming as high as possible and provides optimized CPU utilization.

One challenge in the working set model is tracking the page references, because the working set changes with every new reference as the oldest reference drops out of the window. One solution to this is to use a timer interrupt with a fixed interval along with a reference bit.
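The sliding-window idea behind the working set is straightforward to demonstrate. Below is a minimal Python sketch (the reference string and window size are invented for illustration) that computes the working set after each reference, over a window of the most recent references:

def working_set(refs, window):
    # Working set at time t = distinct pages among the last `window` references.
    for t in range(len(refs)):
        yield set(refs[max(0, t - window + 1): t + 1])

refs = [1, 2, 1, 3, 1, 2, 7, 7, 8, 7, 8, 9]   # two localities
for t, ws in enumerate(working_set(refs, window=4)):
    print(t, sorted(ws))

Summing the working-set sizes of all processes at one instant gives D = Σ WSSi, which the OS compares against the number of available frames.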
Page Fault Frequency

The working set model is a rather complex method of preventing thrashing. A more direct method for the same purpose is to use the Page Fault Frequency (PFF). This method imposes control on the page-fault rate by placing upper and lower limits on it.

Figure: PFF (page-fault rate plotted against the number of allocated frames, with an upper and a lower limit)

If the page-fault rate moves over the upper limit, it implies that the process needs additional frames; if it drops below the lower limit, it implies that the process has more frames than it needs, and frames are deallocated from the process. If the page-fault rate is above the upper limit and no free frames are available, the OS does the same as in the working set model, i.e., it swaps out a process and allocates the freed frames to the processes whose PFF is above the upper limit.
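The PFF policy is essentially a feedback loop. The following Python sketch (a simplified model; the limit values and names are assumptions for illustration) shows the decision it makes on each measurement:

def pff_adjust(frames, fault_rate, upper=10.0, lower=2.0, free_frames=0):
    # fault_rate: measured page faults per time unit;
    # upper/lower: assumed control limits on the page-fault rate.
    if fault_rate > upper:
        if free_frames > 0:
            return frames + 1    # rate too high: grant one more frame
        return None              # no free frames: candidate for swap-out
    if fault_rate < lower and frames > 1:
        return frames - 1        # rate very low: reclaim one frame
    return frames                # within limits: leave the allocation alone

print(pff_adjust(frames=8, fault_rate=15.0, free_frames=3))   # -> 9
print(pff_adjust(frames=8, fault_rate=0.5))                   # -> 7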
3.3 Mass-Storage Structure

3.3.1 Overview

Q31. Describe the physical structure of magnetic disks and magnetic tapes with its merits and demerits.

Answer :

Magnetic disks and magnetic tapes are the mass-storage devices that are used in computers for secondary and tertiary storage.

1. Magnetic Disks

Disks are used to store bulk data for the long term in a computer system. A disk is made up of one or more platters, which are flat and circular in shape, like a CD, and have diameters in the range of 1.8 to 5.25 inches. Each platter is a metal disk that is covered with magnetic material on both surfaces, and the data is stored on the surfaces in magnetic form. All platters are fastened to a central spindle that rotates. A read-write head flies above each platter; the gap between the head and the platter is extremely thin (a few microns), and if the head touches the magnetic surface it may destroy the platter. This situation is called a head crash.

For every platter there are two read-write heads, one for the top surface and the other for the bottom. All heads are attached to a disk arm (or disk actuator), which is connected to a stepper motor to advance or move the heads in steps.
Figure (1): Magnetic Disk Structure (platters on a rotating spindle, with read-write heads on a common arm; the tracks in the same position on each platter form a cylinder)

The surface of a platter is logically divided into tracks, which are circular and are subdivided into several sectors. The set of tracks at the same position on each platter makes up one logical cylinder, as shown in figure (1).

Figure (2): Disk Platter

Disk drives usually come in gigabyte capacities, and the speed of rotation of the spindle motor is typically around 7200 r.p.m. (rotations per minute). However, the speed of operation depends on several parameters, like,

(i) Transfer rate, which refers to the rate at which data travels between the computer memory and the disk drive.

(ii) The time required to move the head to the desired cylinder or track, called the seek time (also random access time or positioning time).

(iii) The time required for the desired sector to spin under the head, called the rotational latency.
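As a quick worked example of these parameters (the numbers are assumed for illustration, not taken from the text): on a 7200 r.p.m. drive, one rotation takes 60/7200 ≈ 8.33 ms, so the average rotational latency is about half of that, roughly 4.17 ms. With an assumed average seek time of 8 ms and a transfer rate of 50 MB per second, reading a 4 KB sector costs about 8 + 4.17 + 0.08 ≈ 12.25 ms, almost all of it spent positioning the head rather than transferring the data.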
Now-a-days, several megabytes can be transferred in one second. There are several disks, like floppy disks, that are removable, are enclosed in a plastic case and are cheaper. These disks have to be inserted into a disk drive for reading and writing purposes. They are relatively slow compared to hard disks. A disk drive is connected to a computer via a set of wires called an I/O bus.

Merits
(i) Removable in nature, facilitating different types of disks to be mounted as needed.
(ii) Disks are relatively simple.

Demerits
(i) The head of the disk, flying on its cushion of air, may touch the disk surface, which causes damage.
(ii) If the head crashes, it cannot be repaired; the entire disk needs to be replaced.

2. Magnetic Tapes

This is the earliest secondary-storage device. Tapes are very slow in operation, because they read and write data sequentially; random access to data is not possible. Now-a-days, they are used only to keep huge data which is rarely accessed but has to be kept as a record or history, like census information of a country, stock market history, backup data etc.

The tape has to be inserted in a tape drive to perform a read or write. It is a plastic ribbon with a magnetic coating on the surface to store data. It is carried on two spools, one to wind and the other to rewind the tape. The read-write head reads or writes on the tape using electromagnetic signals.

Figure (3): Magnetic Tape

The storage space provided by tapes is usually 20 GB to 200 GB.

Merits

The advantage of magnetic tapes is that they are very durable. Magnetic tapes can be erased and even reused many times. Magnetic tapes are very reliable and are inexpensive when compared to other secondary storage devices.

Demerits

Magnetic tapes are sequential in nature and cannot perform random access. Data is transferred at a very slow speed compared to magnetic disks.

3.3.2 Disk Scheduling

Q32. Explain various disk scheduling algorithms with an example.

Answer : Model Paper-III, Q11(a)

The various disk scheduling algorithms are,
1. FCFS (First Come First Serve)
2. SSTF (Shortest Seek Time First)
3. SCAN or Elevator
4. C-SCAN (Circular SCAN)
5. LOOK.

1. First Come First Serve (FCFS) Scheduling

It is the simplest among all scheduling algorithms. Here, the request which comes first is served first. However, it cannot promise the fastest service always. Consider the following example, in which the requests for disk blocks come as follows,
Example

104, 189, 43, 128, 20, 130, 71, 73

Suppose the head is initially at 59; it will move to 104, then to 189, 43, 128, 20, 130, 71 and finally reads block 73.

Total head movements = |59 – 104| + |104 – 189| + |189 – 43| + |43 – 128| + |128 – 20| + |20 – 130| + |130 – 71| + |71 – 73|
= 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640

Average seek length = 640/8 = 80

This scheme is inefficient in some situations. Consider the servicing mechanism for requests 128, 20, 130. As 128 comes first, the head moves from 128 to 20 (108 movements), then it moves back to 130 (130 – 20 = 110 movements). It would have been efficient if 128 and 130 had been serviced together (130 – 128 = 2 movements) followed by 20 (130 – 20 = 110 movements). This approach would have avoided nearly 106 head movements, thereby improving the performance.

Figure (1): FCFS Disk Scheduling Algorithm (head movement across cylinders 0, 20, 43, 59, 71, 73, 104, 128, 130, 189)
2. Shortest Seek Time First (SSTF) Scheduling

This algorithm serves the request which is closest to the current head position. This means that the next request selected is the one with minimum seek time from the current head position.

Applying the SSTF algorithm to the previous example: initially the head is at 59, then it goes to 71 and next to 73. From here, 43 is closer than 104, hence 43 is served. Then 20, 104, 128, 130 and 189 are served respectively.

Total head movements = |59 – 71| + |71 – 73| + |73 – 43| + |43 – 20| + |20 – 104| + |104 – 128| + |128 – 130| + |130 – 189|
= 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236

Average seek length = 236/8 = 29.5

This scheme provides a good improvement over FCFS, but it may lead to starvation if requests keep on coming. For example, if incoming requests are consistently close to the head, they are served immediately, and the requests which are far away from the head are kept pending, causing starvation.

Figure (2): SSTF Disk Scheduling Algorithm (head movement across cylinders 0, 20, 43, 59, 71, 73, 104, 128, 130, 189)
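The seek-length arithmetic above is easy to check programmatically. A small Python sketch (the helper names are ours, not from the text) reproduces the FCFS and SSTF totals:

def seek_length(start, order):
    # Sum of head movements when requests are serviced in the given order.
    total, pos = 0, start
    for cyl in order:
        total += abs(pos - cyl)
        pos = cyl
    return total

requests = [104, 189, 43, 128, 20, 130, 71, 73]
print(seek_length(59, requests))                  # FCFS order -> 640

def sstf_order(start, pending):
    # Repeatedly pick the pending request nearest to the current head.
    order, pos, pending = [], start, list(pending)
    while pending:
        nearest = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nearest)
        order.append(nearest)
        pos = nearest
    return order

print(seek_length(59, sstf_order(59, requests)))  # SSTF -> 236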
3. SCAN Scheduling or Elevator Algorithm

In this algorithm, the disk arm moves in one direction, servicing all the requests that come along that route until the last track is reached. After reaching one end, it reverses direction and moves toward the other end, servicing requests along the way. This action is similar to that of an elevator (or lift) in a building, and hence it is also called the elevator algorithm.

Consider the previous example of disk requests 104, 189, 43, 128, 20, 130, 71, 73, where the initial head position is 59. If the SCAN scheduling algorithm is applied, then from 59 the head moves toward the first track, i.e., 0, servicing the requests 43 and 20, and finally reaches 0. Then the disk arm reverses and starts moving toward the other end, servicing requests 71, 73, 104, 128, 130 and 189. Hence, the sequence of serviced positions is now,

59, 43, 20, 0, 71, 73, 104, 128, 130, 189

Total head movements = |59 – 43| + |43 – 20| + |20 – 0| + |0 – 71| + |71 – 73| + |73 – 104| + |104 – 128| + |128 – 130| + |130 – 189|
= 16 + 23 + 20 + 71 + 2 + 31 + 24 + 2 + 59 = 248

Average seek length = 248/8 = 31

The limitation of this scheme is that the waiting time of some requests increases. This happens when most of the requests are present on the other side of the disk, in which case it would be preferable to scan that side first.

Figure (3): SCAN Scheduling Algorithm (head movement across cylinders 0, 20, 43, 59, 71, 73, 104, 128, 130, 189)
4. C-SCAN Scheduling

It is a modified version of SCAN scheduling which overcomes the limitation of SCAN and provides uniform waiting time. It is the same as SCAN, except that the requests are serviced in only one direction of the sweep.

Applying C-SCAN to the previous example with the head initially at 59: as there are more requests on the right side, the head starts moving toward that side, servicing 71, 73, 104, 128, 130 and 189, and finally reaches the end. Then it immediately returns to the other end without servicing any requests on the way back, and starts servicing again from 0. Figure (4) depicts the same.

Figure (4): C-SCAN Scheduling Algorithm (head movement across cylinders 0, 20, 43, 59, 71, 73, 104, 128, 130, 189)
5. LOOK Scheduling

It is a modified version of SCAN which overcomes its limitation by not visiting the extreme ends unnecessarily. It looks for the last request in one particular direction and then reverses from there. It services requests in both directions. Figure (5) shows the head movement when this algorithm is applied to the previous example.

Disk requests: 104, 189, 43, 128, 20, 130, 71, 73

Figure (5): LOOK Scheduling Algorithm (head movement across cylinders 20, 43, 59, 71, 73, 104, 128, 130, 189, reversing at the last request instead of at the disk edge)
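SCAN, C-SCAN and LOOK differ only in the order in which they visit the pending cylinders. The sketch below (Python; it assumes, as in the SCAN example above, that the head first sweeps toward cylinder 0, and it redefines the seek_length helper so it stands alone) reproduces the SCAN total of 248 and shows what LOOK saves by not travelling to the edge:

def seek_length(start, order):
    total, pos = 0, start
    for cyl in order:
        total += abs(pos - cyl)
        pos = cyl
    return total

requests = [104, 189, 43, 128, 20, 130, 71, 73]

def scan_order(start, pending):
    # SCAN: sweep down to cylinder 0, then reverse and sweep upward.
    down = sorted((c for c in pending if c <= start), reverse=True)
    up = sorted(c for c in pending if c > start)
    return down + [0] + up           # the sweep touches the edge (0)

def look_order(start, pending):
    # LOOK: like SCAN, but reverse at the last request, not at the edge.
    down = sorted((c for c in pending if c <= start), reverse=True)
    up = sorted(c for c in pending if c > start)
    return down + up

print(seek_length(59, scan_order(59, requests)))  # -> 248, as in the text
print(seek_length(59, look_order(59, requests)))  # -> 208 (no trip to 0)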
3.3.3 RAID Structure

Q33. What is RAID structure? What are the various RAID levels? Explain them briefly.

Answer :

RAID Structure

RAID stands for Redundant Array of Inexpensive Disks. The idea here is to use multiple disks and keep redundant data, or copies of data, so as to improve performance and reliability. The simplest RAID is to copy a whole disk onto another disk. Thus, if the original disk fails, its duplicate can be used to restore the data; hence, reliability is increased.

Here, both the original and duplicate disks can be used concurrently to access data. Consider an example: we want to read 16 blocks of data from a disk, and it requires 16 milliseconds. If we read the first eight blocks from one disk and the remaining eight from the other simultaneously, then the time required will be only 8 milliseconds, i.e., half of the traditional approach. Hence, performance is improved.

RAID Levels

There are two techniques, mirroring and striping, that are employed in the RAID concept, each performing its own function. Mirroring is a technique of making a duplicate copy of the complete disk so that, in case of a disk crash, the copy can be used; hence, it increases reliability. Striping is a technique of splitting the bits of each byte across multiple disks; in other words, the blocks of a file are split across multiple disks. This increases performance, because all these disks are active in parallel.

As mirroring provides reliability and striping provides efficiency, several combinations of striping and mirroring are used to obtain the various RAID levels, as follows,

RAID Level 0

In this scheme, block striping is done by splitting blocks across several disks without employing any redundancy.

Figure (1): RAID 0 (Non-redundant Striping)

RAID Level 1

In this scheme, disk mirroring is done by creating duplicate copies of disks.

Figure (2): Mirrored Disks (C = Copy)

RAID Level 2

It is also called Memory-Style Error Correcting Code (ECC) organisation. In this scheme, parity bits are used for each byte in memory, and these parity bits are striped across other disks. ECC can reconstruct data which is damaged; hence, a higher level of reliability is obtained, with improved performance from striping. As the following figure indicates, for four disks of data only three disks of parity are required.

Figure (3): Memory-Style Error Correcting Code (P = Parity)

RAID Level 3

It is also called bit-interleaved parity organization, where the memory system relies on the disk controller to detect whether a sector has been read correctly. Hence, it uses only one parity bit for error correction, thereby decreasing the parity overhead: this level of RAID uses only one disk to store the parity of four data disks.

Figure (4): Bit-Interleaved Parity

Another advantage is that the transfer rate for reading and writing is improved, since data is striped across several disks and each operates in parallel. Apart from these advantages, a performance problem of this level, and of all RAID levels that use parity, is the overhead of reading and writing the parity.
RAID Level 4

It is also called block-interleaved parity organisation. In this scheme, block-level striping is performed and, for each set of blocks, a parity block is stored on a separate disk.

Figure (5): Block-Interleaved Parity

In case of a disk failure, the parity disk comes into action and the failed blocks are restored from the other disks. The transfer rate for a single data block is slow, because each block resides on a single disk. However, the overall transfer rate with respect to read access is higher, because multiple read accesses can be processed simultaneously.

One major drawback of this level is that, as there is a single parity disk, the failure of this disk results in the failure of the entire scheme.

RAID Level 5

It is also called block-interleaved distributed parity. It doesn't store parity on a single separate disk, but distributes the parity across several disks: if there are N disks' worth of data, data and parity are distributed among N+1 disks.

The parity blocks for a disk's data are never stored on that same disk, because if the disk fails, its parity bits would be lost along with it. If its parity bits are present on some other disk, then it is possible to recover the data.

Figure (6): Block-Interleaved Distributed Parity

The limitation of this scheme is that if multiple disks fail simultaneously, it is impossible to restore the whole data. Hence, the next level is used.

RAID Level 6

It is also known as the P + Q redundancy scheme, which is very similar to level 5 except that it stores additional redundant information to overcome multiple disk failures. It stores extra parity bits, and sometimes advanced error-correcting codes like Reed-Solomon codes are used. For every 4 bits of data, 2 bits of redundant data are stored, with which it can tolerate two disk failures. It requires one extra disk compared to level 5.

Figure (7): P + Q Redundancy
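For the single-parity levels (3, 4 and 5), the parity block is simply the bitwise XOR of the corresponding data blocks, which is what allows any one failed disk to be rebuilt. A minimal Python sketch of this idea (the block contents are made-up values):

from functools import reduce

def parity(blocks):
    # Parity block = XOR of the corresponding bytes of all blocks.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b'\x0f\xaa', b'\x33\x55', b'\xf0\x0f']   # three data blocks
p = parity(data)

# Disk 1 fails: XOR of the surviving blocks and the parity rebuilds it.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]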
RAID Level 0 + 1

It is a combination of RAID level 0 and RAID level 1, i.e., it provides both performance and reliability. This level is better than level 5, but the disadvantage is that the number of disks needed is more compared to other levels. First, striping is performed; then those stripes are mirrored, as shown in figure (8) below,

Figure (8): RAID Level 0 + 1

RAID Level 1 + 0

It is similar to the above level (i.e., 0 + 1), but here the disks are mirrored first, and then striping is performed on the mirrored pairs.

Figure (9): RAID Level 1 + 0

Q34. Discuss how performance can be improved using parallelism.

Answer :

A single disk cannot fulfill all the storage and transmission requirements of most applications. Thus, a series of multiple disks is used in parallel with a controller. Mirroring and striping are the two major improvement techniques in this approach. When mirroring is used, multiple disks can handle multiple requests simultaneously, with which the processing rate simply doubles. In addition to this, two types of striping techniques can also be used. They are,

(i) Bit-level striping
(ii) Block-level striping.
(i) Bit-level Striping

In this type of striping, each byte of data is split in terms of bits and distributed among multiple disks. For instance, consider eight disks arranged in parallel: for every byte, one bit is written to each of the disks. The array can then be treated as a single disk whose sectors are as many times the normal size as there are disks and, more importantly, whose access rate is higher by the same factor, since every disk is involved in every read/write operation. For implementing the bit-level striping approach, the number of disks used should always be a multiple of 8.

(ii) Block-level Striping

In the block-level striping approach, each file is divided into a certain number of blocks, and these blocks are striped across multiple disks. If there are n disks, block i of a file is assigned to disk (i mod n) + 1. It is the most common approach used for striping data. However, other striping techniques, such as striping the bytes of a sector or the sectors of a block, can also be applied.
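The round-robin placement rule for block-level striping fits in one line of code. A small Python sketch (using the text's 1-based disk numbering):

def disk_for_block(i, n):
    # Block i of a file goes to disk (i mod n) + 1, for n disks.
    return (i % n) + 1

# With 4 disks, the first eight blocks of a file land on disks:
print([disk_for_block(i, 4) for i in range(8)])   # -> [1, 2, 3, 4, 1, 2, 3, 4]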
Striping data across disks has the following advantages,

1. Once the data is striped over multiple disks, the load is distributed among all the disks, so that multiple streams can be served concurrently. Thus, the throughput is significantly increased.

2. Striping of data across multiple disks improves the reliability of the server. For example, if any of the disks on the server fails, then all the lost segments that were stored on that disk can be recovered using the segments that reside on the remaining disks.

3. Striping of data across multiple disks that work concurrently offers a higher bit rate compared to the bit rate offered by a single disk.
iles. A collection of iles is called directory. They are used to
Q35. Briely explain about the implementation of organize iles. Files and directories are the basic mechanism
RAID. Also discuss the problems associated of a ile system.
with RaId.
In order to store data on secondary memory it is
Answer :
necessary to create a ile and input data into it, which is then
Implementation of RAID stored in secondary memory. Without a ile, data cannot be
RAID can be implemented at the following layers, store in secondary memory.
(i) It can be implemented by volume-management software Attributes of File
either within kernel or at system software layer.
Each ile has several attributes. They are,
(ii) It can be implemented within Host Bus Adapter (HBA)
hardware. Here, we have to connect all disks to HBA v Name
if we want them to participate in RAID. It is inlexible It is a symbolic name of the ile which gives
method. convenience to users to refer the ile. It is a string of
(iii) With the help of storage array hardware, RAID can be characters like myile.txt, resume.doc etc.
implemented in the form of subsets of RAID sets which
v Identiier
are created for assisting the creation and management of
ile system with respect to the operating systems. These It is a unique number which is used to identify a
subsets are nothing but the splitted parts of the RAID particular ile by the ile system. It is not in user-readable
sets associated with different levels. form.
v Type
There are different types of files depending on the type of data they store, like text, executable code, sound, video, image etc. This attribute tells the type of data stored in the file.

v Location
It specifies the physical address of the file located on a particular storage device.

v Size
It indicates the size of a file, which is usually measured in bytes. It can also specify the maximum size allowed for the file.

v Protection
This attribute determines the access control information, i.e., who is allowed to use this file and with what privileges.

Other miscellaneous information includes the date and time at which the file was created, the last modification done, the last usage etc.

Q37. What are the file operations? Explain them.

Answer :

File Operations

The different operations performed on a file are known as "file operations". Users perform many operations on files by using the system calls provided by the operating system, for example, create( ), open( ), read( ), write( ), close( ), truncate( ), delete( ) etc. The following are the six basic file operations,

(i) File Creation (create( ))
When a new file is created by the user by calling the respective system call, the operating system performs two operations. Firstly, it allocates space for that file and, secondly, it inserts a new entry for this file in the directory table.

(ii) Writing to a File (write( ))
To write into a particular file, the user must specify the filename and the data that has to be written. The operating system searches the directory to find that file, opens it and uses a write pointer to mark the current location in the file. As data is written into the file, the write pointer is updated to point to the next write location.

(iii) Reading from a File (read( ))
To read from a particular file, the user has to specify the filename. The operating system starts reading the file from the current position of the read pointer, which is maintained for each process reading the file.

(iv) File Seek (Repositioning within a File) (fseek( ))
It refers to repositioning the current-file-position pointer to a specified location within a file. It is done before reading from or writing into the file.

(v) File Deletion (delete( ))
The user specifies the filename to be deleted. The operating system searches for the file in the directory table and deletes the entry of that file from the directory. Then it marks the space occupied by that file as free.

(vi) Truncating a File (truncate( ))
This operation erases some content of the file but keeps all its attributes untouched, except the file length.

The basic operations discussed above can be combined in various ways to create other operations, such as appending data at the end of a file, renaming an existing file etc. Before performing any of the file operations, the file needs to be opened using the open( ) system call. It accepts various mode information specifying how the file has to be opened, like read-only, read-write, append mode etc.

Q38. Define the term 'lock'. What are the two types of it? Discuss file types.

Answer :

Lock

If multiple processes are accessing a particular file, then simultaneous operations on this file could make its contents inconsistent. Hence, some operating systems use a locking mechanism to prevent others from gaining access while the file is in use. The two types of locks are,

(a) Shared Lock
Several processes can acquire this lock concurrently. They are allowed only to read the file.

(b) Exclusive Lock
Only one process can acquire this type of lock at a particular instant of time. This lock is applied when a write operation has to be performed on the file.
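Most of the operations above map directly onto system calls. The following Python sketch (Unix-specific; the filename is invented, and these os/fcntl calls are just one way of exercising the operations — an illustration, not the book's own example) walks through create, write, seek, read, the two lock types from Q38, truncate and delete:

import os, fcntl

fd = os.open("myfile.txt", os.O_CREAT | os.O_RDWR, 0o644)   # create()
os.write(fd, b"hello, file system\n")                        # write()

os.lseek(fd, 0, os.SEEK_SET)        # fseek(): back to the beginning
data = os.read(fd, 5)               # read(): -> b'hello'

fcntl.flock(fd, fcntl.LOCK_SH)      # shared (read) lock
fcntl.flock(fd, fcntl.LOCK_UN)      # release
fcntl.flock(fd, fcntl.LOCK_EX)      # exclusive (write) lock
fcntl.flock(fd, fcntl.LOCK_UN)

os.ftruncate(fd, 5)                 # truncate(): keep only the first 5 bytes
os.close(fd)
os.unlink("myfile.txt")             # delete(): remove the directory entry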
File Types

There are various types of files, depending on the types of data they store. This type distinction is useful for the operating system to determine whether it supports that format or not; if it recognizes the type, it can treat the file in the respective manner. Normally, the file type is specified in the filename itself. The filename consists of two parts: first the user-defined name, and then an extension, separated by a period (.). An example of a text document in the Windows operating system is myfile.txt, where "myfile" is the filename and the extension ".txt" defines that this is a text file. The following are various popular file type extensions,

Extension                                    File type
exe, com, bin                                Executable or ready-to-run program files.
obj, o                                       Object files generated by compilers.
c, cc, java, htm or html, vb etc.            Source files belonging to various programming languages.
asm, a                                       Assembly language files.
bat, sh                                      Files containing sequences of commands executable at a command prompt or shell.
doc, txt                                     Text documents or files.
wp, rtf, doc, tex                            Various word processor application files.
lib, a, so, dll                              Various library files.
pdf, jpg, ps                                 Format files that can be printed out using a printer.
rar, zip                                     Archive files where some related files are grouped together.
mpeg, mov, rm, mp3, avi, 3gp, divx, axxo     Files containing multimedia data like video, audio etc.
Different operating systems may have different extensions for the same type of file. Another advantage of using extensions is that the operating system can associate with each file type the application that is needed to open it. Whenever the user opens a particular file by double-clicking its icon (in a GUI environment), the operating system implicitly starts the application that supports that file format.

Unix uses a concept called a magic number, stored in the file itself, to indicate the type of the file.

Q39. Write short notes on file structure.

Answer :

Different types of files have different internal structures. For example, the source file of a particular programming language has a structure which matches the expectations of the compiler that reads it. In the same way, a binary file is expected to consist of a series of 0s and 1s. Modern operating systems support a variety of file types like text, images, video, audio etc. The default file type that every operating system must support is the executable file, so as to load and run programs.

When an operating system does not support a particular file type, new application programs that are capable of reading and understanding the desired file structure have to be installed. For example, Windows XP does not support the "rm" file type by itself; hence, an application such as RealPlayer that can read it has to be installed.

The files in Mac OS consist of two parts, a resource fork and a data fork. The resource fork includes information like the labels on buttons etc., which can be relabeled in some other language (like Arabic, French etc.) using tools provided by Mac OS. The data fork consists of program code and data.

The internal file structure consists of a number of variable-sized logical blocks, which are combined into one or more fixed-size physical block(s) of the disk. This is called the packing technique, and it can be done either by the user's program or by the operating system.

In some operating systems, like Unix, all files are treated as streams of bytes, and each byte can be addressed using its offset from the beginning or end of the file.
Q40. Write short notes on,
(a) Sequential access
(b) Direct access
(c) Indexed access
(d) Indexed sequential access.

Answer :

(a) Sequential Access

Among all the access methods, it is considered the simplest. As the name itself suggests, it is the sequential (step-by-step) processing of the information present in a file. Due to its simplicity, most compilers, editors etc. use this method.

In this method, processing is carried out with the use of two operations, namely read and write. The read operation (read next) reads the portion of the file at the file pointer, which then advances automatically, tracking the I/O location. The write operation (write next) appends at the end of the file and advances the pointer to the new end.

In this type of access, while processing the records sequentially, some of the records can be skipped in either direction (forward or backward), and the position can also be reset to the head (beginning) of the file. The following figure shows a tape model of sequential file access,

Figure: Sequential Access of a File (a tape model: the current position moves between beginning and end; rewind moves backward, while read or write advances)

(b) Direct Access

This access method is also called relative access; here, records can be read irrespective of their sequence. This means the file is accessed as it would be from a disk, where each record carries a sequence number. For example, block 40 can be accessed first, followed by block 10 and then block 30, and so on. This eliminates the need for sequential read (or) write operations.

A major example of this type of file access is a database, where the block corresponding to a query is accessed directly for an instant response. This type of access saves a lot of time when a large amount of data is present. In such cases, hash functions and index methods are used to search for particular blocks.

The read next and write next operations of sequential access are modified to read n and write n, where n is the block number. A more promising approach is to use direct access to position within the file and sequential access to read within that block. This can be done by using relative block numbers, which are indexes relative to the beginning of the file. For example, relative block numbers 0, 1, 2, ... can be allotted to actual block numbers 72, 1423, 20 etc. This approach is adopted by some operating systems, while others use either sequential (or) direct access alone.

(c) Indexed Access

This method is typically an advancement of the direct-access method with the introduction of an index. A particular record is found by browsing through the index, and the file is then accessed directly with the use of a pointer.

To understand the concept, consider a book store where the database contains a 12-digit ISBN and a 4-digit product price for each book. If the disk can carry 2048 bytes (2 KB) per block, then 128 records of 16 bytes (12 for the ISBN and 4 for the price) can be stored in a single block. A file carrying 128,000 records thus occupies 1,000 blocks, and the index needs one entry (of about 10 digits) per block. To find the price of a book, a binary search is performed over the index, which identifies the block carrying that book.

The drawback of this method is that it is ineffective for very large databases with very large files, since the index itself becomes too large.

(d) Indexed Sequential Access

To overcome the drawback associated with indexed access, this method creates an index of the index. The primary index points to the secondary index, and the secondary index points to the actual data items. An example of such a method is ISAM (Indexed Sequential Access Method) of IBM, which carries two types of indexes: a master index and a secondary index. The master index carries pointers to the secondary index, whereas the secondary index carries entries that point directly to the disk blocks. Two binary searches are performed to access a data item: the first on the master index and the second on the secondary index. This method can be considered as a combination of two direct-access reads.
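The index lookup described in (c) is a binary search over (key, block) pairs. A small Python sketch (the key values and block numbers are invented for illustration):

import bisect

# One index entry per data block: the largest key stored in that block.
index_keys   = [199, 399, 599, 799, 999]   # hypothetical key-range ends
index_blocks = [0, 1, 2, 3, 4]             # corresponding block numbers

def block_for_key(key):
    # Binary-search the index to find which block may hold the record.
    i = bisect.bisect_left(index_keys, key)
    if i == len(index_keys):
        raise KeyError(key)                # beyond the last block
    return index_blocks[i]

print(block_for_key(420))   # -> 2 (keys 400..599 live in block 2)

ISAM's master index simply adds one more level of the same search in front of this one.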
3.4.2 Directory and Disk Structure

Q41. Explain different directory structures with neat diagram.

Answer :

Directory Structure

The disk is usually divided into various parts known as partitions or volumes. Each of them contains a file system that stores a device directory or volume information table. This table contains information such as the name, location, size, type etc. of all files stored in that volume.
A directory is used to organize files. It can be thought of as a symbol table that converts the given filenames into their directory entries, thereby providing all the information about each particular file, including its location. The following are the different operations that can be performed on directories.
v Searching Files
By reading the directory table, a particular file, or all files whose names match a specified pattern or criteria, can be searched for.

v File Creation
When a new file is created, an entry is inserted in the directory table.

v File Deletion
When a particular file is deleted, its entry is deleted from the directory table.

v Listing
The files and other sub-directories present in the directory can be listed.

v Renaming Files
The name of an existing file can be changed by modifying its entry in the directory table.

Schemes for Directory Structure

The following are the common schemes used to define a directory structure.
1. Single-level Directory

In this type of directory structure, the volume contains a single directory, and all files are stored in that same directory. Moreover, no sub-directories can be created within that directory.

Figure (1): Single-level Directory Structure (one directory containing File 1, File 2, File 3, ..., File n)

There are some limitations to this scheme. All filenames have to be unique, because they reside in the same container. As users increase, files also increase, and giving unique file names to all those files may become complicated.
2. Two-level Directory

In this structure, each user of the system is given a separate directory, called the User File Directory (UFD), where all files of that particular user are present. If there are 'n' users, then there will be 'n' UFDs, all of which are indexed in a Master File Directory (MFD).

When a user wants to search for any file, say 'x', it is searched for in his UFD only. The filenames within a UFD should be unique, but two or more users can have the same filenames, because their directories are different. A UFD is created whenever a new user is created.

Figure (2): Two-level Directory Structure (an MFD indexing the UFDs of User 1 through User n, each UFD containing that user's files)
This directory structure overcomes the problem of maintaining unique filenames by isolating or separating users into different directories. However, this solution creates a problem when several users need to share some files.

Each file in the system is identified by a path name. The syntax for specifying a pathname differs from operating system to operating system. For example, in MS-DOS and Windows, a colon (:) is used to specify a volume and a backslash (\) to specify a directory. The path name is created by using these symbols and the directory names. For example, if user 1 wants to access file 2 of user 2, the path would be,

C:\user2\file2

Similarly, if file2 of user3 has to be accessed, then the path would become,

C:\user3\file2
3. Tree-structured Directory

This scheme allows users to create any number of their own directories within their User File Directory (UFD). It has a variable number of levels and gives better flexibility for managing files.

A sub-directory is treated as a file: a special bit defines whether an entry is a file (0) or a sub-directory (1). The current directory is normally the directory from which the process is executing, and it carries most of the files associated with the currently executing process. When a process tries to access a particular file, the file is searched for in the current directory. If it is not present there, then the user has to specify the pathname of that file or change the current directory to that path, which can be done using a system call. This system call takes the path name as a parameter and redefines the current directory.
Figure (3): Tree-structured Directories (a root directory with one subtree per user; each user has Schedule, E-mail and DOCS directories, which in turn contain sub-directories such as Past, Future, Pics, Music and Latest, holding files F1 through F9)
The two types of pathnames are,

(i) Absolute Pathname
It gives the full address of a file, starting from the 'root' directory, through all intermediate directories, up to the filename. For example, the absolute pathname of file "F9" in figure (3) is,
root\User1\DOCS\Music\F9

(ii) Relative Pathname
It gives the address of a file from the current directory. For example, if the current directory is "root\User1\DOCS", then the relative pathname "Music\F9" identifies the file whose absolute pathname is,
root\User1\DOCS\Music\F9
4. Acyclic-graph Directory

It is a technique which allows sub-directories and files to be shared. Acyclic-graph means a graph without cycles. The tree-structured directory method does not allow sharing of files and directories, which is resolved with this technique. The entry for a shared file is present in all directories which share that file. Figure (4) shows a file "main library" shared by two directories.
Figure (4): Acyclic-graph Directory Structure (the Project 1 directories of User 1 and User 2 both contain an entry for the shared file "main library")
Sharing is totally different from maintaining multiple copies, and it is very important in cases where more than one programmer is working on a single program. In this case, changes made by one programmer to a file need to be visible to the others instantly, which can only be achieved by sharing. If a directory is shared, every new file created in it will be made available to all its sharing users.

Sharing in Unix is accomplished by creating a directory entry called a link, which acts as a pointer to the original directory/file. When a user tries to access a file present in a shared directory, the entry is recognized as a link and is resolved to the name of the original file.

The acyclic-graph directory structure gives flexibility as well as sharing. The deletion of shared files is a complex problem here: if a file is deleted by reaching it through one path, its entry is deleted only in that directory, which raises the question of what happens in the other directories that point to the same file. Their entries are not deleted, which creates dangling pointers to a file which no longer exists.

Hence, some operating systems, like MS-DOS, do not allow an acyclic-graph structure; they use a simple tree structure instead.
5. General Graph Directory

It is a tree-structured directory organization where links can be added to existing directories. It is the same as the acyclic-graph structure, except that it allows cycles in the graph, whereas the acyclic-graph does not.

Figure (5): General Graph Directory with Cycles (the Project 1 directories of User 1 and User 2 share "main library", which in turn links back into the structure above it)
Since cycles are present in the graph, searching must avoid traversing a particular file more than once. This organization also suffers from the dangling-pointer problem during deletion of files. To avoid this problem, the acyclic-graph structure uses a variable called a reference counter, which stores the number of directories referring to a file. If the reference counter is 0, it means no directories refer to the file and it can be deleted. Since cycles are present here, however, this approach is not sufficient: entries on an unreachable cycle can keep each other's counts above zero.

Another approach, called the garbage collection scheme, is used to find whether all references to a particular file have been deleted or not, so that the space occupied by that file can be deallocated and marked as free. The garbage collection scheme works in two phases. First, it traverses the whole file system and marks everything (files and directories) that is accessible, ensuring each item is marked only once (without repetition). In the second phase, it frees all unmarked files and directories, because they are not referred to by any directory and are garbage.
Q42. Write short note on actions to be performed during a file deletion operation if links exist in the directory structure.

Answer :

A file in a directory structure can have either a single parent or multiple parents. When a file to be deleted has a single parent, it is easily deleted and its entry is removed from its parent directory. If a file has multiple parents, then deleting it is a complex task, as it can create dangling pointers. However, its entry can still be removed from the parent directory present in the access path of the delete command.

The process of checking for multiple parents is very complex. The complexity is reduced by maintaining a reference count for every file. When a new file is created, the count is set to one, and whenever a new link points to the file, the count is incremented by one.

On the contrary, when a file deletion attempt is made, its reference count is decremented by one, and its entry is deleted from the parent directory provided in the access path of the delete command. Finally, when the reference count of the file becomes zero, the actual file is deleted.

The reference count strategy does not work when the directory structure contains cycles. A cycle is developed when a link is made between a directory and its grandparent directory, as shown in the figure below.
Figure: Link made between a Directory and its Grand Parent Directory (directories X, Y and Z under the root, with files F1 through F4; directory T, under Y, is also linked directly from the root, its grandparent)
In the above figure, directory T is linked to directory Y and to its grandparent, the root directory. If directory Y is deleted, its entry in the root directory is removed, as its reference count there is one. Further, directory Y and its files become unreachable from the root directory; therefore, there is no use in retaining them. To solve this unreachable-space problem, cycle detection techniques must be applied, or the formation of cycles must be prevented.
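The reference-count rule from this answer can be sketched in a few lines of Python (a toy model; the class and method names are invented for illustration):

class FileNode:
    def __init__(self, name):
        self.name, self.refcount = name, 1   # creation sets the count to 1

    def add_link(self):
        self.refcount += 1                   # one more directory points here

    def unlink(self):
        # Remove one directory entry; free the file when the count hits zero.
        self.refcount -= 1
        if self.refcount == 0:
            print(self.name + ": last link removed, file deleted")

f = FileNode("main_library")
f.add_link()    # shared by a second directory -> refcount 2
f.unlink()      # first delete: entry removed, file kept (refcount 1)
f.unlink()      # second delete: refcount 0, actual file deleted

As the answer notes, this scheme breaks down in the presence of cycles, where the entries on an unreachable cycle keep each other's counts above zero; that case needs cycle detection or garbage collection.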
Q43. Explain the advantages and disadvantages of single, two and tree level directory structures.

Answer :

Advantages of Single-level Directory Structure

v It has a simple design structure which is easy to understand.
v Files can be located quickly, as all the files are present in a single location.

Disadvantages of Single-level Directory Structure

v All file names must be unique, as they are stored in the same container.
v The rule of maintaining unique names becomes complicated and is violated easily in the case of multiple users.
v File names might run out of uniqueness when there is a large number of files, since operating systems allow only a limited number of characters in a file name.

Advantages of Two-level Directory Structure

v It resolves the collision problem that occurs with respect to file names.
v It provides an effective way in which users can be isolated from each other.
v It efficiently improves the task of searching by employing a Master File Directory (MFD).

Disadvantages of Two-level Directory Structure

v This structure isolates users from each other, and in some systems sharing of files is not allowed.
v It allows logical grouping only by user, not by any other criteria.

Advantages of Tree-structured Directory

v It supports the creation of sub-directories.
v It supports grouping of files and their storage in sub-directories.
v It provides an efficient way of searching for files.
v It provides access to the files of other users by specifying their path names.
v It provides an efficient way of managing the deletion of a shared file.

Disadvantages of Tree-structured Directory

v It does not support sharing of sub-directories among different users.
v Accessing files of different users is more complex, because the path names of files are longer than in two-level directories.
3.4.3 File System Mounting, Protection
Q44. What do you mean by file system mounting? Explain.

Answer :

A file system is normally present on some logical partition of a disk. It has to be mounted, i.e., connected to the system's directory structure, in order to access the files present in that partition. The mounting operation can be done only by the system administrator, and hence it acts as a protection for the file system. Local as well as remote file systems can be mounted.

A file system is mounted on a mount point, which is simply an empty directory in the system's directory structure. Mounting does not permanently alter the directory structure; only a link is created to that partition's file system. This link lasts until the file system is unmounted or the system is rebooted.
In a file system, a device driver is given the responsibility of informing the operating system whether the referenced file system is valid (or) not. With this feature, the operating system records the mount point in its directory structure, which helps in traversing the directory structure. Operating systems, including the latest versions of Windows, support the feature of mounting a file system at any point in the structure.

The syntax of the mount operation is "mount <FS_Path>, <Mount_Point>", where FS_Path is the path of the file system or volume which is unmounted, and Mount_Point is the path where FS_Path has to be linked into the existing file system.

Consider the following figures,

Figure (a): Existing File System

Figure (b): Unmounted Volume

If the unmounted volume of figure (b) has to be mounted in the existing file system of figure (a) at the mount point "mnt", then the following command is executed,

mount ExtraStuff, "root\mnt"

The resulting file system is then as shown in,

Figure (c): File System After Mounting
Q45. What is mounting of a file system? How does mounting take place in different operating systems? Explain with examples.

Answer :

Mounting of a File System

For answer refer Unit-III, Page No. 110, Q.No. 44.

Mounting of a File System in the Macintosh Operating System

File system mounting in the Macintosh operating system starts by searching for the desired file system (which needs to be mounted) on the disk during the booting process. If the search is successful, then the file system is automatically mounted at the root level. This mounting is done by adding a folder icon on the screen, labelled with a name identical to the name of the file system found on the drive. Once the icon is created, the user clicks on it so as to view the newly mounted file system.

Mounting of a File System in the Microsoft Windows Operating System

Microsoft Windows maintains an extended two-level directory structure that consists of devices (at the first level) and their respective partitions, which are labeled with unique drive letters (at the next level). Each of these partitions maintains a general graph directory structure corresponding to its drive letter. If a user searches for a specific file, then the path of the file is of the form,

drive-letter:\path\to\file

The process of file system mounting in such operating systems is done at boot time. Initially, every device present in the system is discovered, and then each identified file system is mounted.

Example

D:\Downloads\songs

Mounting of a File System in the Unix Operating System

The Unix file system initially knows only the root partition of the system when the device is booted. It uses a special system call, "mount", to mount a file system,

mount(spl, pn, opt)

It mounts the file system given by spl (the special device) at the point pn (path_name) in the root file system, which allows multiple file systems to combine and form a single global tree. All the information about the mounted file systems is kept in the mount table, which is maintained by the kernel. Each entry in the mount table consists of the following,

(i) Device number of the partition
(ii) State of the file system
(iii) Total size of the partition
(iv) Inode number of the root directory
(v) Pointers to a list of free blocks
(vi) Root inode of the file system.

The kernel uses the information stored in the mount table to retrieve files easily.

Example

# mount /doc/new pics

Q46. Define security and protection. Describe the concept of file protection.

Answer :

Security

The term security refers to a state of being protected from harm or from those that cause negative effects. Examples can be protecting banks from robbery, computers from viruses, data from unauthorized access etc.

Protection

Protection refers to keeping the system safe physically as well as from unauthorized access. It can be provided in many ways; for example, in a single-user system, the floppy disk can be physically removed and kept safe in a locker. But this is a very traditional approach and often cumbersome. There are other techniques to employ protection in both single- and multi-user systems.

File Protection

File protection refers to providing controlled access to files by various users. There are several factors which the protection mechanism verifies before allowing or denying access, and there are several types of operations which have to be controlled, like,

v Read
v Write
v Execute
v Append
v Delete
v Listing attributes etc.

In addition to these, some other operations like rename, copy, edit etc. can also be considered.

Access Control

In this method, user identity is used to decide whether access should be granted or not. For implementing this, an Access Control List (ACL) needs to be created for each file or directory. When a user wants to access a particular file, the ACL of that file is checked to see whether that user has permission to access it or not. If yes, access is allowed; otherwise, it is denied.
This method has some disadvantages. For example, consider that 100 users have to be given read access to a particular file.

•  The size of the file or directory entry increases due to storing access information in the ACL.

•  Searching the ACL for access rights takes time because the list is very long.

•  The directory entry is of fixed size, but if the ACL is stored then it has to be made variable sized, which increases the complexity of managing it.

Hence, to reduce the length of the ACL and overcome the above problems, three different classifications of users is done.

(i)  Owner

It is the user who is the creator of the file and owns it.

(ii)  Group

It is a set of users who may want to share that file. All the members of a group have similar access rights. It is called a work group in Windows.

(iii)  Universe

The users other than the owner and group are called the universe.

For each of the above classes, three access permissions are defined. They are read (r), write (w) and execute (x). In Unix, for each file or directory, a bit pattern is associated as follows,

    Owner    Group    Universe
    r w x    r w x    r w x

An empty field represents that the access right is not present. Consider the following example,

    rwx  r–x  ––x

It means,

(i)   The owner has all permissions to read, write and execute that file.

(ii)  Group users can read and execute it, but cannot write into it.

(iii) Other users (i.e., the universe) can only execute that file.
The permissions can be granted or revoked by the owner of the file by using built-in commands like 'chmod' in Unix. Windows operating systems have a GUI to manage access control information.
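As a minimal sketch (assuming a POSIX system; report.txt is a hypothetical file name), the permission pattern rwx r–x ––x shown above corresponds to the octal mode 0751 and can be set programmatically with the chmod() system call,

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        /* 0751: owner rwx, group r-x, universe --x */
        if (chmod("report.txt", 0751) != 0) {
            perror("chmod");
            return 1;
        }
        return 0;
    }

The equivalent shell command is chmod 751 report.txt.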
3.4.4 File System Implementation, Directory Implementation

Q47. Give an overview of file system implementation.

Answer :

File System Implementation

A file system consists of special blocks of information that help in loading the operating system when the computer boots. Other blocks contain information regarding the amount of free space, total number of blocks etc. The following are the on-disk structures, each for a specific purpose,

1. Boot Control Block

Every partition or volume of a disk has a boot control block that stores information necessary for loading an operating system stored in that partition. If there is no operating system in a particular partition, then its boot control block will be empty. A boot control block is usually situated in the beginning of the partition. The Unix file system refers to it as the "boot block" while NTFS names it the "partition boot sector".

2. Volume Control Block

There exists a volume control block for every volume or partition. It stores details like the total number of blocks in the volume, the number of available free blocks and free block pointers, the size of each block, and the number of free file control blocks and file control block pointers. The Unix File System (UFS) refers to it as the "superblock" while NTFS stores the volume control block in the "master file table".

3. Directory Structure and File Control Blocks

Every file system uses a directory structure for organizing the files. The Unix file system maintains an inode table for maintaining the directory structure, whereas NTFS stores the directory structure in the "master file table". Similarly, every file has a file control block that consists of file permissions, owner, group, file size, last modified date, date of creation etc.

Figure: File Control Block (FCB)

Other memory structures are used to store in-memory information. This information is about all the file resources that are currently loaded into the main memory. The following are a few of the memory structures that store in-memory information,

(i) System Wide Open File Table

It holds a copy of the file control block of every currently opened file.
(ii) Per-process Open File Table

It contains a reference pointer to the respective entry of the system wide open file table, i.e., each process has a reference to the entry for every file it is currently using.

In addition to these two memory structures, a mount table is used to store information related to the mounting of volumes. Information about all the recently accessed directories is stored in an in-memory directory-structure cache.

The file creation process starts by calling the logical file system through the system calls issued by the application. Based on the directory structure, this file system allocates a file control block for the new file. Now, the directory associated with that file is updated with respect to the new file and its FCB. After the successful creation of the file, it can be opened with an open( ) system call.

An open system call is used to open a file. It returns a pointer to the corresponding entry of the process that has requested the file. These process entries are listed in the per-process file system table. When a file is opened for the first time, its entry is included into the open-file table. This entry in the open-file table is referred to with different names in various operating systems.

In Unix it is called a "file descriptor" and Windows refers to it as a "file handle".
Q48. Discuss the objectives for a file management system.

Answer :

A file management system is a set of system software which provides services to users and applications in the use of files. Usually, the only way that a user or application may access files is through the file management system. This relieves the user or programmer of the necessity of developing special purpose software for each application, and provides the system with a means of controlling its most important asset. Some of the important objectives for a file management system are as follows,

•  Data management needs and requirements of the user are met, which include storage of data and the ability to perform the previously mentioned operations.

•  Guarantee that the data in the file is valid to the maximum possible extent.

•  Optimal performance, both from the system's point of view in terms of overall throughput and from the user's point of view in terms of response time.

•  Providing I/O support for a variety of storage device types.

•  Minimizing or eliminating the potential for lost or destroyed data.

•  Providing a standardized set of I/O interface routines.

•  In the case of multiple-user systems, providing I/O support for multiple users.

For an interactive, general purpose system, the following constitute a minimal set of requirements,

•  Every user should be able to create, delete, read and change files.

•  Every user may have controlled access to other users' files.

•  Every user may control what types of access are allowed to the user's files.

•  Every user should be able to move data between files.

•  Every user should be able to back up and recover the user's files in case of damage.

•  Every user should be able to access the user's files by using symbolic names.

These objectives and requirements should be kept in mind for a better understanding of a file management system.
Q49. What are the structures and operations that are used to implement file system operations?

Answer :                                                       Model Paper-II, Q11(b)

The structures used for implementing file system operations are,

1. On-disk structure
2. In-memory structure.
1. On-disk Structure

In this structure, the file system contains information about,

(i) Boot Control Block

For answer refer Unit-III, Page No. 113, Q.No. 47, Topic: Boot Control Block.

(ii) Partition Control Block (Volume Control Block)

For answer refer Unit-III, Page No. 113, Q.No. 47, Topic: Volume Control Block.

(iii) Directory Structure

Every file system uses a directory structure for organizing the files.

(iv) File Control Block

Every file has a file control block that consists of file permissions, owner, group, file size, last modified date and date of creation.

2. In-memory Structure

This structure contains information about all the file resources that are currently loaded into the main memory. The following are the different memory structures that store in-memory information.

(i) In-memory Partition Table

This structure contains information regarding every mounted partition.

(ii) In-memory Directory Structure

This structure stores information about the directories that are recently accessed.

(iii) System Wide Open-file Table

It holds a copy of the FCB of every currently opened file.

(iv) Per-process Open-file Table

It contains a reference pointer to the respective entries of the system wide open-file table.

Operations to Implement File System

The implementation of a file system requires execution of the following two major file operations,

(i) File open
(ii) File read.

However, prior to these operations, it is necessary to create a file. In order to create a new file, an application program must initially invoke the logical file system, which has complete information regarding the format of the directory structures. Upon invocation, the logical file system performs the following operations,

(i) Allocates a new file control block.
(ii) Reads the correct directory (for the file) into the memory.
(iii) Updates the directory with the new file name and FCB.
(iv) Writes back the directory on to the disk.

Once the file is created, it can be used for performing I/O operations, which can be done by opening the created file.

File Open

The file open system call is used to open a file. It returns a pointer to the corresponding entry of the process that has requested the file. These process entries are listed in the per-process file system table. When a file is opened for the first time, its entry is included into the open-file table. This entry in the open-file table is referred to with different names in various operating systems.

In Unix it is called a "file descriptor" and Windows refers to it as a "file handle".

Figure: In-memory File-System Structure for “File Open”


File Read

Once the file is opened, it can be read by invoking the "read" system call. This system call is parameterized with an index value that is stored as an entry in the per-process open-file table. This entry is made along with a pointer that points to an entry in the system-wide open-file table. That entry helps in reading the data from the appropriate data blocks. Once the data is read, the updated FCB is copied into the system-wide open-file table.

Figure: In-memory File-System Structure for "File Read"

After reading the file, the process closes it by invoking the "close" system call. Upon the invocation, the entry in the per-process table is deleted and the open count of the system-wide entry is decremented.
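The sequence described above can be seen from the application's side with the POSIX system calls. The following is a minimal sketch (data.txt is a hypothetical file name),

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[128];

        int fd = open("data.txt", O_RDONLY);    /* file open: fd indexes the
                                                   per-process open-file table */
        if (fd < 0) { perror("open"); return 1; }

        ssize_t n = read(fd, buf, sizeof buf);  /* file read through the
                                                   system-wide table entry */
        if (n < 0) { perror("read"); close(fd); return 1; }
        printf("read %zd bytes\n", n);

        close(fd);                              /* per-process entry removed,
                                                   open count decremented */
        return 0;
    }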
Q50. Illustrate with a neat diagram about the various elements of a file management system.

Answer :

Elements of a File Management System

Figure: Elements of a File Management System

The figure given above shows the functioning of a file management system. The first half of the figure is related to the file management system and the remaining half is concerned with the operating system.

File Management Concerns

Users and application programs use commands for interacting with the file system. For interaction, the user first needs to select, identify or locate the file. This is accomplished by means of a directory, which describes the location of all files and their attributes. Some systems (mostly shared systems) also provide user access control, wherein only authorized users are allowed to access particular files using particular access mechanisms. The basic operations on files are performed at the record level. The records are then organized using some structure and are viewed as a file. All the other overheads of handling the files go to the operating system.

Operating System Concerns

An operating system is concerned with the file system for I/O operations, storage purposes etc. For output operations, the records or fields of a file need to be organized as a sequence of blocks, which can be unblocked after an input operation. Several functions are needed for this purpose, such as managing the secondary storage, which involves allocation of files to free blocks on secondary storage and also managing the free storage. This in turn helps to know about the availability of blocks for new files and the growth of existing files.

Scheduling of individual block I/O must also be handled. Disk scheduling as well as file allocation help in optimizing the performance of the system.
Q51. Write short notes on,

     (a) Partitions and mounting

     (b) Virtual file systems.

Answer :

(a) Partitions and Mounting

Partitioning refers to the division of a disk into multiple parts. When a disk is partitioned, it can be formatted with a file system or left as "raw", i.e., without any particular file system. These raw disks or partitions are used for processes which do not require any particular file system. For example, Unix swap space requires a raw partition; raw disks are also used to store RAID configuration settings in a small database.

Similarly, boot information is also stored in a raw partition. This is because no file system can be interpreted while the operating system itself is not completely loaded during the boot process. The boot information is stored in a fixed location and in a sequential order, so that the execution of the operating system is simple and easy. The location where boot information is stored is called the boot block, and it can also include information on how to boot a specific operating system in the case of multiple operating systems. A computer with more than one operating system installed in different partitions is called "dual booted". Therefore, to determine which operating system has to boot, a program called a boot loader is used that is capable of interpreting multiple file systems and operating systems.

The partition that contains the actual operating system files and kernel is called the root partition. The root partition is loaded at boot time, and it successively loads other volumes as required by the operating system.

Once a volume is successfully mounted, the operating system checks the file system of the volume. If it is recognized as one of the supported file systems, then an entry is made into the in-memory "mount table" structure. The mount table keeps track of all mounted file systems along with their types. Operating systems like Microsoft Windows assign a separate name space to each mounted volume that is denoted by a specific letter and a colon. This enables users to easily traverse files and directories available on that volume.

(b) Virtual File Systems

A Virtual File System (VFS) enables an operating system to support multiple file systems on a single disk, so that users can easily traverse between file systems. A VFS also enables access to remotely available disks using different file systems.

Figure: Implementation of a Generic File System

A generic file system implementation is depicted by the above diagram. It has three major layers, among which the second layer is the virtual file system interface.

A VFS interface has two important functions to perform. They are,

1.  The VFS interface isolates file system operations from their implementation details. An operating system may have multiple implementations of the VFS interface to support different types of file systems mounted on the local machine.

2.  While representing the available files on the network, the VFS ensures that each file is uniquely identified with the help of a data structure called a vnode. Every single file or directory on each machine of a network has an associated vnode. A vnode assigns a number to each file for uniquely identifying it over the network.

The VFS makes use of the file system interface to perform user initiated operations on files that may be available locally or on remote machines. The third layer implements the various file systems, which directly interact with storage devices for data transfer.

An example VFS is the VFS architecture in Linux. It has four main object types. They are,

1. Inode Object

This object is meant for representing a single file available on the disk.

2. File Object

This object represents a file that is currently opened.

3. Superblock Object

This object type represents the complete file system.

4. Dentry Object

It represents a single directory entry.
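As a minimal C sketch of the vnode idea described above (illustrative only, not the actual Linux or SunOS structures), a vnode can be thought of as a network-unique handle that also carries file-system-specific operations,

    /* illustrative vnode: a network-wide unique handle for a file */
    struct vnode_ops {
        long (*read)(struct vnode *v, void *buf, long n);
        long (*write)(struct vnode *v, const void *buf, long n);
    };

    struct vnode {
        unsigned long net_id;         /* number unique across the network  */
        unsigned long inode_no;       /* local inode number on the machine */
        const struct vnode_ops *ops;  /* implementation supplied by the
                                         concrete file system (third layer) */
    };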
Q52. Describe the linear list and hash table directory implementation methods.

Answer :

Directory Implementation

Implementation of a directory structure involves selection of efficient directory allocation and management algorithms from the available algorithms.

1. Linear List

This is the simplest of all directory implementation methods. It is easy to implement, but it takes a lot of time to execute. It simply maintains a sequential list of file names pointing to their corresponding data blocks.

This kind of implementation requires a filename to be searched before creating a file with that name. Similarly, a delete operation also requires a linear search for the directory entry, after which the space occupied by the file is deallocated. Finally, the corresponding entry is removed from the list. Instead of deleting the entry, one can mark it as unused with the help of a used-unused bit, or include it into a list of available directory entries. Another method could be to decrement the directory length and transfer the last directory entry's contents into the freed space.

Though this approach is simple to implement, it has a few disadvantages as well. This method requires a linear search for finding a file before performing any operation. This makes it slow, and users would experience this delay very frequently. Even if a binary search technique is used instead of linear search, which improves the average search time, it still needs a sorted list of directory entries to search.

2. Hash Table

In this technique a hash table is used along with the linear list for storing directory entries. The hash table makes use of a hash function that takes an input value based on the filename and produces as output a reference to the corresponding directory entry in the linear list. Therefore, all file operations consume very little time to execute. However, necessary arrangement should be made to handle collisions. A collision is a situation where more than one file name is hashed to the same location.

Disadvantages of this technique are that a hash table is usually of fixed size and that the performance of the hash function is dependent on the size of the hash table. Therefore, whenever a new file is to be added after all the available free entries have been used, the hash table should be expanded to accommodate the addition of new files, and the existing directory entries should be reorganized in such a way that the new hash function maps input values to their corresponding directory entries.

A solution to the above problem could be to use a chained-overflow hash table. In a chained-overflow hash table, each slot holds a linked list of the entries hashed to it. Now, if more than one filename hashes to the same slot, it can be added as another node to the linked list. Though this approach eliminates most of the disadvantages of linear list directory implementation, searching directory names can consume some time since it involves traversing the linked list. However, this technique is considered to be more efficient than the linear list directory implementation method.
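A minimal C sketch of the idea (not taken from the text; the table size, entry layout and hash function are illustrative) is given below. Collisions are resolved by chaining extra nodes onto the hashed slot, as described above.

    #include <stddef.h>
    #include <string.h>

    #define TABLE_SIZE 64

    struct dir_entry {
        char name[32];
        long first_block;        /* where the file's data begins */
        struct dir_entry *next;  /* chained-overflow list on collision */
    };

    static struct dir_entry *table[TABLE_SIZE];

    /* simple string hash (djb2 style) reduced modulo the table size */
    static unsigned hash_name(const char *name)
    {
        unsigned long h = 5381;
        while (*name)
            h = h * 33 + (unsigned char)*name++;
        return (unsigned)(h % TABLE_SIZE);
    }

    /* follow the chain in the hashed slot until the name matches */
    struct dir_entry *dir_lookup(const char *name)
    {
        for (struct dir_entry *e = table[hash_name(name)]; e != NULL; e = e->next)
            if (strcmp(e->name, name) == 0)
                return e;
        return NULL;             /* no such file */
    }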

3.4.5 Allocation Methods

Q53. Describe various file allocation methods briefly.

Answer :                                                       Model Paper-III, Q11(b)

Allocation Methods

An allocation method is considered to be efficient if it allocates space such that no disk space is wasted and accessing of the files takes less time. The three most important file allocation methods are,

1. Contiguous Allocation

In the contiguous allocation method, each file is arranged in a sequential run of memory blocks. Therefore, according to this technique, if a file of size k blocks starts from block s, then it occupies blocks s, s+1, ..., s+k–1. With this approach, accessing of files is much faster as all files occupy contiguous blocks. For sequential access of files, the physical address of the last referred block is noted, so that file access can start from the next block. Use of this address avoids repeated access of the already accessed blocks. Direct access is also supported, since only the starting address and the block number are required.
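A minimal C sketch of this direct-access computation (the structure and names are illustrative) is,

    /* directory entry under contiguous allocation */
    struct cfile {
        long start;    /* first physical block of the file */
        long size;     /* length of the file in blocks      */
    };

    /* logical block i lives at physical block start + i;
       returns -1 when i is outside the file */
    long physical_block(const struct cfile *f, long i)
    {
        if (i < 0 || i >= f->size)
            return -1;
        return f->start + i;
    }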

Though access time is minimal, contiguous allocation does not make efficient use of the disk space. This means that the allocation of space for a new file should be done carefully, to avoid wastage of disk space.
Figure (1): Contiguous Allocation, in which each file occupies a consecutive run of disk blocks, as recorded in the directory,

    Filename    Starting location    Size
    Foo         0                    3
    Imp         6                    7
    User        14                   6
    Chat        26                   4
During storage allocation, a question arises as to which free blocks should be selected. This problem is similar to the dynamic storage allocation problem. The most common strategies used for this purpose are best fit, worst fit and first fit. First fit is usually faster than the other two strategies, and both first fit and best fit are considered to be more efficient than worst fit in terms of both time and space utilization.

One problem associated with these strategies is that they suffer from external fragmentation. External fragmentation occurs when a request for a new file cannot be satisfied because the largest available contiguous chunk of blocks is not large enough to store the file. In such situations, even though there is enough space available for the file, it is not utilized because it is not contiguous.

External fragmentation can be resolved with the help of a scheme called "compaction". This scheme involves rearranging the used and free blocks so that all free space forms one sequential run, from which contiguous disk space can then be allocated for new files without wastage. Compaction is carried out in both offline and online modes. In offline mode, all operations are suspended, the file system is unmounted, and finally compaction is performed. In this strategy a lot of time is wasted and hence it is avoided. In online mode, compaction is performed along with other system operations. It affects system performance, but reduces time wastage.

However, the contiguous allocation scheme also requires the size of a new file to be known for storage allocation. This might be difficult to determine for an output file before its execution. Also, if the size of the file is determined in advance and the file takes a lot of time to reach its final size, then a lot of space is wasted while the file has not yet reached that size. Therefore, a modified contiguous allocation scheme is used that allocates some fixed amount of space at first, and if the file requires more space, then another chunk of contiguous blocks is allocated. A chunk of free blocks allocated after the first allocation is called an "extent". These chunks of contiguous blocks are linked to one another.

2. Linked Allocation

This scheme overcomes the problems of contiguous allocation. Any file can be represented as a linked list of disk blocks. The directory entry stores the starting block number and the last block number. By taking the starting block number, that block can be accessed and the address of the next block can be read to move ahead. Similarly, the whole file can be read by traversing the list.

Figure (2): Linked Allocation Method, in which the directory records the start and end blocks of the file Sure_doc (start 15, end 20); each block stores the number of the next block, giving the chain 15, 10, 5, 9, 12, 17, 20, where block 20 holds the end marker –1.


Creation of a new file involves a new entry in the directory. This entry points to the first block of the file, with the pointer value nil and field size = 0. Thus an empty block is allocated for the new file.

Linked allocation is resistant to external fragmentation. The file size may be variable, and the file can grow and shrink without any difficulty. There are two major disadvantages of this scheme,

•  It is suitable for sequential-access files only. It cannot access any random (ith) block directly; to find it, the user needs to traverse the whole list.

•  Some space is wasted for storing the pointers within the blocks.

One solution to the above problems is to combine multiple blocks into a cluster and allocate clusters instead of blocks. This helps in improving disk access time and reduces the space requirement for pointers, but it is prone to internal fragmentation.

Another important solution is the File Allocation Table (FAT), which is used in popular operating systems like MS-DOS, OS/2 etc. The FAT is stored in the beginning of each volume and has one entry for each disk block. The directory entry consists of the starting block number of the file, and the FAT entry of that block contains the address of the next block. This linking continues until the last block, whose FAT entry contains an End-Of-File (EOF) marker, is reached.
Figure (3): File Allocation Table (FAT), in which the directory entry for Sure_doc holds starting block 199; the FAT then links the blocks as 199, 595, 365, with the FAT entry for block 365 holding the EOF marker.
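A minimal C sketch of following such a chain (using the values from Figure (3); the array size and EOF marker are illustrative) is,

    #include <stdio.h>

    #define FAT_EOF  -1
    #define NBLOCKS  1024

    static int fat[NBLOCKS];   /* fat[b] holds the number of the block after b */

    static void read_chain(int start)
    {
        for (int b = start; b != FAT_EOF; b = fat[b])
            printf("read block %d\n", b);
    }

    int main(void)
    {
        fat[199] = 595;        /* values from the figure above */
        fat[595] = 365;
        fat[365] = FAT_EOF;
        read_chain(199);       /* prints blocks 199, 595, 365 */
        return 0;
    }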

3. Indexed Allocation

This scheme stores pointers to all the blocks of a particular file at one location, called the index block. It is a simple modification of linked allocation. The directory stores the address of this index block. Hence, after getting a file's index block, the whole file can be accessed.

When the file is created, the OS provides an index block to it which contains nothing. When the ith block is written, its address is stored in the ith entry of the index block. This scheme is resistant to external fragmentation, but some space is wasted to store the index block of each file. The proper size of an index block is difficult to determine.


Figure (4): Indexed Allocation Scheme, in which the directory entry for Sure_doc holds the address of its index block (block 8), and the index block in turn lists the file's data blocks 4, 5, 6, 13 and 14.
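A minimal C sketch of this lookup (sizes and names are illustrative) is,

    /* a file under indexed allocation: the index block holds one
       pointer per data block */
    struct ifile {
        long index[16];   /* e.g. Sure_doc's index block lists 4, 5, 6, 13, 14 */
        long nblocks;
    };

    /* logical block i is found through the ith index entry;
       returns -1 when i is outside the file */
    long lookup_block(const struct ifile *f, long i)
    {
        if (i < 0 || i >= f->nblocks)
            return -1;
        return f->index[i];
    }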
If large index blocks are used, then for small files most of their entries will be empty and space will be wasted. If small index blocks are used, then they may not be able to accommodate all the pointers of a large file. Hence, the following schemes are used,

(a) Linked Scheme

It uses a single index block for small files. As the file size increases, it links together several index blocks.

(b) Multilevel Index

In this scheme, multiple levels of index blocks are used. A first level index block points to second level index blocks, which point to the actual data blocks. Similarly, it is possible to have many levels of index blocks.

(c) Combined Scheme

This scheme is implemented in the Unix file system. Each inode of Unix stores 15 pointers. The first 12 are called direct blocks because they point directly to data blocks. The next pointer is the single indirect block, which points to an index block that contains the addresses of actual data blocks. Similarly, there are two and three levels of index blocks for the double indirect block and the triple indirect block respectively. The following figure shows the same,

Figure (5): Index Node of the Unix File System, in which an inode holds 12 direct block pointers followed by a single indirect, a double indirect and a triple indirect pointer, each adding one more level of index blocks between the inode and the data blocks.


3.4.6 Free Space Management

Q54. State and explain four approaches to free space management.

Answer :
Free Space Management

When files are stored, disk space is consumed, and the same is released when they are deleted. The system needs to maintain a free-space list to record all disk locations that are not allocated to any file or directory. Whenever any file is created, space is allocated to it from the free-space list. After allocation, the corresponding block numbers are removed from this list. There are several methods to keep track of free space on disk.

1. Bit Vector

In this method a bit map or bit vector is maintained in which one bit is present for each block. If that bit value is 1, the block is considered free; otherwise the block is in use.

Example

    1 1 0 0 1 0 0 0
    (bit i corresponds to block i; blocks 0 to 7 are shown)

Figure: Bit Vector Method

In the above figure the blocks 0, 1 and 4 are free, and blocks 2, 3, 5, 6 and 7 are allocated to some file. This method is simple to implement, and free space can be found easily by using the simple bit manipulation instructions of the Intel x86, Motorola 68000 series and many other popular processors.

The bit vector has to be kept in main memory so as to improve efficiency. A backup copy of it should be written to disk periodically to allow recovery in case of data loss. A hard disk of 40 GB with 1 kB sized blocks needs only 5 MB of memory for storing its bit vector.
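A minimal C sketch of such bit manipulation (the disk size is illustrative) is given below: finding a free block amounts to locating the first word that is not zero and then the first set bit within it.

    #include <stdint.h>

    #define NBLOCKS 1024
    static uint32_t bitmap[NBLOCKS / 32];   /* bit = 1 means the block is free */

    /* returns the first free block number, or -1 if none is free */
    int first_free_block(void)
    {
        for (int w = 0; w < NBLOCKS / 32; w++)
            if (bitmap[w] != 0)
                for (int b = 0; b < 32; b++)
                    if (bitmap[w] & (1u << b))
                        return w * 32 + b;
        return -1;
    }

    /* allocating a block clears its bit; freeing sets it again */
    void allocate_block(int n) { bitmap[n / 32] &= ~(1u << (n % 32)); }
    void free_block(int n)     { bitmap[n / 32] |=  (1u << (n % 32)); }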
2. Linked List

In this approach all free blocks of the disk are linked together using pointers. Each free block has a pointer which points to the next free block.

This scheme requires a considerable amount of I/O time to traverse the list. However, this traversal is not frequent, because the operating system usually just uses the first free block without traversing the list.

Figure: Linked List Method, in which the head of the free-space list points to the first free block, and each free block points to the next one.


3. Grouping

It is a modified version of the linked list method. Here, a group of free blocks is linked together. In each group, the last block contains the address of the next group of free blocks. Memory can be allocated in chunks of blocks, and a large number of free blocks can be found easily.

Figure: Grouping of Free Space, in which the head of the free-space list points to the first group, and the last block of each group points to the next group.

4. Counting

If a contiguous memory allocation scheme is used, then many contiguous blocks will be allocated and freed simultaneously. A run of n contiguous free blocks may occur at several places. In the counting scheme, the address of the first free block of a run is taken and a count of the successive free blocks is stored with it. Hence, a table is maintained which consists of two columns, one for the address of the first free block and the second for the number of successive free blocks following that address.

Figure: Free Space Management Using the Counting Method, in which the free-space table records the runs (starting block 5, count 5), (starting block 17, count 2) and (starting block 26, count 1).

Q55. Explain the advantages and disadvantages of the free disk space management approaches.

Answer :

1. Bit-vector Free Space Management

Advantages

•  It is relatively simple and efficient to search for free blocks, either the first one or multiple consecutive ones.

•  It does not consume large space, as it makes use of a single bit per block.

Disadvantages

•  It is inefficient for large disks, since it has to be stored in main memory.

•  It requires special hardware support for performing bit operations efficiently.

2. Linked-list Free Space Management

Advantages

•  It is relatively simple, and it is easy to allocate free blocks.

•  It eliminates the need for additional methods to maintain the free-blocks list when the FAT method is used, because the FAT incorporates the free-block accounting.

Disadvantages

•  An overhead of I/O time is incurred while traversing the list, since every disk block in it must be accessed.

•  It lacks in providing reliability.

3. Grouping Free Space Management

Advantage

•  By accessing the first disk block, a large number of free blocks can be addressed quickly.

Disadvantage

•  For every n free blocks, some blocks get used up for storing the addresses of other free blocks.

4. Counting Free Space Management

Advantage

•  It overcomes the drawback of grouping by storing the address of the first free block and a count of the successive free blocks, instead of storing a list of free blocks.

Disadvantages

There are no major disadvantages of the counting approach; however, there are certain constraints, which include,

•  The entries in the table acquire more memory space when compared with plain block addresses.

•  The overall table becomes shorter than a plain list only when the count values are larger than one.
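A minimal C sketch of the counting scheme's table (the values mirror the counting figure in Q54; the layout is illustrative) is,

    /* one row of the free-space table: a run of free blocks */
    struct free_run {
        long start;   /* first free block of the run            */
        long count;   /* number of successive free blocks in it */
    };

    /* e.g. blocks 5-9, 17-18 and 26 are free */
    static struct free_run free_table[] = {
        { 5, 5 }, { 17, 2 }, { 26, 1 },
    };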

Internal Assessment

Objective Type

I. Multiple Choice
1. _________ is defined as a waste of memory space. [ ]

(a) Fixed partitioning (b) Variable partitioning

(c) Fragmentation (d) Contiguous memory allocation

2. Paging is a ______ memory allocation scheme. [ ]

(a) Non-contiguous (b) Contiguous

(c) Dynamic allocation (d) Unequal size

3. To look up and translate addresses, an associative cache memory called ________ is used. [ ]

(a) TLB (b) TCB

(c) TBC (d) TBL

4. Which of the following techniques is not used to implement a page table? [ ]

(a) Hierarchical paging (b) Hashed page table

(c) Inverted page table (d) Segmentation

5. The technique in which pages are fetched into main memory only when they are needed by processes is called, [ ]

(a) Demand paging (b) Page replacement

(c) Segmentation (d) Inverted paging

6. The following is a scheme used to define a directory structure, [ ]

(a) Acyclic-graph directory (b) Single-level directory

(c) Two-level directory (d) All the above

7. In UNIX, _______ command is used to grant and revoke access permission. [ ]

(a) fork( ) (b) grant( )

(c) chmod( ) (d) manage( )

8. Creation of a file system is known as _________ formatting. [ ]

(a) Logical (b) Low level

(c) Physical (d) High-end

9. In disk scheduling SSTF stands for ________. [ ]

(a) Shortest scheduling time first (b) Smallest scheduling time first

(c) Skewed scheduling time first (d) Shortest seek time first

10. Access time is highest in case of, [ ]

(a) Registers (b) Magnetic disks

(c) Main memory (d) ROM


SIA PUBLISHERS and dISTRIBUTORS PVT. LTd. 125
Computer SCienCe paper-V operating SyStemS

II. Fill in the Blanks

1. A process needs to be present in _________ memory for execution.

2. Compaction is a solution for ________.

3. During the implementation of paging a logical address is divided into __________ number of parts.

4. In basic implementation of paging, the physical memory is divided into fixed-sized blocks called _________.

5. ________ and ________ are the two fundamental techniques for implementing virtual memory.

6. If a _________ exists, the performance of the system can be affected by demand paging.

7. The tree-structured directory, extended so that links can be added to existing directories (similar to an acyclic-graph structure), is known as _________.

8. The process of attaching a file system (present on a logical disk) to the system's directory structure is known as _________.

9. Magnetic tape is an example of _________ storage device.

10. In order to read or write to a new magnetic disk, _________ needs to be performed.

Key

I. Multiple Choice

1. (c) 2. (a) 3. (a) 4. (d) 5. (a)

6. (d) 7. (c) 8. (a) 9. (d) 10. (b)

II. Fill in the Blanks


1. Main
2. External fragmentation
3. Two
4. Frames
5. Paging, segmentation
6. Page fault
7. General graph directory
8. File system mounting
9. Secondary
10. Formatting

III. Very Short Questions and Answers

Q1. Define Swapping.


Answer :
Sometimes processes are swapped out and stored on disk to make space for others, and later they are swapped in to resume execution. This process of swapping in and swapping out is called swapping.

Q2. What is a Page?

Answer :

A page refers to a fixed-sized block of logical memory.

Q3. Define Frame.

Answer :

A frame refers to a fixed-sized block of physical memory.

Q4. What is Segmentation?


Answer :
The programmer is allowed to view memory as consisting of multiple address spaces or segments through the concept of
segmentation.

Q5. Define virtual memory.


Answer :
Virtual memory is a concept that gives programmers the illusion that they have a large memory at their disposal, even though the available physical memory is very small.


Important Questions

Unit-1

Short Questions

Q1. What do you mean by multiprocessor systems?

Answer :                                    Important Question

For answer refer Unit-I, Page No. 2, Q.No. 1.

Q2. Define operating system. Give two examples.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 2, Q.No. 2.

Q3. List various types of operating system.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 2, Q.No. 3.

Q4. Write the services of operating system.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 3, Q.No. 5.

Q5. Define system call.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 3, Q.No. 6.

Q6. List the features of system call.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 3, Q.No. 8.

Q7. Define a process.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 4, Q.No. 9.

Q8. What is Inter Process Communication (IPC)? List the models of IPC in operating systems.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 4, Q.No. 10.

Q9. Write short notes on semaphore.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 4, Q.No. 12.

Essay Questions

Q10. Discuss briefly about,

     (i) Single processor systems

     (ii) Multiple processor systems

     (iii) Clustered systems.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 5, Q.No. 13.

Q11. Define operating system. What are the services of an operating system? Explain.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 10, Q.No. 19.

Q12. Discuss various approaches of designing an operating system.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 16, Q.No. 24.

Q13. Define the following,

     (i) Process

     (ii) Process control block

     (iii) Process state diagram.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 18, Q.No. 26.

Q14. What is inter-process communication? What are the models of IPC?

Answer :                                    Important Question

For answer refer Unit-I, Page No. 24, Q.No. 33.

Q15. Describe about semaphores and their usage and implementation.

Answer :                                    Important Question

For answer refer Unit-I, Page No. 35, Q.No. 43.

Unit-2

Short Questions

Q1. What are short-term, long-term and medium-term scheduling?

Answer : Important Question

For answer refer Unit-II, Page No. 50, Q.No. 1.


Q2. List any three scheduling algorithms.

Answer : Important Question

For answer refer Unit-II, Page No. 50, Q.No. 2.


Q3. Define deadlock.

Answer : Important Question

For answer refer Unit-II, Page No. 51, Q.No. 5.


Q4. List three overall strategies in handling deadlocks.

Answer : Important Question

For answer refer Unit-II, Page No. 51, Q.No. 7.


Q5. What is safe state in deadlocks?

Answer : Important Question

For answer refer Unit-II, Page No. 52, Q.No. 9.


Q6. Draw a resource allocation graph to show a deadlock.

Answer : Important Question

For answer refer Unit-II, Page No. 52, Q.No. 10.


Essay Questions

Q7. Explain various scheduling concepts.

Answer : Important Question

For answer refer Unit-II, Page No. 53, Q.No. 12.


Q8. Explain FCFS, SJF, Priority, Round robin scheduling algorithms.

Answer : Important Question

For answer refer Unit-II, Page No. 55, Q.No. 14.


Q9. Define deadlock. Explain the necessary conditions for a deadlock to arise.

Answer : Important Question

For answer refer Unit-II, Page No. 61, Q.No. 19.


Q10. Briefly explain deadlock prevention methods with an example of each.

Answer : Important Question

For answer refer Unit-II, Page No. 62, Q.No. 22.



Q11. Write about deadlock avoidance.


Answer : Important Question

For answer refer Unit-II, Page No. 64, Q.No. 23.


Q12. Explain all the strategies involved in deadlock detection.
Answer : Important Question

For answer refer Unit-II, Page No. 66, Q.No. 24.

Unit-3

Short Questions

Q1. Write the differences between logical and physical address space.
Answer : Important Question

For answer refer Unit-III, Page No. 74, Q.No. 1.


Q2. Define a page and a frame.
Answer : Important Question

For answer refer Unit-III, Page No. 74, Q.No. 3.


Q3. Define file management.
Answer : Important Question

For answer refer Unit-III, Page No. 74, Q.No. 4.


Q4. List the file operations performed by operating systems.
Answer : Important Question

For answer refer Unit-III, Page No. 75, Q.No. 5.


Q5. List the differences among the file access methods.
Answer : Important Question

For answer refer Unit-III, Page No. 75, Q.No. 6.


Q6. List the operations to be performed on directories.
Answer : Important Question

For answer refer Unit-III, Page No. 75, Q.No. 7.


Q7. What advantages are there to the two-level directory?
Answer : Important Question

For answer refer Unit-III, Page No. 76, Q.No. 8.


Q8. What does OPEN do in file operations?
Answer : Important Question

For answer refer Unit-III, Page No. 76, Q.No. 9.


Q9. What are tree structured directories?
Answer : Important Question

For answer refer Unit-III, Page No. 76, Q.No. 10.



Essay Questions
Q10. Write short notes on,
(i) Dynamic loading
(ii) Dynamic linking
(iii) Shared libraries.
Answer : Important Question

For answer refer Unit-III, Page No. 78, Q.No. 12.


Q11. Explain page replacement algorithms.
Answer : Important Question

For answer refer Unit-III, Page No. 90, Q.No. 24.


Q12. Explain various disk scheduling algorithms with an example.
Answer : Important Question

For answer refer Unit-III, Page No. 97, Q.No. 32.


Q13. What is a file? Discuss its attributes.
Answer : Important Question

For answer refer Unit-III, Page No. 102, Q.No. 36.

Q14. What are the structures and operations that are used to implement file system operations?

Answer : Important Question

For answer refer Unit-III, Page No. 114, Q.No. 49.

Q15. Describe various file allocation methods briefly.


Answer : Important Question

For answer refer Unit-III, Page No. 119, Q.No. 53.
