Operating Systems
Process is the fundamental concept of operating system structure. A program under execution is referred to as a
process. It can also be defined as an active entity that can be assigned to a processor for execution. A process is a
dynamic object that resides in main memory. A process includes the current values of the program counter and the
processor's registers. Each process possesses its own virtual CPU. A file is a grouping of similar records or related
information together, stored in secondary memory. A collection of files is called a directory. Files and
directories are the basic mechanisms of a file system. Directories are used to organize files. Protection is a
mechanism for controlling access to computer resources by users or processes. A protection-enabled system can
distinguish between authorized and unauthorized access or usage and can take measures to defend the
system against misuse. If protection is not employed, errors may also occur among the subcomponents of a system.
This usually happens when a defective subsystem interacts with a healthy subsystem through its interface; the
healthy subsystem then gets corrupted.
According to the examination pattern for B.Sc. students, this book provides the following features:
A list of definitions is provided before the units for easy reference.
Every unit is structured into two main sections, viz. Short Questions and Essay Questions with solutions,
along with Learning Objectives and an Introduction.
Three Model Papers are provided in order to help students understand the paper pattern of the end examination.
Important Questions are included to help students prepare for Internal and External Assessment.
The table below gives a complete idea about the subject, which will help students to plan and score good marks in their
exams.

Unit No. | Description
I        | This unit includes topics like Introduction of Computer System, Operating System Structures, Process Management and Synchronization.
II       | This unit includes topics like Process Scheduling and Deadlock.
III      | This unit includes topics like Main Memory: Introduction, Virtual Memory and File Systems.
It is sincerely hoped that this book will satisfy the expectations of students and at the same time help them to score
maximum marks in exams.
Suggestions for improvement of the book from our esteemed readers will be highly appreciated and incorporated
in our forthcoming editions.
Model Question Papers with Solutions: Computer Science Paper-V
Faculty of Science
Model Paper 1
B.Sc. (CBCS) V-Semester Examinations
Subject: Computer Science
DSE-1E: Operating Systems
Paper-V
Time: 2 Hours Max. Marks: 60
Section - A ( 5 × 3 = 15 Marks )
Answer any Five of the following Eight questions. Each carries Three marks.
1. What do you mean by multiprocessor systems? (Unit-I, Page No. 2, Q1)
2. Define system call. (Unit-I, Page No. 3, Q6)
3. What are short-term, long-term and medium-term schedulings? (Unit-II, Page No. 50, Q1)
4. List three overall strategies in handling deadlocks. (Unit-II, Page No. 51, Q7)
5. Write the differences between logical and physical address space. (Unit-III, Page No. 74, Q1)
6. List the file operations performed by operating systems. (Unit-III, Page No. 75, Q5)
7. What advantages are there to the two-level directory? (Unit-III, Page No. 76, Q8)
8. Define a process. (Unit-I, Page No. 4, Q9)
Section - B ( 3 × 15 = 45 Marks )
Answer all of the following Three questions. Each carries Fifteen marks.
9. (a) Discuss briefly about,
(i) Single processor systems
(ii) Multiple processor systems
(iii) Clustered systems. (Unit-I, Page No. 5, Q13)
OR
(b) What is inter-process communication? What are the models of IPC? (Unit-I, Page No. 24, Q33)
10. (a) Explain various scheduling concepts. (Unit-II, Page No. 53, Q12)
OR
(b) Briefly explain about deadlock prevention methods with examples of each. (Unit-II, Page No. 62, Q22)
11. (a) Write short notes on,
(i) Dynamic loading
(ii) Dynamic linking
(iii) Shared libraries. (Unit-III, Page No. 78, Q12)
OR
(b) What is a file? Discuss its attributes. (Unit-III, Page No. 102, Q36)
Faculty of Science
Model Paper 2
B.Sc. (CBCS) V-Semester Examinations
Subject: Computer Science
DSE-1E: Operating Systems
Paper-V
Time: 2 Hours Max. Marks: 60
Section - A ( 5 × 3 = 15 Marks )
Answer any Five of the following Eight questions. Each carries Three marks.
1. Define operating system. Give two examples. (Unit-I, Page No. 2, Q2)
3. List any three scheduling algorithms. (Unit-II, Page No. 50, Q2)
6. List the differences among the file access methods. (Unit-III, Page No. 75, Q6)
7. What does OPEN do in file operations? (Unit-III, Page No. 76, Q9)
Section - B ( 3 × 15 = 45 Marks )
Answer all of the following Three questions. Each carries Fifteen marks.
9. (a) Discuss various approaches of designing an operating system. (Unit-I, Page No. 16, Q24)
OR
10. (a) Explain FCFS, SJF, Priority, Round robin scheduling algorithms. (Unit-II, Page No. 55, Q14)
OR
(b) Write about deadlock avoidance. (Unit-II, Page No. 64, Q23)
11. (a) Explain about page replacement algorithms. (Unit-III, Page No. 90, Q24)
OR
(b) What are the structures and operations that are used to implement file
system operations? (Unit-III, Page No. 114, Q49)
Faculty of Science
Model Paper 3
B.Sc. (CBCS) V-Semester Examinations
Subject: Computer Science
DSE-1E: Operating Systems
Paper-V
Time: 2 Hours Max. Marks: 60
Section - A ( 5 × 3 = 15 Marks )
Answer any Five of the following Eight questions. Each carries Three marks.
2. What is Inter Process Communication (IPC)? List the models of IPC in operating systems. (Unit-I, Page No. 4, Q10)
4. Draw a resource allocation graph to show a deadlock. (Unit-II, Page No. 52, Q10)
6. List the operations to be performed on directories. (Unit-III, Page No. 75, Q7)
7. What are tree structured directories? (Unit-III, Page No. 76, Q10)
Section - B ( 3 × 15 = 45 Marks )
Answer all of the following Three questions. Each carries Fifteen marks.
9. (a) Define operating system. What are the services of an operating system?
Explain. (Unit-I, Page No. 10, Q19)
OR
(b) Describe about semaphores and their usage and implementation. (Unit-I, Page No. 35, Q43)
10. (a) Define deadlock. Explain necessary conditions for arising deadlocks. (Unit-II, Page No. 61, Q19)
OR
(b) Explain all the strategies involved in deadlock detection. (Unit-II, Page No. 66, Q24)
11. (a) Explain various disk scheduling algorithms with an example. (Unit-III, Page No. 97, Q32)
OR
(b) Describe various file allocation methods briefly. (Unit-III, Page No. 119, Q53)
CONTENTS
Syllabus (As per 2016-17 Curriculum)
List of Important Definitions
Internal Assessment
2.2 Deadlocks
Internal Assessment
Operating System Structures: Operating System Services, User Interface for Operating System, System Calls.
Process Management: Process Concept, Process Scheduling, Operations on Processes, Inter Process Communication.
Synchronization: Critical Section Problem, Peterson's Solution, Semaphores and Monitors.
UNIT-II
Deadlocks: System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Avoidance and Detection.
UNIT-III
Virtual Memory: Introduction, Demand Paging, Page Replacement, Allocation of Frames, Thrashing.
File Systems: File Concept, Access Methods, Directory and Disk Structure, File System Mounting, File Management.
List of Important Definitions: Computer Science Paper-V
UNIT - I
1. Operating System
An operating system is a program or a collection of programs that controls the computer hardware and acts as an
intermediate between the user and hardware.
2. Process
Process is the fundamental concept of operating systems structure. A program under execution is referred to as a process.
3. Inter Process Communication (IPC)
Inter Process Communication (IPC) is defined as the communication between one process and another.
4. Command Line Interface
Command line interface, popularly known as the command interpreter, makes use of various commands with which
a user can directly interact with the operating system.
5. Thread
A thread can be thought as a basic unit of CPU utilization.
6. Schedulers
Scheduler is defined as a program which selects a user program from disk and allocates the CPU to that program.
7. Short Term Scheduler (STS)
Short term scheduler is defined as a program (part of the operating system) that selects among the processes that are ready
to execute and allocates the CPU to one of them.
8. Context Switching
Context switching refers to the process of switching the CPU to some other process thereby saving the state of the old
process and loading the saved state for the new process.
9. Critical Resource
A resource that cannot be shared between two or more processes at the same time is called a critical resource.
10. Critical Section
A critical section is a segment of code present in a process in which the process may be modifying or accessing common
variables or shared data items.
11. Starvation
Two or more processes are said to be in starvation if they are waiting perpetually for a resource which is occupied by
another process.
12. Preemptive Kernel
A Kernel that permits a process to be preempted or interrupted during its execution is called preemptive kernel.
13. Non-preemptive Kernel
A Kernel that does not permit a process to be preempted or interrupted during its execution is called non-preemptive
kernel.
14. Peterson’s Solution
Peterson's solution is a software based solution to the critical section problem that satisfies all the requirements like mutual
exclusion, progress and bounded waiting.
15. Semaphore
A semaphore is an integer variable which can be accessed using two operations wait and signal.
16. Monitor
A monitor is a construct in a programming language which consists of procedures, variables and data structures.
SIA PUBLISHERS and dISTRIBUTORS PVT. LTd. L.1
Computer SCienCe paper-V operating sYstems
UNIT - II
1. Deadlock
A situation in which a process waits indefinitely for requested resources while those resources are held by other processes
that are themselves waiting.
2. Program
'Program' refers to the collection of instructions given to the system in any programming language. Alternatively, a
program is a static object residing in a file.
3. Scheduling
Scheduling is defined as the activity of deciding when processes will receive the resources they request.
4. Jobs
A job is a sequence of programs used to perform a particular task. Typically a job is carried out in various steps where
each step depends on the successful execution of its preceding step. It is usually used in a non-interactive environment.
5. Job Scheduling
Job scheduling is also called long-term scheduling, which is responsible for selecting a job from disk and transferring
it into main memory for execution.
6. CPU Utilization
The amount of time that the CPU is kept busy executing processes.
7. Throughput
The number of processes that are completed per unit time.
8. Turnaround Time
The interval from the time of submission to the time of completion.
9. Waiting Time
The sum of periods spent waiting in the ready queue.
10. Response Time
The time from the submission of a request until the first response is produced in an interactive system.
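A brief worked example ties definitions 6 to 10 together (the numbers are illustrative only, not from the syllabus). Suppose a process is submitted at time 0, first gets the CPU at time 2 and completes at time 10 after using the CPU for 6 time units in total. Then,

Turnaround time = 10 - 0 = 10 units
Waiting time = turnaround time - total CPU burst = 10 - 6 = 4 units
Response time = 2 - 0 = 2 units

If five such processes complete in 50 time units, throughput = 5/50 = 0.1 processes per unit time.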
UNIT - III
1. Page
A page refers to the logical memory location which contains fixed-sized blocks.
2. Frame
A frame refers to the physical memory location which is divided into fixed-sized blocks.
3. File
A file is a grouping of similar records or related information together, stored in secondary memory.
4. File Management
The process of managing files and the operations performed on files is referred to as file management.
5. Logical Address Space
Logical address is defined as the address which is generated by the CPU.
6. Physical Address Space
Physical address is defined as the actual memory address where the data or instruction is present.
L.2 SIA PUBLISHERS and dISTRIBUTORS PVT. LTd.
List of important Definitions Computer SCienCe paper-V
7. Fragmentation
Fragmentation is defined as a wastage of memory space.
8. Virtual Memory
Virtual memory is a concept of giving programmers an illusion that they have a large memory at their disposal even
though they have very small physical memory.
9. Demand Paging
Pure demand paging is a technique where a process starts execution without a single page in memory.
10. Thrashing
Thrashing refers to a situation wherein the operating system wastes most of its crucial time in accessing the secondary
storage, looking out for referenced pages that are unavailable in memory.
11. Working Set Model
Working set can be defined as the set of pages that a program is currently using (or) most recently used.
12. Seek Time
The time required to move the head to the desired cylinder or track is called seek time or random access time or
positioning time.
13. Rotational Latency
The time required to move the head to the desired sector by spinning the platter is called rotational latency.
14. Security
The term security refers to a state of being protected from harm or from those that cause negative effects.
15. Protection
Protection refers to, keeping the system safe physically as well as from unauthorized access.
16. Allocation Methods
An allocation method is considered to be efficient if it allocates space such that no disk space is wasted and accessing
the files takes less time.
UNIT-1: INTRODUCTION, OS STRUCTURES, PROCESS MANAGEMENT AND SYNCHRONIZATION
Learning Objectives
After studying this unit, a student will have thorough knowledge about the following key concepts,
Computer System Architecture and Computing Environments.
Various Operating System Services and User Interfaces for Operating System.
System calls and their various types.
Process concept, Scheduling, Operations and Inter Process Communication.
Critical Section Problem, Peterson's Solution, Synchronization, Semaphores and Monitors.
INTRODUCTION
An operating system is a program or collection of programs that controls the computer hardware and acts as
an interface between user and hardware. The first operating system was developed in the early 1950s by General
Motors Research Laboratories. Later, different operating systems were developed such as batch processing,
multiprogramming, time sharing, distributed and real time systems. The major components of a typical OS are
process management, memory management, file management, storage management and I/O system management.
The primary functionalities of an operating system are that it acts as a resource manager and as a user/computer interface.
There are many services provided by the OS which are accessed using different types of system calls.
A process is referred to as a program which is under execution. The information about each process that is in
execution mode is made available in the process control block. There are two basic operations that can be
performed on a process i.e., process creation and deletion.
PART-A
SHORT QUESTIONS WITH SOLUTIONS
Q1. What do you mean by multiprocessor systems?
Answer : Model Paper-I, Q1
Computer systems that carry more than one general purpose processor are known as multiprocessor (or) parallel (or)
tightly coupled systems. These processors share the computer bus, memory, clock and various hardware components. These types
of systems are used because of the following advantages,
(a) Increased reliability
(b) Increased throughput
(c) Economy of scale.
Q2. Define operating system. Give two examples.
Answer : Model Paper-II, Q1
Operating System
An operating system is a program or a collection of programs that controls the computer hardware and acts as an interme-
diate between the user and hardware. It provides a platform for application programs to run on it. It has the following objectives,
(i) Efficiency
An operating system must be capable of managing all the resources present in the system.
(ii) Convenience
An operating system should provide an environment that is simple and easy to use.
(iii) Ability to Evolve
An operating system should be developed in such a way that it provides flexibility and maintainability. Hence, the changes
can be done easily.
Figure: Operating system as an interface between the user and computer hardware

The following are examples of multi-user operating systems,
(i) Structure of operating system which includes six layers.
(ii) Structure of MULTICS system which includes several concentric layers.

The system calls are categorized as follows,
(i) Process control system calls
(ii) File management system calls
(iii) System information management system calls
(iv) Device management system calls
(v) Communication system calls.

Q5. Write the services of operating system.
Answer : Model Paper-III, Q8
The following are the services provided by an operating system,
(i) Program creation and execution
(ii) User interface
(iii) I/O device support
(iv) File system management
(v) Interprocess communication
(vi) Resource allocation
(vii) Error detection
(viii) Accounting
(ix) Protection and security.

Q8. List the features of system call.
Answer : Model Paper-II, Q2
The following are the features of system calls,
1. It offers operations on processes such as create, load, execute and terminate.
2. It offers operations on files such as open, close, read, write, get file attributes and set file attributes.
3. It offers managing of system information like system date and time, operating system version etc.
4. It offers accessing of system resources like main memory, disk drives etc.
5. It offers a process the ability to exchange information by message passing or shared memory.
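For illustration, the file-management system calls listed above map onto concrete calls such as open( ), read( ), write( ) and close( ). The sketch below assumes the POSIX interface and a hypothetical file name example.txt; it is an illustration, not part of the syllabus text,

#include <fcntl.h>     /* open( ) */
#include <unistd.h>    /* read( ), write( ), close( ) */

int main(void)
{
    char buf[64];
    /* open( ) is a file management system call: it asks the kernel
       to locate the file and return a descriptor for it */
    int fd = open("example.txt", O_RDONLY);
    if (fd < 0)
        return 1;
    /* read( ) transfers data from the file into the process buffer */
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);   /* write( ) sends it to the terminal */
    close(fd);                                  /* close( ) releases the descriptor */
    return 0;
}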
SIA PUBLISHERS and dISTRIBUTORS PVT. LTd. 3
Computer SCienCe paper-V operating SyStemS
Q9. Define a process.
Answer : Model Paper-I, Q8
Process is the fundamental concept of operating system structure. A program under execution is referred to as a process.
It can also be defined as an active entity that can be assigned to a processor for execution. A process is a dynamic object that
resides in main memory. A process includes the current values of the program counter and processor's registers. Each process
possesses its own virtual CPU. A process contains the following two elements,
(a) Program code
(b) A set of data.
Q10. What is Inter Process Communication (IPC)? List the models of IPC in operating systems.
Answer : Model Paper-III, Q2
Semaphore
Signals provide simple means of cooperation between two or more processes in such a way that a process can be forcefully
stopped at some specified point till it receives the signal. For signalling between the processes a special variable called semaphore
(or counting semaphore) is used. For a semaphore 'S', a process can execute two primitives as follows,
(i) semSignal(S)
This semaphore primitive is used to transmit a signal through semaphore 'S'.
(ii) semWait(S)
This semaphore or counting semaphore primitive is used to receive a signal using semaphore 'S'. If the corresponding
transmit signal has not yet been sent then the process is suspended till a signal is received.
PART-B
ESSAY QUESTIONS WITH SOLUTIONS
1.1 Introduction
Client-Server Model | Peer-to-Peer Model
6. The client-server model is mostly used in big corporations or organizations with high-security data, e-mail, banking services etc. | 6. The peer-to-peer model is mostly used in small businesses, by home users and in peer-to-peer file sharing programs like Napster, BitTorrent etc.
7. Consider an example of a client-server network to which computers P, Q, R and S are connected. P is the server and Q, R and S are the clients. Suppose that a printer is attached to P. If Q needs to print a file, it will send a request to P; thus P will respond to Q by printing the file. If R sends a request asking for a file to access, P will check R's authentication for the data access; if it finds R unauthorized it will reject the request and respond to R by turning down its request. | 7. Consider an example of a peer-to-peer network to which computers P, Q, R and S are connected. If P needs a file from R then it sends a request to R. R accepts the request and sends the file to P if it finds it. During this process Q and S are ignored, but they function normally. If all the computers are connected to a network printer and P and Q each send a request to print, then the request that reached first will be granted first and later the printer is granted the next request.
8. Highly expensive to set up and maintain. | 8. It is cheaper than the client-server model.
9. Work load of the server increases on the addition of more clients, thus causing low network speed. | 9. It increases efficiency with the addition of new members to the system.
10. It is not a robust model. | 10. It is a very robust model.
11. It provides security to the network. | 11. It does not provide security.
Operating System
An operating system is a program or a collection of programs that controls the computer hardware and acts as an interme-
diate between the user and hardware. It provides platform for application programs to run on it. It has the following objectives,
(i) Efficiency
An operating system must be capable of managing all the resources present in the system.
(ii) Convenience
An operating system should provide an environment that is simple and easy to use.
(iii) Ability to Evolve
An operating system should be developed in such a way that it provides flexibility and maintainability. Hence, the changes can be done easily.

Figure: Operating system as an interface between the user and computer hardware

Examples
Windows, Unix, MS-DOS.
Services of Operating System
The following are the services provided by an operating system,
(i) Program Creation and Execution
The operating system should support various utilities such as editors, compilers and debuggers etc., in order to give facility to the programmers to write and execute their programs.
(ii) User Interface
The operating system should provide an interface through which a user can interact. Most of the earlier operating systems provide a Command Line Interface (CLI), which uses text commands. All the users are supposed to type their commands through the keyboard. Some systems support a batch interface which accepts a file containing a set of commands and executes them. Now-a-days a Graphical User Interface (GUI) is used, where a window displays a list of text commands to be chosen by a user through an input or some pointing device.
(iii) I/O Device Support
There are numerous I/O devices. Each of them has its own set of instructions and control signals which are used during its operation. The operating system should take care of all these internal device details and should provide users with simple read( ) and write( ) functions for utilizing those devices.
(iv) File System Management
The user data is usually stored in files. An operating system should manage all these files and should provide functions to perform various operations on them, such as create, open, read, write, close, search (according to its name), delete etc. Additionally, it should protect files pertaining to different users from any unauthorized access.
(v) Interprocess Communication
There are several instances when a process may require to communicate with other processes, often by exchanging data among themselves. This interprocess communication is employed by the operating system using techniques like message passing and shared memory.
(vi) Resource Allocation
In a system, multiple programs may execute concurrently. It is the responsibility of the operating system to allocate resources (such as CPU time, main memory, files, etc.) to them. For example, various scheduling algorithms are used for allocating CPU time and resources to processes.
(vii) Error Detection
The operating system is responsible for keeping track of various errors that may occur in the CPU, memory, I/O devices, user programs, etc. Whenever errors occur, the operating system takes appropriate steps and may provide debugging facilities.
(viii) Accounting
It is a process of monitoring user activities, to keep track of which user has accessed which resources and the number of times the system is being accessed. This recorded statistical information can be used to improve the system performance by tracing out which resources are in demand and by increasing the instances of those resources.
(ix) Protection and Security
Modern computer systems allow multiple users to execute their multiple processes concurrently in the system. These multiple processes may access data simultaneously, which has to be regulated so that only valid users are given access to the data. It is the job of the operating system to apply protection and security mechanisms to the system.

1.2.2 User Interface for Operating System
Q20. Write about user interface for operating system.
Answer :
There are many ways through which a user can interact with the operating system. The two fundamental ways among these are,
(i) Command line interface
(ii) Graphical user interface.
(i) Command Line Interface
Command line interface, which is popularly known as the command interpreter, makes use of various commands with which a user can directly interact with the operating system. It can be present as an in-built program in the kernel of the operating system (or) it can be present as a special program in operating systems like Windows, UNIX etc. There can be more than one interpreter present in a single system in the form of shells such as C shell, Korn shell, Bourne shell etc. Apart from these, there can be other shells like the ones bought from a third party.
The major responsibility of the command interpreter is to execute the commands, which can include copy, print, create, delete etc., with respect to file management. The implementation of commands depends on the following two approaches,
The command can be implemented directly if the command interpreter solely carries the code of its execution.
The command interpreter does not carry any code and hence it does not know how to execute the command. In this case, the implementation is carried out with the help of system programs.
(ii) Graphical User Interface (GUI)
1. Advanced Visual Presentation
The visual presentation gives an idea about the content to be seen on the interface by the users. The graphical system is advanced by adding the following features,
Possibility of displaying more than 16 million colours
Animation and the presentation of photographs and motion videos.
The graphical system provides to its user several useful, simple, meaningful, obvious visual elements listed below,
(i) Windows (primary, secondary or dialog boxes)
(ii) Menus (menu bar, pull-down, pop-up, cascading)
(iii) Files or programs denoted by icons
(iv) Assorted screen-based controls (text boxes, list boxes, combination boxes, settings, scroll bars and buttons)
(v) A mouse pointer and cursor.
2. Interaction using Pick and Click
'Pick' defines the motor activity of a user to pick out an element of a graphical screen on which an action is to be taken. 'Click' represents the signal to carry out an action.
The pick-and-click technique is carried out with the help of the mouse and its buttons.
The mouse pointer is taken to a specific element, which accounts for PICK, by the user, and the action is signaled, which causes a click.
The keyboard is another technique for carrying out selection actions.
4. Visualization
The term visualization refers to a learning method which permits users to understand complex content, either voluminous or too abstract. The system functions are depicted by modifying the representation of entities. Visualization is enhanced by displaying specialized graphical images. The aim is not compulsorily to generate a real graphical image but to give an image that expresses the most useful information.
Therefore, we can increase production, work with rapid and exact data, and grow knowledge with the help of proper visualizations.
5. Behaviour of Objects
Objects and actions constitute a graphical system. Objects are visible elements on the screen viewed by the users. Objects are manipulated as a single unit. The focus of users must be kept on objects rather than actions in case of a well-designed system. Objects are made up of 'sub-objects'.
Example
Document is an object whereas paragraph, sentence, word and letter are its sub-objects. The objects are divided into three classes by IBM's System Application Architecture Common User Access Advanced Interface Design Reference (SAA CUA),
(a) Data objects
(b) Container objects
(c) Device objects.
(a) Data Objects
These objects present information i.e., text or graphics that appears in the body of the screen. It's a screen-based control.
(b) Container Objects
These objects hold other objects. Two or more related objects are grouped by container objects for simple access and retrieval.
Types of Container Objects
Workplace
The desktop is the workplace. All objects are stored on the desktop.
Folders
Throughput = Number of processes completed / Unit time
2. Time-sharing
Time sharing is considered as the logical extension of multiprogramming systems. In a time sharing system, each user has a separate
program in memory. Each program in a time sharing system is given a certain time slot i.e., the operating system allocates the CPU to
any process for a given time period. This time period is known as the "time quantum" or "time slice". In Unix OS, the time slice is
1 sec i.e., the CPU is allocated to every program for one second. Once the time quantum is completed, the CPU is taken away from the
program and is given to the next waiting program in the job pool. Suppose a program executes an I/O operation before its 1 sec time
quantum; then the program on its own releases the CPU and performs the I/O operation. Thus, when the program starts executing, it
executes only for the time quantum period before it finishes or needs to perform an I/O operation. Thus, in time sharing many users
share the CPU simultaneously. The CPU switches rapidly from one user to another, giving an impression to each user that he
has his own CPU whereas, actually, only one CPU is shared among many users. The CPU is distributed among all programs.
Example
Job CPU burst
1 5 sec
2 1 sec
3 0.5 sec
4 3 sec
Let the time quantum be 1 sec, then CPU is allocated to the jobs as follows,
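The following schedule is a worked reconstruction (assuming the jobs are served cyclically in the order 1, 2, 3, 4 and that a job releases the CPU as soon as it completes, as the text describes),

Time 0-1:     Job 1 runs (4 sec remaining)
Time 1-2:     Job 2 runs and completes
Time 2-2.5:   Job 3 runs and completes (releases the CPU after only 0.5 sec)
Time 2.5-3.5: Job 4 runs (2 sec remaining)
Time 3.5-4.5: Job 1 runs (3 sec remaining)
Time 4.5-5.5: Job 4 runs (1 sec remaining)
Time 5.5-6.5: Job 1 runs (2 sec remaining)
Time 6.5-7.5: Job 4 runs and completes
Time 7.5-9.5: Job 1 runs its last two quanta and completes at 9.5 sec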
When there exists more than one program, both multiprogramming and time sharing along with CPU scheduling make
the CPU available to every single user for a portion of time.
As the CPU is kept busy all the time, it requires several tasks to be kept ready in the memory. In a situation where there are
more jobs ready to be inserted into the memory than there is memory available, a decision is made to select
among them. This decision making is known as "job scheduling".
3. CPU Scheduling
In a situation where multiple jobs in the memory are ready to be executed, a decision is made to select the
appropriate job among them. This decision making is known as CPU scheduling.
4. Virtual Memory
Virtual memory is an extended concept of the swapping technique. Swapping is responsible for swapping processes in and out
from the disk, whereas with virtual memory a process which is not present in the memory can also be executed.
Q24. Discuss various approaches of designing an operating system.
Answer : Model Paper-II, Q9(a)
Operating system must be designed and organized carefully for better performance and proper functionality, in such a way that it can be modified easily in the future. Usually, it is preferable to have several small components of a system instead of having a single or monolithic system. Each component should have a well-defined job and all the components are interconnected to form a single operating system. The following are the various approaches of operating system design,
1. Simple structure
2. Layered approach
3. Microkernels
4. Modules-based approach.
1. Simple Structure
There are several commercial operating systems which have simple but not well-defined structures. Usually, these systems were developed as small, simple systems having limited functionalities, but their popularity grew beyond their original scope. One such operating system is MS-DOS, which was developed keeping in mind that it should give more functionality within the limited space. Its structure does not carry a careful division of its modules.
MS-DOS has always experienced vulnerability towards threats and malicious programs which can cause damage to the entire system because of the improper separation between the interfaces and their functionalities. Due to this lack of security and protection, any application can easily gain access to the I/O operations and hardware of the system without any restrictions. In fact, the hardware (i.e., the 8088 processor) of that period also provides no hardware protection.

Figure (1): MS-DOS layer structure (application/user programs, resident system programs)

The earlier versions of Unix also fall in this category. It divides the system into two parts, the kernel (which is also called the heart of Unix) and the system programs. The kernel contains several device drivers which interact with the system directly. Later, a problem occurs in developing the kernel because it becomes larger to implement as it has much more functionality.

Figure (2): Structure of UNIX (application/user; compilers, shell, libraries, etc.; system call interface; kernel: signals, terminal handling, file system, block I/O system, CPU scheduling, swapping, paging, virtual memory; kernel interface to the hardware; drivers; terminals, disks and devices, physical memory)

2. Layered Approach
In the layered approach an operating system is divided into multiple layers or levels. The highest layer (layer N) corresponds to users or application programs and the lowest or bottom layer (i.e., layer 0) corresponds to hardware.

Figure (3): A layered operating system (layer N, ..., layer 1, layer 0, hardware)

Each layer consists of data structures and operations which are invoked by its upper layers. A lower layer provides some services to the upper layer. The advantage of this approach is that construction and debugging become simple. As we know, the first layer (i.e., layer 0) is nothing but hardware, and if we assume that the hardware is running correctly, then its services can be used by layer 1. Now, layer 1 is debugged and if any bug is found it is fixed. The advantage is that the errors can be fixed easily as they lie in that particular layer. Each higher layer simply uses the services of its lower layer without worrying about how those services are implemented (by the lower layer).

Figure: Loadable kernel modules (core kernel, loadable system calls, STREAMS modules, executable formats, miscellaneous modules, other)
Answer :
Similarities between Modular-kernel Approach and Layered Approach
The basic similarity between the modular kernel approach and the layered approach is that, in both approaches, subsystems
interact with each other using interfaces that are typically narrow. Moreover, in both approaches, modifications done on one
part do not have any effect on other parts. In other words, it can be said that the parts are loosely coupled.
Differences between a Modular-kernel Approach and Layered Approach

Layered Approach | Modular-kernel Approach
(a) Operating system is divided into different layers. | (a) Operating system is divided into system and user-level programs.
(b) Layered approach imposes a strict ordering of sub-systems such that each sub-system must perform its operation independently without the upper layer sub-system. | (b) There is no such restriction. That is, a lower layer sub-system can invoke its operation by freely interacting with an upper layer sub-system.
(c) There is relatively more overhead in invoking a method present in the lower part of the kernel. | (c) There is less overhead in invoking a method present in the lower part of the kernel.
(d) It is not capable of handling lower-level communication and hardware interrupts. | (d) It is capable of handling lower-level communication and hardware interrupts.
(e) It does not provide services for message passing and process scheduling. | (e) It provides services for message passing and process scheduling.
(a) Process
Process is the fundamental concept of operating system structure, which is defined as a program under execution.
Alternatively, it can also be defined as an active entity that can be assigned to a processor for execution. A process is a
dynamic object that resides in main memory and it includes the current values of the program counter and the processor's
registers. Generally every process contains the following two elements,
(i) Program code
(ii) Set of data.
(b) Process Control Block
In a multiprogramming system, it is necessary to get the information about each process that is being executed. All this
information is available in the process control block.
The process control block information is classified into the following three groups,
(i) Process identification
(ii) Process state information
(iii) Process control information.

Figure: Process Control Block (PCB) fields: process identification, process state information, process control information
Short term scheduler is defined as a program (part of the operating system) that selects among the processes that are ready
to execute and allocates the CPU to one of them. It decides which of the ready processes is to be scheduled or dispatched next.
The difference between LTS and STS is that the LTS is called less frequently whereas the STS is called more frequently.
LTS must select a program from disk into main memory only once i.e., when the program is executed. However, STS must
select a job from the ready queue quite often (every 1 second in the Unix operating system) i.e., every second the STS is
called, it will select one PCB from the ready queue and give the CPU to that job. After 1 second is completed, the STS is called
again to select one more job from the ready queue. This process repeats. Thus, because of the short duration between executions,
the STS must be very fast in selecting a job, otherwise the CPU will sit idle. However, LTS is called less frequently, so because of
the long durations between executions, LTS can afford to take some time in selecting a good job from disk. A good job is defined
as one which is a mix of CPU bursts and I/O bursts.
MTS is used during swapping, where a process is temporarily removed from memory, often to decrease the overhead
of the CPU, and later resumed.
As the degree of multiprogramming increases, CPU utilization also increases. At one stage the CPU utilization is
maximum for a specific number of user programs in memory. At this stage, if the degree of multiprogramming is further
increased, CPU utilization drops. Immediately, the operating system observes the decrease in CPU utilization and calls the MTS.
The MTS will swap out excess programs from memory and put them on disk. With this, the CPU utilization increases. After some
time, when some programs leave memory, the MTS will swap those programs which were swapped out back into memory, and
their execution resumes. This scheme, which is known as swapping, is performed by the MTS. Thus, swap out and swap in should be done
at appropriate times by the MTS.
Answer :
Context switching refers to the process of switching the CPU to some other process thereby saving the state of the old
process and loading the saved state for the new process.
Context switching is actually an overhead, which means that apart from switching no other task is performed. Each
machine carries a different switching speed based on factors such as memory speed, the number of registers that need to be copied
and the presence of special instructions (for example, a single instruction that can load or store all registers). The
speed usually ranges from 1 to 1000 µsec.
The time required to do context switching typically depends on hardware support. An example of such type is a processor
that can provide more than one set of registers. In a context switch the pointer needs to be changed to the active set of registers. The
amount of work that is to be done during the process of context switching is more in case of complex operating systems.
22 SIA PUBLISHERS and dISTRIBUTORS PVT. LTd.
UNIT-1 INTrodUcTIoN, oS STrUcTUreS, ProceSS MaNageMeNT aNd SyNchroNIzaTIoN Computer SCienCe paper-V
A context switch may occur without changing the state of the process being executed. Hence, it involves lesser overhead than the situation in which a change in the process state occurs from running to ready or blocked states. In case of a change in the process state, the operating system has to make certain changes in its environment, which are described below,
1. The context associated with the processor, along with the program counter and other registers, is saved.
2. Updates the PCB associated with the process being executed. This involves changing the state of the process to one of the available process states. Updation of other fields is also required.
3. The PCB of this process is moved to some appropriate queue.
4. Execution of the active process is transferred by selecting some other process.
5. Updates the PCB of the chosen process as it includes the changes in its state (to running).
6. Updates the data structures associated with the memory management, which may require the management of the address translation process.
7. Restores the context of the suspended process by loading the previous values of the PC and other CPU registers.
Thus, the process switch, which involves a state change, requires considerably more effort than the context switch.

1.3.3 Operations on Processes
Q31. Explain the process creation and termination.
Answer :
There are two basic operations that can be performed on a process. They are,
1. Process creation
2. Process deletion/termination.
1. Process Creation
(i) When a new process is created, the operating system assigns a unique Process Identifier (PID) to it and inserts a new entry in the primary process table.
(ii) Then the required memory space for all the elements of the process such as program, data and stack is allocated, including space for its Process Control Block (PCB).
(iii) Next, the various values in the PCB are initialized such as,
(a) Process identification part is filled with the PID assigned to it in step (i) and also its parent's PID.
(b) The processor register values are mostly filled with zeroes, except for the stack pointer and program counter. Stack pointer is filled with the address of the stack allocated to it in step (ii) and program counter is filled with the address of its program entry point.
(c) The process state information would be set to 'New'.
(d) Priority would be lowest by default, but the user can specify any priority during creation.
(e) In the beginning, the process is not allocated any I/O devices or files. The user has to request them, or if this is a child process it may inherit some resources from its parent.
(iv) Then the operating system will link this process to the scheduling queue and the process state would be changed from 'New' to 'Ready'. Now the process is competing for the CPU.
(v) Additionally, the operating system will create some other data structures such as log files or accounting files to keep track of process activity.
2. Process Deletion/Termination
Processes are terminated by themselves when they finish executing their last statement; then the operating system uses the exit( ) system call to delete their context. Then all the resources held by that process, like physical and virtual memory, I/O buffers, open files etc., are taken back by the operating system. A process P can be terminated either by the operating system or by the parent process of P. A parent may terminate a process due to one of the following reasons,
(i) When the task given to the child is not required now.
(ii) When the child has taken more resources than its limit.
(iii) The parent of the process is exiting; as a result all its children are deleted. This is called cascaded termination.
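As a concrete illustration of the creation and termination steps above, the following minimal sketch assumes a Unix-like system where fork( ) creates the child, exit( ) terminates it and the parent collects it with wait( ); it is an example, not the book's own listing,

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* wait( ) */
#include <unistd.h>     /* fork( ), getpid( ) */

int main(void)
{
    pid_t pid = fork();              /* create a child: new PID, new PCB */
    if (pid < 0)
        return 1;                    /* creation failed */
    if (pid == 0) {
        /* child: inherits resources from its parent, as in step (e) */
        printf("child pid = %d\n", (int)getpid());
        exit(0);                     /* normal termination via exit( ) */
    }
    int status;
    wait(&status);                   /* parent collects the child's exit status */
    printf("child terminated, status = %d\n", status);
    return 0;
}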
Q32. Explain the reasons for process termination.
Answer :
Reasons for Process Termination
A process in an operating system can be terminated when certain errors or default conditions occur. Following are some of the reasons that lead to process termination,
1. Normal Completion
A process can complete its execution in a normal manner by executing an operating system service call.
2. Unavailability of the Required Memory
A process is terminated when the system is unable to provide the memory required, as it is more than the memory that is actually contained in the system.
3. Exceeding the Execution Time Limit
Process termination also occurs when its execution time is much longer than the specified time limit i.e., it takes a longer time to execute. This is because of the following possibilities,
(i) Total elapsed time
(ii) Time to execute
(iii) The time interval since the last input was provided by the user. This usually occurs in case of interactive processes.
4. Violating Memory Access Limits
A process can even be terminated when it is attempting to access a memory location to which access is not permitted.
5. Protection Error
A protection error occurs when a process is trying to use a resource (e.g. a file) to which access is not granted, or is using it in an inappropriate manner such as writing to a read-only file.
6. Arithmetic Error
Arithmetic errors such as division-by-zero or storing a number greater than the hardware capacity also lead to process termination.
7. Input/Output Failure
It refers to an error that results from some input/output operation, such as inability to find a file, or failure of a read or write operation even after trying a certain number of times.
8. Misuse of Data
Misuse of data i.e., using wrong type or uninitialized data also terminates the process.
9. Exceeding the Waiting Time Limit
Exceeding the waiting time for the occurrence of an event also terminates the process.
10. Invalid Instruction Execution
When a process is trying to execute an instruction that actually does not exist, the process gets terminated.
11. Using a Privileged Instruction
An attempt to use an operating system instruction by a process stops its execution.
12. Interference by an Operating System or an Operator
An operator or an operating system sometimes interferes with process execution and leads to its termination. One such example is the occurrence of deadlocks.
13. Parent Process Termination
When a parent process terminates, it causes all its child processes to stop their execution.
14. Request from a Parent Process
A parent process has a right to terminate any of its child processes at any time during their execution.

Cooperating processes are advantageous due to the following reasons,
(i) Sharing of Information
A specific type of information may be useful to many users. So, in order to fulfil this, a cooperative environment must be created wherein the users can gain access to all resources concurrently.
(ii) High Computation Speed
The execution of a particular task can be enhanced by dividing the task into various subtasks, wherein each subtask can be executed in parallel along with the others. However, high computation speed can be obtained through multiple processing elements such as CPUs and I/O channels.
(iii) System's Modularity
The systems can be manufactured in a modular way i.e., breaking the system's functions into various separate processes or threads.
(iv) Convenience to Users
The cooperating environment facilitates convenience to users. Many users can perform multitasking i.e., they work on more than one task.
Example
The user can handle printing, editing and compiling simultaneously.
Models of IPC
Inter process communication has two different models, they are as follows,
(i) Shared memory system
(ii) Message passing system.
(i) Shared Memory System
Shared memory system requires communicating processes to share some variables. The processes are expected to exchange information through the use of shared variables. Here the operating system needs to provide only shared memory; the responsibility for providing communication rests with the application programmers, and the operating system does not interfere in communication.
(ii) Message Passing System
The communication ports are private between client and server. The reason for using a pair of communication ports is that one port is employed for the exchange of messages from client to server and the other is used for the exchange of messages from server to client.
The Mach operating system stores the messages in the mailbox as soon as it receives them. If the mailbox carries messages from a single process, these messages are arranged in FIFO sequence, but this sequence is not effective in case of messages from multiple owners.
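As a minimal sketch of the message passing model (a POSIX pipe is assumed as the kernel-managed channel; the discussion above is OS-independent),

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>   /* pipe( ), fork( ), read( ), write( ) */

int main(void)
{
    int fd[2];
    char buf[32];
    if (pipe(fd) < 0)          /* kernel-managed channel between processes */
        return 1;
    if (fork() == 0) {         /* child acts as the sender */
        close(fd[0]);          /* close the unused read end */
        write(fd[1], "hello", 6);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);              /* parent acts as the receiver */
    read(fd[0], buf, sizeof(buf));
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}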
Answer :
The following are the important properties that should be satisfied by a critical section implementation or solution,
1. Mutual Exclusion
When a process P1 is in its critical section, then no other process can be executing in its critical section.
2. Progress
When a critical section is not in use and other processes are requesting it, then only those processes which are not executing in their remainder sections can participate in deciding which process will enter its critical section next.
3. Bounded Waiting
A limit or bound is fixed on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section.
There are two general ways for handling critical sections in operating systems. They are,
(i) Preemptive Kernel
It allows a kernel mode process to be preempted (i.e., interrupted) during execution.
(ii) Non-preemptive Kernel
It doesn't allow a kernel mode process to be preempted during execution; the process will execute until it exits kernel mode or voluntarily leaves control of the CPU. This approach is helpful in avoiding race conditions.
The preemptive kernel is used in real-time systems, where a process executing in kernel mode can also be preempted, which makes the kernel more responsive. Windows XP, Windows 2000 and traditional Unix are non-preemptive kernels whereas Linux of kernel version 2.6 is a preemptive kernel.
Q39. Explain preemptive kernels and non-preemptive kernels. Also explain why anyone would favour a preemptive kernel over a non-preemptive one.
Answer :
Preemptive Kernel
A kernel that permits a process to be preempted or interrupted during its execution is called a preemptive kernel. In a preemptive kernel, every task is designed as an independent entity that has total control over the CPU. However, the task that is ready to run and has the highest priority is executed first by the kernel.

Figure: Preemptive kernel's program flow

Furthermore, in any process execution, a task can be in either of the following three states,
1. Running and waiting
2. Waiting
3. Idle.
1. Running and Waiting
A task will be in a running and waiting state when it is not ready to run.
2. Waiting
A task will be in a waiting state when it is ready to run but cannot do so due to the execution of a higher priority task.
3. Idle
A task is considered to be idle when no process has a task that is ready to be executed. This task is a special purpose entity which has the lowest priority and is usually incorporated in all kernel programs.
Operation of Preemptive Kernel
The following figure shows the program context for a preemptive kernel,

Figure: Preemptive kernel program context (tasks A, B and C preempted and resumed over time; the legend denotes 'Running', 'Ready waiting' and 'Not ready waiting')

Here, there are three tasks A, B and C with priorities A > B > C. Hence, when task C is ready to run, the kernel interrupts its idle task and begins the execution of task C. And, when task A is ready to run, the kernel interrupts task C and starts executing task A. However, when task B is ready to run, the kernel does not halt or preempt task A due to A's higher priority.
At this stage all three tasks are in the ready state, but B and C wait for A to finish. When this is done, A goes into a waiting state until it is invoked again. The control is then transferred to B as it holds a higher priority than C. Therefore, when B finishes, the control is transferred to C to complete its execution.
Non-preemptive Kernel
Here, the non-preemptive kernel acts as a periodic scheduler which serially executes every task. However, every task must assist the others by running just once and then returning to the scheduler loop. This is because, if any task gets implemented as an endless loop, then the scheduler will never get to the other tasks.
Operation of Non-preemptive Kernel
The figure below shows the program context of a non-preemptive kernel,

Figure: Non-preemptive kernel program context
Swap Instruction
Another type of instruction is "swap", which is also executed atomically. It operates on two variables as shown below,
void swap(bool *x, bool *y)
{
    bool var;
    var = *x;
    *x = *y;
    *y = var;
}
In this method, mutual exclusion can be provided by declaring a global boolean "lock" initialized to false. Each process also has a local boolean variable "key". To also satisfy the bounded-waiting requirement, a shared boolean array "waiting_flag[n]" is added and the atomic test_set( ) instruction is used. The code for a process i is shown below,
do
{
    waiting_flag[i] = true;
    key = true;
    while (waiting_flag[i] && key)
        key = test_set(&lock);
    waiting_flag[i] = false;
        /* critical section */
    j = (i + 1) % n;
    while ((j != i) && !waiting_flag[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;
    else
        waiting_flag[j] = false;
        /* remainder section */
} while (true);
In the above algorithm, the critical section requirements are satisfied as follows,
Mutual Exclusion
It is achieved with the help of the following while loop,
while (waiting_flag[i] && key)
It means a process can enter its critical section only if either "waiting_flag[i]" or "key" is "false". The "key" value can be set to "false" only if test_set( ) is executed, and whichever process executes test_set( ) first gets the critical section while all others have to wait. Hence, only one process will be in its critical section at any time.
Progress
The progress requirement is met, since after executing the critical section the process sets either "lock = false" or "waiting_flag[j] = false". Either of the two ways allows other waiting processes to proceed.
Bounded Waiting
This requirement is achieved as follows. After a process exits its critical section, the following while loop is executed,
while ((j != i) && !waiting_flag[j])
    j = (j + 1) % n;
The above loop scans the "waiting_flag" array in cyclic order and allows the first waiting process to enter the critical section.
There are some advantages and disadvantages of using special machine instructions to implement mutual exclusion.
Advantages
1. This approach can be used for any number of processes executing not only on a uniprocessor but also on multiprocessors sharing main memory.
2. It is very simple to verify.
3. It can be used to provide support for multiple critical sections, each of which can be defined by its own variable.
Disadvantages
1. Busy waiting is exercised while a process is waiting for access to the critical section. During this, a considerable amount of processor time is consumed.
2. There is a possibility of starvation: when one process exits a critical section and many processes are waiting, the selection of a waiting process can be done in an arbitrary manner. Hence, some processes may be denied access.
3. Deadlock may occur. Consider for example a process P1 which is executing a special instruction upon entering the critical section. If during its execution it is interrupted to assign the processor to some other process P2, which has a higher priority than P1, and if P2 then attempts to use the same resource as P1, its access request is denied due to the mutual exclusion mechanism. Hence, P2 enters a busy-waiting loop, while process P1 will never be dispatched as it has the lower priority.
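As a modern counterpart to the special machine instructions discussed above (C11 atomics are assumed here; the book's test_set( ) is pseudocode), the same test-and-set idea is available portably through atomic_flag,

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* shared lock, initially clear (false) */

void enter_critical(void)
{
    /* atomic_flag_test_and_set( ) atomically sets the flag and returns its
       old value: the process that finds it clear enters first; all others
       busy-wait, which is exactly disadvantage 1 above */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy waiting */
}

void leave_critical(void)
{
    atomic_flag_clear(&lock);   /* corresponds to lock = false */
}

Note that this simple spinlock, unlike the waiting_flag algorithm above, does not by itself guarantee bounded waiting.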
1.4.3 Semaphores
Q42. What is Semaphore?
Answer :
Semaphore
Signals provide simple means of cooperation between two or more processes in such a way that a process can be forcefully stopped at some specified point till it receives the signal. For signalling between the processes a special variable called semaphore (or counting semaphore) is used. For a semaphore 'S', a process can execute two primitives as follows,
(i) semSignal(S)
This semaphore primitive is used to transmit a signal through semaphore 'S'.
(ii) semWait(S)
This semaphore or counting semaphore primitive is used to receive a signal using semaphore 'S'. If the corresponding transmit signal has not yet been sent then the process is suspended till a signal is received.
Hence, a semaphore or counting semaphore is actually an integer variable, with three operations defined as follows,
1. A non-negative value can be used to initialize the semaphore.
2. Each semWait operation decrements the semaphore value; when the value becomes negative, the process gets blocked, else the process execution proceeds in a regular manner.
3. Each semSignal operation increments the semaphore value; when the value is less than or equal to zero, a process which was blocked by a semWait operation is unblocked.
The two semaphore primitives semWait and semSignal can be defined as follows,
struct semaphore
{
    int C;
    queueType que;
};
void semWait(semaphore *S)
{
    S->C = S->C - 1;
    if (S->C < 0)
    {
        /* keep the process in S->que */
        /* block the process */
    }
}
void semSignal(semaphore S)
{
    S.C = S.C + 1;
    if (S.C <= 0)
    {
        /* remove a process from S.que */
        /* place the process on the ready list */
    }
}

The two semaphore primitive operations defined above are atomic.

Example

[Figures (i)–(v): each figure shows the blocked queue, the ready queue and the value of semaphore S at one step. Figure (i) shows the initial state with S = 1, an empty blocked queue and processes P1, P3, P4, P2 in the ready queue; in the final state S = –2.]

2. Now, process P1 issues a semWait instruction on semaphore S which decrements its value to '0', thereby allowing process P2 to run. Process P1 now rejoins the ready queue as shown in figure (ii).
3. Process P2 now issues a semWait instruction and gets blocked, thereby permitting process P4 to run.
4. After the completion of process P4, a semSignal instruction is issued which allows process P2 to shift to the ready queue, as shown in figure (iv).
5. Process P4 is again placed in the ready queue and P3 starts running; this is shown in figure (v).
6. Process P3 gets blocked on issuing a semWait instruction. Processes P1 and P2 run in a similar manner and are blocked, allowing process P4 to resume its execution.
Implementation of a Semaphore

For answer refer Unit-I, Page No. 33, Q.No. 42.

Semaphore Usage

Semaphore is used in two situations,
(i) To solve the critical-section problem
(ii) To gain access control for a given resource.

(i) During Critical-section Problem

Semaphore is used to deal with the critical-section problem among multiple processes by setting its value either to 0 or 1, and hence it is called a 'Binary Semaphore'. The value 1 directs a process to enter the critical section whereas the value 0 prevents processes from entering the critical section (since one of the processes is still in the critical section). The methods wait(s) and signal(s) executed by the processes set the semaphore value to 0 and 1 respectively.

A semaphore can be represented as a structure holding an integer value and a list of waiting processes,

typedef struct
{
    int val;
    struct process *pList;
} semaphore;

When a process executes the wait operation, it decrements the value of the semaphore and checks whether the value is positive or negative. If the value is negative then the process is blocked using the block( ) operation and placed in the waiting queue (i.e., the process list) maintained by the semaphore. This changes the state of the process to the waiting state.

The blocked process can be resumed only when some other process executes the signal operation. The signal operation increments the value of the semaphore and checks whether it is less than or equal to zero. If the condition is satisfied then it removes a blocked process from the waiting queue and resumes its execution using the wakeup( ) operation.
The wait and signal operations of a semaphore can be defined as follows,

void wait(semaphore sem)
{
    sem.val--;
    if (sem.val < 0)
    {
        /* add this process to sem.pList */
        block( );
    }
}

void signal(semaphore sem)
{
    sem.val++;
    if (sem.val <= 0)
    {
        /* remove a process p from sem.pList */
        wakeup(p);
    }
}
It is to be noted that the execution of semaphore operations should be atomic, i.e., no two processes should execute the wait and signal operations on the same semaphore simultaneously. This is itself a critical-section problem, which must be eliminated in both uniprocessor and multiprocessor environments. In a uniprocessor system it can be eliminated by inhibiting interrupts during the execution of the wait and signal operations. In a multiprocessor system it can be prevented by employing special software locking techniques.
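For comparison, the same wait/signal discipline is available through POSIX counting semaphores. The following is a minimal illustrative sketch, not part of this book's pseudocode: it assumes a POSIX system with <semaphore.h> and pthreads, and uses sem_wait/sem_post in place of wait/signal to protect a critical section,

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;            /* binary semaphore guarding the critical section */
int shared_counter = 0; /* shared resource */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);   /* wait: decrement, block if unavailable */
        shared_counter++;   /* critical section */
        sem_post(&mutex);   /* signal: increment, wake one waiter */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);   /* initialized to 1, i.e., a binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}

Compiled with cc -pthread, the final count is always 200000; removing the sem_wait/sem_post pair makes the result unpredictable, which is exactly the race the semaphore prevents.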
Q44. Write short notes on,
(i) Priority inversion
(ii) Priority inheritance.
Answer :
(i) Priority Inversion
The priority inversion problem arises when a resource is shared by two or more tasks. In this case, a situation arises where a higher priority task has to wait till a lower priority task is executed; the low and high priority tasks are effectively inverted.

The priority inversion problem can be better explained with the help of the following example.

Consider three tasks in a system, with task 1 having the highest priority, task 2 medium priority and task 3 the least priority. Initially, assume task 3 is in the running state and tasks 2 and 1 are waiting. Consider the figure below,

[Figure: Task 1 has priority 1 (highest), Task 2 has priority 2 and Task 3 has priority 3 (lowest).]
Q45. Describe the dining philosophers problem and its solution using semaphores.

Answer :

Consider the following situation: there are five philosophers who have only two jobs in this world, to think and to eat. Each philosopher sits on one chair laid around a circular table. There is a plate of noodles placed in the center and five single chopsticks are placed on the table, as shown in the below figure,
[Figure: five philosophers P1–P5 seated around a circular table with a plate of noodles in the centre and a chopstick between each pair of neighbours.]
The problem is to ensure that all philosophers can peacefully think and eat, that no philosopher starves of hunger (i.e., no starvation), and that each chopstick is used in a mutually exclusive manner.

For solving this problem using semaphores, each chopstick is represented as a semaphore. Hence, we have an array of five chopstick semaphores. To take a particular chopstick, a philosopher executes a wait( ) operation on that semaphore, and while putting down a chopstick, it executes a signal( ) operation. Consider the following pseudocode for a philosopher 'X',
semaphore chopstk[5];

while (1)
{
    wait(chopstk[x]);
    wait(chopstk[(x + 1) % 5]);
    /* critical section: perform eating */
    signal(chopstk[x]);
    signal(chopstk[(x + 1) % 5]);
    /* remainder section: perform thinking */
}
The above solution may lead to deadlock: consider a situation where all philosophers have grabbed their left chopsticks; now everybody tries to grab the right chopstick but is delayed forever. There are several other solutions to the dining philosophers problem which are deadlock free. One technique is to remove the 'hold-and-wait' condition: no philosopher is allowed to hold a chopstick and wait for another, so it grabs chopsticks only when both are available. Another solution is to use an asymmetric order, in which each even philosopher (P2 or P4) picks up the right chopstick first and then the left, while each odd philosopher (P1, P3, P5) takes the left chopstick first and then the right, as sketched below.
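The following sketch illustrates the asymmetric ordering just described. It is an illustration under stated assumptions rather than the book's own code: it uses POSIX semaphores for the chopsticks and 0-based philosopher indices, with even-indexed philosophers taking the right chopstick first,

#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define N 5
sem_t chopstk[N];   /* one binary semaphore per chopstick */

void *philosopher(void *arg)
{
    int x = *(int *)arg;
    int left = x, right = (x + 1) % N;
    /* asymmetric order breaks the circular wait */
    int first  = (x % 2 == 0) ? right : left;
    int second = (x % 2 == 0) ? left  : right;

    for (;;) {
        sem_wait(&chopstk[first]);
        sem_wait(&chopstk[second]);
        /* critical section: eat */
        sem_post(&chopstk[second]);
        sem_post(&chopstk[first]);
        sleep(1);   /* remainder section: think */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstk[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);   /* philosophers run forever */
    return 0;
}

Because the chopsticks are no longer requested in a single circular order, the four deadlock conditions can never hold simultaneously.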
1.4.4 Monitors

Q46. What is meant by a monitor? How is it different from a semaphore? Also explain the various operations used in a monitor.
Answer :
Monitor
A monitor is a construct in a programming language which consists of procedures, variables and data structures. Any
process can call the monitor procedures but access to the internal data structures is restricted. At any time, a monitor contains
only one active procedure.
or
A monitor refers to the software module which consists of one or more procedures, an initialization sequence and local
data.
Characteristics
1. Access to the local variables can be granted only to the monitor’s procedures but not to any external procedure.
2. Any process can be allowed to enter into the monitor by invoking one of its procedures.
3. At any time only one process can execute inside the monitor; if any other process invokes a monitor procedure during that time, it is blocked till the monitor becomes available.

A monitor makes use of condition variables for providing synchronization. They are operated on using two functions,
1. cwait(c)
Upon executing this function, a calling process is suspended and the monitor becomes available for use by any other
process.
2. csignal(c)
One of the blocked processes resumes its execution upon executing this function.
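C itself has no monitor construct, but the effect of one process inside the monitor at a time, plus cwait/csignal, is commonly approximated with a POSIX mutex and condition variable. The sketch below is such an approximation (the one-slot buffer and all names are assumptions of this illustration, not code from the text),

#include <pthread.h>

/* A tiny "monitor": its data may only be touched while holding the lock. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
static int slot, full = 0;

void deposit(int v)                 /* monitor procedure */
{
    pthread_mutex_lock(&lock);      /* enter the monitor */
    slot = v;
    full = 1;
    pthread_cond_signal(&nonempty); /* csignal(c): wake one waiter */
    pthread_mutex_unlock(&lock);    /* leave the monitor */
}

int fetch(void)                     /* monitor procedure */
{
    pthread_mutex_lock(&lock);
    while (!full)                   /* cwait(c): suspend, releasing the monitor */
        pthread_cond_wait(&nonempty, &lock);
    full = 0;
    pthread_mutex_unlock(&lock);
    return slot;
}

Note that pthread condition variables follow signal-and-continue semantics, so the awakened process re-checks its condition in a while loop; this differs from the Hoare-style csignal( ) described above, where the released process runs immediately.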
Comparison between a Semaphore and a Monitor

Semaphore
1. Semaphores can be used anywhere within the program, but cannot be used inside a monitor.
2. The caller is not always blocked when the wait( ) operation is executed.
3. signal( ) releases a blocked thread, if one exists, or else increases the semaphore count value.
4. Upon releasing a blocked thread by the signal( ) function, both the caller and the released thread can continue with their executions.

Monitor
1. A monitor makes use of condition variables, which can be used only inside it.
2. The wait( ) function always blocks the caller.
3. signal( ) releases a blocked thread, if one exists, or else it is lost as if it never occurred.
4. Upon releasing a blocked thread by the signal( ) function, only one among the caller and the released thread can continue, but not both.
if (nxt_count > 0)
    signal(sem_nxt);
else
    signal(sem_mutex);

Thus, mutual exclusion within a monitor is achieved.

To implement the wait( ) and signal( ) operations for a condition 'cn', create a semaphore 'sem_cn' and an integer variable named 'cn_count', both with initial value zero. The operation cn.wait( ) is implemented as,

wait( )
{
    cn_count++;
    if (nxt_count > 0)
        signal(sem_nxt);
    else
        signal(sem_mutex);
    wait(sem_cn);
    cn_count--;
}

The operation cn.signal( ) is implemented as,

signal( )
{
    if (cn_count > 0)
    {
        nxt_count++;
        signal(sem_cn);
        wait(sem_nxt);
        nxt_count--;
    }
}

A monitor is a construct that provides a simple mechanism to implement mutual exclusion, whereas semaphores provide a very complex way of implementing it. This is because the semWait( ) and semSignal( ) operations of semaphores are usually scattered throughout the program, which makes implementation difficult.
Monitors are high-level constructs (usually programming-language constructs) that provide flexibility in writing correct programs. On the other hand, semaphores demand strict sequencing.

Monitors are shared objects that may have many entry points (condition variables), but only one process can be inside the monitor at a time, hence maintaining mutual exclusion.
Q49. How do you resume a process within a monitor?

Answer :

When multiple processes are suspended on a single condition (cn), a signal operation cn.signal( ) leads to confusion in selecting the process to be resumed among these suspended processes. The simplest solution available for this problem is the FIFO approach but, in most situations, this solution is not effective. Therefore a different approach is used, the 'conditional-wait' construct, which is of the form cn.wait(x);

In this format, the integer expression 'x' is computed when the wait( ) operation is executed and is referred to as the priority number. It is stored with the name of its associated process, which is in the suspended state. The process with the smallest priority number is resumed first, immediately after the execution of a cn.signal( ) operation.

For instance, consider a monitor for allocating a single resource using a resource allocator 'ResAlloc'. It allocates the resource based on the maximum time a process needs to use the resource; hence, the process declaring the shortest time is allocated first. The corresponding pseudocode is,
Monitor ResAlloc
{
    boolean busy;
    condition cn;

    void grab(int t)
    {
        if (busy)
            cn.wait(t);
        busy = true;
    }

    void release( )
    {
        busy = false;
        cn.signal( );
    }

    initialize_code( )
    {
        busy = false;
    }
}
In the above code ‘t’ represents time
However, there exist certain problems in using this method. These includes,
A resource might be accessed without getting permitted.
A resource might be acquired forever once accessed.
A resource which is never requested, a process might try to release it.
An already acquired resource might be requested by the same process.
Internal Assessment

Objective Type

I. Multiple Choice Questions
1. The memory that is placed between CPU and main memory is ________. [ ]
9. _________ requires that a process should remain in its critical section only for a finite amount of time. [ ]

II. Fill in the Blanks
2. A _________ operating system is one where rigid time requirements are placed on the processor.
3. _________ system refers to small portable devices that can be carried along and are usually battery powered.
5. _______ was the first system that was not written in assembly language.
9. _________ refers to a situation wherein processes wait indefinitely for being scheduled.
10. A semaphore whose value can be either '0' or '1' is known as _________.

Key

I. Multiple Choice Questions
1. (a) 2. (a) 3. (b) 4. (b) 5. (d)

II. Fill in the Blanks
2. Real-time
3. Hand-held
6. Mach
8. System call
9. Starvation
UNIT-2: CPU Scheduling and Deadlocks

Learning Objectives

After studying this unit, a student will have thorough knowledge about the following key concepts,
CPU Scheduling Concepts and Algorithms, Deadlocks, Methods for Handling Deadlocks, Deadlock Prevention, Avoidance, Detection and Recovery.
Introduction
In multiprogramming, there are several processes running simultaneously in various queues. These queues are managed using schedulers such as the long-term, short-term and medium-term schedulers (LTS, STS and MTS). Decisions with respect to the allocation of the CPU are made by scheduling algorithms, including FCFS, SJF, round robin, multilevel queue and multilevel feedback queue scheduling. During this allocation, a situation might arise in which the requested resources are held by other waiting processes. This is called deadlock. There are various techniques for preventing, avoiding, detecting and recovering from deadlock.
Part-A
Short Questions with Solutions
Q1. What are short-term, long-term and medium-term schedulers?

Answer : Model Paper-I, Q3

FCFS stands for First Come First Served. The typical use of FCFS in an OS is to serve the processes waiting in a ready queue to be allocated the CPU. Using FCFS, a newly arrived process is placed at the end of the queue, and the process at the head of the queue is allocated the CPU first.
Deadlock

A deadlock is a situation in which a process waits indefinitely for requested resources that are held by other processes which are themselves waiting. This prevents the process from ever changing its state, and such a situation is called a deadlock.

Example

Two trains travelling in opposite directions towards each other on a single railway track.
Answer :

There exists a list of waiting processes (P0, P1, ..., Pn) such that process P0 is waiting for a resource currently in use by process P1, P1 is waiting for a resource that is held by P2, P2 is waiting for a resource that is held by P3, and so on. Finally, process Pn is waiting for a resource held by P0.

In other words, each of the n resources is held by one of the n processes and each process waits for unavailable units of resource types held by another process. This type of waiting is referred to as circular wait.

The three overall strategies in handling deadlocks are,
(i) Avoiding the occurrence of a deadlock by using various deadlock prevention and avoidance techniques.
(ii) In case a deadlock occurs in the system, applying different detection and recovery techniques.
(iii) Using no method to detect, recover, prevent or avoid the deadlock; the deadlock is simply ignored.
Q8. List three examples of deadlocks that are not related to a computer system environment.
Answer :
The following are real-world examples of deadlocks that are not related to a computer system environment.
(i) Two cars crossing a bridge from opposite directions, where the bridge has the capacity for only one car to cross at a time.
(ii) Two persons on a single ladder, one climbing up and the other going down.
(iii) Two trains travelling towards each other in opposite directions on a single railway track.
Q9. What is safe state in deadlocks?
Answer : Model Paper-II, Q4
Consider a system consisting of several processes <P1, P2, P3, ..., Pn>. Each of them requires certain resources immediately and also specifies the maximum number of resources that it may need in its lifetime. Using this information, a "safe sequence" is constructed. A safe sequence is a sequence of processes whose resource requests can be satisfied without a deadlock occurring. If any such safe sequence exists, then the system is said to be in a "safe state", during which deadlock cannot occur. An unsafe state may lead to deadlock, but not always: only some sequences in an unsafe state lead to deadlock.

[Figure: the set of deadlock states is contained within the set of unsafe states; safe states lie outside the unsafe region.]

Consider the following example graph consisting of two processes (P1 and P2) and two resources (R1 and R2), such that P2 holds resource R2 and requests R1, while P1 holds resource R1 and may claim R2 in the future. This can create a cycle in the graph, which means deadlock is possible and the system is in an unsafe state. Hence, the allocation should not be done.

[Figure: resource-allocation graph with processes P1, P2 and resources R1, R2 forming a potential cycle.]
Part-B

Essay Questions with Solutions

2.1 CPU Scheduling

2.1.1 Concepts
Q11. Write a short note on,
(i) Program
(ii) Jobs
(iii) Job scheduling.
Answer :
(i) Program
'Program' refers to a collection of instructions given to the system in some programming language. Alternatively, a program is a static object residing in a file; its life span is unlimited, and it can exist at a single place in space. In contrast to a process, a program is a passive entity. It consists of different types of instructions, such as arithmetic instructions, memory instructions and input/output instructions.
(ii) Jobs
A job is a sequence of programs used to perform a particular task. Typically a job is carried out in various steps where
each step depends on the successful execution of its preceding step. It is usually used in a non-interactive environment.
Example

In the job of executing a C program, a sequence of tasks is involved: compiling, linking and executing the program. Here, linking depends on the successful completion of compiling, and executing depends on the successful completion of linking.
(iii) Job Scheduling

Job scheduling is also called long-term scheduling; it is responsible for selecting a job from disk and transferring it into main memory for execution. It also decides which process is to be selected for processing. Compared with the short-term scheduler, it executes much less frequently.

One of the major functions of the job scheduler is to control the degree of multiprogramming. This is because, if the number of processes in the ready queue (or memory) becomes high, it imposes an overhead on the operating system: maintaining long lists, context switching and dispatching become costly. Therefore, the job scheduler allows only a limited number of processes into memory. The selection of processes for execution in job scheduling is independent of time.
Some operating systems, such as UNIX and Windows, do not use a long-term scheduler. These systems simply insert each new process into the ready queue and use a short-term scheduler for selecting processes for execution. This approach is mainly used in time-sharing systems.
Q12. Explain various scheduling concepts.
Answer : Model Paper-I, Q10(a)
Scheduling

Scheduling is defined as the activity of deciding when processes will receive the resources they request. There exist several scheduling algorithms, some of which are as follows,

1. First Come First Served (FCFS) Scheduling

This algorithm allots the CPU to the process that requests it first from the ready queue. It is considered the simplest algorithm as it works on the FIFO (First In First Out) approach. When a new process requests the CPU, it is attached to the tail of the ready queue, and when the CPU is free, it is allotted to the process located at the head of the queue.
One of the difficulties associated with FCFS is that the average waiting time can be quite long. For instance, consider a set of three processes P1, P2, P3 whose CPU burst times are given below,

Process    Burst Time (ms)
P1         24
P2         3
P3         3

If the sequence of arrival is P1, P2, P3 then we get the following Gantt chart,

P1 | P2 | P3
0       24   27   30

So,
Waiting time for process P1 = 0 ms
Waiting time for process P2 = 24 ms
Waiting time for process P3 = 27 ms
Average waiting time = (0 + 24 + 27) / 3 = 51 / 3 = 17 ms

If the sequence of arrival is P2, P3, P1, then we get the following Gantt chart,

P2 | P3 | P1
0      3    6    30

Waiting times for P1, P2, P3 are now 6 ms, 0 ms and 3 ms respectively, and the average waiting time is (6 + 0 + 3) / 3 = 3 ms. So, the average waiting time varies with the order in which processes with different CPU-burst times arrive.
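Since under FCFS each process simply waits for the sum of the bursts queued ahead of it, the two averages above can be checked with a few lines of code. An illustrative C sketch (the function and variable names are assumptions of this example),

#include <stdio.h>

/* FCFS: waiting time of each process = sum of the bursts ahead of it. */
double fcfs_avg_wait(const int burst[], int n)
{
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i waits for everything before it */
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}

int main(void)
{
    int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
    int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */
    printf("%.0f ms\n", fcfs_avg_wait(order1, 3));   /* prints 17 ms */
    printf("%.0f ms\n", fcfs_avg_wait(order2, 3));   /* prints 3 ms */
    return 0;
}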
Another difficulty with FCFS is that it tends to favour CPU-bound processes over I/O-bound processes. Consider a collection of processes, one of which mostly uses the CPU and a number of which mostly use I/O devices.

When the CPU-bound process is running, all the I/O-bound processes must wait, which causes the I/O devices to be idle. After finishing its CPU operation, the CPU-bound process moves to an I/O device. Now all the I/O-bound processes, having very short CPU bursts, execute quickly and move back to the I/O queues, causing the CPU to sit idle. In this way FCFS may result in inefficient use of both the processor and the I/O devices.
Once the CPU has been allocated to a process, the process does not release the CPU until it terminates or switches to the waiting state. So, this algorithm is non-preemptive. It is unsuitable for time-sharing systems, in which each user gets the CPU for a regular time slice.
2. Shortest Job First (SJF) Scheduling
This algorithm schedules processes by their CPU burst times, which means the process with the least CPU burst time is processed first, before other processes. If two processes have the same burst time then they are scheduled using FCFS. This is also called 'shortest next CPU burst' scheduling.
Consider the following example,
Process Burst Time (ms)
P1 6
P2 8
P3 7
P4 3
P4 | P1 | P3 | P2
0      3    9    16    24

The waiting times for P1, P2, P3 and P4 are 3 ms, 16 ms, 9 ms and 0 ms respectively, giving an average waiting time of (3 + 16 + 9 + 0) / 4 = 7 ms.
P2 P5 P1 P3 P4
0 1 6 16 18 19
3. Round Robin (RR) Scheduling

Let us assume the time quantum is 5 ms. In this case, P1 is first allocated the CPU for 5 ms and is then sent to the tail of the queue; P1 requires another 20 ms to complete its execution. Now the CPU is allocated to P2, which releases it after 4 ms, because it needed only 4 ms to complete, and hence quits before the expiration of its time slice. Next the CPU is allocated to P3, which also requires only 4 ms and likewise quits before expiration. Now the Gantt chart will be as follows,
P1 | P2 | P3 | P1 | P1 | P1 | P1
0      5    9    13   18   23   28   33
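Such a Gantt chart can be verified by simulating the ready queue directly. The following illustrative C sketch (assuming the 5 ms quantum and the burst times of this example) prints each dispatch,

#include <stdio.h>

#define QUANTUM 5

int main(void)
{
    /* remaining burst times for P1, P2, P3 from the example above */
    int rem[] = {25, 4, 4};
    int queue[64], head = 0, tail = 0, clock = 0;

    for (int i = 0; i < 3; i++) queue[tail++] = i;  /* initial ready queue */

    while (head < tail) {
        int p = queue[head++];                      /* dispatch front process */
        int run = rem[p] < QUANTUM ? rem[p] : QUANTUM;
        printf("t=%2d: P%d runs %d ms\n", clock, p + 1, run);
        clock += run;
        rem[p] -= run;
        if (rem[p] > 0) queue[tail++] = p;          /* requeue if unfinished */
    }
    printf("all done at t=%d\n", clock);            /* 33 ms, matching the chart */
    return 0;
}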
[Figure: Multilevel Queue Scheduling — separate ready queues for system processes, interactive processes, batch processes and user processes, ordered from highest to lowest priority.]
If a higher priority queue has some processes waiting for the CPU, then lower priority processes cannot execute until all higher queues are empty. Suppose that at time instant t0 all higher queues are empty and a user process starts executing; if at t1 a system process enters its queue, the user process is preempted in order to execute the higher priority system process.

Another scheme is to use time slicing among the queues. For example, 80% of the CPU time can be given to foreground processes, with time slicing applied within that queue, and the remaining 20% of the CPU time can be given to background processes, with FCFS applied within its queue.
Answer :

In a multilevel feedback queue scheduling algorithm, processes can be moved between the various queues. This movement of processes is performed by considering various factors, such as,
v Moving processes from a higher priority queue to a lower priority queue if they are time consuming.
v Moving processes from a lower priority queue to a higher priority queue if they have been waiting to be executed for a long period of time.
Example

Consider four queues that are maintained by a multilevel feedback queue scheduler, as shown in the figure.

[Figure: Multilevel Feedback Queue Scheduling — four queues, Queue-0 (highest priority) through Queue-3; the last queue is served FCFS.]
Here, execution starts at queue-0, which carries the highest priority processes, and when it becomes empty, execution moves to queue-1, and so on. In case a higher priority process arrives while a lower priority process is executing, the lower priority process is preempted and the CPU is allotted to the higher priority process. The processes of queue-0 are each given a time slice of 8 ms. If they fail to complete within 8 ms, they are placed at the end of the next queue, i.e., queue-1. Similarly, for queue-1 the time slice for each process is 16 ms, and if they fail to complete, they are placed at the end of queue-2, and so on. The last queue in this algorithm works on a First Come First Served (FCFS) basis.

Processes are placed in queues based on their CPU burst times. Various parameters associated with this scheduler include the following,
v The number of queues
v The method used to determine when to move a process from a lower to a higher priority queue
v The method used to determine when to move a process from a higher priority to a lower priority queue
v The method used to determine which queue a particular process should execute in.
2.2 Deadlocks

Deadlock avoidance does not impose any rules; instead, each resource request is carefully analyzed to check whether it can be safely fulfilled without causing deadlock. The drawback of this scheme is that it requires information about the requested resources in advance. Different algorithms require different types and amounts of information; for example, some require the maximum number of resources that each process may need.
The following are the various deadlock avoidance algorithms,

1. Safe State

Consider a system consisting of several processes <P1, P2, P3, ..., Pn>. Each of them requires certain resources immediately and also specifies the maximum number of resources that it may need in its lifetime. Using this information, a "safe sequence" is constructed. A safe sequence is a sequence of processes whose resource requests can be satisfied without a deadlock occurring. If any such safe sequence exists, then the system is said to be in a "safe state", during which deadlock cannot occur. An unsafe state may lead to deadlock, but not always: only some sequences in an unsafe state lead to deadlock.

[Figure: the set of deadlock states is contained within the set of unsafe states; safe states lie outside the unsafe region.]
A process must specify in the beginning the maximum number of instances of each resource type it may require. It is obvious that this number should not be more than the number available. When a process requests resources, the system decides whether the allocation will leave the system in a safe state. If so, the resources are allocated; otherwise the process has to wait.

The following are the various data structures which have to be created to implement the Banker's algorithm, where n = number of processes and m = number of resource types.

(a) Max
An n × m matrix indicating the maximum resources required by each process.

(b) Allocation
An n × m matrix indicating the number of resources already allocated to each process.

(c) Need
An n × m matrix indicating the number of resources still required by each process.

(d) Available
A vector of size m which indicates the resources that are still available (not allocated to any process).

(e) Request
A vector of size m which indicates the resources that process Pi has currently requested.

Each row of the matrices "Allocation" and "Need" can be treated as a vector. Then "Allocation_i" indicates the resources currently allocated to process Pi and "Need_i" refers to the resources still required by Pi. When a request by Pi is granted, the state is updated as,

Allocation_i := Allocation_i + Request_i
Need_i := Need_i – Request_i

Safety Algorithm

The job of the banker's algorithm is to perform the allocation, without considering whether this allocation has resulted in a safe or unsafe state. It is the safety algorithm, called immediately after the banker's algorithm, which checks the state of the system after the allocation. The following safety algorithm requires m × n² operations to determine the system state.

Step 1
Assume Work and Finish are vectors of length m and n respectively. Initialize,
Work := Available
Finish[i] := false, for all i.

Step 2
Find an i such that,
Finish[i] = false
Need_i ≤ Work
If no such i is found, jump to step 4.

Step 3
Work := Work + Allocation_i
Finish[i] := true
Jump to step 2.

Step 4
If Finish[i] = true for all i, then the system is in a safe state.
If neither of the above two techniques is applied, then a deadlock may occur and the system must provide,
(i) An algorithm that can monitor the state of the system to detect the occurrence of a deadlock.
(ii) A recovery algorithm to recover from the deadlock state.

Deadlock Detection in a System Containing a Single Instance of each Resource Type

A deadlock detection algorithm that makes use of a variant of the resource allocation graph (called the wait-for graph) is defined for a system containing only a single instance of each resource.

An edge from a node Pi to a node Pj exists in a wait-for graph if, and only if, the corresponding RAG contains two edges, one from node Pi to some resource node Ra and the other from the resource Ra to node Pj. The presence of a cycle in the wait-for graph indicates the existence of a deadlock.

An algorithm used to detect a cycle in the graph requires a total of n² operations, where n is the number of vertices in the graph.

Example

Consider five processes P1, P2, P3, P4 and P5 and five resources R1 to R5. The Resource Allocation Graph (RAG) for such a system is shown in the following figure,

[Figure: resource allocation graph over processes P1–P5 and resources R1–R5.]

Deadlock Detection in a System Containing Multiple Instances of a Resource Type

The wait-for graph cannot be used for a resource allocation system containing multiple instances of each resource type. Hence, a different algorithm is employed, which uses certain data structures.

The data structures in the algorithm are,

Available
A vector of length m that specifies the number of resources of each type that are available.

Allocation
An n × m matrix used to define the number of resources of each type presently assigned to each process in the system.

Request
An n × m matrix which specifies the current request made by each process. If Request[i, j] = k, then process i is currently requesting k additional instances of resource j.

The deadlock detection algorithm, which investigates every possible resource allocation sequence, is given below. Consider two vectors, Work and Finish, whose lengths are m and n respectively.

1. Initialize Work = Available.
2. Examine the allocation of each i, where i = 1, 2, ..., n. If Allocation_i ≠ 0 then set Finish[i] to false; else Finish[i] is assigned the value true.
3. Determine an index i for which both Finish[i] = false and Request_i ≤ Work. If no such i is available, then jump directly to step 5.
4. Set Work = Work + Allocation_i and Finish[i] = true, and repeat step 3.
5. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state and the process Pi is deadlocked.
Answer :

An algorithm for deadlock detection makes use of an 'allocation matrix', which describes the current resource allocation, and an 'available vector', which describes the amount of each resource not allocated to any process. In addition to the allocation matrix and the available vector, a request matrix Q is defined in such a way that Qij specifies the amount of resources of type j requested by process i.

The processes that are not in a deadlocked state are marked. All the processes are unmarked at the beginning of the algorithm. The execution proceeds as follows,
1. Each process having a row of all zeroes in the allocation matrix is marked.
2. Initialize a temporary vector W to equal the available vector.
3. Determine an index i such that process i is currently unmarked and the ith row of Q is less than or equal to W, i.e., Qik ≤ Wk for 1 ≤ k ≤ m. Stop the algorithm if no such row is found.
4. After finding such a row, mark process i and add the associated row of the allocation matrix to W, i.e., set Wk = Wk + Aik, for 1 ≤ k ≤ m. Go back to step 3.

After executing the algorithm, if any unmarked processes are present then a deadlock exists. This algorithm finds a process whose requests for resources can be satisfied with the available resources; it is then assumed that those resources are allocated to it and that the process completes its execution, thereby releasing all its resources. Another such process is then looked up by the algorithm and the whole procedure is repeated. This algorithm does not guarantee deadlock prevention; instead it determines whether a deadlock exists.
Example

Consider the given request and allocation matrices, along with the resource and available vectors.

Request matrix Q           Allocation matrix A
    R1 R2 R3 R4 R5             R1 R2 R3 R4 R5
P1   0  1  0  0  1         P1   1  0  1  1  0
P2   0  0  1  0  1         P2   1  1  0  0  0
P3   0  0  0  0  1         P3   0  0  0  1  0
P4   1  0  1  0  1         P4   0  0  0  0  0

Resource vector: (2 1 1 2 1)
Available vector: (0 0 0 0 1)

When the deadlock detection algorithm is applied, it proceeds as follows,
1. Process P4 is marked, since its row in the allocation matrix contains all zeroes.
2. Set W = (0 0 0 0 1).
3. As the request made by process P3 is less than or equal to W, P3 is marked and W is set to,
W = W + (0 0 0 1 0)
W = (0 0 0 1 1).
4. As no other unmarked process has a row in Q which is less than or equal to W, the algorithm terminates.

The algorithm execution ends leaving the processes P1 and P2 in an unmarked state. Hence, these processes are deadlocked.
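The marking algorithm and this example can be re-checked mechanically. The following illustrative C sketch hard-codes the matrices from the example above and reports the unmarked, i.e. deadlocked, processes,

#include <stdbool.h>
#include <stdio.h>

#define N 4   /* processes P1..P4 */
#define M 5   /* resources R1..R5 */

int A[N][M] = { {1,0,1,1,0}, {1,1,0,0,0}, {0,0,0,1,0}, {0,0,0,0,0} }; /* allocation */
int Q[N][M] = { {0,1,0,0,1}, {0,0,1,0,1}, {0,0,0,0,1}, {1,0,1,0,1} }; /* request */
int W[M]    = { 0,0,0,0,1 };                                          /* available */

int main(void)
{
    bool marked[N] = {false};

    /* Step 1: mark processes whose allocation row is all zero. */
    for (int i = 0; i < N; i++) {
        bool zero = true;
        for (int k = 0; k < M; k++) if (A[i][k]) zero = false;
        if (zero) marked[i] = true;
    }
    /* Steps 3-4: repeatedly mark a process whose request fits within W. */
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (marked[i]) continue;
            bool fits = true;
            for (int k = 0; k < M; k++) if (Q[i][k] > W[k]) fits = false;
            if (fits) {
                for (int k = 0; k < M; k++) W[k] += A[i][k];
                marked[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!marked[i]) printf("P%d is deadlocked\n", i + 1);  /* P1 and P2 */
    return 0;
}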
Q26. Discuss the usage of the deadlock detection algorithm.

Answer :

The usage of the deadlock detection algorithm depends on the following factors,
(i) How frequently deadlocks occur
(ii) The effect of a deadlock on other processes.

Invocation of the deadlock detection algorithm depends heavily on how frequently deadlocks occur. If deadlocks occur frequently, the algorithm must be invoked equally frequently. This is because when a process is affected by deadlock, its allocated resources cannot be released and hence cannot be used by other processes; in addition, the number of processes in the ready queue might increase.

In the second case, the detection algorithm is invoked every time a process requests a resource and the request cannot be granted immediately. By implementing this, it is possible to identify the process that caused the deadlock together with the set of processes associated with it. However, running the algorithm on every resource request incurs considerable computation overhead.

One method to overcome this drawback is to invoke the detection algorithm at certain regular (custom) time intervals (for example, every 30 minutes). However, with this method it is difficult to identify the process that caused a deadlock. This is because, within that time interval, the resource graph might come to contain many deadlock cycles.

2.2.7 Recovery From Deadlock

Q27. Write about recovery from deadlock.

Answer :

Recovery from Deadlock

The deadlocks detected in the system by means of deadlock detection algorithms need to be recovered from by using some recovery mechanism. The brute-force approach is to reboot the computer, but this is inefficient because it may lose data and waste computing time. Hence, other techniques are used to recover from deadlock. They are broadly classified into two types,
1. Process termination
2. Resource preemption.

1. Process Termination

In this method, one or more processes are terminated to eliminate the deadlock. It can be done in two ways,
(i) Terminate all deadlocked processes. This breaks the deadlock immediately, but it is expensive, because some of the processes may have been executing for a long time, consuming considerable CPU time, and their termination wastes those CPU cycles.
(ii) To overcome the drawback of the above method, terminate one process at a time until the deadlock is broken. However, this has some overhead, since after terminating each process a detection algorithm has to be executed to examine whether any processes are still deadlocked. This method is slower than the first one.

2. Resource Preemption

In this method, resources are deallocated or preempted from some processes and allocated to others until the deadlock is resolved. The three important issues in implementing this scheme are as follows,

(i) Selection of a Victim Process
It is necessary to decide which process or which resources are to be preempted. The decision is based on a cost factor, which includes the number of resources a deadlocked process is holding, the CPU time consumed by it, etc.

(ii) Rollback
The process which was preempted cannot continue normal execution, because its resources have been taken back. Hence, it must be rolled back to some previous checkpoint, or totally rolled back to start from the beginning.

(iii) Starvation
It is necessary to ensure that a particular process is not starved by being chosen as the victim every time preemption is done.
Internal Assessment

Objective Type

I. Multiple Choice Questions
1. Mutual exclusion can be applied to _________. [ ]
4. The technique in which I/O device addresses are part of the memory address space is _________. [ ]
6. _________ refers to the number of processes that are completed per unit time. [ ]
7. _________ is the amount of time for which the CPU is kept busy executing processes. [ ]
10. Deadlock cannot occur when the processes are in _________ state. [ ]

II. Fill in the Blanks
2. _________ refers to the use of a resource by only one process at any time.
3. If there exists at least one resource allocation sequence that does not lead to deadlock, then a system is in _________ state.
4. To prevent _________, an ordering is imposed on all resource types, i.e., a unique positive integer is assigned to each resource.
7. The __________ can't be used for a resource allocation system containing multiple instances of each resource type.
9. __________ allots the CPU to the process that requests it first from the ready queue.
10. The time taken by the dispatcher to switch the CPU from one process to another is called __________.

Key

I. Multiple Choice Questions
1. (b) 2. (d) 3. (c) 4. (c) 5. (c)
UNIT-3: Main and Virtual Memory, Mass-Storage Structure, File Systems and Implementation
Learning Objectives
After studying this unit, a student will have thorough knowledge about the following key concepts,
File System Implementation, Directory Implementation, Allocation Methods and Free Space Management.
Introduction

A popular non-contiguous allocation scheme is paging, with which memory can be divided into fixed-sized blocks. To divide the memory into unequal-sized blocks, segmentation techniques are used. There are certain page replacement algorithms with which page frames can be swapped in and out.

A file can be defined as a group of similar records or related information stored together in secondary memory. Both the data as well as the programs of all users are stored in files. The operations that can be performed on files are creating a file, writing to a file, reading from a file, repositioning within a file, deleting and truncating a file. A file can be accessed in many ways; the most common access methods are sequential access, direct access and indexed access. Unauthorized access can be prevented using protection mechanisms. A collection of files is known as a directory. The common schemes to define its structure are single-level, two-level, tree-structured, acyclic-graph and general graph directories.
Part-A

Short Questions with Solutions
Q1. Write the differences between logical and physical address space.
Answer : Model Paper-I, Q5
Logical Address
1. Logical address is also called virtual address or relative address. It is used in virtual memory.
2. Logical address space is divided into small parts called 'pages'.
3. A logical address is a reference to a memory location that is relative to the program.
4. The set of all logical addresses generated by a program is a logical address space.

Physical Address
1. Physical address is also called absolute address. It is used in main memory.
2. Physical address space is divided into small parts called 'frames'.
3. A physical address refers to the absolute location of data in main memory.
4. The set of all physical addresses mapping to their respective logical addresses is a physical address space.
Page

A page refers to a fixed-sized block of logical memory.

Frame

A frame refers to a fixed-sized block of physical memory.

Paging divides the physical memory into fixed-sized blocks called frames and the logical memory into pages. A page and a frame are of the same size, because one logical page fits exactly into one frame (block). The execution of a program with 'n' pages requires 'n' free frames to be available in the physical memory, where each page is loaded into a free frame. The information about the allocation of frames to the various pages is tracked by maintaining a table called the page table. Translation uses the page number and page offset, and the CPU consults the page table to translate pages into frames.
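The page-table lookup amounts to splitting the logical address into a page number and an offset and substituting the frame number. An illustrative C sketch (the page size and table contents are assumptions of this example),

#include <stdio.h>

#define PAGE_SIZE 4096                    /* assumed page/frame size in bytes */

int page_table[] = {5, 2, 7, 0};          /* page -> frame (example mapping) */

/* Translate a logical address into a physical address. */
unsigned translate(unsigned logical)
{
    unsigned page   = logical / PAGE_SIZE;  /* page number */
    unsigned offset = logical % PAGE_SIZE;  /* page offset */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void)
{
    unsigned la = 1 * PAGE_SIZE + 100;      /* page 1, offset 100 */
    printf("logical %u -> physical %u\n", la, translate(la)); /* lands in frame 2 */
    return 0;
}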
Q4. Define file management.

Answer : Model Paper-III, Q5

The process of managing files and the operations performed on files is referred to as file management. It is responsible for allocating space to files on disk and for providing data structures that define the information saved on disk, so as to provide quick access to it. Typically, the operating system is responsible for managing the files in a system; it uses a file management system for this purpose.
Q5. List the file operations performed by operating systems.

Answer : Model Paper-I, Q6

v It resolves the problem of collisions occurring between the names of files.
v It provides an effective way by which users can be isolated from each other.
v It efficiently improves the task of searching by employing a Master File Directory (MFD).

Before using a file, it needs to be opened using the 'open' system call in most systems. When this is done, the operating system creates an entry in the open-file table, which is browsed every time a file operation is requested. The responsibility of the open system call is to find the directory which carries the file on which the file operations are to be performed. This can be done by browsing all the directories. Once the file is found, an entry is made in the open-file table. It also considers the file access permissions, such as read-only, read-write etc., and access is granted based on these permissions.

The use of the open system call eliminates the need for searching for files again and again, and simplifies file operations.
The tree-structured directory scheme allows a user to create any number of directories of their own within their User File Directory (UFD). It has a variable number of levels and gives better flexibility in managing files.

A sub-directory is treated as a file. A special bit is used which defines whether an entry is a file (0) or a sub-directory (1). The current directory is normally the directory from which the process is executing, and it carries almost all the files associated with the currently executing process. When a process tries to access a particular file, the file is searched for in the current directory. If it is not present, then the user has to specify the path name of that file or change the current directory to that path, which can be done using a system call. This system call takes the path name as a parameter and redefines the current directory.
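On POSIX systems, that system call is chdir( ). A minimal illustrative sketch (the directory /tmp is an assumption of this example),

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    /* Redefine the current directory; relative file names such as
       "data.txt" are then resolved inside /tmp. */
    if (chdir("/tmp") != 0) {
        perror("chdir");
        return 1;
    }
    if (getcwd(buf, sizeof buf) != NULL)
        printf("current directory: %s\n", buf);
    return 0;
}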
Part-B

Essay Questions with Solutions

3.1 Main Memory

3.1.1 Introduction
Q11. Write in brief about background of memory management strategies.
Answer :
Memory is a central part of the computer system. It consists of a huge array of bytes, each with its own address. The CPU is responsible for fetching instructions from memory based on the contents of the program counter. Additional operations, such as loading from memory and storing to memory, are performed as these instructions demand.

An instruction-execution cycle will initially fetch an instruction from memory. It then decodes the instruction, fetches the operands from memory if necessary, and stores the result back in memory after the execution on the operands is done. The memory unit itself sees only a stream of memory addresses.

Memory can be managed in a number of ways by using memory management strategies such as paging, segmentation etc. Selecting a particular technique for a system is based on various factors, particularly on the system design.

Memory management involves several issues, such as the basic hardware, the binding of symbolic memory addresses to actual physical addresses and the distinction between logical and physical addresses.
Address Binding
A program that resides on a disk is in the form of a binary executable file. For execution, it must be brought into memory and placed within a process. Depending on the memory management in use, the process may move between disk and memory during its execution. The processes on the disk that are waiting to be brought into memory for execution form the input queue. Normally, one process at a time is selected from the input queue and loaded into memory. During execution, the process fetches instructions and data from memory, and when the process terminates, its memory space is freed so that the next process can be brought into memory for execution.

Generally, a user process can be placed in any part of the physical memory. Thus, although the address space of the computer starts at 00000, the first address of the user process need not be 00000. This affects the addresses the user program can use.
Logical Versus Physical Address Space

A logical address is defined as an address generated by the CPU, whereas a physical address is defined as the actual memory address at which the data or instruction resides. The two are identical under the compile-time and load-time address-binding methods, whereas they differ under the execution-time address-binding method.

When the logical and physical addresses differ, the logical address is commonly called a virtual address. The term logical address space refers to the group of logical addresses generated by a program, whereas the physical address space is the group of physical addresses corresponding to those logical addresses.

Mapping can be done by various methods, but the run-time mapping of addresses from logical to physical is carried out by the MMU (Memory Management Unit). The base register used in this case is referred to as the relocation register, because its value is added to every logical address at the time the address is sent to memory. A typical MS-DOS operating system carries four registers of this kind.

As the user program always uses logical addresses, it never sees the real physical addresses. The program may create a pointer, store it and compare it with other addresses, all in terms of logical addresses; only when a logical address is actually used to access memory is it mapped to a physical address.

In the case of memory mapping, the conversion of logical addresses to physical addresses is done through hardware. User programs deal with logical addresses only, but these must be mapped to physical addresses before memory is accessed.
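The relocation-and-limit check performed by the MMU can be expressed in a few lines. An illustrative C sketch (the register values are assumptions of this example),

#include <stdio.h>

unsigned limit_reg      = 16384;   /* size of the process's logical space */
unsigned relocation_reg = 300040;  /* base address where the process is loaded */

/* MMU-style mapping: trap if the logical address is out of range,
   otherwise add the relocation register to form the physical address. */
int map(unsigned logical, unsigned *physical)
{
    if (logical >= limit_reg)
        return -1;                       /* addressing-error trap */
    *physical = logical + relocation_reg;
    return 0;
}

int main(void)
{
    unsigned pa;
    if (map(1000, &pa) == 0)
        printf("logical 1000 -> physical %u\n", pa);   /* prints 301040 */
    return 0;
}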
Q12. Write short notes on,
(i) Dynamic loading
(ii) Dynamic linking
(iii) Shared libraries.

Answer : Model Paper-I, Q11(a)

(i) Dynamic Loading

Dynamic loading is a method used to utilize memory space efficiently. Since the size of a process is limited by the size of physical memory, there are situations wherein the data required for executing a process needs more space than is available in physical memory. To overcome this space issue, dynamic loading is used. It stores the routines associated with a process in a relocatable format, and a routine is loaded into main memory only when it is called. In case an executing routine wants to call another routine, it first verifies whether the desired routine already exists among the loaded routines. If it already exists, it is executed directly; if not, the relocatable linking loader comes into action and the desired routine is loaded into memory, after which control is passed to the newly loaded routine.

Advantages of Dynamic Loading
v Routines that are not required are never loaded into memory.
v It is useful while handling error routines.
v It does not require any special support from the operating system.

(ii) Dynamic Linking

The concept of dynamic linking is similar to the concept of dynamic loading. The difference is that instead of postponing the loading of routines, it postpones the linking of libraries until they are called. This feature eliminates the requirement of including the language library associated with the program in the executable image.

With the use of dynamic linking, unnecessary wastage of memory and disk space can be avoided. This is done by including a small piece of code called a stub in every library-routine reference. The stub is responsible for pointing out the location of the memory-resident library associated with the called routine. In addition, it is also responsible for checking whether the routine exists in memory and loading it if necessary. When the routine is executed, its address replaces the stub, so from the next execution onwards the routine is executed directly.

Dynamic loading is independent of operating system support, whereas dynamic linking requires help from the operating system for checking the availability of the needed routine in the memory space of other processes, since each process is protected from every other.

(iii) Shared Libraries

While fixing bugs in libraries, there can be two types of modifications, i.e., major and minor. Major modifications, such as changes in the program addresses, typically change (increment) the version number of the library, whereas minor bug fixes do not change it.

When dynamic linking is used, the latest installed version is simply referenced, whereas in the absence of dynamic linking programs need to be relinked. Multiple versions of a library may exist, as there can be programs that use older versions of the library (those installed before the library was updated). This system, in which multiple versions of a library are shared among programs, is known as shared libraries.

3.1.2 Swapping

Q13. Explain about swapping in memory management.

Answer :

Swapping

In a multiprogramming environment, there are several processes that execute concurrently. A process needs to be present in main memory for execution, but main memory is not large enough to hold all active processes. Hence, sometimes processes are swapped out and stored on disk to make space for others, and later they are swapped in to resume execution. This process of swapping-in and swapping-out is called swapping.

There are several reasons to perform swapping. They are given as follows,
v The time quantum of a particular process has expired.
v Some high priority process preempts a particular process.
v An interrupt occurs and makes the process wait.
v The process is put in the wait state for performing some input/output operations.

The processes that are swapped out are kept in a backing store (a disk). The swap space stores the images of all processes. A ready queue is maintained to store pointers to these images. Whenever the dispatcher is free, it takes one process from the ready queue and swaps it into main memory for execution.
[Figure: Swapping — the operating system occupies part of main memory; a process (e.g., P1) is swapped out of the user space to a backing store while another (e.g., P2) is swapped in; a ready queue tracks the process images.]
If a system follows static or load-time binding, then the process is swapped back into the same memory space it used
to occupy earlier. Otherwise, if it follows dynamic or execution-time binding then the process can be swapped back into any
memory space because physical addresses are calculated during run-time.
The limitation of the swapping scheme is that context switching is expensive, i.e., the time required to save all the information regarding a process, like its PCB, data, code and stack segments etc., on the disk is quite high.
Swapping on Mobiles

Mobile systems do not support the concept of swapping. They make use of flash memory instead of hard disks. The reasons for not supporting swapping are the space constraint, the limited number of writes that flash memory can tolerate before it becomes unreliable, and the poor throughput between main memory and flash memory in such devices.

Apple iOS requests applications to voluntarily relinquish allocated memory when free memory falls below a threshold. Read-only data can be removed and reloaded later from flash memory if required. Applications that fail to release memory may be terminated by the operating system.

Android does not support the concept of swapping either, but uses an approach similar to that of iOS: it can kill a process if there is no free memory. Due to such restrictions, developers for mobile systems need to carefully allocate and release memory, so that their applications do not use excessive memory or have memory leaks.
Answer :

The main memory of a computer is divided into two major sections: one contains the Operating System (OS) and the other is left for user processes. The OS is usually placed in the starting locations, or low memory area, and an Interrupt Vector Table (IVT) is stored before the OS.

[Figure: main memory with the OS in low memory and the user space occupying the remaining addresses.]

Memory allocation means bringing the waiting processes from the ready queue into the user space in main memory. When each process is allocated a single contiguous memory section, the scheme is called contiguous memory allocation. There are two variations of this scheme.

For the remaining answer refer Unit-III, Page No. 82, Q.No. 16.
[Figure: hardware support for relocation and limit registers — the logical address from the CPU is compared with the limit register; if it is smaller, the relocation register value is added to it to form the physical address sent to memory, otherwise an addressing-error trap is raised.]

[Figure (1): Example of memory configuration before and after allocation of 11 KB and 9 KB blocks (based on the first fit algorithm) — free holes of 10K, 20K, 15K, 9K, 13K, 8K and 11K; the 11 KB program is placed in the 20K hole (leaving a 9K free block) and the 9 KB program in the 10K hole (leaving a 1K free block).]
Let us consider a linked list containing the following holes in the order specified below,

10 20 15 9 13 8 11

Assume that there are two programs waiting to enter memory, of sizes 11 and 9 respectively. Using first fit, the memory manager allocates the 20K hole to the program of size 11 and the 10K hole to the program of size 9. Although there are holes of sizes 11 and 9 in the linked list, the memory manager will not scan the entire linked list: it starts searching from the beginning of the linked list until it finds a hole whose size is greater than or equal to the program size. Once such a hole is found, the rest of the linked list is not scanned.

In the best fit algorithm, the memory manager searches the entire linked list and takes the smallest hole that is adequate to store the program.

Suppose, for the program of size 11, it scans the entire linked list and allocates the hole of size 11. For the program of size 9, it searches the entire linked list and allocates the hole of size 9.
[Figure (2): Example of memory configuration before and after best-fit allocation of the 11 KB and 9 KB programs — the 11 KB program is placed in the 11K hole and the 9 KB program in the 9K hole, leaving the other holes untouched.]
The best fit algorithm is slow, because every time it is called it scans the entire linked list. However, it results in minimal fragmentation, because it allocates the closest-fitting hole to the program; the first fit algorithm may result in more fragmentation, because it does not scan the entire linked list.

In the worst fit algorithm, the memory manager scans the entire linked list and allocates the largest hole to the program. For the program of size 11, it allocates the hole of size 20, which is the maximum, and for the program of size 9 it allocates the hole of size 15, which is the next largest. Thus, the entire linked list is scanned twice. After an allocation is performed, the remaining part of the hole can be used to allocate another program.
[Figure (3): Example of memory configuration before and after worst-fit allocation — the 11 KB program is placed in the 20K hole (leaving 9K free) and the 9 KB program in the 15K hole (leaving 6K free).]
When best fit searches a list of holes sorted from smallest to largest, then as soon as it finds a hole that fits, it knows that this hole is the smallest one that will do the job; no further searching is needed, unlike with the unsorted single-list scheme. With a hole list sorted by size, first fit and best fit are equally fast, and next fit is pointless.
First fit and best fit are better than worst fit with respect to storage utilization and allocation time. First fit and best fit are similar in terms of storage utilization, but first fit performs allocation faster than the other algorithms.
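The behaviour of the three policies on this hole list can be checked with a small program. The following C sketch is illustrative only: a real memory manager keeps the holes in a linked list and splits the chosen hole, whereas here an array and simple subtraction stand in for that.

    #include <stdio.h>

    #define NHOLES 7

    /* Each function returns the index of the chosen hole, or -1 if none fits. */
    int first_fit(int holes[], int n, int req) {
        for (int i = 0; i < n; i++)
            if (holes[i] >= req) return i;      /* stop at the first hole that fits */
        return -1;
    }

    int best_fit(int holes[], int n, int req) {
        int best = -1;
        for (int i = 0; i < n; i++)             /* must scan the entire list */
            if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
                best = i;
        return best;
    }

    int worst_fit(int holes[], int n, int req) {
        int worst = -1;
        for (int i = 0; i < n; i++)             /* must scan the entire list */
            if (holes[i] >= req && (worst < 0 || holes[i] > holes[worst]))
                worst = i;
        return worst;
    }

    static void demo(const char *name, int (*pick)(int[], int, int)) {
        int holes[NHOLES] = {10, 20, 15, 9, 13, 8, 11};
        int requests[2]   = {11, 9};
        for (int r = 0; r < 2; r++) {
            int i = pick(holes, NHOLES, requests[r]);
            if (i < 0) continue;                /* no hole is large enough */
            printf("%s: %dK request -> %dK hole\n", name, requests[r], holes[i]);
            holes[i] -= requests[r];            /* the remainder stays as a smaller hole */
        }
    }

    int main(void) {
        demo("first fit", first_fit);
        demo("best fit",  best_fit);
        demo("worst fit", worst_fit);
        return 0;
    }

Running it reproduces the choices described above: first fit picks the 20K and 10K holes, best fit the 11K and 9K holes, and worst fit the 20K and 15K holes.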
Q16. Write in brief on the following memory management techniques comparing their relative strengths and
weaknesses,
Answer :
(i) Fixed Partitioning
In fixed partitioning, main memory is divided into a number of fixed-sized blocks called partitions. When a process has to be loaded into main memory, it is allocated a free partition, which it releases on terminating; that partition then becomes free for use by some other process. The major drawback of this scheme is internal fragmentation, which arises when the memory allocated to a process is not fully utilized. For example, consider a system having fixed partitions of size 100 KB: if a process of 50 KB is allocated to one such partition, it uses only 50 KB and the remaining 50 KB is wasted. Figure (1) shows the problem of internal fragmentation.
Figure (1): Internal fragmentation (four fixed partitions of 100 KB each; process P3 of 50 KB leaves 50 KB of its partition wasted)
Strengths
1. Fixed partitioning is simple and easy to implement.
2. It helps in efficient utilization of the processor.
3. It supports multiprogramming.
Weakness
Internal fragmentation means wastage of memory space when a partition is allocated to a process whose size is less than the partition. For example, in multiprogramming with a fixed number of partitions, suppose a partition of 400 KB becomes free; the operating system scans through the queue, selects the largest job that fits, i.e., a 385 KB job, and loads it into the 400 KB partition. In this case, 15 KB of memory is wasted. This is called internal fragmentation. In general, if there is a partition of 'm' bytes and a program of size 'n' bytes, where m > n, is loaded into the partition, then the internal fragmentation is equal to (m – n) bytes.
(ii) Dynamic Partitioning
When dynamic partitioning is used for the allocation of processes, some memory space may be left over after each allocation. For instance, consider the example below.
Figure (2): Dynamic partitioning (memory holding the OS, process P1 of 250 KB and process P4 of 200 KB, with free holes of 200 KB and 150 KB between them)
Now, suppose that a new process P5 of size 500 KB wants to get loaded. That cannot be done, because there is no single contiguous 500 KB free area: although 500 KB of free space is available in total, it is fragmented and non-contiguous and hence cannot be allocated. This problem is called external fragmentation, and it is a major drawback of dynamic partitioning.
One solution to external fragmentation is to apply memory compaction, where the kernel shuffles the memory contents in order to place all free partitions together and form a single large block. Compaction is performed periodically, and it requires dynamic relocation of programs. If compaction is applied to the previous example, the memory layout changes as shown in figure (4).
Answer :
The concept of segmentation allows the programmer to view memory as consisting of multiple address spaces or segments. Segments may be of unequal size. A memory reference consists of an address of the form (segment number, offset).
(ii) It allows programs to be altered and recompiled independently, without requiring the entire set of programs to be relinked and reloaded.
The segment table is of variable length, so it cannot be held in registers and must be kept in main memory. When a particular process is running, the starting address of its segment table is held in a register. The segment number of a virtual address is used to index this table and look up the corresponding segment base address in main memory. This base is added to the offset portion of the virtual address to produce the desired real address.
Figure (1): Address translation in segmentation. The segment number of the virtual address indexes the segment table (located through the segment table pointer register); after the offset d is checked against the segment length, the segment's base address is added to d to form the real address in main memory.
Paging Implementation
In the basic implementation of paging, physical memory is divided into fixed-sized blocks called frames, and logical memory into pages of the same size.
A page table stores the base address of each page available in main memory, and the offset acts as a displacement within the page. The base address is combined with the offset to get the address of a physical memory location.
When a process is to be executed, its pages are loaded into free frames of physical memory, and the frame number where each page is stored is entered into the page table. During execution, the CPU generates a logical address comprising a page number (p) and an offset within the page (d). The page number p is used to index the page table and fetch the corresponding frame number. The physical address is obtained by combining the frame number with the offset.
A logical address thus consists of a page number and a page offset: the lower-order bits of the logical address represent the page offset and the higher-order bits the page number. If the logical address has n bits and the page size is 2^m bytes, then, for example with n = 32 and m = 10:
Page number (p) = n – m = 22 bits        Page offset (d) = m = 10 bits
The maximum size of the logical address space is 2^32 bytes, i.e., 4 GB. The maximum length of a page table of a process is then 2^22 = 4 M entries; with each entry occupying 4 bytes, such a page table would occupy 16 MB in RAM.
Figure (2) shows the hardware required by the paging scheme.
Figure (2): Paging implementation. The CPU generates a logical address consisting of a page number and an offset; the page number indexes the page table to obtain a frame number, which is combined with the offset to form the physical address in main memory.
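The translation step itself is only shifting and masking. The following minimal C sketch uses the address split of the example above (a 10-bit offset, with the remaining high-order bits as the page number); the tiny in-memory page table is purely illustrative:

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 10                       /* page size = 2^10 = 1 KB */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    /* Illustrative page table: page_table[p] holds the frame number of page p. */
    static uint32_t page_table[8] = {5, 2, 7, 1, 0, 3, 6, 4};

    uint32_t translate(uint32_t logical) {
        uint32_t p = logical >> OFFSET_BITS;     /* high-order bits: page number */
        uint32_t d = logical & OFFSET_MASK;      /* low-order bits: page offset  */
        uint32_t frame = page_table[p];          /* page-table lookup            */
        return (frame << OFFSET_BITS) | d;       /* frame number plus offset     */
    }

    int main(void) {
        uint32_t la = (3u << OFFSET_BITS) | 57;  /* page 3, offset 57 */
        printf("logical 0x%x -> physical 0x%x\n", la, translate(la));
        return 0;
    }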
Q20. Explain paging hardware with translation look-aside buffer.
Answer :
The page table is implemented in several ways by various operating systems. The simplest way is to use a set of dedicated registers with high-speed logic to translate addresses easily and efficiently. However, such an implementation cannot store more than about 256 page-table entries, whereas today's computers require page tables with around a million entries, which is infeasible to implement in registers. Some systems therefore store the page table in memory and keep its address in a special register called the Page Table Base Register (PTBR).
A feasible solution to the resulting slowdown is to use a fast, small associative cache memory called a Translation Look-aside Buffer (TLB) to look up and translate addresses. A TLB entry is divided into two parts, a key and a value. When a key is presented to the TLB, it searches all entries simultaneously (a typical property of associative memory) and returns the corresponding value field if found (a TLB hit). Otherwise (a TLB miss), the page table present in main memory is used to map the logical address to a physical address. The following figure shows the implementation of paging using a TLB.
Figure: Paging hardware with TLB. The page number of the logical address is first looked up in the TLB; on a TLB hit the frame number is obtained immediately, while on a TLB miss the page table in main memory supplies it. The frame number combined with the offset gives the physical address.
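The lookup logic can be sketched in software as follows. This is an assumption-laden model: a real TLB compares all entries in parallel in hardware, and its replacement policy varies; the linear scan, FIFO replacement and small sizes below are for illustration only.

    #include <stdio.h>
    #include <stdint.h>

    #define TLB_SIZE 4

    struct tlb_entry { uint32_t page, frame; int valid; };

    static struct tlb_entry tlb[TLB_SIZE];
    static uint32_t page_table[16] = {3, 5, 0, 7, 1, 2, 4, 6,
                                      8, 9, 10, 11, 12, 13, 14, 15};
    static int next_victim = 0;                /* simple FIFO replacement in the TLB */

    uint32_t lookup_frame(uint32_t page) {
        for (int i = 0; i < TLB_SIZE; i++)     /* hardware checks all entries at once */
            if (tlb[i].valid && tlb[i].page == page) {
                printf("TLB hit for page %u\n", page);
                return tlb[i].frame;
            }
        /* TLB miss: consult the page table in memory and cache the mapping. */
        uint32_t frame = page_table[page];
        tlb[next_victim] = (struct tlb_entry){page, frame, 1};
        next_victim = (next_victim + 1) % TLB_SIZE;
        printf("TLB miss for page %u\n", page);
        return frame;
    }

    int main(void) {
        lookup_frame(3);   /* miss: fetched from the page table */
        lookup_frame(3);   /* hit: answered by the TLB          */
        lookup_frame(7);   /* miss                              */
        return 0;
    }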
Figure: Organization of virtual address space (the stack sits at the top of the address space, with a large hole between it and the heap below)
This organization allows the heap to grow upwards and the stack to grow downwards. The empty space (hole) between the stack and the heap is part of the virtual address space; such a space is also known as a sparse address space. An advantage of using virtual memory is the sharing of pages, which leads to the following benefits,
v Several processes can share system libraries by mapping them into their virtual address spaces. These libraries are stored as pages in physical memory.
v Inter-process communication can be done by sharing virtual memory among several processes.
v Process creation can be sped up by sharing pages with the fork() system call.
Figure: Sharing pages among processes (the page tables of two processes map the shared pages Com1, Com2 and Com3 to the same frames of main memory, while private pages such as Src-1 and Src-2 map to separate frames)
Virtual Memory Techniques
When a process tries to access a page which has not been fetched into memory, the condition is called a 'page-fault' trap, which causes the operating system to load the desired page into memory. The following sequence of steps takes place when a page fault occurs,
1. The operating system notices the page fault and verifies whether the reference is valid or not.
2. Finds a free frame in main memory.
3. Schedules a disk read operation to fetch the desired page into the free frame found in step (2).
4. Updates the page table to show that the page is now in memory.
5. Restarts the instruction that was interrupted by the page fault.
Figure: Steps in handling a page fault. The reference to page X traps to the operating system, which finds the page on the backing store, fetches it into a free frame of main memory, updates the page table and restarts the program.
Pure demand paging is a technique in which a process starts execution without a single page in memory. As execution proceeds, page faults occur and the required pages are fetched into memory one by one; a page is never brought in until it is required by the process.
The hardware necessary to implement demand paging is a page table (for performing paging) and swap space on secondary storage (for performing swapping).
Example
Consider a program containing 4 pages, P0 to P3. At the beginning, P0 is loaded into main memory. As soon as the page is transferred into main memory, the page-table entry for that page is updated. Each page-table entry consists of two fields, a page frame and a validity bit.
If the bit is set to valid (v), the page is available in main memory at the specified frame. The format of the page table is shown below,
Figure: Demand paging example. The page table maps P0 to frame 3, P1 to frame 5 and P3 to frame 7, all marked valid (v); P2 is marked invalid (i) and resides only on the storage device.
Page Replacement
When a page fault occurs, the operating system finds a free frame in main memory in which to load the desired page. If no free frames are available at that instant, the operating system uses a page-replacement algorithm to select a frame, swap its page out and bring the new page into that frame. The page selected for removal is called the victim. Figure (1) shows an example of replacing a page X with a page Y.
When demand paging is used, page replacement automatically comes into the picture.
Figure (1): Page replacement. The victim page X is swapped out to the backing store (swap space) and its page-table entry is marked invalid; the desired page Y is then swapped into the freed frame and its entry is marked valid.
1. First-In First-Out (FIFO) Algorithm
FIFO selects as victim the page that has been resident in memory for the longest time. Consider the reference string 3, 4, 5, 6, 4, 7, 4, 0, 6, 7, 4, 7, 6, 5, 6, 4, 5, 3, 4, 5 with three frames; FIFO produces 15 page faults in total.
Figure (2): FIFO page replacement (number of page faults = 15)
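The fault count can be verified with a short simulation. The following C sketch replays the reference string against three frames, always replacing the page that has been resident longest:

    #include <stdio.h>

    #define NFRAMES 3
    #define NREFS  20

    int main(void) {
        int refs[NREFS] = {3,4,5,6,4,7,4,0,6,7,4,7,6,5,6,4,5,3,4,5};
        int frames[NFRAMES] = {-1, -1, -1};     /* -1 marks an empty frame */
        int oldest = 0, faults = 0;

        for (int i = 0; i < NREFS; i++) {
            int hit = 0;
            for (int f = 0; f < NFRAMES; f++)
                if (frames[f] == refs[i]) { hit = 1; break; }
            if (!hit) {                         /* page fault: evict the oldest page */
                frames[oldest] = refs[i];
                oldest = (oldest + 1) % NFRAMES;
                faults++;
            }
        }
        printf("FIFO page faults = %d\n", faults);   /* prints 15 */
        return 0;
    }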
The limitation of this algorithm is that it sometimes replaces a page that is about to be used again immediately; hence, execution becomes slow.
2. Optimal Page Replacement (OPT) Algorithm
This algorithm has the lowest page fault rate and overcomes Belady’s anomaly. Here the victim is selected such that it
is not going to be used for the longest period of time. Consider the following reference string,
3, 4, 5, 6, 4, 7, 4, 0, 6, 7, 4, 7, 6, 5, 6, 4, 5, 3, 4, 5
Initially, the three empty frames are filled by three page faults for the references 3, 4 and 5. The next reference, to 6, causes a page fault, and the algorithm replaces 3 because, among 3, 4 and 5, it is the one that will not be used for the longest period of time (its next use is the 18th reference). Proceeding in this manner, the algorithm causes nine (9) page faults, instead of the 15 of the FIFO algorithm. Figure (3) shows the same.
Figure (3): Optimal page replacement (number of page faults = 9)
3. Least Recently Used (LRU) Algorithm
LRU selects as victim the page that has not been used for the longest period of time. Applied to the same reference string with three frames, it causes 12 page faults.
Figure (4): Least Recently Used (LRU) page replacement (total page faults = 12)
The following are two techniques to implement the LRU algorithm.
(i) Using Counter
Each page-table entry is associated with a field to store its time of use, and a counter is maintained whose value is incremented on every memory reference. Each time a page is referenced, the contents of the counter are copied to the time-of-use field of its page-table entry. The replacement algorithm selects the page with the smallest time-of-use value.
(ii) Using Stack
In this method, a stack is used to keep the page numbers. Every time a page is referenced, it is removed from the stack and placed on top. Hence, the most recently used page is always at the top, and the least recently used page is always at the bottom of the stack.
Example
Consider the reference string 4, 7, 0, 7, 0, 1, 2, 7, 1, 2. Just before the eighth reference (to page 7), the stack holds, from top (most recently used) to bottom (least recently used): 2, 1, 0, 7, 4. Referencing page 7 moves it to the top of the stack, giving: 7, 2, 1, 0, 4.
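A minimal C sketch of the stack technique follows (an array stands in for the doubly linked list that a real implementation would use to avoid the shifting):

    #include <stdio.h>

    #define MAXPAGES 10

    static int stack[MAXPAGES];
    static int depth = 0;

    /* Move the referenced page to the top of the stack (most recently used). */
    void reference(int page) {
        int i;
        for (i = 0; i < depth && stack[i] != page; i++)
            ;
        if (i == depth)
            depth++;                  /* page seen for the first time */
        for (; i > 0; i--)            /* shift intervening pages down */
            stack[i] = stack[i - 1];
        stack[0] = page;              /* top of stack = most recently used */
    }

    int main(void) {
        int refs[] = {4, 7, 0, 7, 0, 1, 2, 7, 1, 2};
        for (int i = 0; i < 10; i++)
            reference(refs[i]);
        printf("MRU -> LRU: ");
        for (int i = 0; i < depth; i++)
            printf("%d ", stack[i]);  /* prints 2 1 7 0 4 */
        printf("\n");
        return 0;
    }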
4. Clock (Second-Chance) Algorithm
This algorithm requires an additional bit known as the use bit. The use bit of a frame is set to zero when a page is first loaded into it, and set to one when that page is subsequently referenced.
In this policy, the frames are arranged in a circular buffer with which a pointer is associated. When a page is replaced, the pointer is set to indicate the next frame in the buffer. When a page is to be replaced, the operating system scans the buffer for a frame whose use bit is zero; each time it encounters a frame with use bit one, it resets that bit to zero. If any frame in the buffer has use bit zero at the beginning of this scan, that frame's page is replaced. If all the frames have use bit one, the pointer makes one complete cycle through the buffer, setting all bits to zero, stops at its original position and replaces the page in that frame. Consider the following reference string: 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2.
Figure: Behaviour of the clock policy on the reference string 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 with three frames (an asterisk marks a use bit of 1; F marks a page fault)
In the figure, an asterisk indicates that the use bit is 1, and the arrow indicates the position of the pointer; a frame marked with an asterisk is not chosen for replacement. The number of page replacements in this example is five. In all processors that support paging, a modify bit is also associated with every frame of main memory; it ensures that a modified page is not replaced until it has been written back to secondary memory. If the use bit and modify bit are taken into account, each frame falls into one of four categories: not used recently and not modified; not used recently but modified; used recently but not modified; and used recently and modified.
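The five replacements can be verified with the following C sketch of the clock policy (three frames; the pointer advances past the frame that was just filled, as described above):

    #include <stdio.h>

    #define NFRAMES 3
    #define NREFS  12

    int main(void) {
        int refs[NREFS] = {2,3,2,1,5,2,4,5,3,2,5,2};
        int page[NFRAMES] = {-1, -1, -1};
        int use[NFRAMES]  = {0, 0, 0};
        int ptr = 0, replacements = 0;

        for (int i = 0; i < NREFS; i++) {
            int f, hit = -1;
            for (f = 0; f < NFRAMES; f++)
                if (page[f] == refs[i]) { hit = f; break; }
            if (hit >= 0) { use[hit] = 1; continue; }    /* hit: set the use bit */

            /* Fault: advance the pointer until a frame with use bit 0 is found,
               clearing use bits along the way (the "second chance"). */
            while (use[ptr] == 1) {
                use[ptr] = 0;
                ptr = (ptr + 1) % NFRAMES;
            }
            if (page[ptr] != -1) replacements++;         /* an occupied frame is the victim */
            page[ptr] = refs[i];
            use[ptr]  = 1;
            ptr = (ptr + 1) % NFRAMES;
        }
        printf("clock replacements = %d\n", replacements);   /* prints 5 */
        return 0;
    }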
CPUs on a board can access the memory present on the same board faster than memory on a different board. Even with high-speed interconnects such as InfiniBand, typical NUMA systems are slower than systems that carry all CPUs on a single board.
To improve performance, management techniques are employed which change where frames are placed rather than merely locating them: frames are stored closer to the CPU that uses them, with respect to latency. This NUMA-aware placement saves much time compared with the traditional method of treating memory as uniform. The accompanying algorithmic modification is a scheduler that tracks the CPU on which each process most recently executed. With these two ideas, cache hits can be improved and memory access time decreased. However, certain complications arise when threads are considered; these are solved using the 'lgroup' entity included in the Solaris kernel. An lgroup carries information about all the processors and memory that are close to one another; multiple lgroups are created with respect to latency, and each lgroup is responsible for scheduling its associated threads and memory.
Figure: Effect of thrashing (CPU utilization plotted against the degree of multiprogramming)
As can be observed from the graph, CPU utilization increases with the degree of multiprogramming, but after reaching a maximum it decreases sharply; the point where this happens is referred to as thrashing. With a local replacement algorithm, which prevents a thrashing process from stealing pages belonging to other active processes, the effects of thrashing can be limited to a certain extent. To eliminate thrashing completely, a process must be allotted as many page frames as it requires.
One effective method of doing so is the locality model, which groups the frames a process is actively using into a locality set; processes move from one locality to another as execution proceeds.
Q30. Explain working set model and page fault frequency.
Answer :
The working set model is a complex method of preventing thrashing. A more direct method is the Page Fault Frequency (PFF) strategy, which controls the page-fault rate of each process by imposing upper and lower limits on it.
Figure: Page-fault rate kept between an upper and a lower limit in the PFF strategy
1. Magnetic Disks
For every platter there are two read-write heads, one for the top surface and the other for the bottom. All heads are attached to a disk arm (or disk actuator), which is connected to a stepper motor that advances or moves the heads in steps.
2. Magnetic Tapes
This is the earliest secondary-storage device. Tapes are very slow in operation, because they read and write data sequentially; random access to data is not possible. Nowadays, they are used only to keep huge volumes of data which are rarely accessed but have to be kept as a record or history, like census information of a country, stock-market history, backup data, etc.
A tape has to be inserted into a tape drive to perform a read or write. It is a plastic ribbon with a magnetic coating on its surface to store data. It is carried on two spools, one to wind and the other to rewind the tape. The read-write head reads or writes on the tape using electromagnetic signals.
3. SCAN Scheduling
In this algorithm, the disk arm moves in one direction, servicing all the requests that come along that route until the last track is reached. After reaching one end, it reverses direction and moves to the other end, servicing the requests along the way. This action is similar to that of an elevator (or lift) in a building, and hence it is also called the elevator algorithm.
Consider the previous example of disk requests 104, 189, 43, 128, 20, 130, 71, 73, where the initial head position is 59. If the SCAN scheduling algorithm is applied, then from 59 the head moves towards the first track, i.e., 0, servicing the requests 43 and 20 and finally reaching 0. The disk arm then reverses and starts moving towards the other end, servicing requests 71, 73, 104, 128, 130 and 189. The total head movement is therefore,
Total head movements = |59 – 43| + |43 – 20| + |20 – 0| + |0 – 71| + |71 – 73| + |73 – 104| + |104 – 128| + |128 – 130| + |130 – 189|
= 16 + 23 + 20 + 71 + 2 + 31 + 24 + 2 + 59 = 248
The limitation of this scheme is that the waiting time of some requests increases. This happens particularly when most of the requests are on the other side of the disk, in which case it would be preferable to scan that side first.
Figure (3): SCAN scheduling. The head sweeps from 59 down through 43 and 20 to track 0, then reverses and sweeps up through 71, 73, 104, 128 and 130 to 189.
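The arithmetic can be checked with a short program. The following C sketch assumes, as in the example, that the head first sweeps down to track 0 and then services everything on a single upward sweep to the farthest request:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int req[] = {104, 189, 43, 128, 20, 130, 71, 73};
        int n = 8, head = 59, total = 0;
        qsort(req, n, sizeof req[0], cmp);

        /* Downward sweep: service every request below the head, then reach track 0. */
        for (int i = n - 1; i >= 0; i--)
            if (req[i] < head) { total += head - req[i]; head = req[i]; }
        total += head;                 /* continue from 20 down to track 0 */

        /* Upward sweep from 0: every remaining request lies on the way to the
           farthest one, so the distance is simply the maximum track number. */
        total += req[n - 1];

        printf("total head movement = %d\n", total);   /* prints 248 */
        return 0;
    }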
4. C-SCAN Scheduling
It is a modified version of SCAN scheduling which overcomes the limitation of SCAN and provides a more uniform waiting time. It is the same as SCAN, except that requests are serviced in only one direction of travel.
Applying C-SCAN to the previous example with the head initially at 59: as there are more requests on the right side, the head starts moving towards that side, servicing 71, 73, 104, 128, 130 and 189, and finally reaches the end. Then it immediately returns to the other end without servicing any requests on the way back, and starts servicing again from 0. Figure (4) depicts the same.
Figure (4): C-SCAN scheduling. The head services 71, 73, 104, 128, 130 and 189 on its way to the end, returns directly to track 0 without servicing, and then services 20 and 43.
Figure (5): LOOK scheduling algorithm
3.3.3 RAID Structure
Q33. What is RAID structure? What are the various RAID levels? Explain them briefly.
Answer :
RAID Structure
RAID stands for Redundant Array of Inexpensive Disks. The idea is to use multiple disks and keep redundant data, or copies of data, so as to improve performance and reliability. The simplest RAID is to copy a whole disk to another disk: if the original disk fails, its duplicate can be used to restore the data. Hence, reliability is increased.
Both the original and the duplicate disk can also be used concurrently to access data. Consider an example: we want to read 16 blocks of data from a disk, and it requires 16 milliseconds. If we read the first eight blocks from one disk and the remaining eight from the other simultaneously, the time required will be only 8 milliseconds, i.e., half of the traditional approach. Hence, performance is improved.
RAID Levels
There are two techniques, mirroring and striping, employed in the RAID concept, each performing its own function. Mirroring is the technique of making a duplicate copy of a complete disk, so that in case of a disk crash the copy can be used; it increases reliability. Striping is the technique of splitting the bits of each byte across multiple disks; in other words, the blocks of a file are split across multiple disks. This increases performance, because all these disks are active in parallel.
Figure (2): Mirrored disks (C = copy)
RAID Level 2
It is also called Memory-Style Error Correcting Code (ECC) organisation. In this scheme, parity bits are used for each byte in memory, and these parity bits are striped across other disks. ECC can reconstruct data which is damaged; hence, a higher level of reliability is obtained, with improved performance through striping. The figure shows that for four disks of data, only three disks of parity are required.
Figure (3): Memory-style error correcting code (P = parity)
RAID Level 3
It is also called bit-interleaved parity organization. Here the memory system relies on the disk controller to detect whether a sector has been read correctly, so only one parity bit is needed for error correction, which decreases the parity overhead: this level of RAID uses only one parity disk for four disks of data.
Figure (4): Bit-interleaved parity
A further advantage is that the transfer rate for reading and writing is improved, since data is striped across several disks and each operates in parallel. Apart from these advantages, a performance problem of this level, and of all RAID levels that use parity, is the overhead of reading and writing the parity.
RAID Level 4
It is also called block-interleaved parity organisation. In this scheme, block-level striping is performed, and for each set of blocks a parity block is stored on a separate disk.
Figure (5): Block-interleaved parity
RAID Level 5
It is the block-interleaved distributed parity organisation: unlike level 4, the parity blocks are distributed across all the disks instead of being kept on a single parity disk.
Figure (6): Block-interleaved distributed parity
The limitation of this scheme is that if multiple disks fail simultaneously, it is impossible to restore the whole data. Hence, the next level is used.
RAID Level 6
It is also known as the P + Q redundancy scheme. It is very similar to level 5, except that it stores additional redundant information to overcome multiple disk failures. It stores extra parity bits, and sometimes advanced error-correcting codes like Reed-Solomon codes are used. For every 4 bits of data, 2 bits of redundant data are stored, with which it can tolerate two disk failures. It requires one disk more than level 5.
RAID Level 0 + 1
It is a combination of RAID level 0 and RAID level 1, i.e., it provides both performance and reliability. This level is better than level 5, but its disadvantage is that the number of disks needed is more than for the other levels. First striping is performed, and then the stripes are mirrored, as shown in figure (8).
Figure (9): RAID level 1 + 0
Q34. Discuss how performance can be improved using parallelism.
Answer :
A single disk cannot fulfill all the storage and transmission requirements of most applications. Thus, multiple disks are used in parallel with a controller. Mirroring and striping are the two major improvement techniques in this approach. When mirroring is used, multiple disks can handle multiple requests simultaneously, with which the processing rate simply doubles. In addition, two types of striping, bit-level and block-level striping, can also be used.
c, cc, java, htm or html, vb, etc. : source files belonging to various programming languages
rar, zip : archive files, in which some related files are grouped together
mpeg, mov, rm, mp3, avi, 3gp, divx, axxo : files containing multimedia data like video, audio, etc.
Different operating systems may use different extensions for the same type of file. Another advantage of using extensions is that the operating system can associate with each file the application needed to open it. Whenever a user opens a particular file by double-clicking its icon (in a GUI environment), the operating system implicitly starts the application that supports that file format.
Unix uses a concept called a magic number to indicate the type of a file.
Answer :
Different types of files have different internal structures. For example, the source file of a particular programming language has a structure which matches the expectations of the compiler that reads it. In the same way, a binary file is expected to contain a series of 0s and 1s. Modern operating systems support a variety of file types like text, images, video, audio, etc. The default file type that every operating system must support is the executable file, so as to load and run programs.
When an operating system does not support a particular file type, new application programs capable of reading and understanding the desired file structure can be installed. For example, Windows XP does not natively support the "rm" file type; hence, an application such as Real Player that can read it is installed.
Files in Mac OS consist of two parts, a resource fork and a data fork. The resource fork includes information like the labels on buttons, which can be relabelled in some other language (like Arabic, French, etc.) using tools provided by Mac OS. The data fork consists of program code and data.
The internal file structure consists of a number of variable-sized logical blocks which are packed into one or more fixed-size physical disk blocks. This is called the packing technique, and it can be done either by the user's program or by the operating system.
In some operating systems, like Unix, all files are treated as streams of bytes, and each byte can be addressed using its offset from the start or end of the file.
Figure (5): General graph directory with cycles (directories such as Past, Future, Pics and Music share files through links; a link back to the main library directory introduces a cycle)
Since cycles are present in the graph, searching for a particular file must avoid traversing the same part of the graph more than once. This organization also suffers from the dangling-pointer problem when files are deleted. To avoid this problem, the acyclic-graph structure uses a variable called a reference counter, which stores the number of directories referring to a file: if the reference counter is 0, no directory refers to the file and it can be deleted. Since cycles are present here, however, this approach alone is not sufficient.
Another approach, the garbage collection scheme, is used to find out whether all the references to a particular file have been deleted, so that the space occupied by the file can be deallocated and marked as deleted. The garbage collection scheme works in two phases. First, it traverses the whole file system and marks everything (files and directories) that is accessible, ensuring that each item is marked only once (without repetition). In the second phase, it frees all unmarked files and directories, because they are not referred to by any directory and are garbage.
Q42. Write short note on actions to be performed during a file deletion operation if links exist in the directory structure.
Answer :
A file in a directory structure can have either a single parent or multiple parents.
When a file to be deleted has a single parent, it is easily deleted and its entry is removed from its parent directory. If a file has multiple parents, deleting it is a complex task, as it can create dangling pointers. However, its entry can still be removed from the parent directory present in the access path of the delete command.
The process of checking for multiple parents is complex. The complexity is reduced by maintaining a reference count for every file. When a new file is created, the count is set to one, and whenever a new link points to the file, the count is incremented by one.
Conversely, when a file deletion attempt is made, its reference count is decremented by one, and the entry in the parent directory given in the access path of the delete command is removed. Finally, when the reference count of the file becomes zero, the actual file is deleted.
The reference count strategy does not work when the directory structure contains cycles.
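The bookkeeping can be illustrated with a small sketch in C; the inode structure and function names here are hypothetical, not those of any particular file system:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical in-memory inode carrying a link (reference) count. */
    struct inode {
        int ref_count;
        /* ... data-block pointers, attributes ... */
    };

    struct inode *create_file(void) {
        struct inode *f = malloc(sizeof *f);
        f->ref_count = 1;              /* one link from the creating directory */
        return f;
    }

    void add_link(struct inode *f) {
        f->ref_count++;                /* another directory entry now points here */
    }

    void unlink_file(struct inode *f) {
        f->ref_count--;                /* remove one directory entry */
        if (f->ref_count == 0) {       /* no directory refers to the file any more */
            printf("reclaiming file data\n");
            free(f);
        }
    }

    int main(void) {
        struct inode *f = create_file();
        add_link(f);                   /* a second parent directory links the file */
        unlink_file(f);                /* still reachable: count drops to 1 */
        unlink_file(f);                /* count reaches 0: actual deletion */
        return 0;
    }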
A cycle develops when a link is made between a directory and its grandparent directory, as shown in the figure below.
Figure: Link made between a directory and its grandparent directory (directory T under Y also links back to the root)
In the figure, directory T is linked both to directory Y and to its grandparent directory, the root.
If directory Y is deleted, its entry in the root directory is removed, since its reference count is one. Directory Y and its files then become unreachable from the root directory, so there is no use in retaining them.
To solve this unreachability problem, cycle-detection techniques must be applied, or the formation of cycles must be prevented.
Q43. Explain the advantages and disadvantages of the single-level, two-level and tree-structured directory structures.
Answer :
Advantages of Single-level Directory Structure
v Files can be located quickly, as all of them are present in a single location.
Disadvantages of Single-level Directory Structure
v All file names must be unique, as they are stored in the same container; this rule becomes complicated to maintain and is easily violated when there are multiple users.
v File names might run out of uniqueness when there are a large number of files, since operating systems allow only a limited number of characters in a file name.
Advantages of Two-level Directory Structure
v It resolves the collision problem that occurs with file names.
v It provides an effective way of isolating users from each other.
v It efficiently improves searching by employing a Master File Directory (MFD).
Disadvantages of Two-level Directory Structure
v This structure isolates users from each other, and in some systems sharing of files is not allowed.
Advantages of Tree-structured Directory
v It provides access to the files of other users by specifying their path names.
Disadvantages of Tree-structured Directory
v Accessing the files of different users is more complex, because the path names of files are longer than in two-level directories.
Answer :
The file system is normally present on some logical partition of a disk. It has to be mounted, i.e., connected to the system's directory structure, in order to access the files present in that partition. The mounting operation can be performed only by the system administrator, and hence it acts as a protection for the file system. Local as well as remote file systems can be mounted.
A file system is mounted on a mount point, which is simply an empty directory in the system's directory structure. Mounting does not permanently alter the directory structure; only a link is created to that partition's file system. This link lasts until the file system is unmounted or the system is rebooted.
In a file system, a device driver is given the responsibility of informing the operating system whether the referenced file system is valid or not. With this feature, the operating system records the mount point in its directory structure, which helps in traversing it. Operating systems, including the latest versions of Windows, support mounting a file system at any point in the structure.
The syntax of the mount operation is "mount <FS_Path> <Mount_Point>", where FS_Path is the path of the file system or volume which is to be mounted and Mount_Point is the path where FS_Path has to be linked into the existing file system.
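On Linux, for instance, this operation is available to privileged programs through the mount(2) system call. A minimal sketch follows; the device path, mount point and file system type are hypothetical, and the call requires root privileges:

    #include <stdio.h>
    #include <sys/mount.h>   /* Linux mount(2) */

    int main(void) {
        /* Attach the file system on /dev/sdb1 at the empty directory /mnt/data. */
        if (mount("/dev/sdb1", "/mnt/data", "ext4", 0, NULL) != 0) {
            perror("mount");
            return 1;
        }
        printf("mounted\n");
        return 0;
    }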
Consider the following figures,
For answer refer Unit-III, Page No. 110, Q.No. 44.
Mounting of File System in Macintosh Operating System
File system mounting in the Macintosh operating system starts by searching for the desired file system (which needs to be mounted) on the disk during the booting process. If the search is successful, the file system is automatically mounted at the root level. The mounting is done by adding to the screen a folder labelled with a name identical to the name of the file system found in the device directory. Once the icon is created, the user clicks on it to view the newly mounted file system.
Mounting of File System in Microsoft Windows Operating System
Microsoft Windows maintains an extended two-level directory structure, consisting of devices (at the first level) and their respective partitions, each labelled with a unique drive letter (at the next level). Each of these partitions maintains a general graph directory structure corresponding to the drive letter. If a user searches for a specific file, the path of the file is of the form
Drive-letter:\Path\to\file
The process of file system mounting in such operating systems is done at boot time. Initially, every device present in the system is discovered, and then each identified file system is mounted.
Example
Q46. Define security and protection. Describe the concept of file protection.
Answer :
Security
The term security refers to a state of being protected from harm or from those that cause negative effects. Examples are protecting banks from robbery, computers from viruses, data from unauthorized access, etc.
Protection
Protection refers to keeping the system safe physically as well as from unauthorized access. It can be provided in many ways; for example, in a single-user system the floppy disk can be physically removed and kept safe in a locker. But this is a very traditional approach and often cumbersome. There are other techniques to employ protection in both single-user and multiuser systems.
File Protection
File protection refers to providing controlled access to files by various users. There are several factors which the protection mechanism verifies before allowing or denying access, and several types of operations which have to be controlled. Keeping this access information in Access Control Lists (ACLs) has the following drawbacks,
v The size of the file or directory entry increases due to storing access information in the ACL.
v Searching the ACL for access rights takes time, because the list is very long.
v A directory entry is of fixed size, but if the ACL is stored in it, the entry has to be made variable-sized, which increases the complexity of managing it.
Hence, to reduce the length of the ACL and overcome the above problems, three different classifications of users are made.
Q47. Give an overview of file system implementation.
Answer :
File System Implementation
A file system consists of special blocks of information that help in loading the operating system when the computer boots. Other blocks contain information regarding the amount of free space, the total number of blocks, etc. The following structures are used, each for a specific purpose,
1. Boot Control Block
The figure given above shows the functioning of the file management system. The first half of the figure is related to the file management system, and the remaining half is concerned with the operating system.
File Management Concerns
Users and application programs use commands to interact with the file system. For interaction, the user first needs to select, identify or locate the file. This is accomplished by means of a directory, which describes the location of all files and their attributes. Some systems (mostly shared systems) also provide user access control, wherein only authorized users are allowed to access particular files using particular access mechanisms. The basic operations on files are performed at the record level. The records are organized using some structure and are viewed together as a file. All the other overheads of handling files go to the operating system.
Operating System Concerns
An operating system is concerned with the file system for I/O operations, storage purposes, etc. For output operations, the records or fields of a file need to be organized as a sequence of blocks, which can be unblocked after an input operation. Several functions are needed for this purpose, such as managing secondary storage, which involves allocating files to free blocks on secondary storage and also managing the free storage. This in turn gives information about the availability of blocks for new files and for the growth of existing files.
The scheduling of individual block I/O must also be handled. Disk scheduling as well as file allocation help in optimizing the performance of the system.
Q51. Write short notes on,
(a) Partitions and mounting
(b) Virtual file systems.
Answer :
(a) Partitions and Mounting
Partitioning refers to the division of a disk into multiple parts. When a disk is partitioned, each partition can be formatted with a file system or left "raw", i.e., without any particular file system. Raw disks or partitions are used by processes which do not require any particular file system; for example, Unix swap space uses a raw partition, and raw disks can also be used to store RAID configuration settings in a small database.
Similarly, boot information is also stored in a raw partition, because no file system can be interpreted while the operating system itself is not yet loaded during the boot process. The boot information is stored at a fixed location and in sequential order, so that loading the operating system is simple and easy. The location where boot information is stored is called the boot block, and it can also include information on how to boot a particular operating system when multiple operating systems are present. A computer with more than one operating system installed in different partitions is called "dual booted"; to determine which operating system has to boot, a program called a boot loader is used, which is capable of interpreting multiple file systems and operating systems.
The partition that contains the actual operating system files and kernel is called the root partition. The root partition is mounted at boot time, after which other volumes are successively mounted as required by the operating system.
Once a volume is successfully mounted, the operating system checks its file system. If it is recognized as one of the supported file systems, an entry is made into the in-memory "mount table" structure. The mount table keeps track of all mounted file systems along with their types. Operating systems like Microsoft Windows assign a separate name space to each mounted volume, denoted by a specific letter and a colon, which makes it easy to traverse the files and directories available on that volume.
(b) Virtual File Systems
A Virtual File System (VFS) enables an operating system to support multiple file systems on a single disk, so that users can easily traverse between file systems. A VFS also enables access to remote disks that use different file systems.
Figure: Implementation of a generic file system
A generic file system implementation can be depicted by the above diagram. It has three major layers, of which the second is the virtual file system interface.
A VFS interface has two important functions to perform. They are,
1. The VFS interface isolates file system operations from their implementation details. An operating system may have multiple implementations of the VFS interface, to support the different types of file systems mounted on the local machine.
2. While representing the files available on a network, the VFS ensures that each file is uniquely identified, with the help of a data structure called a vnode. Every file or directory on each machine of a network has an associated vnode, which assigns the file a number that uniquely identifies it over the network.
The VFS uses the file system interface to perform user-initiated operations on files that may be available locally or on remote machines. The third layer implements the various file systems, which directly interact with the storage devices for data transfer.
An example is the VFS architecture in Linux. It has four main object types. They are,
1. Inode Object
This object represents a single file available on the disk.
2. File Object
This object represents a file that is currently opened.
3. Super Block Object
This object type represents the complete file system.
4. Dentry Object
It represents a single directory.
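The first function, isolating operations from their implementations, amounts to a table of function pointers that each concrete file system fills in. The following C sketch is hypothetical (the names ext_fs and nfs_fs merely suggest a local and a remote file system):

    #include <stdio.h>

    /* A common operations table; each file system supplies its own functions. */
    struct file_ops {
        int (*open)(const char *path);
        int (*read)(int fd, char *buf, int n);
    };

    static int ext_open(const char *path) { printf("ext: open %s\n", path); return 3; }
    static int ext_read(int fd, char *buf, int n) { (void)fd; (void)buf; return n; }

    static int nfs_open(const char *path) { printf("nfs: open %s\n", path); return 4; }
    static int nfs_read(int fd, char *buf, int n) { (void)fd; (void)buf; return n; }

    static struct file_ops ext_fs = { ext_open, ext_read };
    static struct file_ops nfs_fs = { nfs_open, nfs_read };

    /* The layer above calls through the table and never needs to know which
       concrete file system (local or remote) implements the operation. */
    int vfs_open(struct file_ops *fs, const char *path) { return fs->open(path); }

    int main(void) {
        vfs_open(&ext_fs, "/home/user/notes.txt");    /* local file system  */
        vfs_open(&nfs_fs, "/remote/share/data.bin");  /* remote file system */
        return 0;
    }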
Q52. Describe the linear list and hash table directory implementation methods.
Answer :
Directory Implementation
Implementation of a directory structure involves the selection of efficient directory allocation and management algorithms from those available.
1. Linear List
This is the simplest of all directory implementation methods. It is easy to implement, but it takes a long time to execute. It simply maintains a sequential list of file names pointing to their corresponding data blocks.
This kind of implementation requires that a filename be searched for before a file with that name is created. Similarly, a delete operation also requires a linear search for the entry, after which the space occupied by the file is deallocated and the corresponding entry is removed from the list. Instead of deleting the entry outright, the entry can be marked as unused with the help of a used-unused bit, or added to a list of available directory entries. Another method is to decrement the directory length and transfer the freed directory entry's contents to free space on the disk.
Though this approach is simple to implement, it has a few disadvantages. It requires a linear search for finding a file before performing any operation; this makes it slow, and users would experience this delay very frequently. Even if a binary search technique is used instead of linear search, improving the average search time, it still needs a sorted list of directory entries.
2. Hash Table
In this technique, a hash table is used together with the linear list for storing directory entries. The hash table uses a hash function that takes an input value based on the filename and produces as output a reference to the corresponding directory entry in the linear list. Therefore, all file operations take much less time to execute. However, necessary arrangements must be made to handle collisions: a collision is a situation where more than one file name hashes to the same location.
The disadvantages of this technique are that a hash table is usually of fixed size and that the performance of the hash function depends on the size of the table. Therefore, whenever a new file is to be added after all the available free entries have been used, the hash table has to be expanded to accommodate new files, and the existing directory entries have to be reorganized so that the new hash function maps input values to their corresponding directory entries.
A solution to the above problem is to use a chained-overflow hash table, in which each hash entry holds a linked list. If more than one filename hashes to the same entry, the new name is added as another node to the linked list. Though this approach eliminates most of the disadvantages of the linear-list implementation, searching for directory names can still consume some time, since it involves traversing the linked list. Nevertheless, this technique is considered more efficient than the linear-list directory implementation method.
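A chained-overflow directory can be sketched in C as follows; the hash function, entry layout and table size are illustrative assumptions:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define TABLE_SIZE 16

    /* A directory entry; colliding names are chained in a linked list. */
    struct dirent {
        char name[32];
        int first_block;               /* illustrative file metadata */
        struct dirent *next;
    };

    static struct dirent *table[TABLE_SIZE];

    static unsigned hash(const char *name) {
        unsigned h = 0;
        while (*name) h = h * 31 + (unsigned char)*name++;
        return h % TABLE_SIZE;
    }

    void dir_add(const char *name, int first_block) {
        struct dirent *e = malloc(sizeof *e);
        strcpy(e->name, name);
        e->first_block = first_block;
        unsigned slot = hash(name);
        e->next = table[slot];         /* push onto the chain for this slot */
        table[slot] = e;
    }

    struct dirent *dir_lookup(const char *name) {
        for (struct dirent *e = table[hash(name)]; e; e = e->next)
            if (strcmp(e->name, name) == 0) return e;
        return NULL;                   /* not found */
    }

    int main(void) {
        dir_add("notes.txt", 15);
        dir_add("song.mp3", 26);
        struct dirent *e = dir_lookup("notes.txt");
        if (e) printf("%s starts at block %d\n", e->name, e->first_block);
        return 0;
    }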
Allocation Methods
An allocation method is considered efficient if it allocates space such that no disk space is wasted and accessing the files takes less time. The three most important file allocation methods are,
1. Contiguous Allocation
In the contiguous allocation method, each file is arranged in a sequential run of blocks on disk. According to this technique, if a file's size is k blocks and it starts at block s, then it occupies blocks s, s + 1, ..., s + k – 1. With this approach, accessing a file is much faster, as the file occupies contiguous blocks. For sequential access, the physical address of the last referenced block is noted, so that the next access can start from the following block; using this address avoids repeated access to already-accessed blocks. Direct access is also supported, since only the starting address and the block number within the file are required.
Figure (1): Contiguous allocation. Each file occupies consecutive disk blocks, recorded in the directory by a starting location and a size.
Directory
Filename    Starting location    Size
Foo         0                    3
Imp         6                    7
User        14                   6
Chat        26                   4
32 33 34 35
0 1 2 3
Directory
4 5 9 6 7 Filename Star End
8 9 12 10 5 11
Sure_doc 15 20
12 17 13 14 15 10
16 17 20 18 19
20 –1 21 22 23
24 25 26 27
Disk
365 EOF
595 365
3. Indexed Allocation
This scheme stores pointers to all the blocks of a particular file at one location, called the index block. It is a simple modification of linked allocation. The directory stores the address of the index block; hence, once the index block is obtained, the whole file can be accessed.
When a file is created, the OS provides it with an index block that initially contains nothing. When the i-th block of the file is written, its address is stored in the i-th entry of the index block. This scheme is resistant to fragmentation, but some space is wasted in storing an index block for each file, and choosing the right size for the index block is difficult.
Figure (4): Indexed allocation scheme. The directory entry for Sure_doc points to its index block (block 8), which lists the data blocks of the file (e.g., blocks 13 and 14).
If large index blocks are used, then for small files most of their entries will be empty and space will be wasted. If small index blocks are used, they may not be able to accommodate all the pointers of a large file. Hence, the following schemes are used,
(a) Linked Scheme
A single index block is used for a small file. As the file's size increases, several index blocks are linked together.
(b) Multilevel Index
In this scheme, multiple levels of index blocks are used. A first-level index block points to second-level index blocks, which point to the actual data blocks. Similarly, it is possible to have many levels of index blocks.
(c) Combined Scheme
This scheme is implemented in the Unix file system. Each inode in Unix stores 15 pointers. The first 12 are called direct blocks, because they point directly to data blocks. The next pointer is the single indirect block, which points to an index block that contains the addresses of actual data blocks. Similarly, there are two and three levels of index blocks for the double indirect block and the triple indirect block respectively. The following figure shows the same,
Figure: The Unix inode. Twelve direct pointers lead straight to data blocks; they are followed by a single indirect, a double indirect and a triple indirect pointer, which reach the data through one, two and three levels of index blocks respectively.
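To get a feel for the capacities involved, assume, purely as an illustration (the real values depend on the file system), a block size of 4 KB and 4-byte block pointers, so that one index block holds 1,024 pointers. The 12 direct blocks then cover 12 x 4 KB = 48 KB, the single indirect block covers 1,024 x 4 KB = 4 MB, the double indirect block covers 1,024 x 1,024 x 4 KB = 4 GB, and the triple indirect block covers a further 1,024 x 1,024 x 1,024 x 4 KB = 4 TB.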
Figure: Free-space management using a bit vector (one bit per block, e.g., 1 1 0 0 1 0 0 0) and using a linked list (the head of the free-space list points to the first free block, which points to the next)
Figure: Free-space management using the counting method. The free-space table stores pairs of a starting block number and a count of successive free blocks, e.g., (5, 5), (17, 2), (26, 1).
Advantage
v It overcomes the drawback of grouping by storing the address of the first free block and the count of successive free blocks, instead of storing a list of free-block addresses.
Disadvantages
There are no serious disadvantages of the counting approach; however, there are certain constraints, which include,
v The entries in the table occupy more memory space than plain addresses would.
v The overall table length becomes shorter as the count values increase.
Q55. Explain the advantages and disadvantages of the free disk space management approaches.
Answer :
1. Bit-vector Free Space Management
Advantages
v It is relatively simple, and it is efficient to search for free blocks, whether the first free one or multiple consecutive ones.
v It does not consume much space, as it uses a single bit per block.
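Searching the bit vector is a word-at-a-time scan, as the following C sketch shows (here a 1 bit is taken to mean "free"; some systems use the opposite convention):

    #include <stdio.h>
    #include <stdint.h>

    #define NWORDS 4

    /* Bit vector over 128 blocks: bit i of the map is 1 when block i is free. */
    static uint32_t bitmap[NWORDS] = {0x00000000, 0x00000000, 0x00010000, 0xFFFFFFFF};

    int first_free_block(void) {
        for (int w = 0; w < NWORDS; w++) {
            if (bitmap[w] == 0) continue;          /* skip fully allocated words */
            for (int b = 0; b < 32; b++)
                if (bitmap[w] & (1u << b))
                    return w * 32 + b;             /* word offset plus bit offset */
        }
        return -1;                                 /* no free block */
    }

    int main(void) {
        printf("first free block = %d\n", first_free_block());   /* prints 80 */
        return 0;
    }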
Internal Assessment
Objective Type
I. Multiple Choice
1. _________ is defined as a waste of memory space. [ ]
3. To look up and translate addresses, an associative cache memory called ________ is used. [ ]
5. The technique in which pages are fetched into main memory only when they are needed by processes. [ ]
(a) Shortest scheduling time first (b) Smallest scheduling time first
(c) Skewed scheduling time first (d) Shortest seek time first
II. Fill in the Blanks
3. During the implementation of paging, a logical address is divided into __________ parts.
4. In the basic implementation of paging, the physical memory is divided into fixed-sized blocks called _________.
5. ________ and ________ are the two fundamental techniques for implementing virtual memory.
6. If a _________ exists, the performance of the system can be affected by demand paging.
7. The tree-structured directory similar to the acyclic-graph structure, where we can add links to an existing directory, is known as _________.
8. The process of attaching a file system (present on a logical disk) to the system's directory structure is known as _________.
10. In order to read from or write to a new magnetic disk, _________ needs to be performed.
Important Questions
Unit-1
Short Questions
Q1. What do you mean by multiprocessor systems?
Answer : Important Question
For answer refer Unit-I, Page No. 2, Q.No. 1.
Q2. Define operating system. Give two examples.
Answer : Important Question
For answer refer Unit-I, Page No. 2, Q.No. 2.
Q3. List various types of operating system.
Answer : Important Question
For answer refer Unit-I, Page No. 2, Q.No. 3.
Q4. Write the services of operating system.
Answer : Important Question
For answer refer Unit-I, Page No. 3, Q.No. 5.
Q5. Define system call.
Answer : Important Question
For answer refer Unit-I, Page No. 3, Q.No. 6.
For answer refer Unit-I, Page No. 3, Q.No. 8.
Answer : Important Question
For answer refer Unit-I, Page No. 4, Q.No. 9.
Q8. What is Inter Process Communication (IPC)? List the models of IPC in operating systems.
Answer : Important Question
For answer refer Unit-I, Page No. 4, Q.No. 10.
Q9. Write short notes on semaphore.
Answer : Important Question
For answer refer Unit-I, Page No. 4, Q.No. 12.
Essay Questions
Q10. Discuss briefly about,
(i) Single processor systems
(ii) Multiple processor systems
(iii) Clustered systems.
Answer : Important Question
For answer refer Unit-I, Page No. 5, Q.No. 13.
Q11. Define operating system. What are the services of an operating system? Explain.
Answer : Important Question
For answer refer Unit-I, Page No. 10, Q.No. 19.
Q12. Discuss various approaches of designing an operating system.
Answer : Important Question
Q13. Define the following,
(iii) Process state diagram.
Answer : Important Question
For answer refer Unit-I, Page No. 18, Q.No. 26.
Q14. What is inter-process communication? What are the models of IPC?
Answer : Important Question
For answer refer Unit-I, Page No. 24, Q.No. 33.
Q15. Describe about semaphores and their usage and implementation.
Answer : Important Question
For answer refer Unit-I, Page No. 35, Q.No. 43.
Unit-2
Short Questions
Unit-3
Short Questions
Q1. Write the differences between logical and physical address space.
Answer : Important Question
Essay Questions
Q10. Write short notes on,
(i) Dynamic loading
(ii) Dynamic linking
(iii) Shared libraries.
Answer : Important Question
Q14. What are the structures and operations that are used to implement file system operations?